BatchRefUpdate: repro racy atomic update, and fix it
PackedBatchRefUpdate was creating a new packed-refs list that was
potentially unsorted. This was papered over when the list was read
back from disk in parsePackedRefs, which detects unsorted ref lists
on reading and sorts them. However, BatchRefUpdate also installed
the new (unsorted) list in memory, in RefDirectory#packedRefs.
With the timestamp granularity code committed to stable-5.1, we can
more often decide accurately that the packed-refs file is clean, and
so will return the erroneous unsorted data more often. Unluckily
timed delays could also cause the file to appear clean, hence this
problem was exacerbated under load.
The symptom is that refs added by a BatchRefUpdate would stop being
visible directly after they were added. In particular, the Gerrit
integration tests use BatchRefUpdate in their setup for creating the
Admin group, and then try to read it out directly afterward.
The test recreates one failure case. A better approach would be to
revise RefList.Builder so that it detects out-of-order lists and
sorts them automatically.
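A minimal sketch of that suggested revision; the class below is
hypothetical and not JGit's actual RefList.Builder, which leaves
sorting to an explicit sort() call:

    import java.util.ArrayList;
    import java.util.List;

    import org.eclipse.jgit.lib.Ref;
    import org.eclipse.jgit.lib.RefComparator;

    class SelfSortingRefListBuilder {
        private final List<Ref> refs = new ArrayList<>();
        private boolean sorted = true;

        void add(Ref ref) {
            // Detect an out-of-order insertion against the current tail.
            if (!refs.isEmpty() && RefComparator.compareTo(
                    refs.get(refs.size() - 1), ref) > 0) {
                sorted = false;
            }
            refs.add(ref);
        }

        List<Ref> toSortedList() {
            if (!sorted) {
                // Sort lazily, and only when an unsorted add was seen.
                refs.sort(RefComparator.INSTANCE);
                sorted = true;
            }
            return refs;
        }
    }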
Fixes https://bugs.eclipse.org/bugs/show_bug.cgi?id=548716 and
https://bugs.chromium.org/p/gerrit/issues/detail?id=11373.
Bug: 548716
Change-Id: I613c8059964513ce2370543620725b540b3cb6d1
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When we are cloning, we have no refs at all yet, and there cannot
(or at least should not) be any other thread doing anything with
refs yet. Locking loose refs is thus not needed: there are no loose
refs yet, and nothing should be trying to create them concurrently.
Let's skip loose ref locking entirely when we are cloning a
repository.
As a result, when cloning with packed refs in use, JGit will write
the refs directly to the packed-refs file, and will create neither
the refs/remotes/ directories nor the lock files underneath them.
Since no lock files are created, any problems on case-insensitive
file systems with tag or branch names that differ only in case are
avoided during cloning.
Detect if we are cloning based on the following heuristics (sketched
below):
* HEAD is a dangling symref
* There are no loose refs
* There is no packed-refs file
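Expressed against the public API, the heuristic might look like the
following sketch; the real check lives inside RefDirectory, and the
class and method names here are illustrative only:

    import java.io.File;
    import java.io.IOException;

    import org.eclipse.jgit.lib.Constants;
    import org.eclipse.jgit.lib.Ref;
    import org.eclipse.jgit.lib.Repository;

    final class CloneHeuristic {
        /** Rough check for "we are in the middle of an initial clone". */
        static boolean looksLikeClone(Repository repo) throws IOException {
            // No packed-refs file has been written yet.
            File packedRefs = new File(repo.getDirectory(),
                    Constants.PACKED_REFS);
            if (packedRefs.exists()) {
                return false;
            }
            // No refs exist yet at all (we already know none are packed).
            if (!repo.getRefDatabase().getRefs().isEmpty()) {
                return false;
            }
            // HEAD is a dangling symref: it names a branch that does not
            // exist yet, so it resolves to no object.
            Ref head = repo.exactRef(Constants.HEAD);
            return head != null && head.isSymbolic()
                    && head.getLeaf().getObjectId() == null;
        }
    }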
Note, however, that there may still be problems with such tag or
branch names later on. This is primarily a five-minutes-past-twelve
stop-gap measure to resolve the referenced bug, which affects the
Oxygen.2 release.
Bug: 528497
Change-Id: I57860c29c210568165276a123b855e462b6a107a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Allow creating symbolic references with link, and deleting them or
switching them to an ObjectId with unlink. How this happens is up to
the individual RefDatabase.
The default implementation detaches RefUpdate if a symbolic reference
is involved, supporting these command instances on RefDirectory.
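A usage sketch, assuming link and unlink are exposed as static
factories on ReceiveCommand taking the old value, the new value, and
the ref name; the wrapper method and commitId are illustrative:

    import org.eclipse.jgit.lib.ObjectId;
    import org.eclipse.jgit.transport.ReceiveCommand;

    static void symrefCommands(ObjectId commitId) {
        // Create HEAD as a symref pointing at refs/heads/main.
        ReceiveCommand create = ReceiveCommand.link(
                ObjectId.zeroId(), "refs/heads/main", "HEAD");

        // Detach HEAD: replace the symref with a direct ObjectId.
        ReceiveCommand detach = ReceiveCommand.unlink(
                "refs/heads/main", commitId, "HEAD");
        // ... add both to a BatchRefUpdate as usual
    }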
Unfortunately the packed-refs file does not support storing symrefs,
so atomic transactions involving more than one symref command are
failed early.
Updating InMemoryRepository is deferred until reftable lands, as I
plan to switch InMemoryRepository to use reftable for its internal
storage representation.
Change-Id: Ibcae068b17a2fc6d958f767f402a570ad88d9151
Signed-off-by: Minh Thai <mthai@google.com>
Signed-off-by: Terry Parker <tparker@google.com>
Support force writing reflog on a per-update basis
Even if a repository has core.logAllRefUpdates=true, ReflogWriter does
not create reflog files unless the refs are under a hard-coded list of
prefixes, or unless the forceWrite bit is set. Expose the forceWrite bit
on a per-update basis in RefUpdate/BatchRefUpdate/ReceiveCommand,
creating ReflogWriters as necessary.
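For example, forcing a reflog entry for a ref outside the standard
prefixes could look like the following sketch (the wrapper method and
ref name are illustrative):

    import java.io.IOException;

    import org.eclipse.jgit.lib.ObjectId;
    import org.eclipse.jgit.lib.RefUpdate;
    import org.eclipse.jgit.lib.Repository;

    static RefUpdate.Result forcedLogUpdate(Repository repo, ObjectId newId)
            throws IOException {
        // refs/meta/config is outside the hard-coded prefix list, so
        // its reflog would normally be skipped.
        RefUpdate u = repo.updateRef("refs/meta/config");
        u.setNewObjectId(newId);
        u.setRefLogMessage("update project config", false);
        u.setForceRefLog(true); // write the reflog entry anyway
        return u.update();
    }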
Change-Id: Ifc851fba00f76bf56d4134f821d0576b37810f80
Ensure ReflogWriter only works with a RefDirectory
The ReflogWriter constructor just took a Repository and called
getDirectory() on it to figure out the reflog dirs, but not all
Repository instances use this storage format for reflogs, so it's
incorrect to attempt to use ReflogWriter when there is not a
RefDirectory directly involved. In practice, ReflogWriter was used
almost exclusively by the implementation of RefDirectory, so enforcing
this mostly just means shuffling calls around within the same internal
package.
The one exception is StashDropCommand, which writes to a reflog lock
file directly. This was a reasonable implementation decision, because
there is no general reflog interface in JGit beyond using
(Batch)RefUpdate to write new entries to the reflog. So to implement
"git stash drop <N>", which removes an arbitrary element from the
reflog, it's fair to fall back to the RefDirectory implementation.
Creating and using a more general interface is well beyond the scope of
this change.
That said, the old behavior of writing out the reflog file even if
that's not the reflog format used by the given Repository is clearly
wrong. Fail fast in this case instead.
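The fail-fast guard can be as simple as the sketch below; the helper
name, exception type and message are assumptions, not the exact code
of this change:

    import org.eclipse.jgit.internal.storage.file.RefDirectory;
    import org.eclipse.jgit.lib.Repository;

    static RefDirectory requireRefDirectory(Repository repo) {
        // On-disk reflog files are a RefDirectory concept; refuse early
        // rather than writing files the backend will never read.
        if (!(repo.getRefDatabase() instanceof RefDirectory)) {
            throw new UnsupportedOperationException(
                    "stash drop requires a RefDirectory ref database");
        }
        return (RefDirectory) repo.getRefDatabase();
    }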
Change-Id: I9bd4b047bc3e28a5607fd346ec2400dde9151730
Fix missing RefsChangedEvent when packed refs are used
With atomic ref updates using packed refs, JGit did not fire a
RefsChangedEvent. This resulted in a user-visible regression in
EGit: the UI would not update after a "Fetch from upstream...".
Presumably it would also make Gerrit miss out on ref changes?
Strengthen the BatchRefUpdateTest by also asserting the expected
number of RefsChangedEvents, and ensure modCnt is incremented in
RefDirectory.commitPackedRefs() when refs really changed (as opposed
to some internal housekeeping operation, such as packing loose refs).
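Clients observe these events through the repository's listener list;
counting them, roughly as the strengthened test does, might look like
this sketch (the helper name is illustrative):

    import java.util.concurrent.atomic.AtomicInteger;

    import org.eclipse.jgit.events.ListenerHandle;
    import org.eclipse.jgit.lib.Repository;

    static AtomicInteger countRefsChanged(Repository repo) {
        AtomicInteger count = new AtomicInteger();
        ListenerHandle handle = repo.getListenerList()
                .addRefsChangedListener(event -> count.incrementAndGet());
        // ... execute the BatchRefUpdate, assert on count.get(), then:
        handle.remove();
        return count;
    }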
Bug: 521296
Change-Id: Ia985bda1d99f45a5f89c8020ca4845e7a66e743e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
RefDirectory: Add in-process fair lock for atomic updates
In a server scenario such as Gerrit Code Review, there may be many
atomic BatchRefUpdates contending for locks on both the packed-refs file
and some subset of loose refs. We already retry lock acquisition to
improve this situation slightly, but we can do better by using an
in-process lock. This way, instead of retrying and potentially exceeding
their timeout, different threads sharing the same Repository instance
can wait on a fair lock without having to touch the disk lock. Since a
server is probably already using RepositoryCache anyway, there is a high
likelihood of reusing the Repository instance.
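The core of the idea is a fair java.util.concurrent lock guarding the
on-disk lock; the sketch below is illustrative, not the exact code of
this change:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    class PackedRefsGate {
        // "true" selects FIFO fairness: waiting threads acquire the
        // lock in arrival order instead of racing.
        private final ReentrantLock inProcessLock = new ReentrantLock(true);

        void update(Runnable diskLockedUpdate, long timeoutMs)
                throws InterruptedException {
            if (!inProcessLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
                throw new IllegalStateException(
                        "timed out waiting for in-process packed-refs lock");
            }
            try {
                // Only the holder of the in-process lock touches the
                // on-disk lock file, so other threads never burn their
                // timeout spinning against it.
                diskLockedUpdate.run();
            } finally {
                inProcessLock.unlock();
            }
        }
    }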
Change-Id: If5dd1dc58f0ce62f26131fd5965a0e21a80e8bd3
RefDirectory: Retry acquiring ref locks with backoff
If a repo frequently uses PackedBatchRefUpdates, there is likely to be
contention on the packed-refs file, so it's not appropriate to fail
immediately the first time we fail to acquire a lock. Add some logic to
RefDirectory to support general retrying of lock acquisition.
Currently, there is a hard-coded wait starting at 100ms and backing off
exponentially to 1600ms, for about 3s of total wait. This is no worse
than the hard-coded backoff that JGit does elsewhere, e.g. in
FileUtils#delete. One can imagine a scheme that uses per-repository
configuration of backoff, and the current interface would support this
without changing any callers.
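The described schedule, sketched below; tryLock stands in for a single
lock-file acquisition attempt, and the real retry logic lives in
RefDirectory:

    import java.util.function.BooleanSupplier;

    static boolean lockWithBackoff(BooleanSupplier tryLock)
            throws InterruptedException {
        long waitMs = 100;
        while (true) {
            if (tryLock.getAsBoolean()) {
                return true; // acquired
            }
            if (waitMs > 1600) {
                return false; // slept 100+200+400+800+1600 ms; give up
            }
            Thread.sleep(waitMs);
            waitMs *= 2;
        }
    }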
Change-Id: I4764e11270d9336882483eb698f67a78a401c251
On-disk reflogs are not stored in the packed-refs file, so we cannot
ensure atomic updates. We choose the lesser evil of dropping failed
reflog updates on the floor, rather than throwing an exception even
though the underlying ref updates succeeded.
Add tests for reflogs to BatchRefUpdateTest.
Change-Id: Ia456ba9e36af8e01fde81b19af46a72378e614cd
The existing packed-refs file provides a mechanism for implementing
atomic multi-ref updates without any changes to the on-disk format or
lockfile protocol. We just need to make sure that there are no loose
refs involved in the transaction, which we can achieve by packing the
refs while holding locks on all loose refs. Full details of the
algorithm are in the PackedBatchRefUpdate javadoc.
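From a caller's perspective nothing new is required; asking for
atomicity through the existing API suffices, as in this sketch (repo,
commitA and commitB are assumed):

    import java.io.IOException;

    import org.eclipse.jgit.lib.BatchRefUpdate;
    import org.eclipse.jgit.lib.NullProgressMonitor;
    import org.eclipse.jgit.lib.ObjectId;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.revwalk.RevWalk;
    import org.eclipse.jgit.transport.ReceiveCommand;

    static void atomicCreate(Repository repo, ObjectId commitA,
            ObjectId commitB) throws IOException {
        BatchRefUpdate bru = repo.getRefDatabase().newBatchUpdate();
        bru.setAtomic(true); // either every command applies or none do
        bru.addCommand(new ReceiveCommand(
                ObjectId.zeroId(), commitA, "refs/heads/topic1"));
        bru.addCommand(new ReceiveCommand(
                ObjectId.zeroId(), commitB, "refs/heads/topic2"));
        try (RevWalk rw = new RevWalk(repo)) {
            bru.execute(rw, NullProgressMonitor.INSTANCE);
        }
    }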
This change does not implement reflog support, which will come in a
later change.
Change-Id: I09829544a0d4e8dbb141d28c748c3b96ef66fee1