BatchRefUpdate: repro racy atomic update, and fix it
PackedBatchRefUpdate was creating a new packed-refs list that was
potentially unsorted. This would be papered over when the list was
read back from disk in parsePackedRefs, which detects unsorted ref
lists on reading and sorts them. However, the BatchRefUpdate also
installed the new (unsorted) list in memory in
RefDirectory#packedRefs.
With the timestamp granularity code committed to stable-5.1, we can
more often accurately decide that the packed-refs file is clean, and
will return the erroneous unsorted data more often. Unluckily timed
delays can also cause the file to appear clean, so this problem was
exacerbated under load.
The symptom is that refs added by a BatchRefUpdate would sometimes not
be visible directly after they were added. In particular, the Gerrit
integration tests use BatchRefUpdate in their setup for creating the
Admin group, and then try to read it back directly afterward.
The test recreates one failure case. A better approach would be to
revise RefList.Builder, so it detects out-of-order lists and
automatically sorts them.
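In sketch form, that suggested hardening of the builder could look
like this (names and types here are illustrative, not JGit's actual
RefList.Builder):

    // Detect an out-of-order ref list while building and sort before
    // publishing, so callers can never install unsorted data in memory.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class SortingRefListBuilder {
        private final List<String> names = new ArrayList<>();
        private boolean sorted = true;

        void add(String refName) {
            if (!names.isEmpty()
                    && names.get(names.size() - 1).compareTo(refName) > 0) {
                sorted = false; // out-of-order insertion detected
            }
            names.add(refName);
        }

        List<String> toRefList() {
            if (!sorted) {
                names.sort(Comparator.naturalOrder()); // repair before use
            }
            return names;
        }
    }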
Fixes https://bugs.eclipse.org/bugs/show_bug.cgi?id=548716 and
https://bugs.chromium.org/p/gerrit/issues/detail?id=11373.
Bug: 548716
Change-Id: I613c8059964513ce2370543620725b540b3cb6d1
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Keep track of the original cause for a packfile invalidation.
This is needed for the sysadmin to understand whether there is a real
underlying filesystem problem with repository corruption, or whether
it is simply a consequence of concurrent Git operations (e.g. repack
or GC).
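The shape of such cause tracking might be as follows (a minimal
sketch with invented names, not the actual PackFile API):

    // Retain the original cause of an invalidation so it can be
    // logged later, letting an admin tell corruption apart from a
    // benign concurrent repack/GC.
    class PackHandle {
        private volatile boolean invalid;
        private volatile Throwable invalidatingCause;

        void setInvalid(Throwable cause) {
            invalid = true;
            invalidatingCause = cause;
        }

        boolean isInvalid() {
            return invalid;
        }

        Throwable getInvalidatingCause() {
            return invalidatingCause; // null if never invalidated
        }
    }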
Change-Id: I06ddda9ec847844ec31616ab6d17f153a5a34e33
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Fix pack file scan when the FileSnapshot isn't modified
Do not reload packfiles when their associated filesnapshot is not
modified on disk compared to the one currently stored in memory.
Fix the regression introduced by fef78212 which, in conjunction with
core.trustfolderstat = false, caused any lookup of objects inside
the pack list to loop forever when the object was not found in the
pack list.
Bug: 546190
Change-Id: I38d752ebe47cefc3299740aeba319a2641f19391
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Fix GC to delete empty fanout directories after repacking
The prune method did not delete empty fanout directories when loose
objects were moved to a new pack file, but only when unreferenced
loose objects were pruned.
Change-Id: Ia068f4914c54d9cf9f40b75e8ea50759402b5000
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When reading from a packfile, make sure it is valid
and has a non-null file descriptor.
Because of concurrency between a thread invalidating a packfile
and another trying to read it, read() may result in an NPE
that cannot be automatically recovered.
Throwing a PackInvalidException would instead cause the pack list
to be refreshed and the read to eventually succeed.
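A sketch of the guard described above (field names assumed; a plain
IOException stands in for JGit's PackInvalidException):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    class GuardedPackReader {
        private volatile boolean invalid;
        private volatile RandomAccessFile fd; // set when the pack opens

        int read(byte[] buf, long pos) throws IOException {
            RandomAccessFile f = fd;
            if (invalid || f == null) {
                // Fail with a recoverable exception instead of an NPE,
                // so the caller can refresh the pack list and retry.
                throw new IOException("pack is invalid or already closed");
            }
            f.seek(pos);
            return f.read(buf);
        }
    }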
Bug: 544199
Change-Id: I27788b3db759d93ec3212de35c0094ecaafc2434
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Move throw of PackInvalidException outside the catch
When a packfile is invalid, throw an exception explicitly
outside any catch scope, so that it is not accidentally caught
by the generic catch-all clause, which would mark the packfile
as valid again.
Flagging an invalid packfile as valid again would have
dangerous consequences such as the corruption of the in-memory
packlist.
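The pattern, sketched with placeholder names:

    import java.io.IOException;

    class PackOpener {
        private volatile boolean invalid;

        void open() throws IOException {
            if (invalid) {
                // Thrown outside any catch scope below, so the generic
                // handler cannot swallow it and resurrect the pack.
                throw new IOException("pack is invalid");
            }
            try {
                readHeaderAndChecksum();
            } catch (IOException e) {
                invalid = true; // real I/O failures invalidate the pack
                throw e;
            }
        }

        private void readHeaderAndChecksum() throws IOException {
            // open the file, verify the checksum, etc.
        }
    }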
Bug: 544199
Change-Id: If7a3188a68d7985776b509d636d5ddf432bec798
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Do not redundantly call File.lastModified() to extract the timestamp
of the PackFile; instead, consistently use the FileSnapshot, which
reads all file attributes in a single bulk call.
Change-Id: I932675ae4fe56dcd3833dac249816f097303bb09
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Due to finite filesystem timestamp resolution the last modified
timestamp of files cannot detect file changes which happened in the
immediate past (less than one filesystem timer tick ago).
Also read and consider the file size, so that a differing size helps
to detect file changes more accurately without reading the file content.
Use bulk read to avoid multiple stat calls to retrieve file attributes.
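A sketch of such a snapshot using one bulk attribute read (JDK API
only; JGit's real FileSnapshot tracks more state than this):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.nio.file.attribute.FileTime;

    class Snapshot {
        final FileTime lastModified;
        final long size;

        Snapshot(Path p) throws IOException {
            // one stat call returns all attributes at once
            BasicFileAttributes a =
                    Files.readAttributes(p, BasicFileAttributes.class);
            lastModified = a.lastModifiedTime();
            size = a.size();
        }

        boolean isModified(Path p) throws IOException {
            BasicFileAttributes a =
                    Files.readAttributes(p, BasicFileAttributes.class);
            // a same-tick change that alters the size is still detected
            return !lastModified.equals(a.lastModifiedTime())
                    || size != a.size();
        }
    }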
Change-Id: I974288fff78ac78c52245d9218b5639603f67a46
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The pack reload mechanism from the filesystem works only by name
and does not check the actual last modified date of the packfile.
This led to concurrency issues where multiple threads were loading
packfiles and removing them from each other's pack list when one of
them failed the checksum.
Rely on FileSnapshot rather than directly checking the lastModified
timestamp, so that more checks can be performed.
Bug: 544199
Change-Id: I173328f29d9914007fd5eae3b4c07296ab292390
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
In case of concurrent pack file access, threads may block in the idx()
function even for already-open files. This happens especially with a
slow file system.
Performance numbers are listed in the bug report.
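One standard cure is double-checked locking on a volatile field, so
the common already-open case takes no lock at all; a sketch with
placeholder types (not the actual PackFile code):

    import java.io.IOException;

    class PackIndexHolder {
        private volatile Object index; // the loaded pack index

        Object idx() throws IOException {
            Object i = index;
            if (i == null) {
                synchronized (this) {
                    i = index;
                    if (i == null) {
                        i = loadIndex(); // slow path: one loader only
                        index = i;
                    }
                }
            }
            return i; // fast path: a single volatile read
        }

        private Object loadIndex() throws IOException {
            return new Object(); // placeholder for parsing the .idx file
        }
    }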
Bug: 543739
Change-Id: Iff328d347fa65ae07ecce3267d44184161248978
Signed-off-by: Juergen Denner <j.denner@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
PackFile: report correct message for checksum mismatch
When the packfile checksum does not match the expected one, report
the correct checksum error instead of reporting that the number of
objects is incorrect.
Change-Id: I040f36dacc4152ae05453e7acbf8dfccceb46e0d
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 436c99ce59)
Externalize the message and log the pack file with its absolute path.
Change-Id: I019052dfae8fd96ab67da08b3287d699287004cb
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
(cherry picked from commit 9665d86ba1)
ObjectDirectory: extra logging on packfile exceptions
Display extra logging, including the exception with the associated
stacktrace, whenever a packfile can't be read and is thus removed
from the pack list.
Change-Id: I97a4e31dc427bfcc0baae438dcbe2dcd4704b824
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
(cherry picked from commit 962babc4b2)
Externalize warning message in RefDirectory.delete()
Change-Id: Icec16c01853a3f5ea016d454b3d48624498efcce
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
(cherry picked from commit 5e68fe245f)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Suppress warning for trying to delete non-empty directory
This is actually a fairly common occurrence; deleting the parent
directories can work only if the file deleted was the last one
in the directory.
Bug: 537872
Change-Id: I86d1d45e1e2631332025ff24af8dfd46c9725711
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
(cherry picked from commit d9e767b431)
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
FS_POSIX.createNewFile(File) failed to properly implement atomic file
creation on NFS using the algorithm [1]:
- the name of the hard link must be unique, to prevent two processes
using different NFS clients from trying to create the same link;
otherwise nlink would be useless for detecting a race.
- the hard link must be retained for the lifetime of the file, since
we don't know when the state of the involved NFS clients will be
synchronized; this depends on NFS configuration options.
To fix these issues we need to change the signature of createNewFile
which would break API. Hence deprecate the old method
FS.createNewFile(File) and add a new method createNewFileAtomic(File).
The new method returns a LockToken which needs to be retained by the
caller (LockFile) until all involved NFS clients synchronized their
state. Since we don't know when the NFS caches are synchronized we need
to retain the token until the corresponding file is no longer needed.
The LockToken must be closed after the LockFile using it has been
committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile =
false, this will delete the hard link that guarded the atomic creation
of the file. When acquiring the lock fails, ensure that the hard link
is removed.
[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in
http://man7.org/linux/man-pages/man2/open.2.html
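The probe itself can be sketched with plain JDK calls (unique-name
generation and the LockToken plumbing are assumptions; requires a
file system exposing the "unix" attribute view):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class AtomicCreateSketch {
        // Returns the hard link ("token") the caller must retain until
        // the lock file is committed or unlocked.
        static Path createAtomic(Path lockFile) throws IOException {
            Files.createFile(lockFile); // may be non-atomic on NFS
            // unique per process and time, so two NFS clients never
            // race on the same link name
            Path link = lockFile.resolveSibling(lockFile.getFileName()
                    + "." + ProcessHandle.current().pid()
                    + "." + System.nanoTime());
            Files.createLink(link, lockFile);
            int nlink = (Integer) Files.getAttribute(lockFile, "unix:nlink");
            if (nlink != 2) {
                Files.deleteIfExists(link); // we lost the race
                throw new IOException("lock collision on " + lockFile);
            }
            return link;
        }
    }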
Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
GC: Avoid logging errors when deleting non-empty folders
I88304d34c and Ia555bce00 modified the way errors are handled when
trying to delete non-empty reference folders. Before, this error was
silently ignored as it was considered an expected outcome. Now, every
failed folder delete is logged, which can be noisy.
Ignore the DirectoryNotEmptyException but log any other error that
prevents deletion of an eligible folder.
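In java.nio terms the handling looks like this sketch:

    import java.io.IOException;
    import java.nio.file.DirectoryNotEmptyException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class RefFolderCleanup {
        static void tryDeleteEmptyDir(Path dir) {
            try {
                Files.delete(dir);
            } catch (DirectoryNotEmptyException expected) {
                // Expected outcome: another ref still lives here; no log.
            } catch (IOException e) {
                // Anything else prevented deleting an eligible folder.
                System.err.println("Cannot delete " + dir + ": " + e);
            }
        }
    }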
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: I194512f67885231d62c03976ae683e5cc450ec7c
Since I3870cadb4, the GC task was always delegated to an executor even
when the background option was set to false. This was an issue because
if more than one GC object was instantiated and executed in parallel,
only one GC was actually running because of the single-threaded
executor.
Change-Id: I8c587d22d63c1601b7d75914692644a385cd86d6
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Completely remove the empty directories under refs/<namespace>,
including the first-level partitions of the changes, when they are
entirely empty.
Bug: 536777
Change-Id: I88304d34cc42435919c2d1480258684d993dfdca
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Use java.nio to delete paths to get detailed errors
Get the full IOException describing why a directory cannot be removed
during GC.
Change-Id: Ia555bce009fa48087a73d677f1ce3b9c0b685b57
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
After packing references, the folders containing these references are
not deleted. In a busy repository, this causes operations to slow down
as traversing the reference tree takes longer.
Delete empty reference folders after the loose references have been
packed.
To avoid deleting a folder that was just created by another concurrent
operation, only delete folders that were not modified in the last 30
seconds.
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
Change-Id: Ie79447d6121271cf5e25171be377ea396c7028e0
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Log a warning when an attempt to remove a directory
fails. This helps in troubleshooting bugs like GC leaving
behind empty directories.
Change-Id: Idb94ce17f8be9668a970c7ecae31436bf434073c
Signed-off-by: Luca Milanesio <luca.milanesio@gmail.com>
From the javadoc for Files.list:
"The returned stream encapsulates a DirectoryStream. If timely disposal
of file system resources is required, the try-with-resources construct
should be used to ensure that the stream's close method is invoked
after the stream operations are completed."
This is the only call to Files#newDirectoryStream that is not already in
a try-with-resources.
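The resulting pattern, for reference:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class DirListing {
        static int countEntries(Path dir) throws IOException {
            int n = 0;
            // try-with-resources closes the underlying DirectoryStream
            // (and its file handle) even if iteration throws.
            try (DirectoryStream<Path> entries =
                    Files.newDirectoryStream(dir)) {
                for (Path ignored : entries) {
                    n++;
                }
            }
            return n;
        }
    }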
Change-Id: I91e6c56b5d74e8435457ad6ed9e6b4b24d2aa14e
(cherry picked from commit 1c16ea4601)
The intent with the setCompressionLevel and checkExisting methods (which
are already public) is for callers to be able to call them, but they
can't do that if the class itself is not public.
Change-Id: I014044fec3bfa1d33775500345efd60eb5d45bde
PackInserter: Ensure objects are written at the end of the pack
When interleaving reads and writes from an unflushed pack, we forgot to
reset the file pointer back to the end of the file before writing more
new objects. This had at least two unfortunate effects:
* The pack data was potentially corrupt, since we could overwrite
previous portions of the file willy-nilly.
* The CountingOutputStream would report more bytes written than the
size of the file, so the wrong PackedObjectInfo was stored, which
would cause EOFs during reading.
We already had a test in PackInserterTest which was supposed to catch
bugs like this, by interleaving reads and writes. Unfortunately, it
didn't catch the bug, since as an implementation detail we always read a
full buffer's worth of data from the file when inflating during
readback. If the size of the file was less than the offset of the object
we were reading back plus one buffer (8192 bytes), we would end up
back in the right place in the file purely by accident.
So, add another test for this case where we read back a small object
positioned before a large object. Before the fix, this test exhibited
exactly the "Unexpected EOF" error reported at crbug.com/gerrit/7668.
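The underlying discipline, sketched on a bare RandomAccessFile:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    class AppendAfterRead {
        static void readThenAppend(RandomAccessFile pack, long readPos,
                byte[] readBuf, byte[] newObject) throws IOException {
            pack.seek(readPos);
            pack.readFully(readBuf);  // interleaved read-back moves the
                                      // file pointer away from the end
            pack.seek(pack.length()); // the fix: reposition to end of file
            pack.write(newObject);    // append; never overwrites pack data
        }
    }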
Change-Id: I74f08f3d5d9046781d59e5bd7c84916ff8225c3b
Replace explicit calls to initCause where possible
Where the exception being thrown has a constructor that takes a
Throwable, use that instead of instantiating the exception and then
explicitly calling initCause.
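For example:

    import java.io.IOException;

    class Example {
        static void before(String msg, Throwable cause) throws IOException {
            IOException err = new IOException(msg);
            err.initCause(cause); // two steps, easy to forget
            throw err;
        }

        static void after(String msg, Throwable cause) throws IOException {
            throw new IOException(msg, cause); // constructor sets the cause
        }
    }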
Change-Id: I06a0df407ba751a7af8c1c4a46f9e2714f13dbe3
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
CorruptObjectException has a constructor that takes Throwable and
calls initCause with it. Use that instead of instantiating the
exception and explicitly calling initCause.
Change-Id: I1f2747d6c4cc5249e93401b9787eb4ceb50cb995
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Use new StoredObjectRepresentationNotAvailableException constructor
In 5e7eed4 a new StoredObjectRepresentationNotAvailableException
constructor was added that takes a Throwable to initialize the
exception cause.
Update more call sites to use this constructor instead of first
instantiating the exception and explicitly calling initCause().
All callers now use the new constructor, so annotate the old one as
deprecated.
Change-Id: I6d2a7e289a95f0360ddebf904cfd8b6c18fef10c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
When we are cloning we have no refs at all yet, and there cannot
(or at least should not) be any other thread doing something with
refs yet.
Locking loose refs is thus not needed, since there are no loose
refs yet and nothing should be trying to create them concurrently.
Let's skip the whole loose ref locking when we are cloning a repository.
As a result, JGit will write the refs directly to the packed-refs
file, and will not create the refs/remotes/ directories nor the
lock files underneath when cloning and packed refs are used. Since
no lock files are created, any problems on case-insensitive file
systems with tag or branch names that differ only in case are avoided
during cloning.
Detect if we are cloning based on the following heuristics:
* HEAD is a dangling symref
* There is no loose ref
* There is no packed-refs file
Note, however, that there may still be problems with such tag or
branch names later on. This is primarily a five-minutes-past-twelve
stop-gap measure to resolve the referenced bug, which affects the
Oxygen.2 release.
Bug: 528497
Change-Id: I57860c29c210568165276a123b855e462b6a107a
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a GC operation is interrupted, temporary packs and indexes can be
left in the pack folder. In big, busy repositories this can lead to
significant amounts of wasted disk space if this interruption is done
with a certain frequency.
Remove stale temporary packs and indexes at the end of the GC process so
they do not accumulate. To avoid interfering with a possible concurrent
JGit GC process in the same repository, only delete temporary files that
are older than one day.
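An age-guarded cleanup of this kind might look as follows (the
one-day threshold matches the text; the temp-file name pattern is an
assumption):

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    class StaleTempCleaner {
        static void deleteStaleTemps(Path packDir) throws IOException {
            Instant cutoff = Instant.now().minus(1, ChronoUnit.DAYS);
            try (DirectoryStream<Path> files =
                    Files.newDirectoryStream(packDir, "gc_*")) {
                for (Path f : files) {
                    // only files old enough that no live GC owns them
                    if (Files.getLastModifiedTime(f).toInstant()
                            .isBefore(cutoff)) {
                        Files.deleteIfExists(f);
                    }
                }
            }
        }
    }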
Change-Id: If9b6c1e57fac8a6a0ecc0a703089634caba4caae
Signed-off-by: Hector Caballero <hector.caballero@ericsson.com>
When running on NFS there was a chance that JGit's LockFile
semantics were broken because File#createNewFile() may allow
multiple clients to create the same file in parallel. This
change provides a fix which is only used when the new config
option core.supportsAtomicCreateNewFile is set to false. The
default for this option is true. This option can only be set in the
global or the system config file. The repository config file is not
taken into account in this case.
If the config option core.supportsAtomicCreateNewFile is true
then File#createNewFile() is trusted and the behaviour doesn't
change.
But if core.supportsAtomicCreateNewFile is set to false then, after
successful creation of the lock file, a hard link to that lock file is
created and the nlink attribute of the lock file is checked to be 2.
If multiple clients manage to create the same lock file, nlink will be
greater than 2, revealing the race.
This expensive workaround is described in
https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
section III.d) "Exclusive File Creation"
Change-Id: I3d2cc48d8eb280d5f7039eb94da37804f903be6a
Honor trustFolderStat also when reading packed-refs
The list of packed refs was cached in RefDirectory based on the mtime
of the packed-refs file. This may fail on NFS when attributes are
cached. A cached mtime of the packed-refs file could cause JGit to
trust the cached content of this file and to overlook that the file
was modified.
Honor the config option trustFolderStat and always read the
packed-refs content if the option is false. By default this option is
set to true and this fix is not active.
Change-Id: I2b65cfaa8f4aba2efbf8a5e865d3f09f927e2eec
So far, in order to get the pack directory it was necessary to resolve
it from the object directory. This resolution is already done when
creating the object directory, so simplify the call by just adding a
getter for the pack directory.
Change-Id: I69e783141dc6739024e8b3d5acc30843edd651a7
Signed-off-by: Hector Caballero <hector.caballero@ericsson.com>
When invoking File.toPath(), an (unchecked) InvalidPathException may be
thrown which should be converted to a checked IOException.
For now, we will replace File.toPath() by FileUtils.toPath() only for
code which can already handle IOExceptions.
Change-Id: I0f0c5fd2a11739e7a02071adae9a5550985d4df6
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Applications that use ObjectInserters to create lots of individual
objects may prefer to avoid cluttering up the object directory with
loose objects. Add a specialized inserter implementation that produces a
single pack file no matter how many objects. This inserter is loosely
based on the existing DfsInserter implementation, but is simpler since
we don't need to buffer blocks in memory before writing to storage.
An alternative for such applications would be to write out the loose
objects and then repack just those objects later. This operation is not
currently supported with the GC class, which always repacks existing
packs when compacting loose objects. This in turn requires more
CPU-intensive reachability checks and extra I/O to copy objects from old
packs to new packs.
So, the choice was between implementing a new variant of repack, or not
writing loose objects in the first place. The latter approach is likely
less code overall, and avoids unnecessary I/O at runtime.
The current implementation does not yet support newReader() for reading
back objects.
Change-Id: I2074418f4e65853b7113de5eaced3a6b037d1a17
ObjectDirectory: Remove last modified check in insertPack
GC explicitly handles the case where a new pack has the same name as an
existing pack due to it containing the exact same set of objects. In
this case, the pack passed to insertPack will have the same name as an
existing pack, but it will also almost certainly have a later mtime than
the existing pack.
The loop in insertPack tried to short-circuit when inserting a new pack,
to avoid walking more of the pack list than necessary. Unfortunately,
this means it will never get to the check for an identical name,
resulting in a duplicate entry for the same PackFile in the pack list.
Remove the short-circuit so that insertPack does not insert a duplicate
entry.
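In sketch form (strings stand in for PackFile instances; mtime-based
ordering concerns are elided):

    import java.util.ArrayList;
    import java.util.List;

    class PackListInsert {
        static List<String> insertPack(List<String> oldPacks, String name) {
            List<String> out = new ArrayList<>(oldPacks.size() + 1);
            for (String p : oldPacks) {
                if (p.equals(name)) {
                    return oldPacks; // identical name: no duplicate entry
                }
                out.add(p);
            }
            out.add(name); // walked the whole list; safe to add
            return out;
        }
    }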
Change-Id: I00711b28594622ad3bd104332334e8a3592cda7f
Allow creating symbolic references with link, and deleting them or
switching to ObjectId with unlink. How this happens is up to the
individual RefDatabase.
The default implementation detaches RefUpdate if a symbolic reference
is involved, supporting these command instances on RefDirectory.
Unfortunately the packed-refs file does not support storing symrefs,
so atomic transactions involving more than one symref command are
failed early.
Updating InMemoryRepository is deferred until reftable lands, as I
plan to switch InMemoryRepository to use reftable for its internal
storage representation.
Change-Id: Ibcae068b17a2fc6d958f767f402a570ad88d9151
Signed-off-by: Minh Thai <mthai@google.com>
Signed-off-by: Terry Parker <tparker@google.com>
ReflogWriter: Align auto-creation defaults with C git
Per git-config(1), core.logAllRefUpdates auto-creates reflogs for HEAD
and for refs under heads, notes, and tags. Add notes and remove stash
from ReflogWriter#shouldAutoCreateLog. Explicitly force
writing reflogs for refs/stash at call sites, now that this is
supported.
Change-Id: I3a46d2c2703b7c243e0ee2bbf6948279800c485c
Support force writing reflog on a per-update basis
Even if a repository has core.logAllRefUpdates=true, ReflogWriter does
not create reflog files unless the refs are under a hard-coded list of
prefixes, or unless the forceWrite bit is set. Expose the forceWrite bit
on a per-update basis in RefUpdate/BatchRefUpdate/ReceiveCommand,
creating ReflogWriters as necessary.
Change-Id: Ifc851fba00f76bf56d4134f821d0576b37810f80