Matthias Sohn [Fri, 3 Dec 2021 21:45:05 +0000 (22:45 +0100)]
Update maven plugins
- build-helper-maven-plugin to 3.2.0
- jacoco-maven-plugin to 0.8.7
- maven-antrun-plugin to 3.0.0
- maven-dependency-plugin to 3.2.0
- maven-enforcer-plugin to 3.0.0
- maven-jar-plugin to 3.2.0
- maven-javadoc-plugin to 3.3.1
- maven-jxr-plugin to 3.1.1
- maven-pmd-plugin to 3.15.0
- maven-project-info-reports-plugin to 3.1.2
- maven-resources-plugin to 3.2.0
- maven-shade-plugin to 3.2.4
- maven-site-plugin to 3.9.1
- maven-source-plugin to 3.2.1
- maven-surefire-plugin to 3.0.0-M5
- spotbugs-maven-plugin to 4.3.0
- tycho and tycho-extras to 1.7.0
RefDirectory.scanRef: Re-use file existence check done in snapshot creation
Return immediately in scanRef if the loose ref was identified as
missing when a snapshot was attempted for the ref. This will help
performance of scanRef when the ref is packed but has a corresponding
empty dir in 'refs/'.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like below:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs without
this change takes ~2.5s, whereas with this change it takes ~1.5s.
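A rough sketch of the idea (names here are illustrative, not the
actual RefDirectory internals): the existence information captured
when the snapshot is created can be reused, instead of stat()ing the
file a second time.

    import java.io.File;

    class LooseRefScanSketch {
        static final class Snapshot {
            final boolean fileExists; // recorded when the snapshot is taken

            Snapshot(File f) {
                this.fileExists = f.exists();
            }
        }

        static String scanRef(File refFile) {
            Snapshot s = new Snapshot(refFile);
            if (!s.fileExists) {
                return null; // packed-only ref: skip reading the loose file
            }
            // otherwise read and parse the loose ref file as before
            return refFile.getName();
        }
    }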
FileSnapshot: Lazy load file store attributes cache
Doing a getFileStoreAttributes call even when the file doesn't
exist is unnecessary. This call is particularly slow on some
filesystems. Instead, do it only when the file exists and load
the appropriate cache.
This update can help speed up RefDirectory.exactRef when the ref
is packed, but has a corresponding empty dir for it under 'refs/'.
This scenario can happen when an atomic 'BatchRefUpdate' creates
new sharded refs.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like below:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs without
this change takes ~30s, whereas with this change it takes ~2.5s.
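A minimal sketch of the lazy pattern, assuming a hypothetical
per-FileStore cache (the real FileSnapshot code differs in detail):

    import java.io.IOException;
    import java.nio.file.FileStore;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.Duration;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class LazyFileStoreAttributes {
        private static final Map<FileStore, Duration> CACHE =
                new ConcurrentHashMap<>();

        // Only consult the FileStore (slow on some filesystems) when the
        // file actually exists; a missing file needs no attributes at all.
        static Duration timestampResolution(Path p) throws IOException {
            if (!Files.exists(p)) {
                return null;
            }
            FileStore store = Files.getFileStore(p);
            // hypothetical fallback of 1s when nothing has been measured
            return CACHE.computeIfAbsent(store, s -> Duration.ofSeconds(1));
        }
    }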
The reftable format is a block-based format, but allows for variably
sized blocks. This obviously happens for reflog blocks (which are zlib
compressed), but is also accepted for index blocks. In the spec, this
is motivated as:
To achieve constant O(1) disk seeks for lookups the index must be
a single level, which is permitted to exceed the file's
configured block size, but not the format's max block size of
15.99 MiB.
Hence, when parsing a block, one cannot be sure of its exact size:
after reading a default-size block (e.g. 4 KB), the block header may
state that the block is in fact larger.
Before, the code would mark the block as `truncated`, noting
// Its OK during sequential scan for an index block to have been
// partially read and be truncated in-memory. This happens when
// the index block is larger than the file's blockSize. Caller
// will break out of its scan loop once it sees the blockType.
This looks like either
* a remnant of never-implemented functionality. There is no reason to
ever sequentially scan an index block.
* alluding to sequential scan of the data blocks before the index
blocks (e.g. scanning refs, which ends when we find the first ref index
block, and we can then ignore the index block).
This comment is followed by code that populates the
restartTbl/restartCnt fields relative to the (possibly truncated)
buffer. If the buffer is truncated, this essentially reads garbage,
leading to OOB array access when using the index block.
Fix this by dropping the truncated logic and issuing a second read if
the first read was short.
Add a test.
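A sketch of the second-read approach, using a plain FileChannel and
ignoring short-read loops for brevity (illustrative, not the actual
BlockReader code):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    class BlockLoadSketch {
        // Read the block at 'pos'. If the header declares a size larger
        // than the default-size first read, issue a second read for the
        // remainder instead of leaving a truncated buffer behind.
        static ByteBuffer readBlock(FileChannel ch, long pos, int defaultSize)
                throws IOException {
            ByteBuffer buf = ByteBuffer.allocate(defaultSize);
            ch.read(buf, pos);
            int declared = declaredSize(buf);
            if (declared > buf.capacity()) {
                ByteBuffer full = ByteBuffer.allocate(declared);
                buf.flip();
                full.put(buf); // keep what the first read produced
                ch.read(full, pos + full.position()); // read the rest
                buf = full;
            }
            buf.flip();
            return buf;
        }

        // reftable blocks start with a 1-byte type followed by a 24-bit
        // length (hence the 15.99 MiB format maximum); simplified here
        static int declaredSize(ByteBuffer b) {
            return ((b.get(1) & 0xff) << 16) | ((b.get(2) & 0xff) << 8)
                    | (b.get(3) & 0xff);
        }
    }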
We have never observed this failure scenario at Google. We use a 64 KB
block size, which means we need fewer index entries. The reftable spec
mentions an Android repo of size 36M. With 64 KB blocks, that's just
562 index entries. Even with historical growth, we are far from
requiring an index whose size exceeds a single block.
When adding the analogous test for seeking refs, there was no failure.
This suggests another possibility: the code tries to avoid writing
large index blocks for refs.
Thomas Wolf [Sun, 25 Jul 2021 13:44:35 +0000 (15:44 +0200)]
[test] Create keystore with the keytool of the running JDK
Call keytool with the absolute path of "java.home". Otherwise a keytool
for a different, maybe even newer Java version might be picked up, and
then the keystore may not be readable by the JVM used to run the tests.
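A small sketch of the lookup, assuming the standard JDK layout with
the binary under <java.home>/bin:

    import java.io.File;

    class KeytoolLocator {
        static String keytoolPath() {
            String exe = System.getProperty("os.name").startsWith("Windows")
                    ? "keytool.exe" : "keytool";
            // java.home points at the JVM that is currently running, so
            // this keytool matches the JVM used to run the tests
            return System.getProperty("java.home")
                    + File.separator + "bin" + File.separator + exe;
        }
    }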
BatchRefUpdate: Skip saving conflicting ref names and prefixes in memory
Rather than getting all ref names and prefixes and saving them
in memory to perform the check for conflicting names, rely on
RefDirectory.isNameConflicting as it is no longer an expensive
call after it was optimized in Ie994fc.
The old optimization of saving ref names and prefixes in memory was
targeted at making clones faster. With this change, clone performance
was unaffected in tests with repos containing many (~500k) refs.
Here are a few recorded elapsed times for creating 10 branches using
BatchRefUpdate on NFS-based repositories with varying loose-ref
counts. As seen here, this change improves BatchRefUpdate performance
from O(n^2) to O(1).
loose_refs_count   with_change   without_change
              50        241 ms           310 ms
             300        263 ms          1502 ms
              1k        181 ms          4241 ms
              2k        204 ms          6440 ms
              9k        158 ms         25930 ms
             20k        154 ms         60443 ms
             50k        171 ms        135199 ms
            110k        157 ms        329450 ms
            160k        209 ms        396328 ms
This update improves Gerrit NoteDb migration performance, as the
migration uses BatchRefUpdate to write change meta refs, similar to
the test performed above.
Update tests to record the number of events fired post-setup and only
assert for events fired during BatchRefUpdate.execute. For tests which
use writeLooseRef to set up refs, create new tests which assert the
number of RefsChangedEvent(s) rather than updating the existing ones
to call RefDirectory.exactRef, as that changes the code path.
Avoid having to scan over ALL loose refs to determine if the
name is nested within or is a container of an existing reference.
This can get really expensive if there are many loose refs. Instead,
use exactRef and getRefsByPrefix, which scan based on a prefix.
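A simplified sketch of a prefix-based check in this spirit (not the
exact JGit implementation; it assumes org.eclipse.jgit on the
classpath):

    import java.io.IOException;
    import org.eclipse.jgit.lib.RefDatabase;

    class NameConflictSketch {
        // "refs/heads/a/b" conflicts if "refs/heads/a" exists as a ref,
        // or if any ref already exists under "refs/heads/a/b/".
        static boolean isNameConflicting(RefDatabase refDb, String name)
                throws IOException {
            for (int slash = name.lastIndexOf('/'); slash > 0;
                    slash = name.lastIndexOf('/', slash - 1)) {
                if (refDb.exactRef(name.substring(0, slash)) != null) {
                    return true; // existing ref is a prefix of the new name
                }
            }
            // an existing ref nested under the new name also conflicts
            return !refDb.getRefsByPrefix(name + '/').isEmpty();
        }
    }

Both lookups are driven by the candidate name itself, so the cost no
longer grows with the total number of loose refs.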
With a simple shell script (like the one below) using the jgit client
to create 1k refs in a new repository on NFS, this change brings the
time down from 12 mins to 7 mins.
for ref in $(seq 1 1000); do
  jgit branch "$ref"
done
Here are a few recorded elapsed times to create a new branch on
NFS-based repositories with varying loose-ref counts. As we see here,
this change improves the name-conflict check from O(n^2) to O(1).
loose_refs_count   with_change   without_change
              50         44 ms           164 ms
             300         45 ms          1193 ms
              1k         38 ms          2610 ms
              2k         44 ms          6003 ms
              9k         46 ms         27860 ms
             20k         45 ms         48591 ms
             50k         51 ms        135471 ms
            110k         43 ms        294252 ms
            160k         52 ms        430976 ms
Thomas Wolf [Tue, 4 May 2021 21:48:56 +0000 (23:48 +0200)]
LockFile: create OutputStream only when needed
Don't create the stream eagerly in lock(); that may cause JGit to
exceed OS or JVM limits on open file descriptors if many locks need
to be created, for instance when creating many refs. Instead create
the output stream only when one really needs to write something.
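A minimal sketch of the deferred-open pattern (illustrative, not the
actual LockFile code):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    class LazyLockSketch {
        private final File lck;
        private OutputStream os; // deliberately NOT opened when locking

        LazyLockSketch(File lck) {
            this.lck = lck;
        }

        // The file descriptor is opened on first write, so thousands of
        // held locks don't each pin an open descriptor.
        OutputStream getOutputStream() throws IOException {
            if (os == null) {
                os = new FileOutputStream(lck);
            }
            return os;
        }
    }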
Bug: 573328
Change-Id: If9441ed40494d46f594a896d34a5c4f56f91ebf4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Petr Hrebejk [Mon, 30 Nov 2020 19:19:53 +0000 (20:19 +0100)]
Fix PackInvalidException when fetch and repack run concurrently
We are running several servers with JGit. We need to run repack from
time to time to keep the repos performant, i.e., after a push we check
how many small packs are in the repo, and when a threshold is reached
we run the repack.
After upgrading the JGit version we found that if someone does a clone
while the repack is running, the clone sometimes (not always) fails
because the repack removes a .pack file used by the clone. Server
exception and client error are attached.
I've tracked down the cause; it seems to have been introduced between
JGit 5.2 (which we upgraded from) and 5.3, by this commit:
Move throw of PackInvalidException outside the catch -
https://github.com/eclipse/jgit/commit/afef866a44cd65fef292c174cad445b3fb526400
The problem is that when the throw was inside the try block, the last
catch block caught the exception and called the openFailed(false)
method. It is true that it called it with invalidate = false, which is
wrong. The real problem, though, is that with the throw outside the
try block, openFailed is not called at all and the fields activeWindows
and activeCopyRawData are not reset to 0. This affects later checks
like: if (++activeCopyRawData == 1 && activeWindows == 0).
The fix for this is relatively simple: keep the throw outside the try
block while still having the invalid field set to true. I did
exhaustive testing of the change, running concurrent clones and pushes
indefinitely; with the patch applied it never fails, while without the
patch it takes a relatively short time to hit the error.
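A sketch of the fixed control flow as described above (names follow
the description in this message, not the exact JGit sources):

    import java.io.IOException;

    class PackOpenSketch {
        private volatile boolean invalid;
        private int activeWindows;
        private int activeCopyRawData;

        void beginAccess() throws IOException {
            if (invalid) {
                // the throw stays outside the try block, but the counters
                // are consistent because openFailed(true) ran when the
                // pack first went bad
                throw new IOException("pack is invalid");
            }
            try {
                doOpen();
            } catch (IOException e) {
                openFailed(true); // reset counters AND mark pack invalid
                throw e;
            }
        }

        private void doOpen() throws IOException {
            // open the pack file here
        }

        private void openFailed(boolean invalidate) {
            // without these resets, later checks such as
            // (++activeCopyRawData == 1 && activeWindows == 0) misbehave
            activeWindows = 0;
            activeCopyRawData = 0;
            if (invalidate) {
                invalid = true;
            }
        }
    }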
Matthias Sohn [Wed, 25 Nov 2020 16:00:09 +0000 (17:00 +0100)]
Ensure that GC#deleteOrphans respects pack lock
If pack or index files are guarded by a pack lock (.keep file)
deleteOrphans() should not touch the respective files protected by the
lock file. Otherwise it may interfere with PackInserter concurrently
inserting a new pack file and its index.
The problem was caused by the following race.
All mentioned files are located in "objects/pack/".
File endings relevant in "pack" dir:
.pack
.keep
.idx
.bitmap
When ReceivePack receives a pack file it executes the following steps:
ReceivePack.service():
  receivePackAndCheckConnectivity():
    receivePack():
      receive the pack
      parse the pack, returns packLock (.keep file)
    PackInserter.flush():
      write tmpPack file: "insert_<random>.pack"
      write tmpIdx file: "insert_<random>.idx"
      real pack name: "pack-<SHA1>.pack"
      real index name: "pack-<SHA1>.idx"
      atomic rename tmpPack to realPack
      atomic rename tmpIdx to realIdx
  execute commands
  unlock pack by removing .keep file
  trigger auto gc if enabled
When PackInserter.flush() renames the temporary pack to the final
"pack-xxx.pack" file, the temporary pack index file "insert_xxx.idx"
briefly has no matching .pack file with the same base name. If
deleteOrphans() ran during that interval, it deduced that the pack
index file was orphaned and deleted it. The missing pack index then
caused MissingObjectExceptions since objects contained in the pack
couldn't be looked up anymore.
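A sketch of the guard, with hypothetical names and simplified logic
(the real deleteOrphans() works from a directory listing):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class OrphanSweepSketch {
        // Before deleting a companion file such as "pack-<SHA1>.idx",
        // check for a matching ".keep": a locked pack may be mid-flush.
        static void maybeDelete(Path companion) throws IOException {
            String name = companion.getFileName().toString();
            String base = name.substring(0, name.lastIndexOf('.'));
            if (Files.exists(companion.resolveSibling(base + ".keep"))) {
                return; // pack lock present: leave the files alone
            }
            if (!Files.exists(companion.resolveSibling(base + ".pack"))) {
                Files.deleteIfExists(companion); // truly orphaned
            }
        }
    }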
Bug: https://bugs.chromium.org/p/gerrit/issues/detail?id=13544
Change-Id: I559c81e4b1d7c487f92a751bd78b987d32c98719
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Michael Keppler [Tue, 14 Apr 2020 07:31:50 +0000 (09:31 +0200)]
Remove double blank from sentence start
Multiple whitespace characters are not normalized when reading
properties files, leading to unwanted spacing/indentation in console
or UI output.
Change-Id: I1f5224fe359e0cac493e0237872afc75dc8b9fbe
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
(cherry picked from commit ebbc3efce73278d6e0dbb1acd099db2446b1bed9)
Thomas Wolf [Thu, 2 Apr 2020 19:15:18 +0000 (21:15 +0200)]
FS.runInShell(): handle quoted filters and hooksPath containing blanks
Revert commit 2323d7a. Using $0 in the shell command call results in
the command string being taken literally. That was introduced to fix
a problem with backslashes, but is actually not correct.
First, the problem with backslashes occurred only on Win32/Cygwin,
and has been properly fixed in commit 6f268f8.
Second, this is used only for hooks (which don't have backslashes in
their names) and filter commands from the git config, where the user
is responsible for properly quoting or escaping such that the commands
work.
Third, using $0 actually breaks correctly quoted filter commands like
the one in the bug report. The shell takes the command string
literally, and then doesn't find the command because of the quotes.
So revert this change.
At the same time there's a related problem with hooks. If the path to
the hook contains blanks, runInShell() would also fail to find the
hook. In this case, the command doesn't come from user input but is
just a Java File object with an absolute path containing blanks. (Can
occur if core.hooksPath points to such a path with blanks, or if the
repository has such a path.)
The path to the hook as obtained from the file system must be quoted.
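A tiny sketch of the quoting step using the usual sh single-quote
idiom (hypothetical helper, not the actual JGit method):

    class ShellQuoteSketch {
        // Wrap an absolute hook path in single quotes for sh, turning any
        // embedded single quote into the '\'' escape sequence.
        static String quote(String path) {
            return "'" + path.replace("'", "'\\''") + "'";
        }
    }

For a path like /repo dir/.git/hooks/pre-commit this yields
'/repo dir/.git/hooks/pre-commit', which sh treats as a single word
despite the blank.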
Thomas Wolf [Wed, 25 Mar 2020 08:13:20 +0000 (09:13 +0100)]
Handle non-normalized index also for executable files
Commit 60cf85a4 corrected the handling of check-in for files where
the index version is non-normalized, i.e., contains CR-LF line endings.
However, it did so only for regular files, not executable files.
Bug: 561438
Change-Id: I372cc990c5efeb00315460f36459c0652d5d1e77
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Alex Blewitt [Tue, 25 Feb 2020 21:14:59 +0000 (21:14 +0000)]
Expose FileStoreAttributes.setBackground()
The FS.setAsyncFileStoreAttributes() static method calls
FileStoreAttributes.setBackground() as its implementation, but there are
other public attributes on this inner class already and there isn't a
real reason why this needs to be private.
By making it public we allow callers to be able to invoke it directly.
Although it doesn't appear that it would make a difference, calling a
static method on the FS class means that all of its static fields and
the transitive closure of its class dependencies must be loaded and
initialised, which can be non-trivial.
Calls to FS.setAsyncFileStoreAttributes() may be replaced with
FS.FileStoreAttributes.setBackground() with no change of behaviour
other than improved performance, since less class loading is required.
Bug: 560527
Change-Id: I9538acc90da8d18f53fd60d74eb54496857f93a5
Signed-off-by: Alex Blewitt <alex.blewitt@gmail.com>
Han-Wen Nienhuys [Tue, 18 Feb 2020 19:44:10 +0000 (20:44 +0100)]
Update reftable storage repo layout
The change Ic0b974fa (c217d33, "Documentation/technical/reftable:
improve repo layout") defines a new repository layout, which was
agreed with the git-core mailing list.
It addresses the following problems:
* old git clients will not recognize reftable-based repositories, and
look at encompassing directories.
* Poorly written tools might write directly into
.git/refs/heads/BRANCH.
Since we consider JGit reftable as experimental (git-core doesn't
support it yet), we have no backward compatibility. If you created a
repository with reftable between mid-Nov 2019 and now, you can do the
following to convert:
Thomas Wolf [Thu, 27 Feb 2020 19:04:47 +0000 (20:04 +0100)]
Cygwin expects forward slashes for commands to be run via sh.exe
FS_Win32_Cygwin replaces backslashes by / as a side-effect of
relativize(). When support for core.hooksPath was added, paths were
relativized in a different place using Path.resolve(), which doesn't
do that transformation. As a result hooks could not be run on Cygwin
in some cases.
Do the transformation in FS_Win32_Cygwin.runInShell(). In all other
places, File or Path objects are used, which give no guarantee about
the file separator (typically the system-dependent default separator),
so doing the transformation earlier still wouldn't guarantee that
sh.exe indeed gets a command string using forward slashes.
Bug: 558577
Change-Id: I3c07eb85f0ac7c5628a2e92f990e5cdb7ecf532f
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
David Pursehouse [Thu, 27 Feb 2020 11:27:31 +0000 (20:27 +0900)]
Move array designators from the variable to the type
As reported by Sonar Lint:
Array designators should always be located on the type for better code
readability. Otherwise, developers must look both at the type and the
variable name to know whether or not a variable is an array.
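For example:

    class ArrayDeclarationSketch {
        int[] counts;     // designator on the type: clearly an int array
        // int counts[];  // designator on the variable: flagged by the rule
        String[][] grid;  // nested arrays also read naturally this way
    }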
Change-Id: If6b41fed3483d0992d402d8680552ab4bef89ffb
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Resolving the hostname comes with a performance penalty. We no longer
store the timestamp resolution in the global git config, which might
be copied around to other machines, but in a dedicated jgit config
meant for automatically determined options like timestamp resolution.
Hence there is no longer a strong reason to have a hardware-specific
identifier in the subsection name of file timestamp resolution options.
Bug: 560414
Change-Id: If8dcabe981eb1792db84643850faa6033f14b1cf
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Han-Wen Nienhuys [Tue, 21 Jan 2020 18:06:30 +0000 (19:06 +0100)]
Simplify ReftableCompactor
The ReftableCompactor supported a byteLimit, but this is currently
unused. The FileReftableStack has a more sophisticated strategy that
amortizes compaction costs.
Rename min/maxUpdateIndex to reflogExpire{Min,Max}UpdateIndex to
reflect their purpose more accurately.
Since reflogs are generally pruned chronologically (oldest entries
are expired first), entries can only be pruned on full compaction, so
these limits should not be set by default.
Rephrase the documentation of Reader#minUpdateIndex and
maxUpdateIndex. These values are documented to affect log entries, but
semantically they are about ref entries. Since ref entries have their
timestamps delta-compressed, it is important for the min/maxUpdateIndex
values to be coherent between different tables.
The logical timestamps for log entries do not have to be coherent
across different tables, as the timestamp of a log entry is part of
the key.
For example, a table written at update index 20 may contain a tombstone
log entry at timestamp 1.
Therefore, we set ReftableWriter's min/maxUpdateIndex from the merged
tables we are compacting, rather than from the compaction settings
(which should only control reflog expiry).
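Sketched against ReftableWriter's setters (assuming the
setMinUpdateIndex/setMaxUpdateIndex methods; simplified from the
actual compactor):

    import org.eclipse.jgit.internal.storage.reftable.ReftableWriter;

    class CompactorSketch {
        // The update-index range of the output table comes from the
        // merged input tables, not from the reflog-expiry settings.
        static void configure(ReftableWriter writer, long minOfInputs,
                long maxOfInputs) {
            writer.setMinUpdateIndex(minOfInputs);
            writer.setMaxUpdateIndex(maxOfInputs);
        }
    }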
The previous behavior could drop log entries erroneously, especially
in the presence of tombstone log entries. Unfortunately, testing this
properly requires both an API for adding log tombstones, and a more
refined API for controlling automatic compaction. Hence, no test.