Matthias Sohn [Fri, 3 Dec 2021 21:45:05 +0000 (22:45 +0100)]
Update maven plugins
- build-helper-maven-plugin to 3.2.0
- jacoco-maven-plugin to 0.8.7
- maven-antrun-plugin to 3.0.0
- maven-dependency-plugin to 3.2.0
- maven-enforcer-plugin to 3.0.0
- maven-jar-plugin to 3.2.0
- maven-javadoc-plugin to 3.3.1
- maven-jxr-plugin to 3.1.1
- maven-pmd-plugin to 3.15.0
- maven-project-info-reports-plugin to 3.1.2
- maven-resources-plugin to 3.2.0
- maven-shade-plugin to 3.2.4
- maven-site-plugin to 3.9.1
- maven-source-plugin to 3.2.1
- maven-surefire-plugin to 3.0.0-M5
- spotbugs-maven-plugin to 4.3.0
- tycho and tycho-extras to 1.7.0
RefDirectory.scanRef: Re-use file existence check done in snapshot creation
Return immediately in scanRef if the loose ref was identified as
missing when a snapshot was attempted for the ref. This will help
performance of scanRef when the ref is packed but has a corresponding
empty dir in 'refs/'.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like below:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs without
this change takes ~2.5s, whereas with this change it takes ~1.5s.
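A minimal sketch of the early-return pattern described above; the class,
field and method names are illustrative and do not match JGit's actual
internals:

// Illustrative sketch: the snapshot already recorded whether the loose
// ref file existed, so scanRef can return without a second stat call.
final class LooseRefScanSketch {
    static final class Snapshot {
        final boolean fileExists; // captured when the snapshot was taken
        Snapshot(boolean fileExists) { this.fileExists = fileExists; }
    }

    static String scanRef(Snapshot snapshot, java.nio.file.Path looseFile)
            throws java.io.IOException {
        if (!snapshot.fileExists) {
            return null; // missing loose ref: caller falls back to packed-refs
        }
        // only now read and parse the loose ref file
        return new String(java.nio.file.Files.readAllBytes(looseFile),
                java.nio.charset.StandardCharsets.UTF_8).trim();
    }
}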
FileSnapshot: Lazy load file store attributes cache
Doing a getFileStoreAttributes call even when the file doesn't
exist is unnecessary. This call is particularly slow on some
filesystems. Instead, do it only when the file exists and load
the appropriate cache.
This update can help speed up RefDirectory.exactRef when the ref
is packed, but has a corresponding empty dir for it under 'refs/'.
This scenario can happen when an atomic 'BatchRefUpdate' creates
new sharded refs.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like below:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs without
this change takes ~30s, whereas with this change it takes ~2.5s.
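A minimal sketch of the lazy-loading pattern described above; the names are
illustrative, and the real FileSnapshot logic is more involved:

import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: the file store attribute lookup is deferred until
// the file is known to exist, and results are cached per file store.
final class LazyFileStoreAttributes {
    private static final Duration FALLBACK = Duration.ofSeconds(2);
    private static final Map<FileStore, Duration> CACHE = new ConcurrentHashMap<>();

    static Duration timestampResolution(Path path) throws IOException {
        if (!Files.exists(path)) {
            return FALLBACK; // missing file: skip the slow getFileStore call
        }
        FileStore store = Files.getFileStore(path); // slow on e.g. NFS
        return CACHE.computeIfAbsent(store, s -> probeResolution());
    }

    private static Duration probeResolution() {
        // placeholder for the real (expensive) timestamp resolution probing
        return Duration.ofMillis(1);
    }
}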
The reftable format is a block-based format, but it allows for variably
sized blocks. This obviously happens for reflog blocks (which are zlib
compressed), but it is also accepted for index blocks; the spec motivates
this as follows:
To achieve constant O(1) disk seeks for lookups the index must be
a single level, which is permitted to exceed the file's
configured block size, but not the format's max block size of
15.99 MiB.
Hence, when parsing a block, one cannot be sure of its exact size:
after reading a default-size block (e.g. 4 KB), the block header may
state that the block is in fact larger.
Before, the code would mark the block as `truncated`, noting
// Its OK during sequential scan for an index block to have been
// partially read and be truncated in-memory. This happens when
// the index block is larger than the file's blockSize. Caller
// will break out of its scan loop once it sees the blockType.
This looks like either
* a remnant of never-implemented functionality: there is no reason to
ever sequentially scan an index block, or
* an allusion to the sequential scan of the data blocks before the index
blocks (e.g. scanning refs, which ends when we find the first ref index
block, and we can then ignore the index block).
This comment is followed by code that populates the
restartTbl/restartCnt fields relative to the (possibly truncated)
buffer. If the buffer is truncated, this essentially reads garbage,
leading to OOB array access when using the index block.
Fix this by dropping the truncated logic and issuing a second read if
the first read was short.
Add a test.
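A minimal sketch of the two-step read described above; the helper names are
hypothetical and do not match JGit's BlockReader/BlockSource API:

import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative sketch: read a default-size chunk first, then issue a
// second read if the header-declared block length is larger than that
// chunk, instead of marking the block truncated.
final class VariableSizeBlockRead {
    interface Source {
        ByteBuffer read(long position, int length) throws IOException;
    }

    static ByteBuffer readBlock(Source src, long position, int defaultBlockSize)
            throws IOException {
        ByteBuffer buf = src.read(position, defaultBlockSize);
        int declaredLen = declaredBlockLength(buf);
        if (declaredLen > buf.limit()) {
            // short first read: fetch the whole block so the restart
            // table offsets later point inside the buffer
            buf = src.read(position, declaredLen);
        }
        return buf;
    }

    private static int declaredBlockLength(ByteBuffer buf) {
        // placeholder: the real header stores the length in the 3 bytes
        // following the 1-byte block type
        return buf.getInt(0) & 0xFFFFFF;
    }
}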
We have never observed this failure scenario at Google. We use a 64 KB
block size, which means we need fewer index entries. The reftable
spec mentions an Android repo of size 36M. With 64 KB blocks, that's
just 562 index entries. Even with historical growth, we are far from
requiring an index whose size exceeds a single block.
When adding the analogous test for seeking refs, there was no failure.
This points to another possibility, namely that the code tries to
avoid writing large index blocks for refs.
Thomas Wolf [Sun, 25 Jul 2021 13:44:35 +0000 (15:44 +0200)]
[test] Create keystore with the keytool of the running JDK
Call keytool with the absolute path of "java.home". Otherwise a keytool
for a different, maybe even newer Java version might be picked up, and
then the keystore may not be readable by the JVM used to run the tests.
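A minimal sketch of resolving keytool relative to the running JVM, as the
commit describes; the helper class is illustrative, not the actual test code:

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: derive the keytool path from the JVM running the
// tests, so the generated keystore is readable by that same JVM.
final class KeytoolRunner {
    static String keytoolPath() {
        return System.getProperty("java.home") + File.separator + "bin"
                + File.separator + "keytool";
    }

    static int runKeytool(String... args) throws IOException, InterruptedException {
        List<String> cmd = new ArrayList<>();
        cmd.add(keytoolPath());
        cmd.addAll(Arrays.asList(args));
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}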
BatchRefUpdate: Skip saving conflicting ref names and prefixes in memory
Rather than getting all ref names and prefixes and saving them
in memory to perform the check for conflicting names, rely on
RefDirectory.isNameConflicting as it is no longer an expensive
call after it was optimized in Ie994fc.
The old optimization to save ref names and prefixes in memory
was targeted towards making clones faster. With this change,
clone performance is unaffected in tests done with repos
containing many (~500k) refs.
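A minimal sketch of the per-command check described above (simplified; the
real BatchRefUpdate handles more cases):

import java.io.IOException;
import java.util.Collection;

import org.eclipse.jgit.lib.RefDatabase;
import org.eclipse.jgit.transport.ReceiveCommand;

// Illustrative sketch: ask the ref database about each command's name
// on demand instead of snapshotting all existing names and prefixes.
final class NameCheckSketch {
    static void rejectConflictingNames(RefDatabase refDb,
            Collection<ReceiveCommand> commands) throws IOException {
        for (ReceiveCommand cmd : commands) {
            if (refDb.isNameConflicting(cmd.getRefName())) {
                cmd.setResult(ReceiveCommand.Result.LOCK_FAILURE);
            }
        }
    }
}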
Here are a few recorded elapsed times for creating 10 branches
using BatchRefUpdate on NFS-based repositories with varying
loose-ref counts. As seen here, this change improves
BatchRefUpdate performance from O(n^2) to O(1).
loose_refs_count with_change without_change
50 241 ms 310 ms
300 263 ms 1502 ms
1k 181 ms 4241 ms
2k 204 ms 6440 ms
9k 158 ms 25930 ms
20k 154 ms 60443 ms
50k 171 ms 135199 ms
110k 157 ms 329450 ms
160k 209 ms 396328 ms
This update improves the Gerrit notedb migration performance
as it uses BatchRefUpdate to write change meta refs similar to
the test performed above.
Update tests to record the number of events fired post-setup and only
assert for events fired during BatchRefUpdate.execute. For tests which
use writeLooseRef to set up refs, create new tests which assert the
number of RefsChangedEvent(s) rather than updating the existing ones
to call RefDirectory.exactRef, as that changes the code path.
Avoid having to scan over ALL loose refs to determine if the
name is nested within or is a container of an existing reference.
This can get really expensive when there are many loose refs.
Instead, use exactRef and getRefsByPrefix, which scan based on a
prefix.
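A minimal sketch of the prefix-based check described above (simplified; the
real RefDirectory implementation covers more cases):

import java.io.IOException;
import org.eclipse.jgit.lib.RefDatabase;

// Illustrative sketch: probe only the relevant names instead of scanning
// every loose ref.
final class NameConflictSketch {
    static boolean isNameConflicting(RefDatabase refDb, String name)
            throws IOException {
        // an existing "refs/heads/foo" conflicts with a new "refs/heads/foo/bar"
        int slash = name.lastIndexOf('/');
        while (slash > 0) {
            String prefix = name.substring(0, slash);
            if (refDb.exactRef(prefix) != null) {
                return true;
            }
            slash = name.lastIndexOf('/', slash - 1);
        }
        // an existing "refs/heads/foo/bar" conflicts with a new "refs/heads/foo"
        return !refDb.getRefsByPrefix(name + "/").isEmpty();
    }
}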
With a simple shell script (like the one below) using the jgit client to
create 1k refs in a new repository on NFS, this change brings the time
down from 12 minutes to 7 minutes.
for ref in $(seq 1 1000); do
  jgit branch "$ref"
done
Here are a few recorded elapsed times for creating a new branch on
NFS-based repositories with varying loose-ref counts. As seen here,
this change improves the name-conflict check from O(n^2) to O(1).
loose_refs_count with_change without_change
50 44 ms 164 ms
300 45 ms 1193 ms
1k 38 ms 2610 ms
2k 44 ms 6003 ms
9k 46 ms 27860 ms
20k 45 ms 48591 ms
50k 51 ms 135471 ms
110k 43 ms 294252 ms
160k 52 ms 430976 ms
Thomas Wolf [Tue, 4 May 2021 21:48:56 +0000 (23:48 +0200)]
LockFile: create OutputStream only when needed
Don't create the stream eagerly in lock(); that may cause JGit to
exceed OS or JVM limits on open file descriptors if many locks need
to be created, for instance when creating many refs. Instead create
the output stream only when one really needs to write something.
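A minimal sketch of the lazy-stream pattern described above; the field and
method names are illustrative, not LockFile's actual API:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: acquiring the lock opens no stream; a file
// descriptor is consumed only once something is actually written.
final class LazyLock {
    private final Path lockPath;
    private OutputStream out;

    LazyLock(Path lockPath) { this.lockPath = lockPath; }

    void lock() throws IOException {
        Files.createFile(lockPath); // take the lock, but open no stream yet
    }

    void write(byte[] content) throws IOException {
        if (out == null) {
            out = Files.newOutputStream(lockPath); // opened on first write only
        }
        out.write(content);
    }
}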
Bug: 573328
Change-Id: If9441ed40494d46f594a896d34a5c4f56f91ebf4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
In a distributed setting, one can have multiple datacenters use
reftables for serving, while the ground truth for the Ref database is
administered centrally. In this setting, replication delays combined
with compaction can cause update-index ranges to overlap.
Such a setting is used at Google, and the JGit code already handles
this correctly (modulo a bugfix applied in change I8f8215b99a).
Remove the restriction that was applied at FileReftableDatabase.
Petr Hrebejk [Mon, 30 Nov 2020 19:19:53 +0000 (20:19 +0100)]
Fix PackInvalidException when fetch and repack run concurrently
We are running several servers with JGit. We need to run repack from
time to time to keep the repos performant, i.e. after a push we check
how many small packs are in the repo, and when a threshold is reached
we run the repack.
After upgrading the JGit version we found that if someone does a clone
while the repack is running, the clone sometimes (not always) fails
because the repack removes a .pack file used by the clone. Server
exception and client error attached.
I've tracked down the cause: it seems to have been introduced between
JGit 5.2 (which we upgraded from) and 5.3, by this commit:
Move throw of PackInvalidException outside the catch -
https://github.com/eclipse/jgit/commit/afef866a44cd65fef292c174cad445b3fb526400
The problem is that when the throw was inside the try block, the last
catch block caught the exception and called the openFail(false) method.
It is true that it called it with invalidate = false, which is wrong.
The real problem, though, is that with the throw outside of the try block
openFail is not called at all, and the fields activeWindows and
activeCopyRawData are not set to 0. This affects later checks such as:
if (++activeCopyRawData == 1 && activeWindows == 0).
The fix for this is relatively simple: keep the throw outside of the
try block and still have the invalid field set to true. I did
exhaustive testing of the change, running concurrent clones and pushes
indefinitely; with the patch applied it never fails, while without the
patch it takes a relatively short time to hit the error.
Matthias Sohn [Wed, 25 Nov 2020 16:00:09 +0000 (17:00 +0100)]
Ensure that GC#deleteOrphans respects pack lock
If pack or index files are guarded by a pack lock (.keep file)
deleteOrphans() should not touch the respective files protected by the
lock file. Otherwise it may interfere with PackInserter concurrently
inserting a new pack file and its index.
The problem was caused by the following race.
All mentioned files are located in "objects/pack/".
File endings relevant in "pack" dir:
.pack
.keep
.idx
.bitmap
When ReceivePack receives a pack file it executes the following steps:
ReceivePack.service():
  receivePackAndCheckConnectivity():
    receivePack():
      receive the pack
      parse the pack, returns packLock (.keep file)
      PackInserter.flush():
        write tmpPack file: "insert_<random>.pack"
        write tmpIdx file: "insert_<random>.idx"
        real pack name: "pack-<SHA1>.pack"
        real index name: "pack-<SHA1>.idx"
        atomic rename tmpPack to realPack
        atomic rename tmpIdx to realIdx
  execute commands
  unlock pack by removing .keep file
  trigger auto gc if enabled
When PackInserter.flush() renames the temporary pack to the final
"pack-xxx.pack" file, the temporary pack index file "insert_xxx.idx"
has no matching .pack file with the same base name for a short interval.
If deleteOrphans() ran during that interval, it deduced that the pack
index file was orphaned and deleted it. Subsequently the missing pack
index caused MissingObjectExceptions, since objects contained in the
pack couldn't be looked up anymore.
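A minimal sketch of the guard described above; this is a simplified
stand-in for GC#deleteOrphans, not its actual code:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch: never delete index/bitmap files whose base name is
// protected by a .keep pack lock, even if the matching .pack file is not
// visible at this moment.
final class DeleteOrphansSketch {
    static void deleteOrphans(Path packDir) throws IOException {
        List<Path> files;
        try (Stream<Path> s = Files.list(packDir)) {
            files = s.collect(Collectors.toList());
        }
        for (Path f : files) {
            String name = f.getFileName().toString();
            if (!name.endsWith(".idx") && !name.endsWith(".bitmap")) {
                continue;
            }
            String base = name.substring(0, name.lastIndexOf('.'));
            if (Files.exists(packDir.resolve(base + ".keep"))) {
                continue; // pack lock present: an insert may still be in flight
            }
            if (!Files.exists(packDir.resolve(base + ".pack"))) {
                Files.deleteIfExists(f); // truly orphaned
            }
        }
    }
}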
Bug: https://bugs.chromium.org/p/gerrit/issues/detail?id=13544
Change-Id: I559c81e4b1d7c487f92a751bd78b987d32c98719
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Thomas Wolf [Thu, 19 Nov 2020 12:23:05 +0000 (13:23 +0100)]
PacketLineIn: ensure that END != DELIM
Just allocate the two string objects directly. The previously used
new StringBuilder(0).toString() returns the same object for both END
and DELIM when run on Java 15, which breaks the wire protocol because
END and DELIM can then no longer be distinguished.
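A minimal sketch of the idea; the parser tells the two sentinels apart by
object identity, so they must be distinct instances (simplified from
PacketLineIn):

// Illustrative sketch: 'new String()' guarantees two distinct empty-string
// instances, while StringBuilder(0).toString() may return the same shared
// "" object on newer JVMs.
final class SentinelSketch {
    static final String END = new String();
    static final String DELIM = new String();

    static boolean isEnd(String line) {
        return line == END;   // identity comparison, not equals()
    }

    static boolean isDelimiter(String line) {
        return line == DELIM; // identity comparison, not equals()
    }
}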
Bug: 568950
Change-Id: I9d54d7bf484948c24b51a094256bd9d38b085f35
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
(cherry picked from commit 7da0f0a8f37e35e9c3108588f1e6f7a7381d8f77)
Matthias Sohn [Sun, 7 Jun 2020 22:32:51 +0000 (00:32 +0200)]
Merge branch 'master' into stable-5.8
* master:
Add benchmark for strategies how to move a file
Add getter for unpackErrorHandler in ReceivePack
Upgrade maven-project-info-reports-plugin to 3.1.0
Upgrade maven-shade-plugin to 3.2.4
ObjectDirectoryInserter: Open FileOutputStream in try-with-resource
ObjectDirectoryInserter: Remove redundant 'throws' declarations
ObjectDirectory: Further clean up insertUnpackedObject
Add Git#shutdown for releasing resources held by JGit process
ApplyCommand: use context lines to determine hunk location
GPG: don't prompt for a passphrase for unprotected keys
Fix typo in org.eclipse.jgit.ssh.jsch.test MANIFEST
Fix ProtectedMembersInFinalClass warning flagged by error prone
Use version range to define fragment host bundle version
ObjectDirectory: Explicitly handle NoSuchFileException
ObjectDirectory: Fail immediately when atomic move is not supported
Fix jgit packaging
Fix InvalidInlineTag error flagged by error prone
Fix BadComparable error flagged by error prone
Add tests for RawTextComparator.WS_IGNORE_CHANGE.hash()
Update Orbit to R20200529191137 for final Eclipse release 2020-06
Organize manifest of org.eclipse.jgit.pgm
Do not include log4j implementation in jgit
Decouple JSch from JGit Core
Decouple BouncyCastle from JGit Core
Verify that the user home directory is valid
WindowCache: conditional JMX setup
RawTextComparator.WS_IGNORE_CHANGE must not compare whitespace
Revert "PackBitmapIndex: Not buffer inflated bitmap in BasePackBitmapIndex"
Update jetty to 9.4.28.v20200408
Add 4.16 staging target platform
In-memory SSH keys for the "no files" sshd tests
Builder API to configure SshdSessionFactories
TransportHttp: abort on time-out or on SocketException
Ignore core.eol if core.autocrlf=input
Attributes: fix handling of text=auto in combination with eol
Bazel: Remove superfluous dependencies flagged by unused_deps
Log stack trace if CachingKeyPairProvider hits unexpected exception
Update Orbit to S20200519202422 and ant to 1.10.8
Include full IssuerFingerprint in GPG signature
Bazel: Fix src_sha1 of bcpg-jdk15on
Suppress API error for new method BitmapIndex.Bitmap#retrieveCompressed
Fix wrong @since tags added in dcb0265
PackBitmapIndex: Set distance threshold
PackBitmapIndex: Not buffer inflated bitmap in BasePackBitmapIndex
PackBitmapIndex: Remove convertedBitmaps in the Remapper
PackBitmapIndex: Reduce memory usage in GC
PackBitmapIndex: Add AddToBitmapWithCacheFilter class
PackBitmapIndex: Add util methods and builder to BitmapCommit
PackBitmapIndex: Move BitmapCommit to a top-level class
Refactor: Make retrieveCompressed a method of the Bitmap class
Fix downloading LFS Object fails behind proxy
Allow for using custom s3 host with lfs server
ReceivePack: adding IterativeConnectivityChecker
Moving transport/internal -> internal/transport
Fix error occurring during checkout
Change-Id: I675cfb9eee956bab4903f28d8021115afc753dcd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Matthias Sohn [Fri, 5 Jun 2020 11:01:11 +0000 (13:01 +0200)]
Add benchmark for strategies how to move a file
We can either
- try moving the file and, in case the target directory doesn't exist,
handle the NoSuchFileException this raises by creating the target
directory
- or always first test whether the target directory exists and create it
if it is missing
The benchmark shows that if the target directory of the move already
exists, the first strategy is faster by around 25 us/op; otherwise the
second one is faster by around 30 us/op. Which one is favorable depends
on the average probability that the target directory already exists in
real-world scenarios.
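A minimal sketch of the two strategies being compared; paths and class
names are illustrative, not the benchmark code itself:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Illustrative sketch of the two move strategies compared by the benchmark.
final class MoveStrategies {
    // Strategy 1: optimistic move, create the directory only on failure.
    static void moveThenCreate(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst);
        } catch (NoSuchFileException e) {
            Files.createDirectories(dst.getParent());
            Files.move(src, dst);
        }
    }

    // Strategy 2: always ensure the target directory exists first.
    static void createThenMove(Path src, Path dst) throws IOException {
        Files.createDirectories(dst.getParent());
        Files.move(src, dst);
    }
}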
Change-Id: I03653b408b859a796508dfa1471b36c65633534e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Jack Wickham [Fri, 22 May 2020 10:00:18 +0000 (11:00 +0100)]
Add getter for unpackErrorHandler in ReceivePack
The current mechanism for updating the unpack error handler requires
that the error handler is replaced entirely, including communicating
the error to the user. Adding a getter means that delegating
implementations can be constructed so that the error can be processed
before it is sent to the user, for example for logging.
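A minimal sketch of the delegation this enables; it assumes
UnpackErrorHandler exposes a single handleUnpackException(Throwable)
method, which is an assumption here rather than something stated above:

import java.io.IOException;
import org.eclipse.jgit.transport.ReceivePack;
import org.eclipse.jgit.transport.UnpackErrorHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch: wrap the handler obtained via the new getter so
// the error is logged before the original handler reports it.
// Assumption: UnpackErrorHandler's single method is handleUnpackException.
final class LoggingUnpackErrors {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingUnpackErrors.class);

    static void install(ReceivePack rp) {
        UnpackErrorHandler original = rp.getUnpackErrorHandler();
        rp.setUnpackErrorHandler(new UnpackErrorHandler() {
            @Override
            public void handleUnpackException(Throwable t) throws IOException {
                LOG.error("unpack failed", t);     // process the error first
                original.handleUnpackException(t); // then report to the client
            }
        });
    }
}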
Change-Id: I4b6f78a041d0f6f5b4076a9a5781565ca3857817
Signed-off-by: Jack Wickham <jwickham@palantir.com>