Matthias Sohn [Fri, 15 Oct 2021 20:45:18 +0000 (22:45 +0200)]
Merge branch 'stable-5.10' into stable-5.11
* stable-5.10:
Fix missing peel-part in lsRefsV2 for loose annotated tags
reftable: drop code for truncated reads
reftable: pass on invalid object ID in conversion
Update eclipse-jarsigner-plugin to 1.3.2
Fix running benchmarks from bazel
Update eclipse-jarsigner-plugin to 1.3.2
Matthias Sohn [Fri, 15 Oct 2021 20:32:00 +0000 (22:32 +0200)]
Merge branch 'stable-5.9' into stable-5.10
* stable-5.9:
Fix missing peel-part in lsRefsV2 for loose annotated tags
reftable: drop code for truncated reads
reftable: pass on invalid object ID in conversion
Update eclipse-jarsigner-plugin to 1.3.2
Fix running benchmarks from bazel
Update eclipse-jarsigner-plugin to 1.3.2
Unfortunately, the existing UploadPackTest didn't reveal this issue
although it provided the test case needed to do so: testV2LsRefsPeel.
This is because UploadPackTest uses InMemoryRepository, which
internally uses the Dfs* implementations. The issue is only reproducible
when using FileRepository.
It is a non-trivial task to refactor UploadPackTest to work against
both InMemoryRepository and FileRepository, and this change does not
attempt that. Instead, this change creates a new test,
UploadPackLsRefsFileRepositoryTest, and copies the necessary code from
UploadPackTest.
The reftable format is block-based, but allows for variably sized
blocks. This obviously happens for reflog blocks (which are zlib
compressed), but it is also permitted for index blocks. The spec
motivates this as follows:
To achieve constant O(1) disk seeks for lookups the index must be
a single level, which is permitted to exceed the file's
configured block size, but not the format's max block size of
15.99 MiB.
Hence, when parsing a block, one cannot be sure of its exact size:
after reading a default-size block (e.g. 4 KiB), the block header may
state that the block is in fact larger.
Before, the code would mark the block as `truncated`, noting
// Its OK during sequential scan for an index block to have been
// partially read and be truncated in-memory. This happens when
// the index block is larger than the file's blockSize. Caller
// will break out of its scan loop once it sees the blockType.
This looks like either

* a remnant of never-implemented functionality (there is no reason to
ever scan an index block sequentially), or

* an allusion to the sequential scan of the data blocks that precede
the index blocks (e.g. scanning refs, which ends when we find the first
ref index block, whereupon the index block can be ignored).
This comment is followed by code that populates the
restartTbl/restartCnt fields relative to the (possibly truncated)
buffer. If the buffer is truncated, this essentially reads garbage,
leading to out-of-bounds array access when the index block is used.
Fix this by dropping the truncated logic and issuing a second read if
the first read was short.
Add a test.
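A minimal sketch of the new read path (readBytes, decodeBlockLen and
concat are illustrative helper names, not the actual BlockReader
internals):

  // Read a block whose true size may exceed the configured block size.
  byte[] readBlock(long position, int blockSize) throws IOException {
    byte[] block = readBytes(position, blockSize); // guessed-size read
    int fullLength = decodeBlockLen(block); // header states the true size
    if (fullLength > block.length) {
      // Short read: fetch the remainder instead of marking the block
      // truncated.
      byte[] rest = readBytes(position + block.length,
          fullLength - block.length);
      block = concat(block, rest);
    }
    // restartTbl/restartCnt are now computed against a complete buffer.
    return block;
  }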
We have never observed this failure scenario at Google. We use a
64 KiB block size, which means we need fewer index entries. The reftable
spec mentions an Android repo of size 36M; with 64 KiB blocks, that is
just 562 index entries. Even with historical growth, we are far from
requiring an index whose size exceeds a single block.
When adding the analogous test for seeking refs, there was no failure.
This points to another possibility, which is that the writer tries to
avoid producing large index blocks for refs.
Antonio Barone [Wed, 1 Sep 2021 10:04:07 +0000 (12:04 +0200)]
GitServlet: allow overriding the default error handlers
GitServlet delegates repository access over HTTP to the GitFilter
servlet.
GitServlet, in turn, can be extended by jgit consumers to provide custom
logic when handling such operations.
This is the case, for example, with Gerrit Code Review, which provides
custom behavior with a GitOverHttpServlet [1].
Among the possible customizations, the ability to specify a custom
error handler for UploadPack and ReceivePack was already introduced in
GitFilter by Idd3b87d6b and I9c708aa5a2, respectively.
However, the `setUploadPackErrorHandler` and `setReceivePackErrorHandler`
methods were never added to GitServlet.
Expose the `setUploadPackErrorHandler` and `setReceivePackErrorHandler`
methods on GitServlet, so that consumers of the JGit library can
specify custom error handlers.
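For illustration, a consumer might then install a handler like this
(a hedged sketch: the lambda shape assumes the UploadPackErrorHandler
interface introduced by Idd3b87d6b, which wraps the actual operation,
and LOG is an assumed logger):

  GitServlet servlet = new GitServlet();
  servlet.setUploadPackErrorHandler((req, rsp, r) -> {
    try {
      r.upload(); // run the normal upload-pack processing
    } catch (ServiceMayNotContinueException e) {
      LOG.warn("upload-pack failed for " + req.getRequestURI(), e);
      throw e; // keep the default error reporting
    }
  });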
Thomas Wolf [Sun, 25 Jul 2021 13:44:35 +0000 (15:44 +0200)]
[test] Create keystore with the keytool of the running JDK
Call keytool with the absolute path of "java.home". Otherwise a keytool
for a different, maybe even newer Java version might be picked up, and
then the keystore may not be readable by the JVM used to run the tests.
Antonio Barone [Wed, 2 Jun 2021 15:13:17 +0000 (18:13 +0300)]
Retry loose object read upon "Stale file handle" exception
When reading loose objects over NFS, it is possible for the OS syscall
to fail with ESTALE errors: this happens when the open file
descriptor no longer refers to a valid file.
Notoriously, it is possible to hit this scenario when git data is shared
among multiple clients, for example by multiple Gerrit instances in HA.
If one of the clients performs a GC operation that packs and then
prunes loose objects, another client might still hold a reference to
those objects, which would cause an exception to bubble up the stack.
The Linux NFS FAQ [1] (at point A.10) suggests that the proper way to
handle such ESTALE scenarios is to:
"[...] close the file or directory where the error occurred, and reopen
it so the NFS client can resolve the pathname again and retrieve the new
file handle."
In case of a stale file handle exception, we now attempt to read the
loose object again (up to 5 times), until we either succeed or encounter
a FileNotFoundException, in which case the search can continue in
packfiles and alternates.
The limit of 5 is an arbitrary upper bound, consistent with the one
chosen when handling stale file handles for packed-refs files (see [2]
for context).
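In sketch form (readLooseObjectOnce and isStaleFileHandle are
illustrative names, not JGit's internal ObjectDirectory API):

  private static final int MAX_STALE_RETRIES = 5;

  ObjectLoader openLooseObject(AnyObjectId id) throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        // Re-opening resolves the pathname again and obtains a fresh
        // file handle, as the NFS FAQ recommends.
        return readLooseObjectOnce(id);
      } catch (FileNotFoundException gone) {
        throw gone; // object absent: continue in packfiles/alternates
      } catch (IOException e) {
        if (attempt >= MAX_STALE_RETRIES || !isStaleFileHandle(e))
          throw e;
        // ESTALE: loop and retry with a freshly opened stream.
      }
    }
  }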
BatchRefUpdate: Skip saving conflicting ref names and prefixes in memory
Rather than getting all ref names and prefixes and saving them
in memory to perform the check for conflicting names, rely on
RefDirectory.isNameConflicting, as it is no longer an expensive
call after being optimized in Ie994fc.
The old optimization of saving ref names and prefixes in memory
was targeted at making clones faster. With this change, clone
performance was unaffected in tests with repositories containing
many (~500k) refs.
Here are a few recorded elapsed times for creating 10 branches
using BatchRefUpdate on NFS-based repositories with varying
loose ref counts. As seen here, this change improves
BatchRefUpdate performance from O(n^2) to O(1).
loose_refs_count   with_change   without_change
              50        241 ms           310 ms
             300        263 ms          1502 ms
              1k        181 ms          4241 ms
              2k        204 ms          6440 ms
              9k        158 ms         25930 ms
             20k        154 ms         60443 ms
             50k        171 ms        135199 ms
            110k        157 ms        329450 ms
            160k        209 ms        396328 ms
This update improves Gerrit NoteDb migration performance, as the
migration uses BatchRefUpdate to write change meta refs, similar to
the test performed above.
Update tests to record the number of events fired post-setup and only
assert on events fired during BatchRefUpdate.execute. For tests which
use writeLooseRef to set up refs, create new tests that assert on the
number of RefsChangedEvent(s) rather than updating the existing ones
to call RefDirectory.exactRef, as that changes the code path.
Avoid having to scan over ALL loose refs to determine whether the
name is nested within, or is a container of, an existing reference.
This can get really expensive when there are many loose refs.
Instead, use exactRef and getRefsByPrefix, which scan based on a
prefix.
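Conceptually, the check now looks like this sketch (using JGit's
public RefDatabase API; the actual RefDirectory code differs in
detail):

  // Does "name" conflict with an existing ref above or below it?
  boolean isNameConflicting(RefDatabase refDb, String name)
      throws IOException {
    // "refs/heads/foo" conflicts with an existing "refs/heads/foo/bar".
    if (!refDb.getRefsByPrefix(name + "/").isEmpty())
      return true;
    // ...and with any existing ancestor ref, e.g. "refs/heads/foo"
    // when creating "refs/heads/foo/bar".
    for (int s = name.lastIndexOf('/'); s > 0;
        s = name.lastIndexOf('/', s - 1)) {
      if (refDb.exactRef(name.substring(0, s)) != null)
        return true;
    }
    return false;
  }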
With a simple shell script (like the one below) using the jgit client
to create 1k refs in a new repository on NFS, this change brings the
time down from 12 minutes to 7 minutes.
for ref in $(seq 1 1000); do
jgit branch "$ref"
done
Here are a few recorded elapsed times for creating a new branch on
NFS-based repositories with varying loose ref counts. As seen here,
this change improves the name-conflict check from O(n^2) to O(1).
loose_refs_count   with_change   without_change
              50         44 ms           164 ms
             300         45 ms          1193 ms
              1k         38 ms          2610 ms
              2k         44 ms          6003 ms
              9k         46 ms         27860 ms
             20k         45 ms         48591 ms
             50k         51 ms        135471 ms
            110k         43 ms        294252 ms
            160k         52 ms        430976 ms
Matthias Sohn [Mon, 10 May 2021 22:56:57 +0000 (00:56 +0200)]
Merge branch 'stable-5.9' into stable-5.10
* stable-5.9:
LockFile: create OutputStream only when needed
Remove ReftableNumbersNotIncreasingException
Fix stamping to produce stable file timestamps
Thomas Wolf [Tue, 4 May 2021 21:48:56 +0000 (23:48 +0200)]
LockFile: create OutputStream only when needed
Don't create the stream eagerly in lock(); that may cause JGit to
exceed OS or JVM limits on open file descriptors if many locks need
to be created, for instance when creating many refs. Instead create
the output stream only when one really needs to write something.
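The pattern, sketched (field and file names are illustrative):

  private OutputStream os; // no longer opened in lock()

  OutputStream getOutputStream() throws IOException {
    if (os == null) {
      // Open the descriptor only on first write, so the number of open
      // files is bounded by the locks actually written to, not held.
      os = Files.newOutputStream(lockFile.toPath());
    }
    return os;
  }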
Bug: 573328
Change-Id: If9441ed40494d46f594a896d34a5c4f56f91ebf4
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Thomas Wolf [Fri, 19 Mar 2021 08:35:34 +0000 (09:35 +0100)]
sshd: try all configured signature algorithms for a key
For RSA keys, there may be several configured signature algorithms:
rsa-sha2-512, rsa-sha2-256, and ssh-rsa. Upstream sshd has bug
SSHD-1105 [1] and unconditionally uses only the first configured
algorithm. With the default order, this means that it cannot connect
to a server that knows only ssh-rsa, like for instance Apache MINA
sshd servers older than 2.6.0.
This affects, for instance, bitbucket.org and AWS CodeCommit.
Re-introduce our own pubkey authenticator that fixes this.
Note that a server may impose a penalty (back-off delay) for subsequent
authentication attempts with signature algorithms unknown to the server.
In such cases, users can re-order the signature algorithm list via the
PubkeyAcceptedAlgorithms (formerly PubkeyAcceptedKeyTypes) ssh config.
Apache MINA sshd 2.6.0 appears to use only the first appropriate
public key signature algorithm for a particular key. See [1]. For
RSA keys, that is rsa-sha2-512. This breaks authentication at servers
that only know the older (and deprecated) ssh-rsa algorithm.
With PubkeyAcceptedAlgorithms, users can re-order algorithms in
the ssh config file per host, if needed. Setting
PubkeyAcceptedAlgorithms ^ssh-rsa
will put "ssh-rsa" at the front of the list of algorithms, and then
authentication at such servers with RSA keys works again.
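For example, as a per-host entry in ~/.ssh/config (host name
illustrative):

  Host legacy.example.com
      PubkeyAcceptedAlgorithms ^ssh-rsa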
Matthias Sohn [Tue, 9 Mar 2021 17:00:55 +0000 (18:00 +0100)]
Merge branch 'master' into stable-5.11
* master:
Manually set status of jmh dependencies
Update DEPENDENCIES report for 5.11.0
Add dependency to dash-licenses
PackFile: Add id + ext based constructors
GC: deleteOrphans: Use PackFile
PackExt: Convert to Enum
Restore preserved packs during missing object seeks
Pack: Replace extensions bitset with bitmapIdx PackFile
PackDirectory: Use PackFile to ensure we find preserved packs
GC: Use PackFile to de-dup logic
Create a PackFile class for Pack filenames
Matthias Sohn [Sun, 7 Mar 2021 17:41:05 +0000 (18:41 +0100)]
Manually set status of jmh dependencies
The following jmh dependencies were approved as works-with:
- jmh-core/1.21 has GPL-2.0 license and was approved in CQ20517
- jmh-generator-annprocess/1.21 has GPL-2.0 license and was approved in
CQ20518
Nasser Grainawi [Thu, 4 Mar 2021 21:14:43 +0000 (14:14 -0700)]
PackFile: Add id + ext based constructors
Add new constructors to PackFile to improve a common use case where
callers know the directory, id, and extension, but previously needed to
construct a valid file name (with prefix, '.', etc.) to create a
PackFile. Most callers can use the variant that takes the id as an
ObjectId, but a String id variant is provided too.
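A hedged usage sketch (signatures assumed from the description above;
objectsDir and id are presumed to be in scope):

  File packDir = new File(objectsDir, "pack"); // illustrative location
  PackFile pack = new PackFile(packDir, id, PackExt.PACK); // ObjectId id
  PackFile idx = new PackFile(packDir, id.name(), PackExt.INDEX); // String id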
Martin Fick [Tue, 15 Dec 2020 21:20:44 +0000 (14:20 -0700)]
Restore preserved packs during missing object seeks
Provide a recovery path for objects being referenced during the pack
pruning race. Due to the pack pruning race, it is possible for objects
to become referenced after a pack has been deemed safe to prune, but
before it actually gets pruned. If this happened previously, the newly
referenced objects would be missing and potentially result in a
corrupted ref.
Add the ability to recover from this situation when an object is missing
but happens to still be available in a pack in the "preserved"
directory. This is likely only useful when used in conjunction with the
--preserve-old-packs GC option, which prunes packs by hard-linking to
the preserved directory. If an object is missing and found in a pack in
the preserved directory, immediately recover that pack and its
associated files (idx, bitmaps...) by moving them back to the original
pack directory, and then retry the operation that would have failed due
to the missing object. This retry can now succeed and the repository
may avoid corruption. This approach should drastically reduce the
chance of a corrupt repository during pack pruning at very little extra
cost. This extra cost should only be incurred when objects are missing
and a failure would normally occur.
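In sketch form, with the file layout assumed from this change and the
related PackDirectory change ("preserved" subdirectory, "old-" prefixed
extensions); the real code operates on complete PackFile sets:

  Path preserved = packDir.resolve("preserved");
  Path oldPack = preserved.resolve("pack-" + id.name() + ".old-pack");
  if (Files.exists(oldPack)) {
    // Move the pack and its companions (.old-idx, .old-bitmap, ...)
    // back into objects/pack, then retry the failed object lookup.
    Files.move(oldPack, packDir.resolve("pack-" + id.name() + ".pack"),
        StandardCopyOption.ATOMIC_MOVE);
  }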
Change-Id: I2a704e3276b88cc892159d9bfe2455c6eec64252
Signed-off-by: Martin Fick <quic_mfick@quicinc.com>
Signed-off-by: Nasser Grainawi <quic_nasserg@quicinc.com>
Nasser Grainawi [Thu, 11 Feb 2021 06:26:17 +0000 (23:26 -0700)]
Pack: Replace extensions bitset with bitmapIdx PackFile
The only extension that was ever consulted from the bitset was the
bitmap index. We can simplify the Pack code, as well as the code of
all the callers, if we focus on just that usage.
Nasser Grainawi [Thu, 11 Feb 2021 06:33:43 +0000 (23:33 -0700)]
PackDirectory: Use PackFile to ensure we find preserved packs
Update scanPacksImpl and listPackDirectory (renamed to
getPackFilesByExtById) to use the new PackFile functionality to
validate file names and complete pack file sets (.pack, .idx, etc).
Most importantly, this allows a later change to rely on scanPacks() to
complete a packList that contains packs with the 'old-' prefix in their
extension.
This also eliminates duplication of logic for how to identify and
construct pack files.
Nasser Grainawi [Thu, 11 Feb 2021 05:51:05 +0000 (22:51 -0700)]
Create a PackFile class for Pack filenames
The PackFile class is intended to be a central place to do all
common pack filename manipulation and parsing to help reduce repeated
code and bugs. Use the PackFile class in the Pack class and in many
tests to ensure it works well in a variety of situations. Later changes
will expand use of PackFiles to even more areas.
Change-Id: I921b30f865759162bae46ddd2c6d669de06add4a
Signed-off-by: Nasser Grainawi <quic_nasserg@quicinc.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Thomas Wolf [Mon, 1 Mar 2021 07:30:09 +0000 (08:30 +0100)]
HTTP: cookie file stores expiration in seconds
A cookie file stores the expiration in seconds since the Unix epoch,
not in milliseconds. Correct the reading and writing of cookie files,
with a backwards-compatibility hack to read files that contain
millisecond timestamps.
Add a test, and fix tests not to rely on the actual current time so
that they will also run successfully after 2030-01-01 noon.
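One way to express the compatibility read (the cutoff below is an
illustrative assumption, not necessarily the heuristic used in the
fix):

  long value = Long.parseLong(expiryField);
  Instant expiry = value > LEGACY_MS_CUTOFF // far-future as seconds
      ? Instant.ofEpochMilli(value) // legacy millisecond timestamp
      : Instant.ofEpochSecond(value); // spec-conformant seconds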
Bug: 571574
Change-Id: If3ba68391e574520701cdee119544eedc42a1ff2
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Thomas Wolf [Fri, 29 Jan 2021 22:03:44 +0000 (23:03 +0100)]
LFS: handle invalid pointers better
Make sure that SmudgeFilter calls LfsPointer.parseLfsPointer() with
a stream that supports mark/reset, and make sure that parseLfsPointer()
resets the stream properly if it decides that the stream content is not
an LFS pointer.
Add a test.
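The calling pattern, sketched (the buffer limit is illustrative, and
parseLfsPointer() is assumed to return null for non-pointer content):

  InputStream in = raw.markSupported() ? raw : new BufferedInputStream(raw);
  in.mark(8192); // read-ahead limit for the probe
  LfsPointer ptr = LfsPointer.parseLfsPointer(in);
  if (ptr == null) {
    in.reset(); // not an LFS pointer: smudge passes the content through
  }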
Bug: 570758
Change-Id: I2593d67cff31b2dfdfaaa48e437331f0ed877915
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
In a distributed setting, one can have multiple datacenters use
reftables for serving, while the ground truth for the Ref database is
administered centrally. In this setting, replication delays combined
with compaction can cause update-index ranges to overlap.
Such a setting is used at Google, and the JGit code already handles
this correctly (modulo a bugfix applied in change I8f8215b99a).
Remove the restriction that was applied in FileReftableDatabase.
Matthias Sohn [Sun, 27 Dec 2020 01:11:47 +0000 (02:11 +0100)]
Fix errorprone configuration for maven-compiler-plugin with javac
See https://errorprone.info/docs/installation.
Add a new profile, jdk8, to enable running errorprone with javac on
Java 8 and Java 11. Remove the errorprone configuration from the
benchmark module; we didn't find a way to make it work there, and this
module does not contain any production code.
Change-Id: I6a84195af05e6cea9e7c04ad5cd4c79742e80cb3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Matthias Sohn [Wed, 24 Feb 2021 13:54:52 +0000 (14:54 +0100)]
Merge branch 'master' into stable-5.11
* master: (35 commits)
[releng] japicmp: update last release version
IgnoreNode: include path to file for invalid .gitignore patterns
FastIgnoreRule: include bad pattern in log message
init: add config option to set default for the initial branch name
init: allow specifying the initial branch name for the new repository
Fail clone if initial branch doesn't exist in remote repository
GPG: fix reading unprotected old-format secret keys
Update Orbit to S20210216215844
Add missing bazel dependency for o.e.j.gpg.bc.test
GPG: handle extended private key format
dfs: handle short copies
[GPG] Provide a factory for the BouncyCastleGpgSigner
Fix boxing warnings
GPG: compute the keygrip to find a secret key
GPG signature verification via BouncyCastle
Post commit hook failure should not cause commit failure
Allow to define additional Hook classes outside JGit
GitHook: use default charset for output and error streams
GitHook: use generic OutputStream instead of PrintStream
Update jetty to 9.4.36.v20210114
...
Thomas Wolf [Tue, 23 Feb 2021 17:10:08 +0000 (18:10 +0100)]
IgnoreNode: include path to file for invalid .gitignore patterns
Include the full file path of the .gitignore file and the line number
of the invalid pattern. Also include the pattern itself.
.gitignore files inside the repository are reported with their
repository-relative path; files outside (from git config
core.excludesFile or .git/info/exclude) are reported with their
full absolute path.
Bug: 571143
Change-Id: Ibe5969679bc22cff923c62e3ab9801d90d6d06d1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Thomas Wolf [Tue, 23 Feb 2021 12:11:56 +0000 (13:11 +0100)]
FastIgnoreRule: include bad pattern in log message
When a .gitignore pattern cannot be parsed include the pattern in the
log message. Just reporting "not closed bracket" isn't helpful if the
user doesn't know in which pattern the problem occurred.
Even better would be to include the full path of the .gitignore file
that contained the offending pattern. This is not implemented in this
change; it may need new API and needs more thought.
Bug: 571143
Change-Id: Id5b16d9cf550544ba3ad409a02041946fa8516ab
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>