Matthias Sohn [Wed, 22 Feb 2023 00:28:27 +0000 (01:28 +0100)]
Merge branch 'stable-6.3' into stable-6.4
* stable-6.3:
Use Java 11 ProcessHandle to get pid of the current process
Acquire file lock "gc.pid" before running gc
Silence API errors introduced by 9424052f
Matthias Sohn [Wed, 22 Feb 2023 00:27:50 +0000 (01:27 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
Use Java 11 ProcessHandle to get pid of the current process
Acquire file lock "gc.pid" before running gc
Silence API errors introduced by 9424052f
Matthias Sohn [Wed, 22 Feb 2023 00:27:16 +0000 (01:27 +0100)]
Merge branch 'stable-6.1' into stable-6.2
* stable-6.1:
Use Java 11 ProcessHandle to get pid of the current process
Acquire file lock "gc.pid" before running gc
Silence API errors introduced by 9424052f
Matthias Sohn [Wed, 22 Feb 2023 00:26:36 +0000 (01:26 +0100)]
Merge branch 'stable-6.0' into stable-6.1
* stable-6.0:
Use Java 11 ProcessHandle to get pid of the current process
Acquire file lock "gc.pid" before running gc
Silence API errors introduced by 9424052f
Matthias Sohn [Fri, 10 Feb 2023 22:39:20 +0000 (23:39 +0100)]
Acquire file lock "gc.pid" before running gc
Git guards gc by locking a lock file "gc.pid" before starting execution.
The lock file contains the pid and hostname of the process holding the
lock. Git tries to kill the process holding the lock if the lock file
wasn't modified in the last 12 hours and the process was started on the same host.
Teach JGit to acquire this lock before running gc but skip execution if
another process already holds the lock. Killing the other process could
be undesired if it's a long running application.
If the lock file wasn't modified in the last 12 hours, try to lock it and
run gc if locking succeeds.
Register a shutdown hook for the lock file to ensure it is cleaned up if
the process is gracefully killed.
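For illustration only, a minimal sketch of the locking scheme described above using plain java.nio; the helper class and method names are hypothetical, the check-then-write is not atomic (JGit's real implementation uses its own lock-file machinery), and only the file name, pid/hostname content and 12-hour threshold come from this message:

  import java.io.IOException;
  import java.net.InetAddress;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.time.Duration;
  import java.time.Instant;

  class GcPidLock {
      private static final Duration STALE_AFTER = Duration.ofHours(12);

      // Returns true if we acquired "gc.pid" and may run gc, false if another
      // process holds a fresh lock and gc should be skipped.
      static boolean tryAcquire(Path gitDir) throws IOException {
          Path lock = gitDir.resolve("gc.pid");
          if (Files.exists(lock)) {
              Instant lastModified = Files.getLastModifiedTime(lock).toInstant();
              if (lastModified.isAfter(Instant.now().minus(STALE_AFTER))) {
                  return false; // fresh lock held by someone else: skip gc
              }
              // stale lock (older than 12 hours): try to take it over
          }
          String content = ProcessHandle.current().pid() + " "
                  + InetAddress.getLocalHost().getHostName() + "\n";
          Files.write(lock, content.getBytes(StandardCharsets.UTF_8));
          // shutdown hook removes the lock if the process is killed gracefully
          Runtime.getRuntime().addShutdownHook(new Thread(() -> {
              try {
                  Files.deleteIfExists(lock);
              } catch (IOException ignored) {
                  // best-effort cleanup
              }
          }));
          return true;
      }
  }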
Matthias Sohn [Mon, 20 Feb 2023 20:01:38 +0000 (21:01 +0100)]
Merge branch 'stable-6.3' into stable-6.4
* stable-6.3:
Fix getPackedRefs to not throw NoSuchFileException
Add pack options to preserve and prune old pack files
Allow to perform PackedBatchRefUpdate without locking loose refs
Document option "core.sha1Implementation" introduced in 59029aec
Matthias Sohn [Mon, 20 Feb 2023 19:59:14 +0000 (20:59 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
Fix getPackedRefs to not throw NoSuchFileException
Add pack options to preserve and prune old pack files
Allow to perform PackedBatchRefUpdate without locking loose refs
Document option "core.sha1Implementation" introduced in 59029aec
Matthias Sohn [Thu, 16 Feb 2023 15:59:56 +0000 (16:59 +0100)]
Merge branch 'stable-6.1' into stable-6.2
* stable-6.1:
Fix getPackedRefs to not throw NoSuchFileException
Add pack options to preserve and prune old pack files
Allow to perform PackedBatchRefUpdate without locking loose refs
Document option "core.sha1Implementation" introduced in 59029aec
Matthias Sohn [Thu, 16 Feb 2023 15:56:07 +0000 (16:56 +0100)]
Merge branch 'stable-6.0' into stable-6.1
* stable-6.0:
Add pack options to preserve and prune old pack files
Allow to perform PackedBatchRefUpdate without locking loose refs
Document option "core.sha1Implementation" introduced in 59029aec
Matthias Sohn [Thu, 16 Feb 2023 15:42:58 +0000 (16:42 +0100)]
Merge branch 'stable-5.13' into stable-6.0
* stable-5.13:
Add pack options to preserve and prune old pack files
Allow to perform PackedBatchRefUpdate without locking loose refs
Document option "core.sha1Implementation" introduced in 59029aec
Fix getPackedRefs to not throw NoSuchFileException
Since Files.newInputStream is from the java.nio package, it throws
java.nio.file.NoSuchFileException. This was missed in the change
I00da88e. Without this change, getPackedRefs fails with
NoSuchFileException when there is no packed-refs file in a project.
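As a hedged illustration (not JGit's actual code): java.nio.file.NoSuchFileException does not extend java.io.FileNotFoundException, so a reader built on Files.newInputStream has to catch it explicitly to treat a missing packed-refs file as "no packed refs":

  import java.io.BufferedReader;
  import java.io.FileNotFoundException;
  import java.io.IOException;
  import java.io.InputStream;
  import java.io.InputStreamReader;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;
  import java.nio.file.NoSuchFileException;
  import java.nio.file.Path;
  import java.util.Collections;
  import java.util.List;
  import java.util.stream.Collectors;

  class PackedRefsExample {
      // Hypothetical reader: a missing packed-refs file means "no packed refs"
      // and must not surface as NoSuchFileException to the caller.
      static List<String> readPackedRefs(Path packedRefs) throws IOException {
          try (InputStream in = Files.newInputStream(packedRefs);
                  BufferedReader r = new BufferedReader(
                          new InputStreamReader(in, StandardCharsets.UTF_8))) {
              return r.lines().collect(Collectors.toList());
          } catch (FileNotFoundException | NoSuchFileException e) {
              return Collections.emptyList();
          }
      }
  }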
Saša Živkov [Fri, 21 Oct 2022 14:32:03 +0000 (16:32 +0200)]
Allow to perform PackedBatchRefUpdate without locking loose refs
Add another newBatchUpdate method to RefDirectory that lets callers
control whether the created PackedBatchRefUpdate will lock the loose refs
or not.
This can be useful when a program has exclusive access to a Git
repository and we know that locking loose refs is unnecessary and only a
performance loss.
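A hedged usage sketch; the boolean parameter of the new overload is an assumption based on this description (its exact signature isn't shown here), while RefDirectory, BatchRefUpdate and ReceiveCommand are existing JGit API:

  import org.eclipse.jgit.internal.storage.file.RefDirectory;
  import org.eclipse.jgit.lib.BatchRefUpdate;
  import org.eclipse.jgit.lib.NullProgressMonitor;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevWalk;
  import org.eclipse.jgit.transport.ReceiveCommand;

  class ExclusiveBatchRefUpdateExample {
      static void update(Repository repo, ObjectId oldId, ObjectId newId)
              throws Exception {
          RefDirectory refDir = (RefDirectory) repo.getRefDatabase();
          // Assumed overload: passing false skips locking loose refs, which is
          // only safe when this process has exclusive access to the repository.
          BatchRefUpdate batch = refDir.newBatchUpdate(false);
          batch.addCommand(new ReceiveCommand(oldId, newId, "refs/heads/master"));
          try (RevWalk rw = new RevWalk(repo)) {
              batch.execute(rw, NullProgressMonitor.INSTANCE);
          }
      }
  }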
Matthias Sohn [Wed, 1 Feb 2023 00:12:06 +0000 (01:12 +0100)]
Merge branch 'stable-6.3' into stable-6.4
* stable-6.3:
Shortcut during git fetch for avoiding looping through all local refs
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
Silence API warnings introduced by I466dcde6
Allow the exclusions of refs prefixes from bitmap
PackWriterBitmapPreparer: do not include annotated tags in bitmap
BatchingProgressMonitor: avoid int overflow when computing percentage
Speedup GC listing objects referenced from reflogs
FileSnapshotTest: Add more MISSING_FILE coverage
Matthias Sohn [Tue, 31 Jan 2023 23:59:32 +0000 (00:59 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
Shortcut during git fetch for avoiding looping through all local refs
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
Silence API warnings introduced by I466dcde6
Allow the exclusions of refs prefixes from bitmap
PackWriterBitmapPreparer: do not include annotated tags in bitmap
BatchingProgressMonitor: avoid int overflow when computing percentage
Speedup GC listing objects referenced from reflogs
FileSnapshotTest: Add more MISSING_FILE coverage
Matthias Sohn [Tue, 31 Jan 2023 23:44:41 +0000 (00:44 +0100)]
Merge branch 'stable-6.1' into stable-6.2
* stable-6.1:
Shortcut during git fetch for avoiding looping through all local refs
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
Silence API warnings introduced by I466dcde6
Allow the exclusions of refs prefixes from bitmap
PackWriterBitmapPreparer: do not include annotated tags in bitmap
BatchingProgressMonitor: avoid int overflow when computing percentage
Speedup GC listing objects referenced from reflogs
FileSnapshotTest: Add more MISSING_FILE coverage
Matthias Sohn [Tue, 31 Jan 2023 23:38:52 +0000 (00:38 +0100)]
Merge branch 'stable-6.0' into stable-6.1
* stable-6.0:
Shortcut during git fetch for avoiding looping through all local refs
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
Silence API warnings introduced by I466dcde6
Allow the exclusions of refs prefixes from bitmap
PackWriterBitmapPreparer: do not include annotated tags in bitmap
BatchingProgressMonitor: avoid int overflow when computing percentage
Speedup GC listing objects referenced from reflogs
FileSnapshotTest: Add more MISSING_FILE coverage
Matthias Sohn [Tue, 31 Jan 2023 23:30:52 +0000 (00:30 +0100)]
Merge branch 'stable-5.13' into stable-6.0
* stable-5.13:
Shortcut during git fetch for avoiding looping through all local refs
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
Silence API warnings introduced by I466dcde6
Allow the exclusions of refs prefixes from bitmap
PackWriterBitmapPreparer: do not include annotated tags in bitmap
BatchingProgressMonitor: avoid int overflow when computing percentage
Speedup GC listing objects referenced from reflogs
FileSnapshotTest: Add more MISSING_FILE coverage
Luca Milanesio [Wed, 18 May 2022 12:31:30 +0000 (13:31 +0100)]
Shortcut during git fetch for avoiding looping through all local refs
The FetchProcess needs to verify that all the refs received point
to objects that are reachable from the local refs, which could be
very expensive but is needed to avoid missing objects exceptions
because of broken chains.
When the local repository has a lot of refs (e.g. millions) and the
client is fetching a non-commit object (e.g. refs/sequences/changes in
Gerrit) the reachability check on all local refs can be very expensive
compared to the time to fetch the remote ref.
Example for a 2M refs repository:
- fetching a single non-commit object: 50ms
- checking the reachability of local refs: 30s
A ref pointing to a non-commit object doesn't have any parent or
successor objects, hence it never needs a reachability check.
Skipping askForIsComplete() altogether saves the 30s spent in an
unnecessary phase.
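A hedged, stand-alone illustration of the idea (not the actual FetchProcess code): the completeness check only matters when at least one fetched object is, or peels to, a commit.

  import java.io.IOException;
  import java.util.Collection;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.revwalk.RevObject;
  import org.eclipse.jgit.revwalk.RevWalk;

  class FetchShortcutExample {
      // Hypothetical helper: true if at least one fetched object is (or peels
      // to) a commit, i.e. the reachability check cannot be skipped.
      static boolean needsCompletenessCheck(Repository repo,
              Collection<ObjectId> fetched) throws IOException {
          try (RevWalk rw = new RevWalk(repo)) {
              for (ObjectId id : fetched) {
                  RevObject obj = rw.peel(rw.parseAny(id));
                  if (obj instanceof RevCommit) {
                      return true;
                  }
              }
          }
          return false; // e.g. a single blob like refs/sequences/changes
      }
  }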
Matthias Sohn [Tue, 4 Oct 2022 13:42:25 +0000 (15:42 +0200)]
FetchCommand: fix fetchSubmodules to work on a Ref to a blob
FetchCommand#fetchSubmodules assumed that FETCH_HEAD can always be
parsed as a tree. This isn't true if it refers to a Ref referring to a
BLOB. This is e.g. used in Gerrit for Refs like refs/sequences/changes
which are used to implement sequences stored in git.
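An illustrative guard of the kind this fix implies (hedged, not the actual FetchCommand code): parse FETCH_HEAD first and only walk its tree when it actually is a commit.

  import java.io.IOException;
  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.revwalk.RevObject;
  import org.eclipse.jgit.revwalk.RevTree;
  import org.eclipse.jgit.revwalk.RevWalk;

  class FetchHeadTreeExample {
      // Returns the tree behind FETCH_HEAD, or null when FETCH_HEAD does not
      // point to a commit (e.g. a blob such as Gerrit's refs/sequences/changes).
      static RevTree treeOfFetchHead(Repository repo) throws IOException {
          ObjectId fetchHead = repo.resolve(Constants.FETCH_HEAD);
          if (fetchHead == null) {
              return null;
          }
          try (RevWalk walk = new RevWalk(repo)) {
              RevObject obj = walk.parseAny(fetchHead);
              return obj instanceof RevCommit ? ((RevCommit) obj).getTree() : null;
          }
      }
  }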
Luca Milanesio [Tue, 20 Dec 2022 21:50:19 +0000 (21:50 +0000)]
Allow the exclusions of refs prefixes from bitmap
When running a GC.repack() against a repository with over one
thousand refs/heads and tens of millions of ObjectIds,
the calculation of all bitmaps associated with all the refs
would result in an unreasonably big file that would take up to
several hours to compute.
Test scenario: repo with 2500 heads / 10M objects, Intel Xeon E5-2680 2.5GHz
Before this change: 20 mins
After this change and 2300 heads excluded: 10 mins (90s for bitmap)
Having such a large bitmap file is also slow in runtime
processing and has negligible or even negative benefits, because
the time lost in reading and decompressing the bitmap in memory
would not be compensated by the time saved by using it.
It is key to preserve the bitmaps for those refs that are mostly
used in clone/fetch and to give the ability to exclude some refs
prefixes that are known to be less frequently accessed, even
though they may actually be actively written.
Example: Gerrit sandbox branches may even be actively
used and selected automatically because their commits are very
recent; however, they may bloat the bitmap, making it ineffective.
A mono-repo with tens of thousands of developers may have
a relatively small number of active branches where the
CI/CD jobs are continuously fetching/cloning the code. However,
because Gerrit allows the use of sandbox branches, the
total number of refs/heads may reach tens or even hundreds of
thousands.
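A hedged sketch of wiring the exclusion up from code; the PackConfig setter name below is assumed from this description, while GC#setPackConfig and GC#gc are existing JGit API:

  import org.eclipse.jgit.internal.storage.file.FileRepository;
  import org.eclipse.jgit.internal.storage.file.GC;
  import org.eclipse.jgit.storage.pack.PackConfig;

  class BitmapExclusionExample {
      static void repackWithoutSandboxBitmaps(FileRepository repo) throws Exception {
          PackConfig packConfig = new PackConfig(repo);
          // Assumed setter added by this change: refs under these prefixes are
          // left out of the bitmap even though they may be actively written.
          packConfig.setBitmapExcludedRefsPrefixes(
                  new String[] { "refs/heads/sandbox/" });
          GC gc = new GC(repo);
          gc.setPackConfig(packConfig);
          gc.gc();
      }
  }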
Luca Milanesio [Wed, 28 Dec 2022 01:09:52 +0000 (01:09 +0000)]
PackWriterBitmapPreparer: do not include annotated tags in bitmap
The annotated tags should be excluded from the bitmap associated
with the heads-only packfile. However, this was not happening
because the exclusion check used the peeled object instead
of the objectId that should be excluded from the bitmap.
Consider a commit graph where commit0 and commit1 are reachable from
the heads and commit2 is reachable only from annotated-tag1. Before this
change all three commits were included (3 bitmaps), which is incorrect,
because commits reachable only from annotated tags should not be included.
The heads-only bitmap should include only commit0 and commit1,
but because PackWriterBitmapPreparer checked the peeled
pointer of tag1 for exclusion (commit2), which was not found in
the list of tags to exclude (annotated-tag1), commit2 was
included even though it isn't reachable from the heads.
Add an additional check for exclusion of the original objectId
to allow excluding annotated tags and the commits they point to.
Add a specific test with an annotated tag to make sure this
use case is also covered.
Example repository benchmark for measuring the improvement:
# refs: 400k (2k heads, 88k tags, 310k changes)
# objects: 11M (88k of them are annotated tags)
# packfiles: 2.7G
Before this change:
GC time: 5h
clone --bare time: 7 mins
After this change:
GC time: 20 mins
clone --bare time: 3 mins
Matthias Sohn [Wed, 18 Jan 2023 16:39:19 +0000 (17:39 +0100)]
Speedup GC listing objects referenced from reflogs
GC needs to get a ReflogReader for all existing refs to list all objects
referenced from reflogs. The existing Repository#getReflogReader method
accepts the ref name and then resolves the Ref to create a ReflogReader.
Calling that for a huge number of Refs one by one is very slow: GC
already gets all Refs in bulk, but then each Ref is resolved again
when getReflogReader is called for it.
Fix this by adding another getReflogReader method to Repository which
accepts a Ref directly.
This speeds up running JGit gc on a mirror clone of the Gerrit
repository from 15:36 min to 1:08 min. The repository used in this test
had 45k refs, 275k commits and 1.2m git objects.
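A hedged sketch of the resulting bulk pattern; getRefs, ReflogReader and ReflogEntry are existing JGit API, and getReflogReader(Ref) is the overload added here:

  import java.io.IOException;
  import java.util.HashSet;
  import java.util.Set;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.Ref;
  import org.eclipse.jgit.lib.ReflogEntry;
  import org.eclipse.jgit.lib.ReflogReader;
  import org.eclipse.jgit.lib.Repository;

  class ReflogObjectsExample {
      // Resolve all refs once in bulk, then hand each Ref straight to the new
      // getReflogReader(Ref) instead of resolving it again by name.
      static Set<ObjectId> objectsFromReflogs(Repository repo) throws IOException {
          Set<ObjectId> result = new HashSet<>();
          for (Ref ref : repo.getRefDatabase().getRefs()) {
              ReflogReader reader = repo.getReflogReader(ref);
              if (reader == null) {
                  continue; // no reflog for this ref
              }
              for (ReflogEntry entry : reader.getReverseEntries()) {
                  result.add(entry.getOldId());
                  result.add(entry.getNewId());
              }
          }
          return result;
      }
  }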
Nasser Grainawi [Tue, 10 Jan 2023 23:15:42 +0000 (16:15 -0700)]
Cache trustFolderStat/trustPackedRefsStat value per-instance
Instead of re-reading the config every time the methods using these
values were called, cache the config values at the time of instance
construction. Caching the values improves performance for each of the
method calls. These configs are set based on the filesystem storing the
repository and are unlikely to change while an application is running.
Refresh 'objects' dir and retry if a loose object is not found
A new loose object may not be immediately visible on an NFS
client if it was created on another client. Refreshing the
'objects' dir and trying again can help work around the NFS
behavior.
Here's an E2E problem that this change can help fix. Consider
a Gerrit multi-primary setup with repositories based on NFS.
Add a new patch-set to an existing change and then immediately
fetch the new patch-set of that change. If the fetch is handled
by a Gerrit primary different from the one which created the
patch-set, then we sometimes run into a MissingObjectException
that causes the fetch to fail.
Currently, we always read the packed-refs file when 'trustFolderStat'
is false. Introduce a new config 'trustPackedRefsStat' which takes
precedence over 'trustFolderStat' when reading packed refs. Possible
values for this new config are:
* always: Trust packed-refs file attributes
* after_open: Same as 'always', but refresh the file attributes of
packed-refs before trusting it
* never: Always read the packed-refs file
* unset: Fallback to 'trustFolderStat' to determine if the file
attributes of packed-refs can be trusted
Folks whose repositories are on NFS and have traditionally been
setting 'trustFolderStat=false' can now get some performance improvement
with 'trustPackedRefsStat=after_open' as it refreshes the file
attributes of packed-refs (at least on some NFS clients) before
considering it.
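For illustration, the option can be set from code through JGit's existing config API; placing it in the [core] section next to trustFolderStat is an assumption based on this message:

  import java.io.IOException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.lib.StoredConfig;

  class TrustPackedRefsStatExample {
      static void refreshBeforeTrusting(Repository repo) throws IOException {
          StoredConfig config = repo.getConfig();
          // "after_open" refreshes the packed-refs file attributes before
          // trusting them, which helps NFS clients with stale attribute caches.
          config.setString("core", null, "trustPackedRefsStat", "after_open");
          config.save();
      }
  }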
For example, consider a repository on NFS with ~500k packed-refs. Here
are some stats which illustrate the improvement with this new config
when reading packed refs on NFS:
Dmitrii Filippov [Mon, 27 Jun 2022 18:03:53 +0000 (20:03 +0200)]
Fix crashes on rare combination of file names
The NameConflictTreeWalk class is used in merge for iterating over
entries in commits. The class uses a separate iterator for each
commit's tree. In rare cases it can incorrectly report the same entry
twice. As a result, duplicated entries are added to the merge result
and later JGit throws an exception when it tries to process the merge
result.
The problem appears only when there is a directory-file conflict for
the last item in trees. Example from the bug:
Commit 1:
* subtree - file
* subtree-0 - file
Commit 2:
* subtree - directory
* subtree-0 - file
Here the names are ordered like this:
"subtree" file <"subtree-0" file < "subtree" directory.
The NameConflictTreeWalk handles similar cases correctly if there are
other files after subtree... in the commits - this is processed in the
AbstractTreeIterator.min function. The existing code has a special
optimization for the case when all trees point to the same
entry name - it skips additional checks. However, this optimization
incorrectly skips the checks if one of the trees has reached its end.
The fix handles the situation where some trees have reached their end
while others still point to an entry.
Matthias Sohn [Wed, 23 Nov 2022 15:54:34 +0000 (16:54 +0100)]
Merge branch 'master' into stable-6.4
* master:
[pgm] Add options --name-only, --name-status to diff, log, show
Update Orbit to R20221123021534 for 2022-12
RBE: Update toolchain with bazel-toolchains 5.1.2 release
SshTestGitServer: ensure UploadPack is closed to fix resource leak
UploadPackTest: ensure UploadPack is closed to fix resource leak
[pgm] Ensure UploadPack is closed to fix resource leak
UploadPackServlet#doPost use try-with-resource to ensure up is closed
Fix warnings in PatchApplierTest
Fix boxing warnings in TransportTest
Silence warnings about unclosed BasePackPushConnection
Fix warning about non-externalized String
Remove unused imports
Suppress non-externalized String warnings
Remove unused API problem filters
Silence API errors
Silence API errors
Silence API warnings
Add 4.26 target platform
Use "releases" repository for 4.25 target platform
Update Apache Mina SSHD to 2.9.2
Update Orbit to S20221118032057
DfsBlockCache: Report IndexEventConsumer metrics for reverse indexes.
DfsStreamKey: Replace ForReverseIndex to separate metrics.
RawText.isBinary(): handle complete buffer correctly
PackExt: Add a reverse index extension.
David Ostrovsky [Fri, 11 Nov 2022 10:18:38 +0000 (11:18 +0100)]
RBE: Update toolchain with bazel-toolchains 5.1.2 release
Due to the platform style migration [1], the RBE toolchain needs to be
updated to use the latest rbe_config_gen from bazel-toolchains (at least
version 5.1.2, so that it contains [2]).
This change makes the RBE build forward compatible so that Bazel can be
updated to the upcoming major 6.0 release.
Matthias Sohn [Sun, 20 Nov 2022 19:24:14 +0000 (20:24 +0100)]
Merge branch 'stable-6.3'
* stable-6.3:
Remove unused imports
Suppress non-externalized String warnings
Remove unused API problem filters
Silence API errors
Silence API errors
Silence API warnings
Matthias Sohn [Sun, 20 Nov 2022 19:22:24 +0000 (20:22 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
Remove unused imports
Suppress non-externalized String warnings
Remove unused API problem filters
Silence API errors
Silence API errors
Silence API warnings
Thomas Wolf [Mon, 25 Apr 2022 05:52:25 +0000 (07:52 +0200)]
Update Apache Mina SSHD to 2.9.2
Release notes for 2.9.2:
https://github.com/apache/mina-sshd/blob/master/docs/changes/2.9.2.md
Change-Id: I7809bcba1d45b76ab9dcc031f86beb2f69da3788
Signed-off-by: Thomas Wolf <twolf@apache.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Matthias Sohn [Fri, 18 Nov 2022 21:09:04 +0000 (22:09 +0100)]
Update Orbit to S20221118032057
update
- com.jcraft.jsch to 0.1.55.v20221112-0806
- net.bytebuddy.byte-buddy to 1.12.18.v20221114-2102
- net.bytebuddy.byte-buddy-agent to 1.12.18.v20221114-2102
- org.apache.commons.codec to 1.14.0.v20221112-0806
- org.apache.httpcomponents.httpclient to 4.5.13.v20221112-0806
- org.slf4j.api to 1.7.30.v20221112-0806
- org.slf4j.binding.simple to 1.7.30.v20221112-0806
Anna Papitto [Fri, 11 Nov 2022 18:44:37 +0000 (10:44 -0800)]
DfsStreamKey: Replace ForReverseIndex to separate metrics.
Keys used for identifying reverse indexes in the DfsBlockCache use a
custom subclass ForReverseIndex because there was no PackExt for them.
This conflates BlockCacheMetrics for reverse indexes with those for
packs, since the key falls back onto 0 when there is no extension.
Replace the custom ForReverseIndex with a DfsStreamKey usage to bring
keys for the new REVERSE_INDEX extension in line with INDEX and BITMAP
and separate reverse index and pack BlockCacheMetrics.
Change-Id: I305e2c16d2a8cb2a824855ea92e0c9a9b188fce5
Signed-off-by: Anna Papitto <annapapitto@google.com>
Matthias Sohn [Wed, 16 Nov 2022 10:29:24 +0000 (11:29 +0100)]
Merge branch 'master' into stable-6.4
* master:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Fix API breakage caused by extracting WorkTreeUpdater
Update Orbit to S20221109014815
Use replace instead of replaceAll in toCleanString
Extract Exception -> HTTP status code mapping for reuse
Don't handle internal git errors as an HTTP error
Fix the path for JSchText.properties
Fix Maven SHA1 for Bazel build
UploadPack: Receive and parse client session-id
TransferConfig: Move reading advertisesid setting into TransferConfig
FirstWant: Parse client session-id if received.
ReceivePack: Receive and parse client session-id.
Ignore IllegalStateException if JVM is already shutting down
Allow to perform PackedBatchRefUpdate without locking loose refs
Matthias Sohn [Wed, 16 Nov 2022 09:15:30 +0000 (10:15 +0100)]
Merge branch 'stable-6.3'
* stable-6.3:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Fix API breakage caused by extracting WorkTreeUpdater
Extract Exception -> HTTP status code mapping for reuse
Don't handle internal git errors as an HTTP error
Ignore IllegalStateException if JVM is already shutting down
Allow to perform PackedBatchRefUpdate without locking loose refs
Matthias Sohn [Wed, 16 Nov 2022 09:14:13 +0000 (10:14 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
Extract Exception -> HTTP status code mapping for reuse
Don't handle internal git errors as an HTTP error
Allow to perform PackedBatchRefUpdate without locking loose refs
Matthias Sohn [Wed, 16 Nov 2022 09:13:20 +0000 (10:13 +0100)]
Merge branch 'stable-6.1' into stable-6.2
* stable-6.1:
Extract Exception -> HTTP status code mapping for reuse
Don't handle internal git errors as an HTTP error
Allow to perform PackedBatchRefUpdate without locking loose refs
Matthias Sohn [Wed, 16 Nov 2022 08:56:08 +0000 (09:56 +0100)]
Merge branch 'stable-6.2' into stable-6.3
* stable-6.2:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Ignore IllegalStateException if JVM is already shutting down
Matthias Sohn [Wed, 16 Nov 2022 08:55:22 +0000 (09:55 +0100)]
Merge branch 'stable-6.1' into stable-6.2
* stable-6.1:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Ignore IllegalStateException if JVM is already shutting down
Matthias Sohn [Wed, 16 Nov 2022 08:54:28 +0000 (09:54 +0100)]
Merge branch 'stable-6.0' into stable-6.1
* stable-6.0:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Ignore IllegalStateException if JVM is already shutting down
Anna Papitto [Thu, 10 Nov 2022 22:45:46 +0000 (14:45 -0800)]
PackExt: Add a reverse index extension.
There is no reverse index PackExt because the reverse index is not currently
written to a file. This prevents fine-grained performance reporting for reverse
indexes, which will be useful when introducing a reverse index file and
observing performance changes.
Add a reverse index extension that matches the one in cgit
(https://github.com/git/git/blob/9bf691b78cf906751e65d65ba0c6ffdcd9a5a12c/Documentation/gitformat-pack.txt#L302)
in preparation for adding a reverse index file while observing
performance before and after.
Change-Id: Iee53f1e01cf645a3c468892fcf97c8444f9a784a
Signed-off-by: Anna Papitto <annapapitto@google.com>
Matthias Sohn [Tue, 15 Nov 2022 23:15:17 +0000 (00:15 +0100)]
Merge branch 'stable-5.13' into stable-6.0
* stable-5.13:
[benchmarks] Remove profiler configuration
Add SHA1 benchmark
[benchmarks] Set version of maven-compiler-plugin to 3.8.1
Fix running JMH benchmarks
Add option to allow using JDK's SHA1 implementation
Ignore IllegalStateException if JVM is already shutting down
Matthias Sohn [Tue, 4 Oct 2022 13:45:01 +0000 (15:45 +0200)]
Fix running JMH benchmarks
Without this I sometimes hit the error:
$ java -jar target/benchmarks.jar
Exception in thread "main" java.lang.RuntimeException: ERROR: Unable to
find the resource: /META-INF/BenchmarkList
at org.openjdk.jmh.runner.AbstractResourceReader.getReaders(AbstractResourceReader.java:98)
at org.openjdk.jmh.runner.BenchmarkList.find(BenchmarkList.java:124)
at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:253)
at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
at org.openjdk.jmh.Main.main(Main.java:71)
Matthias Sohn [Fri, 11 Nov 2022 16:54:06 +0000 (17:54 +0100)]
Add option to allow using JDK's SHA1 implementation
The change If6da9833 moved the computation of SHA1 from the JVM's
JCE to a pure Java implementation with collision detection.
The extra security for public sites comes at the cost of slower
SHA1 processing compared to the native implementation in the JDK.
When JGit is used internally and not exposed to any traffic from
external or untrusted users, the extra cost of the pure Java SHA1
implementation can be avoided, falling back to the previous
native MessageDigest implementation.
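A hedged example of opting back into the JDK digest via the core.sha1Implementation option documented earlier in this log; the value name used below is an assumption, and in practice the option would typically be set in the user or system config rather than per repository:

  import java.io.IOException;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.lib.StoredConfig;

  class Sha1ImplementationExample {
      static void useJdkSha1(Repository repo) throws IOException {
          StoredConfig config = repo.getConfig();
          // Assumed value name: switch SHA1 hashing back to the JDK's
          // MessageDigest instead of the collision-detecting pure Java version.
          // Only appropriate when no untrusted users can push to the repository.
          config.setString("core", null, "sha1Implementation", "jdkNative");
          config.save();
      }
  }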
Matthias Sohn [Mon, 14 Nov 2022 13:34:30 +0000 (14:34 +0100)]
Update Orbit to S20221109014815
and update
- com.sun.jna to 5.12.1.v20221103-2317
- com.sun.jna.platform to 5.12.1.v20221103-2317
- org.bouncycastle.bcpg to 1.72.0.v20221013-1810
- org.bouncycastle.bcpkix to 1.72.0.v20221013-1810
- org.bouncycastle.bcprov to 1.72.0.v20221013-1810
- org.bouncycastle.bcutil to 1.72.0.v20221013-1810
- org.mockito.mockito-core to 4.8.1.v20221103-2317
- org.objenesis to 3.3.0.v20221103-2317
Eryk Szymanski [Tue, 8 Nov 2022 17:22:50 +0000 (18:22 +0100)]
Use replace instead of replaceAll in toCleanString
This is from SonarLint (rule.java:S4348)
Regex patterns should not be created needlessly:
when String::replaceAll is used, the first argument should be a real
regular expression. If it is not, String::replace does exactly
the same thing as String::replaceAll without the performance drawback of
the regex.
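A small illustration of the difference (the literals are made up; toCleanString's actual arguments aren't shown in this log):

  class ReplaceExample {
      static String toCleanString(String uri) {
          // replaceAll compiles a regex Pattern on every call, even though the
          // pattern here is a plain literal; replace does the same substitution
          // without the regex machinery.
          String viaRegex = uri.replaceAll("%20", " ");
          String viaLiteral = uri.replace("%20", " ");
          assert viaRegex.equals(viaLiteral);
          return viaLiteral;
      }
  }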
Sven Selberg [Wed, 9 Nov 2022 17:28:45 +0000 (18:28 +0100)]
Extract Exception -> HTTP status code mapping for reuse
Extract private static method UploadPackServlet#statusCodeForThrowable
to a public static method in the UploadPackErrorHandler interface so
that implementers of this interface can reuse the default mapping.
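A hedged sketch of what a reusable mapping of this shape can look like; the exception-to-status pairs are illustrative assumptions, not JGit's exact default mapping:

  import java.io.IOException;
  import javax.servlet.http.HttpServletResponse;

  class StatusCodeMappingExample {
      // Illustrative only: translate an upload-pack failure into an HTTP status.
      static int statusCodeForThrowable(Throwable error) {
          if (error instanceof IllegalArgumentException) {
              return HttpServletResponse.SC_BAD_REQUEST; // 400: bad request data
          }
          if (error instanceof IOException) {
              return HttpServletResponse.SC_SERVICE_UNAVAILABLE; // 503: I/O trouble
          }
          return HttpServletResponse.SC_INTERNAL_SERVER_ERROR; // 500: everything else
      }
  }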