| Commit message | Author | Age | Files | Lines |
Ensure that the reftable auto-refresh operations, clearing the database
cache and reloading the reftable stack, are executed in an exclusive
critical section under lock. Previously, these steps were performed
without an exclusive critical section, creating a window where
concurrent threads could interfere with each other.
In this race, one thread might clear the cache and, before it had a
chance to reload the stack, another thread could repopulate the cache
with stale data, keeping a reference to the open BlockSource channel of
the underlying tables that are subsequently removed when the first
thread reloads the stack.
This race condition resulted in attempts to access closed resources and
led to ClosedChannelException errors.
As an example, consider the following scenario:
* T0 - Thread-1 is executing auto-refresh and clears the database
  cache
* T1 - The master branch moves forward (for any reason):
  - A new refTable (`R_new`) file is created
  - An existing refTable (`R_old`) file is deleted due to
    auto-compaction.
* T2 - Thread-2 repopulates the database cache before Thread-1 has had
  a chance to reload the refTable stack.
* T3 - Thread-1 finally reloads the refTable stack, closing the
  BlockSource wrapping the removed `R_old` refTable file.
* T4 - Thread-2 attempts to read from the already-closed `R_old`
  BlockSource and a `j.n.c.ClosedChannelException` is thrown
To reproduce this problem, you can run a script created to craft this
race condition: I1e78e175cff.
While such errors during concurrent execution might be expected and
tolerable in isolation, the situation becomes more severe when the
`RepositoryCache` is involved, as is the case with Gerrit.
The `FileReftableDatabase` instance is cached within the `Repository`
object. When a `BlockSource` is closed prematurely due to this race
condition, the dangling reference remains in memory until the cached
Repository expires, which is one hour by default.
This means that, once the race occurs, the repository may be unable to
perform any ref lookups for up to an hour, effectively causing a
repository outage.
By introducing a ReentrantLock around both operations, the refresh logic
now guarantees that concurrent readers and writers maintain a consistent
view of the reftable state, eliminating the race condition.
Verified by executing the script provided at I1e78e175cff; no
exceptions are raised anymore.
Bug: jgit-130
Change-Id: I6153528a7b2695115b670bda04d4d4228c1731e1
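A minimal sketch of the locking pattern described above, using
java.util.concurrent.locks.ReentrantLock; class and method names are
illustrative placeholders, not JGit's actual internals:

    import java.io.IOException;
    import java.util.concurrent.locks.ReentrantLock;

    class AutoRefreshSketch {
        private final ReentrantLock refreshLock = new ReentrantLock();

        // Placeholders for the real cache/stack operations.
        private void clearCache() { /* drop cached refs */ }
        private void reloadStack() throws IOException { /* re-read tables.list */ }

        void autoRefresh() throws IOException {
            refreshLock.lock();
            try {
                // Clearing the cache and reloading the stack happen atomically,
                // so no other thread can repopulate the cache in between.
                clearCache();
                reloadStack();
            } finally {
                refreshLock.unlock();
            }
        }
    }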
* stable-7.1:
Fix: Close the "preserved" PackDirectory
Change-Id: I82f138f134fe09717e2e024b3c87971140f01b29
* stable-7.0:
Fix: Close the "preserved" PackDirectory
Change-Id: Icd3f79322f8c021e18fd5c881cd9f2a406230fa8
* stable-6.10:
Fix: Close the "preserved" PackDirectory
Change-Id: Ie0ecfd8178ef4e2eef6a29d46be5645648fe88f3
This has been missing since the feature was first added in commit
6167641834e28f8ad322f8fde60866b339bfb7fe.
It's possible we could be more aggressive and close soon after
attempting to get an object from the preserved packs, but for concurrent
misses that might cause thrashing. More likely it would be safe to
attempt closing after successfully restoring a preserved pack. A follow
up change should attempt that.
Change-Id: I87d61007bcc3d03fc86bd18465ca66a2e6f697a1
* stable-7.1:
Use the same ordering/locking in delete() as C git
Change-Id: Id52c938b041604162dca9162726bfb594e96f5d1
* stable-7.0:
Use the same ordering/locking in delete() as C git
Change-Id: I2c38321ee410d9ec60481d56315710beaebd393a
* stable-6.10:
Use the same ordering/locking in delete() as C git
Change-Id: I0d06e39d06315e0b9e770bdf79164779d98f9f50
Following the example of C git, lock packed-refs *before* checking for
the existence of refs in it [1] and *keep the lock* until the loose ref
(if any) is removed [2]. The packed-refs lock is kept even when no
packed-refs update is required [3] so that somebody else doesn't pack a
reference that we are trying to delete.
This fixes a concurrency issue that happens on projects with a
substantial number of refs (>~500k), where packing takes long enough
for a ref deletion to be triggered halfway through it. Not locking the
packed-refs file before checking whether the ref exists is unsafe, as
it opens up situations where loose refs are repacked in memory and
locked on disk, but a ref is deleted before the lock is released and
packed-refs is flushed to disk.
As packed-refs was NOT locked while checking whether a ref existed in
it, the current content on disk was read, which was about to be
overwritten and did not contain the ref about to be deleted. As the
delete doesn't see the ref in the current, on-disk version of
packed-refs, it skips the packed-refs update altogether and moves on,
deleting only the associated loose ref and leaving the packed one
behind.
Once the new packed-refs, still containing the ref that was just
deleted, was committed to disk, the ref would come back to life.
Therefore, packed-refs needs to be locked before checking whether it
contains a ref, in the same way the C implementation of Git does at
[1].
There are tradeoffs in this decision, though: it reduces the
parallelism between deleting loose refs and repacking refs, which
happens very often in certain JGit deployments such as Gerrit Code
Review. Before this change, repacking of refs and removal of loose
refs unrelated to the in-flight repacking were possible without any
locking; after this change, all loose ref removals have to wait for
the packing of refs to complete, even when the repacking and the ref
removals are completely unrelated and their namespaces disjoint.
See more details on the test's performance results and the associated
tradeoffs in issue jgit-152.
NOTE: The ref deletion locking logic was incorrect regardless of how
the packing of refs is implemented. Deciding whether the pack
transaction is needed based on an unlocked resource is racy and is
also flagged as a bug at [1].
[1] https://github.com/git/git/blob/master/refs/packed-backend.c#L1590
[2] https://github.com/git/git/blob/master/refs/files-backend.c#L3261
[3] https://github.com/git/git/blob/master/refs/files-backend.c#L2943
Bug: jgit-152
Change-Id: I158ec837904617c5fdf667e295ae667b2f037945
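A rough sketch of the ordering described above (hypothetical helper
types, not JGit's actual RefDirectory code): the packed-refs lock is
taken first, the existence check and any packed-refs rewrite happen
under that lock, and only then is the loose ref removed.

    import java.io.IOException;

    // Ordering sketch only; these interfaces stand in for the real internals.
    interface PackedRefs {
        Lock lock() throws IOException;      // take packed-refs.lock

        interface Lock extends AutoCloseable {
            boolean contains(String refName) throws IOException;
            void commitWithout(String refName) throws IOException;
            @Override
            void close();                    // release the lock
        }
    }

    interface LooseRefs {
        void delete(String refName) throws IOException;
    }

    class RefDeleteSketch {
        void delete(PackedRefs packedRefs, LooseRefs looseRefs, String refName)
                throws IOException {
            // 1. Lock packed-refs *before* checking whether it contains the ref.
            try (PackedRefs.Lock lock = packedRefs.lock()) {
                if (lock.contains(refName)) {
                    // 2. Rewrite packed-refs without the ref under the same lock.
                    lock.commitWithout(refName);
                }
                // 3. Remove the loose ref while still holding the packed-refs
                //    lock, so a concurrent pack-refs run cannot re-pack it.
                looseRefs.delete(refName);
            } // 4. Lock released last.
        }
    }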
* stable-7.0:
Prepare 7.0.2-SNAPSHOT builds
JGit v7.0.1.202505221510-r
Prepare 6.10.2-SNAPSHOT builds
JGit v6.10.1.202505221210-r
AmazonS3: Do not accept DOCTYPE and entities
ManifestParser: Do not accept DOCTYPE and entities
Change-Id: I4506e4bf51225000418b15bf09df3287be26242a
* stable-6.10:
Prepare 6.10.2-SNAPSHOT builds
JGit v6.10.1.202505221210-r
AmazonS3: Do not accept DOCTYPE and entities
ManifestParser: Do not accept DOCTYPE and entities
Change-Id: I699d57974d9ef2428355c59194c6becbc16828b7
This follows OWASP recommendations in
https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html
Change-Id: I3d47debf14d95c8189d51256b4eb2ba991279452
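The usual way to apply this OWASP advice with the JDK's SAX parser is
to disallow DOCTYPE declarations outright and, as defense in depth,
disable external entities; a minimal sketch (not necessarily the exact
code of this change):

    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.XMLReader;

    final class SecureSax {
        static XMLReader newSecureReader() throws Exception {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            // Reject any DOCTYPE declaration; this blocks XXE at the root.
            factory.setFeature(
                "http://apache.org/xml/features/disallow-doctype-decl", true);
            // Defense in depth: no external general or parameter entities.
            factory.setFeature(
                "http://xml.org/sax/features/external-general-entities", false);
            factory.setFeature(
                "http://xml.org/sax/features/external-parameter-entities", false);
            SAXParser parser = factory.newSAXParser();
            return parser.getXMLReader();
        }
    }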
These open the door to XXE attacks [1], and manifests do not need them.
[1] https://en.wikipedia.org/wiki/XML_external_entity_attack
Change-Id: Ia79971e1c34afaf287584ae4a7f71baebcb48b6a
This follows OWASP recommendations in
https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html
Change-Id: I3d47debf14d95c8189d51256b4eb2ba991279452
These open the door to XXE attacks [1], and manifests do not need them.
[1] https://en.wikipedia.org/wiki/XML_external_entity_attack
Change-Id: Ia79971e1c34afaf287584ae4a7f71baebcb48b6a
Change-Id: I163957653b075f1f05a6219f4d23b340588ffcbd
Change-Id: I7651613c33803daf00882a543dbf0c3f836110fa
Reading file attributes is faster than reading file content, hence use
FileSnapshot to speed up detecting whether FileReftableStack is
up to date.
Introduce a new option "core.trustTablesListStat" allowing one to
configure whether we can trust the file attributes of the "tables.list"
file to speed up detection of file modifications. This file stores the
list of filenames of the files storing reftables in
FileReftableDatabase.
If this option is set to "ALWAYS" we trust file attributes and use them
to speed up detection of file modifications.
If set to "NEVER" the content of the "tables.list" file is always read
unconditionally. This can help to avoid caching issues on some
filesystems.
If set to "AFTER_OPEN" we open a FileInputStream to refresh the file
attributes of the "tables.list" file before relying on the refreshed
attributes to detect modifications. This works on some NFS filesystems
and is faster than using "NEVER".
Change-Id: I3e288d90fb07edf4fa2a03c707a333b26f0c458d
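As described, the option takes the values ALWAYS, NEVER and AFTER_OPEN;
for example, in the repository's git config:

    [core]
        trustTablesListStat = AFTER_OPEN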
FileReftableDatabase didn't consider that refs might be changed by
another process, e.g. using git (which started supporting reftable with
version 2.45).
Add a test that creates a lightweight tag, updates it using git running
in another process, and asserts that FileReftableDatabase recognizes
the tag modification.
FileReftableStack#addReftable checks whether the stack is up to date
while it holds the FileLock for tables.list. If it is not up to date,
the RefUpdate fails with a LOCK_FAILURE to protect against lost ref
updates in case another instance of FileReftableDatabase, or another
thread or process, updated the reftable stack since we last read it.
If the option `reftable.autoRefresh = true` is set or
`setAutoRefresh(true)` was called, check before each ref resolution
whether the reftable stack is up to date and, if necessary, reload it
automatically. Calling `setAutoRefresh(true)` takes precedence over the
configured value of the option `reftable.autoRefresh`.
Add testConcurrentRacyReload, which verifies that ref updates are still
aborted if the reftable stack they are based on was outdated.
Bug: jgit-102
Change-Id: I1f9faa2afdbfff27e83ff295aef6d572babed4fe
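A minimal example of enabling the auto-refresh behavior via git config
(programmatically, the equivalent is calling setAutoRefresh(true) on
the reftable database, as described above):

    [reftable]
        autoRefresh = true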
If the cached regions are invalid, the merger throws an
IllegalStateException. This is too strict: the caller can just
continue working as if there were no cache.
Report the error as an IOException that the caller can catch and
handle.
Change-Id: I19a1061225533b46d3a17936912a11000430f2ce
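On the caller side this enables a simple fallback (hypothetical method
names; only the catch-and-continue pattern is the point):

    import java.io.IOException;
    import java.util.List;

    class CachedMergeSketch {
        // Returns cached regions if usable, otherwise recomputes them.
        List<int[]> regions(Object input) {
            try {
                return readCachedRegions(input);      // may throw IOException
            } catch (IOException e) {
                // Cache unusable: continue as if there were no cache.
                return computeRegionsFromScratch(input);
            }
        }

        List<int[]> readCachedRegions(Object input) throws IOException {
            return List.of();                         // placeholder
        }

        List<int[]> computeRegionsFromScratch(Object input) {
            return List.of();                         // placeholder
        }
    }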
* stable-7.1:
Do not load bitmap indexes during directory scans
Fix calculation of pack files and objects since bitmap
Pack: no longer set invalid in openFail()
Change-Id: I4516cd7f39418ddbb7db381f58aadc99b6d7e40d
* stable-7.0:
Do not load bitmap indexes during directory scans
Fix calculation of pack files and objects since bitmap
Pack: no longer set invalid in openFail()
Change-Id: I480a52909a7f3ee771947c0fd447433e10a9b19b
* stable-6.10:
Do not load bitmap indexes during directory scans
Fix calculation of pack files and objects since bitmap
Pack: no longer set invalid in openFail()
Change-Id: I8846ad4745a360244f81518a028fed5f07086724
Previously, if a bitmap index had not been loaded yet, it would get
loaded during a directory scan. Loading a bitmap file can be expensive
and there is no immediate need to do so during a scan. Fix this by
simply setting bitmap index file names on the Packs during directory
scans so that bitmaps can be lazily loaded at some later point if they
are needed.
This change has the side effect of no longer marking a Pack valid if it
is currently invalid simply because a bitmap file has been found, as
there is no valid reason to do so and this can incorrectly mark as
valid a Pack without an index or with other issues. Since the initial lack of
a bitmap file, or an invalid one, or the deletion of one, would not
result in the Pack being marked invalid, there is no need to overturn
the invalid flag when a new bitmap file is found.
Change-Id: I056acc09e7ae6a0982acd81b552d524190ebb4be
Signed-off-by: Martin Fick <mfick@nvidia.com>
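The lazy-loading idea, in a heavily simplified sketch (field and method
names are placeholders, not JGit's Pack API): the scan only records the
bitmap file's name, and the bitmap itself is read on first use.

    import java.io.File;
    import java.io.IOException;

    class PackSketch {
        private volatile File bitmapIdxFile; // recorded cheaply during scans
        private volatile Object bitmapIndex; // loaded lazily on first use

        void setBitmapIndexFile(File f) {
            // Called from the directory scan: no I/O, no touching 'invalid'.
            this.bitmapIdxFile = f;
        }

        Object getBitmapIndex() throws IOException {
            Object idx = bitmapIndex;
            if (idx == null && bitmapIdxFile != null) {
                synchronized (this) {
                    if (bitmapIndex == null) {
                        bitmapIndex = loadBitmapIndex(bitmapIdxFile); // expensive
                    }
                    idx = bitmapIndex;
                }
            }
            return idx;
        }

        private Object loadBitmapIndex(File f) throws IOException {
            return new Object(); // placeholder for the real bitmap parser
        }
    }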
The intention of the 'invalidate' argument in openFail() is to
invalidate the Pack in certain situations. However, after moving
doOpen() to a lock instead of using synchronized, the invalidation
approach could also incorrectly mark an already invalid Pack valid,
which was never the intention, since previously 'invalid' would only
ever get set to false if it already was false. Fix this by never
setting 'invalid' in openFail(); instead, set it explicitly before
calling openFail() when needed. This makes the intent clearer and aligns better
with all the existing comments already trying to explain the boolean
(and some of them become obvious enough now that the comment is deleted
or shortened). This is also likely faster than adding a conditional in
openFail() to make 'invalidate' work properly.
Change-Id: Ie6182103ee2994724cb5cb0b64030fedba84b637
Signed-off-by: Martin Fick <mfick@nvidia.com>
Fix a logic issue where pack files and objects created since the most
recent bitmap were incorrectly counted, ignoring their modification
time.
Since pack files are processed in order from most recent to oldest, we
can reliably stop counting as soon as we encounter the first bitmap. By
definition, all subsequent pack files are older and should not be
included in the count.
This ensures accurate repository statistics and prevents overcounting.
Bug: jgit-140
Change-Id: I99d85fb70bc7eb42a8d24c74a1fdb8e03334099e
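The counting rule amounts to something like the following sketch
(hypothetical types; packs are assumed sorted newest first):

    import java.util.List;

    class SinceBitmapSketch {
        // Hypothetical pack descriptor: just what the counting needs.
        record PackInfo(long objectCount, boolean hasBitmap) {}

        static long countObjectsSinceBitmap(List<PackInfo> packsNewestFirst) {
            long objects = 0;
            for (PackInfo p : packsNewestFirst) {
                if (p.hasBitmap()) {
                    break; // stop at the first bitmap; all remaining packs are older
                }
                objects += p.objectCount();
            }
            return objects;
        }
    }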
Fixes errorprone error [OperatorPrecedence], see
https://errorprone.info/bugpattern/OperatorPrecedence
Change-Id: I3086ac0238bcf4661de6a69b1c133a4f64a3a8d4
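For reference, the errorprone check flags mixed && / || expressions
without parentheses; the fix is purely about making the precedence
explicit, e.g. (illustrative variable names):

    class PrecedenceExample {
        static boolean check(boolean isPack, boolean isValid, boolean isKept) {
            // Flagged by [OperatorPrecedence]: relies on && binding tighter than ||.
            // return isPack && isValid || isKept;

            // Same semantics, with the grouping made explicit:
            return (isPack && isValid) || isKept;
        }
    }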
Command-line git stages deletions if file paths are given (i.e., --all
is implied). File paths are also optional if --update or --all (or
--no-all) is given.
Add a setter and getter for an "all" flag on AddCommand.
Check consistency with the "update" flag in call(). Make file paths
optional (imply a "." path) if update is set or if setAll() has been
called. If file paths are given, set the all flag.
Stage deletions if update is set, or if the "all" flag is set.
Add the C git command-line options for the "all" flag to jgit.pgm.Add.
Bug: jgit-122
Change-Id: Iedddedcaa2d7a2e75162454ea047f34ec1cf3660
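Expected usage from the porcelain API, sketched from the description
above (setAll is the new setter introduced here; treat the exact
signature as an assumption):

    import java.io.File;
    import org.eclipse.jgit.api.AddCommand;
    import org.eclipse.jgit.api.Git;

    class AddAllExample {
        static void stageEverything(File repoDir) throws Exception {
            try (Git git = Git.open(repoDir)) {
                AddCommand add = git.add();
                // With the "all" flag a "." path is implied and deletions
                // are staged as well.
                add.setAll(true);
                add.call();
            }
        }
    }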
* changes:
DirCacheCheckout.preScanOneTree: consider mode bits
Merge: improve handling of case-variants
If only the file mode is changed, it's still a change and we must check
out the entry from the commit.
Bug: jgit-138
Change-Id: I83adebe563fcdb4cbe330edb44884d55ed463c2c
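Put differently, the pre-scan comparison has to look at the mode as
well as the object id, roughly (a simplified sketch, not the actual
DirCacheCheckout code):

    class ModeBitsSketch {
        // An entry counts as changed if either the blob or the file mode differs.
        static boolean changed(String headObjectId, int headMode,
                String mergeObjectId, int mergeMode) {
            return !headObjectId.equals(mergeObjectId) || headMode != mergeMode;
        }
    }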
Ensure that on a case-insensitive filesystem a merge that includes a
rename of a file from one case variant to another does not delete the
file.
Basically make sure that we don't delete files that we had marked under
a case variant as "keep" before, and ensure that when checking out a
file, it is written to the file system with the exact casing recorded
in the git tree.
Bug: egit-76
Change-Id: Ibbc9ba97c70971ba3e83381b41364a5529f5a5dc
In [1] we could use a "trim" function to remove leading/trailing '/'
from paths.
[1] https://gerrithub.io/q/I1f2a07327d1a1d8149ee482bc2529b7e1a5303db
Change-Id: I490e6afe5c8e6c164d07442b1b388f8a131b4c50
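A possible shape for such a helper (name and placement are
hypothetical; this is not the code referenced in [1]):

    class PathTrim {
        /** Removes any leading and trailing '/' characters from a path. */
        static String trimSlashes(String path) {
            int start = 0;
            int end = path.length();
            while (start < end && path.charAt(start) == '/') {
                start++;
            }
            while (end > start && path.charAt(end - 1) == '/') {
                end--;
            }
            return path.substring(start, end);
        }
    }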
and reduce indentation.
Change-Id: I60a6f721eed051d67aa385a143e2bd3a950485f7
As of right now, the describe command in JGit only supports adding
matches for tag inclusion. It does not support adding matches for
excluding tags, which is something that can be done with git on the
command line using the "--exclude" flag. This change adds a sister
method to setMatches(), called setExcludes(), which does exactly
that.
A few preliminary tests have also been included in
DescribeCommandTest.
Change-Id: Id1449c7b83c42f1d875eabd5796c748507d69362
Signed-off-by: Jonathing <me@jonathing.me>
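Intended usage, sketched from the description above (the method names
setMatch/setExcludes are taken from this change's description and are
assumptions, not verified against the released API):

    import org.eclipse.jgit.api.DescribeCommand;
    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.lib.Repository;

    class DescribeExcludeExample {
        static String describe(Repository repo) throws Exception {
            try (Git git = new Git(repo)) {
                DescribeCommand describe = git.describe();
                describe.setMatch("v*");        // include only tags matching v*
                describe.setExcludes("v*-rc*"); // ...but skip release candidates
                return describe.call();
            }
        }
    }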
This is a follow-up of change 1208616. The goal is to get even closer
to consistency with Gerrit's commit-msg hook.
The modified test cases were all verified against what the commit-msg
hook would do with the same commit message.
The substantial change is that within the footer block we are putting
the Change-Id also after lines matching `includeInFooterPattern`, not
just after lines matching `footerPattern`. Those are lines that start
either with a space or with an opening bracket.
Change-Id: I39305154e653f8f5adb6b75ca0f6b349b720e9d8
Before this change we were inserting the Change-Id at the beginning
of the footer block, but after any Bug or Issue footers.
After this change we are inserting the Change-Id at the end of the
footer block, but before any Signed-off-by footers.
The overall goal is to stay consistent with Gerrit's commit-msg hook.
Change-Id: Id3a73e901ac3c289f79941d12979459c66cb7a13
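As an illustration of the resulting ordering, a footer block would end
up looking like this (values are made up):

    Bug: 123
    Issue: 456
    Change-Id: I0123456789012345678901234567890123456789
    Signed-off-by: A U Thor <author@example.com>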
and suppress nls warnings.
Change-Id: I33be306f4d5894e81fce7b2b34fdff30313417de
Change-Id: I849d0a3ff07bb53e93112eb716a6537cf2419290
Blame goes into the history of a file until it has blamed all lines.
If we have a blamed version of the file in the cache, we can use it to
resolve the lines not blamed yet and cut the calculation short.
When processing a candidate, check whether it is in the cache and, if
so, fill the blame for the pending regions using the information from
the cached (and fully blamed) copy.
The generator doesn't populate the cache itself. Callers must take the
final results and put them in the cache.
Change-Id: Ia99b09d6d825e96940ca4200967958923bf647c7
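A sketch of that check inside the generator (names are illustrative,
not the actual generator code): a cache hit resolves all still-pending
regions and the history walk for them stops there.

    class BlameGeneratorSketch {
        interface Cache {
            /** Fully blamed copy for commit:path, or null on a miss. */
            Object get(String commit, String path);
        }

        private final Cache cache;

        BlameGeneratorSketch(Cache cache) {
            this.cache = cache;
        }

        boolean tryFillFromCache(String commit, String path) {
            Object cached = cache.get(commit, path);
            if (cached == null) {
                return false;               // miss: keep walking history
            }
            fillPendingRegionsFrom(cached); // resolve all pending regions
            return true;                    // done: stop the walk here
        }

        private void fillPendingRegionsFrom(Object cachedBlame) {
            // placeholder: assign blame to pending regions from the cached copy
        }
    }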
Prints the multipack index file in a human readable format. This helps
to debug/inspect multipack indexes generated by jgit or git.
Change-Id: I04f477b3763b0ecfde6f4379f267de8a609a54e7
The blame generator goes into the history of a file until it blames
all lines. If the caller keeps a cache of the results, the generator
could use it to shorten that walk, using the blame information of an
older revision of the file to build the new blame view.
Define an interface and POJO for the blame cache.
Change-Id: Ib6b033ef46089bbc5a5b32e8e060d4ab0f74b871
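One possible shape for that contract (illustrative names; the actual
interface and POJO introduced here may differ):

    import java.util.List;

    /** Cache of fully blamed file revisions, keyed by commit and path. */
    interface BlameCache {
        /** @return blamed regions of path at commitId, or null if not cached. */
        List<CachedRegion> get(String commitId, String path);
    }

    /** Immutable POJO describing one blamed region of the cached file. */
    record CachedRegion(String sourceCommit, String sourcePath,
            int resultStart, int resultEnd) {}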
Need to make DirCacheVersions public; otherwise Config#allValuesOf
cannot invoke its #values method via introspection.
Change-Id: Id11a6fdbe7ce3d84f04bf47e98746424dcc761b4
The regex for a relative path used greedy matches, which could cause
excessive backtracking. Simplify the regex and use a possessive
quantifier to avoid backtracking entirely: a relative path is a
sequence (AB)*A?, where A and B are disjoint; once (AB)* has been
matched, there is no need for any backtracking in the relative path.
Bug: egit-80
Change-Id: Ic7865f20767d85ec1db2d0b92adcd548099eb643
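An illustrative Java pattern of that shape (not the exact JGit regex):
with A = a path segment and B = '/', the possessive quantifiers never
give back what they matched, so a failure is decided without
backtracking.

    import java.util.regex.Pattern;

    class RelativePathPattern {
        // (AB)*+ A?  with A = [^/]++ (a segment) and B = "/" (the separator).
        static final Pattern RELATIVE_PATH =
                Pattern.compile("(?:[^/]++/)*+[^/]*+");

        static boolean isRelativePath(String s) {
            return RELATIVE_PATH.matcher(s).matches();
        }
    }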
and fix boxing warnings.
Change-Id: Ia9b4deba7892256639c53bac5d7b62f1fbb01389
Errorprone says:
  DefaultTypedConfigGetter.java:176: error: [InfiniteRecursion] This
  method always recurses, and will cause a StackOverflowError
    return getLong(config, section, subsection, name, defaultValue);
[1] introduced new getters with boxed types to return null when the
config is not set. The getters for unboxed types should call the boxed
version but, as the values are not explicitly boxed, they end up
calling themselves.
[1] https://gerrithub.io/c/eclipse-jgit/jgit/+/1207895
Change-Id: Ied45a199c8ef905e3774a17a04d91a656aa0e42b
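The pitfall, in a reduced form (simplified signatures, not the actual
DefaultTypedConfigGetter code): with both a long and a Long overload
present, a long argument binds to the primitive overload again unless
it is boxed explicitly.

    class TypedGetterSketch {
        /** Boxed variant: may return null when the config value is not set. */
        Long getLong(String section, String name, Long defaultValue) {
            return defaultValue; // placeholder for the real config lookup
        }

        /** Unboxed variant: must delegate to the boxed one. */
        long getLong(String section, String name, long defaultValue) {
            // Without Long.valueOf(...) the call would pick this very overload
            // (exact primitive match wins over boxing) and recurse forever.
            Long value = getLong(section, name, Long.valueOf(defaultValue));
            return value != null ? value.longValue() : defaultValue;
        }
    }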
From the git documentation [1]: While the pack-indexes provide fast lookup
per packfile, this performance degrades as the number of packfiles
increases, because abbreviations need to inspect every packfile and we
are more likely to have a miss on our most-recently-used packfile. For
some large repositories, repacking into a single packfile is not
feasible due to storage space or excessive repack times. (...)
The multi-pack-index (MIDX for short) stores a list of objects and
their offsets into multiple packfiles. (...) Thus, we can provide
O(log N) lookup time for any number of packfiles.
This is a writer for the multipack index format. The test only verifies
the "shape" of the file; when we get a parser we can also check the
values (especially the large-offset handling).
On the JGit repository, the multipack index generated by this writer
passes the validation of `git multi-pack-index verify`.
[1] https://git-scm.com/docs/pack-format#_multi_pack_index_midx_files_have_the_following_format
Change-Id: I1fca599c4ebf28154f28f039c2c4cfe75b2dc79d
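The cross-check against C git mentioned above can be repeated on any
object directory containing a written multi-pack-index, for example:

    $ git multi-pack-index --object-dir /path/to/repo.git/objects verify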