FileBasedConfig should not rely on auto-detection of the file-snapshot
attribute computation based on config. The check was already performed
when a new FileBasedConfig is created, at L158:
// don't use config in this snapshot to avoid endless recursion
newSnapshot = FileSnapshot.saveNoConfig(getFile());
The check was missing, however, when the FileBasedConfig is saved to
disk and the new snapshot is obtained from the associated LockFile.
This change fixes the issue by keeping a non-config-based FileSnapshot
also after a FileBasedConfig is saved.
Bug: 577983
Change-Id: Id1e410ba687e683ff2b2643af31e1110b103b356
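A minimal sketch of that idea, assuming only the FileSnapshot API quoted
above (an illustration, not the actual patch):

import java.io.File;
import org.eclipse.jgit.internal.storage.file.FileSnapshot;

class ConfigSnapshotSketch {
    // After writing the config file, snapshot it without consulting any
    // config, so taking the snapshot can never recurse into the file
    // that is being snapshotted.
    static FileSnapshot afterSave(File configFile) {
        return FileSnapshot.saveNoConfig(configFile);
    }
}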
creation"
This reverts commit f829f5f838e0f9c17373ea6cb3407976a8f395ff.
Using MISSING_FILEKEY as an indicator for a non-existing file doesn't work
on Windows.
Bug: 577954
Change-Id: I92102a3d259f6cc0f367096a3213cfa794466817
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
File.isFile() [1] checks if the file exists and is a normal file.
[1] https://docs.oracle.com/javase/8/docs/api/java/io/File.html#isFile--
Change-Id: I0a883f2482ecc5ac58b270351b416742b568eb68
Signed-off-by: Nasser Grainawi <quic_nasserg@quicinc.com>
Return immediately in scanRef if the loose ref was identified as
missing when a snapshot was attempted for the ref. This will help
performance of scanRef when the ref is packed but has a corresponding
empty dir in 'refs/'.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like this:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs takes ~2.5s
without this change, whereas with this change it takes ~1.5s.
Change-Id: I926bc41b9ae89a1a792b1b5ec9a17b05271c906b
Signed-off-by: Kaushik Lingarkar <quic_kaushikl@quicinc.com>
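A hedged usage sketch of the measurement described above (the helper and
the list of ref names are illustrative, not part of the change):

import java.io.IOException;
import org.eclipse.jgit.lib.Ref;
import org.eclipse.jgit.lib.Repository;

class ExactRefTimingSketch {
    // Look up every given ref once and report how long it took.
    static void timeLookups(Repository repo, Iterable<String> refNames) throws IOException {
        long start = System.nanoTime();
        int found = 0;
        for (String name : refNames) {
            Ref ref = repo.exactRef(name); // null if the ref does not exist
            if (ref != null) {
                found++;
            }
        }
        System.out.println(found + " refs found, elapsed ms: "
                + (System.nanoTime() - start) / 1_000_000);
    }
}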
Doing a getFileStoreAttributes call even when the file doesn't
exist is unnecessary. This call is particularly slow on some
filesystems. Instead, do it only when the file exists and load
the appropriate cache.
This update can help speed up RefDirectory.exactRef when the ref
is packed, but has a corresponding empty dir for it under 'refs/'.
This scenario can happen when an atomic 'BatchRefUpdate' creates
new sharded refs.
For example, consider the case where we create 50k sharded refs in
a new namespace called 'new-refs' using an atomic 'BatchRefUpdate'.
The refs are named like 'refs/new-refs/01/1/1', 'refs/new-refs/01/1/2',
'refs/new-refs/01/1/3' and so on. After the refs are created, the
'new-refs' namespace looks like this:
$ find refs/new-refs -type f | wc -l
0
$ find refs/new-refs -type d | wc -l
5101
At this point, an 'exactRef' call on each of the 50k refs takes ~30s
without this change, whereas with this change it takes ~2.5s.
Change-Id: I4a5d4c6a652dbeed1f4bc3b4f2b2f1416f7ca0e7
Signed-off-by: Kaushik Lingarkar <quic_kaushikl@quicinc.com>
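A minimal sketch of that guard using plain java.nio (illustrative only;
the actual change works with JGit's FileStoreAttributes caching):

import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

class FileStoreGuardSketch {
    // Skip the potentially slow file-store lookup when the path is absent.
    static FileStore storeIfPresent(Path p) throws IOException {
        if (!Files.exists(p)) {
            return null;
        }
        return Files.getFileStore(p);
    }
}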
Rather than getting all ref names and prefixes and saving them
in memory to perform the check for conflicting names, rely on
RefDirectory.isNameConflicting as it is no longer an expensive
call after it was optimized in Ie994fc.
The old optimization of saving ref names and prefixes in memory
was targeted at making clones faster. In tests with repositories
containing many (~500k) refs, clone performance is unaffected by
this change.
Here are a few recorded elapsed times for creating 10 branches
using BatchRefUpdate on NFS-based repositories with varying loose
ref counts. As seen here, this change improves BatchRefUpdate
performance from O(n^2) to O(1).
loose_refs_count with_change without_change
50 241 ms 310 ms
300 263 ms 1502 ms
1k 181 ms 4241 ms
2k 204 ms 6440 ms
9k 158 ms 25930 ms
20k 154 ms 60443 ms
50k 171 ms 135199 ms
110k 157 ms 329450 ms
160k 209 ms 396328 ms
This update improves the Gerrit NoteDb migration performance, as it
uses BatchRefUpdate to write change meta refs, similar to the test
performed above.
Change-Id: I853ac6c7feb4b39c3156c01876b38cbd182accfe
Signed-off-by: Kaushik Lingarkar <quic_kaushikl@quicinc.com>
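For context, a hedged sketch of creating refs with an atomic
BatchRefUpdate, roughly as in the measurement above (the repository,
target commit and ref names are placeholders):

import java.io.IOException;
import java.util.List;
import org.eclipse.jgit.lib.BatchRefUpdate;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.transport.ReceiveCommand;

class BatchCreateSketch {
    // Create several new refs pointing at 'tip' in one atomic batch.
    static void createRefs(Repository repo, ObjectId tip, List<String> refNames)
            throws IOException {
        BatchRefUpdate batch = repo.getRefDatabase().newBatchUpdate();
        batch.setAtomic(true);
        for (String name : refNames) {
            batch.addCommand(new ReceiveCommand(ObjectId.zeroId(), tip, name));
        }
        try (RevWalk rw = new RevWalk(repo)) {
            batch.execute(rw, NullProgressMonitor.INSTANCE);
        }
    }
}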
Avoid having to scan over ALL loose refs to determine whether the
name is nested within, or is a container of, an existing reference.
This can get really expensive when there are many loose refs.
Instead use exactRef and getRefsByPrefix, which scan based on a
prefix.
With a simple shell script (like the one below) using the jgit client
to create 1k refs in a new repository on NFS, this change brings the
time down from 12 minutes to 7 minutes.
for ref in $(seq 1 1000); do
  jgit branch "$ref"
done
Here are a few recorded elapsed times to create a new branch on
NFS-based repositories with varying loose ref counts. As we see here,
this change improves the name-conflict check from O(n^2) to O(1).
loose_refs_count with_change without_change
50 44 ms 164 ms
300 45 ms 1193 ms
1k 38 ms 2610 ms
2k 44 ms 6003 ms
9k 46 ms 27860 ms
20k 45 ms 48591 ms
50k 51 ms 135471 ms
110k 43 ms 294252 ms
160k 52 ms 430976 ms
Change-Id: Ie994fc184b8f82811bfb37b111eb9733dbe3e6e0
Signed-off-by: Kaushik Lingarkar <quic_kaushikl@quicinc.com>
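A hedged sketch of a prefix-based conflict check along these lines
(illustrative; the real RefDirectory.isNameConflicting differs in
details):

import java.io.IOException;
import org.eclipse.jgit.lib.RefDatabase;

class NameConflictSketch {
    // A new name conflicts if an existing ref is a leading prefix of it,
    // or if existing refs are nested underneath it.
    static boolean isNameConflicting(RefDatabase refDb, String name) throws IOException {
        for (int i = name.indexOf('/'); i > 0; i = name.indexOf('/', i + 1)) {
            if (refDb.exactRef(name.substring(0, i)) != null) {
                return true; // e.g. existing "refs/heads/foo" blocks "refs/heads/foo/bar"
            }
        }
        return !refDb.getRefsByPrefix(name + '/').isEmpty();
    }
}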
We are running several servers with JGit. We need to run repack from
time to time to keep the repos performant, i.e. after a push we test how
many small packs are in the repo, and when a threshold is reached we run
the repack.
After upgrading the JGit version we found that if someone does a clone
while repack is running, the clone sometimes (not always) fails because
the repack removes a .pack file used by the clone. Server exception and
client error are attached.
I've tracked down the cause: it seems to have been introduced between
JGit 5.2 (which we upgraded from) and 5.3 by this commit:
Move throw of PackInvalidException outside the catch -
https://github.com/eclipse/jgit/commit/afef866a44cd65fef292c174cad445b3fb526400
The problem is that when the throw was inside the try block, the last
catch block caught the exception and called the openFailed(false) method.
It is true that it called it with invalidate = false, which is wrong.
The real problem, though, is that with the throw outside the try block
openFailed is not called at all and the fields activeWindows and
activeCopyRawData are not reset to 0, which affects later checks
like: if (++activeCopyRawData == 1 && activeWindows == 0).
The fix is relatively simple: keep the throw outside the try block but
still set the invalid field to true. I did exhaustive testing of the
change, running concurrent clones and pushes indefinitely; with the
patch applied it never fails, while without the patch the error shows
up after a relatively short time.
See: https://www.eclipse.org/lists/jgit-dev/msg04014.html
Bug: 569349
Change-Id: I9dbf8801c8d3131955ad7124f42b62095d96da54
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Ie245bf5c8c924dfb1f0f40b8bcdcb1e6f5815526
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- log an error
- return immediately, since either there is no list or it is incomplete
Change-Id: Ieee5378ca06304056b9ccc30c1acd5f52360052d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
If pack or index files are guarded by a pack lock (.keep file)
deleteOrphans() should not touch the respective files protected by the
lock file. Otherwise it may interfere with PackInserter concurrently
inserting a new pack file and its index.
The problem was caused by the following race.
All mentioned files are located in "objects/pack/".
File endings relevant in "pack" dir:
.pack
.keep
.idx
.bitmap
When ReceivePack receives a pack file it executes the following steps:
ReceivePack.service():
  receivePackAndCheckConnectivity():
    receivePack():
      receive the pack
      parse the pack, returns packLock (.keep file)
    PackInserter.flush():
      write tmpPack file: "insert_<random>.pack"
      write tmpIdx file: "insert_<random>.idx"
      real pack name: "pack-<SHA1>.pack"
      real index name: "pack-<SHA1>.idx"
      atomic rename tmpPack to realPack
      atomic rename tmpIdx to realIdx
  execute commands
  unlock pack by removing .keep file
  trigger auto gc if enabled
When PackInserter.flush() renames the temporary pack to the final
"pack-xxx.pack" file the temporary pack index file "insert_xxx.idx"
has no matching .pack file with the same base name for a short interval.
If deleteOrphans() ran during that interval it deduced the pack index
file was orphaned. Subsequently the missing pack index caused
MissingObjectExceptions since objects contained in the pack couldn't be
looked up anymore.
Bug: https://bugs.chromium.org/p/gerrit/issues/detail?id=13544
Change-Id: I559c81e4b1d7c487f92a751bd78b987d32c98719
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
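A small illustrative guard (not the actual deleteOrphans code) showing
the intended behavior: any file whose base name has a ".keep" sibling is
left alone:

import java.nio.file.Files;
import java.nio.file.Path;

class KeepGuardSketch {
    // true if "<base>.keep" exists next to a "<base>.pack"/".idx"/".bitmap" file
    static boolean isGuardedByKeep(Path packDirEntry) {
        String name = packDirEntry.getFileName().toString();
        int dot = name.lastIndexOf('.');
        if (dot < 0) {
            return false;
        }
        Path keep = packDirEntry.resolveSibling(name.substring(0, dot) + ".keep");
        return Files.exists(keep);
    }
}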
Change-Id: I7c44e3603df2dd368cb7c0ba0072413b887b6903
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Since ObjectDatabase and PackFile don't know their repository, use the
packfile's great-grandparent directory as an identifier for the
repository the packfile resides in.
Remove the metric for a repository if the number of cached bytes for the
repository drops to 0, to ensure the map of cached bytes per repository
doesn't contain repositories which have no data cached in the
WindowCache.
Change-Id: I969ab8029db0a292e7585cbb36ca0baa797da20b
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
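To illustrate the identifier (a hypothetical helper, not the actual
WindowCache code): for a pack at <repo>/objects/pack/pack-<SHA1>.pack,
walking up three directory levels yields the repository directory:

import java.io.File;

class RepoOfPackSketch {
    static File repositoryOf(File packFile) {
        return packFile.getParentFile()   // .../objects/pack
                .getParentFile()          // .../objects
                .getParentFile();         // repository directory
    }
}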
Java GC evicts all SoftReferences when the used heap size comes close to
the maximum heap size. This means peaks in heap memory consumption can
flush the complete WindowCache which was observed to have negative
impact on performance of upload-pack in Gerrit.
Hence add a boolean option core.packedGitUseStrongRefs to allow using
strong references for packfile pages cached in the WindowCache.
If this option is set to true, Java GC can no longer flush the
WindowCache to free memory when the used heap comes close to the maximum
heap size. On the other hand, this provides more predictable performance.
Bug: 553573
Change-Id: I9de406293087ab0fa61130c8e0829775762ece8d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
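A hedged example of enabling the option programmatically with JGit's
config API (SystemReader#getUserConfig is assumed to be available, as in
recent JGit versions; the key name is the one introduced above):

import org.eclipse.jgit.lib.StoredConfig;
import org.eclipse.jgit.util.SystemReader;

class EnableStrongRefsSketch {
    static void enable() throws Exception {
        StoredConfig cfg = SystemReader.getInstance().getUserConfig();
        cfg.setBoolean("core", null, "packedGitUseStrongRefs", true);
        cfg.save(); // persists to the user-level git config
    }
}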
Using exceptions during normal operations - for example to decide when
an array needs to be expanded - can have a severe performance impact.
When an exception is instantiated, a stack trace is collected, and
generating a stack trace can be expensive. Compared to that, checking an
array's length - even if done many times - is cheap, since it runs in
just a handful of CPU cycles.
Change-Id: Ifaf10623f6a876c9faecfa44654c9296315adfcb
Signed-off-by: Patrick Hiesel <hiesel@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
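A generic illustration of the point (not JGit code): the exception-driven
variant pays for a stack trace on every overflow, while a bounds check
costs only a comparison:

class BoundsCheckSketch {
    // exception-driven: builds a stack trace whenever ptr runs past the end
    static int nextSlow(char[] buf, int ptr) {
        try {
            return buf[ptr];
        } catch (ArrayIndexOutOfBoundsException e) {
            return -1;
        }
    }

    // explicit check: a few CPU cycles, no exception object
    static int nextFast(char[] buf, int ptr) {
        return ptr < buf.length ? buf[ptr] : -1;
    }
}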
Change-Id: Icc5265f87ae58aa1e667ed1827075c4a30f75c32
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I1735033c56b444a9a7160cb7df89292a228d5b34
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The CommandFailedException which was thrown in the finally block is
silently discarded [1]. Refactor this method to throw the exception after
the finally block.
This fixes the warning "Null comparison always yields false: The
variable failure can only be null at this location".
[1] https://wiki.sei.cmu.edu/confluence/display/java/ERR04-J.+Do+not+complete+abruptly+from+a+finally+block
https://docs.oracle.com/javase/specs/jls/se8/html/jls-14.html#jls-14.20.2
Change-Id: Idbfc303d9c9046ab9a43e0d4c6d65d325bdaf0ed
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
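A generic sketch of the fix pattern (assumed shape, not the actual
method): remember the failure, let the finally block complete normally,
then throw:

import java.io.IOException;

class ThrowAfterFinallySketch {
    static void runAndCleanUp(Process p) throws IOException, InterruptedException {
        IOException failure = null;
        try {
            if (p.waitFor() != 0) {
                failure = new IOException("command failed"); // placeholder failure
            }
        } finally {
            p.destroy(); // cleanup only; throwing here would discard 'failure'
        }
        if (failure != null) {
            throw failure; // thrown after the finally block has completed
        }
    }
}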
Change-Id: I6f6b8641f6c3e8ab9f625594085014272305656a
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Add the following statistics
- cache hit count and hit ratio
- cache miss count and miss ratio
- count of successful and failed loads
- rate of failed loads
- load, eviction and request count
- average and total load time
Use LongAdder instead of AtomicLong to implement the counters in order
to improve scalability.
Optionally expose these metrics via JMX; they are registered with the
platform MBean server if the config option jmx.WindowCacheStats = true
is set in the user or system level git config.
Bug: 553573
Change-Id: Ia2d5246ef69b9c2bd594a23934424bc5800774aa
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
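A minimal illustration of the counter choice (assumed shape, not the
real WindowCacheStats class): LongAdder scales better than AtomicLong
when many threads bump the same counter:

import java.util.concurrent.atomic.LongAdder;

class CacheCountersSketch {
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    void recordHit() { hits.increment(); }
    void recordMiss() { misses.increment(); }

    // hit ratio derived from the two counters
    double hitRatio() {
        long h = hits.sum();
        long m = misses.sum();
        long total = h + m;
        return total == 0 ? 1.0 : (double) h / total;
    }
}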
In Config#StringReader we relied on ArrayIndexOutOfBoundsException
to detect the end of the input. Creating an exception with a (deep) stack
trace can significantly degrade performance when we read thousands of
config files, as when Gerrit reads all external IDs from the NoteDb.
Use buf.length to detect the end of the input.
Change-Id: I12266f25751373a870ce3fa623cf2a95d882d521
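A hedged sketch of the resulting pattern (not the exact
Config.StringReader code): signal end-of-input with -1 via a length
check instead of catching ArrayIndexOutOfBoundsException:

class StringReaderSketch {
    private final char[] buf;
    private int pos;

    StringReaderSketch(String in) {
        buf = in.toCharArray();
    }

    int read() {
        if (pos >= buf.length) {
            return -1; // end of input, no exception needed
        }
        return buf[pos++];
    }
}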
Older JGit stored only millisecond timestamps in the index. Newer
JGit may get finer timestamps from the file system. This leads to
slow index diffs when a new JGit runs against an index produced
by older JGit because many timestamps will differ and JGit will
then do many content checks. See [1].
Handle this migration case by only comparing milliseconds if the
index entry has only millisecond precision.
The inverse may also occur; also compare only milliseconds if the
file timestamp has only millisecond precision.
Do the same also for microsecond resolution. On Windows, NTFS may
provide 100ns resolution and may be used by external programs writing
the index, but Java's WindowsFileAttributes may provide only
microseconds.
File timestamp precision in Java depends not only on the Java APIs
used by different JGit versions but may also change when running the
same Java code on different VMs. And of course the resolution may
vary among operating and file systems. Moreover, timestamp precision
in the index depends on the program that wrote the index. Canonical
git may use a different resolution, maybe even different between git
versions.
[1] https://www.eclipse.org/forums/index.php/t/1100344/
Change-Id: Idfd08606c883cb98787b2138f9baf0cc89a57b56
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
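A hedged sketch of the comparison idea for the millisecond case
(illustrative; the actual change also handles microsecond resolution):
if either side carries only millisecond precision, compare both sides
truncated to milliseconds:

import java.time.Instant;
import java.time.temporal.ChronoUnit;

class TimestampCompareSketch {
    static boolean sameTimeAtCommonResolution(Instant indexTime, Instant fileTime) {
        boolean indexMillisOnly = indexTime.getNano() % 1_000_000 == 0;
        boolean fileMillisOnly = fileTime.getNano() % 1_000_000 == 0;
        if (indexMillisOnly || fileMillisOnly) {
            return indexTime.truncatedTo(ChronoUnit.MILLIS)
                    .equals(fileTime.truncatedTo(ChronoUnit.MILLIS));
        }
        return indexTime.equals(fileTime);
    }
}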
If CheckStat is MINIMAL or timestamps have no nanosecond part,
WorkingTreeIterator.compareMetaData only checks the seconds part of
timestamps and ignores nanoseconds, which may have ended up in the index
by using native git.
If
fileLastModified.getEpochSecond() == cacheLastModified.getEpochSecond()
we currently proceed to compare fileLastModified and cacheLastModified
with full precision, which is wrong since we already determined that the
timestamp resolution is reduced.
Fix this and also handle smudged index entries for CheckStat.MINIMAL.
Change-Id: I6149885903ac63d79b42d234cc02aa4e19578f3c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I995d7a1b6ab2866221eee9f5cb828b97192daf4a
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Change-Id: Ida47637f54afdb76513be9b04aae32107567d4e3
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
* stable-5.0:
Prepare 4.11.10-SNAPSHOT builds
JGit v4.11.9.201909030838-r
Bazel: Update bazlets to the latest master revision
Bazel: Remove FileTreeIteratorWithTimeControl from BUILD file
BatchRefUpdate: repro racy atomic update, and fix it
Delete unused FileTreeIteratorWithTimeControl
Fix RacyGitTests#testRacyGitDetection
Change RacyGitTests to create a racy git situation in a stable way
Silence API warnings
Change-Id: I172136a031ff0730e575327cafb3527c9650a71d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-4.11:
Prepare 4.11.10-SNAPSHOT builds
JGit v4.11.9.201909030838-r
Bazel: Update bazlets to the latest master revision
Bazel: Remove FileTreeIteratorWithTimeControl from BUILD file
BatchRefUpdate: repro racy atomic update, and fix it
Delete unused FileTreeIteratorWithTimeControl
Fix RacyGitTests#testRacyGitDetection
Change RacyGitTests to create a racy git situation in a stable way
Silence API warnings
Change-Id: Ifb6a4dbea2f48fd2ffa66eb737d61920aefedfbd
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-4.10:
Bazel: Update bazlets to the latest master revision
Bazel: Remove FileTreeIteratorWithTimeControl from BUILD file
BatchRefUpdate: repro racy atomic update, and fix it
Delete unused FileTreeIteratorWithTimeControl
Fix RacyGitTests#testRacyGitDetection
Change RacyGitTests to create a racy git situation in a stable way
Silence API warnings
Change-Id: If672b4f0c350f4e8ff7e1e706485cffd8137236d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* stable-4.9:
BatchRefUpdate: repro racy atomic update, and fix it
Delete unused FileTreeIteratorWithTimeControl
Fix RacyGitTests#testRacyGitDetection
Change RacyGitTests to create a racy git situation in a stable way
Silence API warnings
Change-Id: Id5bf44645655fca40ad22bb1f1ad20a7c2e8f6db
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
PackedBatchRefUpdate was creating a new packed-refs list that was
potentially unsorted. This would be papered over when the list was
read back from disk in parsePackedRef, which detects unsorted ref
lists on reading, and sorts them. However, the BatchRefUpdate also
installed the new (unsorted) list in-memory in
RefDirectory#packedRefs.
With the timestamp granularity code committed to stable-5.1, we can
more often accurately decide that the packed-refs file is clean, and
will return the erroneous unsorted data more often. Unluckily timed
delays also cause the file to be clean, hence this problem was
exacerbated under load.
The symptom is that refs added by a BatchRefUpdate would stop being
visible directly after they were added. In particular, the Gerrit
integration tests use BatchRefUpdate in their setup for creating the
Admin group, and then try to read it out directly afterward.
The test recreates one failure case. A better approach would be to
revise RefList.Builder, so it detects out-of-order lists and
automatically sorts them.
Fixes https://bugs.eclipse.org/bugs/show_bug.cgi?id=548716 and
https://bugs.chromium.org/p/gerrit/issues/detail?id=11373.
Bug: 548716
Change-Id: I613c8059964513ce2370543620725b540b3cb6d1
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Move the handling of cached user and system config to getSystemConfig
and getUserConfig methods and revert the implementation of
openSystemConfig and openUserConfig to the old stateless
implementation.
This ensures the open methods respect the passed-in parent config, which
may be different on each invocation. Additionally, returning a new
instance matches the behavior of the previous implementation of the
default system reader, which downstream callers may be depending on.
Move the implementation of the new caching methods getSystemConfig and
getUserConfig up to SystemReader. This avoids breaking the ABI for
subclasses of SystemReader.
Also see [1] which fixed a similar problem with Gerrit's custom
SystemReader.
[1] https://gerrit-review.googlesource.com/c/gerrit/+/225458
Change-Id: If54a2491932d8fc914d4649cb73c9e837c5b8ad0
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
This ensures that only one instance of the user config and one instance
of the system config are set.
Change-Id: Idd00150f91d2d40af79499dd7bf8ad5940f87c4e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
deleteChildren was called on directory instead of gitDir, leading to a
potential null pointer exception if the git directory existed initially.
Bug: 550340
Change-Id: Iafc3b2961253a99862a59e81c7371f7bc564b412
Signed-off-by: Adrien Bustany <adrien-xx-eclipse@bustany.org>
Change-Id: I809399e6a71e0079d2f0007b0d3f00b531d451bb
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Ensure we use the same type when comparing seconds since the epoch.
This does not prevent the overflow in 2038 of timestamps stored as
seconds since the epoch in a 32-bit integer. Integer.MAX_VALUE translates
to 2038-01-19T03:14:07Z; after this date we'll have an issue since we
store seconds since the epoch in a 32-bit integer in some places.
Bug: 319142
Change-Id: If0c03003d40b480f044686e2f7a2f62c9f4e2fe1
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
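A quick check of the quoted date (plain Java, purely illustrative):

import java.time.Instant;

class EpochOverflowCheck {
    public static void main(String[] args) {
        // largest value a signed 32-bit epoch-second field can hold
        System.out.println(Instant.ofEpochSecond(Integer.MAX_VALUE));
        // prints 2038-01-19T03:14:07Z
    }
}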
Replace the two int variables smudge_s and smudge_ns by an Instant and
use the new method DirCacheEntry.mightBeRacilyClean(Instant).
Change-Id: Id70adbb0856a64909617acf65da1bae8e2ae934a
Signed-off-by: Michael Keppler <Michael.Keppler@gmx.de>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I5487f3c2609eaf2a0ddf71ebb2f6c9701fb7600c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I3812961a27ac410d610ef50c73a28f21bb05ae79
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Id8447735beb24becb41612d3d29d5351f8273d22
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
So far the user git configuration and the system-wide git configuration
were always reloaded whenever JGit accessed these global configuration
files for options outside the context of a single git repository. Cache
these configurations in SystemReader and only reload them if their file
metadata, observed using FileSnapshot, indicates a modification.
Change-Id: I092fe11a5d95f1c5799273cacfc7a415d0b7786c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
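A hedged sketch of the caching pattern with FileSnapshot (illustrative
only; the real SystemReader caches FileBasedConfig instances rather than
raw file contents):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import org.eclipse.jgit.internal.storage.file.FileSnapshot;

class CachedFileSketch {
    private final File file;
    private FileSnapshot snapshot = FileSnapshot.DIRTY; // forces the first load
    private String contents;

    CachedFileSketch(File file) {
        this.file = file;
    }

    String get() throws IOException {
        if (contents == null || snapshot.isModified(file)) {
            snapshot = FileSnapshot.save(file); // snapshot before reading
            contents = new String(Files.readAllBytes(file.toPath()));
        }
        return contents;
    }
}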
The existing javadoc was copied from another method and not adapted.
Change-Id: I39a7e5d719b2c379de9bd1a4710a55a73700c6f0
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
- fix handling of interrupts in FileStoreAttributes#saveToConfig
- increase retry wait time to 100ms
- don't wait after last retry
- don't retry if the failure is caused by an exception other than
  LockFailedException
Change-Id: I108c012717d2bcce71f2c6cb9cf0879de704ebc2
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
The method org.eclipse.jgit.util.FS.supportsAtomicCreateNewFile()
should default to true, as mentioned in the docs [1].
The method org.eclipse.jgit.util.FS_POSIX.supportsAtomicCreateNewFile()
sets the value to false if the git config option
core.supportsatomiccreatenewfile is not set; it should default to true
if the configuration is undefined.
[1]
https://github.com/eclipse/jgit/blob/4169a95a65683e39e7a6a8f2b11b543e2bc754db/org.eclipse.jgit/src/org/eclipse/jgit/util/FS_POSIX.java#L372
Bug: 544164
Change-Id: I16ccf989a89da2cf4975c200b3228b25ba4c0d55
Signed-off-by: Vishal Devgire <vishaldevgire@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
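A hedged illustration of the intended default using JGit's generic
Config API (not the FS_POSIX code itself):

import org.eclipse.jgit.lib.Config;

class AtomicCreateDefaultSketch {
    // Reads core.supportsatomiccreatenewfile, defaulting to true when unset.
    static boolean supportsAtomicCreateNewFile(Config gitConfig) {
        return gitConfig.getBoolean("core", null, "supportsatomiccreatenewfile", true);
    }
}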
Bug: 547400
Change-Id: Ic3541e360a2968ba3532a3d3fa4828b0d0463c02
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I91cdf1e085a29c0aabd6d22c6ebe848b2d75f42c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
We can't add this method to the super class StoredConfig since that
abstracts from filesystem storage. MockSystemReader.MockConfig is a
StoredConfig and is also used by tests for dfs based storage. Hence
remove this leaky abstraction.
This implies we always use the fallback FileStoreAttributes which means
a config file modification is considered racy within the first 2
seconds. This should not be an issue since configs typically change
rarely and re-reading a config within the racy period is relatively
cheap because configs are small.
Change-Id: Ia2615addc24a7cadf3c566ee842c6f4f07e159a5
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I4de75d12ec9e61193494916307289378cdb6220e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I105b8a05bc64f249879a0795a059958553cc60c6
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Increase the safety factor to 2.5x for extra safety if the maximum of
the measured timestamp resolution and the measured minimal racy threshold
is < 100ms; use 1.25 otherwise, since for large filesystem resolution
values the influence of the finite resolution of the system clock should
be negligible.
Before, not yet using the newly introduced minRacyThreshold measurement,
the threshold was 1.1x FS resolution, and we could issue the
following sequence of events,
start
create-file
read-file (currentTime)
end
which had the following timestamps:
create-file 1564589081998
start 1564589082002
read 1564589082003
end 1564589082004
In this case, the difference between create-file and read is 5ms,
which exceeded the 4ms FS resolution, even though the events together
took just 2ms of runtime.
Reproduce with:
bazel test --runs_per_test=100 \
//org.eclipse.jgit.test:org_eclipse_jgit_internal_storage_file_FileSnapshotTest
The file system timestamp resolution is 4ms in this case.
This code assumes that the kernel and the JVM use the same clock that
is synchronized with the file system clock. This seems plausible,
given the resolution of System.currentTimeMillis() and the latency for
a gettimeofday system call (typically ~1us), but it would be good to
justify this with specifications.
Also cover a source of flakiness: if the test runs under extreme load,
then we could have
start
create-file
<long delay>
read
end
which would register as an unmodified file. Avoid this by skipping the
test if end-start is too big.
[msohn]:
- downported from master to stable-5.1
- skip test if resolution is below 10ms
- adjust safety factor to 1.25 for resolutions above 100ms
Change-Id: I87d2cf035e01c44b7ba8364c410a860aa8e312ef
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Since we now measure file time resolution we can use it to replace the
hard coded wait time of 25ms. FileSnapshot#equals will return true until
the mtime of the old (o) and the new FileSnapshot (n) differ by at least
one file time resolution.
Change-Id: Icb713a80ce9eb929242ed083406bfb6650c72223
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>