This fixes a nasty performance issue for repositories that have many
objects referenced through refs/tags/, but not in refs/heads/.
Situations like this can arise when a project has made releases like
refs/tags/v1.0, and then decides to orphan history and start over for
version 2. The v1.0 objects are not reachable from master anymore,
but are still live due to the v1.0 tag.

When tags are packed in the GC_OTHER pack, bitmaps are not able to
cover the repository's contents. This may cause very slow counting
times during git clone, as the server must enumerate the ancient
history under refs/tags/ to respond to the client.

Clients by default always ask for all tags when asking for all heads
during clone. This has been true since git-core commit 8434c2f1afedb
(Apr 27 2008), when clone was converted to a builtin. Including tags
in the main GC pack should still allow servers to benefit from the
fast full pack reuse path when serving a clone to a client.

Change-Id: I22e29517b5bc6fa3d6b19a19f13bef0c68afdca3
Change-Id: I57d5d4fab7e0b1bd4cf5f1850e8569c8ac5def88
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: Andrey Loskutov <loskutov@gmx.de>
Change-Id: I4b5621bcb4db82e0560408b3cde6f18b0cc55b29
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I6901da975a86c460ce7c783a519669d8be8e23bb
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I59cda76b2c5039e08612f394ee4f7f1788578c49
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Previously it was looking for a keep file with the name of the pack
file (extension included) appended with '.keep'. However, the keep
file name should be the pack file name with a '.keep' extension.

Change-Id: I9dc4c7c393ae20aefa0b9507df8df83610ce4d42
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
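
For illustration, a minimal sketch of the corrected name derivation
(the pack file name is hypothetical, and this is not the actual JGit
code):

    String packName = "pack-34be9032.pack";
    // Broken: appended ".keep" to the full name, extension included.
    String wrong = packName + ".keep"; // pack-34be9032.pack.keep
    // Fixed: replace the ".pack" extension with ".keep".
    String keepName = packName.substring(0,
            packName.length() - ".pack".length()) + ".keep"; // pack-34be9032.keep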
Change-Id: I20754d13007e6591d36aae5766f3a9a82b24e120
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I6b05a6f6c3f92365c272e1bdaf76093ca01f2d58
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Iaa88fe1b195dfe6be99a7b4cb064684e75563715
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* origin/stable-4.5:
  Fix one case of missing object

Change-Id: Ia6384f4be71086d5a0a8c42c7521220f57dfd086
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When a repository is being GCed and a concurrent push is received,
there is the possibility of a missing object. This is due to the fact
that after the list of objects to delete is built, there is a window
of time during which an unreferenced, ready-to-delete object can
become referenced by the incoming push. In that case the object would
still be deleted, because there is no way to know it is no longer
unreferenced. This leaves the repository in an inconsistent state,
and most operations fail with a missing tree/object error.

Since the incoming push updates the last modified date of the
now-referenced object, verify that the object is still a candidate
for deletion before actually performing the delete operation.

Change-Id: Iadcb29b8eb24b0cb4bb9335b670443c138a60787
Signed-off-by: Hector Oswaldo Caballero <hector.caballero@ericsson.com>
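
A minimal sketch of that guard using plain java.nio file APIs (the
class and method names here are hypothetical, not the actual JGit
change):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.FileTime;

    class GcGuard {
        /** Delete a loose object only if it was not touched after the scan. */
        static void deleteIfStillStale(Path looseObject, FileTime scanTime)
                throws IOException {
            FileTime mtime = Files.getLastModifiedTime(looseObject);
            if (mtime.compareTo(scanTime) < 0) {
                Files.deleteIfExists(looseObject); // unreferenced since the scan
            }
            // Otherwise a concurrent push re-referenced it; keep the object.
        }
    }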
We only need the tree id to add it to a TreeWalk, so change the
tree's type to AnyObjectId.

Bug: 509385
Change-Id: I98dd5fef15cd173fe1fd84273f0f48e64e12e608
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
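
A hedged sketch of the kind of call site this enables (listTree is a
hypothetical helper; an ObjectReader and a resolved tree id are
assumed):

    import java.io.IOException;
    import org.eclipse.jgit.lib.AnyObjectId;
    import org.eclipse.jgit.lib.ObjectReader;
    import org.eclipse.jgit.treewalk.TreeWalk;

    static void listTree(ObjectReader reader, AnyObjectId treeId)
            throws IOException {
        try (TreeWalk tw = new TreeWalk(reader)) {
            tw.addTree(treeId); // any AnyObjectId works; no RevTree needed
            while (tw.next())
                System.out.println(tw.getPathString());
        }
    }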
Bug: 509385
Change-Id: I6b6ff5b721d959eb0708003a40c8f97d6826ac46
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: Ic8a82895fa39be73f1bd8427cfe9437be6fc4e3e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: Id5dc40bb3fb9da97ea0795cca1f2bcdcde347767
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: Icfe58ac2e5344546448a55ad14ec082356be968c
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: I30c427f0dd2fc1fceb6b003dfdee0a05efaefca9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: I5f914c910ef3a7583594fb31c7757d3dddf6a05e
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: I4cba81d8ea596800a40799dc9cb763fae01fe508
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: I9d25cf117cfb19df108f5fe281232193fd898474
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: I9fbdfda59f7bc577aab55dc92ff897b00b5cb050
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Bug: 509385
Change-Id: Ic57fd3bf940752229e35102e7761823f7d3d8732
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
FileSnapshot.isModified may have reported a file to be clean although
it was actually dirty.

Imagine you have a FileSnapshot on file f. lastModified and lastRead
are both t0. Now time is t1 and you

1) modify the file
2) update the FileSnapshot of the file (lastModified=t1, lastRead=t1)
3) modify the file again
4) wait 3 seconds
5) ask the FileSnapshot whether the file is dirty or not. It
   erroneously answers that it's clean.

Any file which was last modified more than 2.5 seconds ago was
reported to be clean. As the test shows, that's not always correct.

The real-world problem fixed by this change is the following:
* A Gerrit server using JGit to serve git repositories is processing
  fetch requests while simultaneously a native git garbage collection
  runs on the repo.
* At time t1 native git writes temporary files in the pack folder,
  setting the mtime of the pack folder to t1.
* A fetch request causes JGit to search for new packfiles and JGit
  remembers this scan in a FileSnapshot on the packs folder. Since
  the gc is not finished, JGit doesn't see any new packfiles.
* The fetch is processed and the gc ends while the filesystem timer
  is still at t1. GC writes a new packfile and deletes the old one.
* 3 seconds later another request arrives. JGit does not yet know
  about the new packfile, but it is also not rescanning the pack
  folder, because it cached that the last scan happened at time t1
  and the pack folder's mtime is also t1. Now JGit is unable to
  resolve any object contained in the new pack. This behavior may be
  persistent: if objects referenced by the refs/meta/config branch
  are affected, Gerrit can no longer read the permissions stored in
  that branch and will not allow any pushes. The pack folder will not
  change its mtime, so no rescan will ever take place.

Change-Id: I3efd0ccffeb97b01207dc3e7a6b85c6b06928fad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
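
The five steps compress into a short reproduction sketch (write and
assertTrue are assumed test helpers; FileSnapshot is the class named
above, and the timing is illustrative):

    File f = new File(dir, "f");
    write(f, "a");                            // modify at t1
    FileSnapshot snap = FileSnapshot.save(f); // lastModified = lastRead = t1
    write(f, "b");                            // same coarse filesystem tick
    Thread.sleep(3000);                       // wait longer than 2.5 seconds
    assertTrue(snap.isModified(f));           // erroneously false before the fix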
When the reply is already compressed (e.g. a packfile fetched using
dumb HTTP), "Content-Encoding: gzip" wastes bandwidth relative to
sending the content raw. So don't send "Accept-Encoding: gzip" for
such requests.

Change-Id: Id25702c0b0ed2895df8e9790052c3417d713572c
Signed-off-by: Zhen Chen <czhen@google.com>
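
A sketch of the intent with a plain HttpURLConnection; the
fetchingPackfile flag is a stand-in for the real request-type check:

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (!fetchingPackfile) {
        // Only worthwhile for replies that aren't already compressed.
        conn.setRequestProperty("Accept-Encoding", "gzip");
    }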
The add(long) method was deprecated in favor of addWord(long) in the
0.8.3 release of JavaEWAH [1].

[1] https://github.com/lemire/javaewah/commit/e443cf5e

Change-Id: I89c397ed02e040f57663d04504399dfdc0889626
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
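
The migration is a direct rename with the same semantics; for
example:

    import com.googlecode.javaewah.EWAHCompressedBitmap;

    EWAHCompressedBitmap bitmap = new EWAHCompressedBitmap();
    bitmap.addWord(0xFFL); // formerly bitmap.add(0xFFL), deprecated since 0.8.3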
Fix JGit's merge-base calculation in case of inconsistent commit
times.

JGit was potentially failing to compute correct merge bases when the
commit times were inconsistent (a parent commit was younger than a
child commit). The code in MergeBaseGenerator was aware of the fact
that sometimes the discovery of a merge base x can occur after the
parents of x have been seen (see comment in #carryOntoOne()). But in
the light of inconsistent commit times it was possible that these
parents of a merge base had already been returned as a merge base.

This commit fixes the bug by buffering all commits generated by
MergeBaseGenerator. It is expected that this buffer will stay small
because the number of merge bases will be small. Additionally, a new
flag is used to mark the ancestors of merge bases. This allows
filtering out the unwanted commits.

Bug: 507584
Change-Id: I9cc140b784c3231b972bd2c3de61a789365237ab
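
A heavily simplified sketch of the buffering idea (not the actual
MergeBaseGenerator code; 'walk' and the ancestor flag are assumed to
exist):

    List<RevCommit> buffer = new ArrayList<>();
    RevCommit c;
    while ((c = walk.next()) != null)
        buffer.add(c); // generate everything before returning anything
    // Only now is it known which commits were later marked as
    // ancestors of some merge base; drop them.
    buffer.removeIf(commit -> commit.has(ancestorOfMergeBase));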
This does not address all cases where no message is specified, only
cases where Repository#isValidRefName returns false.

Change-Id: Ib88cdabfdcdf37be0053e06949b0e21ad87a9575
Signed-off-by: Grace Wang <gracewang92@gmail.com>
Change-Id: I900d745195f58c067fadf209bb92cd3c852c59f4
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
TransportHttp sets 'Accept-Encoding: gzip' to allow the server to
compress HTTP responses. When fetching a loose object over HTTP, it
uses the following code to read the response:

    InputStream in = openInputStream(c);
    int len = c.getContentLength();
    return new FileStream(in, len);

If the content is gzipped, openInputStream decompresses it and
produces the correct content for the object. Unfortunately the
Content-Length header contains the length of the compressed stream
instead of the actual content length. Use a length of -1 instead,
since we don't know the actual length.

Loose objects are already compressed, so the gzip encoding typically
produces a longer compressed payload. The value from the
Content-Length header is too high, producing EOFException: Short read
of block.

Change-Id: I8d5284dad608e3abd8217823da2b365e8cd998b0
Signed-off-by: Zhen Chen <czhen@google.com>
Helped-by: Jonathan Nieder <jrn@google.com>
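
The fix as described above, in the same shape as the original
snippet, with -1 meaning "length unknown":

    InputStream in = openInputStream(c);
    return new FileStream(in, -1); // length unknown once the reply is gunzipped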
The InputStream in FileStream in downloadPack is never closed.

Change-Id: I59975d0b8d51f4b3e3ba9d4496b254d508cb936d
Signed-off-by: Zhen Chen <czhen@google.com>
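
One common shape for this kind of fix (illustrative only; the actual
change may wire close() through differently):

    try (InputStream in = openInputStream(c)) {
        parsePack(in); // consume fully before the stream is closed
    } // in.close() now guaranteed on every path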
MonotonicClock can be implemented to provide more certainty about
time than the standard System.currentTimeMillis() can provide. This
can be used by classes such as PersonIdent and Ketch to rely on more
certainty about time moving in a strictly ascending order.

Gerrit Code Review can also leverage this interface through its
embedding of JGit and use MonotonicClock and ProposedTimestamp to
provide stronger assurance that NoteDb time is moving forward.

Change-Id: I1a3cbd49a39b150a0d49b36d572da113ca83a786
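
The idea can be illustrated without the JGit API: derive readings
from System.nanoTime(), which never runs backwards within a JVM,
instead of the adjustable wall clock (a sketch, not the MonotonicClock
implementation):

    final class MonotonicMillis {
        private static final long BASE_MS = System.currentTimeMillis();
        private static final long BASE_NS = System.nanoTime();

        /** Milliseconds guaranteed not to decrease between calls. */
        static long now() {
            return BASE_MS + (System.nanoTime() - BASE_NS) / 1_000_000L;
        }
    }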
Use the Oxygen M3 Orbit repository, which provides the bundles built
using the new orbit-recipe based build.

CQ: 11658
Change-Id: I7f3dcc966732b32830c75d5daa55383bd028d182
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: Iaf83f66637d6a13e4a6d096ba8529553af7e30ed
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Change-Id: I46c39f2eceb4d58e49bd6273b87711f35250ab5c
The type parameters can now be inferred when creating
ConcurrentHashMap.

A for loop over the keys of a ConcurrentHashMap doesn't need to use
an Iterator<Map.Entry>; for-each loop syntax over keySet() handles
this just fine.

Change-Id: I1f85bb81b77f7cd1caec77197f2f0bf78e4a82a1
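
For example (the map contents are hypothetical):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    Map<String, Integer> refs = new ConcurrentHashMap<>(); // diamond inferred

    for (String name : refs.keySet()) { // no explicit Iterator<Map.Entry>
        System.out.println(name + " -> " + refs.get(name));
    }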
Change-Id: I585242507815aad4aa0103fd55a6c369e342ab33
Java 8 fixed the silent flush during close issue: FilterOutputStream
(the base class of BufferedOutputStream) now uses try-with-resources
to close the stream, giving a behavior that matches what JGit's
SafeBufferedOutputStream was doing:

    try {
        flush();
    } finally {
        out.close();
    }

With Java 8 as the minimum required version to run JGit, it is no
longer necessary to override close() or to have this class. Deprecate
the class, and use the JRE's version of close.

Change-Id: Ic0584c140010278dbe4062df2e71be5df9a797b3
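
So a plain BufferedOutputStream is now safe: a flush() failure during
close() still closes the underlying stream. A minimal usage sketch
('path' and 'data' are assumed to be in scope):

    try (OutputStream out = new BufferedOutputStream(
            Files.newOutputStream(path))) {
        out.write(data); // close() flushes; exceptions are no longer swallowed
    }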
This simplifies testing for Gerrit Code Review where application code
is updating the repository description and the test harness uses
InMemoryRepository.

Change-Id: I9fbcc028ae24d90209a862f5f4f03e46bfb71db0
Change-Id: I7c545d06b1bced678c020fab9af1382bc4416b6e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
This method pair allows the caller to read and modify the description
file that is traditionally used by gitweb and cgit when rendering a
repository on the web.

Gerrit Code Review has offered this feature for years as part of its
GitRepositoryManager interface, but it's fundamentally a feature of
JGit and its Repository abstraction.

git-core typically initializes a repository with a default value
inside the description file. During getDescription() this string is
converted to null, as it is never a useful description.

Change-Id: I0a457026c74e9c73ea27e6f070d5fbaca3439be5
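
A hedged usage sketch of the method pair, assuming a Repository
instance 'repo' (method names as used in this message):

    String desc = repo.getDescription(); // null if unset or the git-core default
    if (desc == null)
        repo.setDescription("Short text for gitweb/cgit listings");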
ObjectId is serializable, and so are its subtypes. Ensure that
serialization does not follow the hash collision chain internal to
the ObjectIdOwnerMap; otherwise completely unrelated objects may get
serialized when a RevObject is serialized.

Note that serializing a RevCommit or RevTag may still serialize quite
a few objects due to the parent/object links they contain. A user has
no real control over how many objects will be written when a
RevCommit is serialized. Cf. [1]. This change does not resolve that,
but in any case the internal hash collision chain link should not
participate in serialization.

[1] https://github.com/gitblit/gitblit/pull/1141

Change-Id: Ice331a9dc80a59ca360fcc04adaff8b5e750d847
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
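
The gist of the fix, sketched with illustrative names (not JGit's
actual fields):

    import java.io.Serializable;

    class Entry implements Serializable {
        private static final long serialVersionUID = 1L;
        // Hash-collision chain link: excluded from serialization, or every
        // unrelated entry sharing the bucket would be dragged into the stream.
        transient Entry next;
        final byte[] id = new byte[20]; // the object id itself is serialized
    }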
Change-Id: I982a78070efb6bc2d3395330456d62e0d5ce6da7
Signed-off-by: Philipp Marx <smigfu@googlemail.com>
I forgot to do this in 97f3baa0d3df7ed26a55b2240cc5ce1a04861a4c
(StreamCopyThread: Remove unnecessary flushCount, 2016-11-13).

Change-Id: Iaed9f345848cf0f854c9d0debcf94bc831d53054
Bazel runs ErrorProne by default, and ErrorProne rightly complains
that allowing the user to specify any Cipher can lead to insecure
code (in particular, getCipher("AES") operates in ECB mode).
Unfortunately this is required to support existing repositories
insecurely stored on S3.

Extract the insecure factory code to its own class so this can be
built as a java_library() with this check disabled.

Change-Id: I34f381965bdaa25d5aa8ebf6d8d5271b238334e0
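
The complaint stems from the JCA default: a bare "AES" transformation
resolves to ECB mode. For example:

    import javax.crypto.Cipher;

    Cipher insecure = Cipher.getInstance("AES");                  // AES/ECB/PKCS5Padding
    Cipher explicit = Cipher.getInstance("AES/CBC/PKCS5Padding"); // mode spelled out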