Wim Jongman [Fri, 6 Jan 2017 01:27:43 +0000 (02:27 +0100)]
Normalizer creating a valid branch name from a string
Generic normalization method for a possibly invalid branch name.
The method compresses dividers between spaces, then replaces spaces
and non-word characters with underscores.
This method is needed in preparation for subsequent EGit changes.
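A minimal sketch of the described normalization (method name and exact
character classes are illustrative, not the actual JGit API):

  static String normalizeBranchName(String name) {
    if (name == null)
      return "";
    // Compress dividers surrounded by spaces: "topic - fix" -> "topic-fix".
    String s = name.trim().replaceAll("\\s*([/_-])\\s*", "$1");
    // Replace spaces and remaining non-word characters with underscores.
    return s.replaceAll("[^\\w/]+", "_");
  }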
Thomas Wolf [Sun, 15 Jan 2017 20:35:50 +0000 (21:35 +0100)]
Fix StashApplyCommand for stashes containing untracked changes.
If there are untracked changes, apply only the untracked tree
after a successful merge. The merge tree from merging untracked
with HEAD would also contain files already reset before (changes
in tracked files) and would try to reset those again, leading to
false checkout conflicts.
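For reference, a stash containing untracked files is applied through this
API (usage sketch; the repository path is illustrative):

  try (Git git = Git.open(new File("/path/to/repo"))) {
    // Applies both the tracked and the untracked trees of the stash.
    git.stashApply().setStashRef("stash@{0}").call();
  }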
Bug: 505804
Change-Id: Iaced4d277623334d11e3d1cca5969590d7c5093e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Marc Strapetz [Tue, 10 Jan 2017 15:28:54 +0000 (16:28 +0100)]
Fix possible InvalidObjectIdException in ObjectDirectory
ObjectDirectory.getShallowCommits should throw an IOException
instead of an InvalidObjectIdException if invalid SHAs are present
in .git/shallow (as this file is usually edited by a human).
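A sketch of the intended behavior; the helper name is illustrative, while
ObjectId.fromString is the real JGit parser whose InvalidObjectIdException
extends IllegalArgumentException:

  static Set<ObjectId> readShallowCommits(Path shallowFile) throws IOException {
    Set<ObjectId> shallow = new HashSet<>();
    for (String line : Files.readAllLines(shallowFile)) {
      try {
        shallow.add(ObjectId.fromString(line.trim()));
      } catch (IllegalArgumentException badSha) {
        // A hand-edited file is user input, not a programming error.
        throw new IOException("Invalid SHA in .git/shallow: " + line, badSha);
      }
    }
    return shallow;
  }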
Change-Id: Ia3a39d38f7aec4282109c7698438f0795fbec905
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Zhen Chen [Sat, 14 Jan 2017 06:02:37 +0000 (22:02 -0800)]
Skip pack header bytes in DfsPackFile
The 12-byte `PACK...` header is written by PackWriter before reading
CachedPack files. In DfsPackFile#copyPackBypassCache, the header was not
skipped when the first block was not in the cache.
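The header in question has a fixed 12-byte layout; a sketch of reading and
validating it (illustrative helper, not the DfsPackFile code):

  // "PACK" magic, 4-byte version, 4-byte object count = 12 bytes total.
  static void skipPackHeader(DataInputStream in) throws IOException {
    byte[] magic = new byte[4];
    in.readFully(magic);
    if (magic[0] != 'P' || magic[1] != 'A' || magic[2] != 'C' || magic[3] != 'K')
      throw new IOException("Not a pack stream");
    int version = in.readInt();     // typically 2
    int objectCount = in.readInt(); // entries following the header
  }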
David Pursehouse [Fri, 13 Jan 2017 01:41:27 +0000 (10:41 +0900)]
LfsProtocolServlet: Improve error on getLargeFileRepository failure
Externalize the error message and make it more specific. Also emit
an error log.
Change-Id: Ie7b1c90c54673bfb8e133fb0aa19a117d4ca6587
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
David Pursehouse [Fri, 13 Jan 2017 01:08:29 +0000 (10:08 +0900)]
Add support for refusing LFS request due to invalid authorization
Add a new exception type that server implementations can throw when a
client attempts to make an unauthorized LFS operation, which will result
in HTTP 401 Unauthorized being returned to the client.
An example of this is a Gerrit server that rejects a request to perform
an LFS operation on a ref that is not visible to the caller.
As defined in the LFS spec [1] the request may include authentication,
and per RFC 2616 [2], "401 response indicates that authorization has been
refused for those credentials".
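A sketch of such an exception type (class and constructor are assumptions;
the servlet maps it to HTTP 401):

  public class LfsUnauthorized extends Exception {
    private static final long serialVersionUID = 1L;

    public LfsUnauthorized(String operation, String name) {
      super("Unauthorized LFS " + operation + " on " + name);
    }
  }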
Shawn Pearce [Tue, 3 Jan 2017 22:23:57 +0000 (14:23 -0800)]
Pack refs/tags/ with refs/heads/
This fixes a nasty performance issue for repositories that have many
objects referenced through refs/tags/, but not in refs/heads/.
Situations like this can arise when a project has made releases like
refs/tags/v1.0, and then decides to orphan history and start over for
version 2. The v1.0 objects are not reachable from master anymore,
but are still live due to the v1.0 tag.
When tags are packed in the GC_OTHER pack, bitmaps are not able to
cover the repository's contents. This may cause very slow counting
times during git clone, as the server must enumerate the ancient
history under refs/tags/ to respond to the client.
Clients by default always ask for all tags when asking for all heads
during clone. This has been true since git-core commit 8434c2f1afedb
(Apr 27 2008), when clone was converted to a builtin. Including tags
in the main GC pack should still allow servers to benefit from the
fast full pack reuse path when serving a clone to a client.
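Conceptually the change widens the predicate selecting which refs seed the
main GC pack; a simplified sketch:

  // Before: only refs/heads/ seeded the main pack; now refs/tags/ does too.
  static boolean packWithHeads(Ref ref) {
    String name = ref.getName();
    return name.startsWith("refs/heads/") || name.startsWith("refs/tags/");
  }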
James Melvin [Tue, 27 Dec 2016 21:08:56 +0000 (14:08 -0700)]
Fix keep pack filename
Previously it was looking for a keep file with the name of a pack file
(extension included) appended with '.keep'. However, the keep file
name should be the pack file name with a '.keep' extension.
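In other words (sketch; the helper name is illustrative):

  // pack-1234.pack -> pack-1234.keep, not pack-1234.pack.keep
  static String keepFileName(String packFileName) {
    int dot = packFileName.lastIndexOf('.');
    return packFileName.substring(0, dot) + ".keep";
  }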
Change-Id: I9dc4c7c393ae20aefa0b9507df8df83610ce4d42
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
Matthias Sohn [Sun, 18 Dec 2016 09:37:47 +0000 (10:37 +0100)]
Update maven-source-plugin to 3.0.1 to fix OOM during build
Recently we frequently suffer from OutOfMemoryError when creating source
archives in the Maven build. maven-source-plugin 3.0.0 has a bug [1]
causing OOM which is fixed in 3.0.1.
When a repository is being GCed and a concurrent push is received, there
is the possibility of having a missing object. This is due to the fact
that after the list of objects to delete is built, there is a window of
time when an unreferenced and ready to delete object can be referenced
by the incoming push. In that case, the object would be deleted because
there is no way to know it is no longer unreferenced. This will leave
the repository in an inconsistent state and most of the operations fail
with a missing tree/object error.
Since the incoming push updates the last modified date of the now
referenced object, verify that it is still a candidate for deletion
before actually performing the delete operation.
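A minimal sketch of that guard, with illustrative names:

  static void deleteIfStillExpired(Path looseObject, Instant expireBefore)
      throws IOException {
    Instant mtime = Files.getLastModifiedTime(looseObject).toInstant();
    if (mtime.isBefore(expireBefore))
      Files.deleteIfExists(looseObject);
    // else: an incoming push re-referenced the object; keep it.
  }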
FileSnapshot.isModified may have reported a file to be clean although it
was actually dirty.
Imagine you have a FileSnapshot on file f. lastModified and lastRead are
both t0. Now time is t1 and you
1) modify the file
2) update the FileSnapshot of the file (lastModified=t1, lastRead=t1)
3) modify the file again
4) wait 3 seconds
5) ask the FileSnapshot whether the file is dirty or not. It erroneously
answered it's clean.
Any file which has been modified more than 2.5 seconds ago was
reported to be clean. As the test shows, that's not always correct.
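The sequence above as a test-style sketch (write() is a hypothetical helper
that writes content to the file):

  File f = new File("f");
  write(f, "a");                                // t1: modify the file
  FileSnapshot snapshot = FileSnapshot.save(f); // t1: lastModified = lastRead
  write(f, "ab");                               // t1: modify again
  Thread.sleep(3000);                           // wait 3 seconds
  assertTrue(snapshot.isModified(f));           // was erroneously false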
The real-world problem fixed by this change is the following:
* A gerrit server using JGit to serve git repositories is processing
fetch requests while simultaneously a native git garbage collection
runs on the repo.
* At time t1 native git writes temporary files in the pack folder
setting the mtime of the pack folder to t1.
* A fetch request causes JGit to search for new packfiles and JGit
remembers this scan in a FileSnapshot on the pack folder. Since the gc
is not finished JGit doesn't see any new packfiles.
* The fetch is processed and the gc ends while the filesystem timer is
still t1. GC writes a new packfile and deletes the old packfile.
* 3 seconds later another request arrives. JGit does not yet know about
the new packfile but is also not rescanning the pack folder because it
cached that the last scan happened at time t1 and pack folder's mtime is
also t1. Now JGit will not be able to resolve any object contained in
this new pack. This behavior may be persistent if objects referenced by
the refs/meta/config branch are affected: Gerrit can't read the
permissions stored in the refs/meta/config branch anymore and will not
allow any pushes. The pack folder will not change its mtime and therefore
no rescan will take place.
Change-Id: I3efd0ccffeb97b01207dc3e7a6b85c6b06928fad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Zhen Chen [Wed, 23 Nov 2016 00:39:27 +0000 (16:39 -0800)]
Decide whether to "Accept-Encoding: gzip" on a request-by-request basis
When the reply is already compressed (e.g. a packfile fetched using dumb
HTTP), "Content-Encoding: gzip" wastes bandwidth relative to sending the
content raw. So don't "Accept-Encoding: gzip" for such requests.
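Sketched against java.net (the real code lives in TransportHttp; the helper
and flag are illustrative):

  static void configureAcceptEncoding(HttpURLConnection conn,
      boolean bodyAlreadyCompressed) {
    // A packfile over dumb HTTP is already deflated; gzip would only add
    // overhead, so advertise it only for other requests.
    if (!bodyAlreadyCompressed)
      conn.setRequestProperty("Accept-Encoding", "gzip");
  }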
Jacek Centkowski [Thu, 24 Nov 2016 09:58:14 +0000 (10:58 +0100)]
Expose getObjectToTransfer method of FileLfsServlet
Providing one's own implementation of the doGet/doPut methods is
troublesome when this method is private.
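With the method exposed, a subclass can reuse it; a sketch under assumed
signatures (the subclass and logging are purely illustrative):

  public class LoggingLfsServlet extends FileLfsServlet {
    public LoggingLfsServlet(FileLfsRepository repo, long timeout) {
      super(repo, timeout);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse rsp)
        throws ServletException, IOException {
      AnyLongObjectId id = getObjectToTransfer(req, rsp);
      if (id != null)
        log("serving LFS object " + id.name());
      super.doGet(req, rsp); // default transfer behavior
    }
  }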
Change-Id: I098cdc5cb90410eaaebc56c88c2d9e168584dd6d
Signed-off-by: Jacek Centkowski <geminica.programs@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Fix JGit's merge-base calculation in case of inconsistent commit times.
JGit was potentially failing to compute correct merge-bases when the
commit times were inconsistent (a parent commit was younger than a
child commit). The code in MergeBaseGenerator was aware of the fact that
sometimes the discovery of a merge base x can occur after the parents of
x have been seen (see comment in #carryOntoOne()). But in the light of
inconsistent commit times it was possible that these parents of a
merge-base have already been returned as a merge-base.
This commit fixes the bug by buffering all commits generated by
MergeBaseGenerator. It is expected that this buffer will be small
because the number of merge-bases will be small. Additionally a new
flag is used to mark the ancestors of merge-bases. This allows filtering
out the unwanted commits.
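Merge bases are computed through RevWalk, which drives MergeBaseGenerator
internally; a usage example (repo, tipA and tipB assumed to exist):

  try (RevWalk rw = new RevWalk(repo)) {
    rw.setRevFilter(RevFilter.MERGE_BASE);
    rw.markStart(rw.parseCommit(tipA));
    rw.markStart(rw.parseCommit(tipB));
    RevCommit mergeBase = rw.next(); // now correct even with skewed times
  }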
David Pursehouse [Fri, 18 Nov 2016 01:30:27 +0000 (17:30 -0800)]
Enable error-prone for the Maven build
- update maven-compiler-plugin to 3.6.0
- exclude InsecureCipherFactory from errorprone checks
See
https://maven.apache.org/guides/mini/guide-default-execution-ids.html#Example:_Configuring_compile_to_run_twice
https://groups.google.com/d/topic/error-prone-discuss/pzT45ZMCQOc/discussion
Change-Id: Ic16d3f15cf2ef40de62fe6bfe4b8b35f0c1edc4e
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Zhen Chen [Tue, 22 Nov 2016 23:53:06 +0000 (15:53 -0800)]
dumb HTTP: Avoid being confused by Content-Length of a gzipped stream
TransportHttp sets 'Accept-Encoding: gzip' to allow the server to
compress HTTP responses. When fetching a loose object over HTTP, it
uses the following code to read the response:
  InputStream in = openInputStream(c);
  int len = c.getContentLength();
  return new FileStream(in, len);
If the content is gzipped, openInputStream decompresses it and produces
the correct content for the object. Unfortunately the Content-Length
header contains the length of the compressed stream instead of the
actual content length. Use a length of -1 instead since we don't know
the actual length.
Loose objects are already compressed, so the gzip encoding typically
produces a longer compressed payload. The value from the Content-Length
is too high, producing EOFException: Short read of block.
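In terms of the snippet above, the fix is essentially:

  InputStream in = openInputStream(c);
  // Content-Length described the compressed stream; after transparent gzip
  // decoding the true length is unknown.
  return new FileStream(in, -1);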
Change-Id: I8d5284dad608e3abd8217823da2b365e8cd998b0
Signed-off-by: Zhen Chen <czhen@google.com>
Helped-by: Jonathan Nieder <jrn@google.com>
Zhen Chen [Tue, 22 Nov 2016 01:15:15 +0000 (17:15 -0800)]
Fix content length in HttpClientConnection
Per the interface specification, the getContentLength method should
return -1 if content length is unknown or greater than
Integer.MAX_VALUE.
For chunked transfer encoding the content length is not included in the
header, which caused a NullPointerException when trying to parse the
Content-Length header.
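A sketch of the corrected accessor, assuming an Apache HttpComponents
response underneath (field name illustrative):

  public int getContentLength() {
    Header contentLength = resp.getFirstHeader("content-length");
    if (contentLength == null)
      return -1; // e.g. chunked transfer encoding
    long len = Long.parseLong(contentLength.getValue());
    return len > Integer.MAX_VALUE ? -1 : (int) len;
  }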
Shawn Pearce [Wed, 16 Nov 2016 01:34:07 +0000 (17:34 -0800)]
Define MonotonicClock interface for advanced timestamps
MonotonicClock can be implemented to provide more certainty about
time than the standard System.currentTimeMillis() can provide. This
can be used by classes such as PersonIdent and Ketch to rely on
more certainty about time moving in a strictly ascending order.
Gerrit Code Review can also leverage this interface through its
embedding of JGit and use MonotonicClock and ProposedTimestamp to
provide stronger assurance that NoteDb time is moving forward.
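A toy illustration of the strictly-ascending property (not the real
interface, which works with proposed timestamps):

  class AscendingClock {
    private final AtomicLong last = new AtomicLong();

    long nowMillis() {
      // Never return a value <= a previously returned one, even if the
      // system clock steps backwards.
      long now = System.currentTimeMillis();
      return last.updateAndGet(prev -> Math.max(prev + 1, now));
    }
  }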
Shawn Pearce [Sun, 13 Nov 2016 19:24:41 +0000 (11:24 -0800)]
Deprecate SafeBufferedOutputStream
Java 8 fixed the silent flush-during-close issue: FilterOutputStream
(the base class of BufferedOutputStream) now uses try-with-resources
to close the stream, giving the behavior that JGit's
SafeBufferedOutputStream provided:
  try {
    flush();
  } finally {
    out.close();
  }
With Java 8 as the minimum required version to run JGit
it is no longer necessary to override close() or have
this class. Deprecate the class, and use the JRE's version
of close.
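So on Java 8 a plain BufferedOutputStream suffices (usage sketch):

  try (OutputStream out = new BufferedOutputStream(
      Files.newOutputStream(path))) {
    out.write(data);
  } // close() flushes first; a flush failure is no longer swallowed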
Shawn Pearce [Mon, 14 Nov 2016 21:59:59 +0000 (13:59 -0800)]
Support {get,set}GitwebDescription on InMemoryRepository
This simplifies testing for Gerrit Code Review where
application code is updating the repository description
and the test harness uses InMemoryRepository.
Shawn Pearce [Mon, 14 Nov 2016 19:00:34 +0000 (11:00 -0800)]
Add {get,set}GitwebDescription to Repository
This method pair allows the caller to read and modify the description
file that is traditionally used by gitweb and cgit when rendering a
repository on the web.
Gerrit Code Review has offered this feature for years as part of
its GitRepositoryManager interface, but it's fundamentally a feature
of JGit and its Repository abstraction.
git-core typically initializes a repository with a default value
inside the description file. During getDescription() this string
is converted to null as it is never a useful description.
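Usage sketch of the new accessor pair on an open Repository:

  repository.setGitwebDescription("My project");
  String desc = repository.getGitwebDescription();
  // desc is null when only git-core's default placeholder is present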
Shawn Pearce [Sun, 13 Nov 2016 19:52:12 +0000 (11:52 -0800)]
Extract insecure Cipher factory
Bazel runs ErrorProne by default and ErrorProne rightly complains that
allowing the user to specify any Cipher can lead to insecure code
(in particular, getCipher("AES") operates in ECB mode). Unfortunately
this is required to support existing repositories insecurely stored
on S3.
Extract the insecure factory code to its own class so this can be built
as a java_library() with this check disabled.
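The extracted class reduces to something like this sketch:

  class InsecureCipherFactory {
    static Cipher create(String algo)
        throws NoSuchAlgorithmException, NoSuchPaddingException {
      // e.g. "AES" resolves to ECB mode, hence "insecure".
      return Cipher.getInstance(algo);
    }
  }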
Jonathan Nieder [Sun, 13 Nov 2016 21:16:48 +0000 (13:16 -0800)]
StreamCopyThread: Remove unnecessary flushCount
StreamCopyThread#run consistently interrupts itself whenever it
discovers it has been interrupted by StreamCopyThread#flush while not
reading. The flushCount is not needed to avoid lost flushes.
None of the in-tree users of StreamCopyThread ever flush. As a nice side
benefit, this avoids the expense of atomic operations that have no
purpose for those users.
Hugo Arès [Wed, 9 Nov 2016 13:49:41 +0000 (08:49 -0500)]
Get rid of SoftReference in RepositoryCache
Now that RepositoryCache has a time based eviction strategy, get rid
of the strategy to evict cache entries if heap memory is running low,
i.e. soft references. The main reason why time based eviction was
implemented was to offer an alternative to the unpredictable soft
references.
Relying on soft references does not work, especially with a large heap.
The JVM GC will consider collecting soft references only as a last
resort before throwing an out-of-memory error. For example, for an
application like Gerrit configured with a 128GB heap, the GC will wait
until all 128GB are filled before collecting the soft references, by
which time the application will already have been suffering long GC
pauses for a long time. In other words, you will have to restart the
application because it becomes unusable before JVM eviction kicks in.
Keeping the SoftReference in RepositoryCache causes more harm than
good. If you use the time based eviction (which is the default strategy)
and want to tune the JVM to release soft references more aggressively,
it will release repositories from the cache even though they are not
expired, which defeats the purpose of the repository cache.
Gerrit uses the Lucene library, which uses soft references, and this
causes a "memory leak" unless you configure the JVM to release soft
references more aggressively, which has the nasty side effect of
evicting non-expired repositories from the cache.
Change-Id: I9940bd800464c7f007696d0ccde52ea617b2ebce
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Shawn Pearce [Sun, 13 Nov 2016 01:06:39 +0000 (17:06 -0800)]
Switch JSchSession to simple isolated OutputStream
Work around issues with JSch not handling interrupts by
isolating the JSch interactions onto another thread.
Run write and flush on a single threaded Executor using
simple Callable operations wrapping the method calls,
waiting on the future to determine the outcome before
allowing the caller to continue.
If any operation was interrupted the state of the stream
becomes fuzzy at close time. The implementation tries to
interrupt the pending write or flush, but this is very
likely to corrupt the stream object, so exceptions are
ignored during such a dirty close.
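A condensed sketch of the pattern (the real class needs more careful
shutdown handling):

  final class IsolatedOutputStream extends OutputStream {
    private final ExecutorService exec = Executors.newSingleThreadExecutor();
    private final OutputStream dst;

    IsolatedOutputStream(OutputStream dst) { this.dst = dst; }

    @Override
    public void write(int b) throws IOException {
      execute(() -> { dst.write(b); return null; });
    }

    @Override
    public void flush() throws IOException {
      execute(() -> { dst.flush(); return null; });
    }

    private void execute(Callable<Void> task) throws IOException {
      try {
        // The caller waits on the future; an interrupt hits this wait,
        // not the thread doing the actual JSch I/O.
        exec.submit(task).get();
      } catch (InterruptedException | ExecutionException e) {
        throw new IOException(e);
      }
    }
  }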
Philipp Marx [Fri, 11 Nov 2016 09:43:09 +0000 (10:43 +0100)]
Check that DfsBlockCache#blockSize is a power of 2
In case a value is used which isn't a power of 2, there will be a high
chance of java.lang.ArrayIndexOutOfBoundsException and
org.eclipse.jgit.errors.CorruptObjectException due to a mismatching
assumption for the DfsBlockCache#blockSizeShift parameter.
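The added validation amounts to a power-of-two check (sketch):

  static void checkBlockSize(int blockSize) {
    if (blockSize <= 0 || (blockSize & (blockSize - 1)) != 0)
      throw new IllegalArgumentException(
          "blockSize must be a power of 2: " + blockSize);
  }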
Change-Id: Ib348b3704edf10b5f93a3ffab4fa6f09cbbae231
Signed-off-by: Philipp Marx <smigfu@googlemail.com>
Matthias Sohn [Mon, 7 Nov 2016 21:31:10 +0000 (22:31 +0100)]
Fix loop in auto gc
* GC.tooManyLooseObjects() always responded true since the loop failed
to advance the iterator, so the count always incremented until the
threshold was exceeded.
* Also fix the loop exit criterion, which was off by 1.
* Add some tests.
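The corrected loop, schematically:

  static boolean tooManyLooseObjects(Iterator<?> loose, int threshold) {
    int n = 0;
    while (loose.hasNext()) {
      loose.next();          // the old loop never advanced here
      if (++n >= threshold)  // the old exit criterion was off by 1
        return true;
    }
    return false;
  }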
Change-Id: I70976dfaa026efbcf3c46bd45941f37277a18e04
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Jonathan Nieder [Fri, 4 Nov 2016 22:59:32 +0000 (15:59 -0700)]
StreamCopyThread: Do not drop data when flush is observed before writing
StreamCopyThread.flush was introduced in 61645b938bc934fda3b0624c5bac1e3495634750 (Add timeouts to smart
transport clients, 2009-06-19) to support timeouts on write in JSch.
The commit message from that change explains:
JSch made a timeout on write difficult because they explicitly do
a catch for InterruptedException inside of their OutputStream. We
have to work around that by creating an additional thread that just
shuttles data between our own OutputStream and the real JSch stream.
The code that runs on that thread is structured as follows:
  while (!done) {
    int n = src.read(buf);
    dst.write(buf, 0, n);
  }
with src being a PipedInputStream representing the data to be written
to JSch. To add flush support, that change wanted to add an extra step
  if (wantFlush)
    dst.flush();
but to handle the case where the thread is blocked in the read() call
waiting for new input, it needs to interrupt the read. So that is how
it works: the caller runs

  pipeOut.write(data);
  pipeOut.flush();
  copyThread.flush();

to write some data and force it to flush by interrupting the read.
After the pipeOut.flush(), the StreamCopyThread reads the data that was
written and prepares to copy it out. If the copyThread.flush() call
interrupts the copyThread before it acquires writeLock and starts
writing, we throw away the data we just read to fulfill the flush.
Oops.
Noticed during the review of e67d59df3fb8ae1efe94a54efce36784f7cc83e8
(StreamCopyThread: Do not let flush interrupt a write, 2016-11-04),
which introduced this bug.
Jonathan Nieder [Fri, 4 Nov 2016 18:48:38 +0000 (11:48 -0700)]
StreamCopyThread: Do not let flush interrupt a write
flush calls interrupt() to interrupt a pending read and trigger a
flush. Unfortunately that interrupt() call can also interrupt a
pending write, putting JSch in a bad state and triggering "Short read
of block" errors. Add locking to ensure the flush only interrupts
reads as intended.
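Schematically the locking looks like this (simplified; writeLock follows
the description above):

  private final Object writeLock = new Object();

  void flush() {
    synchronized (writeLock) {
      interrupt(); // can only land while the copy loop is in read()
    }
  }

  // in the copy loop:
  int n = src.read(buf); // interruptible
  synchronized (writeLock) {
    dst.write(buf, 0, n); // flush() cannot interrupt this write
  }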
Zhen Chen [Mon, 31 Oct 2016 21:27:36 +0000 (14:27 -0700)]
Fix flush call race condition in StreamCopyThread
If a new flush() call arrived while previous bytes were still being
flushed, we need to catch it in order to process the new bytes between
the two flush() calls instead of falling into the last catch IOException
clause and ending the thread.
Thomas Wolf [Mon, 24 Oct 2016 07:29:30 +0000 (09:29 +0200)]
Don't serialize internal hash collision chain link
ObjectId is serializable, and so are its subtypes. Ensure that
serialization does not follow the hash collision chain internal to the
ObjectIdOwnerMap, otherwise completely unrelated objects may get
serialized when a RevObject is serialized.
Note that serializing a RevCommit or RevTag may serialize quite a few
objects due to the parent/object links they contain. A user has no real
control over how many objects will be written when a RevCommit is
serialized. Cf. [1]. This change does not resolve that, but in any case
this internal hash collision chain link should not participate in
serialization.
[1] https://github.com/gitblit/gitblit/pull/1141
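The essence of the fix is to exclude the intrusive chain pointer from
serialization; schematically (field name is an assumption):

  public abstract static class Entry extends ObjectId {
    // Collision-chain link internal to ObjectIdOwnerMap; transient so that
    // serializing one entry does not drag in unrelated map neighbors.
    transient Entry next;
  }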
Change-Id: Ice331a9dc80a59ca360fcc04adaff8b5e750d847
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>