SpotBugs [1] is the spiritual successor of FindBugs, carrying on from
the point where it left off with the support of its community.
This is a backport of [1], which originally did the replacement on the
master branch. This change updates to the current latest version, so
that we can get the benefit of its checks when pushing changes to the
stable branches.
On a local non-NFS filesystem the .git/config file will be orphaned if
it is replaced by a new process while the current process is reading the
old file. The current process successfully continues to read the
orphaned file until it closes the file handle.
Since NFS servers do not keep track of open files, such a replacement
on an NFS filesystem will not orphan the old .git/config file; it will
instead be garbage collected (deleted). A stale
file handle exception will be raised on NFS clients if the file is
garbage collected (deleted) on the server while it is being read. Since
we no longer have access to the old file in these cases, the previous
code would just fail. However, in these cases, reopening the file and
rereading it will succeed (since it will open the new replacement file).
Since retrying the read is a viable strategy to deal with stale file
handles on the .git/config file, implement such a strategy.
Since it is possible that the .git/config file could be replaced again
while rereading it, loop on stale file handle exceptions, up to 5 extra
times, trying to read the .git/config file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary and provides a safe upper bound to prevent an infinite loop
from consuming resources in an unforeseen persistent error
condition.
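A minimal sketch of the retry loop, assuming JGit's IO.readFully and
the FileUtils.isStaleFileHandle helper introduced alongside this change
(the constant and method names here are illustrative, not the actual
FileBasedConfig code):

  import java.io.File;
  import java.io.FileNotFoundException;
  import java.io.IOException;
  import org.eclipse.jgit.util.FileUtils;
  import org.eclipse.jgit.util.IO;

  class StaleRetryExample {
    private static final int MAX_STALE_RETRIES = 5; // arbitrary upper bound

    static byte[] readConfigWithRetries(File configFile) throws IOException {
      IOException lastStale = null;
      // One initial attempt plus up to 5 extra tries on stale handles.
      for (int attempt = 0; attempt <= MAX_STALE_RETRIES; attempt++) {
        try {
          // Reopens the file on every attempt, so a replaced
          // .git/config is picked up by the next read.
          return IO.readFully(configFile);
        } catch (FileNotFoundException gone) {
          throw gone; // the file no longer exists; nothing to retry
        } catch (IOException e) {
          if (FileUtils.isStaleFileHandle(e)) {
            lastStale = e; // replaced mid-read; reopen and reread
            continue;
          }
          throw e;
        }
      }
      throw lastStale; // persistently stale after all extra attempts
    }
  }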
Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626 Signed-off-by: Nasser Grainawi <nasser@codeaurora.org> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When running on NFS there was a chance that JGit's LockFile
semantics are broken because File#createNewFile() may allow
multiple clients to create the same file in parallel. This
change provides a fix which is only used when the new config
option core.supportsAtomicCreateNewFile is set to false. The
default for this option is true. This option can only be set in the
global or the system config file. The repository config file is not
taken into account in this case.
If the config option core.supportsAtomicCreateNewFile is true
then File#createNewFile() is trusted and the behaviour doesn't
change.
But if core.supportsAtomicCreateNewFile is set to false then after
successful creation of the lock file a hardlink to that lock file is
created and the attribute nlink of the lock file is checked to be 2. If
multiple clients manage to create the same lock file, nlink would be
greater than 2, exposing the error.
This expensive workaround is described in
https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
section III.d) "Exclusive File Creation"
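A sketch of that workaround with Java NIO (illustrative; the link name
and helper method are assumptions, not the actual JGit lock code):

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;

  class NfsLockCheckExample {
    static boolean lockIsExclusive(Path lockFile) throws IOException {
      Path hardLink = lockFile.resolveSibling(lockFile.getFileName() + ".lnk");
      Files.createLink(hardLink, lockFile);
      try {
        // After creating our hard link, an uncontended lock file has
        // nlink == 2. A greater value means another client managed to
        // create the "same" lock file in parallel.
        int nlink = (Integer) Files.getAttribute(lockFile, "unix:nlink");
        return nlink == 2;
      } finally {
        Files.delete(hardLink);
      }
    }
  }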
Honor trustFolderStat also when reading packed-refs
The list of packed refs was cached in RefDirectory based on the mtime
of the packed-refs file. This may fail on NFS when attributes are
cached. A cached mtime of the packed-refs file could cause JGit to
trust the cached content of this file and to overlook that the file
was modified.
Honor the config option trustFolderStat and always read the packed-refs
content if the option is false. By default this option is set to true
and this fix is not active.
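A sketch of the resulting check (the option name and FileSnapshot come
from JGit's core config and internal storage layer; the surrounding
logic is illustrative, not the actual RefDirectory code):

  import java.io.File;
  import org.eclipse.jgit.internal.storage.file.FileSnapshot;
  import org.eclipse.jgit.lib.Config;

  class TrustFolderStatExample {
    // Re-read packed-refs whenever file stats cannot be trusted,
    // e.g. on NFS with attribute caching.
    static boolean mustReload(Config cfg, FileSnapshot snapshot,
        File packedRefs) {
      boolean trustFolderStat =
          cfg.getBoolean("core", null, "trustfolderstat", true);
      return !trustFolderStat || snapshot.isModified(packedRefs);
    }
  }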
Fix exception handling for opening bitmap index files
When creating a new PackFile instance it is specified whether this pack
has an associated bitmap index file or not. This information is cached
and the public method getBitmapIndex() will always assume a bitmap index
file must exist if the cached data says so. But it may happen that the
packfiles are repacked during a gc in a different process, causing the
packfile, bitmap index and index files to be deleted. Since JGit still
has an open file handle on the packfile, this file is not really deleted
and can still be accessed. But the index and bitmap index files are deleted.
Fix getBitmapIndex() to invalidate the cached packfile instance if such
a situation occurs.
This problem showed up when a gerrit server was serving repositories
which were garbage collected with native git regularly. Fetch and
clone commands for certain repositories failed permanently after a
native git gc had deleted old bitmap index files.
Change-Id: I8e620bec74dd3f310ba42024f9a657062f868f0e Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Matthias Sohn [Sat, 25 Mar 2017 01:33:06 +0000 (02:33 +0100)]
Only mark packfile invalid if exception signals permanent problem
Add NoPackSignatureException and UnsupportedPackVersionException to
explicitly mark permanent unrecoverable problems with a pack.
Assume a problem with a pack is permanent only if we are sure the
exception signals a non-transient problem we can't recover from:
- AccessDeniedException: we lack permissions
- CorruptObjectException: we detected corruption
- EOFException: file ended unexpectedly
- NoPackSignatureException: pack has no pack signature
- NoSuchFileException: file has gone missing
- PackMismatchException: pack no longer matches its index
- UnpackException: unpacking failed
- UnsupportedPackIndexVersionException: unsupported pack index version
- UnsupportedPackVersionException: unsupported pack version
Do not attempt to handle Errors since they are thrown for serious
problems applications should not try to recover from.
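Sketched as a predicate (the list mirrors the exceptions above; the
method name is illustrative and the new exception types are assumed to
live in org.eclipse.jgit.errors like the existing ones):

  import java.io.EOFException;
  import java.io.IOException;
  import java.nio.file.AccessDeniedException;
  import java.nio.file.NoSuchFileException;
  import org.eclipse.jgit.errors.CorruptObjectException;
  import org.eclipse.jgit.errors.NoPackSignatureException;
  import org.eclipse.jgit.errors.PackMismatchException;
  import org.eclipse.jgit.errors.UnpackException;
  import org.eclipse.jgit.errors.UnsupportedPackIndexVersionException;
  import org.eclipse.jgit.errors.UnsupportedPackVersionException;

  class PackErrorExample {
    // Only these exception types mark a pack permanently invalid;
    // anything else is treated as potentially transient.
    static boolean isPermanentPackProblem(IOException e) {
      return e instanceof AccessDeniedException
          || e instanceof CorruptObjectException
          || e instanceof EOFException
          || e instanceof NoPackSignatureException
          || e instanceof NoSuchFileException
          || e instanceof PackMismatchException
          || e instanceof UnpackException
          || e instanceof UnsupportedPackIndexVersionException
          || e instanceof UnsupportedPackVersionException;
    }
  }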
Change-Id: I2c416ce2b0e23255c4fb03a3f9a0ee237f7a484a Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Luca Milanesio [Fri, 24 Mar 2017 00:18:12 +0000 (00:18 +0000)]
Don't flag a packfile invalid if opening existing file failed
A packfile random file open operation may fail with a
FileNotFoundException even if the file exists, possibly
due to a temporary lack of resources.
Instead of managing the FileNotFoundException as any generic
IOException it is best to rethrow the exception but prevent
the packfile from being flagged as invalid until it is actually
opened and read successfully or unsuccessfully.
Luca Milanesio [Fri, 10 Mar 2017 00:20:23 +0000 (00:20 +0000)]
Don't remove pack when FileNotFoundException is transient
The FileNotFoundException is typically raised in four conditions:
1. file doesn't exist
2. incompatible read vs. read/write open modes
3. filesystem locking
4. temporary lack of resources (e.g. too many open files)
Case 1 is already managed, and case 2 should never happen as packs are
not overwritten, while for cases 3 and 4 it is worth logging the
exception and retrying to read the pack again.
Log transient errors using an exponential backoff strategy to avoid
flooding the logs with the same error if consecutive retries to access
the pack fail repeatedly.
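One possible shape for such logging (illustrative, not the actual
implementation): report the first failure immediately, then only on
attempts 2, 4, 8, 16, ...

  import java.util.concurrent.atomic.AtomicInteger;

  class BackoffLogExample {
    private final AtomicInteger failures = new AtomicInteger();

    void onTransientOpenError(Exception e) {
      int n = failures.incrementAndGet();
      if ((n & (n - 1)) == 0) { // log only when n is a power of two
        System.err.println(
            "transient pack open failure (attempt " + n + "): " + e);
      }
    }
  }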
When a repository is being GCed and a concurrent push is received, there
is the possibility of having a missing object. This is due to the fact
that after the list of objects to delete is built, there is a window of
time when an unreferenced and ready to delete object can be referenced
by the incoming push. In that case, the object would be deleted because
there is no way to know it is no longer unreferenced. This leaves
the repository in an inconsistent state, and most operations fail
with a missing tree/object error.
Given that the incoming push changes the last modified date of the now
referenced object, verify that it is still a candidate for deletion
before actually performing the delete operation.
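A sketch of the guard (names illustrative; expireDate stands for the
boundary computed when the delete candidates were collected):

  import java.io.File;

  class GcDeleteGuardExample {
    // A concurrent push that re-references the object updates its
    // mtime, moving it past the boundary and vetoing the delete.
    static boolean safeToDelete(File objectFile, long expireDate) {
      return objectFile.exists() && objectFile.lastModified() < expireDate;
    }
  }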
David Turner [Fri, 14 Oct 2016 20:44:51 +0000 (16:44 -0400)]
Config: do not add spaces before units
Adding a space before the unit ('g', 'm', 'k') causes git to fail with
the error:
fatal: bad numeric config value
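A sketch of the corrected formatting, appending the unit suffix with
no intervening space (illustrative; the real fix is in Config's value
writer):

  class ConfigUnitExample {
    static String formatSize(long bytes) {
      int gib = 1024 * 1024 * 1024;
      if (bytes >= gib && bytes % gib == 0)
        return (bytes / gib) + "g"; // "1g", not "1 g"
      if (bytes >= 1024 * 1024 && bytes % (1024 * 1024) == 0)
        return (bytes / (1024 * 1024)) + "m";
      if (bytes >= 1024 && bytes % 1024 == 0)
        return (bytes / 1024) + "k";
      return Long.toString(bytes);
    }
  }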
Change-Id: I57f11d3a1cdcca4549858e773af1a2a80fc0369f Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Hugo Arès [Wed, 12 Oct 2016 10:54:52 +0000 (06:54 -0400)]
Fix eviction of repositories with negative usage count
If the repository close method was called twice (or more) for one open,
the usage count became negative and the repository was never evicted
from the cache, because the method checking whether a repository is
expired did not consider negative usage counts.
Change-Id: I18a80c415c54c37d1b9def2b311ff2d0afa455ca Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
There was a bug when carrying over flags from a merge commit to its
non-first parents. The first parent of a merge commit was handled
differently and correct but the non-first parents are handled by a
recursive algorithm. Flags should be copied from the root merge commit
to parent-2, to grandparent-2, ... up to the limit of STACK_DEPTH==500
parents-levels. But the recursive algorithm was always copying only to
the direct parents of the merge commit and not the grand*-parents.
This seems to be no problem when commits are handled in a strict date
order, because then copying only one level is fine if children are
handled before parents. But when commits are no longer separated by
distinctive correct dates (e.g. because all commits have the same date),
it may happen that a merge parent is handled before the merge commit,
and when dealing later with the merge commit one has to copy flags down
more than one level.
Unconditionally close repository in unregisterAndCloseRepository
The Repository.close() method is used when reference counting and
expiration need to be honored. The RepositoryCache.unregisterAndCloseRepository
method should close the repository unconditionally. This is also
indicated in its javadoc.
Thomas Wolf [Mon, 15 Aug 2016 05:55:44 +0000 (07:55 +0200)]
Handle all values of branch.[name].rebase
BranchConfig treated this config property as a boolean, but git also
allows the values "preserve" and "interactive". Config property
pull.rebase also allows the same values.
Replace private enum PullCommand.PullRebaseMode by new public enum
BranchConfig.BranchRebaseMode and adapt all uses. Add a new setter to
PullCommand.
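A sketch of the new enum's likely shape (the string mapping is an
assumption based on the values git accepts):

  public enum BranchRebaseMode {
    REBASE("true"),
    PRESERVE("preserve"),
    INTERACTIVE("interactive"),
    NONE("false");

    private final String configValue;

    BranchRebaseMode(String configValue) {
      this.configValue = configValue;
    }

    public String getValue() {
      return configValue;
    }
  }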
Note: PullCommand will treat "interactive" like "true", i.e., as a
non-interactive rebase. Not sure how "interactive" should be handled.
At least it won't balk on it.
Bug: 499482
Change-Id: I7309360f5662b2c2efa1bd8ea6f112c63cf064af Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Martin Goellnitz [Sat, 10 Sep 2016 09:29:30 +0000 (11:29 +0200)]
Add support for post-commit hooks
Change-Id: I6691b454404dd4db3c690ecfc7515de765bc2ef7 Signed-off-by: Martin Goellnitz <m.goellnitz@outlook.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Matthias Sohn [Sun, 4 Sep 2016 10:07:37 +0000 (12:07 +0200)]
Don't log error if system git config does not exist
- enhance FS.readPipe to throw an exception if the external command
fails, enabling the caller to handle the command failure
- reduce log level to warning if system git config does not exist
- improve log message
Bug: 476639
Change-Id: I94ae3caec22150dde81f1ea8e1e665df55290d42 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Ned Twigg [Fri, 18 Mar 2016 10:08:44 +0000 (03:08 -0700)]
CLI: implement option -d for deleting tags
Change-Id: I438456b76aefd361384729686271288186d3be3b Signed-off-by: Ned Twigg <ned.twigg@diffplug.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Shawn Pearce [Mon, 29 Aug 2016 19:11:11 +0000 (12:11 -0700)]
ReceivePack: integrate push option parsing into recvCommands
This allows the same try/catch to handle parsing the command list,
push certificate and push options. Any errors will be caught and
handled by the same catch block, as the client is in the same state.
Shawn Pearce [Mon, 29 Aug 2016 18:54:15 +0000 (11:54 -0700)]
ReceivePack: allow push options to be set
Some embeddings of JGit require creating a ReceivePack instance in
another process from the one that handled the network socket with the
client. Similar to the PushCertificate add a setter to allow the
option list to be supplied.
Shawn Pearce [Fri, 26 Aug 2016 21:44:44 +0000 (14:44 -0700)]
ReceivePack: refactor push option parsing
Refactor all of the push option support code to allocate the list
immediately before parsing the options section off the stream.
Move option support down to ReceivePack instead of BaseReceivePack.
Push options are specific to the ReceivePack protocol and are not
likely to appear in the 4 year old subscription proposal. These
changes are OK before JGit 4.5 ships as no consumer should be relying
on these new APIs.
Shawn Pearce [Fri, 26 Aug 2016 01:59:15 +0000 (18:59 -0700)]
DfsReader: check object type during open
Do not open an OBJ_TREE if the caller is expecting an OBJ_BLOB or
OBJ_COMMIT; instead throw IncorrectObjectTypeException. This better
matches behavior of WindowCursor, the ObjectReader implementation of
the local file based object store.
Masaya Suzuki [Fri, 26 Aug 2016 01:25:48 +0000 (18:25 -0700)]
Clarify the semantics of DfsRefDatabase#compareAndPut
DfsRefDatabase#compareAndPut had vague semantics for reference
matching. Because of this, an operation to make a symbolic
reference had been broken for some DFS implementations even if they
followed the contract of compareAndPut. The clarified semantics
require implementations to satisfy the following:
* Matching references should be both symbolic references or both
object ID references.
* If both are symbolic references, both should have the same target
name.
* If both are object ID references, both should have the same object
ID.
These semantics are defined based on
https://git.eclipse.org/r/#/c/77416/. Before this commit,
DfsRefDatabase couldn't see the target of symbolic references.
InMemoryRepository is changed to comply with the new semantics. This
semantics change can affect existing DFS implementations that only
check object IDs. This commit adds two tests that the previous
InMemoryRepository couldn't pass.
Matthias Sohn [Tue, 24 May 2016 23:07:18 +0000 (01:07 +0200)]
Add a RepeatRule to help repeating flaky tests
A JUnit TestRule which enables running the same JUnit test repeatedly.
This may help to identify the root cause why a flaky test which succeeds
most of the time fails sometimes.
Add the RepeatRule to the test class containing the test to be repeated:
  public class MyTest {
    @Rule
    public RepeatRule repeatRule = new RepeatRule();
    ...
  }
and annotate the test to be repeated with the @Repeat(n=<repetitions>)
annotation:
  @Test
  @Repeat(n = 100)
  public void test() {
    ...
  }
then this test will be repeated 100 times. If any test execution fails,
repetition is stopped.
Change-Id: I7c49ccebe1cb00bcde6b002b522d95c13fd3a35e Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
When doing a detaching operation, JGit fakes a SymbolicRef as an
ObjectIdRef. This is because RefUpdate#updateImpl dereferences the
SymbolicRef when updating it. For example, assume that HEAD is
pointing to refs/heads/master. If I try to make a detached HEAD
pointing to a commit c0ffee, RefUpdate dereferences HEAD as
refs/heads/master first and changes refs/heads/master to c0ffee. The
detach argument of RefDatabase#newUpdate avoids this dereference by
faking HEAD as ObjectIdRef.
This faking is problematic for the linking operation of
DfsRefDatabase. It does a compare-and-swap operation on every
reference change because of its distributed systems nature. If a
SymbolicRef is faked as an ObjectIdRef, the database thinks that there
is a racing change in the reference and rejects the update. Because of
this, DFS based repositories cannot change the link target of symbolic
refs. This has not been a problem for file-based repositories because
they have file-lock based semantics instead of CAS based ones.
The reference implementation, InMemoryRepository, is not affected
because it only compares ObjectIds.
When [1] introduced this faking code, there was no way for RefUpdate
to distinguish the detaching operation. When [2] fixed the detaching
operation, it introduced a detachingSymbolicRef flag. This commit uses
this flag to control whether it needs to dereference the symbolic refs
by calling Ref#getLeaf. The same flag is used in the reflog update
operation.
This commit does not affect any operation that succeeds currently. In
some DFS repository implementations, this fixes a ref linking
operation, which is currently failing.
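An illustrative use of the detach flag through the public API (the
DfsRefDatabase internals differ; the branch name and commit id are
placeholders):

  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.ObjectId;
  import org.eclipse.jgit.lib.RefUpdate;
  import org.eclipse.jgit.lib.Repository;

  class DetachExample {
    static void linkThenDetach(Repository repo, ObjectId commitId)
        throws java.io.IOException {
      // detach=false: HEAD stays symbolic and its link target changes.
      RefUpdate link = repo.getRefDatabase().newUpdate(Constants.HEAD, false);
      link.link("refs/heads/topic");
      // detach=true: HEAD itself is updated to the commit instead of
      // dereferencing through to refs/heads/topic.
      RefUpdate detach = repo.getRefDatabase().newUpdate(Constants.HEAD, true);
      detach.setNewObjectId(commitId);
      detach.forceUpdate();
    }
  }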
Dan Wang [Tue, 23 Aug 2016 02:16:51 +0000 (19:16 -0700)]
Packet logging for JGit
Imitate the packet tracing feature from C Git v1.7.5-rc0~58^2~1 (add
packet tracing debug code, 2011-02-24). Unlike C Git, use the log4j
log level setting instead of the GIT_TRACE_PACKET environment variable
to enable tracing.
Stefan Beller [Wed, 24 Aug 2016 19:47:10 +0000 (12:47 -0700)]
push: Do not use push options unless requested
Unless the user passed --push-option, the client does not intend to
pass push options to the server.
Without this change, all pushes to servers without push option support
fail.
Not enabling the feature (instead of enabling it and sending an empty
list of options) in this case is more intuitive and matches the
behavior of C git push's --push-option parameter better.
Bug: 500149
Change-Id: Ia4f13840cc54d8ba54e99b1432108f1c43022c53 Signed-off-by: Stefan Beller <sbeller@google.com>
HttpClientConnection uses a TemporaryBufferEntity which uses
TemporaryBuffer.LocalFile to buffer an HttpEntity. It was leaking
temporary files if the buffered entities were larger than 1MB since it
failed to destroy the TemporaryBuffer.LocalFile.
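A sketch of the fix's shape (illustrative; the actual change is inside
TemporaryBufferEntity):

  import org.eclipse.jgit.util.TemporaryBuffer;

  class BufferCleanupExample {
    static void bufferEntity() throws java.io.IOException {
      TemporaryBuffer.LocalFile buf =
          new TemporaryBuffer.LocalFile(null, 1024 * 1024);
      try {
        // ... copy the HttpEntity into buf ...
      } finally {
        buf.close();
        buf.destroy(); // deletes the backing temporary file
      }
    }
  }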
Bug: 500079
Change-Id: Ib963e04efc252bdd0420a5c69b1a19181e9e6169 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Masaya Suzuki [Sun, 21 Aug 2016 22:27:30 +0000 (15:27 -0700)]
Use FS#lastModified instead of File#lastModified
This fixes tests that failed on JDK 8.
FS uses the java.nio API to get file attributes. The timestamps obtained
from that API are more precise than the ones from
java.io.File#lastModified() since Java 8.
This difference accidentally makes JGit detect newly added files as
smudged. Use the more precise timestamp to avoid this false positive.
David Pursehouse [Fri, 19 Aug 2016 01:58:06 +0000 (10:58 +0900)]
LfsProtocolServlet: Add support for rate limit and bandwidth limit errors
The git-lfs specification [1] describes the following optional status codes
that may be returned:
429 - The user has hit a rate limit with the server. Though the API does
not specify any rate limits, implementors are encouraged to set some
for availability reasons.
509 - Returned if the bandwidth limit for the user or repository has been
exceeded. The API does not specify any bandwidth limit, but implementors
may track usage.
Add two new exception classes to represent these cases. Implementations may
throw these from #getLargeFileRepository(), causing the corresponding HTTP
status codes to be returned to the client.
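A sketch of the two exception types (the constructor shape and the base
class's package are assumptions):

  import org.eclipse.jgit.lfs.errors.LfsException;

  // Thrown from getLargeFileRepository(); mapped to HTTP 429.
  class LfsRateLimitExceeded extends LfsException {
    LfsRateLimitExceeded(String message) {
      super(message);
    }
  }

  // Thrown from getLargeFileRepository(); mapped to HTTP 509.
  class LfsBandwidthLimitExceeded extends LfsException {
    LfsBandwidthLimitExceeded(String message) {
      super(message);
    }
  }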
Masaya Suzuki [Fri, 19 Aug 2016 20:51:05 +0000 (13:51 -0700)]
Ignore IOException thrown from close
AddCommandTest is flaky because IOException is thrown sometimes.
Caused by: java.io.IOException: Stream closed
at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:433)
at java.io.OutputStream.write(OutputStream.java:116)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.eclipse.jgit.util.FS.runProcess(FS.java:993)
at org.eclipse.jgit.util.FS.execute(FS.java:1102)
at org.eclipse.jgit.treewalk.WorkingTreeIterator.filterClean(WorkingTreeIterator.java:470)
... 22 more
OpenJDK replaces the underlying OutputStream with NullOutputStream when
the process exits. This throws IOException for all write operations. When
the process exits before JGit writes the input to the pipe buffer, the
input stays in the BufferedOutputStream. The close method tries to write
it again, and IOException is thrown.
Since we ignore IOException in StreamGobbler, we also ignore it when
we close the stream.
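Sketched (illustrative helper; the actual change is in FS.runProcess):

  import java.io.IOException;
  import java.io.OutputStream;

  class CloseQuietlyExample {
    // After the child process exits, the JDK swaps the stream for a
    // NullOutputStream, so close() may flush buffered input and throw
    // a spurious IOException that is safe to ignore here.
    static void closeIgnoringErrors(OutputStream stdin) {
      try {
        stdin.close();
      } catch (IOException e) {
        // Ignored, mirroring StreamGobbler's handling of writes.
      }
    }
  }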
Shawn Pearce [Fri, 19 Aug 2016 18:51:40 +0000 (11:51 -0700)]
DfsObjDatabase: clear PackList dirty bit if no new packs
If a reference was updated more recently than a pack was written
(typical), the PackList was perpetually dirty until the next GC
was completed for the repository.
Detect this condition by observing no changes to the PackList
membership and resetting the dirty bit.
David Pursehouse [Sat, 13 Aug 2016 07:56:20 +0000 (16:56 +0900)]
LfsProtocolServlet: Always set the Content-Type header on response
If the Content-Type is not set on error responses, the git-lfs client
does not read the body which contains the error message, and instead
just displays a generic error message.
Also set the charset on the Content-Type header.
Change-Id: I88e6f07f20b622a670e7c5063145dffb8b630aee Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Jonathan Nieder [Tue, 9 Aug 2016 01:50:01 +0000 (18:50 -0700)]
RepoCommand: Avoid group lists shadowing groups strings
Reported-by: David Pursehouse <david.pursehouse@gmail.com>
Change-Id: I9e9b021d335bda4d58b6bcc30f59b81ac5b37724 Signed-off-by: Jonathan Nieder <jrn@google.com>
Matthias Sohn [Mon, 8 Aug 2016 15:10:53 +0000 (17:10 +0200)]
Silence API errors in LfsProtocolServlet
bb9988c2 changed the signature of getLargeFileRepository(), which only
breaks implementors; this is OK according to OSGi semantic versioning
rules.
Change-Id: I68bda7900b72e217571f74aee53705167f8100a2 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Jonathan Nieder [Mon, 8 Aug 2016 19:35:36 +0000 (12:35 -0700)]
Shallow fetch: Pass along "shallow"s in unparsed-wants case, too
Since 84d2738ff21c (Don't skip want validation when the client sends no
haves, 2013-06-21), this branch is not taken. Process the
"shallow"s anyway as a defensive measure in case the code path gets
revived.
Change-Id: Idfb834825d77f51e17191c1635c9d78c78738cfd Signed-off-by: Jonathan Nieder <jrn@google.com>
Jonathan Nieder [Mon, 8 Aug 2016 19:31:39 +0000 (12:31 -0700)]
Shallow fetch: Pass a DepthWalk to PackWriter
d385a7a5e5ca (Shallow fetch: Respect "shallow" lines, 2016-08-03) forgot
that UploadPack wasn't passing a DepthWalk to PackWriter in the first
place. As a result, shallow clones fail:
java.lang.IllegalArgumentException: Shallow packs require a DepthWalk
at org.eclipse.jgit.internal.storage.pack.PackWriter.preparePack(PackWriter.java:756)
at org.eclipse.jgit.transport.UploadPack.sendPack(UploadPack.java:1497)
at org.eclipse.jgit.transport.UploadPack.sendPack(UploadPack.java:1381)
at org.eclipse.jgit.transport.UploadPack.service(UploadPack.java:774)
at org.eclipse.jgit.transport.UploadPack.upload(UploadPack.java:667)
at org.eclipse.jgit.http.server.UploadPackServlet.doPost(UploadPackServlet.java:191)
Jonathan Nieder [Sat, 6 Aug 2016 00:36:08 +0000 (20:36 -0400)]
Merge changes I27961679,I91be6165,If0dbd562
* changes:
LfsProtocolServlet: Allow access to objects in request
LfsProtocolServlet: Allow getLargeFileRepository to raise exceptions
Remove references to org.eclipse.jgit.java7
Terry Parker [Thu, 4 Aug 2016 18:14:33 +0000 (11:14 -0700)]
Shallow fetch/clone: Make --depth mean the total history depth
cgit changed the --depth parameter to mean the total depth of history
rather than the depth of ancestors to be returned [1]. JGit still uses
the latter meaning, so update it to match cgit.
depth=0 still means a non-shallow clone. depth=1 now means only the
wants rather than the wants and their direct parents.
This is accomplished by changing the semantic meaning of "depth" in
UploadPack and PackWriter to mean the total depth of history desired,
while keeping "depth" in DepthWalk.{RevWalk,ObjectWalk} to mean
the depth of traversal. Thus UploadPack and PackWriter always
initialize their DepthWalks with "depth-1".
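Sketched as a tiny mapping (illustrative):

  class DepthExample {
    // --depth is now the total history depth; DepthWalk still counts
    // traversal steps past the wants, hence the off-by-one.
    static int toTraversalDepth(int requestedDepth) {
      if (requestedDepth <= 0)
        throw new IllegalArgumentException("not a shallow fetch");
      return requestedDepth - 1; // depth=1 => the wants only
    }
  }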
Terry Parker [Wed, 3 Aug 2016 16:01:22 +0000 (09:01 -0700)]
Shallow fetch: Respect "shallow" lines
When fetching from a shallow clone, the client sends "have" lines
to tell the server about objects it already has and "shallow" lines
to tell where its local history terminates. In some circumstances,
the server fails to honor the shallow lines and fails to return
objects that the client needs.
UploadPack passes the "have" lines to PackWriter so PackWriter can
omit them from the generated pack. UploadPack processes "shallow"
lines by calling RevWalk.assumeShallow() with the set of shallow
commits. RevWalk creates and caches RevCommits for these shallow
commits, clearing out their parents. That way, walks correctly
terminate at the shallow commits instead of assuming the client has
history going back behind them. UploadPack converts its RevWalk to an
ObjectWalk, maintaining the cached RevCommits, and passes it to
PackWriter.
Unfortunately, to support shallow fetches the PackWriter does the
following:
  if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk))
    walk = new DepthWalk.ObjectWalk(reader, depth);
That is, when the client sends a "deepen" line (fetch --depth=<n>)
and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter
throws away the RevWalk that was passed in and makes a new one. The
cleared parent lists prepared by RevWalk.assumeShallow() are lost.
Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk.
It tries to create it by calling toObjectWalkWithSameObjects() on
a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk
does not override the standard RevWalk#toObjectWalkWithSameObjects
implementation, the result is a plain ObjectWalk instead of an
instance of DepthWalk.ObjectWalk.
The result is that the "shallow" information is thrown away and
objects reachable from the shallow commits can be omitted from the
pack sent when fetching with --depth from a shallow clone.
Multiple factors collude to limit the circumstances under which this
bug can be observed:
1. Commits with depth != 0 don't enter DepthGenerator's pending queue.
That means a "have" cannot have any effect on DepthGenerator unless
it is also a "want".
2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the
uninteresting flag is not propagated to ancestors there even if a
"have" is also a "want".
3. JGit treats a depth of 1 as "1 past the wants".
Because of (2), the only place the UNINTERESTING flag can leak to a
shallow commit's parents is in the carryFlags() call from
markUninteresting(). carryFlags() only traverses commits that have
already been parsed: commits yet to be parsed are supposed to inherit
correct flags from their parent in PendingGenerator#next (which
doesn't happen here --- that is (2)). So the list of commits that have
already been parsed becomes relevant.
When we hit the markUninteresting() call, all "want"s, "have"s, and
commits to be unshallowed have been parsed. carryFlags() only
affects the parsed commits. If the "want" is a direct parent of a
"have", then it carryFlags() marks it as uninteresting. If the "have"
was also a "shallow", then its parent pointer should have been null
and the "want" shouldn't have been marked, so we see the bug. If the
"want" is a more distant ancestor then (2) keeps the uninteresting
state from propagating to the "want" and we don't see the bug. If the
"shallow" is not also a "have" then the shallow commit isn't parsed
so (2) keeps the uninteresting state from propagating to the "want
so we don't see the bug.
Here is a reproduction case (time flowing left to right, arrows
pointing to parents). "C" must be a commit that the client
reports as a "have" during negotiation. That can only happen if the
server reports it as an existing branch or tag in the first round of
negotiation:
A <-- B <-- C <-- D
First do
git clone --depth 1 <repo>
which yields D as a "have" and C as a "shallow" commit. Then try
git fetch --depth 1 <repo> B:refs/heads/B
Negotiation sets up: have D, shallow C, have C, want B.
But due to this bug B is marked as uninteresting and is not sent.
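One way to fix this, sketched here, is to override the method inside
DepthWalk.RevWalk so the parsed state is shared, mirroring what the
stock RevWalk#toObjectWalkWithSameObjects does with its package-private
objects and freeFlags fields (a sketch, not necessarily the final code):

  @Override
  public ObjectWalk toObjectWalkWithSameObjects() {
    DepthWalk.ObjectWalk ow = new DepthWalk.ObjectWalk(reader, depth);
    ow.objects = objects;     // share the parsed RevCommits, including
    ow.freeFlags = freeFlags; // the cleared parents of shallow commits
    return ow;
  }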
Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440 Signed-off-by: Terry Parker <tparker@google.com>
David Pursehouse [Fri, 29 Jul 2016 03:37:48 +0000 (12:37 +0900)]
LfsProtocolServlet: Allow getLargeFileRepository to raise exceptions
According to the specification [1] the server may return the following
HTTP error responses:
- 403: The user has read, but not write access.
- 404: The repository does not exist for the user.
- 422: Validation error with one or more of the objects in the request.
In the current implementation, however, getLargeFileRepository can only
return null to indicate an error. This results in the error code
503 (Service Unavailable) being returned to the client regardless of
what the actual reason was.
Add exception classes to cover these cases, derived from a common base
exception, and change the specification of getLargeFileRepository to throw
the base exception.
In LfsProtocolServlet#post, handle the new exceptions and send back the
appropriate HTTP responses as mentioned above.
The specification also mentions several other optional response codes (406,
429, 501, and 509) but these are not implemented in this commit. It should
be trivial to implement them in follow-up commits.
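A sketch of the resulting servlet-side mapping (the exception class
names here are illustrative, derived from the descriptions above, and
not necessarily the final API):

  import javax.servlet.http.HttpServletResponse;

  class LfsErrorMappingExample {
    static void send(HttpServletResponse res, LfsException e)
        throws java.io.IOException {
      if (e instanceof LfsRepositoryReadOnly)      // read, but not write
        res.sendError(HttpServletResponse.SC_FORBIDDEN, e.getMessage());
      else if (e instanceof LfsRepositoryNotFound) // no such repository
        res.sendError(HttpServletResponse.SC_NOT_FOUND, e.getMessage());
      else if (e instanceof LfsValidationError)    // bad objects in request
        res.sendError(422, e.getMessage()); // no servlet constant for 422
      else
        res.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
            e.getMessage());
    }
  }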