In practice the DHT storage layer has not been performing as well as
large-scale server environments expect from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored in the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can leave the DHT code unable to
enumerate the objects of the linux-2.6 repository in any reasonable
amount of time.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale to; even at 5,000 rows/second the lookup stage alone takes more
than 6 minutes (2,000,000 / 5,000 = 400 seconds), whereas on a local
filesystem this step is almost too fast to bother measuring. Some
servers, such as Apache Cassandra, simply fall over and cannot
complete the 2 million lookups in rapid succession.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the amount of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach of storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed through an
in-process buffer cache, similar to the WindowCache used by the local
filesystem storage layer. Unlike local file IO, the code assumes the
storage system has relatively high latency and no concept of "file
handles". Instead it treats a file more like HTTP byte-range requests,
where a read channel is simply a thunk that triggers a read request
over the network.
The DFS code in this change is still abstract; it does not store on
any particular filesystem, but it is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than in
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this given how well tuned the local filesystem code currently is.
Matthias Sohn [Wed, 26 Oct 2011 21:29:23 +0000 (17:29 -0400)]
Merge changes I488e9c97,I30f1049f,I1c088dce
* changes:
Cosmetic adjustment of relative date format, do not display "0 months"
Make use of the many date formatting options in the log command
Define a utility class for handling Git date formats
Carsten Pfeiffer [Tue, 25 Oct 2011 07:22:11 +0000 (09:22 +0200)]
Allow detecting which files were renamed during a revwalk
The egit history view shows the files associated with a commit by using
a PathFilter. When following renames with a FollowFilter, the PathFilter
cannot be configured anymore because the affected files are simply not
known.
Thus, it should be possible to find out which files were renamed.
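A hedged sketch of how a caller might consume this: the RenameCallback hook
and the setRenameCallback() wiring shown here are assumptions based on the
description above, not verified signatures.

  import java.util.ArrayList;
  import java.util.List;

  import org.eclipse.jgit.diff.DiffEntry;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.FollowFilter;
  import org.eclipse.jgit.revwalk.RenameCallback;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.revwalk.RevWalk;

  class FollowRenames {
    // Collects the old paths of every rename seen while following 'path'.
    static List<String> previousPaths(Repository repo, String path) throws Exception {
      final List<String> oldPaths = new ArrayList<String>();
      RevWalk walk = new RevWalk(repo);
      FollowFilter follow = FollowFilter.create(path);
      follow.setRenameCallback(new RenameCallback() {
        @Override
        public void renamed(DiffEntry entry) {
          oldPaths.add(entry.getOldPath()); // the name the file had before
        }
      });
      walk.setTreeFilter(follow);
      walk.markStart(walk.parseCommit(repo.resolve("HEAD")));
      for (RevCommit c : walk) {
        // just iterate; renames are reported through the callback
      }
      walk.dispose();
      return oldPaths;
    }
  }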
Robin Rosenberg [Sun, 23 Oct 2011 20:53:17 +0000 (22:53 +0200)]
Fix compatibility breakage for SystemReader
Introducing a new abstract method is not nice when one expects others
to subclass the class. Create default implementations so old code that
implements SystemReader does not break.
The default methods just delegate to the JVM.
Change-Id: I42cdfdcb6b29f7203697a23833dca85185b0b9b3
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
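The pattern, sketched generically below (not the actual SystemReader
source), is to ship the new accessors with JVM-backed defaults instead of
leaving them abstract:

  import java.util.Locale;
  import java.util.TimeZone;

  abstract class EnvironmentReader {
    // Pre-existing abstract API that subclasses already implement.
    abstract String getenv(String variable);

    // Newly added accessors default to the JVM rather than being abstract,
    // so existing subclasses keep compiling and keep working.
    TimeZone getTimeZone() {
      return TimeZone.getDefault();
    }

    Locale getLocale() {
      return Locale.getDefault();
    }
  }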
Robin Rosenberg [Sat, 22 Oct 2011 23:51:30 +0000 (01:51 +0200)]
Define a utility class for handling Git date formats
Besides the formats known by git-log(1) we also add "locale"
and "localelocal", which format dates according to the user's locale.
"locale" does not translate into the local timezone, while
"localelocal" does.
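A hedged usage sketch; the class and constant names (GitDateFormatter,
Format.LOCALE, Format.LOCALELOCAL) are taken from the descriptions in this
series and should be treated as assumptions.

  import org.eclipse.jgit.lib.PersonIdent;
  import org.eclipse.jgit.util.GitDateFormatter;
  import org.eclipse.jgit.util.GitDateFormatter.Format;

  class DateFormatExample {
    // "locale" keeps the author's own timezone; "localelocal" would convert
    // the same date to the local timezone first.
    static String authorDate(PersonIdent author) {
      GitDateFormatter fmt = new GitDateFormatter(Format.LOCALE);
      return fmt.formatDate(author);
    }
  }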
Robin Rosenberg [Mon, 17 Oct 2011 06:28:19 +0000 (08:28 +0200)]
Fix bad checkout behaviour when a file is removed
We deleted the entry if there was a file and an index
entry, but not when there was just an index entry. Now
delete the file in both cases since the missing file
just means our worktree is dirty. This affected the
implementation of reset --hard.
* changes:
UploadPack: Fix races in smart HTTP negotiation
PackWriter: Export more statistics
Do not requeue state vector in stateless RPC fetch
Wrap excessively long line in BasePackFetchConnection
Fix smart HTTP client stream alignment errors
Jens Baumgart [Wed, 5 Oct 2011 11:56:23 +0000 (13:56 +0200)]
Extend IndexDiff to calculate ignored files and folders
IndexDiff was extended to calculate ignored files and folders.
The calculation only considers files that are NOT in the index.
This functionality is required by the new EGit decorator implementation.
Change-Id: I589e758cc55873ce75614602e017ac793435e24d
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
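A hedged sketch of querying the new information; the accessor name
(getIgnoredNotInIndex) is an assumption based on the description above.

  import java.util.Set;

  import org.eclipse.jgit.lib.Constants;
  import org.eclipse.jgit.lib.IndexDiff;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.treewalk.FileTreeIterator;

  class IgnoredFilesExample {
    static Set<String> ignoredPaths(Repository repo) throws Exception {
      IndexDiff diff = new IndexDiff(repo, Constants.HEAD, new FileTreeIterator(repo));
      diff.diff();
      // Only files and folders that are NOT in the index are reported here.
      return diff.getIgnoredNotInIndex();
    }
  }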
Manuel Doninger [Thu, 8 Sep 2011 17:37:11 +0000 (19:37 +0200)]
New config constant for default start-point
This constant determines the default start-point if the user
doesn't want to create a branch from the current HEAD.
Change-Id: Iea944e11e80134fbafc4c47383457d5ed11a4164
Signed-off-by: Manuel Doninger <manuel.doninger@googlemail.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Matthias Sohn [Thu, 29 Sep 2011 22:00:22 +0000 (00:00 +0200)]
Fire IndexChangedEvent on DirCache.commit()
Since we replaced GitIndex with DirCache, JGit didn't fire
IndexChangedEvents anymore. For EGit this still worked, with high
latency, since its RepositoryChangeScanner, which is scheduled to
run every 10 seconds, fires the event in case the index changes.
This scanner is meant to detect index changes induced by a different
process, e.g. by calling "git add" from native git.
When the index is changed from within the same process we should fire
the event synchronously. Compare the index checksum on write to the
index checksum from when the index was read earlier to determine
whether the index really changed. Use the IndexChangedListener
interface to keep DirCache decoupled from Repository.
Change-Id: Id4311f7a7859ffe8738863b3d86c83c8b5f513af
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
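A hedged sketch of observing the now-synchronous event; registration
through Repository.getListenerList() is assumed to match the events API of
this JGit version.

  import org.eclipse.jgit.events.IndexChangedEvent;
  import org.eclipse.jgit.events.IndexChangedListener;
  import org.eclipse.jgit.lib.Repository;

  class IndexChangeLogger {
    static void register(Repository repo) {
      repo.getListenerList().addIndexChangedListener(new IndexChangedListener() {
        @Override
        public void onIndexChanged(IndexChangedEvent event) {
          // Fires when DirCache.commit() actually wrote a changed index.
          System.out.println("index changed in " + event.getRepository());
        }
      });
    }
  }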
Fix status in index entries after checkout of paths
The checkout command was producing an inconsistent state of the index
which even confuses native git. The content sha1 of the touched index
entries was updated, but the length and the filemode were not.
Later in the code the index entries were automatically corrected
(through Dircache.checkoutEntry()), but the correction happened after
the index had been persisted to disk. So the correction was lost and
we ended up with an index where length and sha1 don't fit together.
A similar problem is fixed with "lastModified" of DirCacheEntry. When
checking out a path without specifying an explicit commit (you want to
check out what's in the index) the index was not updated regarding
lastModified. Readers of the index will think the checked-out
file is dirty because the file has a younger lastModified than what's
in the index.
Robin Rosenberg [Thu, 8 Sep 2011 17:42:19 +0000 (19:42 +0200)]
Test the reflog message for commit, cherry-pick, revert and merge
Change-Id: I319f09577b3e04f6c31399fe8e57e9a9ad2c8a6c
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Robin Rosenberg [Thu, 8 Sep 2011 16:35:17 +0000 (18:35 +0200)]
Append merge strategy to reflog message
Change-Id: Ia0e73208b86c45a3d96698e973f6e70ec5cb7303
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Robin Rosenberg [Thu, 8 Sep 2011 16:05:01 +0000 (18:05 +0200)]
Fix the reflog prefix for cherry-pick, revert and merge commands
We should see whether the commit was a regular commit or something
else.
Change-Id: I82d8300cf3c53cb2bdcb6495386aadb803e0c6f7
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Shawn O. Pearce [Sat, 27 Aug 2011 00:28:18 +0000 (17:28 -0700)]
UploadPack: Fix races in smart HTTP negotiation
Clients cache the set of advertised references at the start of a
negotiation, and keep replaying the same "want SHA1" list to the
server on each negotiation step. If another client pushes into
a branch and moves it by fast-forward, any request to obtain that
branch's prior SHA-1 is still valid; the commit is reachable from
the new position of the reference. Unfortunately the fast-forward
causes smart HTTP negotiations to fail, as the server is no longer
advertising that prior SHA-1.
Instead of causing clients to fail out with a "want invalid" error
and forcing the end-user retry, possibly getting into a never ending
try-fail-retry race while other clients are pushing into the same
busy repository, allow the slightly stale want request so long as
it is still reachable.
C Git implemented this same change recently to fix races on the
smart HTTP protocol when the C Git git-http-backend is used.
The new RequestPolicy feature also allows server authors to make
an even more lenient configuration that exports any SHA-1 to the
client. This might be useful in certain settings where a server
has authenticated the client as the "repository owner" and wants
to allow them to grab any content from the server as a complete
unbroken history chain.
The new setAdvertisedRefs() method allows server authors to manually
fix the references that are advertised, possibly bypassing the
getAllRefs() call on the Repository object.
Change-Id: I7cdb563bf9c55c83653f217f6e53c3add55a0541
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
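A hedged configuration sketch based on the features named above; the
RequestPolicy constant and the setAdvertisedRefs() signature are
assumptions drawn from this description.

  import java.util.Map;

  import org.eclipse.jgit.lib.Ref;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.transport.UploadPack;

  class UploadPackSetup {
    static UploadPack configure(Repository repo, Map<String, Ref> refsToExpose) {
      UploadPack up = new UploadPack(repo);
      // Accept slightly stale "want" lines as long as the object is still
      // reachable, instead of failing after a concurrent fast-forward.
      up.setRequestPolicy(UploadPack.RequestPolicy.REACHABLE_COMMIT);
      // Optionally override what is advertised instead of using getAllRefs().
      up.setAdvertisedRefs(refsToExpose);
      return up;
    }
  }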
Shawn O. Pearce [Mon, 15 Aug 2011 23:38:28 +0000 (16:38 -0700)]
PackWriter: Export more statistics
Export the shallow pack information, and also a handy function to
sum up the total times. Include the time writing out the index file,
if it was created.
Change-Id: I7f60ae6848455a357b25feedb23743bbf6c153cf
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Do not requeue state vector in stateless RPC fetch
If the no-done capability was enabled on the connection, don't
queue up the state vector again once the ACK %s ready message
is observed from the remote. The pack will be following in this
response stream, so the state vector is no longer required.
Change-Id: I7bd1e76957cb58c7ff1cdaeef227f1b02a7e5d24
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
The client's use of UnionInputStream was broken when combined with an
8192 byte buffer used by PackParser. A smart HTTP client connection
always pushes in the execute stateless RPC input stream after the
data stream has ended from the remote peer. At the end of the pack,
PackParser asked to fill an 8192 byte buffer, but if only e.g. 1000
bytes remained, UnionInputStream went to the next stream and asked
it for input, which triggered a new RPC and failed because there
was nothing pending in the request buffer.
Change UnionInputStream to only return what it consumed from a
single InputStream without invoking the next InputStream, just in
case that second InputStream happens to be one of these magical
ones that generates an RPC invocation.
Change-Id: I0e51a8e6fea1647e4d2e08ac9cfc69c2945ce4cb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
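An illustrative sketch of the changed read contract (not the actual
org.eclipse.jgit.util.io.UnionInputStream source): a concatenating stream
that returns whatever a single underlying stream produced, so a partially
filled read never tops up the buffer from the next stream and never
triggers an extra request on it in the same call.

  import java.io.IOException;
  import java.io.InputStream;
  import java.util.ArrayDeque;
  import java.util.Deque;

  class ConcatenatedInput extends InputStream {
    private final Deque<InputStream> streams = new ArrayDeque<InputStream>();

    void add(InputStream in) {
      streams.add(in);
    }

    @Override
    public int read() throws IOException {
      byte[] one = new byte[1];
      return read(one, 0, 1) == 1 ? one[0] & 0xff : -1;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
      while (!streams.isEmpty()) {
        int n = streams.peek().read(b, off, len);
        if (n > 0)
          return n; // never mix bytes from two streams in one call
        streams.poll().close(); // exhausted: drop it and try the next one
      }
      return -1;
    }
  }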
Kevin Sawicki [Tue, 13 Sep 2011 22:29:55 +0000 (15:29 -0700)]
Remove duplicate calls to DirCache.unlock on checkout
Calls to unlock the DirCache before throwing an exception
were not needed since checkout calls doCheckout wrapped
in a try block that calls DirCache.unlock in a finally
block.
Change-Id: I2b249a784f9e363430e288aad67fcefb7fac0a6e
Signed-off-by: Kevin Sawicki <kevin@github.com>
Matthias Sohn [Sun, 11 Sep 2011 20:43:41 +0000 (22:43 +0200)]
Merge branch 'stable-1.1'
* stable-1.1:
Allow commit when submodule changes are present
Ignore submodule on checkout instead of deleting it
cleanup: Reuse local variable for current DirCacheEntry
Prepare post v1.1.0.201109071825-rc3 builds
JGit v1.1.0.201109071825-rc3
Use commit message best practices for Mylyn Commit template
Change-Id: I6ab9e5cb48c036d2ee2e548f5ec040d93672d8ad
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Robin Rosenberg [Sat, 3 Sep 2011 20:54:37 +0000 (22:54 +0200)]
Ignore submodule on checkout instead of deleting it
The purpose of this commit is to prevent destruction of
submodules on checkout from a tree with a submodule to
another. For consistency we handle the reverse case too,
when we checkout a branch that has a submodule and the
submodule directory exists. And finally we ignore the
case where the submodule changes.
We do not update the submodules, we just try to ignore
them harder.
Bug: 356664
Change-Id: I202c695a57af99b13d0d7220803fd08def3d9b5e
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Robin Rosenberg [Sun, 4 Sep 2011 09:12:49 +0000 (11:12 +0200)]
Fix the names in the reflog for checkouts
We were diverging from the reference implementation. Always use the
ref we check out to as the to-branch in the reflog, and avoid the
refs/heads prefix in both the from-name and the to-name.
Change-Id: Id973d9102593872e4df41d0788f0eb7c7fd130c4
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Robin Rosenberg [Sun, 4 Sep 2011 09:10:47 +0000 (11:10 +0200)]
Add a helper for parsing branch switch info out of a reflog entry
Change-Id: I91c7e08c4afd2562df2226887a933d93c78a0371
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Robin Rosenberg [Sat, 27 Aug 2011 14:58:26 +0000 (16:58 +0200)]
Use the appropriate constant for ".git"
We have two constants with the same content. DOT_GIT is intended
for the git repository below the work tree, while DOT_GIT_EXT is
the ".git" directory extension usually associated with bare
repositories.
Change-Id: I0946b4beb2d1c3af289ddbbb5641d2f4e4c49d3f
Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
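A small usage sketch of the distinction drawn above, assuming both
constants live on org.eclipse.jgit.lib.Constants:

  import java.io.File;

  import org.eclipse.jgit.lib.Constants;

  class GitDirNaming {
    // The repository directory below a work tree: "<worktree>/.git"
    static File gitDirUnder(File workTree) {
      return new File(workTree, Constants.DOT_GIT);
    }

    // The ".git" extension conventionally used for bare repositories: "<name>.git"
    static String bareRepositoryName(String name) {
      return name + Constants.DOT_GIT_EXT;
    }
  }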
Roberto Tyley [Thu, 25 Aug 2011 21:25:10 +0000 (22:25 +0100)]
Tolerate zlib deflation with window size < 32Kb
JGit currently identifies loose objects as 'corrupt' if they've been
deflated using a window size less than 32Kb, because the
isStandardFormat() function doesn't recognise the header
byte as a zlib header. This patch makes the method tolerant of
all valid window sizes (15-bit to 8-bit) - but doesn't sacrifice
its accuracy in distinguishing the standard loose-object format
from the experimental (now abandoned) format. It's based on a patch
which has been merged into C-Git master branch:
On memory constrained systems zlib may use a much smaller window
size - working on Agit, I found that Android uses a 4KB window;
giving a header byte of 0x48, not 0x78. Consequently all loose
objects generated by the Android platform appear 'corrupt' :(
It might appear that this patch changes isStandardFormat() to the
point where it could incorrectly identify the experimental format as
the standard one, but the two criteria (bitmask & checksum) can only
give a false result for an experimental object where both of the
following are true:
1) object size is exactly 8 bytes when uncompressed (bitmask)
2) ([single-byte in-pack git type&size header] * 256
+ [1st byte of the following zlib header]) % 31 == 0 (checksum)
As it happens, for all possible combinations of valid object type
(1-4) and window bits (0-7), the only time when the checksum will be
divisible by 31 is for 0x1838 - i.e. object type *1*, a Commit - which,
due to the fields all Commit objects must contain, could never be as
small as 8 bytes in size.
Given this, the combination of the two criteria (bitmask & checksum)
always correctly determines the buffer format, and is more tolerant
than the previous version.
References:
Android uses a 4KB window for deflation:
http://android.git.kernel.org/?p=platform/libcore.git;a=blob;f=luni/src/main/native/java_util_zip_Deflater.cpp;h=c0b2feff196e63a7b85d97cf9ae5bb2583409c28;hb=refs/heads/gingerbread#l53
Code snippet searching for false positives with the zlib checksum:
https://gist.github.com/1118177
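A hedged sketch of the combined bitmask-and-checksum test described above
(the real check lives in JGit's loose-object reading code):

  class ZlibHeaderCheck {
    static boolean looksLikeZlibDeflate(byte[] hdr) {
      int cmf = hdr[0] & 0xff; // zlib CMF byte
      int flg = hdr[1] & 0xff; // zlib FLG byte
      // Bitmask: compression method must be 8 (deflate) and the top bit of
      // CMF must be clear, which allows any window size from 8 to 15 bits.
      if ((cmf & 0x8f) != 0x08)
        return false;
      // Checksum: RFC 1950 requires (CMF * 256 + FLG) to be a multiple of 31.
      return ((cmf << 8) | flg) % 31 == 0;
    }
  }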
Throw JGit exception when ResetCommand got wrong ref
If the ResetCommand should reset to an invalid ref (e.g. HEAD in a repo
without a single commit) it was throwing an NPE. This is fixed now by
throwing a JGitInternalException. It would be nicer if we could throw
an InvalidRefException, but this would modify our API.
Bug: 339610
Change-Id: Iffcb4f2cca9f702176471d93c3a71e5cb3e700b1
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Shawn O. Pearce [Tue, 16 Aug 2011 19:32:10 +0000 (12:32 -0700)]
PackWriter: support excluding objects already in other packs
This can be useful when implementing garbage collection and there
are packs that should not be copied, such as huge packs that have
a sibling ".keep" file alongside of them.
Callers driving PackWriter need to initialize the list of packs not
to include objects from by passing each index to excludeObjects().
Change-Id: Id7f34df69df97be406bcae184308e92b0e8690fd
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
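A hedged sketch of driving that API during a GC-style repack; the parameter
type shown for excludeObjects() (a pack index) and the package locations
are assumptions taken from the wording above and the JGit of this era.

  import java.util.Collection;

  import org.eclipse.jgit.storage.file.PackIndex;
  import org.eclipse.jgit.storage.pack.PackWriter;

  class KeepAwarePacking {
    // Objects already present in a pack guarded by a ".keep" file are not copied again.
    static void excludeKeptPacks(PackWriter writer, Collection<PackIndex> keptIndexes) {
      for (PackIndex idx : keptIndexes)
        writer.excludeObjects(idx);
    }
  }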
A test was added which reproduces the ClassCastException when the ours
or theirs merge strategy is set on MergeCommand. Merger and MergeCommand
were updated in order to avoid the exception.
Change-Id: I4c1284b4e80d82638d0677a05e5d38182526d196
Signed-off-by: Denys Digtiar <duemir@gmail.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
Dariusz Luksza [Wed, 17 Aug 2011 10:43:35 +0000 (12:43 +0200)]
Adds DiffEntry.scan(TreeWalk, boolean) method
Adds a method to the DiffEntry class that allows specifying whether
changed trees are included in the scanning result list. By default
changed trees aren't added, but in some cases having the changed tree
would be useful.
Also adds a check for the tree count in the TreeWalk; when it is
different from two an IllegalArgumentException is thrown.
This change is required by egit
I7ddb21e7ff54333dd6d7ace3209bbcf83da2b219
Change-Id: I5a680a73e1cffa18ade3402cc86008f46c1da1f1
Signed-off-by: Dariusz Luksza <dariusz@luksza.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
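A usage sketch of the new overload named in the title; the TreeWalk setup
around it is ordinary JGit API, and the boolean asks for changed trees to
be included.

  import java.util.List;

  import org.eclipse.jgit.diff.DiffEntry;
  import org.eclipse.jgit.lib.Repository;
  import org.eclipse.jgit.revwalk.RevCommit;
  import org.eclipse.jgit.treewalk.TreeWalk;

  class TreeAwareDiff {
    static List<DiffEntry> diffWithTrees(Repository repo, RevCommit parent, RevCommit child)
        throws Exception {
      TreeWalk walk = new TreeWalk(repo);
      walk.addTree(parent.getTree());
      walk.addTree(child.getTree());
      walk.setRecursive(false);
      // Exactly two trees are required; 'true' also reports changed trees.
      return DiffEntry.scan(walk, true);
    }
  }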
Shawn O. Pearce [Tue, 16 Aug 2011 19:18:39 +0000 (12:18 -0700)]
PackWriter: Make want/have actual sets
During parsing these are used with contains(). If they are a List
type, the contains operation is not efficient. Some callers such
as UploadPack often pass a List here, so convert to Set when the
type isn't efficient for contains().
Change-Id: If948ae3bf1f46e756bd2d5db14795e12ba7a6207
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
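A minimal sketch of the defensive conversion described above: keep the
caller's collection when contains() is already cheap, otherwise copy it
into a HashSet once.

  import java.util.Collection;
  import java.util.HashSet;
  import java.util.Set;

  class WantHaveSets {
    static <T> Set<T> ensureSet(Collection<T> objects) {
      if (objects instanceof Set<?>)
        return (Set<T>) objects;
      // A List makes contains() O(n); one copy makes later lookups O(1).
      return new HashSet<T>(objects);
    }
  }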
* changes:
DHT: Change DhtReader caches to be dynamic by workload
DHT: Use a proper HashMap for RecentChunk lookups
DHT: Always have at least one recent chunk in DhtReader
DHT: Fix NPE during prefetch
DHT: Drop leading hash digits from row keys
The "tiny optimization" introduced by 67b0 turns out to have a big
savings on wall-clock time when the object store is very slow (e.g.
the DHT support in JGit), but comes with a much bigger penalty in
space used by the output stream. CGit packed with 67b0 enabled is
7 MiB larger than it should be (36 MiB rather than 28/29 MiB). The
much bigger Linux kernel repository gained over 200 MiB, though some
of this may have been caused by a smaller window setting.
Revert this patch as PackWriter should be optimizing for space used
rather than time spent, since its primary use is network transfer, and
that isn't free.
Change-Id: I7413a9ef89762208159b4a1adc5a22a4c9245611
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Shawn O. Pearce [Mon, 8 Aug 2011 22:11:54 +0000 (15:11 -0700)]
Speed up ObjectWalk by 6235 objects/sec
The "Counting objects" phase of packing is the most time consuming
part for any server providing access to Git repositories. Scanning
through the entire project history, including every revision of
every tree that has ever existed is expensive and takes an incredible
amount of CPU time.
Inline the tree parsing logic, unroll a number of loops, and set up
to better handle the common case of seeing another occurrence of
an object that was already marked SEEN.
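An illustrative fragment (not the actual ObjectWalk internals) of the hot
path being tuned: an object that already carries the SEEN flag is skipped
with a single bit test, before any further parsing work.

  class SeenFlagSketch {
    static final int SEEN = 1 << 0;

    static class Obj {
      int flags;
    }

    // Returns false in the common case where the object was already counted.
    static boolean markSeen(Obj o) {
      if ((o.flags & SEEN) != 0)
        return false;
      o.flags |= SEEN;
      return true;
    }
  }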
This change boosts the "Counting objects" phase when JGit is acting
as a server and is packing the linux-2.6 repository for its client.
Compared to CGit on the same hardware, a JGit daemon server is now
21883 objects/sec faster:
CGit:
Counted 2058062 objects in 38981 ms at 52796.54 objects/sec
Counted 2058062 objects in 38920 ms at 52879.29 objects/sec
Counted 2058062 objects in 39059 ms at 52691.11 objects/sec
JGit (before):
Counted 2058062 objects in 31529 ms at 65275.21 objects/sec
Counted 2058062 objects in 30359 ms at 67790.84 objects/sec
Counted 2058062 objects in 30033 ms at 68526.69 objects/sec
JGit (this commit):
Counted 2058062 objects in 28726 ms at 71644.57 objects/sec
Counted 2058062 objects in 27652 ms at 74427.24 objects/sec
Counted 2058062 objects in 27528 ms at 74762.50 objects/sec
Above the first run was a "cold server". For JGit the JVM had just
started up with `jgit daemon`, and for CGit we hadn't touched the
repository "recently" (but it was certainly in kernel buffer cache).
The second and third runs were against the running JGit JVM, allowing
timing tests to better reflect the benefits of JGit's pack and index
caching, as well as any optimizations the JIT may have performed.
The timings are fair. CGit is opening, checking and mmap'ing both
the pack and index during the timer. JGit is opening, checking
and malloc+read'ing the pack and index data into its Java heap
during the timer. Both processes are walking the same graph space,
and are computing the "path hash" necessary to sort objects in the
object table for delta compression. Since this commit only impacts
the "Counting objects" phase, delta compression was obviously not
included in the timings and JGit may still be performing delta
compression slower than CGit, resulting in an overall slower server
experience for clients.
Change-Id: Ieb184bfaed8475d6960a494b1f3c870e0382164a
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Robin Stocker [Tue, 9 Aug 2011 21:31:50 +0000 (23:31 +0200)]
Add isSuccessful to MergeStatus, RebaseResult.Status and PullResult
This is useful when the result needs to be displayed and it is only of
interest whether the operation was successful or not (in egit, it could
be used in MultiPullResultDialog).
Change-Id: Icfc9a9c76763f8a777087a1262c8d6ad251a9068
Signed-off-by: Robin Stocker <robin@nibor.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
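A hedged usage sketch; the placement of isSuccessful() on PullResult
follows the description above.

  import org.eclipse.jgit.api.Git;
  import org.eclipse.jgit.api.PullResult;

  class PullCheck {
    static boolean pullSucceeded(Git git) throws Exception {
      PullResult result = git.pull().call();
      // Callers that only need success vs. failure can branch on one flag
      // instead of enumerating every MergeStatus and RebaseResult.Status.
      return result.isSuccessful();
    }
  }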