searchForReuse might impact performance in large repositories
The search-for-reuse phase scans *all* the packfiles for *all* the
objects, looking for the best candidate to serve back to the client.
This can become an expensive operation when the number of
packfiles and objects is high.
Add parameter "pack.searchForReuseTimeout" to limit the time spent
on this search.
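A minimal sketch of setting the limit through the repository
configuration (the key matches the text above; treating the value as
seconds, and how PackConfig parses it, are assumptions):

  StoredConfig cfg = repo.getConfig();
  // Cap the search-for-reuse phase; the seconds unit here is assumed.
  cfg.setString("pack", null, "searchForReuseTimeout", "60");
  cfg.save();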
Change-Id: I54f5cddb6796fdc93ad9585c2ab4b44854fa6c48
Reason why this change is needed:
Getting this metric will help estimate how much time will be saved once
the reachability checks get optimized
What this patch does:
Measure time spent by requestValidator.checkWants() in parseWants() and save
it in an instance of PackStatistics.Accumulator.
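Roughly the shape of the measurement (sketch only; the actual
arguments come from parseWants(), and the accumulator field name below
is an assumption about how the duration is recorded):

  long start = System.currentTimeMillis();
  requestValidator.checkWants(this, wantsToCheck);
  // Hypothetical field; the elapsed time is saved in PackStatistics.Accumulator.
  accumulator.reachabilityCheckDuration = System.currentTimeMillis() - start;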
Signed-off-by: Alexa Panfil <alexapizza@google.com>
Change-Id: Id7fe4016f96549d9511a2c24052dad93cfbb31a4
Change-Id: Ifb8227cb62370029d6774f2a22b15d6478c713ca
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Add the number and size of packfiles offloaded to HTTP download to the
statistics.
Change-Id: I895a7219ecac2794368bfc4fdfae74c1238deed9
Signed-off-by: Ivan Frade <ifrade@google.com>
Wait to open a new packfile until it can no longer be racy
If
- pack.waitPreventRacyPack = true (default is false)
- packfile size > pack.minSizePreventRacyPack (default is 100 MB)
then wait, after a new packfile was written and before it is opened, until it
cannot be racy anymore.
If a new packfile is accessed while it is still racy, at least the
pack's index will be reread by ObjectDirectory.scanPacksImpl(). Hence it may
save resources to wait one tick of the file system timer to avoid this
reloading. On filesystems with a coarse timestamp resolution it may be
beneficial to skip this wait for small packfiles.
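A server that wants this behaviour could set, for example (a sketch;
whether the size value also accepts suffixed forms such as "100m" is an
assumption):

  StoredConfig cfg = repo.getConfig();
  cfg.setBoolean("pack", null, "waitPreventRacyPack", true);
  cfg.setLong("pack", null, "minSizePreventRacyPack", 100L * 1024 * 1024); // 100 MB
  cfg.save();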
Bug: 546891
Change-Id: I0e8bf3d7677a025edd2e397dd2c9134ba59b1a18
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
tree:<depth> should not traverse overly-deep trees
If we are traversing a tree which is too deep, then there is no need to
traverse the children. Skipping children is much faster than traversing
the possibly thousands of objects which are directly or indirectly
referenced by the tree.
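The idea, as a sketch (illustrative helper only, not the actual filter
code): once the configured depth is exceeded, the whole subtree is
pruned instead of being enumerated.

  void visit(ObjectReader reader, AnyObjectId tree, int depth, int maxDepth)
          throws IOException {
      if (depth > maxDepth)
          return; // children of an overly-deep tree are never traversed
      try (TreeWalk tw = new TreeWalk(reader)) {
          tw.addTree(tree);
          while (tw.next()) {
              if (tw.isSubtree())
                  visit(reader, tw.getObjectId(0), depth + 1, maxDepth);
          }
      }
  }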
Change-Id: I6d68cc1d35da48e3288b9cc80356a281ab36863d
Signed-off-by: Matthew DeVore <matvore@gmail.com>
Add fetch statistics for the counts of advertised refs, wants and haves.
Also add the duration in milliseconds for the negotiation phase. For
non-bidirectional transports like HTTP, this is the time for the final
round that sends the pack back to the user.
Change-Id: I1af7ffd3cb7b62182340682e2a243691ea24ec2e
Signed-off-by: Terry Parker <tparker@google.com>
If set, "singlePack" will create a single GC pack file for all
objects reachable from refs/*. If not set, the GC pack will contain
objects reachable from refs/heads/* and refs/tags/*, and the GC_REST
pack will contain all other reachable objects.
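A sketch of enabling it programmatically (the setter name mirrors the
option and is an assumption; the option can equally be set in the git
config):

  PackConfig pc = new PackConfig(repo);
  pc.setSinglePack(true); // assumed setter: one GC pack for everything reachable from refs/*
  gc.setPackConfig(pc);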
Change-Id: I56bcb6a9da2c10a0909c2f940c025db6f3acebcb
Signed-off-by: Terry Parker <tparker@google.com>
Enable and fix 'Should be tagged with @Override' warning
Set missingOverrideAnnotation=warning in Eclipse compiler preferences
which enables the warning:
The method <method> of type <type> should be tagged with @Override
since it actually overrides a superclass method
Justification for this warning is described in:
http://stackoverflow.com/a/94411/381622
Enabling this causes in excess of 1000 warnings across the entire
code-base. They are very easy to fix automatically with Eclipse's
"Quick Fix" tool.
Fix all of them except 2 which cause compilation failure when the
project is built with mvn; add TODO comments on those for further
investigation.
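The fix itself is mechanical; each flagged method simply gains the
annotation, e.g.:

  @Override
  public String toString() {
      return getName(); // body unchanged; only the missing annotation is added
  }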
Change-Id: I5772061041fd361fe93137fd8b0ad356e748a29c
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
gc: Add options to preserve and prune old pack files
The new --preserve-oldpacks option moves old pack files into the
preserved subdirectory instead of deleting them after repacking.
The new --prune-preserved option prunes old pack files from the
preserved subdirectory after repacking, but before potentially
moving the latest old packfiles to this subdirectory.
These options are designed to prevent stale file handle exceptions
during git operations, which can hit users of repositories on NFS when
those repositories are repacked. The strategy is to keep old pack files
around until the next repack, in the hope that they will have become
unreferenced by then and will not cause any exceptions for running
processes when they are finally deleted (pruned).
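A sketch of driving the same behaviour through the file-based GC API
(repo here is a FileRepository; the setter names mirror the options and
are assumptions):

  GC gc = new GC(repo);            // org.eclipse.jgit.internal.storage.file.GC
  gc.setPrunePreserved(true);      // drop packs preserved by the previous run first
  gc.setPreserveOldPacks(true);    // then move this run's old packs into "preserved"
  gc.gc();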
Change-Id: If3f729f0d9ce920ee2c3e6acdde46f2068be61d2
Signed-off-by: James Melvin <jmelvin@codeaurora.org>
Expose the following bitmap selection parameters via PackConfig:
"bitmapContiguousCommitCount", "bitmapRecentCommitCount",
"bitmapRecentCommitSpan", "bitmapDistantCommitSpan",
"bitmapExcessiveBranchCount", and "bitmapInactiveBranchAge".
The value of bitmapContiguousCommitCount, whereby bitmaps are
created for the most recent N commits in a branch, has never
been verified. If experiments show that they are not valuable,
then we can simplify the implementation so that there is only
a concept of recent and distant commit history (defined by
"bitmapRecentCommitCount"), and the only controls we need are
"bitmapRecentCommitSpan" and "bitmapDistantCommitSpan".
Change-Id: I288bf3f97d6fbfdfcd5dde2699eff433a7307fb9
Signed-off-by: Terry Parker <tparker@google.com>
Update bitmap selection throttling to fully span active branches.
Replace the “bitmapCommitRange” parameter that was recently introduced
with two new parameters: “bitmapExcessiveBranchCount” and
“bitmapInactiveBranchAgeInDays”. If the count of branches does not
exceed “bitmapExcessiveBranchCount”, then the current algorithm is kept
for all branches.
If the branch count is excessive, then the commit time for the tip
commit for each branch is used to determine if a branch is “inactive”.
"Active" branches get full commit selection using the existing
algorithm. "Inactive" branches get fewer bitmaps near the branch tips.
Introduce a "contiguousCommitCount" parameter that always enforces that
the N most recent commits in a branch are selected for bitmaps. The
previous nextSelectionDistance() algorithm created anywhere from 1-100
contiguous bitmaps at branch tips.
For example, consider a branch with commits numbering 0-300, with 0
being the most recent commit. If the most recent 200 commits are not
merge commits and the 200th commit was the last one selected,
nextSelectionDistance() returned 100, causing commits 200-101 to be
ignored. Then a window of size 100 was evaluated, searching for merge
commits. Since no merge commits are found, the next commit (commit 0)
was selected, for a total of 1 commit in the topmost 100 commits.
If instead the 250th commit was selected, then by the same logic
commit 50 is selected. At that point nextSelectionDistance() switches to
selecting consecutive commits, so commits 0-50 in the topmost 100
commits are selected. The "contiguousCommitCount" parameter provides
more determinism by always selecting a constant number of topmost
commits.
Add an optimization to break out of the inner loop of selectCommits() if
all of the commits for the current branch have already been found.
When reusing bitmaps from an existing pack, remove unnecessary
populating and clearing of the writeBitmaps/PackBitmapIndexBuilder.
Add comments to PackWriterBitmapPreparer, rename methods and variables
for readability.
Add tests for bitmap selection with and without merge commits and with
excessive branch pruning triggered.
Note: I will follow up with an additional change that exposes the new
parameters through PackConfig.
Change-Id: I5ccbb96c8849f331c302d9f7840e05f9650c4608
Signed-off-by: Terry Parker <tparker@google.com>
Limit the range of commits for which bitmaps are created.
A bitmap index contains bitmaps for a set of commits in a pack file.
Creating a bitmap for every commit is too expensive, so heuristics
select the most "important" commits. The most recent commits are the
most valuable. To clone a repository, only those for the branch tips are
needed. When fetching, only commits since the last fetch are needed.
The commit selection heuristics generally work, but for some
repositories the number of selected commits is prohibitively high. One
example is the MSM 3.10 Linux kernel. With over 1 million commits on
2820 branches, the current heuristics resulted in ~36k selected commits.
Each uncompressed bitmap for that repository is ~413k, making it
difficult to complete a GC operation in available memory.
The benefit of creating bitmaps over the entire history of a repository
like the MSM 3.10 Linux kernel isn't clear. For that repository, most
history for the last year appears to be in the last 100k commits.
Limiting bitmap commit selection to just those commits reduces the count
of selected commits from ~36k to ~10.5k. Dropping bitmaps for older
commits does not affect object counting times for clones or for fetches
on clients that are reasonably up-to-date.
This patch defines a new "bitmapCommitRange" PackConfig parameter to
limit the commit selection process when building bitmaps. The range
starts with the most recent commit and walks backwards. A range of 10k
considers only the 10000 most recent commits. A range of zero creates
bitmaps only for branch tips. A range of -1 (the default) does not limit
the range--all commits in the pack are used in the commit selection
process.
Change-Id: Ied92c70cfa0778facc670e0f14a0980bed5e3bfb
Signed-off-by: Terry Parker <tparker@google.com>
Root commits are commits with zero parents. If a commit has no
parents, it is the first commit in the repository. In general the root
commits should be unique for any given project, as the first commit
will be created at a different time, by a different user, and with its
own message. These root commits can be used as a "fingerprint" to
identify disjoint histories.
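A minimal sketch of collecting that fingerprint with a RevWalk (the
start ref is illustrative):

  Set<ObjectId> roots = new HashSet<>();
  try (RevWalk rw = new RevWalk(repo)) {
      rw.markStart(rw.parseCommit(repo.resolve("refs/heads/master")));
      for (RevCommit c : rw) {
          if (c.getParentCount() == 0)
              roots.add(c.copy()); // a root commit: part of the history fingerprint
      }
  }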
Change-Id: Id891dbc1f17c816cea404569578bb7635ff85cdb
Introduce PostUploadHook to replace UploadPackLogger
UploadPackLogger is incorrectly named--it can be used to trigger any
post upload action, such as GC/compaction. This change introduces
PostUploadHook/PostUploadHookChain to replace
UploadPackLogger/UploadPackLoggerChain and deprecates the latter.
It also introduces PackStatistics as a replacement for
PackWriter.Statistics, since the latter is not public API.
It changes PackWriter to use PackStatistics and reimplements
PackWriter.Statistics to delegate to PackStatistics.
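A hook can now be as small as the following sketch (the logging call is
illustrative):

  uploadPack.setPostUploadHook(new PostUploadHook() {
      @Override
      public void onPostUpload(PackStatistics stats) {
          System.out.println("wrote " + stats.getTotalBytes() + " bytes in "
                  + stats.getTimeTotal() + " ms");
      }
  });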
Change-Id: Ic51df1613e471f568ffee25ae67e118425b38986
Signed-off-by: Terry Parker <tparker@google.com>
Support cutting existing delta chains longer than the max depth
Some packs built by JGit have incredibly long delta chains due to a
long-standing bug in PackWriter. Google has packs created by JGit's
DfsGarbageCollector with chains 6000 objects long, or more.
Inflating objects at the end of such a 6000-long chain cannot be
completed within a reasonable time bound. It could take a beefy
system hours to perform even using the heavily optimized native C
implementation of Git, let alone with JGit.
Enable pack.cutDeltaChains to be set in a configuration file to
permit the PackWriter to determine the length of each delta chain
and clip the chain at arbitrary points to fit within pack.depth.
Delta chain cycles are still possible, but no attempt is made to
detect them. A trivial chain of A->B->A will iterate for the full
pack.depth configured limit (e.g. 50) and then pick an object to
store as non-delta.
When cutting chains the object list is walked in reverse to try
and take advantage of existing chain computations. The assumption
here is most deltas are near the end of the list, and their bases
are near the front of the list. Going up from the tail attempts to
reuse chainLength computations by relying on the memoized value in
the delta base.
The chainLength field in ObjectToPack is overloaded into the depth
field normally used by DeltaWindow. This is acceptable because the
chain cut happens before delta search, and the chainLength is reset
to 0 if delta search will follow.
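Enabling the behaviour is a one-line configuration change, e.g. (sketch):

  StoredConfig cfg = repo.getConfig();
  cfg.setBoolean("pack", null, "cutDeltaChains", true); // clip reused chains at pack.depth
  cfg.save();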
Change-Id: Ida4fde9558f3abbbb77ade398d2af3941de9c812
JGit 3.0: move internal classes into an internal subpackage
This breaks all existing callers once. Applications are not supposed
to build against the internal storage API unless they can accept API
churn and make necessary updates as versions change.
Change-Id: I2ab1327c202ef2003565e1b0770a583970e432e9
This is helpful for writing the pack configuration into a log file.
Change-Id: I5e7f5ff7e01c9538ca12a1860844ba9b467bdf05
Signed-off-by: Edwin Kempin <edwin.kempin@sap.com>
The bitmap code in PackWriter knows exactly when to use a pack as
a "cached pack". It enables cached pack usage only when the pack
has a bitmap and its entire closure of objects needs to be sent.
This is a much simpler code path to maintain, and JGit actually
has a way to write the necessary index.
Change-Id: I2645d482f8733fdf0c4120cc59ba9aa4d4ba6881
Bitmaps provide a huge performance boost for counting objects and they
play nicely with the cgit implementation.
Change-Id: I33b05a6c8f1ee2df7770f0b9fdc50d0b4bbf1029
Enable writing pack indexes with bitmaps in the GC.
Update the dfs and file GC implementations to prepare and write
bitmaps on the packs that contain the full closure of the object
graph. Update the DfsPackDescription to include the index version.
Change-Id: I3f1421e9cd90fe93e7e2ef2b8179ae2f1ba819ed
Support creating pack bitmap indexes in PackWriter.
Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected at commits every 1 to 5,000 commits for
each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmaps every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than one parent are preferred over ones
with one or fewer. Furthermore, previously computed bitmaps are reused,
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the max allowed distance.
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN, when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given the collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
For clone requests, we get cached pack support for "free" since
it is possible to determine if all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
  Operation                    Index V2                Index VE003
  Clone                        37530ms (524.06 MiB)       82ms (524.06 MiB)
  Fetch (1 commit back)           75ms                   107ms
  Fetch (10 commits back)        456ms (269.51 KiB)      341ms (265.19 KiB)
  Fetch (100 commits back)       449ms (269.91 KiB)      337ms (267.28 KiB)
  Fetch (1000 commits back)     2229ms ( 14.75 MiB)      189ms ( 14.42 MiB)
  Fetch (10000 commits back)    2177ms ( 16.30 MiB)      254ms ( 15.88 MiB)
  Fetch (100000 commits back)  14340ms (185.83 MiB)     1655ms (189.39 MiB)
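A rough sketch of the writer-side flow on top of the PackWriter API
(method names and call order are my assumptions; monitor is e.g.
NullProgressMonitor.INSTANCE):

  PackWriter pw = new PackWriter(new PackConfig(repo), repo.newObjectReader());
  pw.preparePack(monitor, wants, haves);   // counting phase; bitmap-accelerated when a .bitmap exists
  pw.writePack(monitor, monitor, packOut);
  pw.writeIndex(idxOut);
  if (pw.prepareBitmapIndex(monitor))      // select commits and build their bitmaps
      pw.writeBitmapIndex(bitmapOut);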
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
A pack bitmap index is an additional index of compressed
bitmaps of the object graph. Furthermore, a logical API of the index
functionality is included, as it is expected to be used by the
PackWriter.
Compressed bitmaps are created using the javaewah library, which
provides a word-aligned compressed variant of the Java BitSet class
based on run-length encoding. The library only works with positive integer
values. Thus, the maximum number of ObjectIds in a pack file that
this index can currently support is limited to Integer.MAX_VALUE.
Every ObjectId is given an integer mapping. The integer is the
position of the ObjectId in the complete ObjectId list, sorted
by offset, for the pack file. That integer is what the bitmaps
use to reference the ObjectId. Currently, the new index format can
only be used with pack files that contain a complete closure of the
object graph, e.g. the result of a garbage collection.
The index file includes four bitmaps for the Git object types, i.e.
commits, trees, blobs, and tags. In addition, a collection of
bitmaps keyed by an ObjectId is also included. The bitmap for each entry
in the collection represents the full closure of ObjectIds reachable
from the keyed ObjectId (including the keyed ObjectId itself). The
bitmaps are further compressed by XORing the current bitmaps against
prior bitmaps in the index, and selecting the smallest representation.
The XOR'd bitmap, together with the offset from the current entry to
the entry it was XOR'd against, is the actual representation of the
entry in the index file. Each entry contains one byte, which is currently
used to note whether the bitmap should be blindly reused.
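The XOR step itself is straightforward with javaewah; a sketch of the
size comparison (variable names illustrative):

  EWAHCompressedBitmap xored = current.xor(prior);
  // Store whichever form is smaller, plus the offset back to the entry XOR'd against.
  boolean useXor = xored.sizeInBytes() < current.sizeInBytes();
  EWAHCompressedBitmap stored = useXor ? xored : current;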
Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
Break the dependency on RevObject when creating a newObjectToPack().
Update the ObjectReuseAsIs API to support creating new
ObjectToPack with only the AnyObjectId and Git object type. This is
needed to support the future pack index bitmaps, which only contain
this information and do not want the overhead of creating a temporary
object for every ObjectId.
Change-Id: I906360b471412688bf429ecef74fd988f47875dc
Include supported extensions in PackFile constructor.
Previously the PackFile class was assumed to support only a .pack and
.idx file. Update the constructor to enumerate the supported extensions for
the pack file. This will allow the bitmap code to only be executed if
the bitmap extension file is known to exist.
Change-Id: Ie59041dffec5f60d7ea2771026ffd945106bd4bf
Reduce memory held and speed up DfsGarbageCollector.
getObjectList() returns a list of ObjectToPack. These can hold on to a
lot of memory. Furthermore, binary searching for objects in a sorted
array can be slow. Improve the speed and reduce the memory by creating a
copy of the ObjectId and inserting it into an ObjectIdOwnerMap.
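Roughly (a sketch; the entry subclass here is illustrative):

  static class PackedId extends ObjectIdOwnerMap.Entry {
      PackedId(AnyObjectId id) {
          super(id); // copies the id bits; nothing else from ObjectToPack is retained
      }
  }

  ObjectIdOwnerMap<PackedId> packedIds = new ObjectIdOwnerMap<PackedId>();
  for (ObjectToPack otp : getObjectList())   // getObjectList() as referenced above
      packedIds.add(new PackedId(otp));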
Change-Id: Ib5aa5b7447e05938b47fa55812a87b9872c20ea7
Update DfsGarbageCollector to not read back a pack index.
Previously, the Dfs GC excluded objects from packs by passing a
previously written index to the PackWriter. Reading back a file on
Dfs is slow. Instead, allow the PackWriter to expose the objects
included in a pack and forward that to invocations of excludeObjects().
Change-Id: I377cb4ab07f62cf790505e1eeb0b2efe81897c79
Rename PackConstants to PackExt, a typed pack file extension.
PackConstants previously contained string values for the pack and pack
index extensions. Change PackConstants to PackExt, a typed wrapper
around the pack file extension string.
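After the change, call sites work with typed constants instead of bare
strings, e.g. (sketch):

  String packName = base + "." + PackExt.PACK.getExtension();   // "pack"
  String idxName  = base + "." + PackExt.INDEX.getExtension();  // "idx"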
Change-Id: I86ac4db6da8f33aa42d6f37cfcc119e819444318
Update DfsObjDatabase API to open/write by pack extension.
Previously, the DfsObjDatabase had hardcoded getPackFile() and
getPackIndex() methods which open a .pack and .idx file, respectively.
A future change will add a bitmap index, which will need to be stored
in a parallel .bitmap file. Update the DfsObjDatabase to support opening and
writing of files for any pack extension.
Change-Id: I7c403b501e242096a2d435f6865d6025a9f86108
The maxMemory limit for a DeltaWindow is optionally disabled when it is
set to a value less than or equal to zero. Respect this configuration when enforcing
the limits on object load.
Change-Id: Ic0f4ffcabf82105f8e690bd0eb5e6be485a313b3
Previously, memory limits were enforced at the start of each iteration
of the delta search, based on objects that were currently loaded in
memory. However, new objects added to the window may be expanded in a
future iteration of the search and thus were not accounted for correctly
at the start of the search. To fix this, memory limits are now enforced
before each object is loaded.
Change-Id: I898ab43e7bf5ee7189831f3a68bb9385ae694b8f
A few classes such as Constants are marked with @SuppressWarnings, as are
toString() methods with many literals, but otherwise $NON-NLS-n$ is used for
strings containing text that should not be translated. A few literals may
fall into the gray zone, but mostly I've tried to only tag the obvious
ones.
Change-Id: I22e50a77e2bf9e0b842a66bdf674e8fa1692f590
Fix DeltaWindow.clear() to release loaded buffer bytes.
It is possible for the buffer to be set but not the index. This
occurs when an exception is thrown while creating an index, but
after the buffer is loaded. Furthermore, the cleared DeltaWindowEntry
should have been ent and not res.
Change-Id: I2e0d79540316635bf7aa43efd225e4eb38230844
These appear as descriptions in the index, see here (currently empty):
http://download.eclipse.org/jgit/docs/latest/apidocs/
Change-Id: If7996deef30ae688bade8b3ad6b19547ca3d8b50
Signed-off-by: Chris Aniszczyk <zx@twitter.com>
Do not delta compress objects that have already tried to compress.
If an object is in a pack file already, delta compression will not
attempt to re-compress it. This assumes that the previous
packing already performed the optimal compression attempt; however,
subclasses of StoredObjectRepresentation may use other heuristics
to determine if the stored format is optimal.
Change-Id: I403de522f4b0dd2667d54f6faed621f392c07786
PackWriter supports excluding objects from being written to the pack.
You may specify a PackIndex which lists all those objects which should
not go into the new pack. This feature was broken because not all
commits were checked for whether they should be excluded or not. For
other object types the exclude algorithm worked. This commit adds the
missing check.
Change-Id: Id0047098393641ccba784c58b8325175c22fcece
Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Use Integer, Character, and Long valueOf methods when
passing parameters to MessageFormat and other places
that expect objects instead of primitives.
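For example (illustrative message text):

  String msg = MessageFormat.format("Wrote {0} objects in {1} ms",
          Integer.valueOf(objectCount), Long.valueOf(elapsedMillis));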
Change-Id: I5942fbdbca6a378136c00d951ce61167f2366ca4