Support creating pack bitmap indexes in PackWriter.
Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected at commits spaced every 1 to 5,000 commits apart for
each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmap every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than one parent are preferred over ones
with one or fewer. Furthermore, previously computed bitmaps are reused
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the maximum allowed distance.
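The spacing rule above can be sketched as a small helper. The method name and structure are illustrative only; JGit's actual selection logic also weighs unique paths, merge commits, and the reuse flag.

```java
// Illustrative sketch of the bitmap-selection spacing described above.
// spacingAtDepth is a hypothetical helper, not a real JGit method.
public class BitmapSpacing {
    /** Distance between bitmapped commits at a given depth from the tip. */
    static int spacingAtDepth(int depth) {
        if (depth < 100)
            return 1;       // most recent 100 commits: bitmap every commit
        if (depth < 100 + 19000)
            return 100;     // next 19,000 commits: bitmap every 100 commits
        return 5000;        // remaining history: bitmap every 5,000 commits
    }

    public static void main(String[] args) {
        System.out.println(spacingAtDepth(50));     // 1
        System.out.println(spacingAtDepth(5000));   // 100
        System.out.println(spacingAtDepth(500000)); // 5000
    }
}
```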
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given a collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
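The wants-AND-NOT-haves step can be sketched with plain java.util.BitSet standing in for javaewah's compressed bitmaps; the set algebra is the same, only the storage differs.

```java
import java.util.BitSet;

// Sketch of the counting-phase set algebra described above, using
// java.util.BitSet as a stand-in for javaewah's compressed bitmaps.
// Bit i represents the i-th ObjectId in the pack's offset-sorted list.
public class CountingPhase {
    /** Objects to send = closure(wants) AND NOT closure(haves). */
    static BitSet objectsToWrite(BitSet wants, BitSet haves) {
        BitSet result = (BitSet) wants.clone();
        result.andNot(haves);
        return result;
    }

    public static void main(String[] args) {
        BitSet wants = new BitSet();
        wants.set(0, 8);     // client wants objects 0..7
        BitSet haves = new BitSet();
        haves.set(0, 5);     // client already has objects 0..4
        System.out.println(objectsToWrite(wants, haves)); // {5, 6, 7}
    }
}
```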
For clone requests, we get cached pack support for "free", since
it is possible to determine whether all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the Linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
  Operation                    Index V2               Index VE003
  Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
  Fetch (1 commit back)           75ms                  107ms
  Fetch (10 commits back)        456ms (269.51 KiB)    341ms (265.19 KiB)
  Fetch (100 commits back)       449ms (269.91 KiB)    337ms (267.28 KiB)
  Fetch (1000 commits back)     2229ms ( 14.75 MiB)    189ms ( 14.42 MiB)
  Fetch (10000 commits back)    2177ms ( 16.30 MiB)    254ms ( 15.88 MiB)
  Fetch (100000 commits back)  14340ms (185.83 MiB)   1655ms (189.39 MiB)
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
blame: Compute the origin of lines in a result file
BlameGenerator digs through history and discovers the origin of each
line of some result file. BlameResult consumes the stream of regions
created by the generator and lays them out in a table for applications
to display alongside source lines.
Applications may optionally push in the working tree copy of a file
using the push(String, byte[]) method, allowing the application to
receive accurate line annotations for the working tree version. Lines
that are uncommitted (difference between HEAD and working tree) will
show up with the description given by the application as the author,
or "Not Committed Yet" as a default string.
Applications may also run the BlameGenerator in reverse mode using the
reverse(AnyObjectId, AnyObjectId) method instead of push(). When
running in reverse mode the generator annotates lines by the
commit they are removed in, rather than the commit they were added in.
This allows a user to discover where a line disappeared from when they
are looking at an older revision in the repository. For example:
  blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
  ( 1080) }
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083)  * Kick the timestamp of a local file.
Above we learn that line 1080 (a closing curly brace of the prior
method) still exists in branch master, but the Javadoc comment below
it was removed by Christian Halstrick on May 20th as part of
commit 2302a6d3. This result differs considerably from that of C
Git's blame --reverse feature. JGit tells the reader which commit
performed the delete, while C Git tells the reader the last commit
that still contained the line, leaving it as an exercise for the reader
to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably it is
missing support for the smart block copy/move detection that the C
implementation of `git blame` is well known for. Despite being
incremental, the BlameGenerator can only be run once. After the
generator runs it cannot be reused. A better implementation would
support applications browsing through history efficiently.
With regard to CQ 5110, only a little of the original code survives.
CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
PackWriter: Support reuse of entire packs
The most expensive part of packing a repository for transport to
another system is enumerating all of the objects in the repository.
Once this gets to the size of the linux-2.6 repository (1.8 million
objects), enumeration can take several CPU minutes and costs a lot
of temporary working set memory.
Teach PackWriter to efficiently reuse an existing "cached pack"
by answering a clone request with a thin pack followed by a larger
cached pack appended to the end. This requires the repository
owner to first construct the cached pack by hand, and record the
tip commits inside of $GIT_DIR/objects/info/cached-packs:
  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
    chmod a-w $tmp-$n.pack $tmp-$n.idx
    touch objects/pack/pack-$n.keep
    mv $tmp-$n.pack objects/pack/pack-$n.pack
    mv $tmp-$n.idx objects/pack/pack-$n.idx
  done
  (echo "+ $root";
   for n in $names; do echo "P $n"; done;
   echo) >>objects/info/cached-packs
  git repack -a -d
When a clone request needs to include $root, the corresponding
cached pack will be copied as-is, rather than enumerating all of
the objects that are reachable from $root.
For a linux-2.6 kernel repository that should be about 376 MiB,
the above process creates two packs of 368 MiB and 38 MiB[1].
This is a local disk usage increase of ~26 MiB, due to reduced
delta compression between the large cached pack and the smaller
recent activity pack. The overhead is similar to 1 full copy of
the compressed project sources.
With this cached pack in hand, JGit daemon completes a clone request
in 1m17s less time, but a slightly larger data transfer (+2.39 MiB):
Before:
remote: Counting objects: 1861830, done
remote: Finding sources: 100% (1861830/1861830)
remote: Getting sizes: 100% (88243/88243)
remote: Compressing objects: 100% (88184/88184)
Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
Resolving deltas: 100% (1564621/1564621), done.
real 3m19.005s
After:
remote: Counting objects: 1601, done
remote: Counting objects: 1828460, done
remote: Finding sources: 100% (50475/50475)
remote: Getting sizes: 100% (18843/18843)
remote: Compressing objects: 100% (7585/7585)
remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
Resolving deltas: 100% (1559477/1559477), done.
real 2m2.938s
Repository owners can periodically refresh their cached packs by
repacking their repository, folding all newer objects into a larger
cached pack. Since repacking is already considered to be a normal
Git maintenance activity, this isn't a very big burden.
[1] In this test $root was set back about two weeks.
Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added read/write support for pack bitmap index.
A pack bitmap index is an additional index of compressed
bitmaps of the object graph. Furthermore, a logical API of the index
functionality is included, as it is expected to be used by the
PackWriter.
Compressed bitmaps are created using the javaewah library, which is a
word-aligned compressed variant of the Java bitset class based on
run-length encoding. The library only works with positive integer
values. Thus, the maximum number of ObjectIds in a pack file that
this index can currently support is limited to Integer.MAX_VALUE.
Every ObjectId is given an integer mapping. The integer is the
position of the ObjectId in the pack file's complete ObjectId list,
sorted by offset. That integer is what the bitmaps
use to reference the ObjectId. Currently, the new index format can
only be used with pack files that contain a complete closure of the
object graph, e.g. the result of a garbage collection.
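The position-by-offset mapping can be sketched as follows; plain strings and long values stand in for real ObjectIds and pack offsets.

```java
import java.util.*;

// Sketch of the ObjectId -> integer mapping described above: each object
// gets the index of its position in the pack's offset-sorted object list.
// Real JGit maps ObjectIds; strings stand in here for illustration.
public class OffsetMapping {
    static Map<String, Integer> mapByOffset(Map<String, Long> offsets) {
        List<String> ids = new ArrayList<>(offsets.keySet());
        ids.sort(Comparator.comparingLong(offsets::get)); // sort by pack offset
        Map<String, Integer> position = new HashMap<>();
        for (int i = 0; i < ids.size(); i++)
            position.put(ids.get(i), i);                  // bit i == this object
        return position;
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("commit-a", 4096L);
        offsets.put("tree-b", 12L);
        offsets.put("blob-c", 900L);
        Map<String, Integer> pos = mapByOffset(offsets);
        System.out.println(pos.get("tree-b"));   // 0
        System.out.println(pos.get("blob-c"));   // 1
        System.out.println(pos.get("commit-a")); // 2
    }
}
```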
The index file includes four bitmaps for the Git object types i.e.
commits, trees, blobs, and tags. In addition, a collection of
bitmaps keyed by an ObjectId is also included. The bitmap for each entry
in the collection represents the full closure of ObjectIds reachable
from the keyed ObjectId (including the keyed ObjectId itself). The
bitmaps are further compressed by XORing the current bitmap against
prior bitmaps in the index, and selecting the smallest representation.
The XOR'd bitmap, together with the offset from the current entry to
the entry whose bitmap it was XORed against, is the actual representation
of the entry in the index file. Each entry contains one byte, which is
currently used to note whether the bitmap should be blindly reused.
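The XOR-and-pick-smallest selection might be sketched like this, with java.util.BitSet standing in for javaewah and cardinality() as a rough proxy for compressed size (EWAH's run-length coding is what actually makes sparse XOR deltas cheap):

```java
import java.util.BitSet;

// Sketch of the XOR compression described above: try XORing the current
// bitmap against each prior bitmap and keep whichever result is smallest.
// BitSet stands in for javaewah; cardinality() loosely models size.
public class XorSelection {
    static BitSet xorOf(BitSet current, BitSet prior) {
        BitSet x = (BitSet) current.clone();
        x.xor(prior);
        return x;
    }

    /** Index of the prior bitmap giving the smallest XOR delta, or -1 to store as-is. */
    static int bestXorBase(BitSet current, BitSet[] priors) {
        int best = -1;
        int bestCost = current.cardinality();   // cost of storing un-XORed
        for (int i = 0; i < priors.length; i++) {
            int cost = xorOf(current, priors[i]).cardinality();
            if (cost < bestCost) {
                bestCost = cost;
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        BitSet parent = new BitSet();
        parent.set(0, 1000);                    // parent reaches objects 0..999
        BitSet child = (BitSet) parent.clone();
        child.set(1000, 1005);                  // child adds five new objects
        BitSet unrelated = new BitSet();
        unrelated.set(5000, 5100);
        // XOR against the parent leaves only 5 bits, so index 1 wins:
        System.out.println(bestXorBase(child, new BitSet[] { unrelated, parent })); // 1
    }
}
```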
Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
Implement similarity based rename detection
Content similarity based rename detection is performed only after
a linear time detection is performed using exact content match on
the ObjectIds. Any names which were paired up during that exact
match phase are excluded from the inexact similarity based rename,
which reduces the space that must be considered.
During rename detection two entries cannot be marked as a rename
if they are different types of files. This prevents a symlink from
being renamed to a regular file, even if their blob content appears
to be similar, or is identical.
Efficiently comparing two files is performed by building up two
hash indexes and hashing lines or short blocks from each file,
counting the number of bytes that each line or block represents.
Instead of using a standard java.util.HashMap, we use a custom
open hashing scheme similar to what we use in ObjectIdSubclassMap.
This permits us to have a very lightweight hash, with very little
memory overhead per cell stored.
As we only need two ints per record in the map (line/block key and
number of bytes), we collapse them into a single long inside of
a long array, making very efficient use of available memory when
we create the index table. We only need object headers for the
index structure itself, and the index table, but not per-cell.
This offers a massive space savings over using java.util.HashMap.
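The two-ints-in-one-long packing can be sketched as follows; the exact field layout here is illustrative, not necessarily the one SimilarityIndex uses.

```java
// Sketch of the two-ints-in-one-long packing described above: the
// line/block key in the upper 32 bits, the byte count in the lower 32.
// The layout is illustrative; JGit's SimilarityIndex defines its own.
public class PackedCell {
    static long pack(int key, int bytes) {
        return ((long) key << 32) | (bytes & 0xFFFFFFFFL);
    }

    static int keyOf(long cell)   { return (int) (cell >>> 32); }
    static int bytesOf(long cell) { return (int) cell; }

    public static void main(String[] args) {
        long cell = pack(0x1234, 4096);
        System.out.println(keyOf(cell));   // 4660
        System.out.println(bytesOf(cell)); // 4096
    }
}
```

Because each record is one primitive long in a long[], the only object headers paid for are the array and the index structure itself, which is the space saving the message describes.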
The score calculation is done by approximating how many bytes are
the same between the two inputs (which for a delta would be how much
is copied from the base into the result). The score is derived by
dividing the approximate number of bytes in common by the length
of the larger of the two input files.
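A minimal sketch of that score formula, scaled to the 0..100 range that rename scores use (the empty-file convention here is an assumption):

```java
// Sketch of the score formula described above: common bytes divided by
// the size of the larger input, scaled to 0..100 like rename scores.
public class RenameScore {
    static int score(long commonBytes, long sizeA, long sizeB) {
        long larger = Math.max(sizeA, sizeB);
        if (larger == 0)
            return 100;    // assumption: two empty files count as identical
        return (int) (commonBytes * 100 / larger);
    }

    public static void main(String[] args) {
        System.out.println(score(900, 1000, 950)); // 90
        System.out.println(score(0, 1000, 950));   // 0
    }
}
```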
Right now the SimilarityIndex table should average about 1/2 full,
which means we waste about 50% of our memory on empty entries
after we are done indexing a file and sort the table's contents.
If memory becomes an issue we could discard the table and copy all
records over to a new array that is properly sized.
Building the index requires O(M + N log N) time, where M is the
size of the input file in bytes, and N is the number of unique
lines/blocks in the file. The N log N time constraint comes
from the sort of the index table that is necessary to perform
linear time matching against another SimilarityIndex created for
a different file.
To actually perform the rename detection, a SxD matrix is created,
placing the sources (aka deletions) along one dimension and the
destinations (aka additions) along the other. A simple O(S x D)
loop examines every cell in this matrix.
A SimilarityIndex is built along the row and reused for each
column compare along that row, avoiding the costly index rebuild
at the row level. A future improvement would be to load a smaller
square matrix into SimilarityIndexes and process everything in that
sub-matrix before discarding the column dimension and moving down
to the next sub-matrix block along that same grid of rows.
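The row-reuse pattern above can be sketched with a toy similarity measure; character multisets stand in for the real SimilarityIndex, but the loop shape (one index built per row, reused across every column) is the point.

```java
import java.util.*;

// Sketch of the S x D similarity matrix walk described above. The source
// "index" is built once per row and reused across every column in that
// row. Character multisets stand in for JGit's real SimilarityIndex.
public class RenameMatrix {
    static Map<Character, Integer> index(String content) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : content.toCharArray())
            counts.merge(c, 1, Integer::sum);
        return counts;
    }

    static int common(Map<Character, Integer> a, Map<Character, Integer> b) {
        int shared = 0;
        for (Map.Entry<Character, Integer> e : a.entrySet())
            shared += Math.min(e.getValue(), b.getOrDefault(e.getKey(), 0));
        return shared;
    }

    /** For each deleted file, the index of its best-matching added file. */
    static int[] bestMatches(String[] deleted, String[] added) {
        int[] best = new int[deleted.length];
        for (int s = 0; s < deleted.length; s++) {
            Map<Character, Integer> row = index(deleted[s]); // built once per row
            int bestScore = -1;
            for (int d = 0; d < added.length; d++) {         // reused each column
                int score = common(row, index(added[d]));
                if (score > bestScore) {
                    bestScore = score;
                    best[s] = d;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String[] deleted = { "hello world", "abcdef" };
        String[] added = { "abcxyz", "hello there world" };
        System.out.println(Arrays.toString(bestMatches(deleted, added))); // [1, 0]
    }
}
```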
An optional ProgressMonitor is permitted to be passed in, allowing
applications to see the progress of the detector as it works through
the matrix cells. This provides some indication of current status
for very long running renames.
The default line/block hash function used by the SimilarityIndex
may not be optimal, and may produce too many collisions. It is
borrowed from RawText's hash, which is used to quickly skip out of
a longer equality test if two lines have different hash values.
We may need to refine this hash in the future, in order to minimize
the number of collisions we get on common source files.
Based on a handful of test commits in JGit (especially my own
recent rename repository refactoring series), this rename detector
produces output that is very close to C Git. The content similarity
scores are sometimes off by 1%, which is most probably caused by
our SimilarityIndex type using a different hash function than C
Git uses when it computes the delta size between any two objects
in the rename matrix.
Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
- abbreviationLengthMustBeNonNegative=Abbreviation length must not be negative.
- abortingRebase=Aborting rebase: resetting to {0}
- abortingRebaseFailed=Could not abort rebase
- abortingRebaseFailedNoOrigHead=Could not abort rebase since ORIG_HEAD is null
- advertisementCameBefore=advertisement of {0}^{} came before {1}
- advertisementOfCameBefore=advertisement of {0}^{} came before {1}
- amazonS3ActionFailed={0} of ''{1}'' failed: {2} {3}
- amazonS3ActionFailedGivingUp={0} of ''{1}'' failed: Giving up after {2} attempts.
- ambiguousObjectAbbreviation=Object abbreviation {0} is ambiguous
- aNewObjectIdIsRequired=A NewObjectId is required.
- anExceptionOccurredWhileTryingToAddTheIdOfHEAD=An exception occurred while trying to add the Id of HEAD
- anSSHSessionHasBeenAlreadyCreated=An SSH session has been already created
- applyingCommit=Applying {0}
- archiveFormatAlreadyAbsent=Archive format already absent: {0}
- archiveFormatAlreadyRegistered=Archive format already registered: {0}
- argumentIsNotAValidCommentString=Invalid comment: {0}
- atLeastOnePathIsRequired=At least one path is required.
- atLeastOnePatternIsRequired=At least one pattern is required.
- atLeastTwoFiltersNeeded=At least two filters needed.
- authenticationNotSupported=authentication not supported
- badBase64InputCharacterAt=Bad Base64 input character at {0} : {1} (decimal)
- badEntryDelimiter=Bad entry delimiter
- badEntryName=Bad entry name: {0}
- badEscape=Bad escape: {0}
- badGroupHeader=Bad group header
- badObjectType=Bad object type: {0}
- badSectionEntry=Bad section entry: {0}
- bareRepositoryNoWorkdirAndIndex=Bare Repository has neither a working tree, nor an index
- base64InputNotProperlyPadded=Base64 input not properly padded.
- baseLengthIncorrect=base length incorrect
- bitmapMissingObject=Bitmap at {0} is missing {1}.
- bitmapsMustBePrepared=Bitmaps must be prepared before they may be written.
- blameNotCommittedYet=Not Committed Yet
- blobNotFound=Blob not found: {0}
- blobNotFoundForPath=Blob not found: {0} for path: {1}
- branchNameInvalid=Branch name {0} is not allowed
- buildingBitmaps=Building bitmaps
- cachedPacksPreventsIndexCreation=Using cached packs prevents index creation
- cachedPacksPreventsListingObjects=Using cached packs prevents listing objects
- cannotBeCombined=Cannot be combined.
- cannotBeRecursiveWhenTreesAreIncluded=TreeWalk shouldn't be recursive when tree objects are included.
- cannotChangeActionOnComment=Cannot change action on comment line in git-rebase-todo file, old action: {0}, new action: {1}.
- cannotChangeToComment=Cannot change a non-comment line to a comment line.
- cannotCombineSquashWithNoff=Cannot combine --squash with --no-ff.
- cannotCombineTreeFilterWithRevFilter=Cannot combine TreeFilter {0} with RevFilter {1}.
- cannotCommitOnARepoWithState=Cannot commit on a repo with state: {0}
- cannotCommitWriteTo=Cannot commit write to {0}
- cannotConnectPipes=cannot connect pipes
- cannotConvertScriptToText=Cannot convert script to text
- cannotCreateConfig=cannot create config
- cannotCreateDirectory=Cannot create directory {0}
- cannotCreateHEAD=cannot create HEAD
- cannotCreateIndexfile=Cannot create an index file with name {0}
- cannotDeleteCheckedOutBranch=Branch {0} is checked out and can not be deleted
- cannotDeleteFile=Cannot delete file: {0}
- cannotDeleteStaleTrackingRef=Cannot delete stale tracking ref {0}
- cannotDeleteStaleTrackingRef2=Cannot delete stale tracking ref {0}: {1}
- cannotDetermineProxyFor=Cannot determine proxy for {0}
- cannotDownload=Cannot download {0}
- cannotExecute=cannot execute: {0}
- cannotGet=Cannot get {0}
- cannotListRefs=cannot list refs
- cannotLock=Cannot lock {0}
- cannotLockPackIn=Cannot lock pack in {0}
- cannotMatchOnEmptyString=Cannot match on empty string.
- cannotMoveIndexTo=Cannot move index to {0}
- cannotMovePackTo=Cannot move pack to {0}
- cannotOpenService=cannot open {0}
- cannotParseDate=The date specification "{0}" could not be parsed with the following formats: {1}
- cannotParseGitURIish=Cannot parse Git URI-ish
- cannotPullOnARepoWithState=Cannot pull into a repository with state: {0}
- cannotRead=Cannot read {0}
- cannotReadBlob=Cannot read blob {0}
- cannotReadCommit=Cannot read commit {0}
- cannotReadFile=Cannot read file {0}
- cannotReadHEAD=cannot read HEAD: {0} {1}
- cannotReadObject=Cannot read object
- cannotReadTree=Cannot read tree {0}
- cannotRebaseWithoutCurrentHead=Can not rebase without a current HEAD
- cannotResolveLocalTrackingRefForUpdating=Cannot resolve local tracking ref {0} for updating.
- cannotSquashFixupWithoutPreviousCommit=Cannot {0} without previous commit.
- cannotStoreObjects=cannot store objects
- cannotUnloadAModifiedTree=Cannot unload a modified tree.
- cannotWorkWithOtherStagesThanZeroRightNow=Cannot work with other stages than zero right now. Won't write corrupt index.
- canOnlyCherryPickCommitsWithOneParent=Cannot cherry-pick commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported.
- canOnlyRevertCommitsWithOneParent=Cannot revert commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported
- cantFindObjectInReversePackIndexForTheSpecifiedOffset=Can't find object in (reverse) pack index for the specified offset {0}
- cantPassMeATree=Can't pass me a tree!
- channelMustBeInRange0_255=channel {0} must be in range [0, 255]
- characterClassIsNotSupported=The character class {0} is not supported.
- checkoutConflictWithFile=Checkout conflict with file: {0}
- checkoutConflictWithFiles=Checkout conflict with files: {0}
- checkoutUnexpectedResult=Checkout returned unexpected result {0}
- classCastNotA=Not a {0}
- cloneNonEmptyDirectory=Destination path "{0}" already exists and is not an empty directory
- collisionOn=Collision on {0}
- commandWasCalledInTheWrongState=Command {0} was called in the wrong state
- commitAlreadyExists=exists {0}
- commitMessageNotSpecified=commit message not specified
- commitOnRepoWithoutHEADCurrentlyNotSupported=Commit on repo without HEAD currently not supported
- commitAmendOnInitialNotPossible=Amending is not possible on initial commit.
- compressingObjects=Compressing objects
- connectionFailed=connection failed
- connectionTimeOut=Connection time out: {0}
- contextMustBeNonNegative=context must be >= 0
- corruptionDetectedReReadingAt=Corruption detected re-reading at {0}
- corruptObjectBadStream=bad stream
- corruptObjectBadStreamCorruptHeader=bad stream, corrupt header
- corruptObjectGarbageAfterSize=garbage after size
- corruptObjectIncorrectLength=incorrect length
- corruptObjectInvalidEntryMode=invalid entry mode
- corruptObjectInvalidMode=invalid mode
- corruptObjectInvalidMode2=invalid mode {0}
- corruptObjectInvalidMode3=invalid mode {0} for {1} ''{2}'' in {3}.
- corruptObjectInvalidType=invalid type
- corruptObjectInvalidType2=invalid type {0}
- corruptObjectMalformedHeader=malformed header: {0}
- corruptObjectNegativeSize=negative size
- corruptObjectNoAuthor=no author
- corruptObjectNoCommitter=no committer
- corruptObjectNoHeader=no header
- corruptObjectNoObject=no object
- corruptObjectNoTaggerBadHeader=no tagger/bad header
- corruptObjectNoTaggerHeader=no tagger header
- corruptObjectNoTagName=no tag name
- corruptObjectNotree=no tree
- corruptObjectNoType=no type
- corruptObjectPackfileChecksumIncorrect=Packfile checksum incorrect.
- couldNotCheckOutBecauseOfConflicts=Could not check out because of conflicts
- couldNotDeleteLockFileShouldNotHappen=Could not delete lock file. Should not happen
- couldNotDeleteTemporaryIndexFileShouldNotHappen=Could not delete temporary index file. Should not happen
- couldNotGetAdvertisedRef=Could not get advertised Ref for branch {0}
- couldNotGetRepoStatistics=Could not get repository statistics
- couldNotLockHEAD=Could not lock HEAD
- couldNotReadIndexInOneGo=Could not read index in one go, only {0} out of {1} read
- couldNotReadObjectWhileParsingCommit=Could not read an object while parsing commit {0}
- couldNotRenameDeleteOldIndex=Could not rename or delete old index
- couldNotRenameTemporaryFile=Could not rename temporary file {0} to new location {1}
- couldNotRenameTemporaryIndexFileToIndex=Could not rename temporary index file to index
- couldNotURLEncodeToUTF8=Could not URL encode to UTF-8
- couldNotWriteFile=Could not write file {0}
- countingObjects=Counting objects
- createBranchFailedUnknownReason=Create branch failed for unknown reason
- createBranchUnexpectedResult=Create branch returned unexpected result {0}
- createNewFileFailed=Could not create new file {0}
- credentialPassword=Password
- credentialUsername=Username
- daemonAlreadyRunning=Daemon already running
- daysAgo={0} days ago
- deleteBranchUnexpectedResult=Delete branch returned unexpected result {0}
- deleteFileFailed=Could not delete file {0}
- deleteTagUnexpectedResult=Delete tag returned unexpected result {0}
- deletingNotSupported=Deleting {0} not supported.
- destinationIsNotAWildcard=Destination is not a wildcard.
- detachedHeadDetected=HEAD is detached
- dirCacheDoesNotHaveABackingFile=DirCache does not have a backing file
- dirCacheFileIsNotLocked=DirCache {0} not locked
- dirCacheIsNotLocked=DirCache is not locked
- DIRCChecksumMismatch=DIRC checksum mismatch
- DIRCExtensionIsTooLargeAt=DIRC extension {0} is too large at {1} bytes.
- DIRCExtensionNotSupportedByThisVersion=DIRC extension {0} not supported by this version.
- DIRCHasTooManyEntries=DIRC has too many entries.
- DIRCUnrecognizedExtendedFlags=Unrecognized extended flags: {0}
- dirtyFilesExist=Dirty files exist. Refusing to merge
- doesNotHandleMode=Does not handle mode {0} ({1})
- downloadCancelled=Download cancelled
- downloadCancelledDuringIndexing=Download cancelled during indexing
- duplicateAdvertisementsOf=duplicate advertisements of {0}
- duplicateRef=Duplicate ref: {0}
- duplicateRemoteRefUpdateIsIllegal=Duplicate remote ref update is illegal. Affected remote name: {0}
- duplicateStagesNotAllowed=Duplicate stages not allowed
- eitherGitDirOrWorkTreeRequired=One of setGitDir or setWorkTree must be called.
- emptyCommit=No changes
- emptyPathNotPermitted=Empty path not permitted.
- encryptionError=Encryption error: {0}
- endOfFileInEscape=End of file in escape
- entryNotFoundByPath=Entry not found by path: {0}
- enumValueNotSupported2=Invalid value: {0}.{1}={2}
- enumValueNotSupported3=Invalid value: {0}.{1}.{2}={3}
- enumValuesNotAvailable=Enumerated values of type {0} not available
- errorDecodingFromFile=Error decoding from file {0}
- errorEncodingFromFile=Error encoding from file {0}
- errorInBase64CodeReadingStream=Error in Base64 code reading stream.
- errorInPackedRefs=error in packed-refs
- errorInvalidProtocolWantedOldNewRef=error: invalid protocol: wanted 'old new ref'
- errorListing=Error listing {0}
- errorOccurredDuringUnpackingOnTheRemoteEnd=error occurred during unpacking on the remote end: {0}
- errorReadingInfoRefs=error reading info/refs
- errorSymlinksNotSupported=Symlinks are not supported with this OS/JRE
- exceptionCaughtDuringExecutionOfAddCommand=Exception caught during execution of add command
- exceptionCaughtDuringExecutionOfArchiveCommand=Exception caught during execution of archive command
- exceptionCaughtDuringExecutionOfCherryPickCommand=Exception caught during execution of cherry-pick command. {0}
- exceptionCaughtDuringExecutionOfCommitCommand=Exception caught during execution of commit command
- exceptionCaughtDuringExecutionOfFetchCommand=Exception caught during execution of fetch command
- exceptionCaughtDuringExecutionOfLsRemoteCommand=Exception caught during execution of ls-remote command
- exceptionCaughtDuringExecutionOfMergeCommand=Exception caught during execution of merge command. {0}
- exceptionCaughtDuringExecutionOfPullCommand=Exception caught during execution of pull command
- exceptionCaughtDuringExecutionOfPushCommand=Exception caught during execution of push command
- exceptionCaughtDuringExecutionOfResetCommand=Exception caught during execution of reset command. {0}
- exceptionCaughtDuringExecutionOfRevertCommand=Exception caught during execution of revert command. {0}
- exceptionCaughtDuringExecutionOfRmCommand=Exception caught during execution of rm command
- exceptionCaughtDuringExecutionOfTagCommand=Exception caught during execution of tag command
- exceptionOccurredDuringAddingOfOptionToALogCommand=Exception occurred during adding of {0} as option to a Log command
- exceptionOccurredDuringReadingOfGIT_DIR=Exception occurred during reading of $GIT_DIR/{0}. {1}
- expectedACKNAKFoundEOF=Expected ACK/NAK, found EOF
- expectedACKNAKGot=Expected ACK/NAK, got: {0}
- expectedBooleanStringValue=Expected boolean string value
- expectedCharacterEncodingGuesses=Expected {0} character encoding guesses
- expectedEOFReceived=expected EOF; received ''{0}'' instead
- expectedGot=expected ''{0}'', got ''{1}''
- expectedLessThanGot=expected less than ''{0}'', got ''{1}''
- expectedPktLineWithService=expected pkt-line with '# service=-', got ''{0}''
- expectedReceivedContentType=expected Content-Type {0}; received Content-Type {1}
- expectedReportForRefNotReceived={0}: expected report for ref {1} not received
- failedUpdatingRefs=failed updating refs
- failureDueToOneOfTheFollowing=Failure due to one of the following:
- failureUpdatingFETCH_HEAD=Failure updating FETCH_HEAD: {0}
- failureUpdatingTrackingRef=Failure updating tracking ref {0}: {1}
- fileCannotBeDeleted=File cannot be deleted: {0}
- fileIsTooBigForThisConvenienceMethod=File is too big for this convenience method ({0} bytes).
- fileIsTooLarge=File is too large: {0}
- fileModeNotSetForPath=FileMode not set for path {0}
- flagIsDisposed={0} is disposed.
- flagNotFromThis={0} not from this.
- flagsAlreadyCreated={0} flags already created.
- funnyRefname=funny refname
- gcFailed=Garbage collection failed.
- gitmodulesNotFound=.gitmodules not found in tree.
- headRequiredToStash=HEAD required to stash local changes
- hoursAgo={0} hours ago
- hugeIndexesAreNotSupportedByJgitYet=Huge indexes are not supported by jgit yet
- hunkBelongsToAnotherFile=Hunk belongs to another file
- hunkDisconnectedFromFile=Hunk disconnected from file
- hunkHeaderDoesNotMatchBodyLineCountOf=Hunk header {0} does not match body line count of {1}
- illegalArgumentNotA=Not {0}
- illegalCombinationOfArguments=The combination of arguments {0} and {1} is not allowed
- illegalPackingPhase=Illegal packing phase {0}
- illegalStateExists=exists {0}
- improperlyPaddedBase64Input=Improperly padded Base64 input.
- incorrectHashFor=Incorrect hash for {0}; computed {1} as a {2} from {3} bytes.
- incorrectOBJECT_ID_LENGTH=Incorrect OBJECT_ID_LENGTH.
- indexFileIsInUse=Index file is in use
- indexFileIsTooLargeForJgit=Index file is too large for jgit
- indexSignatureIsInvalid=Index signature is invalid: {0}
- indexWriteException=Modified index could not be written
- inMemoryBufferLimitExceeded=In-memory buffer limit exceeded
- inputStreamMustSupportMark=InputStream must support mark()
- integerValueOutOfRange=Integer value {0}.{1} out of range
- internalRevisionError=internal revision error
- internalServerError=internal server error
- interruptedWriting=Interrupted writing {0}
- inTheFuture=in the future
- invalidAdvertisementOf=invalid advertisement of {0}
- invalidAncestryLength=Invalid ancestry length
- invalidBooleanValue=Invalid boolean value: {0}.{1}={2}
- invalidChannel=Invalid channel {0}
- invalidCharacterInBase64Data=Invalid character in Base64 data.
- invalidCommitParentNumber=Invalid commit parent number
- invalidEncryption=Invalid encryption
- invalidGitdirRef=Invalid .git reference in file ''{0}''
- invalidGitType=invalid git type: {0}
- invalidId=Invalid id {0}
- invalidIdLength=Invalid id length {0}; should be {1}
- invalidIntegerValue=Invalid integer value: {0}.{1}={2}
- invalidKey=Invalid key: {0}
- invalidLineInConfigFile=Invalid line in config file
- invalidModeFor=Invalid mode {0} for {1} {2} in {3}.
- invalidModeForPath=Invalid mode {0} for path {1}
- invalidObject=Invalid {0} {1}:{2}
- invalidOldIdSent=invalid old id sent
- invalidPacketLineHeader=Invalid packet line header: {0}
- invalidPath=Invalid path: {0}
- invalidPathContainsSeparator=Invalid path (contains separator ''{0}''): {1}
- invalidPathPeriodAtEndWindows=Invalid path (period at end is ignored by Windows): {0}
- invalidPathSpaceAtEndWindows=Invalid path (space at end is ignored by Windows): {0}
- invalidPathReservedOnWindows=Invalid path (''{0}'' is reserved on Windows): {1}
- invalidReflogRevision=Invalid reflog revision: {0}
- invalidRefName=Invalid ref name: {0}
- invalidRemote=Invalid remote: {0}
- invalidStageForPath=Invalid stage {0} for path {1}
- invalidTagOption=Invalid tag option: {0}
- invalidTimeout=Invalid timeout: {0}
- invalidURL=Invalid URL {0}
- invalidWildcards=Invalid wildcards {0}
- invalidRefSpec=Invalid refspec {0}
- invalidWindowSize=Invalid window size
- isAStaticFlagAndHasNorevWalkInstance={0} is a static flag and has no RevWalk instance
- JRELacksMD5Implementation=JRE lacks MD5 implementation
- kNotInRange=k {0} not in {1} - {2}
- largeObjectExceedsByteArray=Object {0} exceeds 2 GiB byte array limit
- largeObjectExceedsLimit=Object {0} exceeds {1} limit, actual size is {2}
- largeObjectException={0} exceeds size limit
- largeObjectOutOfMemory=Out of memory loading {0}
- lengthExceedsMaximumArraySize=Length exceeds maximum array size
- listingAlternates=Listing alternates
- localObjectsIncomplete=Local objects incomplete.
- localRefIsMissingObjects=Local ref {0} is missing object(s).
- lockCountMustBeGreaterOrEqual1=lockCount must be >= 1
- lockError=lock error: {0}
- lockOnNotClosed=Lock on {0} not closed.
- lockOnNotHeld=Lock on {0} not held.
- malformedpersonIdentString=Malformed PersonIdent string (no < was found): {0}
- maxCountMustBeNonNegative=max count must be >= 0
- mergeConflictOnNonNoteEntries=Merge conflict on non-note entries: base = {0}, ours = {1}, theirs = {2}
- mergeConflictOnNotes=Merge conflict on note {0}. base = {1}, ours = {2}, theirs = {3}
- mergeStrategyAlreadyExistsAsDefault=Merge strategy "{0}" already exists as a default strategy
- mergeStrategyDoesNotSupportHeads=merge strategy {0} does not support {1} heads to be merged into HEAD
- mergeUsingStrategyResultedInDescription=Merge of revisions {0} with base {1} using strategy {2} resulted in: {3}. {4}
- mergeRecursiveReturnedNoCommit=Merge returned no commit:\n Depth {0}\n Head one {1}\n Head two {2}
- mergeRecursiveTooManyMergeBasesFor=More than {0} merge bases for:\n a {1}\n b {2} found:\n count {3}
- messageAndTaggerNotAllowedInUnannotatedTags=Unannotated tags cannot have a message or tagger
- minutesAgo={0} minutes ago
- missingAccesskey=Missing accesskey.
- missingConfigurationForKey=No value for key {0} found in configuration
- missingDeltaBase=delta base
- missingForwardImageInGITBinaryPatch=Missing forward-image in GIT binary patch
- missingObject=Missing {0} {1}
- missingPrerequisiteCommits=missing prerequisite commits:
- missingRequiredParameter=Parameter "{0}" is missing
- missingSecretkey=Missing secretkey.
- mixedStagesNotAllowed=Mixed stages not allowed
- mkDirFailed=Creating directory {0} failed
- mkDirsFailed=Creating directories for {0} failed
- month=month
- months=months
- monthsAgo={0} months ago
- multipleMergeBasesFor=Multiple merge bases for:\n {0}\n {1} found:\n {2}\n {3}
- need2Arguments=Need 2 arguments
- needPackOut=need packOut
- needsAtLeastOneEntry=Needs at least one entry
- needsWorkdir=Needs workdir
- newlineInQuotesNotAllowed=Newline in quotes not allowed
- noApplyInDelete=No apply in delete
- noClosingBracket=No closing {0} found for {1} at index {2}.
- noHEADExistsAndNoExplicitStartingRevisionWasSpecified=No HEAD exists and no explicit starting revision was specified
- noHMACsupport=No {0} support: {1}
- noMergeBase=No merge base could be determined. Reason={0}. {1}
- noMergeHeadSpecified=No merge head specified
- noSuchRef=no such ref
- notABoolean=Not a boolean: {0}
- notABundle=not a bundle
- notADIRCFile=Not a DIRC file.
- notAGitDirectory=not a git directory
- notAPACKFile=Not a PACK file.
- notARef=Not a ref: {0}: {1}
- notASCIIString=Not ASCII string: {0}
- notAuthorized=not authorized
- notAValidPack=Not a valid pack {0}
- notFound=not found.
- nothingToFetch=Nothing to fetch.
- nothingToPush=Nothing to push.
- notMergedExceptionMessage=Branch was not deleted as it has not been merged yet; use the force option to delete it anyway
- noXMLParserAvailable=No XML parser available.
- objectAtHasBadZlibStream=Object at {0} in {1} has bad zlib stream
- objectAtPathDoesNotHaveId=Object at path "{0}" does not have an id assigned. All object ids must be assigned prior to writing a tree.
- objectIsCorrupt=Object {0} is corrupt: {1}
- objectIsNotA=Object {0} is not a {1}.
- objectNotFound=Object {0} not found.
- objectNotFoundIn=Object {0} not found in {1}.
- obtainingCommitsForCherryPick=Obtaining commits that need to be cherry-picked
- offsetWrittenDeltaBaseForObjectNotFoundInAPack=Offset-written delta base for object not found in a pack
- onlyAlreadyUpToDateAndFastForwardMergesAreAvailable=only already-up-to-date and fast-forward merges are available
- onlyOneFetchSupported=Only one fetch supported
- onlyOneOperationCallPerConnectionIsSupported=Only one operation call per connection is supported.
- openFilesMustBeAtLeast1=Open files must be >= 1
- openingConnection=Opening connection
- operationCanceled=Operation {0} was canceled
- outputHasAlreadyBeenStarted=Output has already been started.
- packChecksumMismatch=Pack checksum mismatch
- packCorruptedWhileWritingToFilesystem=Pack corrupted while writing to filesystem
- packDoesNotMatchIndex=Pack {0} does not match index
- packetSizeMustBeAtLeast=packet size {0} must be >= {1}
- packetSizeMustBeAtMost=packet size {0} must be <= {1}
- packfileCorruptionDetected=Packfile corruption detected: {0}
- packFileInvalid=Pack file invalid: {0}
- packfileIsTruncated=Packfile is truncated.
- packHasUnresolvedDeltas=pack has unresolved deltas
- packingCancelledDuringObjectsWriting=Packing cancelled during objects writing
- packObjectCountMismatch=Pack object count mismatch: pack {0} index {1}: {2}
- packRefs=Pack refs
- packTooLargeForIndexVersion1=Pack too large for index version 1
- packWriterStatistics=Total {0,number,#0} (delta {1,number,#0}), reused {2,number,#0} (delta {3,number,#0})
- panicCantRenameIndexFile=Panic: index file {0} must be renamed to replace {1}; until then repository is corrupt
- patchApplyException=Cannot apply: {0}
- patchFormatException=Format error: {0}
- pathIsNotInWorkingDir=Path is not in working dir
- pathNotConfigured=Submodule path is not configured
- peeledLineBeforeRef=Peeled line before ref.
- peerDidNotSupplyACompleteObjectGraph=peer did not supply a complete object graph
- prefixRemote=remote:
- problemWithResolvingPushRefSpecsLocally=Problem with resolving push ref specs locally: {0}
- progressMonUploading=Uploading {0}
- propertyIsAlreadyNonNull=Property is already non null
- pruneLoosePackedObjects=Prune loose objects also found in pack files
- pruneLooseUnreferencedObjects=Prune loose, unreferenced objects
- pullOnRepoWithoutHEADCurrentlyNotSupported=Pull on repository without HEAD currently not supported
- pullTaskName=Pull
- pushCancelled=push cancelled
- pushIsNotSupportedForBundleTransport=Push is not supported for bundle transport
- pushNotPermitted=push not permitted
- rawLogMessageDoesNotParseAsLogEntry=Raw log message does not parse as log entry
- readingObjectsFromLocalRepositoryFailed=reading objects from local repository failed: {0}
- readTimedOut=Read timed out after {0} ms
- receivePackObjectTooLarge1=Object too large, rejecting the pack. Max object size limit is {0} bytes.
- receivePackObjectTooLarge2=Object too large ({0} bytes), rejecting the pack. Max object size limit is {1} bytes.
- receivingObjects=Receiving objects
- refAlreadyExists=already exists
- refAlreadyExists1=Ref {0} already exists
- reflogEntryNotFound=Entry {0} not found in reflog for ''{1}''
- refNotResolved=Ref {0} can not be resolved
- refUpdateReturnCodeWas=RefUpdate return code was: {0}
- remoteConfigHasNoURIAssociated=Remote config "{0}" has no URIs associated
- remoteDoesNotHaveSpec=Remote does not have {0} available for fetch.
- remoteDoesNotSupportSmartHTTPPush=remote does not support smart HTTP push
- remoteHungUpUnexpectedly=remote hung up unexpectedly
- remoteNameCantBeNull=Remote name can't be null.
- renameBranchFailedBecauseTag=Can not rename as Ref {0} is a tag
- renameBranchFailedUnknownReason=Rename failed with unknown reason
- renameBranchUnexpectedResult=Unexpected rename result {0}
- renameFileFailed=Could not rename file {0} to {1}
- renamesAlreadyFound=Renames have already been found.
- renamesBreakingModifies=Breaking apart modified file pairs
- renamesFindingByContent=Finding renames by content similarity
- renamesFindingExact=Finding exact renames
- renamesRejoiningModifies=Rejoining modified file pairs
- repositoryAlreadyExists=Repository already exists: {0}
- repositoryConfigFileInvalid=Repository config file {0} invalid {1}
- repositoryIsRequired=Repository is required.
- repositoryNotFound=repository not found: {0}
- repositoryState_applyMailbox=Apply mailbox
- repositoryState_bisecting=Bisecting
- repositoryState_conflicts=Conflicts
- repositoryState_merged=Merged
- repositoryState_normal=Normal
- repositoryState_rebase=Rebase
- repositoryState_rebaseInteractive=Rebase interactive
- repositoryState_rebaseOrApplyMailbox=Rebase/Apply mailbox
- repositoryState_rebaseWithMerge=Rebase w/merge
- requiredHashFunctionNotAvailable=Required hash function {0} not available.
- resettingHead=Resetting head to {0}
- resolvingDeltas=Resolving deltas
- resultLengthIncorrect=result length incorrect
- rewinding=Rewinding to commit {0}
- searchForReuse=Finding sources
- searchForSizes=Getting sizes
- secondsAgo={0} seconds ago
- selectingCommits=Selecting commits
- sequenceTooLargeForDiffAlgorithm=Sequence too large for difference algorithm.
- serviceNotEnabledNoName=Service not enabled
- serviceNotPermitted={0} not permitted
- serviceNotPermittedNoName=Service not permitted
- shallowCommitsAlreadyInitialized=Shallow commits have already been initialized
- shortCompressedStreamAt=Short compressed stream at {0}
- shortReadOfBlock=Short read of block.
- shortReadOfOptionalDIRCExtensionExpectedAnotherBytes=Short read of optional DIRC extension {0}; expected another {1} bytes within the section.
- shortSkipOfBlock=Short skip of block.
- signingNotSupportedOnTag=Signing isn't supported on tag operations yet.
- similarityScoreMustBeWithinBounds=Similarity score must be between 0 and 100.
- sizeExceeds2GB=Path {0} size {1} exceeds 2 GiB limit.
- skipMustBeNonNegative=skip must be >= 0
- smartHTTPPushDisabled=smart HTTP push disabled
- sourceDestinationMustMatch=Source/Destination must match.
- sourceIsNotAWildcard=Source is not a wildcard.
- sourceRefDoesntResolveToAnyObject=Source ref {0} doesn't resolve to any object.
- sourceRefNotSpecifiedForRefspec=Source ref not specified for refspec: {0}
- squashCommitNotUpdatingHEAD=Squash commit -- not updating HEAD
- staleRevFlagsOn=Stale RevFlags on {0}
- startingReadStageWithoutWrittenRequestDataPendingIsNotSupported=Starting read stage without written request data pending is not supported
- stashApplyConflict=Applying stashed changes resulted in a conflict
- stashApplyConflictInIndex=Applying stashed index changes resulted in a conflict. Dropped index changes.
- stashApplyFailed=Applying stashed changes did not successfully complete
- stashApplyOnUnsafeRepository=Cannot apply stashed commit on a repository with state: {0}
- stashApplyWithoutHead=Cannot apply stashed commit in an empty repository or onto an unborn branch
- stashCommitMissingTwoParents=Stashed commit ''{0}'' does not have two parent commits
- stashDropDeleteRefFailed=Deleting stash reference failed with result: {0}
- stashDropFailed=Dropping stashed commit failed
- stashDropMissingReflog=Stash reflog does not contain entry ''{0}''
- stashFailed=Stashing local changes did not successfully complete
- stashResolveFailed=Reference ''{0}'' does not resolve to stashed commit
- statelessRPCRequiresOptionToBeEnabled=stateless RPC requires {0} to be enabled
- submoduleExists=Submodule ''{0}'' already exists in the index
- submoduleParentRemoteUrlInvalid=Cannot remove segment from remote url ''{0}''
- submodulesNotSupported=Submodules are not supported
- symlinkCannotBeWrittenAsTheLinkTarget=Symlink "{0}" cannot be written as the link target cannot be read from within Java.
- systemConfigFileInvalid=System-wide config file {0} is invalid {1}
- tagAlreadyExists=tag ''{0}'' already exists
- tagNameInvalid=tag name {0} is invalid
- tagOnRepoWithoutHEADCurrentlyNotSupported=Tag on repository without HEAD currently not supported
- theFactoryMustNotBeNull=The factory must not be null
- timerAlreadyTerminated=Timer already terminated
- topologicalSortRequired=Topological sort required.
- transportExceptionBadRef=Empty ref: {0}: {1}
- transportExceptionEmptyRef=Empty ref: {0}
- transportExceptionInvalid=Invalid {0} {1}:{2}
- transportExceptionMissingAssumed=Missing assumed {0}
- transportExceptionReadRef=read {0}
- transportNeedsRepository=Transport needs repository
- transportProtoAmazonS3=Amazon S3
- transportProtoBundleFile=Git Bundle File
- transportProtoFTP=FTP
- transportProtoGitAnon=Anonymous Git
- transportProtoHTTP=HTTP
- transportProtoLocal=Local Git Repository
- transportProtoSFTP=SFTP
- transportProtoSSH=SSH
- treeEntryAlreadyExists=Tree entry "{0}" already exists.
- treeFilterMarkerTooManyFilters=Too many markTreeFilters passed, maximum number is {0} (passed {1})
- treeIteratorDoesNotSupportRemove=TreeIterator does not support remove()
- treeWalkMustHaveExactlyTwoTrees=TreeWalk should have exactly two trees.
- truncatedHunkLinesMissingForAncestor=Truncated hunk, at least {0} lines missing for ancestor {1}
- truncatedHunkNewLinesMissing=Truncated hunk, at least {0} new lines are missing
- truncatedHunkOldLinesMissing=Truncated hunk, at least {0} old lines are missing
- tSizeMustBeGreaterOrEqual1=tSize must be >= 1
- unableToCheckConnectivity=Unable to check connectivity.
- unableToStore=Unable to store {0}.
- unableToWrite=Unable to write {0}
- unencodeableFile=Unencodeable file: {0}
- unexpectedCompareResult=Unexpected metadata comparison result: {0}
- unexpectedEndOfConfigFile=Unexpected end of config file
- unexpectedHunkTrailer=Unexpected hunk trailer
- unexpectedOddResult=odd: {0} + {1} - {2}
- unexpectedRefReport={0}: unexpected ref report: {1}
- unexpectedReportLine=unexpected report line: {0}
- unexpectedReportLine2={0} unexpected report line: {1}
- unknownOrUnsupportedCommand=Unknown or unsupported command "{0}", only "{1}" is allowed.
- unknownDIRCVersion=Unknown DIRC version {0}
- unknownHost=unknown host
- unknownIndexVersionOrCorruptIndex=Unknown index version (or corrupt index): {0}
- unknownObject=unknown object
- unknownObjectType=Unknown object type {0}.
- unknownRepositoryFormat=Unknown repository format
- unknownRepositoryFormat2=Unknown repository format "{0}"; expected "0".
- unknownZlibError=Unknown zlib error.
- unmergedPath=Unmerged path: {0}
- unmergedPaths=Repository contains unmerged paths
- unpackException=Exception while parsing pack stream
- unreadablePackIndex=Unreadable pack index: {0}
- unrecognizedRef=Unrecognized ref: {0}
- unsupportedArchiveFormat=Unknown archive format ''{0}''
- unsupportedCommand0=unsupported command 0
- unsupportedEncryptionAlgorithm=Unsupported encryption algorithm: {0}
- unsupportedEncryptionVersion=Unsupported encryption version: {0}
- unsupportedGC=Unsupported garbage collector for repository type: {0}
- unsupportedOperationNotAddAtEnd=Not add-at-end: {0}
- unsupportedPackIndexVersion=Unsupported pack index version {0}
- unsupportedPackVersion=Unsupported pack version {0}.
- updatingReferences=Updating references
- updatingRefFailed=Updating the ref {0} to {1} failed. ReturnCode from RefUpdate.update() was {2}
- uriNotConfigured=Submodule URI not configured
- uriNotFound={0} not found
- URINotSupported=URI not supported: {0}
- URLNotFound={0} not found
- userConfigFileInvalid=User config file {0} invalid {1}
- walkFailure=Walk failure.
- wantNotValid=want {0} not valid
- weeksAgo={0} weeks ago
- windowSizeMustBeLesserThanLimit=Window size must be < limit
- windowSizeMustBePowerOf2=Window size must be power of 2
- writerAlreadyInitialized=Writer already initialized
- writeTimedOut=Write timed out after {0} ms
- writingNotPermitted=Writing not permitted
- writingNotSupported=Writing {0} not supported.
- writingObjects=Writing objects
- wrongDecompressedLength=wrong decompressed length
- wrongRepositoryState=Wrong Repository State: {0}
- year=year
- years=years
- years0MonthsAgo={0} {1} ago
- yearsAgo={0} years ago
- yearsMonthsAgo={0} {1}, {2} {3} ago