Support creating pack bitmap indexes in PackWriter.
Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected at commits every 1 to 5,000 commits for
each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmap every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than one parent are preferred over ones
with one or fewer. Furthermore, previously computed bitmaps are reused
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the maximum allowed distance.
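The selection spacing above is effectively a step function of a commit's distance from the branch tip. A minimal sketch of that rule (hypothetical class and method names; JGit's real selection logic also walks paths and prefers merge commits):

```java
public class BitmapSpacing {
    // Distance between bitmapped commits, by distance from the branch tip:
    // the most recent 100 commits all get bitmaps, the next 19,000 get one
    // every 100 commits, and older history gets one every 5,000 commits.
    static int spacing(int commitsFromTip) {
        if (commitsFromTip < 100)
            return 1;
        if (commitsFromTip < 100 + 19_000)
            return 100;
        return 5_000;
    }

    public static void main(String[] args) {
        System.out.println(spacing(50));      // recent history: every commit
        System.out.println(spacing(5_000));   // mid history: every 100 commits
        System.out.println(spacing(50_000));  // old history: every 5,000 commits
    }
}
```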
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN, when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given the collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
For clone requests, we get cached pack support for "free" since
it is possible to determine if all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
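Expressed over bitmaps, both decisions reduce to plain set operations. A sketch using java.util.BitSet in place of JGit's compressed EWAH bitmaps (method names are illustrative):

```java
import java.util.BitSet;

public class BitmapSetOps {
    // Fetch: write the objects reachable from the wants but not from the
    // haves, i.e. wants AND NOT haves.
    static BitSet toWrite(BitSet wants, BitSet haves) {
        BitSet result = (BitSet) wants.clone();
        result.andNot(haves);
        return result;
    }

    // Clone: a cached pack can be streamed as-is when every object in the
    // pack is already in the set of objects to write.
    static boolean canReuseCachedPack(BitSet packObjects, BitSet toWrite) {
        BitSet missing = (BitSet) packObjects.clone();
        missing.andNot(toWrite);
        return missing.isEmpty();
    }

    public static void main(String[] args) {
        BitSet wants = new BitSet(), haves = new BitSet();
        wants.set(0, 8);  // objects 0..7 reachable from the wants
        haves.set(0, 4);  // objects 0..3 reachable from the haves
        System.out.println(toWrite(wants, haves)); // {4, 5, 6, 7}
    }
}
```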
On my machine, the best times for clones and fetches of the Linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
Operation                   Index V2               Index VE003
Clone                       37530ms (524.06 MiB)     82ms (524.06 MiB)
Fetch (1 commit back)          75ms                 107ms
Fetch (10 commits back)       456ms (269.51 KiB)    341ms (265.19 KiB)
Fetch (100 commits back)      449ms (269.91 KiB)    337ms (267.28 KiB)
Fetch (1000 commits back)    2229ms ( 14.75 MiB)    189ms ( 14.42 MiB)
Fetch (10000 commits back)   2177ms ( 16.30 MiB)    254ms ( 15.88 MiB)
Fetch (100000 commits back) 14340ms (185.83 MiB)   1655ms (189.39 MiB)
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
blame: Compute the origin of lines in a result file
BlameGenerator digs through history and discovers the origin of each
line of some result file. BlameResult consumes the stream of regions
created by the generator and lays them out in a table for applications
to display alongside of source lines.
Applications may optionally push in the working tree copy of a file
using the push(String, byte[]) method, allowing the application to
receive accurate line annotations for the working tree version. Lines
that are uncommitted (difference between HEAD and working tree) will
show up with the description given by the application as the author,
or "Not Committed Yet" as a default string.
Applications may also run the BlameGenerator in reverse mode using the
reverse(AnyObjectId, AnyObjectId) method instead of push(). When
running in the reverse mode the generator annotates lines by the
commit they are removed in, rather than the commit they were added in.
This allows a user to discover where a line disappeared from when they
are looking at an older revision in the repository. For example:
blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
( 1080) }
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083) * Kick the timestamp of a local file.
Above we learn that line 1080 (a closing curly brace of the prior
method) still exists in branch master, but the Javadoc comment below
it has been removed by Christian Halstrick on May 20th as part of
commit 2302a6d3. This result differs considerably from that of C
Git's blame --reverse feature. JGit tells the reader which commit
performed the delete, while C Git tells the reader the last commit
that still contained the line, leaving it an exercise to the reader
to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably it is
missing support for the smart block copy/move detection that the C
implementation of `git blame` is well known for. Despite being
incremental, the BlameGenerator can only be run once. After the
generator runs it cannot be reused. A better implementation would
support applications browsing through history efficiently.
In regards to CQ 5110, only a little of the original code survives.
CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
PackWriter: Support reuse of entire packs
The most expensive part of packing a repository for transport to
another system is enumerating all of the objects in the repository.
Once this gets to the size of the linux-2.6 repository (1.8 million
objects), enumeration can take several CPU minutes and costs a lot
of temporary working set memory.
Teach PackWriter to efficiently reuse an existing "cached pack"
by answering a clone request with a thin pack followed by a larger
cached pack appended to the end. This requires the repository
owner to first construct the cached pack by hand, and record the
tip commits inside of $GIT_DIR/objects/info/cached-packs:
cd $GIT_DIR
root=$(git rev-parse master)
tmp=objects/.tmp-$$
names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
for n in $names; do
  chmod a-w $tmp-$n.pack $tmp-$n.idx
  touch objects/pack/pack-$n.keep
  mv $tmp-$n.pack objects/pack/pack-$n.pack
  mv $tmp-$n.idx objects/pack/pack-$n.idx
done
(echo "+ $root";
 for n in $names; do echo "P $n"; done;
 echo) >>objects/info/cached-packs
git repack -a -d
When a clone request needs to include $root, the corresponding
cached pack will be copied as-is, rather than enumerating all of
the objects that are reachable from $root.
For a linux-2.6 kernel repository that should be about 376 MiB,
the above process creates two packs of 368 MiB and 38 MiB[1].
This is a local disk usage increase of ~26 MiB, due to reduced
delta compression between the large cached pack and the smaller
recent activity pack. The overhead is similar to 1 full copy of
the compressed project sources.
With this cached pack in hand, JGit daemon completes a clone request
in 1m17s less time, but a slightly larger data transfer (+2.39 MiB):
Before:
remote: Counting objects: 1861830, done
remote: Finding sources: 100% (1861830/1861830)
remote: Getting sizes: 100% (88243/88243)
remote: Compressing objects: 100% (88184/88184)
Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
Resolving deltas: 100% (1564621/1564621), done.
real 3m19.005s
After:
remote: Counting objects: 1601, done
remote: Counting objects: 1828460, done
remote: Finding sources: 100% (50475/50475)
remote: Getting sizes: 100% (18843/18843)
remote: Compressing objects: 100% (7585/7585)
remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
Resolving deltas: 100% (1559477/1559477), done.
real 2m2.938s
Repository owners can periodically refresh their cached packs by
repacking their repository, folding all newer objects into a larger
cached pack. Since repacking is already considered to be a normal
Git maintenance activity, this isn't a very big burden.
[1] In this test $root was set back about two weeks.
Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Config: Rewrite subsection and value escaping and parsing
Previously, Config was using the same method for both escaping and
parsing subsection names and config values. The goal was presumably code
savings, but unfortunately, these two pieces of the git config format
are simply different.
In git v2.15.1, Documentation/config.txt says the following about
subsection names:
"Subsection names are case sensitive and can contain any characters
except newline (doublequote `"` and backslash can be included by
escaping them as `\"` and `\\`, respectively). Section headers cannot
span multiple lines. Variables may belong directly to a section or to
a given subsection."
And, later in the same documentation section, about values:
"A line that defines a value can be continued to the next line by
ending it with a `\`; the backquote and the end-of-line are stripped.
Leading whitespaces after 'name =', the remainder of the line after
the first comment character '#' or ';', and trailing whitespaces of
the line are discarded unless they are enclosed in double quotes.
Internal whitespaces within the value are retained verbatim.
Inside double quotes, double quote `"` and backslash `\` characters
must be escaped: use `\"` for `"` and `\\` for `\`.
The following escape sequences (beside `\"` and `\\`) are recognized:
`\n` for newline character (NL), `\t` for horizontal tabulation (HT,
TAB) and `\b` for backspace (BS). Other char escape sequences
(including octal escape sequences) are invalid."
The main important differences are that subsection names have a limited
set of supported escape sequences, and do not support newlines at all,
either escaped or unescaped. Arguably, it would be easy to support
escaped newlines, but C git simply does not:
$ git config -f foo.config $'foo.bar\nbaz.quux' value
error: invalid key (newline): foo.bar
baz.quux
I468106ac was an attempt to fix one bug in escapeValue, around leading
whitespace, without having to rewrite the whole escaping/parsing code.
Unfortunately, because escapeValue was used for escaping subsection
names as well, this made it possible to write invalid config files, any
time Config#toText is called with a subsection name with trailing
whitespace, like {foo }.
Rather than pile hacks on top of hacks, fix it for real by largely
rewriting the escaping and parsing code.
In addition to fixing escape sequences, fix (and write tests for) a few
more issues in the old implementation:
* Now that we can properly parse it, always emit newlines as "\n" from
escapeValue, rather than the weird (but still supported) syntax with a
non-quoted trailing literal "\n\" before the newline. In addition to
producing more readable output and matching the behavior of C git,
this makes the escaping code much simpler.
* Disallow '\0' entirely within both subsection names and values, since
due to Unix command line argument conventions it is impossible to pass
such values to "git config".
* Properly preserve intra-value whitespace when parsing, rather than
collapsing it all to a single space.
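A simplified sketch of value escaping under these rules; JGit's real Config implementation is more complete (for instance around comment characters appearing mid-value), and the quoting condition here is an assumption for illustration:

```java
public class ConfigValueEscape {
    // Escape a config value: \n, \t, \b, backslash and double quote get
    // escape sequences; NUL is rejected outright; values with leading or
    // trailing spaces or comment characters are wrapped in double quotes.
    static String escapeValue(String value) {
        boolean needQuote = value.startsWith(" ") || value.endsWith(" ")
                || value.indexOf('#') >= 0 || value.indexOf(';') >= 0;
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            switch (c) {
            case '\0':
                throw new IllegalArgumentException("NUL not allowed in value");
            case '\n': out.append("\\n"); break;
            case '\t': out.append("\\t"); break;
            case '\b': out.append("\\b"); break;
            case '\\': out.append("\\\\"); break;
            case '"':  out.append("\\\""); break;
            default:   out.append(c);
            }
        }
        return needQuote ? "\"" + out + "\"" : out.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeValue("a\nb")); // a\nb
        System.out.println(escapeValue(" x "));  // " x "
    }
}
```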
Change-Id: I304f626b9d0ad1592c4e4e449a11b136c0f8b3e3
Retry stale file handles on .git/config file
On a local non-NFS filesystem the .git/config file will be orphaned if
it is replaced by a new process while the current process is reading the
old file. The current process successfully continues to read the
orphaned file until it closes the file handle.
Since NFS servers do not keep track of open files, instead of orphaning
the old .git/config file, such a replacement on an NFS filesystem will
instead cause the old file to be garbage collected (deleted). A stale
file handle exception will be raised on NFS clients if the file is
garbage collected (deleted) on the server while it is being read. Since
we no longer have access to the old file in these cases, the previous
code would just fail. However, in these cases, reopening the file and
rereading it will succeed (since it will open the new replacement file).
Since retrying the read is a viable strategy to deal with stale file
handles on the .git/config file, implement such a strategy.
Since it is possible that the .git/config file could be replaced again
while rereading it, loop on stale file handle exceptions, up to 5 extra
times, trying to read the .git/config file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary, and provides a safe upper bound to prevent infinite loops
consuming resources in a potential unforeseen persistent error
condition.
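The bounded retry loop can be sketched as follows; the Callable, helper names, and message-based stale-handle check are illustrative, not JGit's actual code:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class StaleHandleRetry {
    // Retry a read up to 5 extra times when the file handle went stale,
    // reopening (via the Callable) each time.
    static final int MAX_STALE_RETRIES = 5;

    static boolean isStaleHandle(IOException e) {
        String m = e.getMessage();
        return m != null && m.toLowerCase().contains("stale file handle");
    }

    static <T> T readWithRetries(Callable<T> read) throws Exception {
        int retries = 0;
        while (true) {
            try {
                return read.call(); // reopens and rereads the file
            } catch (IOException e) {
                if (isStaleHandle(e) && retries < MAX_STALE_RETRIES) {
                    retries++;
                    continue; // the replacement file may now be readable
                }
                throw e; // persistent error, or not a stale handle
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] failures = { 2 }; // simulate two stale reads, then success
        String result = readWithRetries(() -> {
            if (failures[0]-- > 0)
                throw new IOException("Stale file handle");
            return "config contents";
        });
        System.out.println(result); // config contents
    }
}
```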
Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626
Signed-off-by: Nasser Grainawi <nasser@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Client-side protocol V2 support for fetching
Make all transports request protocol V2 when fetching. Depending on
the transport, set the GIT_PROTOCOL environment variable (file and
ssh), pass the Git-Protocol header (http), or set the hidden
"\0version=2\0" (git anon). We'll fall back to V0 if the server
doesn't reply with a version 2 answer.
A user can control which protocol the client requests via the git
config protocol.version; if not set, JGit requests protocol V2 for
fetching. Pushing still always uses protocol V0.
In the API, there is only a new Transport.openFetch() version that
takes a collection of RefSpecs plus additional patterns to construct
the Ref prefixes for the "ls-refs" command in protocol V2. If none
are given, the server will still advertise all refs, even in protocol
V2.
BasePackConnection.readAdvertisedRefs() handles falling back to
protocol V0. It newly returns true if V0 was used and the advertised
refs were read, and false if V2 is used and an explicit "ls-refs" is
needed. (This can't be done transparently inside readAdvertisedRefs()
because a "stateless RPC" transport like TransportHttp may need to
open a new connection for writing.)
BasePackFetchConnection implements the changes needed for the protocol
V2 "fetch" command (stateless protocol, simplified ACK handling,
delimiters, section headers).
In TransportHttp, change readSmartHeaders() to also recognize the
"version 2" packet line as a valid smart server indication.
Adapt tests, and run all the HTTP tests not only with both HTTP
connection factories (JDK and Apache HttpClient) but also with both
protocol V0 and V2. The SSH tests are much slower and much more
focused on the SSH protocol and SSH key handling. Factor out two
very simple cloning and pulling tests and make those run with
protocol V2.
Bug: 553083
Change-Id: I357c7f5daa7efb2872f1c64ee6f6d54229031ae1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Added read/write support for pack bitmap index.
A pack bitmap index is an additional index of compressed
bitmaps of the object graph. Furthermore, a logical API of the index
functionality is included, as it is expected to be used by the
PackWriter.
Compressed bitmaps are created using the javaewah library, which is a
word-aligned compressed variant of the Java bitset class based on
run-length encoding. The library only works with positive integer
values. Thus, the maximum number of ObjectIds in a pack file that
this index can currently support is limited to Integer.MAX_VALUE.
Every ObjectId is given an integer mapping. The integer is the
position of the ObjectId in the complete ObjectId list, sorted
by offset, for the pack file. That integer is what the bitmaps
use to reference the ObjectId. Currently, the new index format can
only be used with pack files that contain a complete closure of the
object graph, e.g. the result of a garbage collection.
The index file includes four bitmaps for the Git object types, i.e.
commits, trees, blobs, and tags. In addition, a collection of
bitmaps keyed by an ObjectId is also included. The bitmap for each entry
in the collection represents the full closure of ObjectIds reachable
from the keyed ObjectId (including the keyed ObjectId itself). The
bitmaps are further compressed by XORing the current bitmaps against
prior bitmaps in the index, and selecting the smallest representation.
The XOR'd bitmap and offset from the current entry to the position
of the bitmap to XOR against is the actual representation of the entry
in the index file. Each entry contains one byte, which is currently
used to note whether the bitmap should be blindly reused.
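Since XOR is its own inverse, a reader recovers the full bitmap by XORing the stored delta against the prior entry's bitmap. A sketch with java.util.BitSet standing in for the compressed EWAH bitmaps (method names are illustrative):

```java
import java.util.BitSet;

public class XorBitmapEntry {
    // Writing: store bitmap XOR prior, which is small when the two
    // reachability sets largely overlap.
    static BitSet xorDelta(BitSet bitmap, BitSet prior) {
        BitSet delta = (BitSet) bitmap.clone();
        delta.xor(prior);
        return delta;
    }

    // Reading: XOR is its own inverse, so re-applying it recovers the
    // full reachability bitmap.
    static BitSet reconstruct(BitSet delta, BitSet prior) {
        BitSet full = (BitSet) delta.clone();
        full.xor(prior);
        return full;
    }

    public static void main(String[] args) {
        BitSet prior = new BitSet(), current = new BitSet();
        prior.set(0, 1000);   // older commit reaches objects 0..999
        current.set(0, 1003); // newer commit reaches three more objects
        BitSet delta = xorDelta(current, prior);
        System.out.println(delta.cardinality());                       // 3
        System.out.println(reconstruct(delta, prior).equals(current)); // true
    }
}
```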
Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
Add support to follow HTTP redirects
git-core follows HTTP redirects so JGit should also provide this.
Implement config setting http.followRedirects with possible values
"false" (= never), "true" (= always), and "initial" (only on GET, but
not on POST).[1]
We must do our own redirect handling and cannot rely on the support
that the underlying real connection may offer. At least the JDK's
HttpURLConnection has two features that get in the way:
* it does not allow cross-protocol redirects and thus fails on
http->https redirects (for instance, on GitHub).
* it translates a redirect after a POST to a GET unless the system
property "http.strictPostRedirect" is set to true. We don't want
to manipulate that system setting nor require it.
Additionally, git has its own rules about what redirects it accepts;[2]
for instance, it does not allow a redirect that adds query arguments.
We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3]
On POST we do not handle 303, and we follow redirects only if
http.followRedirects == true.
Redirects are followed only a certain number of times. There are two
ways to control that limit:
* by default, the limit is given by the http.maxRedirects system
property that is also used by the JDK. If the system property is
not set, the default is 5. (This is much lower than the JDK default
of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.
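The limit resolution described by the two bullets can be sketched as (class and method names illustrative):

```java
public class MaxRedirects {
    // Resolution order for the redirect limit: the http.maxRedirects git
    // config setting wins; otherwise the JDK's http.maxRedirects system
    // property; otherwise the default of 5.
    static int maxRedirects(Integer gitConfigValue) {
        if (gitConfigValue != null)
            return gitConfigValue.intValue();
        String sys = System.getProperty("http.maxRedirects");
        if (sys != null) {
            try {
                return Integer.parseInt(sys);
            } catch (NumberFormatException e) {
                // unparseable property: fall through to the default
            }
        }
        return 5; // much lower than the JDK's default of 20
    }

    public static void main(String[] args) {
        System.setProperty("http.maxRedirects", "7");
        System.out.println(maxRedirects(null)); // 7, from the system property
        System.out.println(maxRedirects(3));    // 3, git config overrides
    }
}
```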
The JGit http.* git config settings are currently all global; JGit has
no support yet for URI-specific settings "http.<pattern>.name". Adding
support for that is well beyond the scope of this change.
Like git-core, we log every redirect attempt (LOG.info) so that users
may know about the redirection having occurred.
Extends the test framework to configure an AppServer with HTTPS support
so that we can test cloning via HTTPS and redirections involving HTTPS.
[1] https://git-scm.com/docs/git-config
[2] https://kernel.googlesource.com/pub/scm/git/git/+/6628eb41db5189c0cdfdced6d8697e7c813c5f0f
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Add method to read time unit from config
Time units supported:
-milliseconds (1 ms, 2 milliseconds)
-seconds (1 s, 1 sec, 1 second, 2 seconds)
-minutes (1 m, 1 min, 1 minute, 2 minutes)
-hours (1 h, 1 hr, 1 hour, 2 hours)
-days (1 d, 1 day, 2 days)
-weeks (1 w, 1 week, 2 weeks)
-months (1 mon, 1 month, 2 months)
-years (1 y, 1 year, 2 years)
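Parsing such values reduces to splitting off the number and prefix-matching the unit. A sketch (the order of the prefix checks matters so that "ms" and "mon" are tested before "m"; the 30-day month and 365-day year lengths are assumptions, not necessarily what the real implementation uses):

```java
public class TimeUnitParse {
    // Parse strings like "2 seconds", "1 min", "1 h" into milliseconds.
    static long parseMillis(String value) {
        String[] parts = value.trim().split("\\s+", 2);
        long n = Long.parseLong(parts[0]);
        String u = parts.length > 1 ? parts[1].toLowerCase() : "";
        if (u.equals("ms") || u.startsWith("milli"))
            return n;
        if (u.startsWith("s"))
            return n * 1000L;                     // s, sec, second(s)
        if (u.startsWith("mon"))
            return n * 30L * 24 * 60 * 60 * 1000; // assume 30-day months
        if (u.startsWith("m"))
            return n * 60L * 1000;                // m, min, minute(s)
        if (u.startsWith("h"))
            return n * 60L * 60 * 1000;
        if (u.startsWith("d"))
            return n * 24L * 60 * 60 * 1000;
        if (u.startsWith("w"))
            return n * 7L * 24 * 60 * 60 * 1000;
        if (u.startsWith("y"))
            return n * 365L * 24 * 60 * 60 * 1000; // assume 365-day years
        throw new IllegalArgumentException("unknown time unit: " + u);
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("2 seconds")); // 2000
        System.out.println(parseMillis("1 h"));       // 3600000
    }
}
```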
This functionality is implemented in Gerrit's ConfigUtil class. Add it to
JGit so it can eventually be removed from Gerrit.
Change-Id: I2d6564ff656b6ab9424a9360624061c94fd5f413
Signed-off-by: Hugo Arès <hugo.ares@ericsson.com>
Handle stale file handles on packed-refs file
On a local filesystem the packed-refs file will be orphaned if it is
replaced by another client while the current client is reading the old
one. However, since NFS servers do not keep track of open files, instead
of orphaning the old packed-refs file, such a replacement will cause the
old file to be garbage collected instead. A stale file handle exception
will be raised on NFS clients if the file is garbage collected (deleted)
on the server while it is being read. Since we no longer have access to
the old file in these cases, the previous code would just fail. However,
in these cases, reopening the file and rereading it will succeed (since
it will open the new replacement file). Since retrying the read is a
viable strategy to deal with stale file handles on the packed-refs file,
implement such a strategy.
Since it is possible that the packed-refs file could be replaced again
while rereading it (multiple consecutive updates can easily occur with
ref deletions), loop on stale file handle exceptions, up to 5 extra
times, trying to read the packed-refs file again, until we either read
the new file, or find that the file no longer exists. The limit of 5 is
arbitrary, and provides a safe upper bound to prevent infinite loops
consuming resources in a potential unforeseen persistent error
condition.
Change-Id: I085c472bafa6e2f32f610a33ddc8368bb4ab1814
Signed-off-by: Martin Fick <mfick@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Rewrite push certificate parsing
- Consistently return structured data, such as actual ReceiveCommands,
which is more useful for callers that are doing things other than
verifying the signature, e.g. recording the set of commands.
- Store the certificate version field, as this is required to be part
of the signed payload.
- Add a toText() method to recreate the actual payload for signature
verification. This requires keeping track of the un-chomped command
strings from the original protocol stream.
- Separate the parser from the certificate itself, so the actual
PushCertificate object can be immutable. Make a fair attempt at deep
immutability, but this is not possible with the current mutable
ReceiveCommand structure.
- Use more detailed error messages that don't involve NON-NLS strings.
- Document null return values more thoroughly. Instead of having the
undocumented behavior of throwing NPE from certain methods if they
are not first guarded by enabled(), eliminate enabled() and return
null from those methods.
- Add tests for parsing a push cert from a section of pkt-line stream
using a real live stream captured with Wireshark (which, it should
be noted, uncovered several simply incorrect statements in C git's
Documentation/technical/pack-protocol.txt).
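The signed payload that toText() must recreate is, roughly, the certificate header lines followed by the un-chomped command lines. The exact layout sketched below is an assumption for illustration, not a normative description of the wire format, and the names are illustrative:

```java
import java.util.List;

public class PushCertPayload {
    // Rebuild the payload from the stored version field, identity lines,
    // and the raw command strings kept from the protocol stream.
    static String toText(String version, String pusher, String pushee,
            String nonce, List<String> rawCommandLines) {
        StringBuilder sb = new StringBuilder();
        sb.append("certificate version ").append(version).append('\n');
        sb.append("pusher ").append(pusher).append('\n');
        sb.append("pushee ").append(pushee).append('\n');
        sb.append("nonce ").append(nonce).append('\n');
        sb.append('\n');
        for (String line : rawCommandLines)
            sb.append(line).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        String payload = toText("0.1", "A U Thor <author@example.com>",
                "git://example.com/repo.git", "fake-nonce",
                List.of("<old-id> <new-id> refs/heads/master"));
        System.out.print(payload);
    }
}
```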
This is a slightly breaking API change to classes that were
technically public and technically released in 4.0. However, it is
highly unlikely that people were actually depending on public
behavior, since there were no public methods to create
PushCertificates with anything other than null field values, or a
PushCertificateParser that did anything other than infinite loop or
throw exceptions when reading.
Change-Id: I5382193347a8eb1811032d9b32af9651871372d0
RenameBranchCommand: more consistent handling of short ref names
Several problems:
* The command didn't specify whether it expected short or full names.
* For the new name, it expected a short name, but then got confused
if tags or both local and remote branches with the same name existed.
* For the old name, it accepted either a short or a full name, but
again got confused if a short name was given and a tag with the
same name existed.
With such an interface, one cannot use Repository.findRef() to
reliably find the branch to rename. Use exactRef() for the new
name as by the time the Ref is needed its full name is known.
For determining the old Ref from the name, do the resolution
explicitly: first try exactRef (assuming the old name is a full
name); if that doesn't find anything, try "refs/heads/<old>" and
"refs/remotes/<old>" explicitly. Throw an exception if the name
is ambiguous, or if exactRef returned something that is not a
branch (refs/tags/... or also refs/notes/...).
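The resolution order for the old name can be sketched over a plain set of full ref names standing in for the Repository (the follow-up check that the result is actually a branch is omitted; names are illustrative):

```java
import java.util.Set;

public class OldRefResolution {
    // Try the name as a full ref first; otherwise try refs/heads/ and
    // refs/remotes/ explicitly, failing if both exist. Tags never match.
    static String resolveOld(Set<String> refs, String name) {
        if (refs.contains(name))
            return name; // exactRef: a full name was given
        String head = "refs/heads/" + name;
        String remote = "refs/remotes/" + name;
        boolean h = refs.contains(head);
        boolean r = refs.contains(remote);
        if (h && r)
            throw new IllegalArgumentException("Ambiguous ref name: " + name);
        if (h)
            return head;
        if (r)
            return remote;
        return null; // not found
    }

    public static void main(String[] args) {
        Set<String> refs = Set.of("refs/heads/foo", "refs/tags/foo");
        // A tag with the same short name no longer causes confusion:
        System.out.println(resolveOld(refs, "foo")); // refs/heads/foo
    }
}
```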
Document in the javadoc what kind of names are valid, and add tests.
A user can still shoot himself in the foot if he chooses exceptionally
stupid branch names. For instance, it is still possible to rename a
branch to "refs/heads/foo" (full name "refs/heads/refs/heads/foo"),
but it cannot be renamed further using the new short name if a branch
with the full name "refs/heads/foo" exists. Similar edge cases exist
for other dumb branch names, like a branch with the short name
"refs/tags/foo". Renaming using the full name is always possible.
Bug: 542446
Change-Id: I34ac91c80c0a00c79a384d16ce1e727c550d54e9
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
Implement similarity based rename detection
Content similarity based rename detection is performed only after
a linear time detection is performed using exact content match on
the ObjectIds. Any names which were paired up during that exact
match phase are excluded from the inexact similarity based rename,
which reduces the space that must be considered.
During rename detection two entries cannot be marked as a rename
if they are different types of files. This prevents a symlink from
being renamed to a regular file, even if their blob content appears
to be similar, or is identical.
Efficiently comparing two files is performed by building up two
hash indexes and hashing lines or short blocks from each file,
counting the number of bytes that each line or block represents.
Instead of using a standard java.util.HashMap, we use a custom
open hashing scheme similar to what we use in ObjectIdSubclassMap.
This permits us to have a very light-weight hash, with very little
memory overhead per cell stored.
As we only need two ints per record in the map (line/block key and
number of bytes), we collapse them into a single long inside of
a long array, making very efficient use of available memory when
we create the index table. We only need object headers for the
index structure itself, and the index table, but not per-cell.
This offers a massive space savings over using java.util.HashMap.
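The two-ints-in-a-long packing can be sketched as (method names illustrative):

```java
public class PackedRecord {
    // Pack a (hash key, byte count) pair into one long so the index table
    // can be a bare long[] with no per-entry object header.
    static long pack(int key, int byteCount) {
        return ((long) key << 32) | (byteCount & 0xFFFFFFFFL);
    }

    static int keyOf(long record) {
        return (int) (record >>> 32);
    }

    static int countOf(long record) {
        return (int) record;
    }

    public static void main(String[] args) {
        long rec = pack(0xCAFE, 42);
        System.out.println(keyOf(rec) == 0xCAFE && countOf(rec) == 42); // true
    }
}
```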
The score calculation is done by approximating how many bytes are
the same between the two inputs (which for a delta would be how much
is copied from the base into the result). The score is derived by
dividing the approximate number of bytes in common into the length
of the larger of the two input files.
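A sketch of that score computation; the percentage scaling here is an assumption for illustration, as C Git and JGit scale rename scores differently:

```java
public class SimilarityScore {
    // Score = approximate bytes in common, as a percentage of the larger
    // input's size.
    static int score(long commonBytes, long sizeA, long sizeB) {
        long larger = Math.max(sizeA, sizeB);
        if (larger == 0)
            return 100; // two empty files are identical
        return (int) (commonBytes * 100 / larger);
    }

    public static void main(String[] args) {
        System.out.println(score(900, 1000, 800)); // 90
    }
}
```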
Right now the SimilarityIndex table should average about 1/2 full,
which means we waste about 50% of our memory on empty entries
after we are done indexing a file and sort the table's contents.
If memory becomes an issue we could discard the table and copy all
records over to a new array that is properly sized.
Building the index requires O(M + N log N) time, where M is the
size of the input file in bytes, and N is the number of unique
lines/blocks in the file. The N log N term comes
from the sort of the index table that is necessary to perform
linear time matching against another SimilarityIndex created for
a different file.
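The linear-time matching that the sort enables is a merge pass: with both index tables sorted by key, one walk over each table sums the byte counts the two files share. In this sketch keys and counts sit in parallel sorted arrays purely to keep it small; the real index keeps them packed in one long[].

```java
class CommonBytes {
    static long commonBytes(int[] keysA, int[] cntA, int[] keysB, int[] cntB) {
        long common = 0;
        int i = 0, j = 0;
        while (i < keysA.length && j < keysB.length) {
            if (keysA[i] == keysB[j]) {
                common += Math.min(cntA[i], cntB[j]); // overlap for this block
                i++;
                j++;
            } else if (keysA[i] < keysB[j]) {
                i++; // block only present in A
            } else {
                j++; // block only present in B
            }
        }
        return common;
    }
}
```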
To actually perform the rename detection, an SxD matrix is created,
placing the sources (aka deletions) along one dimension and the
destinations (aka additions) along the other. A simple O(S x D)
loop examines every cell in this matrix.
A SimilarityIndex is built along the row and reused for each
column compare along that row, avoiding the costly index rebuild
at the row level. A future improvement would be to load a smaller
square matrix into SimilarityIndexes and process everything in that
sub-matrix before discarding the column dimension and moving down
to the next sub-matrix block along that same grid of rows.
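The loop structure with row-level index reuse looks like this. The line-set "index" and overlap score below are simplified stand-ins for SimilarityIndex, chosen only to make the shape of the scan concrete; note the column side is still re-indexed per cell, which is the cost the sub-matrix idea would amortize.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class RenameScan {
    static Set<String> index(String content) {
        return new HashSet<>(Arrays.asList(content.split("\n")));
    }

    static int score(Set<String> a, Set<String> b) {
        Set<String> common = new HashSet<>(a);
        common.retainAll(b);
        int larger = Math.max(a.size(), b.size());
        return larger == 0 ? 0 : common.size() * 100 / larger;
    }

    static int[][] scoreMatrix(String[] sources, String[] dests) {
        int[][] m = new int[sources.length][dests.length];
        for (int s = 0; s < sources.length; s++) {
            Set<String> row = index(sources[s]);       // built once per row
            for (int d = 0; d < dests.length; d++)
                m[s][d] = score(row, index(dests[d])); // rebuilt per cell
        }
        return m;
    }
}
```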
An optional ProgressMonitor is permitted to be passed in, allowing
applications to see the progress of the detector as it works through
the matrix cells. This provides some indication of current status
for very long running renames.
The default line/block hash function used by the SimilarityIndex
may not be optimal, and may produce too many collisions. It is
borrowed from RawText's hash, which is used to quickly skip out of
a longer equality test if two lines have different hash values.
We may need to refine this hash in the future, in order to minimize
the number of collisions we get on common source files.
Based on a handful of test commits in JGit (especially my own
recent rename repository refactoring series), this rename detector
produces output that is very close to C Git. The content similarity
scores are sometimes off by 1%, which is most probably caused by
our SimilarityIndex type using a different hash function than C
Git uses when it computes the delta size between any two objects
in the rename matrix.
Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago

Support creating pack bitmap indexes in PackWriter.
Update the PackWriter to support writing out pack bitmap indexes,
a parallel ".bitmap" file to the ".pack" file.
Bitmaps are selected at commits every 1 to 5,000 commits for
each unique path from the start. The most recent 100 commits are
all bitmapped. The next 19,000 commits have a bitmap every 100
commits. The remaining commits have a bitmap every 5,000 commits.
Commits with more than one parent are preferred over ones
with one or fewer. Furthermore, previously computed bitmaps are reused,
if the previous entry had the reuse flag set, which is set when the
bitmap was placed at the max allowed distance.
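The spacing rule above can be sketched as a function of a commit's distance from the tip. The thresholds (100, 19,000, 5,000) are taken from the text; the function itself is illustrative, not JGit's selection code.

```java
class BitmapSpacing {
    static int spacing(int commitsFromTip) {
        if (commitsFromTip < 100)
            return 1;    // most recent 100 commits: bitmap every commit
        if (commitsFromTip < 100 + 19000)
            return 100;  // next 19,000 commits: every 100th
        return 5000;     // older history: every 5,000th
    }
}
```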
Bitmaps are used to speed up the counting phase when packing, for
requests that are not shallow. The PackWriterBitmapWalker uses
a RevFilter to proactively mark commits with RevFlag.SEEN when
they appear in a bitmap. The walker produces the full closure
of reachable ObjectIds, given the collection of starting ObjectIds.
For fetch requests, two ObjectWalks are executed to compute the
ObjectIds reachable from the haves and from the wants. The
ObjectIds that need to be written are determined by taking all the
resulting wants AND NOT the haves.
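The set algebra is simple once both closures exist as bitmaps. Here java.util.BitSet stands in for the compressed bitmaps, with each bit position representing one object in the pack's bitmap index; the class and method names are illustrative.

```java
import java.util.BitSet;

class FetchSet {
    static BitSet objectsToWrite(BitSet reachableFromWants, BitSet reachableFromHaves) {
        BitSet toWrite = (BitSet) reachableFromWants.clone();
        toWrite.andNot(reachableFromHaves); // wants AND NOT haves
        return toWrite;
    }
}
```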
For clone requests, we get cached pack support for "free" since
it is possible to determine if all of the ObjectIds in a pack file
are included in the resulting list of ObjectIds to write.
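In the same BitSet stand-in, the cached-pack test is a containment check: a whole pack file can be streamed as-is when every object it contains is in the set of objects to write (names illustrative).

```java
import java.util.BitSet;

class CachedPackCheck {
    static boolean canReuseWholePack(BitSet packObjects, BitSet objectsToWrite) {
        BitSet outside = (BitSet) packObjects.clone();
        outside.andNot(objectsToWrite); // objects in the pack but not needed
        return outside.isEmpty();
    }
}
```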
On my machine, the best times for clones and fetches of the Linux
kernel repository (with about 2.6M objects and 300K commits) are
tabulated below:
Operation                    Index V2               Index VE003
Clone                        37530ms (524.06 MiB)     82ms (524.06 MiB)
Fetch (1 commit back)           75ms                 107ms
Fetch (10 commits back)        456ms (269.51 KiB)    341ms (265.19 KiB)
Fetch (100 commits back)       449ms (269.91 KiB)    337ms (267.28 KiB)
Fetch (1000 commits back)     2229ms ( 14.75 MiB)    189ms ( 14.42 MiB)
Fetch (10000 commits back)    2177ms ( 16.30 MiB)    254ms ( 15.88 MiB)
Fetch (100000 commits back)  14340ms (185.83 MiB)   1655ms (189.39 MiB)
Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
12 years ago

Shallow fetch: Respect "shallow" lines
When fetching from a shallow clone, the client sends "have" lines
to tell the server about objects it already has and "shallow" lines
to tell where its local history terminates. In some circumstances,
the server fails to honor the shallow lines and fails to return
objects that the client needs.
UploadPack passes the "have" lines to PackWriter so PackWriter can
omit them from the generated pack. UploadPack processes "shallow"
lines by calling RevWalk.assumeShallow() with the set of shallow
commits. RevWalk creates and caches RevCommits for these shallow
commits, clearing out their parents. That way, walks correctly
terminate at the shallow commits instead of assuming the client has
history going back behind them. UploadPack converts its RevWalk to an
ObjectWalk, maintaining the cached RevCommits, and passes it to
PackWriter.
Unfortunately, to support shallow fetches the PackWriter does the
following:
    if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk))
        walk = new DepthWalk.ObjectWalk(reader, depth);
That is, when the client sends a "deepen" line (fetch --depth=<n>)
and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter
throws away the RevWalk that was passed in and makes a new one. The
cleared parent lists prepared by RevWalk.assumeShallow() are lost.
Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk.
It tries to create it by calling toObjectWalkWithSameObjects() on
a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk
does not override the standard RevWalk#toObjectWalkWithSameObjects
implementation, the result is a plain ObjectWalk instead of an
instance of DepthWalk.ObjectWalk.
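A self-contained analogue of that bug (class names here are illustrative, not JGit's): a conversion method the subclass fails to override returns the plain type, silently dropping the subclass's extra state.

```java
class Walk {
    Walk toObjectWalk() {
        return new Walk(); // plain result: any subclass state is lost
    }
}

class DepthAwareWalk extends Walk {
    final int depth;

    DepthAwareWalk(int depth) {
        this.depth = depth;
    }

    @Override
    Walk toObjectWalk() {
        return new DepthAwareWalk(depth); // the kind of override that was missing
    }
}
```

Without the override, callers that hold a DepthAwareWalk and convert it get back a plain Walk, which is exactly how the depth/shallow information was dropped here.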
The result is that the "shallow" information is thrown away and
objects reachable from the shallow commits can be omitted from the
pack sent when fetching with --depth from a shallow clone.
Multiple factors collude to limit the circumstances under which this
bug can be observed:
1. Commits with depth != 0 don't enter DepthGenerator's pending queue.
That means a "have" cannot have any effect on DepthGenerator unless
it is also a "want".
2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the
uninteresting flag is not propagated to ancestors there even if a
"have" is also a "want".
3. JGit treats a depth of 1 as "1 past the wants".
Because of (2), the only place the UNINTERESTING flag can leak to a
shallow commit's parents is in the carryFlags() call from
markUninteresting(). carryFlags() only traverses commits that have
already been parsed: commits yet to be parsed are supposed to inherit
correct flags from their parent in PendingGenerator#next (which
doesn't happen here --- that is (2)). So the list of commits that have
already been parsed becomes relevant.
When we hit the markUninteresting() call, all "want"s, "have"s, and
commits to be unshallowed have been parsed. carryFlags() only
affects the parsed commits. If the "want" is a direct parent of a
"have", then carryFlags() marks it as uninteresting. If the "have"
was also a "shallow", then its parent pointer should have been null
and the "want" shouldn't have been marked, so we see the bug. If the
"want" is a more distant ancestor then (2) keeps the uninteresting
state from propagating to the "want" and we don't see the bug. If the
"shallow" is not also a "have" then the shallow commit isn't parsed
so (2) keeps the uninteresting state from propagating to the "want"
so we don't see the bug.
Here is a reproduction case (time flowing left to right, arrows
pointing to parents). "C" must be a commit that the client
reports as a "have" during negotiation. That can only happen if the
server reports it as an existing branch or tag in the first round of
negotiation:
A <-- B <-- C <-- D
First do
git clone --depth 1 <repo>
which yields D as a "have" and C as a "shallow" commit. Then try
git fetch --depth 1 <repo> B:refs/heads/B
Negotiation sets up: have D, shallow C, have C, want B.
But due to this bug B is marked as uninteresting and is not sent.
Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440
Signed-off-by: Terry Parker <tparker@google.com>
7 years ago

GPG signature verification via BouncyCastle
Add a GpgSignatureVerifier interface, plus a factory to create
instances thereof that is provided via the ServiceLoader mechanism.
Implement the new interface for BouncyCastle. A verifier maintains
an internal LRU cache of previously found public keys to speed up
verifying multiple objects (tags or commits). Mergetags are not handled.
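A bounded LRU cache of that kind can be sketched with LinkedHashMap's access-order mode; the class name and size are illustrative, not JGit's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class KeyCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    KeyCache(int maxSize) {
        super(16, 0.75f, true); // true = access order, i.e. LRU eviction order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the least recently used entry
    }
}
```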
Provide a new VerifySignatureCommand in org.eclipse.jgit.api together
with a factory method Git.verifySignature(). The command can verify
signatures on tags or commits, and can be limited to accept only tags
or commits. Provide a new public WrongObjectTypeException thrown when
the command is limited to either tags or commits and a name resolves
to some other object kind.
In jgit.pgm, implement "git tag -v", "git log --show-signature", and
"git show --show-signature". The output is similar to command-line
gpg invoked via git, but not identical. In particular, lines are not
prefixed by "gpg:" but by "bc:".
Trust levels for public keys are read from the keys' trust packets,
not from GPG's internal trust database. A trust packet may or may
not be set. Command-line GPG produces more warning lines depending
on the trust level, warning about keys with a trust level below
"full".
There are no unit tests because JGit still doesn't have any setup to
do signing unit tests; this would require at least a faked .gpg
directory with pre-created key rings and keys, and a way to make the
BouncyCastle classes use that directory instead of the default. See
bug 547538 and also bug 544847.
Tested manually with a small test repository containing signed and
unsigned commits and tags, with signatures made with different keys
and made by command-line git using GPG 2.2.25 and by JGit using
BouncyCastle 1.65.
Bug: 547751
Change-Id: If7e34aeed6ca6636a92bf774d893d98f6d459181
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
3 years ago
- #
- # Messages with format elements ({0}) are processed using java.text.MessageFormat.
- #
- abbreviationLengthMustBeNonNegative=Abbreviation length must not be negative.
- abortingRebase=Aborting rebase: resetting to {0}
- abortingRebaseFailed=Could not abort rebase
- abortingRebaseFailedNoOrigHead=Could not abort rebase since ORIG_HEAD is null
- advertisementCameBefore=advertisement of {0}^'{}' came before {1}
- advertisementOfCameBefore=advertisement of {0}^'{}' came before {1}
- amazonS3ActionFailed={0} of ''{1}'' failed: {2} {3}
- amazonS3ActionFailedGivingUp={0} of ''{1}'' failed: Giving up after {2} attempts.
- ambiguousObjectAbbreviation=Object abbreviation {0} is ambiguous
- aNewObjectIdIsRequired=A NewObjectId is required.
- anExceptionOccurredWhileTryingToAddTheIdOfHEAD=An exception occurred while trying to add the Id of HEAD
- anSSHSessionHasBeenAlreadyCreated=An SSH session has been already created
- applyBinaryBaseOidWrong=Cannot apply binary patch; OID for file {0} does not match
- applyBinaryOidTooShort=Binary patch for file {0} does not have full IDs
- applyBinaryResultOidWrong=Result of binary patch for file {0} has wrong OID.
- applyingCommit=Applying {0}
- archiveFormatAlreadyAbsent=Archive format already absent: {0}
- archiveFormatAlreadyRegistered=Archive format already registered with different implementation: {0}
- argumentIsNotAValidCommentString=Invalid comment: {0}
- assumeAtomicCreateNewFile=Reading option "core.supportsAtomicFileCreation" failed, fallback to default assuming atomic file creation is supported
- atLeastOnePathIsRequired=At least one path is required.
- atLeastOnePatternIsRequired=At least one pattern is required.
- atLeastTwoFiltersNeeded=At least two filters needed.
- atomicPushNotSupported=Atomic push not supported.
- atomicRefUpdatesNotSupported=Atomic ref updates not supported
- atomicSymRefNotSupported=Atomic symref not supported
- authenticationNotSupported=authentication not supported
- badBase64InputCharacterAt=Bad Base64 input character at {0} : {1} (decimal)
- badEntryDelimiter=Bad entry delimiter
- badEntryName=Bad entry name: {0}
- badEscape=Bad escape: {0}
- badGroupHeader=Bad group header
- badIgnorePattern=Cannot parse .gitignore pattern ''{0}''
- badIgnorePatternFull=File {0} line {1}: cannot parse pattern ''{2}'': {3}
- badObjectType=Bad object type: {0}
- badRef=Bad ref: {0}: {1}
- badSectionEntry=Bad section entry: {0}
- badShallowLine=Bad shallow line: {0}
- bareRepositoryNoWorkdirAndIndex=Bare Repository has neither a working tree, nor an index
- base85invalidChar=Invalid base-85 character: 0x{0}
- base85length=Base-85 encoded data must have a length that is a multiple of 5
- base85overflow=Base-85 value overflow, does not fit into 32 bits: 0x{0}
- base85tooLong=Extra base-85 encoded data for output size of {0} bytes
- base85tooShort=Base-85 data decoded into less than {0} bytes
- baseLengthIncorrect=base length incorrect
- binaryDeltaBaseLengthMismatch=Binary delta base length does not match, expected {0}, got {1}
- binaryDeltaInvalidOffset=Binary delta offset + length too large: {0} + {1}
- binaryDeltaInvalidResultLength=Binary delta expected result length is negative
- binaryHunkDecodeError=Binary hunk, line {0}: invalid input
- binaryHunkInvalidLength=Binary hunk, line {0}: input corrupt; expected length byte, got 0x{1}
- binaryHunkLineTooShort=Binary hunk, line {0}: input ended prematurely
- binaryHunkMissingNewline=Binary hunk, line {0}: input line not terminated by newline
- bitmapMissingObject=Bitmap at {0} is missing {1}.
- bitmapsMustBePrepared=Bitmaps must be prepared before they may be written.
- blameNotCommittedYet=Not Committed Yet
- blockLimitNotMultipleOfBlockSize=blockLimit {0} must be a multiple of blockSize {1}
- blockLimitNotPositive=blockLimit must be positive: {0}
- blockSizeNotPowerOf2=blockSize must be a power of 2
- bothRefTargetsMustNotBeNull=both old and new ref targets must not be null.
- branchNameInvalid=Branch name {0} is not allowed
- buildingBitmaps=Building bitmaps
- cachedPacksPreventsIndexCreation=Using cached packs prevents index creation
- cachedPacksPreventsListingObjects=Using cached packs prevents listing objects
- cannotAccessLastModifiedForSafeDeletion=Unable to access lastModifiedTime of file {0}, skip deletion since we cannot safely avoid race condition
- cannotBeCombined=Cannot be combined.
- cannotBeRecursiveWhenTreesAreIncluded=TreeWalk shouldn't be recursive when tree objects are included.
- cannotChangeActionOnComment=Cannot change action on comment line in git-rebase-todo file, old action: {0}, new action: {1}.
- cannotCheckoutFromUnbornBranch=Cannot check out from unborn branch
- cannotCheckoutOursSwitchBranch=Checking out ours/theirs is only possible when checking out index, not when switching branches.
- cannotCombineSquashWithNoff=Cannot combine --squash with --no-ff.
- cannotCombineTopoSortWithTopoKeepBranchTogetherSort=Cannot combine sorts TOPO and TOPO_KEEP_BRANCH_TOGETHER
- cannotCombineTreeFilterWithRevFilter=Cannot combine TreeFilter {0} with RevFilter {1}.
- cannotCommitOnARepoWithState=Cannot commit on a repo with state: {0}
- cannotCommitWriteTo=Cannot commit write to {0}
- cannotConnectPipes=cannot connect pipes
- cannotConvertScriptToText=Cannot convert script to text
- cannotCreateConfig=cannot create config
- cannotCreateDirectory=Cannot create directory {0}
- cannotCreateHEAD=cannot create HEAD
- cannotCreateIndexfile=Cannot create an index file with name {0}
- cannotCreateTempDir=Cannot create a temp dir
- cannotDeleteCheckedOutBranch=Branch {0} is checked out and cannot be deleted
- cannotDeleteFile=Cannot delete file: {0}
- cannotDeleteObjectsPath=Cannot delete {0}/{1}: {2}
- cannotDetermineProxyFor=Cannot determine proxy for {0}
- cannotDownload=Cannot download {0}
- cannotEnterObjectsPath=Cannot enter {0}/objects: {1}
- cannotEnterPathFromParent=Cannot enter {0} from {1}: {2}
- cannotExecute=cannot execute: {0}
- cannotFindMergeBaseUsingFirstParent=Cannot find merge bases using a first-parent walk.
- cannotGet=Cannot get {0}
- cannotGetObjectsPath=Cannot get {0}/{1}: {2}
- cannotListObjectsPath=Cannot ls {0}/{1}: {2}
- cannotListPackPath=Cannot ls {0}/pack: {1}
- cannotListRefs=cannot list refs
- cannotLock=Cannot lock {0}. Ensure that no other process has an open file handle on the lock file {0}.lock, then you may delete the lock file and retry.
- cannotLockPackIn=Cannot lock pack in {0}
- cannotMatchOnEmptyString=Cannot match on empty string.
- cannotMkdirObjectPath=Cannot create directory {0}/{1}: {2}
- cannotMoveIndexTo=Cannot move index to {0}
- cannotMovePackTo=Cannot move pack to {0}
- cannotOpenService=cannot open {0}
- cannotParseDate=The date specification "{0}" could not be parsed with the following formats: {1}
- cannotParseGitURIish=Cannot parse Git URI-ish
- cannotPullOnARepoWithState=Cannot pull into a repository with state: {0}
- cannotRead=Cannot read {0}
- cannotReadBackDelta=Cannot read delta type {0}
- cannotReadBlob=Cannot read blob {0}
- cannotReadCommit=Cannot read commit {0}
- cannotReadFile=Cannot read file {0}
- cannotReadHEAD=cannot read HEAD: {0} {1}
- cannotReadIndex=The index file {0} exists but cannot be read
- cannotReadObject=Cannot read object
- cannotReadObjectsPath=Cannot read {0}/{1}: {2}
- cannotReadTree=Cannot read tree {0}
- cannotRebaseWithoutCurrentHead=Can not rebase without a current HEAD
- cannotSaveConfig=Cannot save config file ''{0}''
- cannotSquashFixupWithoutPreviousCommit=Cannot {0} without previous commit.
- cannotStoreObjects=cannot store objects
- cannotResolveUniquelyAbbrevObjectId=Could not resolve uniquely the abbreviated object ID
- cannotUpdateUnbornBranch=Cannot update unborn branch
- cannotWriteObjectsPath=Cannot write {0}/{1}: {2}
- canOnlyCherryPickCommitsWithOneParent=Cannot cherry-pick commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported.
- canOnlyRevertCommitsWithOneParent=Cannot revert commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported
- commitDoesNotHaveGivenParent=The commit ''{0}'' does not have a parent number {1}.
- cantFindObjectInReversePackIndexForTheSpecifiedOffset=Can''t find object in (reverse) pack index for the specified offset {0}
- channelMustBeInRange1_255=channel {0} must be in range [1, 255]
- characterClassIsNotSupported=The character class {0} is not supported.
- checkingOutFiles=Checking out files
- checkoutConflictWithFile=Checkout conflict with file: {0}
- checkoutConflictWithFiles=Checkout conflict with files: {0}
- checkoutUnexpectedResult=Checkout returned unexpected result {0}
- classCastNotA=Not a {0}
- cloneNonEmptyDirectory=Destination path "{0}" already exists and is not an empty directory
- closed=closed
- closeLockTokenFailed=Closing LockToken ''{0}'' failed
- collisionOn=Collision on {0}
- commandClosedStderrButDidntExit=Command {0} closed stderr stream but didn''t exit within timeout {1} seconds
- commandRejectedByHook=Rejected by "{0}" hook.\n{1}
- commandWasCalledInTheWrongState=Command {0} was called in the wrong state
- commitMessageNotSpecified=commit message not specified
- commitOnRepoWithoutHEADCurrentlyNotSupported=Commit on repo without HEAD currently not supported
- commitAmendOnInitialNotPossible=Amending is not possible on initial commit.
- commitsHaveAlreadyBeenMarkedAsStart=Commits have already been marked as walk starts.
- compressingObjects=Compressing objects
- configSubsectionContainsNewline=config subsection name contains newline
- configSubsectionContainsNullByte=config subsection name contains byte 0x00
- configValueContainsNullByte=config value contains byte 0x00
- configHandleIsStale=config file handle is stale, {0}. retry
- configHandleMayBeLocked=config file handle may be locked by other process, {0}. retry
- connectionFailed=connection failed
- connectionTimeOut=Connection time out: {0}
- contextMustBeNonNegative=context must be >= 0
- cookieFilePathRelative=git config http.cookieFile contains a relative path, should be absolute: {0}
- corruptionDetectedReReadingAt=Corruption detected re-reading at {0}
- corruptObjectBadDate=bad date
- corruptObjectBadEmail=bad email
- corruptObjectBadStream=bad stream
- corruptObjectBadTimezone=bad time zone
- corruptObjectDuplicateEntryNames=duplicate entry names
- corruptObjectGarbageAfterSize=garbage after size
- corruptObjectIncorrectLength=incorrect length
- corruptObjectIncorrectSorting=incorrectly sorted
- corruptObjectInvalidModeChar=invalid mode character
- corruptObjectInvalidModeStartsZero=mode starts with '0'
- corruptObjectInvalidMode2=invalid mode {0,number,#}
- corruptObjectInvalidMode3=invalid mode {0} for {1} ''{2}'' in {3}.
- corruptObjectInvalidName=invalid name '%s'
- corruptObjectInvalidNameAux=invalid name 'AUX'
- corruptObjectInvalidNameCon=invalid name 'CON'
- corruptObjectInvalidNameCom=invalid name 'COM%c'
- corruptObjectInvalidNameEnd=invalid name ends with '%c'
- corruptObjectInvalidNameIgnorableUnicode=invalid name '%s' contains ignorable Unicode characters
- corruptObjectInvalidNameInvalidUtf8=invalid name contains byte sequence ''{0}'' which is not a valid UTF-8 character
- corruptObjectInvalidNameLpt=invalid name 'LPT%c'
- corruptObjectInvalidNameNul=invalid name 'NUL'
- corruptObjectInvalidNamePrn=invalid name 'PRN'
- corruptObjectInvalidObject=invalid object
- corruptObjectInvalidParent=invalid parent
- corruptObjectInvalidTree=invalid tree
- corruptObjectInvalidType=invalid type
- corruptObjectInvalidType2=invalid type {0}
- corruptObjectMissingEmail=missing email
- corruptObjectNameContainsByte=byte 0x%x not allowed in Windows filename
- corruptObjectNameContainsChar=char '%c' not allowed in Windows filename
- corruptObjectNameContainsNullByte=name contains byte 0x00
- corruptObjectNameContainsSlash=name contains '/'
- corruptObjectNameDot=invalid name '.'
- corruptObjectNameDotDot=invalid name '..'
- corruptObjectNameZeroLength=zero length name
- corruptObjectNegativeSize=negative size
- corruptObjectNoAuthor=no author
- corruptObjectNoCommitter=no committer
- corruptObjectNoHeader=no header
- corruptObjectNoObjectHeader=no object header
- corruptObjectNoTagHeader=no tag header
- corruptObjectNotreeHeader=no tree header
- corruptObjectNoTypeHeader=no type header
- corruptObjectPackfileChecksumIncorrect=Packfile checksum incorrect.
- corruptObjectTruncatedInMode=truncated in mode
- corruptObjectTruncatedInName=truncated in name
- corruptObjectTruncatedInObjectId=truncated in object id
- corruptObjectZeroId=entry points to null SHA-1
- corruptUseCnt=close() called when useCnt is already zero for {0}
- couldNotGetAdvertisedRef=Remote {0} did not advertise Ref for branch {1}. This Ref may not exist in the remote or may be hidden by permission settings.
- couldNotGetRepoStatistics=Could not get repository statistics
- couldNotFindTabInLine=Could not find tab in line {0}. Tab is the mandatory separator for the Netscape Cookie File Format.
- couldNotFindSixTabsInLine=Could not find 6 tabs but only {0} in line '{1}'. 7 tab separated columns per line are mandatory for the Netscape Cookie File Format.
- couldNotLockHEAD=Could not lock HEAD
- couldNotPersistCookies=Could not persist received cookies in file ''{0}''
- couldNotReadCookieFile=Could not read cookie file ''{0}''
- couldNotReadIndexInOneGo=Could not read index in one go, only {0} out of {1} read
- couldNotReadObjectWhileParsingCommit=Could not read an object while parsing commit {0}
- couldNotRewindToUpstreamCommit=Could not rewind to upstream commit
- couldNotURLEncodeToUTF8=Could not URL encode to UTF-8
- countingObjects=Counting objects
- corruptPack=Pack file {0} is corrupt, removing it from pack list
- createBranchFailedUnknownReason=Create branch failed for unknown reason
- createBranchUnexpectedResult=Create branch returned unexpected result {0}
- createNewFileFailed=Could not create new file {0}
- createRequiresZeroOldId=Create requires old ID to be zero
- credentialPassword=Password
- credentialPassphrase=Passphrase
- credentialUsername=Username
- daemonAlreadyRunning=Daemon already running
- daysAgo={0} days ago
- deepenNotWithDeepen=Cannot combine deepen with deepen-not
- deepenSinceWithDeepen=Cannot combine deepen with deepen-since
- deleteBranchUnexpectedResult=Delete branch returned unexpected result {0}
- deleteFileFailed=Could not delete file {0}
- deletedOrphanInPackDir=Deleted orphaned file {}
- deleteRequiresZeroNewId=Delete requires new ID to be zero
- deleteTagUnexpectedResult=Delete tag returned unexpected result {0}
- deletingNotSupported=Deleting {0} not supported.
- destinationIsNotAWildcard=Destination is not a wildcard.
- detachedHeadDetected=HEAD is detached
- dirCacheDoesNotHaveABackingFile=DirCache does not have a backing file
- dirCacheFileIsNotLocked=DirCache {0} not locked
- dirCacheIsNotLocked=DirCache is not locked
- DIRCChecksumMismatch=DIRC checksum mismatch
- DIRCCorruptLength=DIRC variable int {0} invalid after entry for {1}
- DIRCCorruptLengthFirst=DIRC variable int {0} invalid in first entry
- DIRCExtensionIsTooLargeAt=DIRC extension {0} is too large at {1} bytes.
- DIRCExtensionNotSupportedByThisVersion=DIRC extension {0} not supported by this version.
- DIRCHasTooManyEntries=DIRC has too many entries.
- DIRCUnrecognizedExtendedFlags=Unrecognized extended flags: {0}
- downloadCancelled=Download cancelled
- downloadCancelledDuringIndexing=Download cancelled during indexing
- duplicateAdvertisementsOf=duplicate advertisements of {0}
- duplicateRef=Duplicate ref: {0}
- duplicateRefAttribute=Duplicate ref attribute: {0}
- duplicateRemoteRefUpdateIsIllegal=Duplicate remote ref update is illegal. Affected remote name: {0}
- duplicateStagesNotAllowed=Duplicate stages not allowed
- eitherGitDirOrWorkTreeRequired=One of setGitDir or setWorkTree must be called.
- emptyCommit=No changes
- emptyPathNotPermitted=Empty path not permitted.
- emptyRef=Empty ref: {0}
- encryptionError=Encryption error: {0}
- encryptionOnlyPBE=Encryption error: only password-based encryption (PBE) algorithms are supported.
- endOfFileInEscape=End of file in escape
- entryNotFoundByPath=Entry not found by path: {0}
- enumValueNotSupported0=Invalid value: {0}
- enumValueNotSupported2=Invalid value: {0}.{1}={2}
- enumValueNotSupported3=Invalid value: {0}.{1}.{2}={3}
- enumValuesNotAvailable=Enumerated values of type {0} not available
- errorInPackedRefs=error in packed-refs
- errorInvalidProtocolWantedOldNewRef=error: invalid protocol: wanted 'old new ref'
- errorListing=Error listing {0}
- errorOccurredDuringUnpackingOnTheRemoteEnd=error occurred during unpacking on the remote end: {0}
- errorReadingInfoRefs=error reading info/refs
- exceptionCaughtDuringExecutionOfHook=Exception caught during execution of "{0}" hook.
- exceptionCaughtDuringExecutionOfAddCommand=Exception caught during execution of add command
- exceptionCaughtDuringExecutionOfArchiveCommand=Exception caught during execution of archive command
- exceptionCaughtDuringExecutionOfCherryPickCommand=Exception caught during execution of cherry-pick command. {0}
- exceptionCaughtDuringExecutionOfCommand=Exception caught during execution of command ''{0}'' in ''{1}'', return code ''{2}'', error message ''{3}''
- exceptionCaughtDuringExecutionOfCommitCommand=Exception caught during execution of commit command
- exceptionCaughtDuringExecutionOfFetchCommand=Exception caught during execution of fetch command
- exceptionCaughtDuringExecutionOfLsRemoteCommand=Exception caught during execution of ls-remote command
- exceptionCaughtDuringExecutionOfMergeCommand=Exception caught during execution of merge command. {0}
- exceptionCaughtDuringExecutionOfPullCommand=Exception caught during execution of pull command
- exceptionCaughtDuringExecutionOfPushCommand=Exception caught during execution of push command
- exceptionCaughtDuringExecutionOfResetCommand=Exception caught during execution of reset command. {0}
- exceptionCaughtDuringExecutionOfRevertCommand=Exception caught during execution of revert command. {0}
- exceptionCaughtDuringExecutionOfRmCommand=Exception caught during execution of rm command
- exceptionCaughtDuringExecutionOfTagCommand=Exception caught during execution of tag command
- exceptionHookExecutionInterrupted=Execution of "{0}" hook interrupted.
- exceptionOccurredDuringAddingOfOptionToALogCommand=Exception occurred during adding of {0} as option to a Log command
- exceptionOccurredDuringReadingOfGIT_DIR=Exception occurred during reading of $GIT_DIR/{0}. {1}
- exceptionWhileFindingUserHome=Problem determining the user home directory, trying Java user.home
- exceptionWhileReadingPack=Exception caught while accessing pack file {0}, the pack file might be corrupt. Caught {1} consecutive errors while trying to read this pack.
- expectedACKNAKFoundEOF=Expected ACK/NAK, found EOF
- expectedACKNAKGot=Expected ACK/NAK, got: {0}
- expectedBooleanStringValue=Expected boolean string value
- expectedCharacterEncodingGuesses=Expected {0} character encoding guesses
- expectedDirectoryNotSubmodule=Expected submodule ''{0}'' to be a directory
- expectedEOFReceived=expected EOF; received ''{0}'' instead
- expectedGot=expected ''{0}'', got ''{1}''
- expectedLessThanGot=expected less than ''{0}'', got ''{1}''
- expectedPktLineWithService=expected pkt-line with ''# service=-'', got ''{0}''
- expectedReceivedContentType=expected Content-Type {0}; received Content-Type {1}
- expectedReportForRefNotReceived={0}: expected report for ref {1} not received
- failedAtomicFileCreation=Atomic file creation failed, number of hard links to file {0} was not 2 but {1}
- failedCreateLockFile=Creating lock file {} failed
- failedReadHttpsProtocols=Failed to read system property https.protocols, assuming it is not set
- failedToConvert=Failed to convert rest: %s
- failedToDetermineFilterDefinition=An exception occurred while determining filter definitions
- failedUpdatingRefs=failed updating refs
- failureDueToOneOfTheFollowing=Failure due to one of the following:
- failureUpdatingFETCH_HEAD=Failure updating FETCH_HEAD: {0}
- failureUpdatingTrackingRef=Failure updating tracking ref {0}: {1}
- fileAlreadyExists=File already exists: {0}
- fileCannotBeDeleted=File cannot be deleted: {0}
- fileIsTooLarge=File is too large: {0}
- fileModeNotSetForPath=FileMode not set for path {0}
- filterExecutionFailed=Execution of filter command ''{0}'' on file ''{1}'' failed
- filterExecutionFailedRc=Execution of filter command ''{0}'' on file ''{1}'' failed with return code ''{2}'', message on stderr: ''{3}''
- filterRequiresCapability=filter requires server to advertise that capability
- findingGarbage=Finding garbage
- flagIsDisposed={0} is disposed.
- flagNotFromThis={0} not from this.
- flagsAlreadyCreated={0} flags already created.
- funnyRefname=funny refname
- gcFailed=Garbage collection failed.
- gcTooManyUnpruned=Too many loose, unpruneable objects after garbage collection. Consider adjusting gc.auto or gc.pruneExpire.
- headRequiredToStash=HEAD required to stash local changes
- hoursAgo={0} hours ago
- httpConfigCannotNormalizeURL=Cannot normalize URL path {0}: too many .. segments
- httpConfigInvalidURL=Cannot parse URL from subsection http.{0} in git config; ignored.
- httpFactoryInUse=Changing the HTTP connection factory after an HTTP connection has already been opened is not allowed.
- httpPreAuthTooLate=HTTP Basic preemptive authentication cannot be set once an HTTP connection has already been opened.
- httpUserInfoDecodeError=Cannot decode user info from URL {}; ignored.
- httpWrongConnectionType=Wrong connection type: expected {0}, got {1}.
- hugeIndexesAreNotSupportedByJgitYet=Huge indexes are not supported by JGit yet
- hunkBelongsToAnotherFile=Hunk belongs to another file
- hunkDisconnectedFromFile=Hunk disconnected from file
- hunkHeaderDoesNotMatchBodyLineCountOf=Hunk header {0} does not match body line count of {1}
- illegalArgumentNotA=Not {0}
- illegalCombinationOfArguments=The combination of arguments {0} and {1} is not allowed
- illegalHookName=Illegal hook name {0}
- illegalPackingPhase=Illegal packing phase {0}
- incorrectHashFor=Incorrect hash for {0}; computed {1} as a {2} from {3} bytes.
- incorrectOBJECT_ID_LENGTH=Incorrect OBJECT_ID_LENGTH.
- indexFileCorruptedNegativeBucketCount=Invalid negative bucket count read from pack v2 index file: {0}
- indexFileIsTooLargeForJgit=Index file is too large for JGit
- indexNumbersNotIncreasing=index numbers not increasing: ''{0}'': min {1}, last max {2}
- indexWriteException=Modified index could not be written
- initFailedBareRepoDifferentDirs=When initializing a bare repo with directory {0} and separate git-dir {1} specified, both folders must point to the same location
- initFailedDirIsNoDirectory=Cannot set directory to ''{0}'' which is not a directory
- initFailedGitDirIsNoDirectory=Cannot set git-dir to ''{0}'' which is not a directory
- initFailedNonBareRepoSameDirs=When initializing a non-bare repo with directory {0} and separate git-dir {1} specified, the two folders must not point to the same location
- inMemoryBufferLimitExceeded=In-memory buffer limit exceeded
- inputDidntMatchLength=Input did not match supplied length. {0} bytes are missing.
- inputStreamMustSupportMark=InputStream must support mark()
- integerValueOutOfRange=Integer value {0}.{1} out of range
- internalRevisionError=internal revision error
- internalServerError=internal server error
- interruptedWriting=Interrupted writing {0}
- inTheFuture=in the future
- invalidAdvertisementOf=invalid advertisement of {0}
- invalidAncestryLength=Invalid ancestry length
- invalidBooleanValue=Invalid boolean value: {0}.{1}={2}
- invalidChannel=Invalid channel {0}
- invalidCommitParentNumber=Invalid commit parent number
- invalidDepth=Invalid depth: {0}
- invalidEncryption=Invalid encryption
- invalidExpandWildcard=ExpandFromSource on a refspec that can have mismatched wildcards does not make sense.
- invalidFilter=Invalid filter: {0}
- invalidGitdirRef=Invalid .git reference in file ''{0}''
- invalidGitModules=Invalid .gitmodules file
- invalidGitType=invalid git type: {0}
- invalidHeaderFormat=Invalid header from git config http.extraHeader ignored: no colon or empty key in header ''{0}''
- invalidHeaderKey=Invalid header from git config http.extraHeader ignored: key contains illegal characters; see RFC 7230: ''{0}''
- invalidHeaderValue=Invalid header from git config http.extraHeader ignored: value should be 7bit-ASCII characters only: ''{0}''
- invalidHexString=Invalid hex string: {0}
- invalidHomeDirectory=Invalid home directory: {0}
- invalidHooksPath=Invalid git config core.hooksPath = {0}
- invalidId=Invalid id: {0}
- invalidId0=Invalid id
- invalidIdLength=Invalid id length {0}; should be {1}
- invalidIgnoreParamSubmodule=Found invalid ignore param for submodule {0}.
- invalidIgnoreRule=Exception caught while parsing ignore rule ''{0}''.
- invalidIntegerValue=Invalid integer value: {0}.{1}={2}
- invalidKey=Invalid key: {0}
- invalidLineInConfigFile=Invalid line in config file
- invalidLineInConfigFileWithParam=Invalid line in config file: {0}
- invalidModeFor=Invalid mode {0} for {1} {2} in {3}.
- invalidModeForPath=Invalid mode {0} for path {1}
- invalidNameContainsDotDot=Invalid name (contains ".."): {0}
- invalidObject=Invalid {0} {1}: {2}
- invalidOldIdSent=invalid old id sent
- invalidPacketLineHeader=Invalid packet line header: {0}
- invalidPath=Invalid path: {0}
- invalidPurgeFactor=Invalid purgeFactor {0}; values must be in the range 0 to 1
- invalidRedirectLocation=Invalid redirect location {0} -> {1}
- invalidRefAdvertisementLine=Invalid ref advertisement line: ''{0}''
- invalidReflogRevision=Invalid reflog revision: {0}
- invalidRefName=Invalid ref name: {0}
- invalidReftableBlock=Invalid reftable block
- invalidReftableCRC=Invalid reftable CRC-32
- invalidReftableFile=Invalid reftable file
- invalidRemote=Invalid remote: {0}
- invalidRepositoryStateNoHead=Invalid repository --- cannot read HEAD
- invalidShallowObject=invalid shallow object {0}, expected commit
- invalidStageForPath=Invalid stage {0} for path {1}
- invalidSystemProperty=Invalid system property ''{0}'': ''{1}''; using default value {2}
- invalidTagOption=Invalid tag option: {0}
- invalidTimeout=Invalid timeout: {0}
- invalidTimestamp=Invalid timestamp in {0}
- invalidTimeUnitValue2=Invalid time unit value: {0}.{1}={2}
- invalidTimeUnitValue3=Invalid time unit value: {0}.{1}.{2}={3}
- invalidTreeZeroLengthName=Cannot append a tree entry with zero-length name
- invalidURL=Invalid URL {0}
- invalidWildcards=Invalid wildcards {0}
- invalidRefSpec=Invalid refspec {0}
- invalidWindowSize=Invalid window size
- isAStaticFlagAndHasNorevWalkInstance={0} is a static flag and has no RevWalk instance
- JRELacksMD5Implementation=JRE lacks MD5 implementation
- kNotInRange=k {0} not in {1} - {2}
- largeObjectExceedsByteArray=Object {0} exceeds 2 GiB byte array limit
- largeObjectExceedsLimit=Object {0} exceeds {1} limit, actual size is {2}
- largeObjectException={0} exceeds size limit
- largeObjectOutOfMemory=Out of memory loading {0}
- lengthExceedsMaximumArraySize=Length exceeds maximum array size
- lfsHookConflict=LFS built-in hook conflicts with existing pre-push hook in repository {0}. Either remove the pre-push hook or disable built-in LFS support.
- listingAlternates=Listing alternates
- listingPacks=Listing packs
- localObjectsIncomplete=Local objects incomplete.
- localRefIsMissingObjects=Local ref {0} is missing object(s).
- localRepository=local repository
- lockAlreadyHeld=Lock on {0} already held
- lockCountMustBeGreaterOrEqual1=lockCount must be >= 1
- lockError=lock error: {0}
- lockFailedRetry=locking {0} failed after {1} retries
- lockOnNotClosed=Lock on {0} not closed.
- lockOnNotHeld=Lock on {0} not held.
- lockStreamClosed=Output to lock on {0} already closed
- lockStreamMultiple=Output to lock on {0} already opened
- logInconsistentFiletimeDiff={}: inconsistent duration from file timestamps on {}, {}: {} > {}, but diff = {}. Aborting measurement at resolution {}.
- logLargerFiletimeDiff={}: inconsistent duration from file timestamps on {}, {}: diff = {} > {} (last good value). Aborting measurement.
- logSmallerFiletime={}: got smaller file timestamp on {}, {}: {} < {}. Aborting measurement at resolution {}.
- logXDGConfigHomeInvalid=Environment variable XDG_CONFIG_HOME contains an invalid path {}
- looseObjectHandleIsStale=loose-object {0} file handle is stale. retry {1} of {2}
- maxCountMustBeNonNegative=max count must be >= 0
- mergeConflictOnNonNoteEntries=Merge conflict on non-note entries: base = {0}, ours = {1}, theirs = {2}
- mergeConflictOnNotes=Merge conflict on note {0}. base = {1}, ours = {2}, theirs = {3}
- mergeStrategyAlreadyExistsAsDefault=Merge strategy "{0}" already exists as a default strategy
- mergeStrategyDoesNotSupportHeads=merge strategy {0} does not support {1} heads to be merged into HEAD
- mergeUsingStrategyResultedInDescription=Merge of revisions {0} with base {1} using strategy {2} resulted in: {3}. {4}
- mergeRecursiveConflictsWhenMergingCommonAncestors=Multiple common ancestors were found and merging them resulted in a conflict: {0}, {1}
- mergeRecursiveTooManyMergeBasesFor=More than {0} merge bases for:\n a {1}\n b {2} found:\n count {3}
- messageAndTaggerNotAllowedInUnannotatedTags=Unannotated tags cannot have a message or tagger
- minutesAgo={0} minutes ago
- mismatchOffset=mismatch offset for object {0}
- mismatchCRC=mismatch CRC for object {0}
- missingAccesskey=Missing accesskey.
- missingConfigurationForKey=No value for key {0} found in configuration
- missingCookieFile=Configured http.cookieFile ''{0}'' is missing
- missingCRC=missing CRC for object {0}
- missingDeltaBase=delta base
- missingForwardImageInGITBinaryPatch=Missing forward-image in GIT binary patch
- missingObject=Missing {0} {1}
- missingPrerequisiteCommits=missing prerequisite commits:
- missingRequiredParameter=Parameter "{0}" is missing
- missingSecretkey=Missing secretkey.
- mixedStagesNotAllowed=Mixed stages not allowed
- mkDirFailed=Creating directory {0} failed
- mkDirsFailed=Creating directories for {0} failed
- month=month
- months=months
- monthsAgo={0} months ago
- multipleMergeBasesFor=Multiple merge bases for:\n {0}\n {1} found:\n {2}\n {3}
- nameMustNotBeNullOrEmpty=Ref name must not be null or empty.
- need2Arguments=Need 2 arguments
- newIdMustNotBeNull=New ID must not be null
- newlineInQuotesNotAllowed=Newline in quotes not allowed
- noApplyInDelete=No apply in delete
- noClosingBracket=No closing {0} found for {1} at index {2}.
- noCommitsSelectedForShallow=No commits selected for shallow request
- noCredentialsProvider=Authentication is required but no CredentialsProvider has been registered
- noHEADExistsAndNoExplicitStartingRevisionWasSpecified=No HEAD exists and no explicit starting revision was specified
- noHMACsupport=No {0} support: {1}
- noMergeBase=No merge base could be determined. Reason={0}. {1}
- noMergeHeadSpecified=No merge head specified
- nonBareLinkFilesNotSupported=Link files are not supported with nonbare repos
- nonCommitToHeads=Cannot point a branch to a non-commit object
- noPathAttributesFound=No Attributes found for {0}.
- noSuchRef=no such ref
- noSuchRefKnown=no such ref: {0}
- noSuchSubmodule=no such submodule {0}
- notABoolean=Not a boolean: {0}
- notABundle=not a bundle
- notADIRCFile=Not a DIRC file.
- notAGitDirectory=not a git directory
- notAPACKFile=Not a PACK file.
- notARef=Not a ref: {0}: {1}
- notASCIIString=Not ASCII string: {0}
- notAuthorized=not authorized
- notAValidPack=Not a valid pack {0}
- notFound=not found.
- nothingToFetch=Nothing to fetch.
- nothingToPush=Nothing to push.
- notMergedExceptionMessage=Branch was not deleted as it has not been merged yet; use the force option to delete it anyway
- noXMLParserAvailable=No XML parser available.
- objectAtHasBadZlibStream=Object at {0} in {1} has bad zlib stream
- objectIsCorrupt=Object {0} is corrupt: {1}
- objectIsCorrupt3={0}: object {1}: {2}
- objectIsNotA=Object {0} is not a {1}.
- objectNotFound=Object {0} not found.
- objectNotFoundIn=Object {0} not found in {1}.
- obtainingCommitsForCherryPick=Obtaining commits that need to be cherry-picked
- oldIdMustNotBeNull=Expected old ID must not be null
- onlyOneFetchSupported=Only one fetch supported
- onlyOneOperationCallPerConnectionIsSupported=Only one operation call per connection is supported.
- onlyOpenPgpSupportedForSigning=OpenPGP is the only supported signing option with JGit at this time (gpg.format must be set to openpgp).
- openFilesMustBeAtLeast1=Open files must be >= 1
- openingConnection=Opening connection
- operationCanceled=Operation {0} was canceled
- outputHasAlreadyBeenStarted=Output has already been started.
- overflowedReftableBlock=Overflowed reftable block
- packChecksumMismatch=Pack checksum mismatch detected for pack file {0}: .pack has {1} whilst .idx has {2}
- packCorruptedWhileWritingToFilesystem=Pack corrupted while writing to filesystem
- packedRefsHandleIsStale=packed-refs handle is stale, {0}. retry
- packetSizeMustBeAtLeast=packet size {0} must be >= {1}
- packetSizeMustBeAtMost=packet size {0} must be <= {1}
- packedRefsCorruptionDetected=packed-refs corruption detected: {0}
- packfileCorruptionDetected=Packfile corruption detected: {0}
- packFileInvalid=Pack file invalid: {0}
- packfileIsTruncated=Packfile {0} is truncated.
- packfileIsTruncatedNoParam=Packfile is truncated.
- packHandleIsStale=Pack file {0} handle is stale, removing it from pack list
- packHasUnresolvedDeltas=pack has unresolved deltas
- packInaccessible=Failed to access pack file {0}, caught {1} consecutive errors while trying to access this pack.
- packingCancelledDuringObjectsWriting=Packing cancelled during objects writing
- packObjectCountMismatch=Pack object count mismatch: pack {0} index {1}: {2}
- packRefs=Pack refs
- packSizeNotSetYet=Pack size not yet set since it has not yet been received
- packTooLargeForIndexVersion1=Pack too large for index version 1
- packWasDeleted=Pack file {0} was deleted, removing it from pack list
- packWriterStatistics=Total {0,number,#0} (delta {1,number,#0}), reused {2,number,#0} (delta {3,number,#0})
- panicCantRenameIndexFile=Panic: index file {0} must be renamed to replace {1}; until then repository is corrupt
- patchApplyException=Cannot apply: {0}
- patchFormatException=Format error: {0}
- pathNotConfigured=Submodule path is not configured
- peeledLineBeforeRef=Peeled line before ref.
- peeledRefIsRequired=Peeled ref is required.
- peerDidNotSupplyACompleteObjectGraph=peer did not supply a complete object graph
- personIdentEmailNonNull=E-mail address of PersonIdent must not be null.
- personIdentNameNonNull=Name of PersonIdent must not be null.
- postCommitHookFailed=Execution of post-commit hook failed: {0}.
- prefixRemote=remote:
- problemWithResolvingPushRefSpecsLocally=Problem with resolving push ref specs locally: {0}
- progressMonUploading=Uploading {0}
- propertyIsAlreadyNonNull=Property is already non null
- pruneLoosePackedObjects=Prune loose objects also found in pack files
- pruneLooseUnreferencedObjects=Prune loose, unreferenced objects
- pullTaskName=Pull
- pushCancelled=push cancelled
- pushCertificateInvalidField=Push certificate has missing or invalid value for {0}
- pushCertificateInvalidFieldValue=Push certificate has missing or invalid value for {0}: {1}
- pushCertificateInvalidHeader=Push certificate has invalid header format
- pushCertificateInvalidSignature=Push certificate has invalid signature format
- pushIsNotSupportedForBundleTransport=Push is not supported for bundle transport
- pushNotPermitted=push not permitted
- pushOptionsNotSupported=Push options not supported; received {0}
- rawLogMessageDoesNotParseAsLogEntry=Raw log message does not parse as log entry
- readConfigFailed=Reading config file ''{0}'' failed
- readFileStoreAttributesFailed=Reading FileStore attributes from user config failed
- readerIsRequired=Reader is required
- readingObjectsFromLocalRepositoryFailed=reading objects from local repository failed: {0}
- readLastModifiedFailed=Reading lastModified of {0} failed
- readPipeIsNotAllowed=FS.readPipe() isn't allowed for command ''{0}''. Working directory: ''{1}''.
- readPipeIsNotAllowedRequiredPermission=FS.readPipe() isn't allowed for command ''{0}''. Working directory: ''{1}''. Required permission: {2}.
- readTimedOut=Read timed out after {0} ms
- receivePackObjectTooLarge1=Object too large, rejecting the pack. Max object size limit is {0} bytes.
- receivePackObjectTooLarge2=Object too large ({0} bytes), rejecting the pack. Max object size limit is {1} bytes.
- receivePackInvalidLimit=Illegal limit parameter value {0}
- receivePackTooLarge=Pack exceeds the limit of {0} bytes, rejecting the pack
- receivingObjects=Receiving objects
- redirectBlocked=Redirection blocked: redirect {0} -> {1} not allowed
- redirectHttp=URI ''{0}'': following HTTP redirect #{1} {2} -> {3}
- redirectLimitExceeded=Redirected more than {0} times; aborted at {1} -> {2}
- redirectLocationMissing=Invalid redirect: no redirect location for {0}
- redirectsOff=Cannot redirect because http.followRedirects is false (HTTP status {0})
- refAlreadyExists=already exists
- refAlreadyExists1=Ref {0} already exists
- reflogEntryNotFound=Entry {0} not found in reflog for ''{1}''
- refNotResolved=Ref {0} cannot be resolved
- reftableDirExists=reftable dir exists and is nonempty
- reftableRecordsMustIncrease=records must be increasing: last {0}, this {1}
- refUpdateReturnCodeWas=RefUpdate return code was: {0}
- remoteBranchNotFound=Remote branch ''{0}'' not found in upstream origin
- remoteConfigHasNoURIAssociated=Remote config "{0}" has no URIs associated
- remoteDoesNotHaveSpec=Remote does not have {0} available for fetch.
- remoteDoesNotSupportSmartHTTPPush=remote does not support smart HTTP push
- remoteHungUpUnexpectedly=remote hung up unexpectedly
- remoteNameCannotBeNull=Remote name cannot be null.
- renameBranchFailedAmbiguous=Cannot rename branch {0}; name is ambiguous: {1} or {2}
- renameBranchFailedNotABranch=Cannot rename {0}: this is not a branch
- renameBranchFailedUnknownReason=Rename failed with unknown reason
- renameBranchUnexpectedResult=Unexpected rename result {0}
- renameCancelled=Rename detection was cancelled
- renameFileFailed=Could not rename file {0} to {1}
- renamesAlreadyFound=Renames have already been found.
- renamesBreakingModifies=Breaking apart modified file pairs
- renamesFindingByContent=Finding renames by content similarity
- renamesFindingExact=Finding exact renames
- renamesRejoiningModifies=Rejoining modified file pairs
- repositoryAlreadyExists=Repository already exists: {0}
- repositoryConfigFileInvalid=Repository config file {0} invalid {1}
- repositoryIsRequired=repository is required
- repositoryNotFound=repository not found: {0}
- repositoryState_applyMailbox=Apply mailbox
- repositoryState_bare=Bare
- repositoryState_bisecting=Bisecting
- repositoryState_conflicts=Conflicts
- repositoryState_merged=Merged
- repositoryState_normal=Normal
- repositoryState_rebase=Rebase
- repositoryState_rebaseInteractive=Interactive rebase
- repositoryState_rebaseOrApplyMailbox=Rebase/Apply mailbox
- repositoryState_rebaseWithMerge=Rebase w/merge
- requiredHashFunctionNotAvailable=Required hash function {0} not available.
- resettingHead=Resetting head to {0}
- resolvingDeltas=Resolving deltas
- resultLengthIncorrect=result length incorrect
- rewinding=Rewinding to commit {0}
- s3ActionDeletion=Deletion
- s3ActionReading=Reading
- s3ActionWriting=Writing
- searchForReachableBranches=Finding reachable branches
- saveFileStoreAttributesFailed=Saving measured FileStore attributes to user config failed
- searchForReuse=Finding sources
- searchForReuseTimeout=Search for reuse timed out after {0} seconds
- searchForSizes=Getting sizes
- secondsAgo={0} seconds ago
- selectingCommits=Selecting commits
- sequenceTooLargeForDiffAlgorithm=Sequence too large for difference algorithm.
- serviceNotEnabledNoName=Service not enabled
- serviceNotPermitted={1} not permitted on ''{0}''
- sha1CollisionDetected=SHA-1 collision detected on {0}
- shallowCommitsAlreadyInitialized=Shallow commits have already been initialized
- shallowPacksRequireDepthWalk=Shallow packs require a DepthWalk
- shortCompressedStreamAt=Short compressed stream at {0}
- shortReadOfBlock=Short read of block.
- shortReadOfOptionalDIRCExtensionExpectedAnotherBytes=Short read of optional DIRC extension {0}; expected another {1} bytes within the section.
- shortSkipOfBlock=Short skip of block.
- signatureVerificationError=Signature verification failed
- signatureVerificationUnavailable=No signature verifier registered
- signedTagMessageNoLf=A non-empty message of a signed tag must end in LF.
- signingServiceUnavailable=Signing service is not available
- similarityScoreMustBeWithinBounds=Similarity score must be between 0 and 100.
- skipMustBeNonNegative=skip must be >= 0
- skipNotAccessiblePath=The path ''{0}'' is not accessible; skipping it.
- smartHTTPPushDisabled=smart HTTP push disabled
- sourceDestinationMustMatch=Source/Destination must match.
- sourceIsNotAWildcard=Source is not a wildcard.
- sourceRefDoesntResolveToAnyObject=Source ref {0} doesn''t resolve to any object.
- sourceRefNotSpecifiedForRefspec=Source ref not specified for refspec: {0}
- squashCommitNotUpdatingHEAD=Squash commit -- not updating HEAD
- sshCommandFailed=Execution of ssh command ''{0}'' failed with error ''{1}''
- sshCommandTimeout=Execution of ssh command ''{0}'' timed out after {1} seconds
- sslFailureExceptionMessage=Secure connection to {0} could not be established because of SSL problems
- sslFailureInfo=A secure connection to {0} could not be established because the server''s certificate could not be validated.
- sslFailureCause=SSL reported: {0}
- sslFailureTrustExplanation=Do you want to skip SSL verification for this server?
- sslTrustAlways=Always skip SSL verification for this server from now on
- sslTrustForRepo=Skip SSL verification for git operations for repository {0}
- sslTrustNow=Skip SSL verification for this single git operation
- sslVerifyCannotSave=Could not save setting for http.sslVerify
- staleRevFlagsOn=Stale RevFlags on {0}
- startingReadStageWithoutWrittenRequestDataPendingIsNotSupported=Starting read stage without written request data pending is not supported
- stashApplyConflict=Applying stashed changes resulted in a conflict
- stashApplyFailed=Applying stashed changes did not successfully complete
- stashApplyOnUnsafeRepository=Cannot apply stashed commit on a repository with state: {0}
- stashApplyWithoutHead=Cannot apply stashed commit in an empty repository or onto an unborn branch
- stashCommitIncorrectNumberOfParents=Stashed commit ''{0}'' has {1} parent commits instead of 2 or 3.
- stashDropDeleteRefFailed=Deleting stash reference failed with result: {0}
- stashDropFailed=Dropping stashed commit failed
- stashDropMissingReflog=Stash reflog does not contain entry ''{0}''
- stashDropNotSupported=Dropping stash not supported on this ref backend
- stashFailed=Stashing local changes did not successfully complete
- stashResolveFailed=Reference ''{0}'' does not resolve to stashed commit
- statelessRPCRequiresOptionToBeEnabled=stateless RPC requires {0} to be enabled
- storePushCertMultipleRefs=Store push certificate for {0} refs
- storePushCertOneRef=Store push certificate for {0}
- storePushCertReflog=Store push certificate
- submoduleExists=Submodule ''{0}'' already exists in the index
- submoduleNameInvalid=Invalid submodule name ''{0}''
- submoduleParentRemoteUrlInvalid=Cannot remove segment from remote url ''{0}''
- submodulePathInvalid=Invalid submodule path ''{0}''
- submoduleUrlInvalid=Invalid submodule URL ''{0}''
- supportOnlyPackIndexVersion2=Only pack index version 2 is supported
- systemConfigFileInvalid=System wide config file {0} is invalid {1}
- tagAlreadyExists=tag ''{0}'' already exists
- tagNameInvalid=tag name {0} is invalid
- tagOnRepoWithoutHEADCurrentlyNotSupported=Tag on repository without HEAD currently not supported
- theFactoryMustNotBeNull=The factory must not be null
- threadInterruptedWhileRunning=Current thread interrupted while running {0}
- timeIsUncertain=Time is uncertain
- timerAlreadyTerminated=Timer already terminated
- timeoutMeasureFsTimestampResolution=measuring filesystem timestamp resolution for ''{0}'' timed out, falling back to a resolution of 2 seconds
- tooManyCommands=Commands size exceeds limit defined in receive.maxCommandBytes
- tooManyFilters=Too many "filter" lines in request
- tooManyIncludeRecursions=Too many recursions; circular includes in config file(s)?
- topologicalSortRequired=Topological sort required.
- transactionAborted=transaction aborted
- transportExceptionBadRef=Empty ref: {0}: {1}
- transportExceptionEmptyRef=Empty ref: {0}
- transportExceptionInvalid=Invalid {0} {1}:{2}
- transportExceptionMissingAssumed=Missing assumed {0}
- transportExceptionReadRef=read {0}
- transportNeedsRepository=Transport needs repository
- transportProvidedRefWithNoObjectId=Transport provided ref {0} with no object id
- transportProtoBundleFile=Git Bundle File
- transportProtoFTP=FTP
- transportProtoGitAnon=Anonymous Git
- transportProtoHTTP=HTTP
- transportProtoLocal=Local Git Repository
- transportProtoSFTP=SFTP
- transportProtoSSH=SSH
- transportProtoTest=Test
- treeEntryAlreadyExists=Tree entry "{0}" already exists.
- treeFilterMarkerTooManyFilters=Too many markTreeFilters passed, maximum number is {0} (passed {1})
- treeWalkMustHaveExactlyTwoTrees=TreeWalk should have exactly two trees.
- truncatedHunkLinesMissingForAncestor=Truncated hunk, at least {0} lines missing for ancestor {1}
- truncatedHunkNewLinesMissing=Truncated hunk, at least {0} new lines missing
- truncatedHunkOldLinesMissing=Truncated hunk, at least {0} old lines missing
- tSizeMustBeGreaterOrEqual1=tSize must be >= 1
- unableToCheckConnectivity=Unable to check connectivity.
- unableToCreateNewObject=Unable to create new object: {0}
- unableToReadPackfile=Unable to read packfile {0}
- unableToRemovePath=Unable to remove path ''{0}''
- unableToWrite=Unable to write {0}
- unableToSignCommitNoSecretKey=Unable to sign commit. Signing key not available.
- unauthorized=Unauthorized
- unencodeableFile=Unencodable file: {0}
- unexpectedCompareResult=Unexpected metadata comparison result: {0}
- unexpectedEndOfConfigFile=Unexpected end of config file
- unexpectedEndOfInput=Unexpected end of input
- unexpectedEofInPack=Unexpected EOF in partially created pack
- unexpectedHunkTrailer=Unexpected hunk trailer
- unexpectedOddResult=odd: {0} + {1} - {2}
- unexpectedPacketLine=unexpected {0}
- unexpectedRefReport={0}: unexpected ref report: {1}
- unexpectedReportLine=unexpected report line: {0}
- unexpectedReportLine2={0} unexpected report line: {1}
- unexpectedSubmoduleStatus=Unexpected submodule status: ''{0}''
- unknownOrUnsupportedCommand=Unknown or unsupported command "{0}", only "{1}" is allowed.
- unknownDIRCVersion=Unknown DIRC version {0}
- unknownHost=unknown host
- unknownObject=unknown object
- unknownObjectInIndex=unknown object {0} found in index but not in pack file
- unknownObjectType=Unknown object type {0}.
- unknownObjectType2=unknown
- unknownRefStorageFormat=Unknown ref storage format "{0}"
- unknownRepositoryFormat=Unknown repository format
- unknownRepositoryFormat2=Unknown repository format "{0}"; expected "0".
- unknownTransportCommand=unknown command {0}
- unknownZlibError=Unknown zlib error.
- unlockLockFileFailed=Unlocking LockFile ''{0}'' failed
- unmergedPath=Unmerged path: {0}
- unmergedPaths=Repository contains unmerged paths
- unpackException=Exception while parsing pack stream
- unreadablePackIndex=Unreadable pack index: {0}
- unrecognizedPackExtension=Unrecognized pack extension: {0}
- unrecognizedRef=Unrecognized ref: {0}
- unsetMark=Mark not set
- unsupportedAlternates=Alternates not supported
- unsupportedArchiveFormat=Unknown archive format ''{0}''
- unsupportedCommand0=unsupported command 0
- unsupportedEncryptionAlgorithm=Unsupported encryption algorithm: {0}
- unsupportedEncryptionVersion=Unsupported encryption version: {0}
- unsupportedGC=Unsupported garbage collector for repository type: {0}
- unsupportedMark=Mark not supported
- unsupportedOperationNotAddAtEnd=Not add-at-end: {0}
- unsupportedPackIndexVersion=Unsupported pack index version {0}
- unsupportedPackVersion=Unsupported pack version {0}.
- unsupportedReftableVersion=Unsupported reftable version {0}.
- unsupportedRepositoryDescription=Repository description not supported
- updateRequiresOldIdAndNewId=Update requires both old ID and new ID to be nonzero
- updatingHeadFailed=Updating HEAD failed
- updatingReferences=Updating references
- updatingRefFailed=Updating the ref {0} to {1} failed. ReturnCode from RefUpdate.update() was {2}
- upstreamBranchName=branch ''{0}'' of {1}
- uriNotConfigured=Submodule URI not configured
- uriNotFound={0} not found
- uriNotFoundWithMessage={0} not found: {1}
- URINotSupported=URI not supported: {0}
- userConfigInvalid=Git config in the user''s home directory {0} is invalid {1}
- validatingGitModules=Validating .gitmodules files
- verifySignatureBad=BAD signature from "{0}"
- verifySignatureExpired=Expired signature from "{0}"
- verifySignatureGood=Good signature from "{0}"
- verifySignatureIssuer=issuer "{0}"
- verifySignatureKey=using key {0}
- verifySignatureMade=Signature made {0}
- verifySignatureTrust=[{0}]
- walkFailure=Walk failure.
- wantNoSpaceWithCapabilities=No space between oid and first capability in first want line
- wantNotValid=want {0} not valid
- weeksAgo={0} weeks ago
- windowSizeMustBeLesserThanLimit=Window size must be < limit
- windowSizeMustBePowerOf2=Window size must be power of 2
- writerAlreadyInitialized=Writer already initialized
- writeTimedOut=Write timed out after {0} ms
- writingNotPermitted=Writing not permitted
- writingNotSupported=Writing {0} not supported.
- writingObjects=Writing objects
- wrongDecompressedLength=wrong decompressed length
- wrongRepositoryState=Wrong Repository State: {0}
- year=year
- years=years
- years0MonthsAgo={0} {1} ago
- yearsAgo={0} years ago
- yearsMonthsAgo={0} {1}, {2} {3} ago