
JGitText.java 37KB

GPG signature verification via BouncyCastle

Add a GpgSignatureVerifier interface, plus a factory to create instances thereof that is provided via the ServiceLoader mechanism. Implement the new interface for BouncyCastle. A verifier maintains an internal LRU cache of previously found public keys to speed up verifying multiple objects (tags or commits). Mergetags are not handled.

Provide a new VerifySignatureCommand in org.eclipse.jgit.api together with a factory method Git.verifySignature(). The command can verify signatures on tags or commits, and can be limited to accept only tags or commits.

Provide a new public WrongObjectTypeException thrown when the command is limited to either tags or commits and a name resolves to some other object kind.

In jgit.pgm, implement "git tag -v", "git log --show-signature", and "git show --show-signature". The output is similar to command-line gpg invoked via git, but not identical. In particular, lines are not prefixed by "gpg:" but by "bc:". Trust levels for public keys are read from the keys' trust packets, not from GPG's internal trust database. A trust packet may or may not be set. Command-line GPG produces more warning lines depending on the trust level, warning about keys with a trust level below "full".

There are no unit tests because JGit still doesn't have any setup for signing unit tests; this would require at least a faked .gpg directory with pre-created key rings and keys, and a way to make the BouncyCastle classes use that directory instead of the default. See bug 547538 and also bug 544847.

Tested manually with a small test repository containing signed and unsigned commits and tags, with signatures made with different keys and made by command-line git using GPG 2.2.25 and by JGit using BouncyCastle 1.65.

Bug: 547751
Change-Id: If7e34aeed6ca6636a92bf774d893d98f6d459181
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
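The internal LRU cache of previously found public keys can be sketched with a stock access-ordered LinkedHashMap. This is an illustrative sketch, not JGit's actual API; KeyCache and its type parameters are hypothetical names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal LRU cache sketch; KeyCache is a hypothetical name, not JGit API. */
public class KeyCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public KeyCache(int maxEntries) {
        super(16, 0.75f, true); // access order: iteration goes least-recently-used first
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used entry beyond capacity
    }

    public static void main(String[] args) {
        KeyCache<String, String> cache = new KeyCache<>(2);
        cache.put("keyA", "pubA");
        cache.put("keyB", "pubB");
        cache.get("keyA");         // touch A, making B the eldest
        cache.put("keyC", "pubC"); // evicts B
        System.out.println(cache.keySet()); // [keyA, keyC]
    }
}
```

Verifying several tags signed by the same key then hits the cache instead of searching the key ring again.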
3 years ago
Support creating pack bitmap indexes in PackWriter.

Update the PackWriter to support writing out pack bitmap indexes, a parallel ".bitmap" file to the ".pack" file. Bitmaps are selected at commits every 1 to 5,000 commits for each unique path from the start. The most recent 100 commits are all bitmapped. The next 19,000 commits have a bitmap every 100 commits. The remaining commits have a bitmap every 5,000 commits. Commits with more than 1 parent are preferred over ones with 1 or fewer. Furthermore, previously computed bitmaps are reused if the previous entry had the reuse flag set, which is set when the bitmap was placed at the max allowed distance.

Bitmaps are used to speed up the counting phase when packing, for requests that are not shallow. The PackWriterBitmapWalker uses a RevFilter to proactively mark commits with RevFlag.SEEN when they appear in a bitmap. The walker produces the full closure of reachable ObjectIds, given the collection of starting ObjectIds.

For fetch requests, two ObjectWalks are executed to compute the ObjectIds reachable from the haves and from the wants. The ObjectIds that need to be written are determined by taking all the resulting wants AND NOT the haves. For clone requests, we get cached pack support for "free", since it is possible to determine if all of the ObjectIds in a pack file are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux kernel repository (with about 2.6M objects and 300K commits) are tabulated below:

  Operation                     Index V2               Index VE003
  Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
  Fetch (1 commit back)           75ms                  107ms
  Fetch (10 commits back)        456ms (269.51 KiB)     341ms (265.19 KiB)
  Fetch (100 commits back)       449ms (269.91 KiB)     337ms (267.28 KiB)
  Fetch (1000 commits back)     2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
  Fetch (10000 commits back)    2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
  Fetch (100000 commits back)  14340ms (185.83 MiB)    1655ms (189.39 MiB)

Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
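The bitmap selection spacing described above (every commit for the newest 100, every 100 commits for the next 19,000, every 5,000 beyond that) can be expressed as a small helper. The function name and exact thresholds are an illustrative reading of the message, not JGit's code:

```java
/** Sketch of the bitmap selection spacing described above; names are illustrative. */
public class BitmapSpacing {
    static int spacingFor(int commitsFromTip) {
        if (commitsFromTip < 100)
            return 1;       // most recent 100 commits: a bitmap at every commit
        if (commitsFromTip < 100 + 19_000)
            return 100;     // next 19,000 commits: a bitmap every 100 commits
        return 5_000;       // remaining history: a bitmap every 5,000 commits
    }

    public static void main(String[] args) {
        System.out.println(spacingFor(50));     // 1
        System.out.println(spacingFor(5_000));  // 100
        System.out.println(spacingFor(50_000)); // 5000
    }
}
```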
11 years ago
blame: Compute the origin of lines in a result file

BlameGenerator digs through history and discovers the origin of each line of some result file. BlameResult consumes the stream of regions created by the generator and lays them out in a table for applications to display alongside of source lines.

Applications may optionally push in the working tree copy of a file using the push(String, byte[]) method, allowing the application to receive accurate line annotations for the working tree version. Lines that are uncommitted (difference between HEAD and working tree) will show up with the description given by the application as the author, or "Not Committed Yet" as a default string.

Applications may also run the BlameGenerator in reverse mode using the reverse(AnyObjectId, AnyObjectId) method instead of push(). When running in reverse mode the generator annotates lines by the commit they are removed in, rather than the commit they were added in. This allows a user to discover where a line disappeared from when they are looking at an older revision in the repository. For example:

  blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
           (                                                 1080) }
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082) /**
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083)  * Kick the timestamp of a local file.

Above we learn that line 1080 (a closing curly brace of the prior method) still exists in branch master, but the Javadoc comment below it has been removed by Christian Halstrick on May 20th as part of commit 2302a6d3.

This result differs considerably from that of C Git's blame --reverse feature. JGit tells the reader which commit performed the delete, while C Git tells the reader the last commit that still contained the line, leaving it an exercise to the reader to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably, it is missing support for the smart block copy/move detection that the C implementation of `git blame` is well known for.

Despite being incremental, the BlameGenerator can only be run once. After the generator runs, it cannot be reused. A better implementation would support applications browsing through history efficiently.

Regarding CQ 5110, only a little of the original code survives.

CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
13 years ago
PackWriter: Support reuse of entire packs

The most expensive part of packing a repository for transport to another system is enumerating all of the objects in the repository. Once this gets to the size of the linux-2.6 repository (1.8 million objects), enumeration can take several CPU minutes and costs a lot of temporary working set memory.

Teach PackWriter to efficiently reuse an existing "cached pack" by answering a clone request with a thin pack followed by a larger cached pack appended to the end. This requires the repository owner to first construct the cached pack by hand, and record the tip commits inside of $GIT_DIR/objects/info/cached-packs:

  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
    chmod a-w $tmp-$n.pack $tmp-$n.idx
    touch objects/pack/pack-$n.keep
    mv $tmp-$n.pack objects/pack/pack-$n.pack
    mv $tmp-$n.idx objects/pack/pack-$n.idx
  done
  (echo "+ $root"; for n in $names; do echo "P $n"; done; echo) >>objects/info/cached-packs
  git repack -a -d

When a clone request needs to include $root, the corresponding cached pack will be copied as-is, rather than enumerating all of the objects that are reachable from $root.

For a linux-2.6 kernel repository that should be about 376 MiB, the above process creates two packs of 368 MiB and 38 MiB [1]. This is a local disk usage increase of ~26 MiB, due to reduced delta compression between the large cached pack and the smaller recent activity pack. The overhead is similar to 1 full copy of the compressed project sources.

With this cached pack in hand, JGit daemon completes a clone request in 1m17s less time, but with a slightly larger data transfer (+2.39 MiB):

Before:
  remote: Counting objects: 1861830, done
  remote: Finding sources: 100% (1861830/1861830)
  remote: Getting sizes: 100% (88243/88243)
  remote: Compressing objects: 100% (88184/88184)
  Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
  remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
  Resolving deltas: 100% (1564621/1564621), done.
  real 3m19.005s

After:
  remote: Counting objects: 1601, done
  remote: Counting objects: 1828460, done
  remote: Finding sources: 100% (50475/50475)
  remote: Getting sizes: 100% (18843/18843)
  remote: Compressing objects: 100% (7585/7585)
  remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
  Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
  Resolving deltas: 100% (1559477/1559477), done.
  real 2m2.938s

Repository owners can periodically refresh their cached packs by repacking their repository, folding all newer objects into a larger cached pack. Since repacking is already considered to be a normal Git maintenance activity, this isn't a very big burden.

[1] In this test $root was set back about two weeks.

Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
13 years ago
Persist filesystem timestamp resolution and allow manual configuration

To enable persisting filesystem timestamp resolution per FileStore, add a new config section to the user global git configuration:

- Config section is "filesystem".
- Config subsection is the concatenation of
  - the Java vendor (system property "java.vm.vendor"),
  - the runtime version (system property "java.vm.version"),
  - the FileStore's name,
  separated by '|', e.g. "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1".
  The prefix is needed since some Java versions do not expose the full timestamp resolution of the underlying filesystem. This may also depend on the underlying operating system, hence concrete key values may not be portable.
- Config key for timestamp resolution is "timestampResolution" as a time value; supported time units are those supported by DefaultTypedConfigGetter#getTimeUnit.

If timestamp resolution is already configured for a given FileStore, the configured value is used instead of measuring the resolution. When timestamp resolution was measured, it is persisted in the user global git configuration. Example:

  [filesystem "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1"]
    timestampResolution = 1 seconds

If locking the git config file fails, retry saving the resolution up to 5 times in order to work around races with another thread.

In order to avoid stack overflow, use the fallback filesystem timestamp resolution when loading FileBasedConfig, which itself creates a FileSnapshot to help check if the config changed.

Note:
- On some OSes, Java 8 and 9 truncate to milliseconds or seconds, see https://bugs.openjdk.java.net/browse/JDK-8177809, fixed in Java 10.
- UnixFileAttributes up to Java 12 truncates timestamp resolution to microseconds when converting the internal representation to the FileTime exposed in the API, see https://bugs.openjdk.java.net/browse/JDK-8181493.
- WindowsFileAttributes also provides only microsecond resolution up to Java 12.

Hence do not attempt to manually configure a higher timestamp resolution than is supported by the Java version being used at runtime.

Bug: 546891
Bug: 548188
Change-Id: Iff91b8f9e6e5e2295e1463f87c8e95edf4abbcf8
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
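One way to observe the truncation described in the note is to round-trip a nanosecond timestamp through the filesystem. This probe is a simplified sketch under assumptions of my own (it is not JGit's actual measurement code); how much sub-second precision survives depends on the OS, filesystem, and JDK:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Instant;

/** Probe how much of a nanosecond timestamp the filesystem/JDK preserves. */
public class ResolutionProbe {
    static FileTime roundTrip(Path file, Instant stamp) throws IOException {
        Files.setLastModifiedTime(file, FileTime.from(stamp));
        return Files.getLastModifiedTime(file); // may come back truncated
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("probe", ".tmp");
        Instant fine = Instant.ofEpochSecond(1_500_000_000L, 123_456_789L);
        FileTime seen = roundTrip(f, fine);
        // If the sub-second part was lost, the effective resolution is
        // coarser than what we wrote.
        System.out.println("wrote " + fine + ", read back " + seen.toInstant());
        Files.delete(f);
    }
}
```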
4 years ago
Fix atomic lock file creation on NFS

FS_POSIX.createNewFile(File) failed to properly implement atomic file creation on NFS using the algorithm [1]:

- The name of the hard link must be unique, to prevent two processes using different NFS clients from trying to create the same link. This would render nlink useless for detecting whether there was a race.
- The hard link must be retained for the lifetime of the file, since we don't know when the state of the involved NFS clients will be synchronized. This depends on NFS configuration options.

To fix these issues we need to change the signature of createNewFile, which would break the API. Hence deprecate the old method FS.createNewFile(File) and add a new method createNewFileAtomic(File). The new method returns a LockToken which needs to be retained by the caller (LockFile) until all involved NFS clients have synchronized their state. Since we don't know when the NFS caches are synchronized, we need to retain the token until the corresponding file is no longer needed.

The LockToken must be closed after the LockFile using it has been committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile = false, this will delete the hard link which guarded the atomic creation of the file. When acquiring the lock fails, ensure that the hard link is removed.

[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
    See also the file creation flag O_EXCL in http://man7.org/linux/man-pages/man2/open.2.html

Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
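The hard-link algorithm from [1] can be sketched with java.nio (Unix-only). The method name mirrors the new createNewFileAtomic, but this is an illustrative sketch, not JGit's implementation; the token-naming scheme is my own assumption:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Sketch of hard-link based atomic lock creation (Unix-only; illustrative). */
public class AtomicCreate {
    /** Create 'lock' atomically: create a uniquely named side file, hard-link
     *  the lock name to it, then verify via nlink that the link really took.
     *  The side file plays the role of the LockToken and must be kept around. */
    static boolean createNewFileAtomic(Path lock) throws IOException {
        Path token = lock.resolveSibling(lock.getFileName() + "."
                + ProcessHandle.current().pid() + "." + System.nanoTime());
        Files.createFile(token);
        try {
            Files.createLink(lock, token);
        } catch (FileAlreadyExistsException e) {
            Files.delete(token);
            return false; // another process holds the lock
        }
        // nlink == 2 proves this client's view agrees the link was created
        int nlink = (Integer) Files.getAttribute(token, "unix:nlink");
        if (nlink != 2) {
            Files.deleteIfExists(lock);
            Files.delete(token);
            return false;
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("locks");
        Path lock = dir.resolve("config.lock");
        System.out.println(createNewFileAtomic(lock)); // true: lock acquired
        System.out.println(createNewFileAtomic(lock)); // false: already locked
    }
}
```

On a local filesystem the nlink check is redundant; on NFS it is what detects a race between two clients.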
5 years ago
Config: Rewrite subsection and value escaping and parsing

Previously, Config was using the same method for both escaping and parsing subsection names and config values. The goal was presumably code savings, but unfortunately, these two pieces of the git config format are simply different.

In git v2.15.1, Documentation/config.txt says the following about subsection names:

  "Subsection names are case sensitive and can contain any characters except newline (doublequote `"` and backslash can be included by escaping them as `\"` and `\\`, respectively). Section headers cannot span multiple lines. Variables may belong directly to a section or to a given subsection."

And, later in the same documentation section, about values:

  "A line that defines a value can be continued to the next line by ending it with a `\`; the backquote and the end-of-line are stripped. Leading whitespaces after 'name =', the remainder of the line after the first comment character '#' or ';', and trailing whitespaces of the line are discarded unless they are enclosed in double quotes. Internal whitespaces within the value are retained verbatim. Inside double quotes, double quote `"` and backslash `\` characters must be escaped: use `\"` for `"` and `\\` for `\`. The following escape sequences (beside `\"` and `\\`) are recognized: `\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB) and `\b` for backspace (BS). Other char escape sequences (including octal escape sequences) are invalid."

The main important differences are that subsection names have a limited set of supported escape sequences, and do not support newlines at all, either escaped or unescaped. Arguably, it would be easy to support escaped newlines, but C git simply does not:

  $ git config -f foo.config $'foo.bar\nbaz.quux' value
  error: invalid key (newline): foo.bar
  baz.quux

I468106ac was an attempt to fix one bug in escapeValue, around leading whitespace, without having to rewrite the whole escaping/parsing code. Unfortunately, because escapeValue was used for escaping subsection names as well, this made it possible to write invalid config files, any time Config#toText is called with a subsection name with trailing whitespace, like {foo }.

Rather than pile hacks on top of hacks, fix it for real by largely rewriting the escaping and parsing code. In addition to fixing escape sequences, fix (and write tests for) a few more issues in the old implementation:

* Now that we can properly parse it, always emit newlines as "\n" from escapeValue, rather than the weird (but still supported) syntax with a non-quoted trailing literal "\n\" before the newline. In addition to producing more readable output and matching the behavior of C git, this makes the escaping code much simpler.
* Disallow '\0' entirely within both subsection names and values, since due to Unix command line argument conventions it is impossible to pass such values to "git config".
* Properly preserve intra-value whitespace when parsing, rather than collapsing it all to a single space.

Change-Id: I304f626b9d0ad1592c4e4e449a11b136c0f8b3e3
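A minimal sketch of value escaping following the rules quoted above. This is not JGit's actual escapeValue; in particular the decision of when to add surrounding quotes is a simplified assumption of mine:

```java
/** Sketch of config value escaping per the quoted rules (not JGit's code). */
public class ConfigEscape {
    static String escapeValue(String v) {
        if (v.indexOf('\0') >= 0) // NUL is disallowed entirely
            throw new IllegalArgumentException("config values cannot contain NUL");
        StringBuilder sb = new StringBuilder();
        // Quote when leading/trailing whitespace or comment chars would be eaten.
        boolean quote = v.startsWith(" ") || v.endsWith(" ")
                || v.indexOf('#') >= 0 || v.indexOf(';') >= 0;
        for (char c : v.toCharArray()) {
            switch (c) {
            case '\\': sb.append("\\\\"); break;
            case '"':  sb.append("\\\""); break;
            case '\n': sb.append("\\n"); quote = true; break; // always emit \n
            case '\t': sb.append("\\t"); break;
            case '\b': sb.append("\\b"); break;
            default:   sb.append(c);
            }
        }
        return quote ? '"' + sb.toString() + '"' : sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeValue("a\nb"));  // "a\nb"
        System.out.println(escapeValue(" lead")); // " lead"
        System.out.println(escapeValue("plain")); // plain
    }
}
```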
6 years ago
Retry stale file handles on .git/config file

On a local non-NFS filesystem the .git/config file will be orphaned if it is replaced by a new process while the current process is reading the old file. The current process successfully continues to read the orphaned file until it closes the file handle.

Since NFS servers do not keep track of open files, instead of orphaning the old .git/config file, such a replacement on an NFS filesystem will instead cause the old file to be garbage collected (deleted). A stale file handle exception will be raised on NFS clients if the file is garbage collected (deleted) on the server while it is being read. Since we no longer have access to the old file in these cases, the previous code would just fail. However, in these cases, reopening the file and rereading it will succeed (since it will open the new replacement file).

Since retrying the read is a viable strategy to deal with stale file handles on the .git/config file, implement such a strategy. Since it is possible that the .git/config file could be replaced again while rereading it, loop on stale file handle exceptions, up to 5 extra times, trying to read the .git/config file again, until we either read the new file or find that the file no longer exists. The limit of 5 is arbitrary, and provides a safe upper bound to prevent infinite loops consuming resources in a potential unforeseen persistent error condition.

Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626
Signed-off-by: Nasser Grainawi <nasser@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
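The bounded retry strategy described above can be sketched as a generic loop. The class and method names are illustrative, and matching the exception by message text is a simplifying assumption (real code would inspect the cause more carefully):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

/** Sketch of a bounded retry loop for stale NFS file handles (illustrative). */
public class StaleRetry {
    static final int MAX_STALE_RETRIES = 5; // arbitrary safe upper bound, as in the change

    static <T> T readWithRetries(Callable<T> read) throws Exception {
        IOException last = null;
        // first attempt plus up to 5 extra retries
        for (int attempt = 0; attempt <= MAX_STALE_RETRIES; attempt++) {
            try {
                return read.call();
            } catch (IOException e) {
                if (!String.valueOf(e.getMessage()).contains("Stale file handle"))
                    throw e; // only stale-handle errors are retried
                last = e;    // file was replaced mid-read; reopen and try again
            }
        }
        throw last; // persistent error: give up instead of looping forever
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // simulate two stale handles, then success
        String result = readWithRetries(() -> {
            if (failures[0]-- > 0) throw new IOException("Stale file handle");
            return "config-content";
        });
        System.out.println(result); // config-content
    }
}
```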
6 years ago
Client-side protocol V2 support for fetching

Make all transports request protocol V2 when fetching. Depending on the transport, set the GIT_PROTOCOL environment variable (file and ssh), pass the Git-Protocol header (http), or set the hidden "\0version=2\0" (git anon). We'll fall back to V0 if the server doesn't reply with a version 2 answer.

A user can control which protocol the client requests via the git config protocol.version; if not set, JGit requests protocol V2 for fetching. Pushing still always uses protocol V0.

In the API, there is only a new Transport.openFetch() version that takes a collection of RefSpecs plus additional patterns to construct the Ref prefixes for the "ls-refs" command in protocol V2. If none are given, the server will still advertise all refs, even in protocol V2.

BasePackConnection.readAdvertisedRefs() handles falling back to protocol V0. It newly returns true if V0 was used and the advertised refs were read, and false if V2 is used and an explicit "ls-refs" is needed. (This can't be done transparently inside readAdvertisedRefs() because a "stateless RPC" transport like TransportHttp may need to open a new connection for writing.)

BasePackFetchConnection implements the changes needed for the protocol V2 "fetch" command (stateless protocol, simplified ACK handling, delimiters, section headers).

In TransportHttp, change readSmartHeaders() to also recognize the "version 2" packet line as a valid smart server indication.

Adapt tests, and run all the HTTP tests not only with both HTTP connection factories (JDK and Apache HttpClient) but also with both protocol V0 and V2. The SSH tests are much slower and much more focused on the SSH protocol and SSH key handling. Factor out two very simple cloning and pulling tests and make those run with protocol V2.

Bug: 553083
Change-Id: I357c7f5daa7efb2872f1c64ee6f6d54229031ae1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
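The per-transport ways of requesting V2 listed above can be summarized in one table-like helper. This is purely illustrative (the enum and method are not JGit API); each string is the mechanism the message names for that transport:

```java
/** Sketch of how protocol V2 is requested per transport type (illustrative). */
public class ProtocolV2Request {
    enum TransportKind { FILE, SSH, HTTP, GIT_ANON }

    static String v2Request(TransportKind kind) {
        switch (kind) {
        case FILE:
        case SSH:
            return "GIT_PROTOCOL=version=2";  // environment variable
        case HTTP:
            return "Git-Protocol: version=2"; // request header
        default:
            return "\0version=2\0";           // hidden extra parameter in git://
        }
    }

    public static void main(String[] args) {
        System.out.println(v2Request(TransportKind.HTTP)); // Git-Protocol: version=2
    }
}
```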
3 years ago
Added read/write support for pack bitmap index.

A pack bitmap index is an additional index of compressed bitmaps of the object graph. Furthermore, a logical API of the index functionality is included, as it is expected to be used by the PackWriter.

Compressed bitmaps are created using the javaewah library, which is a word-aligned compressed variant of the Java bitset class based on run-length encoding. The library only works with positive integer values. Thus, the maximum number of ObjectIds in a pack file that this index can currently support is limited to Integer.MAX_VALUE.

Every ObjectId is given an integer mapping. The integer is the position of the ObjectId in the complete ObjectId list, sorted by offset, for the pack file. That integer is what the bitmaps use to reference the ObjectId.

Currently, the new index format can only be used with pack files that contain a complete closure of the object graph, e.g. the result of a garbage collection.

The index file includes four bitmaps for the Git object types, i.e. commits, trees, blobs, and tags. In addition, a collection of bitmaps keyed by an ObjectId is also included. The bitmap for each entry in the collection represents the full closure of ObjectIds reachable from the keyed ObjectId (including the keyed ObjectId itself). The bitmaps are further compressed by XORing the current bitmaps against prior bitmaps in the index, and selecting the smallest representation. The XOR'd bitmap and the offset from the current entry to the position of the bitmap to XOR against is the actual representation of the entry in the index file. Each entry contains one byte, which is currently used to note whether the bitmap should be blindly reused.

Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
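The XOR trick described above works because the reachability bitmaps of nearby commits differ in only a few bits, so the XOR'd form compresses far better under run-length encoding. A stdlib sketch using plain BitSet (the real index uses javaewah's compressed bitmaps):

```java
import java.util.BitSet;

/** Sketch: store the XOR against a prior bitmap when that delta is smaller. */
public class XorBitmaps {
    /** Return the XOR'd form if it has fewer set bits (it compresses better
     *  under RLE), otherwise the plain bitmap itself. */
    static BitSet smallerRepresentation(BitSet current, BitSet prior) {
        BitSet xor = (BitSet) current.clone();
        xor.xor(prior);
        return xor.cardinality() < current.cardinality() ? xor : current;
    }

    public static void main(String[] args) {
        BitSet prior = new BitSet();
        prior.set(0, 1000);          // reachable set of an earlier commit
        BitSet current = (BitSet) prior.clone();
        current.set(1000, 1005);     // a few new objects on top
        BitSet stored = smallerRepresentation(current, prior);
        System.out.println(stored.cardinality()); // 5: only the delta is stored
    }
}
```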
11 years ago
Support http.<url>.* configs

Git has a rather elaborate mechanism to specify HTTP configuration options per URL, based on pattern matching the URL against "http" subsection names. [1] The URLs used for this matching are always the original URLs; redirected URLs do not participate.

* Scheme and host must match exactly, case-insensitively.
* An optional user name must match exactly.
* Ports must match exactly after default ports have been filled in.
* The path of a subsection, if any, must match a segment prefix of the path of the URL.
* Matches with user name take precedence over equal-length path matches without, but longer path matches are preferred over shorter matches with user name.

Implement this for JGit. Factor out the HttpConfig from TransportHttp and implement the matching and override mechanism. The set of supported settings is still the same; JGit currently supports only followRedirects, postBuffer, and sslVerify, plus the JGit-specific maxRedirects key.

Add tests for path normalization and prefix matching only on segment separators, and use the new mechanism in SmartClientSmartServerSslTest to disable sslVerify selectively for only the test server URLs.

Compare also bug 374703 and bug 465492. With this commit it would be possible to set sslVerify to false for only the git server using a self-signed certificate, instead of having to switch it off globally via http.sslVerify.

[1] https://git-scm.com/docs/git-config

Change-Id: I42a3c2399cb937cd7884116a2a32fcaa7a418fcb
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
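The "segment prefix" rule above means "/repo" matches "/repo/sub" but never "/repository". A minimal sketch (not JGit's HttpConfig code; it ignores normalization and the user-name precedence rules):

```java
/** Sketch of segment-prefix path matching for http.<url>.* (illustrative). */
public class UrlMatch {
    /** Returns the matched prefix length, or -1 if the subsection path is not
     *  a whole-segment prefix of the URL path. */
    static int segmentPrefixLength(String configPath, String urlPath) {
        if (urlPath.equals(configPath))
            return configPath.length();           // exact match
        if (urlPath.startsWith(configPath + "/"))
            return configPath.length();           // prefix ends on a segment boundary
        return -1;                                // "/repo" vs "/repository": no match
    }

    public static void main(String[] args) {
        System.out.println(segmentPrefixLength("/repo", "/repo/sub"));   // 5
        System.out.println(segmentPrefixLength("/repo", "/repository")); // -1
    }
}
```

Among several matching subsections, a longer matched prefix would win, per the precedence rules above.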
6 years ago
TransportHttp: shared SSLContext during fetch or push

TransportHttp makes several HTTP requests. The SSLContext and socket factory must be shared over these requests, otherwise authentication information may not be propagated correctly from one request to the next. This is important for authentication mechanisms that rely on client-side state, like NEGOTIATE (either NTLM, if the underlying HTTP library supports it, or Kerberos). In particular, SPNEGO cannot authenticate on a POST request; the authentication must come from the initial GET request, which implies that the POST request must use the same SSLContext and socket factory that was used for the GET.

Change the way HTTPS connections are configured. Introduce the concept of a GitSession, which is a client-side HTTP session over several HTTPS requests. TransportHttp creates such a session and uses it to configure all HTTP requests during that session (fetch or push). This gives a way to abstract away the differences between JDK and Apache HTTP connections and to configure SSL setup outside. A GitSession can maintain state and thus give all HTTP requests in a session the same socket factory.

Introduce an extension interface HttpConnectionFactory2 that adds a method to obtain a new GitSession. Implement this for both existing HTTP connection factories. Change TransportHttp to use the new GitSession to configure HTTP connections.

The old methods for disabling SSL verification still exist to support possibly external connection and connection factory implementations that do not make use of the new GitSession yet.

Bug: 535850
Change-Id: Iedf67464e4e353c1883447c13c86b5a838e678f1
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
3 years ago
Add support to follow HTTP redirects

git-core follows HTTP redirects, so JGit should also provide this. Implement config setting http.followRedirects with possible values "false" (= never), "true" (= always), and "initial" (only on GET, but not on POST). [1]

We must do our own redirect handling and cannot rely on the support that the underlying real connection may offer. At least the JDK's HttpURLConnection has two features that get in the way:

* it does not allow cross-protocol redirects and thus fails on http->https redirects (for instance, on GitHub).
* it translates a redirect after a POST to a GET unless the system property "http.strictPostRedirect" is set to true. We don't want to manipulate that system setting nor require it.

Additionally, git has its own rules about what redirects it accepts; [2] for instance, it does not allow a redirect that adds query arguments.

We handle response codes 301, 302, 303, and 307 as per RFC 2616. [3] On POST we do not handle 303, and we follow redirects only if http.followRedirects == true.

Redirects are followed only a certain number of times. There are two ways to control that limit:

* by default, the limit is given by the http.maxRedirects system property that is also used by the JDK. If the system property is not set, the default is 5. (This is much lower than the JDK default of 20, but I don't see the value of following so many redirects.)
* this can be overridden by a http.maxRedirects git config setting.

The JGit http.* git config settings are currently all global; JGit has no support yet for URI-specific settings "http.<pattern>.name". Adding support for that is well beyond the scope of this change.

Like git-core, we log every redirect attempt (LOG.info) so that users may know about the redirection having occurred.

Extends the test framework to configure an AppServer with HTTPS support so that we can test cloning via HTTPS and redirections involving HTTPS.

[1] https://git-scm.com/docs/git-config
[2] https://kernel.googlesource.com/pub/scm/git/git/+/6628eb41db5189c0cdfdced6d8697e7c813c5f0f
[3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

CQ: 13987
Bug: 465167
Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
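The follow/don't-follow decision described above can be condensed into a predicate. This is my reading of the message, not JGit's code; the class and method names are illustrative:

```java
/** Sketch of the redirect-following policy described above (illustrative). */
public class RedirectPolicy {
    // Same default the JDK uses for its http.maxRedirects property; 5 here.
    static final int DEFAULT_MAX_REDIRECTS =
            Integer.getInteger("http.maxRedirects", 5);

    /** Should this response be followed, given the request method and the
     *  http.followRedirects setting ("false", "true", or "initial")? */
    static boolean follow(int status, boolean isPost, String followRedirects) {
        boolean redirect = status == 301 || status == 302
                || status == 303 || status == 307;
        if (!redirect)
            return false;
        if (isPost) // POST: 303 is never followed; others only when "true"
            return status != 303 && "true".equals(followRedirects);
        // GET: both "true" and "initial" allow following
        return "true".equals(followRedirects) || "initial".equals(followRedirects);
    }

    public static void main(String[] args) {
        System.out.println(follow(301, false, "initial")); // true
        System.out.println(follow(303, true, "true"));     // false
        System.out.println(follow(302, true, "initial"));  // false
    }
}
```

A real client would also count attempts against DEFAULT_MAX_REDIRECTS (or the git config override) before giving up.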
9 years ago
Add support to follow HTTP redirects git-core follows HTTP redirects so JGit should also provide this. Implement config setting http.followRedirects with possible values "false" (= never), "true" (= always), and "initial" (only on GET, but not on POST).[1] We must do our own redirect handling and cannot rely on the support that the underlying real connection may offer. At least the JDK's HttpURLConnection has two features that get in the way: * it does not allow cross-protocol redirects and thus fails on http->https redirects (for instance, on Github). * it translates a redirect after a POST to a GET unless the system property "http.strictPostRedirect" is set to true. We don't want to manipulate that system setting nor require it. Additionally, git has its own rules about what redirects it accepts;[2] for instance, it does not allow a redirect that adds query arguments. We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3] On POST we do not handle 303, and we follow redirects only if http.followRedirects == true. Redirects are followed only a certain number of times. There are two ways to control that limit: * by default, the limit is given by the http.maxRedirects system property that is also used by the JDK. If the system property is not set, the default is 5. (This is much lower than the JDK default of 20, but I don't see the value of following so many redirects.) * this can be overwritten by a http.maxRedirects git config setting. The JGit http.* git config settings are currently all global; JGit has no support yet for URI-specific settings "http.<pattern>.name". Adding support for that is well beyond the scope of this change. Like git-core, we log every redirect attempt (LOG.info) so that users may know about the redirection having occurred. Extends the test framework to configure an AppServer with HTTPS support so that we can test cloning via HTTPS and redirections involving HTTPS. 
[1] https://git-scm.com/docs/git-config [2] https://kernel.googlesource.com/pub/scm/git/git/+/6628eb41db5189c0cdfdced6d8697e7c813c5f0f [3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html CQ: 13987 Bug: 465167 Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com> Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
9 years ago
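The redirect rules described in this commit message can be sketched as follows. This is an illustrative sketch, not JGit's actual API: the class and method names (RedirectPolicy, shouldFollow, maxRedirects) are made up for this example.

```java
// Sketch of the redirect policy described above (illustrative names,
// not JGit's actual implementation).
public class RedirectPolicy {

    // Possible values of http.followRedirects.
    public enum Mode { NEVER, ALWAYS, INITIAL }

    // Decide whether a redirect response should be followed for the
    // given HTTP method: 301/302/303/307 are handled; on POST, 303 is
    // never followed, and the other codes only when
    // http.followRedirects == true.
    public static boolean shouldFollow(Mode mode, String method, int status) {
        boolean redirect = status == 301 || status == 302
                || status == 303 || status == 307;
        if (!redirect || mode == Mode.NEVER) {
            return false;
        }
        if ("POST".equals(method)) {
            return status != 303 && mode == Mode.ALWAYS;
        }
        return true; // GET: both "true" and "initial" follow
    }

    // Limit resolution: the git config setting overrides the
    // http.maxRedirects system property; if neither is set, default 5.
    public static int maxRedirects(Integer gitConfig, Integer sysProp) {
        if (gitConfig != null) return gitConfig;
        if (sysProp != null) return sysProp;
        return 5;
    }
}
```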
Merging Git notes Merging Git notes branches has several differences from merging "normal" branches. Although Git notes are initially stored as one flat tree, the tree may fan out when the number of notes becomes too large for efficient access. In this case the first two hex digits of the note name will be used as a subdirectory name and the remaining 38 hex digits as the file name under that directory. Similarly, when the number of notes decreases, a fanout tree may collapse back into a flat tree. The Git notes merge algorithm must take into account possibly different tree structures in different note branches and must properly match them against each other. Any conflict on a Git note is, by default, resolved by concatenating the two conflicting versions of the note. A delete-edit conflict is, by default, resolved by keeping the edit version. The note merge logic is pluggable and the caller may provide a custom note merger that implements a different merge strategy. Additionally, it is possible to have non-note entries inside a notes tree. The merge algorithm must also take this fact into account and will try to merge such non-note entries. However, in case of any merge conflicts the merge operation will fail. The Git notes merge algorithm currently does not attempt a content merge of non-note entries. Thanks to Shawn Pearce for patiently answering my questions related to this topic, giving hints and providing code snippets. Change-Id: I3b2335c76c766fd7ea25752e54087f9b19d69c88 Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
13 years ago
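The fanout naming scheme described above (first two hex digits as a subdirectory, the remaining 38 as the file name) can be sketched like this; NoteFanout is an illustrative name, not JGit's actual NoteMap code.

```java
// Sketch of the notes fanout naming scheme described above
// (illustrative, not JGit's actual implementation).
public class NoteFanout {

    // Convert a 40-hex-digit note name into its 2/38 fanout path.
    public static String fanoutPath(String noteSha1) {
        if (noteSha1.length() != 40)
            throw new IllegalArgumentException("expected 40 hex digits");
        return noteSha1.substring(0, 2) + "/" + noteSha1.substring(2);
    }
}
```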
Make blame work correctly on merge conflicts When a conflicting file was blamed, JGit would not identify lines coming from the merge parents. The main cause for this was that Blame and BlameCommand simply added the first DirCacheEntry found for a file to its queue of candidates (blobs or commits) to consider. In case of a conflict this typically is the merge base commit, and comparing the auto-merged contents against that base would yield incorrect results. Such cases have to be handled specially. The candidate to be considered by the blame must use the working tree contents, but at the same time behave like a merge commit/candidate with HEAD and the MERGE_HEADs as parents. Canonical git does something very similar, see [1]. Implement that and add tests. I first did this for the JGit pgm Blame command. When I then tried to do the same in BlameCommand, I noticed that the latter also included some fancy but incomplete CR-LF handling. In order to be able to use the new BlameGenerator.prepareHead() also in BlameCommand this CR-LF handling was also moved into BlameGenerator and corrected in doing so. (Just considering the git config settings was not good enough; CR-LF behavior can also be influenced by .gitattributes, and even by whether the file in the index has CR-LF. To correctly determine CR-LF handling for check-in one needs to do a TreeWalk with at least a FileTreeIterator and a DirCacheIterator.) [1] https://github.com/git/git/blob/v2.22.0/blame.c#L174 Bug: 434330 Change-Id: I9d763dd6ba478b0b6ebf9456049d6301f478ef7c Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
4 years ago
Added read/write support for pack bitmap index. A pack bitmap index is an additional index of compressed bitmaps of the object graph. Furthermore, a logical API of the index functionality is included, as it is expected to be used by the PackWriter. Compressed bitmaps are created using the javaewah library, which is a word-aligned compressed variant of the Java bitset class based on run-length encoding. The library only works with positive integer values. Thus, the maximum number of ObjectIds in a pack file that this index can currently support is limited to Integer.MAX_VALUE. Every ObjectId is given an integer mapping. The integer is the position of the ObjectId in the complete ObjectId list, sorted by offset, for the pack file. That integer is what the bitmaps use to reference the ObjectId. Currently, the new index format can only be used with pack files that contain a complete closure of the object graph e.g. the result of a garbage collection. The index file includes four bitmaps for the Git object types i.e. commits, trees, blobs, and tags. In addition, a collection of bitmaps keyed by an ObjectId is also included. The bitmap for each entry in the collection represents the full closure of ObjectIds reachable from the keyed ObjectId (including the keyed ObjectId itself). The bitmaps are further compressed by XORing the current bitmaps against prior bitmaps in the index, and selecting the smallest representation. The XOR'd bitmap and offset from the current entry to the position of the bitmap to XOR against is the actual representation of the entry in the index file. Each entry contains one byte, which is currently used to note whether the bitmap should be blindly reused. Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
11 years ago
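The ObjectId-to-integer mapping described above can be sketched as follows, using java.util.BitSet as a stand-in for the javaewah compressed bitmaps; the class and method names here are illustrative, not JGit's.

```java
import java.util.*;

// Sketch of the position-based bitmap mapping described above.
// BitSet stands in for javaewah's compressed bitmaps.
public class BitmapSketch {

    // Each ObjectId's integer is its position in the complete
    // offset-sorted ObjectId list of the pack.
    public static Map<String, Integer> positionMap(List<String> idsSortedByOffset) {
        Map<String, Integer> pos = new HashMap<>();
        for (int i = 0; i < idsSortedByOffset.size(); i++)
            pos.put(idsSortedByOffset.get(i), i);
        return pos;
    }

    // A reachability bitmap sets the bit at each reachable position.
    // XORing two similar bitmaps yields a sparse (smaller) result,
    // which is the compression trick the index format uses.
    public static BitSet bitmapFor(Map<String, Integer> pos, Collection<String> reachable) {
        BitSet bits = new BitSet();
        for (String id : reachable)
            bits.set(pos.get(id));
        return bits;
    }
}
```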
Handle stale file handles on packed-refs file On a local filesystem the packed-refs file will be orphaned if it is replaced by another client while the current client is reading the old one. However, since NFS servers do not keep track of open files, instead of orphaning the old packed-refs file, such a replacement will cause the old file to be garbage collected instead. A stale file handle exception will be raised on NFS servers if the file is garbage collected (deleted) on the server while it is being read. Since we no longer have access to the old file in these cases, the previous code would just fail. However, in these cases, reopening the file and rereading it will succeed (since it will reopen the new replacement file). So retrying the read is a viable strategy to deal with stale file handles on the packed-refs file; implement such a strategy. Since it is possible that the packed-refs file could be replaced again while rereading it (multiple consecutive updates can easily occur with ref deletions), loop on stale file handle exceptions, up to 5 extra times, trying to read the packed-refs file again, until we either read the new file, or find that the file no longer exists. The limit of 5 is arbitrary, and provides a safe upper bound to prevent infinite loops consuming resources in a potential unforeseen persistent error condition. Change-Id: I085c472bafa6e2f32f610a33ddc8368bb4ab1814 Signed-off-by: Martin Fick <mfick@codeaurora.org> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
8 years ago
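The retry loop described above can be sketched like this. It is a simplified illustration: the names are made up, and real code would only retry on the specific stale-file-handle condition rather than on every IOException.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of the stale-handle retry strategy described above
// (illustrative; JGit's actual detection of staleness is omitted).
public class RetryRead {

    // Try the read; on an IOException assumed to be a stale file
    // handle, reopen and reread up to 5 extra times before giving up.
    public static <T> T readWithRetries(Callable<T> read) throws Exception {
        IOException last = null;
        for (int attempt = 0; attempt <= 5; attempt++) {
            try {
                return read.call();
            } catch (IOException e) {
                last = e; // assume stale handle; retry with a fresh open
            }
        }
        throw last; // persistent error condition: stop looping
    }
}
```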
Rewrite push certificate parsing - Consistently return structured data, such as actual ReceiveCommands, which is more useful for callers that are doing things other than verifying the signature, e.g. recording the set of commands. - Store the certificate version field, as this is required to be part of the signed payload. - Add a toText() method to recreate the actual payload for signature verification. This requires keeping track of the un-chomped command strings from the original protocol stream. - Separate the parser from the certificate itself, so the actual PushCertificate object can be immutable. Make a fair attempt at deep immutability, but this is not possible with the current mutable ReceiveCommand structure. - Use more detailed error messages that don't involve NON-NLS strings. - Document null return values more thoroughly. Instead of having the undocumented behavior of throwing NPE from certain methods if they are not first guarded by enabled(), eliminate enabled() and return null from those methods. - Add tests for parsing a push cert from a section of pkt-line stream using a real live stream captured with Wireshark (which, it should be noted, uncovered several simply incorrect statements in C git's Documentation/technical/pack-protocol.txt). This is a slightly breaking API change to classes that were technically public and technically released in 4.0. However, it is highly unlikely that people were actually depending on public behavior, since there were no public methods to create PushCertificates with anything other than null field values, or a PushCertificateParser that did anything other than infinite loop or throw exceptions when reading. Change-Id: I5382193347a8eb1811032d9b32af9651871372d0
9 years ago
Persist filesystem timestamp resolution and allow manual configuration To enable persisting filesystem timestamp resolution per FileStore add a new config section to the user global git configuration: - Config section is "filesystem" - Config subsection is concatenation of - Java vendor (system property "java.vm.vendor") - runtime version (system property "java.vm.version") - FileStore's name - separated by '|' e.g. "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1" The prefix is needed since some Java versions do not expose the full timestamp resolution of the underlying filesystem. This may also depend on the underlying operating system hence concrete key values may not be portable. - Config key for timestamp resolution is "timestampResolution" as a time value, supported time units are those supported by DefaultTypedConfigGetter#getTimeUnit If timestamp resolution is already configured for a given FileStore the configured value is used instead of measuring the resolution. When timestamp resolution was measured it is persisted in the user global git configuration. Example: [filesystem "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1"] timestampResolution = 1 seconds If locking the git config file fails, retry saving the resolution up to 5 times in order to work around races with another thread. In order to avoid stack overflow use the fallback filesystem timestamp resolution when loading FileBasedConfig, which itself creates a FileSnapshot to help check whether the config changed.
Note: - on some OSes Java 8,9 truncate to milliseconds or seconds, see https://bugs.openjdk.java.net/browse/JDK-8177809, fixed in Java 10 - UnixFileAttributes up to Java 12 truncates timestamp resolution to microseconds when converting the internal representation to FileTime exposed in the API, see https://bugs.openjdk.java.net/browse/JDK-8181493 - WindowsFileAttributes also provides only microsecond resolution up to Java 12 Hence do not attempt to manually configure a higher timestamp resolution than supported by the Java version being used at runtime. Bug: 546891 Bug: 548188 Change-Id: Iff91b8f9e6e5e2295e1463f87c8e95edf4abbcf8 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
4 years ago
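The config subsection key described above is just the three components joined by '|'. A minimal sketch, with an illustrative class name:

```java
// Sketch of building the "filesystem" config subsection key described
// above: Java vendor, runtime version, and FileStore name, '|'-joined.
// (Illustrative; not JGit's actual code.)
public class FsConfigKey {

    public static String subsection(String vendor, String version, String storeName) {
        return vendor + "|" + version + "|" + storeName;
    }
}
```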
maxObjectSizeLimit for receive-pack. ReceivePack (and PackParser) can be configured with the maxObjectSizeLimit in order to prevent users from pushing too large objects to Git. The limit check is applied to all object types although it is most likely that a BLOB will exceed the limit. In all cases the size of the object header is excluded from the object size which is checked against the limit, as this is the size which a BLOB object would take in the working tree when checked out as a file. When an object exceeds the maxObjectSizeLimit the receive-pack will abort immediately. Delta objects (both offset and ref delta) are also checked against the limit. However, for delta objects we will first check the size of the inflated delta block against the maxObjectSizeLimit and abort immediately if it exceeds the limit. In this case we do not even know the exact size of the resolved delta object but we assume it will be larger than the given maxObjectSizeLimit as delta is generally only chosen if the delta can copy more data from the base object than the delta needs to insert or needs to represent the copy ranges. Aborting early, in this case, avoids unnecessary inflating of the (huge) delta block. Unfortunately, it is too expensive (especially for a large delta) to compute the SHA-1 of an object that causes the receive-pack to abort. This would decrease the value of this feature whose main purpose is to protect server resources from users pushing huge objects. Therefore we don't report the SHA-1 in the error message. Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
12 years ago
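The limit check described above can be sketched like this; the class and exception choice are illustrative, not PackParser's actual code.

```java
// Sketch of the maxObjectSizeLimit check described above
// (illustrative names; not JGit's actual implementation).
public class SizeLimit {

    // inflatedSize is the object payload size with the in-pack object
    // header already excluded, i.e. what a checked-out blob would
    // occupy in the working tree. A limit of 0 means "no limit".
    public static void check(long inflatedSize, long maxObjectSizeLimit) {
        if (maxObjectSizeLimit > 0 && inflatedSize > maxObjectSizeLimit)
            throw new IllegalStateException("Object too large ("
                    + inflatedSize + " > " + maxObjectSizeLimit + ")");
    }
}
```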
RenameBranchCommand: more consistent handling of short ref names Several problems: * The command didn't specify whether it expected short or full names. * For the new name, it expected a short name, but then got confused if tags or both local and remote branches with the same name existed. * For the old name, it accepted either a short or a full name, but again got confused if a short name was given and a tag with the same name existed. With such an interface, one cannot use Repository.findRef() to reliably find the branch to rename. Use exactRef() for the new name as by the time the Ref is needed its full name is known. For determining the old Ref from the name, do the resolution explicitly: first try exactRef (assuming the old name is a full name); if that doesn't find anything, try "refs/heads/<old>" and "refs/remotes/<old>" explicitly. Throw an exception if the name is ambiguous, or if exactRef returned something that is not a branch (refs/tags/... or also refs/notes/...). Document in the javadoc what kind of names are valid, and add tests. A user can still shoot himself in the foot if he chooses exceptionally stupid branch names. For instance, it is still possible to rename a branch to "refs/heads/foo" (full name "refs/heads/refs/heads/foo"), but it cannot be renamed further using the new short name if a branch with the full name "refs/heads/foo" exists. Similar edge cases exist for other dumb branch names, like a branch with the short name "refs/tags/foo". Renaming using the full name is always possible. Bug: 542446 Change-Id: I34ac91c80c0a00c79a384d16ce1e727c550d54e9 Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
5 years ago
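The old-name resolution order described above can be sketched as follows. This models exactRef as a lookup in a set of existing full ref names and omits the "result must be a branch" check; the names are illustrative, not RenameBranchCommand's code.

```java
import java.util.Set;

// Sketch of the old-ref resolution described above: try the name as a
// full ref first, then refs/heads/<name> and refs/remotes/<name>;
// ambiguity between the latter two is an error. (Illustrative only.)
public class OldRefResolver {

    public static String resolve(String name, Set<String> existingRefs) {
        if (existingRefs.contains(name))
            return name; // already a full name
        String head = "refs/heads/" + name;
        String remote = "refs/remotes/" + name;
        boolean h = existingRefs.contains(head);
        boolean r = existingRefs.contains(remote);
        if (h && r)
            throw new IllegalArgumentException("ambiguous ref name: " + name);
        if (h) return head;
        if (r) return remote;
        return null; // not found
    }
}
```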
Implement similarity based rename detection Content similarity based rename detection is performed only after a linear time detection is performed using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename, which reduces the space that must be considered. During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical. Efficiently comparing two files is performed by building up two hash indexes and hashing lines or short blocks from each file, counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very light-weight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself, and the index table, but not per-cell. This offers a massive space savings over using java.util.HashMap. The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common into the length of the larger of the two input files. Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents.
If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized. Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N time constraint comes from the sort of the index table that is necessary to perform linear time matching against another SimilarityIndex created for a different file. To actually perform the rename detection, an SxD matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows. An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames. The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash values. We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files. Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git.
The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix. Bug: 318504 Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
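The packing trick described above (two ints per record collapsed into one long, so the open hash table is a bare long[] with no per-cell object overhead) can be sketched like this; the class name is illustrative.

```java
// Sketch of the SimilarityIndex cell packing described above:
// the line/block hash key in the upper 32 bits, the byte count in
// the lower 32 bits. (Illustrative; not JGit's actual code.)
public class PackedCell {

    public static long pack(int key, int bytes) {
        return ((long) key << 32) | (bytes & 0xFFFFFFFFL);
    }

    public static int key(long cell) {
        return (int) (cell >>> 32);
    }

    public static int bytes(long cell) {
        return (int) cell;
    }
}
```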
Increase core.streamFileThreshold default to 50 MiB Projects like org.eclipse.mdt contain large XML files about 6 MiB in size. So does the Android project platform/frameworks/base. Doing a clone of either project with JGit takes forever to check out the files into the working directory, because delta decompression tends to be very expensive as we need to constantly reposition the base stream for each copy instruction. This can be made worse by a very bad ordering of offsets, possibly due to an XML editor that doesn't preserve the order of elements in the file very well. Increasing the threshold to the same limit PackWriter uses when doing delta compression (50 MiB) permits a default configured JGit to decompress these XML file objects using the faster random-access arrays, rather than re-seeking through an inflate stream, significantly reducing checkout time after a clone. Since this new limit may be dangerously close to the JVM maximum heap size, every allocation attempt is now wrapped in a try/catch so that JGit can degrade by switching to the large object stream mode when the allocation is refused. It will run slower, but the operation will still complete. The large stream mode will run very well for big objects that aren't delta compressed, and is acceptable for delta compressed objects that are using only forward referencing copy instructions. Copies using prior offsets are still going to be horrible, and there is nothing we can do about it except increase core.streamFileThreshold. We might in the future want to consider changing the way the delta generators work in JGit and native C Git to avoid prior offsets once an object reaches a certain size, even if that causes the delta instruction stream to be slightly larger. Unfortunately native C Git won't want to do that until its also able to stream objects rather than malloc them as contiguous blocks. Change-Id: Ief7a3896afce15073e80d3691bed90c6a3897307 Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
13 years ago
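The degradation path described above (try the fast whole-object array, fall back to streaming when the allocation is refused) can be sketched as follows; the names are illustrative and the stream path is a placeholder.

```java
// Sketch of the streamFileThreshold fallback described above
// (illustrative; not JGit's actual ObjectLoader code).
public class LoadStrategy {

    // Decide how to load an object of the given size: objects over the
    // threshold always stream; smaller ones try the fast random-access
    // array, degrading to stream mode if the allocation is refused.
    public static String choose(long size, long streamFileThreshold) {
        if (size > streamFileThreshold)
            return "stream";
        try {
            byte[] buf = new byte[(int) size]; // fast random-access path
            return "array";
        } catch (OutOfMemoryError e) {
            return "stream"; // degrade instead of failing the operation
        }
    }
}
```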
Support creating pack bitmap indexes in PackWriter. Update the PackWriter to support writing out pack bitmap indexes, a parallel ".bitmap" file to the ".pack" file. Bitmaps are selected at commits every 1 to 5,000 commits for each unique path from the start. The most recent 100 commits are all bitmapped. The next 19,000 commits have a bitmap every 100 commits. The remaining commits have a bitmap every 5,000 commits. Commits with more than 1 parent are preferred over ones with 1 or fewer. Furthermore, previously computed bitmaps are reused, if the previous entry had the reuse flag set, which is set when the bitmap was placed at the max allowed distance. Bitmaps are used to speed up the counting phase when packing, for requests that are not shallow. The PackWriterBitmapWalker uses a RevFilter to proactively mark commits with RevFlag.SEEN, when they appear in a bitmap. The walker produces the full closure of reachable ObjectIds, given the collection of starting ObjectIds. For fetch requests, two ObjectWalks are executed to compute the ObjectIds reachable from the haves and from the wants. The ObjectIds that need to be written are determined by taking all the resulting wants AND NOT the haves. For clone requests, we get cached pack support for "free" since it is possible to determine if all of the ObjectIds in a pack file are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux kernel repository (with about 2.6M objects and 300K commits) are tabulated below:

Operation                    Index V2               Index VE003
Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
Fetch (1 commit back)           75ms                  107ms
Fetch (10 commits back)        456ms (269.51 KiB)     341ms (265.19 KiB)
Fetch (100 commits back)       449ms (269.91 KiB)     337ms (267.28 KiB)
Fetch (1000 commits back)     2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
Fetch (10000 commits back)    2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
Fetch (100000 commits back)  14340ms (185.83 MiB)    1655ms (189.39 MiB)

Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
11 years ago
Shallow fetch: Respect "shallow" lines When fetching from a shallow clone, the client sends "have" lines to tell the server about objects it already has and "shallow" lines to tell where its local history terminates. In some circumstances, the server fails to honor the shallow lines and fails to return objects that the client needs. UploadPack passes the "have" lines to PackWriter so PackWriter can omit them from the generated pack. UploadPack processes "shallow" lines by calling RevWalk.assumeShallow() with the set of shallow commits. RevWalk creates and caches RevCommits for these shallow commits, clearing out their parents. That way, walks correctly terminate at the shallow commits instead of assuming the client has history going back behind them. UploadPack converts its RevWalk to an ObjectWalk, maintaining the cached RevCommits, and passes it to PackWriter. Unfortunately, to support shallow fetches the PackWriter does the following: if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk)) walk = new DepthWalk.ObjectWalk(reader, depth); That is, when the client sends a "deepen" line (fetch --depth=<n>) and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter throws away the RevWalk that was passed in and makes a new one. The cleared parent lists prepared by RevWalk.assumeShallow() are lost. Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk. It tries to create it by calling toObjectWalkWithSameObjects() on a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk does not override the standard RevWalk#toObjectWalkWithSameObjects implementation, the result is a plain ObjectWalk instead of an instance of DepthWalk.ObjectWalk. The result is that the "shallow" information is thrown away and objects reachable from the shallow commits can be omitted from the pack sent when fetching with --depth from a shallow clone. Multiple factors collude to limit the circumstances under which this bug can be observed: 1. 
Commits with depth != 0 don't enter DepthGenerator's pending queue. That means a "have" cannot have any effect on DepthGenerator unless it is also a "want". 2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the uninteresting flag is not propagated to ancestors there even if a "have" is also a "want". 3. JGit treats a depth of 1 as "1 past the wants". Because of (2), the only place the UNINTERESTING flag can leak to a shallow commit's parents is in the carryFlags() call from markUninteresting(). carryFlags() only traverses commits that have already been parsed: commits yet to be parsed are supposed to inherit correct flags from their parent in PendingGenerator#next (which doesn't happen here --- that is (2)). So the list of commits that have already been parsed becomes relevant. When we hit the markUninteresting() call, all "want"s, "have"s, and commits to be unshallowed have been parsed. carryFlags() only affects the parsed commits. If the "want" is a direct parent of a "have", then carryFlags() marks it as uninteresting. If the "have" was also a "shallow", then its parent pointer should have been null and the "want" shouldn't have been marked, so we see the bug. If the "want" is a more distant ancestor then (2) keeps the uninteresting state from propagating to the "want" and we don't see the bug. If the "shallow" is not also a "have" then the shallow commit isn't parsed so (2) keeps the uninteresting state from propagating to the "want" so we don't see the bug. Here is a reproduction case (time flowing left to right, arrows pointing to parents). "C" must be a commit that the client reports as a "have" during negotiation. That can only happen if the server reports it as an existing branch or tag in the first round of negotiation: A <-- B <-- C <-- D First do git clone --depth 1 <repo> which yields D as a "have" and C as a "shallow" commit. Then try git fetch --depth 1 <repo> B:refs/heads/B Negotiation sets up: have D, shallow C, have C, want B.
But due to this bug B is marked as uninteresting and is not sent. Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440 Signed-off-by: Terry Parker <tparker@google.com>
7 years ago
GPG signature verification via BouncyCastle Add a GpgSignatureVerifier interface, plus a factory to create instances thereof that is provided via the ServiceLoader mechanism. Implement the new interface for BouncyCastle. A verifier maintains an internal LRU cache of previously found public keys to speed up verifying multiple objects (tags or commits). Mergetags are not handled. Provide a new VerifySignatureCommand in org.eclipse.jgit.api together with a factory method Git.verifySignature(). The command can verify signatures on tags or commits, and can be limited to accept only tags or commits. Provide a new public WrongObjectTypeException thrown when the command is limited to either tags or commits and a name resolves to some other object kind. In jgit.pgm, implement "git tag -v", "git log --show-signature", and "git show --show-signature". The output is similar to command-line gpg invoked via git, but not identical. In particular, lines are not prefixed by "gpg:" but by "bc:". Trust levels for public keys are read from the keys' trust packets, not from GPG's internal trust database. A trust packet may or may not be set. Command-line GPG produces more warning lines depending on the trust level, warning about keys with a trust level below "full". There are no unit tests because JGit still doesn't have any setup to do signing unit tests; this would require at least a faked .gpg directory with pre-created key rings and keys, and a way to make the BouncyCastle classes use that directory instead of the default. See bug 547538 and also bug 544847. Tested manually with a small test repository containing signed and unsigned commits and tags, with signatures made with different keys and made by command-line git using GPG 2.2.25 and by JGit using BouncyCastle 1.65. Bug: 547751 Change-Id: If7e34aeed6ca6636a92bf774d893d98f6d459181 Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
3 years ago
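The verifier's internal LRU cache of public keys, mentioned above, can be sketched with java.util.LinkedHashMap in access-order mode; the class name and capacity are assumptions for illustration, not JGit's actual implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU cache like the one the verifier keeps for
// previously found public keys (illustrative; capacity is assumed).
public class KeyCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public KeyCache(int capacity) {
        super(16, 0.75f, true); // access-order iteration gives LRU behavior
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once over capacity.
        return size() > capacity;
    }
}
```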
Implement similarity based rename detection

Content similarity based rename detection is performed only after a linear-time detection pass using exact content match on the ObjectIds. Any names paired up during that exact-match phase are excluded from the inexact similarity-based rename detection, which reduces the space that must be considered. During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical.

Efficiently comparing two files is performed by building up two hash indexes, hashing lines or short blocks from each file and counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very lightweight hash with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself and the index table, but not per cell. This offers a massive space savings over using java.util.HashMap.

The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common by the length of the larger of the two input files.

Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents. If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized.

Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N term comes from the sort of the index table that is necessary to perform linear-time matching against another SimilarityIndex created for a different file.

To actually perform the rename detection, an SxD matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows.

An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long-running renames.

The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash values. We may need to refine this hash in the future to minimize the number of collisions we get on common source files.

Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git. The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix.

Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
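The commit above describes collapsing each index record (a line/block hash key plus a byte count, two ints) into a single long stored in a long array, and scoring by dividing the bytes in common by the larger file's size. A minimal sketch of that packing and scoring; the class and method names here are illustrative, not JGit's actual SimilarityIndex API:

```java
// Sketch of the packed record idea from the commit message: each record
// is one long, with the line/block hash key in the upper 32 bits and the
// byte count in the lower 32, so the table is a bare long[] with no
// per-entry object overhead.
class PackedIndex {
	static long record(int key, int bytes) {
		return ((long) key << 32) | (bytes & 0xFFFFFFFFL);
	}

	static int key(long rec) {
		return (int) (rec >>> 32); // upper 32 bits
	}

	static int bytes(long rec) {
		return (int) rec; // lower 32 bits
	}

	// Score as described: approximate bytes in common divided by the
	// length of the larger of the two inputs, scaled to a percentage.
	static int score(long commonBytes, long maxFileSize) {
		return (int) (commonBytes * 100 / maxFileSize);
	}
}
```

Packing both values into one long is what lets the table be sorted in place with Arrays.sort(long[]) for the linear-time matching pass the message mentions.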
Handle SSL handshake failures in TransportHttp

When a https connection could not be established because the SSL handshake was unsuccessful, TransportHttp would unconditionally throw a TransportException. Other https clients like web browsers or also some SVN clients handle this more gracefully: if there's a problem with the server certificate, they inform the user and give them the option to connect to the server all the same. In git, this corresponds to dynamically setting http.sslVerify to false for the server.

Implement this using the CredentialsProvider to inform and ask the user. We offer three choices:

1. skip SSL verification for the current git operation, or
2. skip SSL verification for the server always from now on for requests originating from the current repository, or
3. always skip SSL verification for the server from now on.

For (1), we just suppress SSL verification for the current instance of TransportHttp. For (2), we store a http.<uri>.sslVerify = false setting for the original URI in the repo config. For (3), we store the http.<uri>.sslVerify setting in the git user config.

Adapt the SmartClientSmartServerSslTest such that it uses this mechanism instead of setting http.sslVerify up front.

Improve SimpleHttpServer so it can also be set up with HTTPS support, in anticipation of an EGit SWTBot UI test verifying that cloning via HTTPS from a server whose certificate doesn't validate pops up the correct dialog, and that cloning subsequently proceeds successfully if the user decides to skip SSL verification.

Bug: 374703
Change-Id: Ie1abada9a3d389ad4d8d52c2d5265d2764e3fb0e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
6 years ago
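Choices (2) and (3) above differ only in which config file receives the http.<uri>.sslVerify = false entry, with the repo config taking precedence over the user config. A minimal sketch of that lookup precedence, using plain maps as hypothetical stand-ins for the repo and user configs (this is not JGit's Config API, just an illustration of the precedence rule):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: models how an http.<uri>.sslVerify entry written by
// choice (2) (repo config) or choice (3) (user config) would be resolved,
// with the repo config consulted first and verification on by default.
class SslVerifyPolicy {
	static boolean sslVerify(String uri, Map<String, Boolean> repoConfig,
			Map<String, Boolean> userConfig) {
		String key = "http." + uri + ".sslVerify";
		if (repoConfig.containsKey(key)) {
			return repoConfig.get(key); // repo config wins
		}
		// Fall back to the user config; verify unless told otherwise.
		return userConfig.getOrDefault(key, Boolean.TRUE);
	}
}
```

Choice (1) has no config footprint at all: only the current TransportHttp instance suppresses verification, so nothing persists.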
Fix atomic lock file creation on NFS

FS_POSIX.createNewFile(File) failed to properly implement atomic file creation on NFS using the algorithm [1]:

- the name of the hard link must be unique, to prevent two processes using different NFS clients from trying to create the same link. Otherwise nlink would be useless for detecting whether there was a race.
- the hard link must be retained for the lifetime of the file, since we don't know when the state of the involved NFS clients will be synchronized. This depends on NFS configuration options.

To fix these issues we need to change the signature of createNewFile, which would break the API. Hence deprecate the old method FS.createNewFile(File) and add a new method createNewFileAtomic(File). The new method returns a LockToken which needs to be retained by the caller (LockFile) until all involved NFS clients have synchronized their state. Since we don't know when the NFS caches are synchronized, we need to retain the token until the corresponding file is no longer needed.

The LockToken must be closed after the LockFile using it has been committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile = false, this will delete the hard link which guarded the atomic creation of the file. When acquiring the lock fails, ensure that the hard link is removed.

[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in http://man7.org/linux/man-pages/man2/open.2.html

Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
5 years ago
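The hard-link algorithm from [1] that the commit references can be sketched in plain NIO; the class and method names (AtomicNfsLock, lockViaHardLink) are illustrative, not JGit's actual FS_POSIX implementation, and a real caller would retain the side file as the commit's LockToken does rather than release it immediately:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of NFS-safe atomic lock creation: create a side file with a
// name unique to this process, hard-link it to the lock name (link
// creation is atomic even on NFS), then check that the side file's
// nlink count is exactly 2 to detect a lost race.
class AtomicNfsLock {
	static boolean lockViaHardLink(Path lock) throws IOException {
		// Unique side-file name per process, per the commit message.
		Path side = lock.resolveSibling(lock.getFileName() + "."
				+ ProcessHandle.current().pid() + "." + System.nanoTime());
		Files.createFile(side);
		try {
			Files.createLink(lock, side);
		} catch (FileAlreadyExistsException e) {
			Files.delete(side); // lost the race: clean up our side file
			return false;
		}
		try {
			// nlink must be 2 (side file + lock); anything else is a race.
			Number nlink = (Number) Files.getAttribute(side, "unix:nlink");
			return nlink.intValue() == 2;
		} catch (UnsupportedOperationException | IllegalArgumentException e) {
			// No "unix" attribute view (e.g. Windows): trust createLink.
			return true;
		}
		// NOTE: per the commit, the side file must be retained (the
		// LockToken) until the lock is released, not deleted here.
	}
}
```

The nlink check is what O_EXCL cannot guarantee over older NFS: the link may have been created by this client's cache replay rather than by this call, and only the link count on the uniquely named side file reveals that.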
GPG signature verification via BouncyCastle

Add a GpgSignatureVerifier interface, plus a factory to create instances thereof that is provided via the ServiceLoader mechanism. Implement the new interface for BouncyCastle. A verifier maintains an internal LRU cache of previously found public keys to speed up verifying multiple objects (tags or commits). Mergetags are not handled.

Provide a new VerifySignatureCommand in org.eclipse.jgit.api together with a factory method Git.verifySignature(). The command can verify signatures on tags or commits, and can be limited to accept only tags or commits. Provide a new public WrongObjectTypeException thrown when the command is limited to either tags or commits and a name resolves to some other object kind.

In jgit.pgm, implement "git tag -v", "git log --show-signature", and "git show --show-signature". The output is similar to command-line gpg invoked via git, but not identical. In particular, lines are not prefixed by "gpg:" but by "bc:".

Trust levels for public keys are read from the keys' trust packets, not from GPG's internal trust database. A trust packet may or may not be set. Command-line GPG produces more warning lines depending on the trust level, warning about keys with a trust level below "full".

There are no unit tests because JGit still doesn't have any setup to do signing unit tests; this would require at least a faked .gpg directory with pre-created key rings and keys, and a way to make the BouncyCastle classes use that directory instead of the default. See bug 547538 and also bug 544847. Tested manually with a small test repository containing signed and unsigned commits and tags, with signatures made with different keys and made by command-line git using GPG 2.2.25 and by JGit using BouncyCastle 1.65.

Bug: 547751
Change-Id: If7e34aeed6ca6636a92bf774d893d98f6d459181
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
3 years ago
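The commit above says the verifier keeps an internal LRU cache of previously found public keys. A minimal sketch of such a cache using LinkedHashMap's access order; the class name KeyCache and the capacity are illustrative assumptions, not JGit's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: LinkedHashMap in access order evicts the
// least-recently-used entry once the capacity is exceeded. In the
// verifier this would map a key fingerprint to the located public key.
class KeyCache<K, V> extends LinkedHashMap<K, V> {
	private final int capacity;

	KeyCache(int capacity) {
		super(16, 0.75f, true); // true = access order, for LRU behavior
		this.capacity = capacity;
	}

	@Override
	protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
		return size() > capacity; // evict once over capacity
	}
}
```

With such a cache, verifying many commits signed by the same few keys avoids repeatedly searching the key rings for each object.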
/*
 * Copyright (C) 2010, 2013 Sasa Zivkov <sasa.zivkov@sap.com>
 * Copyright (C) 2012, 2021 Research In Motion Limited and others
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Distribution License v. 1.0 which is available at
 * https://www.eclipse.org/org/documents/edl-v10.php.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
package org.eclipse.jgit.internal;

import org.eclipse.jgit.nls.NLS;
import org.eclipse.jgit.nls.TranslationBundle;

/**
 * Translation bundle for JGit core
 */
public class JGitText extends TranslationBundle {

	/**
	 * Get an instance of this translation bundle
	 *
	 * @return an instance of this translation bundle
	 */
	public static JGitText get() {
		return NLS.getBundleFor(JGitText.class);
	}

	// @formatter:off
	/***/ public String abbreviationLengthMustBeNonNegative;
	/***/ public String abortingRebase;
	/***/ public String abortingRebaseFailed;
	/***/ public String abortingRebaseFailedNoOrigHead;
	/***/ public String advertisementCameBefore;
	/***/ public String advertisementOfCameBefore;
	/***/ public String amazonS3ActionFailed;
	/***/ public String amazonS3ActionFailedGivingUp;
	/***/ public String ambiguousObjectAbbreviation;
	/***/ public String aNewObjectIdIsRequired;
	/***/ public String anExceptionOccurredWhileTryingToAddTheIdOfHEAD;
	/***/ public String anSSHSessionHasBeenAlreadyCreated;
	/***/ public String applyBinaryBaseOidWrong;
	/***/ public String applyBinaryOidTooShort;
	/***/ public String applyBinaryResultOidWrong;
	/***/ public String applyingCommit;
	/***/ public String archiveFormatAlreadyAbsent;
	/***/ public String archiveFormatAlreadyRegistered;
	/***/ public String argumentIsNotAValidCommentString;
	/***/ public String assumeAtomicCreateNewFile;
	/***/ public String atLeastOnePathIsRequired;
	/***/ public String atLeastOnePatternIsRequired;
	/***/ public String atLeastTwoFiltersNeeded;
	/***/ public String atomicPushNotSupported;
	/***/ public String atomicRefUpdatesNotSupported;
	/***/ public String atomicSymRefNotSupported;
	/***/ public String authenticationNotSupported;
	/***/ public String badBase64InputCharacterAt;
	/***/ public String badEntryDelimiter;
	/***/ public String badEntryName;
	/***/ public String badEscape;
	/***/ public String badGroupHeader;
	/***/ public String badIgnorePattern;
	/***/ public String badIgnorePatternFull;
	/***/ public String badObjectType;
	/***/ public String badRef;
	/***/ public String badSectionEntry;
	/***/ public String badShallowLine;
	/***/ public String bareRepositoryNoWorkdirAndIndex;
	/***/ public String base85invalidChar;
	/***/ public String base85length;
	/***/ public String base85overflow;
	/***/ public String base85tooLong;
	/***/ public String base85tooShort;
	/***/ public String baseLengthIncorrect;
	/***/ public String binaryDeltaBaseLengthMismatch;
	/***/ public String binaryDeltaInvalidOffset;
	/***/ public String binaryDeltaInvalidResultLength;
	/***/ public String binaryHunkDecodeError;
	/***/ public String binaryHunkInvalidLength;
	/***/ public String binaryHunkLineTooShort;
	/***/ public String binaryHunkMissingNewline;
	/***/ public String bitmapMissingObject;
	/***/ public String bitmapsMustBePrepared;
	/***/ public String blameNotCommittedYet;
	/***/ public String blockLimitNotMultipleOfBlockSize;
	/***/ public String blockLimitNotPositive;
	/***/ public String blockSizeNotPowerOf2;
	/***/ public String bothRefTargetsMustNotBeNull;
	/***/ public String branchNameInvalid;
	/***/ public String buildingBitmaps;
	/***/ public String cachedPacksPreventsIndexCreation;
	/***/ public String cachedPacksPreventsListingObjects;
	/***/ public String cannotAccessLastModifiedForSafeDeletion;
	/***/ public String cannotBeCombined;
	/***/ public String cannotBeRecursiveWhenTreesAreIncluded;
	/***/ public String cannotChangeActionOnComment;
	/***/ public String cannotCheckoutFromUnbornBranch;
	/***/ public String cannotCheckoutOursSwitchBranch;
	/***/ public String cannotCombineSquashWithNoff;
	/***/ public String cannotCombineTopoSortWithTopoKeepBranchTogetherSort;
	/***/ public String cannotCombineTreeFilterWithRevFilter;
	/***/ public String cannotCommitOnARepoWithState;
	/***/ public String cannotCommitWriteTo;
	/***/ public String cannotConnectPipes;
	/***/ public String cannotConvertScriptToText;
	/***/ public String cannotCreateConfig;
	/***/ public String cannotCreateDirectory;
	/***/ public String cannotCreateHEAD;
	/***/ public String cannotCreateIndexfile;
	/***/ public String cannotCreateTempDir;
	/***/ public String cannotDeleteCheckedOutBranch;
	/***/ public String cannotDeleteFile;
	/***/ public String cannotDeleteObjectsPath;
	/***/ public String cannotDetermineProxyFor;
	/***/ public String cannotDownload;
	/***/ public String cannotEnterObjectsPath;
	/***/ public String cannotEnterPathFromParent;
	/***/ public String cannotExecute;
	/***/ public String cannotFindMergeBaseUsingFirstParent;
	/***/ public String cannotGet;
	/***/ public String cannotGetObjectsPath;
	/***/ public String cannotListObjectsPath;
	/***/ public String cannotListPackPath;
	/***/ public String cannotListRefs;
	/***/ public String cannotLock;
	/***/ public String cannotLockPackIn;
	/***/ public String cannotMatchOnEmptyString;
	/***/ public String cannotMkdirObjectPath;
	/***/ public String cannotMoveIndexTo;
	/***/ public String cannotMovePackTo;
	/***/ public String cannotOpenService;
	/***/ public String cannotParseDate;
	/***/ public String cannotParseGitURIish;
	/***/ public String cannotPullOnARepoWithState;
	/***/ public String cannotRead;
	/***/ public String cannotReadBackDelta;
	/***/ public String cannotReadBlob;
	/***/ public String cannotReadCommit;
	/***/ public String cannotReadFile;
	/***/ public String cannotReadHEAD;
	/***/ public String cannotReadIndex;
	/***/ public String cannotReadObject;
	/***/ public String cannotReadObjectsPath;
	/***/ public String cannotReadTree;
	/***/ public String cannotRebaseWithoutCurrentHead;
	/***/ public String cannotSaveConfig;
	/***/ public String cannotSquashFixupWithoutPreviousCommit;
	/***/ public String cannotStoreObjects;
	/***/ public String cannotResolveUniquelyAbbrevObjectId;
	/***/ public String cannotUpdateUnbornBranch;
	/***/ public String cannotWriteObjectsPath;
	/***/ public String canOnlyCherryPickCommitsWithOneParent;
	/***/ public String canOnlyRevertCommitsWithOneParent;
	/***/ public String commitDoesNotHaveGivenParent;
	/***/ public String cantFindObjectInReversePackIndexForTheSpecifiedOffset;
	/***/ public String channelMustBeInRange1_255;
	/***/ public String characterClassIsNotSupported;
	/***/ public String checkingOutFiles;
	/***/ public String checkoutConflictWithFile;
	/***/ public String checkoutConflictWithFiles;
	/***/ public String checkoutUnexpectedResult;
	/***/ public String classCastNotA;
	/***/ public String cloneNonEmptyDirectory;
	/***/ public String closeLockTokenFailed;
	/***/ public String closed;
	/***/ public String collisionOn;
	/***/ public String commandClosedStderrButDidntExit;
	/***/ public String commandRejectedByHook;
	/***/ public String commandWasCalledInTheWrongState;
	/***/ public String commitMessageNotSpecified;
	/***/ public String commitOnRepoWithoutHEADCurrentlyNotSupported;
	/***/ public String commitAmendOnInitialNotPossible;
	/***/ public String commitsHaveAlreadyBeenMarkedAsStart;
	/***/ public String compressingObjects;
	/***/ public String configSubsectionContainsNewline;
	/***/ public String configSubsectionContainsNullByte;
	/***/ public String configValueContainsNullByte;
	/***/ public String configHandleIsStale;
	/***/ public String configHandleMayBeLocked;
	/***/ public String connectionFailed;
	/***/ public String connectionTimeOut;
	/***/ public String contextMustBeNonNegative;
	/***/ public String cookieFilePathRelative;
	/***/ public String corruptionDetectedReReadingAt;
	/***/ public String corruptObjectBadDate;
	/***/ public String corruptObjectBadEmail;
	/***/ public String corruptObjectBadStream;
	/***/ public String corruptObjectBadTimezone;
	/***/ public String corruptObjectDuplicateEntryNames;
	/***/ public String corruptObjectGarbageAfterSize;
	/***/ public String corruptObjectIncorrectLength;
	/***/ public String corruptObjectIncorrectSorting;
	/***/ public String corruptObjectInvalidModeChar;
	/***/ public String corruptObjectInvalidModeStartsZero;
	/***/ public String corruptObjectInvalidMode2;
	/***/ public String corruptObjectInvalidMode3;
	/***/ public String corruptObjectInvalidName;
	/***/ public String corruptObjectInvalidNameAux;
	/***/ public String corruptObjectInvalidNameCon;
	/***/ public String corruptObjectInvalidNameCom;
	/***/ public String corruptObjectInvalidNameEnd;
	/***/ public String corruptObjectInvalidNameIgnorableUnicode;
	/***/ public String corruptObjectInvalidNameInvalidUtf8;
	/***/ public String corruptObjectInvalidNameLpt;
	/***/ public String corruptObjectInvalidNameNul;
	/***/ public String corruptObjectInvalidNamePrn;
	/***/ public String corruptObjectInvalidObject;
	/***/ public String corruptObjectInvalidParent;
	/***/ public String corruptObjectInvalidTree;
	/***/ public String corruptObjectInvalidType;
	/***/ public String corruptObjectInvalidType2;
	/***/ public String corruptObjectMissingEmail;
	/***/ public String corruptObjectNameContainsByte;
	/***/ public String corruptObjectNameContainsChar;
	/***/ public String corruptObjectNameContainsNullByte;
	/***/ public String corruptObjectNameContainsSlash;
	/***/ public String corruptObjectNameDot;
	/***/ public String corruptObjectNameDotDot;
	/***/ public String corruptObjectNameZeroLength;
	/***/ public String corruptObjectNegativeSize;
	/***/ public String corruptObjectNoAuthor;
	/***/ public String corruptObjectNoCommitter;
	/***/ public String corruptObjectNoHeader;
	/***/ public String corruptObjectNoObjectHeader;
	/***/ public String corruptObjectNoTagHeader;
	/***/ public String corruptObjectNotreeHeader;
	/***/ public String corruptObjectNoTypeHeader;
	/***/ public String corruptObjectPackfileChecksumIncorrect;
	/***/ public String corruptObjectTruncatedInMode;
	/***/ public String corruptObjectTruncatedInName;
	/***/ public String corruptObjectTruncatedInObjectId;
	/***/ public String corruptObjectZeroId;
	/***/ public String corruptPack;
	/***/ public String corruptUseCnt;
	/***/ public String couldNotFindTabInLine;
	/***/ public String couldNotFindSixTabsInLine;
	/***/ public String couldNotGetAdvertisedRef;
	/***/ public String couldNotGetRepoStatistics;
	/***/ public String couldNotLockHEAD;
	/***/ public String couldNotPersistCookies;
	/***/ public String couldNotReadCookieFile;
	/***/ public String couldNotReadIndexInOneGo;
	/***/ public String couldNotReadObjectWhileParsingCommit;
	/***/ public String couldNotRewindToUpstreamCommit;
	/***/ public String couldNotURLEncodeToUTF8;
	/***/ public String countingObjects;
	/***/ public String createBranchFailedUnknownReason;
	/***/ public String createBranchUnexpectedResult;
	/***/ public String createNewFileFailed;
	/***/ public String createRequiresZeroOldId;
	/***/ public String credentialPassword;
	/***/ public String credentialPassphrase;
	/***/ public String credentialUsername;
	/***/ public String daemonAlreadyRunning;
	/***/ public String daysAgo;
	/***/ public String deepenNotWithDeepen;
	/***/ public String deepenSinceWithDeepen;
	/***/ public String deleteBranchUnexpectedResult;
	/***/ public String deleteFileFailed;
	/***/ public String deletedOrphanInPackDir;
	/***/ public String deleteRequiresZeroNewId;
	/***/ public String deleteTagUnexpectedResult;
	/***/ public String deletingNotSupported;
	/***/ public String destinationIsNotAWildcard;
	/***/ public String detachedHeadDetected;
	/***/ public String dirCacheDoesNotHaveABackingFile;
	/***/ public String dirCacheFileIsNotLocked;
	/***/ public String dirCacheIsNotLocked;
	/***/ public String DIRCChecksumMismatch;
	/***/ public String DIRCCorruptLength;
	/***/ public String DIRCCorruptLengthFirst;
	/***/ public String DIRCExtensionIsTooLargeAt;
	/***/ public String DIRCExtensionNotSupportedByThisVersion;
	/***/ public String DIRCHasTooManyEntries;
	/***/ public String DIRCUnrecognizedExtendedFlags;
	/***/ public String downloadCancelled;
	/***/ public String downloadCancelledDuringIndexing;
	/***/ public String duplicateAdvertisementsOf;
	/***/ public String duplicateRef;
	/***/ public String duplicateRefAttribute;
	/***/ public String duplicateRemoteRefUpdateIsIllegal;
	/***/ public String duplicateStagesNotAllowed;
	/***/ public String eitherGitDirOrWorkTreeRequired;
	/***/ public String emptyCommit;
	/***/ public String emptyPathNotPermitted;
	/***/ public String emptyRef;
	/***/ public String encryptionError;
	/***/ public String encryptionOnlyPBE;
	/***/ public String endOfFileInEscape;
	/***/ public String entryNotFoundByPath;
	/***/ public String enumValueNotSupported0;
	/***/ public String enumValueNotSupported2;
	/***/ public String enumValueNotSupported3;
	/***/ public String enumValuesNotAvailable;
	/***/ public String errorInPackedRefs;
	/***/ public String errorInvalidProtocolWantedOldNewRef;
	/***/ public String errorListing;
	/***/ public String errorOccurredDuringUnpackingOnTheRemoteEnd;
	/***/ public String errorReadingInfoRefs;
	/***/ public String exceptionCaughtDuringExecutionOfHook;
	/***/ public String exceptionCaughtDuringExecutionOfAddCommand;
	/***/ public String exceptionCaughtDuringExecutionOfArchiveCommand;
	/***/ public String exceptionCaughtDuringExecutionOfCherryPickCommand;
	/***/ public String exceptionCaughtDuringExecutionOfCommand;
	/***/ public String exceptionCaughtDuringExecutionOfCommitCommand;
	/***/ public String exceptionCaughtDuringExecutionOfFetchCommand;
	/***/ public String exceptionCaughtDuringExecutionOfLsRemoteCommand;
	/***/ public String exceptionCaughtDuringExecutionOfMergeCommand;
	/***/ public String exceptionCaughtDuringExecutionOfPullCommand;
	/***/ public String exceptionCaughtDuringExecutionOfPushCommand;
	/***/ public String exceptionCaughtDuringExecutionOfResetCommand;
	/***/ public String exceptionCaughtDuringExecutionOfRevertCommand;
	/***/ public String exceptionCaughtDuringExecutionOfRmCommand;
	/***/ public String exceptionCaughtDuringExecutionOfTagCommand;
	/***/ public String exceptionHookExecutionInterrupted;
	/***/ public String exceptionOccurredDuringAddingOfOptionToALogCommand;
	/***/ public String exceptionOccurredDuringReadingOfGIT_DIR;
	/***/ public String exceptionWhileFindingUserHome;
	/***/ public String exceptionWhileReadingPack;
	/***/ public String expectedACKNAKFoundEOF;
	/***/ public String expectedACKNAKGot;
	/***/ public String expectedBooleanStringValue;
	/***/ public String expectedCharacterEncodingGuesses;
	/***/ public String expectedDirectoryNotSubmodule;
	/***/ public String expectedEOFReceived;
	/***/ public String expectedGot;
	/***/ public String expectedLessThanGot;
	/***/ public String expectedPktLineWithService;
	/***/ public String expectedReceivedContentType;
	/***/ public String expectedReportForRefNotReceived;
	/***/ public String failedAtomicFileCreation;
	/***/ public String failedCreateLockFile;
	/***/ public String failedReadHttpsProtocols;
	/***/ public String failedToDetermineFilterDefinition;
	/***/ public String failedToConvert;
	/***/ public String failedUpdatingRefs;
	/***/ public String failureDueToOneOfTheFollowing;
	/***/ public String failureUpdatingFETCH_HEAD;
	/***/ public String failureUpdatingTrackingRef;
	/***/ public String fileAlreadyExists;
	/***/ public String fileCannotBeDeleted;
	/***/ public String fileIsTooLarge;
	/***/ public String fileModeNotSetForPath;
	/***/ public String filterExecutionFailed;
	/***/ public String filterExecutionFailedRc;
	/***/ public String filterRequiresCapability;
	/***/ public String findingGarbage;
	/***/ public String flagIsDisposed;
	/***/ public String flagNotFromThis;
	/***/ public String flagsAlreadyCreated;
	/***/ public String funnyRefname;
	/***/ public String gcFailed;
	/***/ public String gcTooManyUnpruned;
	/***/ public String headRequiredToStash;
	/***/ public String hoursAgo;
	/***/ public String httpConfigCannotNormalizeURL;
	/***/ public String httpConfigInvalidURL;
	/***/ public String httpFactoryInUse;
	/***/ public String httpPreAuthTooLate;
	/***/ public String httpUserInfoDecodeError;
	/***/ public String httpWrongConnectionType;
	/***/ public String hugeIndexesAreNotSupportedByJgitYet;
	/***/ public String hunkBelongsToAnotherFile;
	/***/ public String hunkDisconnectedFromFile;
	/***/ public String hunkHeaderDoesNotMatchBodyLineCountOf;
	/***/ public String illegalArgumentNotA;
	/***/ public String illegalCombinationOfArguments;
	/***/ public String illegalHookName;
	/***/ public String illegalPackingPhase;
	/***/ public String incorrectHashFor;
	/***/ public String incorrectOBJECT_ID_LENGTH;
	/***/ public String indexFileCorruptedNegativeBucketCount;
	/***/ public String indexFileIsTooLargeForJgit;
	/***/ public String indexNumbersNotIncreasing;
	/***/ public String indexWriteException;
	/***/ public String initFailedBareRepoDifferentDirs;
	/***/ public String initFailedDirIsNoDirectory;
	/***/ public String initFailedGitDirIsNoDirectory;
	/***/ public String initFailedNonBareRepoSameDirs;
	/***/ public String inMemoryBufferLimitExceeded;
	/***/ public String inputDidntMatchLength;
	/***/ public String inputStreamMustSupportMark;
	/***/ public String integerValueOutOfRange;
	/***/ public String internalRevisionError;
	/***/ public String internalServerError;
	/***/ public String interruptedWriting;
	/***/ public String inTheFuture;
	/***/ public String invalidAdvertisementOf;
	/***/ public String invalidAncestryLength;
	/***/ public String invalidBooleanValue;
	/***/ public String invalidChannel;
	/***/ public String invalidCommitParentNumber;
	/***/ public String invalidDepth;
	/***/ public String invalidEncryption;
	/***/ public String invalidExpandWildcard;
	/***/ public String invalidFilter;
	/***/ public String invalidGitdirRef;
	/***/ public String invalidGitModules;
	/***/ public String invalidGitType;
	/***/ public String invalidHeaderFormat;
	/***/ public String invalidHeaderKey;
	/***/ public String invalidHeaderValue;
	/***/ public String invalidHexString;
	/***/ public String invalidHomeDirectory;
	/***/ public String invalidHooksPath;
	/***/ public String invalidId;
	/***/ public String invalidId0;
	/***/ public String invalidIdLength;
	/***/ public String invalidIgnoreParamSubmodule;
	/***/ public String invalidIgnoreRule;
	/***/ public String invalidIntegerValue;
	/***/ public String invalidKey;
	/***/ public String invalidLineInConfigFile;
	/***/ public String invalidLineInConfigFileWithParam;
	/***/ public String invalidModeFor;
	/***/ public String invalidModeForPath;
	/***/ public String invalidNameContainsDotDot;
	/***/ public String invalidObject;
	/***/ public String invalidOldIdSent;
	/***/ public String invalidPacketLineHeader;
	/***/ public String invalidPath;
	/***/ public String invalidPurgeFactor;
	/***/ public String invalidRedirectLocation;
	/***/ public String invalidRefAdvertisementLine;
	/***/ public String invalidReflogRevision;
	/***/ public String invalidRefName;
	/***/ public String invalidReftableBlock;
	/***/ public String invalidReftableCRC;
	/***/ public String invalidReftableFile;
	/***/ public String invalidRemote;
	/***/ public String invalidShallowObject;
	/***/ public String invalidStageForPath;
	/***/ public String invalidSystemProperty;
	/***/ public String invalidTagOption;
	/***/ public String invalidTimeout;
	/***/ public String invalidTimestamp;
	/***/ public String invalidTimeUnitValue2;
	/***/ public String invalidTimeUnitValue3;
	/***/ public String invalidTreeZeroLengthName;
	/***/ public String invalidURL;
	/***/ public String invalidWildcards;
	/***/ public String invalidRefSpec;
	/***/ public String invalidRepositoryStateNoHead;
	/***/ public String invalidWindowSize;
	/***/ public String isAStaticFlagAndHasNorevWalkInstance;
	/***/ public String JRELacksMD5Implementation;
	/***/ public String kNotInRange;
	/***/ public String largeObjectExceedsByteArray;
	/***/ public String largeObjectExceedsLimit;
	/***/ public String largeObjectException;
	/***/ public String largeObjectOutOfMemory;
	/***/ public String lengthExceedsMaximumArraySize;
	/***/ public String lfsHookConflict;
	/***/ public String listingAlternates;
	/***/ public String listingPacks;
	/***/ public String localObjectsIncomplete;
	/***/ public String localRefIsMissingObjects;
	/***/ public String localRepository;
	/***/ public String lockCountMustBeGreaterOrEqual1;
	/***/ public String lockAlreadyHeld;
	/***/ public String lockError;
	/***/ public String lockFailedRetry;
	/***/ public String lockOnNotClosed;
	/***/ public String lockOnNotHeld;
	/***/ public String lockStreamClosed;
	/***/ public String lockStreamMultiple;
	/***/ public String logInconsistentFiletimeDiff;
	/***/ public String logLargerFiletimeDiff;
	/***/ public String logSmallerFiletime;
	/***/ public String logXDGConfigHomeInvalid;
	/***/ public String maxCountMustBeNonNegative;
	/***/ public String mergeConflictOnNonNoteEntries;
	/***/ public String mergeConflictOnNotes;
	/***/ public String mergeStrategyAlreadyExistsAsDefault;
	/***/ public String mergeStrategyDoesNotSupportHeads;
	/***/ public String mergeUsingStrategyResultedInDescription;
	/***/ public String mergeRecursiveConflictsWhenMergingCommonAncestors;
  475. /***/ public String mergeRecursiveTooManyMergeBasesFor;
  476. /***/ public String messageAndTaggerNotAllowedInUnannotatedTags;
  477. /***/ public String minutesAgo;
  478. /***/ public String mismatchOffset;
  479. /***/ public String mismatchCRC;
  480. /***/ public String missingAccesskey;
  481. /***/ public String missingConfigurationForKey;
  482. /***/ public String missingCookieFile;
  483. /***/ public String missingCRC;
  484. /***/ public String missingDeltaBase;
  485. /***/ public String missingForwardImageInGITBinaryPatch;
  486. /***/ public String missingObject;
  487. /***/ public String missingPrerequisiteCommits;
  488. /***/ public String missingRequiredParameter;
  489. /***/ public String missingSecretkey;
  490. /***/ public String mixedStagesNotAllowed;
  491. /***/ public String mkDirFailed;
  492. /***/ public String mkDirsFailed;
  493. /***/ public String month;
  494. /***/ public String months;
  495. /***/ public String monthsAgo;
  496. /***/ public String multipleMergeBasesFor;
  497. /***/ public String nameMustNotBeNullOrEmpty;
  498. /***/ public String need2Arguments;
  499. /***/ public String newIdMustNotBeNull;
  500. /***/ public String newlineInQuotesNotAllowed;
  501. /***/ public String noApplyInDelete;
  502. /***/ public String noClosingBracket;
  503. /***/ public String noCommitsSelectedForShallow;
  504. /***/ public String noCredentialsProvider;
  505. /***/ public String noHEADExistsAndNoExplicitStartingRevisionWasSpecified;
  506. /***/ public String noHMACsupport;
  507. /***/ public String noMergeBase;
  508. /***/ public String noMergeHeadSpecified;
  509. /***/ public String nonBareLinkFilesNotSupported;
  510. /***/ public String nonCommitToHeads;
  511. /***/ public String noPathAttributesFound;
  512. /***/ public String noSuchRef;
  513. /***/ public String noSuchRefKnown;
  514. /***/ public String noSuchSubmodule;
  515. /***/ public String notABoolean;
  516. /***/ public String notABundle;
  517. /***/ public String notADIRCFile;
  518. /***/ public String notAGitDirectory;
  519. /***/ public String notAPACKFile;
  520. /***/ public String notARef;
  521. /***/ public String notASCIIString;
  522. /***/ public String notAuthorized;
  523. /***/ public String notAValidPack;
  524. /***/ public String notFound;
  525. /***/ public String nothingToFetch;
  526. /***/ public String nothingToPush;
  527. /***/ public String notMergedExceptionMessage;
  528. /***/ public String noXMLParserAvailable;
  529. /***/ public String objectAtHasBadZlibStream;
  530. /***/ public String objectIsCorrupt;
  531. /***/ public String objectIsCorrupt3;
  532. /***/ public String objectIsNotA;
  533. /***/ public String objectNotFound;
  534. /***/ public String objectNotFoundIn;
  535. /***/ public String obtainingCommitsForCherryPick;
  536. /***/ public String oldIdMustNotBeNull;
  537. /***/ public String onlyOneFetchSupported;
  538. /***/ public String onlyOneOperationCallPerConnectionIsSupported;
  539. /***/ public String onlyOpenPgpSupportedForSigning;
  540. /***/ public String openFilesMustBeAtLeast1;
  541. /***/ public String openingConnection;
  542. /***/ public String operationCanceled;
  543. /***/ public String outputHasAlreadyBeenStarted;
  544. /***/ public String overflowedReftableBlock;
  545. /***/ public String packChecksumMismatch;
  546. /***/ public String packCorruptedWhileWritingToFilesystem;
  547. /***/ public String packedRefsHandleIsStale;
  548. /***/ public String packetSizeMustBeAtLeast;
  549. /***/ public String packetSizeMustBeAtMost;
  550. /***/ public String packedRefsCorruptionDetected;
  551. /***/ public String packfileCorruptionDetected;
  552. /***/ public String packFileInvalid;
  553. /***/ public String packfileIsTruncated;
  554. /***/ public String packfileIsTruncatedNoParam;
  555. /***/ public String packHandleIsStale;
  556. /***/ public String packHasUnresolvedDeltas;
  557. /***/ public String packInaccessible;
  558. /***/ public String packingCancelledDuringObjectsWriting;
  559. /***/ public String packObjectCountMismatch;
  560. /***/ public String packRefs;
  561. /***/ public String packSizeNotSetYet;
  562. /***/ public String packTooLargeForIndexVersion1;
  563. /***/ public String packWasDeleted;
  564. /***/ public String packWriterStatistics;
  565. /***/ public String panicCantRenameIndexFile;
  566. /***/ public String patchApplyException;
  567. /***/ public String patchFormatException;
  568. /***/ public String pathNotConfigured;
  569. /***/ public String peeledLineBeforeRef;
  570. /***/ public String peeledRefIsRequired;
  571. /***/ public String peerDidNotSupplyACompleteObjectGraph;
  572. /***/ public String personIdentEmailNonNull;
  573. /***/ public String personIdentNameNonNull;
  574. /***/ public String postCommitHookFailed;
  575. /***/ public String prefixRemote;
  576. /***/ public String problemWithResolvingPushRefSpecsLocally;
  577. /***/ public String progressMonUploading;
  578. /***/ public String propertyIsAlreadyNonNull;
  579. /***/ public String pruneLoosePackedObjects;
  580. /***/ public String pruneLooseUnreferencedObjects;
  581. /***/ public String pullTaskName;
  582. /***/ public String pushCancelled;
  583. /***/ public String pushCertificateInvalidField;
  584. /***/ public String pushCertificateInvalidFieldValue;
  585. /***/ public String pushCertificateInvalidHeader;
  586. /***/ public String pushCertificateInvalidSignature;
  587. /***/ public String pushIsNotSupportedForBundleTransport;
  588. /***/ public String pushNotPermitted;
  589. /***/ public String pushOptionsNotSupported;
  590. /***/ public String rawLogMessageDoesNotParseAsLogEntry;
  591. /***/ public String readConfigFailed;
  592. /***/ public String readFileStoreAttributesFailed;
  593. /***/ public String readerIsRequired;
  594. /***/ public String readingObjectsFromLocalRepositoryFailed;
  595. /***/ public String readLastModifiedFailed;
  596. /***/ public String readPipeIsNotAllowed;
  597. /***/ public String readPipeIsNotAllowedRequiredPermission;
  598. /***/ public String readTimedOut;
  599. /***/ public String receivePackObjectTooLarge1;
  600. /***/ public String receivePackObjectTooLarge2;
  601. /***/ public String receivePackInvalidLimit;
  602. /***/ public String receivePackTooLarge;
  603. /***/ public String receivingObjects;
  604. /***/ public String redirectBlocked;
  605. /***/ public String redirectHttp;
  606. /***/ public String redirectLimitExceeded;
  607. /***/ public String redirectLocationMissing;
  608. /***/ public String redirectsOff;
  609. /***/ public String refAlreadyExists;
  610. /***/ public String refAlreadyExists1;
  611. /***/ public String reflogEntryNotFound;
  612. /***/ public String refNotResolved;
  613. /***/ public String reftableDirExists;
  614. /***/ public String reftableRecordsMustIncrease;
  615. /***/ public String refUpdateReturnCodeWas;
  616. /***/ public String remoteBranchNotFound;
  617. /***/ public String remoteConfigHasNoURIAssociated;
  618. /***/ public String remoteDoesNotHaveSpec;
  619. /***/ public String remoteDoesNotSupportSmartHTTPPush;
  620. /***/ public String remoteHungUpUnexpectedly;
  621. /***/ public String remoteNameCannotBeNull;
  622. /***/ public String renameBranchFailedAmbiguous;
  623. /***/ public String renameBranchFailedNotABranch;
  624. /***/ public String renameBranchFailedUnknownReason;
  625. /***/ public String renameBranchUnexpectedResult;
  626. /***/ public String renameCancelled;
  627. /***/ public String renameFileFailed;
  628. /***/ public String renamesAlreadyFound;
  629. /***/ public String renamesBreakingModifies;
  630. /***/ public String renamesFindingByContent;
  631. /***/ public String renamesFindingExact;
  632. /***/ public String renamesRejoiningModifies;
  633. /***/ public String repositoryAlreadyExists;
  634. /***/ public String repositoryConfigFileInvalid;
  635. /***/ public String repositoryIsRequired;
  636. /***/ public String repositoryNotFound;
  637. /***/ public String repositoryState_applyMailbox;
  638. /***/ public String repositoryState_bare;
  639. /***/ public String repositoryState_bisecting;
  640. /***/ public String repositoryState_conflicts;
  641. /***/ public String repositoryState_merged;
  642. /***/ public String repositoryState_normal;
  643. /***/ public String repositoryState_rebase;
  644. /***/ public String repositoryState_rebaseInteractive;
  645. /***/ public String repositoryState_rebaseOrApplyMailbox;
  646. /***/ public String repositoryState_rebaseWithMerge;
  647. /***/ public String requiredHashFunctionNotAvailable;
  648. /***/ public String resettingHead;
  649. /***/ public String resolvingDeltas;
  650. /***/ public String resultLengthIncorrect;
  651. /***/ public String rewinding;
  652. /***/ public String s3ActionDeletion;
  653. /***/ public String s3ActionReading;
  654. /***/ public String s3ActionWriting;
  655. /***/ public String saveFileStoreAttributesFailed;
  656. /***/ public String searchForReachableBranches;
  657. /***/ public String searchForReuse;
  658. /***/ public String searchForReuseTimeout;
  659. /***/ public String searchForSizes;
  660. /***/ public String secondsAgo;
  661. /***/ public String selectingCommits;
  662. /***/ public String sequenceTooLargeForDiffAlgorithm;
  663. /***/ public String serviceNotEnabledNoName;
  664. /***/ public String serviceNotPermitted;
  665. /***/ public String sha1CollisionDetected;
  666. /***/ public String shallowCommitsAlreadyInitialized;
  667. /***/ public String shallowPacksRequireDepthWalk;
  668. /***/ public String shortCompressedStreamAt;
  669. /***/ public String shortReadOfBlock;
  670. /***/ public String shortReadOfOptionalDIRCExtensionExpectedAnotherBytes;
  671. /***/ public String shortSkipOfBlock;
  672. /***/ public String signatureVerificationError;
  673. /***/ public String signatureVerificationUnavailable;
  674. /***/ public String signedTagMessageNoLf;
  675. /***/ public String signingServiceUnavailable;
  676. /***/ public String similarityScoreMustBeWithinBounds;
  677. /***/ public String skipMustBeNonNegative;
  678. /***/ public String skipNotAccessiblePath;
  679. /***/ public String smartHTTPPushDisabled;
  680. /***/ public String sourceDestinationMustMatch;
  681. /***/ public String sourceIsNotAWildcard;
  682. /***/ public String sourceRefDoesntResolveToAnyObject;
  683. /***/ public String sourceRefNotSpecifiedForRefspec;
  684. /***/ public String squashCommitNotUpdatingHEAD;
  685. /***/ public String sshCommandFailed;
  686. /***/ public String sshCommandTimeout;
  687. /***/ public String sslFailureExceptionMessage;
  688. /***/ public String sslFailureInfo;
  689. /***/ public String sslFailureCause;
  690. /***/ public String sslFailureTrustExplanation;
  691. /***/ public String sslTrustAlways;
  692. /***/ public String sslTrustForRepo;
  693. /***/ public String sslTrustNow;
  694. /***/ public String sslVerifyCannotSave;
  695. /***/ public String staleRevFlagsOn;
  696. /***/ public String startingReadStageWithoutWrittenRequestDataPendingIsNotSupported;
  697. /***/ public String stashApplyConflict;
  698. /***/ public String stashApplyFailed;
  699. /***/ public String stashApplyWithoutHead;
  700. /***/ public String stashApplyOnUnsafeRepository;
  701. /***/ public String stashCommitIncorrectNumberOfParents;
  702. /***/ public String stashDropDeleteRefFailed;
  703. /***/ public String stashDropFailed;
  704. /***/ public String stashDropMissingReflog;
  705. /***/ public String stashDropNotSupported;
  706. /***/ public String stashFailed;
  707. /***/ public String stashResolveFailed;
  708. /***/ public String statelessRPCRequiresOptionToBeEnabled;
  709. /***/ public String storePushCertMultipleRefs;
  710. /***/ public String storePushCertOneRef;
  711. /***/ public String storePushCertReflog;
  712. /***/ public String submoduleExists;
  713. /***/ public String submoduleNameInvalid;
  714. /***/ public String submoduleParentRemoteUrlInvalid;
  715. /***/ public String submodulePathInvalid;
  716. /***/ public String submoduleUrlInvalid;
  717. /***/ public String supportOnlyPackIndexVersion2;
  718. /***/ public String systemConfigFileInvalid;
  719. /***/ public String tagAlreadyExists;
  720. /***/ public String tagNameInvalid;
  721. /***/ public String tagOnRepoWithoutHEADCurrentlyNotSupported;
  722. /***/ public String timeoutMeasureFsTimestampResolution;
  723. /***/ public String transactionAborted;
  724. /***/ public String theFactoryMustNotBeNull;
  725. /***/ public String threadInterruptedWhileRunning;
  726. /***/ public String timeIsUncertain;
  727. /***/ public String timerAlreadyTerminated;
  728. /***/ public String tooManyCommands;
  729. /***/ public String tooManyFilters;
  730. /***/ public String tooManyIncludeRecursions;
  731. /***/ public String topologicalSortRequired;
  732. /***/ public String transportExceptionBadRef;
  733. /***/ public String transportExceptionEmptyRef;
  734. /***/ public String transportExceptionInvalid;
  735. /***/ public String transportExceptionMissingAssumed;
  736. /***/ public String transportExceptionReadRef;
  737. /***/ public String transportNeedsRepository;
  738. /***/ public String transportProtoBundleFile;
  739. /***/ public String transportProtoFTP;
  740. /***/ public String transportProtoGitAnon;
  741. /***/ public String transportProtoHTTP;
  742. /***/ public String transportProtoLocal;
  743. /***/ public String transportProtoSFTP;
  744. /***/ public String transportProtoSSH;
  745. /***/ public String transportProtoTest;
  746. /***/ public String transportProvidedRefWithNoObjectId;
  747. /***/ public String treeEntryAlreadyExists;
  748. /***/ public String treeFilterMarkerTooManyFilters;
  749. /***/ public String treeWalkMustHaveExactlyTwoTrees;
  750. /***/ public String truncatedHunkLinesMissingForAncestor;
  751. /***/ public String truncatedHunkNewLinesMissing;
  752. /***/ public String truncatedHunkOldLinesMissing;
  753. /***/ public String tSizeMustBeGreaterOrEqual1;
  754. /***/ public String unableToCheckConnectivity;
  755. /***/ public String unableToCreateNewObject;
  756. /***/ public String unableToReadPackfile;
  757. /***/ public String unableToRemovePath;
  758. /***/ public String unableToWrite;
  759. /***/ public String unableToSignCommitNoSecretKey;
  760. /***/ public String unauthorized;
  761. /***/ public String unencodeableFile;
  762. /***/ public String unexpectedCompareResult;
  763. /***/ public String unexpectedEndOfConfigFile;
  764. /***/ public String unexpectedEndOfInput;
  765. /***/ public String unexpectedEofInPack;
  766. /***/ public String unexpectedHunkTrailer;
  767. /***/ public String unexpectedOddResult;
  768. /***/ public String unexpectedPacketLine;
  769. /***/ public String unexpectedRefReport;
  770. /***/ public String unexpectedReportLine;
  771. /***/ public String unexpectedReportLine2;
  772. /***/ public String unexpectedSubmoduleStatus;
  773. /***/ public String unknownOrUnsupportedCommand;
  774. /***/ public String unknownDIRCVersion;
  775. /***/ public String unknownHost;
  776. /***/ public String unknownObject;
  777. /***/ public String unknownObjectInIndex;
  778. /***/ public String unknownObjectType;
  779. /***/ public String unknownObjectType2;
  780. /***/ public String unknownRefStorageFormat;
  781. /***/ public String unknownRepositoryFormat;
  782. /***/ public String unknownRepositoryFormat2;
  783. /***/ public String unknownTransportCommand;
  784. /***/ public String unknownZlibError;
  785. /***/ public String unlockLockFileFailed;
  786. /***/ public String unmergedPath;
  787. /***/ public String unmergedPaths;
  788. /***/ public String unpackException;
  789. /***/ public String unreadablePackIndex;
  790. /***/ public String unrecognizedPackExtension;
  791. /***/ public String unrecognizedRef;
  792. /***/ public String unsetMark;
  793. /***/ public String unsupportedAlternates;
  794. /***/ public String unsupportedArchiveFormat;
  795. /***/ public String unsupportedCommand0;
  796. /***/ public String unsupportedEncryptionAlgorithm;
  797. /***/ public String unsupportedEncryptionVersion;
  798. /***/ public String unsupportedGC;
  799. /***/ public String unsupportedMark;
  800. /***/ public String unsupportedOperationNotAddAtEnd;
  801. /***/ public String unsupportedPackIndexVersion;
  802. /***/ public String unsupportedPackVersion;
  803. /***/ public String unsupportedReftableVersion;
  804. /***/ public String unsupportedRepositoryDescription;
  805. /***/ public String updateRequiresOldIdAndNewId;
  806. /***/ public String updatingHeadFailed;
  807. /***/ public String updatingReferences;
  808. /***/ public String updatingRefFailed;
  809. /***/ public String upstreamBranchName;
  810. /***/ public String uriNotConfigured;
  811. /***/ public String uriNotFound;
  812. /***/ public String uriNotFoundWithMessage;
  813. /***/ public String URINotSupported;
  814. /***/ public String userConfigInvalid;
  815. /***/ public String validatingGitModules;
  816. /***/ public String verifySignatureBad;
  817. /***/ public String verifySignatureExpired;
  818. /***/ public String verifySignatureGood;
  819. /***/ public String verifySignatureIssuer;
  820. /***/ public String verifySignatureKey;
  821. /***/ public String verifySignatureMade;
  822. /***/ public String verifySignatureTrust;
  823. /***/ public String walkFailure;
  824. /***/ public String wantNoSpaceWithCapabilities;
  825. /***/ public String wantNotValid;
  826. /***/ public String weeksAgo;
  827. /***/ public String windowSizeMustBeLesserThanLimit;
  828. /***/ public String windowSizeMustBePowerOf2;
  829. /***/ public String writerAlreadyInitialized;
  830. /***/ public String writeTimedOut;
  831. /***/ public String writingNotPermitted;
  832. /***/ public String writingNotSupported;
  833. /***/ public String writingObjects;
  834. /***/ public String wrongDecompressedLength;
  835. /***/ public String wrongRepositoryState;
  836. /***/ public String year;
  837. /***/ public String years;
  838. /***/ public String years0MonthsAgo;
  839. /***/ public String yearsAgo;
  840. /***/ public String yearsMonthsAgo;
  841. }