

Support creating pack bitmap indexes in PackWriter. Update the PackWriter to support writing out pack bitmap indexes, a parallel ".bitmap" file to the ".pack" file. Bitmaps are selected at commits every 1 to 5,000 commits for each unique path from the start. The most recent 100 commits are all bitmapped. The next 19,000 commits have a bitmap every 100 commits. The remaining commits have a bitmap every 5,000 commits. Commits with more than one parent are preferred over ones with one or fewer. Furthermore, previously computed bitmaps are reused if the previous entry had the reuse flag set, which is set when the bitmap was placed at the maximum allowed distance. Bitmaps are used to speed up the counting phase when packing, for requests that are not shallow. The PackWriterBitmapWalker uses a RevFilter to proactively mark commits with RevFlag.SEEN when they appear in a bitmap. The walker produces the full closure of reachable ObjectIds, given the collection of starting ObjectIds. For fetch requests, two ObjectWalks are executed to compute the ObjectIds reachable from the haves and from the wants. The ObjectIds that need to be written are determined by taking all the resulting wants AND NOT the haves. For clone requests, we get cached pack support for "free", since it is possible to determine whether all of the ObjectIds in a pack file are included in the resulting list of ObjectIds to write. On my machine, the best times for clones and fetches of the linux kernel repository (with about 2.6M objects and 300K commits) are tabulated below:

    Operation                    Index V2               Index VE003
    Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
    Fetch (1 commit back)           75ms                  107ms
    Fetch (10 commits back)        456ms (269.51 KiB)     341ms (265.19 KiB)
    Fetch (100 commits back)       449ms (269.91 KiB)     337ms (267.28 KiB)
    Fetch (1000 commits back)     2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
    Fetch (10000 commits back)    2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
    Fetch (100000 commits back)  14340ms (185.83 MiB)    1655ms (189.39 MiB)

Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
11 years ago
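The spacing rule in the bitmap-selection commit above is easy to express directly. Below is a minimal illustrative sketch in Java; the method name and the exact thresholds come from the description in the commit message, not from PackWriter's actual internals:

    // Bitmap spacing by distance from the branch tip, per the rule above:
    // the newest 100 commits are all bitmapped, the next 19,000 commits
    // get a bitmap every 100 commits, and older history every 5,000.
    static int bitmapSpacing(int commitsFromTip) {
        if (commitsFromTip < 100)
            return 1;     // every commit
        if (commitsFromTip < 20000)
            return 100;   // every 100 commits
        return 5000;      // every 5,000 commits
    }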
blame: Compute the origin of lines in a result file. BlameGenerator digs through history and discovers the origin of each line of some result file. BlameResult consumes the stream of regions created by the generator and lays them out in a table for applications to display alongside source lines. Applications may optionally push in the working tree copy of a file using the push(String, byte[]) method, allowing the application to receive accurate line annotations for the working tree version. Lines that are uncommitted (the difference between HEAD and the working tree) will show up with the description given by the application as the author, or "Not Committed Yet" as a default string. Applications may also run the BlameGenerator in reverse mode using the reverse(AnyObjectId, AnyObjectId) method instead of push(). When running in reverse mode the generator annotates lines by the commit they are removed in, rather than the commit they were added in. This allows a user to discover where a line disappeared from when they are looking at an older revision in the repository. For example:

    blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java
             (                                          1080) }
    2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
    2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082)   /**
    2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083)    * Kick the timestamp of a local file.

Above we learn that line 1080 (a closing curly brace of the prior method) still exists in branch master, but the Javadoc comment below it has been removed by Christian Halstrick on May 20th as part of commit 2302a6d3. This result differs considerably from that of C Git's blame --reverse feature. JGit tells the reader which commit performed the delete, while C Git tells the reader the last commit that still contained the line, leaving it as an exercise to the reader to discover the descendant that performed the removal. This is still only a basic implementation. Quite notably, it is missing support for the smart block copy/move detection that the C implementation of `git blame` is well known for. Despite being incremental, the BlameGenerator can only be run once. After the generator runs, it cannot be reused. A better implementation would support applications browsing through history efficiently. In regards to CQ 5110, only a little of the original code survives. CQ: 5110 Bug: 306161 Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f Signed-off-by: Kevin Sawicki <kevin@github.com> Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
13 years ago
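Driving the generator described above takes only a few lines. A minimal forward-blame sketch, hedged: the names follow the commit message and JGit's public blame package, BlameResult.create() may return null if the path is absent, and resource cleanup is omitted because the cleanup method (release() vs. close()) has changed across JGit versions:

    import org.eclipse.jgit.blame.BlameGenerator;
    import org.eclipse.jgit.blame.BlameResult;
    import org.eclipse.jgit.lib.Repository;

    static void printBlame(Repository repo, String path) throws Exception {
        BlameGenerator gen = new BlameGenerator(repo, path);
        gen.push(null, repo.resolve("HEAD"));   // start the walk at HEAD
        BlameResult blame = BlameResult.create(gen);
        blame.computeAll();                     // the generator runs only once
        for (int i = 0; i < blame.getResultContents().size(); i++)
            System.out.println((i + 1) + ": "
                    + blame.getSourceAuthor(i).getName());
    }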
PackWriter: Support reuse of entire packs. The most expensive part of packing a repository for transport to another system is enumerating all of the objects in the repository. Once this gets to the size of the linux-2.6 repository (1.8 million objects), enumeration can take several CPU minutes and costs a lot of temporary working set memory. Teach PackWriter to efficiently reuse an existing "cached pack" by answering a clone request with a thin pack followed by a larger cached pack appended to the end. This requires the repository owner to first construct the cached pack by hand, and record the tip commits inside of $GIT_DIR/objects/info/cached-packs:

    cd $GIT_DIR
    root=$(git rev-parse master)
    tmp=objects/.tmp-$$
    names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
    for n in $names; do
        chmod a-w $tmp-$n.pack $tmp-$n.idx
        touch objects/pack/pack-$n.keep
        mv $tmp-$n.pack objects/pack/pack-$n.pack
        mv $tmp-$n.idx objects/pack/pack-$n.idx
    done
    (echo "+ $root"; for n in $names; do echo "P $n"; done; echo) >>objects/info/cached-packs
    git repack -a -d

When a clone request needs to include $root, the corresponding cached pack will be copied as-is, rather than enumerating all of the objects that are reachable from $root. For a linux-2.6 kernel repository that should be about 376 MiB, the above process creates two packs of 368 MiB and 38 MiB[1]. This is a local disk usage increase of ~26 MiB, due to reduced delta compression between the large cached pack and the smaller recent activity pack. The overhead is similar to 1 full copy of the compressed project sources. With this cached pack in hand, JGit daemon completes a clone request in 1m17s less time, but a slightly larger data transfer (+2.39 MiB):

Before:

    remote: Counting objects: 1861830, done
    remote: Finding sources: 100% (1861830/1861830)
    remote: Getting sizes: 100% (88243/88243)
    remote: Compressing objects: 100% (88184/88184)
    Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
    remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
    Resolving deltas: 100% (1564621/1564621), done.

    real    3m19.005s

After:

    remote: Counting objects: 1601, done
    remote: Counting objects: 1828460, done
    remote: Finding sources: 100% (50475/50475)
    remote: Getting sizes: 100% (18843/18843)
    remote: Compressing objects: 100% (7585/7585)
    remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
    Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
    Resolving deltas: 100% (1559477/1559477), done.

    real    2m2.938s

Repository owners can periodically refresh their cached packs by repacking their repository, folding all newer objects into a larger cached pack. Since repacking is already considered to be a normal Git maintenance activity, this isn't a very big burden. [1] In this test $root was set back about two weeks. Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
13 years ago
Persist filesystem timestamp resolution and allow manual configuration. To enable persisting filesystem timestamp resolution per FileStore, add a new config section to the user global git configuration:

- Config section is "filesystem"
- Config subsection is the concatenation of
  - Java vendor (system property "java.vm.vendor")
  - runtime version (system property "java.vm.version")
  - the FileStore's name
  separated by '|', e.g. "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1". The prefix is needed since some Java versions do not expose the full timestamp resolution of the underlying filesystem. This may also depend on the underlying operating system, hence concrete key values may not be portable.
- Config key for timestamp resolution is "timestampResolution" as a time value; supported time units are those supported by DefaultTypedConfigGetter#getTimeUnit

If the timestamp resolution is already configured for a given FileStore, the configured value is used instead of measuring the resolution. When the timestamp resolution was measured, it is persisted in the user global git configuration. Example:

    [filesystem "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1"]
        timestampResolution = 1 seconds

If locking the git config file fails, retry saving the resolution up to 5 times in order to work around races with another thread. In order to avoid stack overflow, use the fallback filesystem timestamp resolution when loading FileBasedConfig, which itself creates a FileSnapshot to help check whether the config changed. Note:

- on some OSes Java 8 and 9 truncate to milliseconds or seconds, see https://bugs.openjdk.java.net/browse/JDK-8177809, fixed in Java 10
- UnixFileAttributes up to Java 12 truncates timestamp resolution to microseconds when converting the internal representation to the FileTime exposed in the API, see https://bugs.openjdk.java.net/browse/JDK-8181493
- WindowsFileAttributes also provides only microsecond resolution up to Java 12

Hence do not attempt to manually configure a higher timestamp resolution than supported by the Java version being used at runtime. Bug: 546891 Bug: 548188 Change-Id: Iff91b8f9e6e5e2295e1463f87c8e95edf4abbcf8 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
4 years ago
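Reading the persisted value back out follows JGit's usual typed-getter pattern. A minimal sketch, assuming the section/subsection/key layout described above; the method name and the fallback value here are invented for illustration, while Config#getTimeUnit is the typed accessor the commit message points at:

    import java.util.concurrent.TimeUnit;
    import org.eclipse.jgit.lib.Config;

    // storeKey is the "<vendor>|<version>|<filestore>" subsection name,
    // e.g. "AdoptOpenJDK|1.8.0_212-b03|/dev/disk1s1".
    static long timestampResolutionMicros(Config userConfig, String storeKey) {
        return userConfig.getTimeUnit("filesystem", storeKey,
                "timestampResolution",
                2_000_000L /* illustrative fallback: 2 seconds */,
                TimeUnit.MICROSECONDS);
    }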
Fix atomic lock file creation on NFS. FS_POSIX.createNewFile(File) failed to properly implement atomic file creation on NFS using the algorithm [1]:

- the name of the hard link must be unique to prevent two processes using different NFS clients from trying to create the same link. Otherwise nlink would be useless for detecting whether there was a race.
- the hard link must be retained for the lifetime of the file, since we don't know when the state of the involved NFS clients will be synchronized. This depends on NFS configuration options.

To fix these issues we need to change the signature of createNewFile, which would break API. Hence deprecate the old method FS.createNewFile(File) and add a new method createNewFileAtomic(File). The new method returns a LockToken which needs to be retained by the caller (LockFile) until all involved NFS clients have synchronized their state. Since we don't know when the NFS caches are synchronized, we need to retain the token until the corresponding file is no longer needed. The LockToken must be closed after the LockFile using it has been committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile = false, this will delete the hard link which guarded the atomic creation of the file. When acquiring the lock fails, ensure that the hard link is removed. [1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html; also see file creation flag O_EXCL in http://man7.org/linux/man-pages/man2/open.2.html Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
5 years ago
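The cited algorithm [1] boils down to a create-then-link dance followed by an nlink check. A rough sketch of that algorithm, not JGit's actual FS_POSIX code; note that per the commit above the unique link must then be retained rather than deleted immediately, and "unix:nlink" is only available on POSIX platforms:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Create 'lock', then hard-link a unique name to it. The link forces
    // a round trip to the NFS server; nlink != 2 means another client raced us.
    static boolean createAtomically(Path lock, Path uniqueLink) throws IOException {
        Files.createFile(lock);              // may be served from the client cache
        Files.createLink(uniqueLink, lock);  // synchronizes with the server
        int nlink = (Integer) Files.getAttribute(lock, "unix:nlink");
        return nlink == 2; // exactly the file plus our link: no race
    }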
Config: Rewrite subsection and value escaping and parsing. Previously, Config was using the same method for both escaping and parsing subsection names and config values. The goal was presumably code savings, but unfortunately, these two pieces of the git config format are simply different. In git v2.15.1, Documentation/config.txt says the following about subsection names:

"Subsection names are case sensitive and can contain any characters except newline (doublequote `"` and backslash can be included by escaping them as `\"` and `\\`, respectively). Section headers cannot span multiple lines. Variables may belong directly to a section or to a given subsection."

And, later in the same documentation section, about values:

"A line that defines a value can be continued to the next line by ending it with a `\`; the backquote and the end-of-line are stripped. Leading whitespaces after 'name =', the remainder of the line after the first comment character '#' or ';', and trailing whitespaces of the line are discarded unless they are enclosed in double quotes. Internal whitespaces within the value are retained verbatim. Inside double quotes, double quote `"` and backslash `\` characters must be escaped: use `\"` for `"` and `\\` for `\`. The following escape sequences (beside `\"` and `\\`) are recognized: `\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB) and `\b` for backspace (BS). Other char escape sequences (including octal escape sequences) are invalid."

The main important differences are that subsection names have a limited set of supported escape sequences, and do not support newlines at all, either escaped or unescaped. Arguably, it would be easy to support escaped newlines, but C git simply does not:

    $ git config -f foo.config $'foo.bar\nbaz.quux' value
    error: invalid key (newline): foo.bar
    baz.quux

I468106ac was an attempt to fix one bug in escapeValue, around leading whitespace, without having to rewrite the whole escaping/parsing code. Unfortunately, because escapeValue was used for escaping subsection names as well, this made it possible to write invalid config files, any time Config#toText is called with a subsection name with trailing whitespace, like {foo }. Rather than pile hacks on top of hacks, fix it for real by largely rewriting the escaping and parsing code. In addition to fixing escape sequences, fix (and write tests for) a few more issues in the old implementation:

* Now that we can properly parse it, always emit newlines as "\n" from escapeValue, rather than the weird (but still supported) syntax with a non-quoted trailing literal "\n\" before the newline. In addition to producing more readable output and matching the behavior of C git, this makes the escaping code much simpler.
* Disallow '\0' entirely within both subsection names and values, since due to Unix command line argument conventions it is impossible to pass such values to "git config".
* Properly preserve intra-value whitespace when parsing, rather than collapsing it all to a single space.

Change-Id: I304f626b9d0ad1592c4e4e449a11b136c0f8b3e3
6 years ago
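The subsection rules quoted above fit in a few lines. An illustrative sketch, not Config's actual implementation: only `"` and `\` get escaped, and newline and NUL are rejected outright, matching what the commit says C git rejects:

    static String escapeSubsection(String name) {
        StringBuilder sb = new StringBuilder("\"");
        for (char c : name.toCharArray()) {
            if (c == '\n' || c == '\0')
                throw new IllegalArgumentException(
                        "newline or NUL in subsection name");
            if (c == '"' || c == '\\')
                sb.append('\\'); // the only two supported escapes
            sb.append(c);
        }
        return sb.append('"').toString();
    }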
Retry stale file handles on .git/config file. On a local non-NFS filesystem the .git/config file will be orphaned if it is replaced by a new process while the current process is reading the old file. The current process successfully continues to read the orphaned file until it closes the file handle. Since NFS servers do not keep track of open files, instead of orphaning the old .git/config file, such a replacement on an NFS filesystem will instead cause the old file to be garbage collected (deleted). A stale file handle exception will be raised on NFS clients if the file is garbage collected (deleted) on the server while it is being read. Since we no longer have access to the old file in these cases, the previous code would just fail. However, in these cases, reopening the file and rereading it will succeed (since it will open the new replacement file). Since retrying the read is a viable strategy to deal with stale file handles on the .git/config file, implement such a strategy. Since it is possible that the .git/config file could be replaced again while rereading it, loop on stale file handle exceptions, up to 5 extra times, trying to read the .git/config file again, until we either read the new file or find that the file no longer exists. The limit of 5 is arbitrary, and provides a safe upper bound to prevent infinite loops consuming resources in a potential unforeseen persistent error condition. Change-Id: I6901157b9dfdbd3013360ebe3eb40af147a8c626 Signed-off-by: Nasser Grainawi <nasser@codeaurora.org> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
6 years ago
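The bounded retry loop described above looks roughly like this. A minimal sketch; the message-based stale-handle check is a stand-in for JGit's actual detection, and a NoSuchFileException (file no longer exists) simply propagates out of the loop:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    static byte[] readWithRetry(Path config) throws IOException {
        for (int attempt = 0; ; attempt++) {
            try {
                return Files.readAllBytes(config); // reopens the replacement file
            } catch (IOException e) {
                boolean stale = e.getMessage() != null
                        && e.getMessage().contains("Stale file handle");
                if (!stale || attempt >= 5)
                    throw e; // bounded: at most 5 extra attempts
            }
        }
    }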
Added read/write support for pack bitmap index. A pack bitmap index is an additional index of compressed bitmaps of the object graph. Furthermore, a logical API for the index functionality is included, as it is expected to be used by the PackWriter. Compressed bitmaps are created using the javaewah library, a word-aligned compressed variant of the Java bitset class based on run-length encoding. The library only works with positive integer values. Thus, the maximum number of ObjectIds in a pack file that this index can currently support is limited to Integer.MAX_VALUE. Every ObjectId is given an integer mapping. The integer is the position of the ObjectId in the complete ObjectId list, sorted by offset, for the pack file. That integer is what the bitmaps use to reference the ObjectId. Currently, the new index format can only be used with pack files that contain a complete closure of the object graph, e.g. the result of a garbage collection. The index file includes four bitmaps for the Git object types, i.e. commits, trees, blobs, and tags. In addition, a collection of bitmaps keyed by ObjectId is also included. The bitmap for each entry in the collection represents the full closure of ObjectIds reachable from the keyed ObjectId (including the keyed ObjectId itself). The bitmaps are further compressed by XORing the current bitmaps against prior bitmaps in the index, and selecting the smallest representation. The XOR'd bitmap, together with the offset from the current entry to the position of the bitmap to XOR against, is the actual representation of the entry in the index file. Each entry contains one byte, which is currently used to note whether the bitmap should be blindly reused. Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
11 years ago
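The XOR-and-pick-smaller step described above maps directly onto the javaewah API. A sketch under those assumptions; the real index considers several prior candidates, while this simplifies to a two-way choice:

    import com.googlecode.javaewah.EWAHCompressedBitmap;

    // Store whichever representation is smaller: the bitmap itself, or
    // its XOR against a prior bitmap in the index. Similar bitmaps
    // differ in few positions, so the XOR usually run-length compresses
    // far better than the raw bitmap.
    static EWAHCompressedBitmap smallerRepresentation(
            EWAHCompressedBitmap current, EWAHCompressedBitmap prior) {
        EWAHCompressedBitmap xored = current.xor(prior);
        return xored.sizeInBytes() < current.sizeInBytes() ? xored : current;
    }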
Support http.<url>.* configs. Git has a rather elaborate mechanism to specify HTTP configuration options per URL, based on pattern matching the URL against "http" subsection names.[1] The URLs used for this matching are always the original URLs; redirected URLs do not participate.

* Scheme and host must match exactly case-insensitively.
* An optional user name must match exactly.
* Ports must match exactly after default ports have been filled in.
* The path of a subsection, if any, must match a segment prefix of the path of the URL.
* Matches with user name take precedence over equal-length path matches without, but longer path matches are preferred over shorter matches with user name.

Implement this for JGit. Factor out the HttpConfig from TransportHttp and implement the matching and override mechanism. The set of supported settings is still the same; JGit currently supports only followRedirects, postBuffer, and sslVerify, plus the JGit-specific maxRedirects key. Add tests for path normalization and prefix matching only on segment separators, and use the new mechanism in SmartClientSmartServerSslTest to disable sslVerify selectively for only the test server URLs. Compare also bug 374703 and bug 465492. With this commit it would be possible to set sslVerify to false for only the git server using a self-signed certificate instead of having to switch it off globally via http.sslVerify. [1] https://git-scm.com/docs/git-config Change-Id: I42a3c2399cb937cd7884116a2a32fcaa7a418fcb Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
6 years ago
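The precedence rules above can be condensed into a score function. An illustrative sketch, not JGit's HttpConfig; default-port normalization is omitted for brevity, and the scoring scheme (path length doubled, user name as tie-breaker) is one way to encode "longer path beats user name, user name beats equal-length path":

    import java.net.URI;

    static int matchScore(URI url, URI section) {
        if (!url.getScheme().equalsIgnoreCase(section.getScheme())
                || !url.getHost().equalsIgnoreCase(section.getHost()))
            return -1; // scheme and host must match exactly
        if (section.getUserInfo() != null
                && !section.getUserInfo().equals(url.getUserInfo()))
            return -1; // an optional user name must match exactly
        String p = section.getPath() == null ? "" : section.getPath();
        String u = url.getPath() == null ? "" : url.getPath();
        boolean segmentPrefix = u.equals(p) || (u.startsWith(p)
                && (p.isEmpty() || p.endsWith("/") || u.charAt(p.length()) == '/'));
        if (!segmentPrefix)
            return -1; // path must match on segment boundaries only
        return 2 * p.length() + (section.getUserInfo() != null ? 1 : 0);
    }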
Add support to follow HTTP redirects. git-core follows HTTP redirects, so JGit should also provide this. Implement config setting http.followRedirects with possible values "false" (= never), "true" (= always), and "initial" (only on GET, but not on POST).[1] We must do our own redirect handling and cannot rely on the support that the underlying real connection may offer. At least the JDK's HttpURLConnection has two features that get in the way:

* it does not allow cross-protocol redirects and thus fails on http->https redirects (for instance, on GitHub).
* it translates a redirect after a POST to a GET unless the system property "http.strictPostRedirect" is set to true. We don't want to manipulate that system setting nor require it.

Additionally, git has its own rules about what redirects it accepts;[2] for instance, it does not allow a redirect that adds query arguments. We handle response codes 301, 302, 303, and 307 as per RFC 2616.[3] On POST we do not handle 303, and we follow redirects only if http.followRedirects == true. Redirects are followed only a certain number of times. There are two ways to control that limit:

* by default, the limit is given by the http.maxRedirects system property that is also used by the JDK. If the system property is not set, the default is 5. (This is much lower than the JDK default of 20, but I don't see the value of following so many redirects.)
* this can be overwritten by a http.maxRedirects git config setting.

The JGit http.* git config settings are currently all global; JGit has no support yet for URI-specific settings "http.<pattern>.name". Adding support for that is well beyond the scope of this change. Like git-core, we log every redirect attempt (LOG.info) so that users may know about the redirection having occurred. Extends the test framework to configure an AppServer with HTTPS support so that we can test cloning via HTTPS and redirections involving HTTPS. [1] https://git-scm.com/docs/git-config [2] https://kernel.googlesource.com/pub/scm/git/git/+/6628eb41db5189c0cdfdced6d8697e7c813c5f0f [3] https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html CQ: 13987 Bug: 465167 Change-Id: I86518cb76842f7d326b51f8715e3bbf8ada89859 Signed-off-by: Matthias Sohn <matthias.sohn@sap.com> Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
9 years ago
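The manual redirect handling described above is essentially a bounded loop around HttpURLConnection with instance redirect-following switched off. A minimal sketch; real JGit additionally applies git's acceptance rules (e.g. rejecting redirects that add query arguments) and the GET/POST distinctions listed above:

    import java.net.HttpURLConnection;
    import java.net.URL;

    static HttpURLConnection openFollowingRedirects(URL url, int maxRedirects)
            throws Exception {
        for (int redirects = 0; ; redirects++) {
            HttpURLConnection c = (HttpURLConnection) url.openConnection();
            c.setInstanceFollowRedirects(false); // we do our own handling
            int status = c.getResponseCode();
            if (status != 301 && status != 302 && status != 303 && status != 307)
                return c;
            if (redirects >= maxRedirects)
                throw new Exception("too many redirects: " + url);
            String location = c.getHeaderField("Location");
            if (location == null)
                return c; // malformed redirect; let the caller fail
            url = new URL(url, location); // may cross http -> https
        }
    }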
Merging Git notes. Merging Git notes branches has several differences from merging "normal" branches. Although Git notes are initially stored as one flat tree, the tree may fan out when the number of notes becomes too large for efficient access. In this case the first two hex digits of the note name will be used as a subdirectory name and the remaining 38 hex digits as the file name under that directory. Similarly, when the number of notes decreases, a fanout tree may collapse back into a flat tree. The Git notes merge algorithm must take into account possibly different tree structures in different note branches and must properly match them against each other. Any conflict on a Git note is, by default, resolved by concatenating the two conflicting versions of the note. A delete-edit conflict is, by default, resolved by keeping the edit version. The note merge logic is pluggable, and the caller may provide a custom note merger that will perform a different merging strategy. Additionally, it is possible to have non-note entries inside a notes tree. The merge algorithm must also take this fact into account and will try to merge such non-note entries. However, in case of any merge conflicts the merge operation will fail. The Git notes merge algorithm currently does not try to do a content merge of non-note entries. Thanks to Shawn Pearce for patiently answering my questions related to this topic, giving hints and providing code snippets. Change-Id: I3b2335c76c766fd7ea25752e54087f9b19d69c88 Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
13 years ago
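The fanout naming scheme described above is a simple string split. A tiny sketch of the mapping (the method name is invented for illustration):

    // A note for object "abcd1234..." lives at "abcd1234..." in a flat
    // tree, and at "ab/cd1234..." once the tree has fanned out.
    static String fanoutPath(String noteName /* 40 hex chars */) {
        return noteName.substring(0, 2) + "/" + noteName.substring(2);
    }

The merge algorithm has to treat these two layouts as equivalent when matching note entries across branches.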
Handle stale file handles on packed-refs file. On a local non-NFS filesystem the packed-refs file will be orphaned if it is replaced by another client while the current client is reading the old one. However, since NFS servers do not keep track of open files, instead of orphaning the old packed-refs file, such a replacement will cause the old file to be garbage collected instead. A stale file handle exception will be raised on NFS clients if the file is garbage collected (deleted) on the server while it is being read. Since we no longer have access to the old file in these cases, the previous code would just fail. However, in these cases, reopening the file and rereading it will succeed (since it will reopen the new replacement file). Since retrying the read is a viable strategy to deal with stale file handles on the packed-refs file, implement such a strategy. Since it is possible that the packed-refs file could be replaced again while rereading it (multiple consecutive updates can easily occur with ref deletions), loop on stale file handle exceptions, up to 5 extra times, trying to read the packed-refs file again, until we either read the new file or find that the file no longer exists. The limit of 5 is arbitrary, and provides a safe upper bound to prevent infinite loops consuming resources in a potential unforeseen persistent error condition. Change-Id: I085c472bafa6e2f32f610a33ddc8368bb4ab1814 Signed-off-by: Martin Fick <mfick@codeaurora.org> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
8 years ago
Rewrite push certificate parsing.

- Consistently return structured data, such as actual ReceiveCommands, which is more useful for callers that are doing things other than verifying the signature, e.g. recording the set of commands.
- Store the certificate version field, as this is required to be part of the signed payload.
- Add a toText() method to recreate the actual payload for signature verification. This requires keeping track of the un-chomped command strings from the original protocol stream.
- Separate the parser from the certificate itself, so the actual PushCertificate object can be immutable. Make a fair attempt at deep immutability, but this is not possible with the current mutable ReceiveCommand structure.
- Use more detailed error messages that don't involve NON-NLS strings.
- Document null return values more thoroughly. Instead of having the undocumented behavior of throwing NPE from certain methods if they are not first guarded by enabled(), eliminate enabled() and return null from those methods.
- Add tests for parsing a push cert from a section of pkt-line stream using a real live stream captured with Wireshark (which, it should be noted, uncovered several simply incorrect statements in C git's Documentation/technical/pack-protocol.txt).

This is a slightly breaking API change to classes that were technically public and technically released in 4.0. However, it is highly unlikely that people were actually depending on public behavior, since there were no public methods to create PushCertificates with anything other than null field values, or a PushCertificateParser that did anything other than infinite loop or throw exceptions when reading. Change-Id: I5382193347a8eb1811032d9b32af9651871372d0
9 years ago
maxObjectSizeLimit for receive-pack. ReceivePack (and PackParser) can be configured with a maxObjectSizeLimit in order to prevent users from pushing too large objects to Git. The limit check is applied to all object types, although it is most likely that a BLOB will exceed the limit. In all cases the size of the object header is excluded from the object size which is checked against the limit, as this is the size a BLOB object would take in the working tree when checked out as a file. When an object exceeds the maxObjectSizeLimit the receive-pack will abort immediately. Delta objects (both offset and ref delta) are also checked against the limit. However, for delta objects we will first check the size of the inflated delta block against the maxObjectSizeLimit and abort immediately if it exceeds the limit. In this case we do not even know the exact size of the resolved delta object, but we assume it will be larger than the given maxObjectSizeLimit, as a delta is generally only chosen if the delta can copy more data from the base object than the delta needs to insert or needs to represent the copy ranges. Aborting early, in this case, avoids unnecessary inflating of the (huge) delta block. Unfortunately, it is too expensive (especially for a large delta) to compute the SHA-1 of an object that causes the receive-pack to abort. This would decrease the value of this feature, whose main purpose is to protect server resources from users pushing huge objects. Therefore we don't report the SHA-1 in the error message. Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
12 years ago
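The early-abort rule above amounts to a single comparison once the header has been stripped. A minimal sketch (method and message invented for illustration; the actual check lives in PackParser per the commit message, and deliberately omits the SHA-1 for the reason given above):

    import java.io.IOException;

    // inflatedSize excludes the object header, i.e. it is the size the
    // blob would occupy in the working tree when checked out as a file.
    static void checkObjectSize(long inflatedSize, long maxObjectSizeLimit)
            throws IOException {
        if (maxObjectSizeLimit > 0 && inflatedSize > maxObjectSizeLimit)
            throw new IOException("Object too large (" + inflatedSize
                    + " > " + maxObjectSizeLimit
                    + "); SHA-1 intentionally not computed");
    }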
Implement similarity based rename detection. Content similarity based rename detection is performed only after a linear time detection is performed using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename, which reduces the space that must be considered. During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical. Efficiently comparing two files is performed by building up two hash indexes and hashing lines or short blocks from each file, counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very light-weight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself, and the index table, but not per-cell. This offers a massive space savings over using java.util.HashMap. The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common into the length of the larger of the two input files. Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents. If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized. Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N time constraint comes from the sort of the index table that is necessary to perform linear time matching against another SimilarityIndex created for a different file. To actually perform the rename detection, an SxD matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows. An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames. The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash codes. 
We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files. Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git. The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix. Bug: 318504 Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
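The two-ints-in-a-long packing and the score formula described above are compact enough to sketch exactly. Illustrative helpers (names invented; the layout mirrors the description, with the line/block key in the high 32 bits and the byte count in the low 32):

    static long makeEntry(int keyHash, int byteCount) {
        return ((long) keyHash << 32) | (byteCount & 0xFFFFFFFFL);
    }

    static int keyOf(long entry)   { return (int) (entry >>> 32); }
    static int countOf(long entry) { return (int) entry; }

    // Score: approximate bytes in common divided into the length of the
    // larger of the two input files, here scaled to a percentage.
    static int score(long commonBytes, long sizeA, long sizeB) {
        return (int) (commonBytes * 100 / Math.max(sizeA, sizeB));
    }

Keeping each record in a bare long[] avoids per-entry object headers entirely, which is the space win over java.util.HashMap the commit describes.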
Increase core.streamFileThreshold default to 50 MiB. Projects like org.eclipse.mdt contain large XML files about 6 MiB in size. So does the Android project platform/frameworks/base. Doing a clone of either project with JGit takes forever to check out the files into the working directory, because delta decompression tends to be very expensive as we need to constantly reposition the base stream for each copy instruction. This can be made worse by a very bad ordering of offsets, possibly due to an XML editor that doesn't preserve the order of elements in the file very well. Increasing the threshold to the same limit PackWriter uses when doing delta compression (50 MiB) permits a default configured JGit to decompress these XML file objects using the faster random-access arrays, rather than re-seeking through an inflate stream, significantly reducing checkout time after a clone. Since this new limit may be dangerously close to the JVM maximum heap size, every allocation attempt is now wrapped in a try/catch so that JGit can degrade by switching to the large object stream mode when the allocation is refused. It will run slower, but the operation will still complete. The large stream mode will run very well for big objects that aren't delta compressed, and is acceptable for delta compressed objects that are using only forward referencing copy instructions. Copies using prior offsets are still going to be horrible, and there is nothing we can do about it except increase core.streamFileThreshold. We might in the future want to consider changing the way the delta generators work in JGit and native C Git to avoid prior offsets once an object reaches a certain size, even if that causes the delta instruction stream to be slightly larger. Unfortunately native C Git won't want to do that until it's also able to stream objects rather than malloc them as contiguous blocks. Change-Id: Ief7a3896afce15073e80d3691bed90c6a3897307 Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
13 years ago
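The degrade-gracefully allocation the commit above describes is a simple try/catch around the array allocation. A minimal sketch (names invented; JGit's real code sits inside its object loaders):

    // Try the fast whole-object array; return null to signal that the
    // caller should fall back to large-object stream mode instead.
    static byte[] tryAllocate(long size) {
        if (size > Integer.MAX_VALUE)
            return null; // cannot fit in one Java array; must stream
        try {
            return new byte[(int) size]; // fast random-access inflation
        } catch (OutOfMemoryError notEnoughHeap) {
            return null; // JVM refused; degrade to streaming
        }
    }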
Do authentication retries on HTTP POST. There is at least one git server out there (GOGS) that does not require authentication on the initial GET for info/refs?service=git-receive-pack but that _does_ require authentication for the subsequent POST to actually do the push. This occurs on GOGS with public repositories; for private repositories it wants authentication up front. Handle this behavior by adding 401 handling to our POST request. Note that this is suboptimal; we'll re-send the push data at least twice if an authentication failure on POST occurs. It would be much better if the server required authentication up front in the GET request. Added authentication unit tests (using BASIC auth) to the SmartClientSmartServerTest:

- clone with authentication
- clone with authentication but lacking CredentialsProvider
- clone with authentication and wrong password
- clone with authentication after redirect
- clone with authentication only on POST, but not on GET

Also tested manually in the wild using repositories at try.gogs.io. That server offers only BASIC auth, so the other paths (DIGEST, NEGOTIATE, fall back from DIGEST to BASIC) are untested and I have no way to test them.

* public repository: GET unauthenticated, POST authenticated. Also tested after clearing the credentials and then entering a wrong password: correctly asks three times during the HTTP POST for user name and password, then gives up.
* private repository: authentication already on GET; then gets applied correctly initially to the POST request, which succeeds.

Also fix the authentication to use the credentials for the redirected URI if redirects had occurred. We must not present the credentials for the original URI in that case. Consider a malicious redirect A->B: this would allow server B to harvest the user credentials for server A. The unit test for authentication after a redirect also tests for this. Bug: 513043 Change-Id: I97ee5058569efa1545a6c6f6edfd2b357c40592a Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch> Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
7 years ago
SHA-1: collision detection support. Update SHA1 class to include a Java port of sha1dc[1]'s ubc_check, which can detect the attack pattern used by the SHAttered[2] authors. Given the shattered example files that have the same SHA-1, this modified implementation can identify there is risk of collision given only one file in the pair:

    $ jgit ...
    [main] WARN org.eclipse.jgit.util.sha1.SHA1 - SHA-1 collision 38762cf7f55934b34d179ae6a4c80cadccbb7f0a

When JGit detects probability of a collision the SHA1 class now warns on the logger, reporting the object's SHA-1 hash, and then throws a Sha1CollisionException to the caller. From the paper[3] by Marc Stevens, the probability of a false positive identification of a collision is about 14 * 2^(-160), sufficiently low for any detected collision to likely be a real collision. git-core[4] may adopt sha1dc before the system migrates to an entirely new hash function. This commit enables JGit to remain compatible with that move to sha1dc, and helps protect users by warning if similar attacks as SHAttered are identified. Performance declined about 8% (detection off), now:

    MessageDigest  238.41 MiB/s
    MessageDigest  244.52 MiB/s
    MessageDigest  244.06 MiB/s
    MessageDigest  242.58 MiB/s

    SHA1           216.77 MiB/s (was ~240.83 MiB/s)
    SHA1           220.98 MiB/s
    SHA1           221.76 MiB/s
    SHA1           221.34 MiB/s

This decline in throughput is attributed to the step loop unrolling in compress(), which was necessary to easily fit the UbcCheck logic into the hash function. Using helper functions s1-s4 reduces the code explosion, providing acceptable throughput. With detection enabled (default):

    SHA1 detectCollision  180.12 MiB/s
    SHA1 detectCollision  181.59 MiB/s
    SHA1 detectCollision  181.64 MiB/s
    SHA1 detectCollision  182.24 MiB/s

    sha1dc (native C)    ~206.28 MiB/s
    sha1dc (native C)    ~204.47 MiB/s
    sha1dc (native C)    ~203.74 MiB/s

Average time across 100,000 calls to hash 4100 bytes (such as a commit or tree) for the various algorithms available to JGit also shows SHA1 is slower than MessageDigest, but by an acceptable margin:

    MessageDigest         17 usec
    SHA1                  18 usec
    SHA1 detectCollision  22 usec

Time to index-pack for git.git (217982 objects, 69 MiB) has increased:

    MessageDigest   SHA1 w/ detectCollision
    -------------   -----------------------
        20.12s          25.25s
        19.87s          25.48s
        20.04s          25.26s
    avg 20.01s          25.33s   +26%

Being implemented in Java with these additional safety checks is clearly a penalty, but throughput is still acceptable given the increased security against object name collisions. [1] https://github.com/cr-marcstevens/sha1collisiondetection [2] https://shattered.it/ [3] https://marc-stevens.nl/research/papers/C13-S.pdf [4] https://public-inbox.org/git/20170223230621.43anex65ndoqbgnf@sigill.intra.peff.net/ Change-Id: I9fe4c6d8fc5e5a661af72cd3246c9e67b1b9fee6
7 years ago
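For callers, the hardened class behaves like a drop-in digest that can throw. A minimal usage sketch, hedged: the class and exception names come from the commit message above, while the factory and accessor details follow JGit's public SHA1 API and may differ by version:

    import org.eclipse.jgit.util.sha1.SHA1;
    import org.eclipse.jgit.util.sha1.Sha1CollisionException;

    static byte[] hashWithDetection(byte[] data) {
        SHA1 h = SHA1.newInstance(); // collision detection is on by default
        h.update(data);
        try {
            return h.digest();
        } catch (Sha1CollisionException bad) {
            // Warned on the logger already; the input matched a
            // SHAttered-style attack pattern.
            throw new IllegalStateException("possible SHA-1 collision", bad);
        }
    }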
Shallow fetch: Respect "shallow" lines. When fetching from a shallow clone, the client sends "have" lines to tell the server about objects it already has and "shallow" lines to tell where its local history terminates. In some circumstances, the server fails to honor the shallow lines and fails to return objects that the client needs. UploadPack passes the "have" lines to PackWriter so PackWriter can omit them from the generated pack. UploadPack processes "shallow" lines by calling RevWalk.assumeShallow() with the set of shallow commits. RevWalk creates and caches RevCommits for these shallow commits, clearing out their parents. That way, walks correctly terminate at the shallow commits instead of assuming the client has history going back behind them. UploadPack converts its RevWalk to an ObjectWalk, maintaining the cached RevCommits, and passes it to PackWriter. Unfortunately, to support shallow fetches the PackWriter does the following:

    if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk))
        walk = new DepthWalk.ObjectWalk(reader, depth);

That is, when the client sends a "deepen" line (fetch --depth=<n>) and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter throws away the RevWalk that was passed in and makes a new one. The cleared parent lists prepared by RevWalk.assumeShallow() are lost. Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk. It tries to create it by calling toObjectWalkWithSameObjects() on a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk does not override the standard RevWalk#toObjectWalkWithSameObjects implementation, the result is a plain ObjectWalk instead of an instance of DepthWalk.ObjectWalk. The result is that the "shallow" information is thrown away and objects reachable from the shallow commits can be omitted from the pack sent when fetching with --depth from a shallow clone. Multiple factors collude to limit the circumstances under which this bug can be observed:

1. Commits with depth != 0 don't enter DepthGenerator's pending queue. That means a "have" cannot have any effect on DepthGenerator unless it is also a "want".
2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the uninteresting flag is not propagated to ancestors there even if a "have" is also a "want".
3. JGit treats a depth of 1 as "1 past the wants".

Because of (2), the only place the UNINTERESTING flag can leak to a shallow commit's parents is in the carryFlags() call from markUninteresting(). carryFlags() only traverses commits that have already been parsed: commits yet to be parsed are supposed to inherit correct flags from their parent in PendingGenerator#next (which doesn't happen here --- that is (2)). So the list of commits that have already been parsed becomes relevant. When we hit the markUninteresting() call, all "want"s, "have"s, and commits to be unshallowed have been parsed. carryFlags() only affects the parsed commits. If the "want" is a direct parent of a "have", then carryFlags() marks it as uninteresting. If the "have" was also a "shallow", then its parent pointer should have been null and the "want" shouldn't have been marked, so we see the bug. If the "want" is a more distant ancestor then (2) keeps the uninteresting state from propagating to the "want" and we don't see the bug. If the "shallow" is not also a "have" then the shallow commit isn't parsed, so (2) keeps the uninteresting state from propagating to the "want", so we don't see the bug. 
Here is a reproduction case (time flowing left to right, arrows pointing to parents). "C" must be a commit that the client reports as a "have" during negotiation. That can only happen if the server reports it as an existing branch or tag in the first round of negotiation:

    A <-- B <-- C <-- D

First do

    git clone --depth 1 <repo>

which yields D as a "have" and C as a "shallow" commit. Then try

    git fetch --depth 1 <repo> B:refs/heads/B

Negotiation sets up: have D, shallow C, have C, want B. But due to this bug B is marked as uninteresting and is not sent. Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440 Signed-off-by: Terry Parker <tparker@google.com>
7 years ago
Implement similarity based rename detection

Content similarity based rename detection is performed only after a linear time detection is performed using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename detection, which reduces the space that must be considered.

During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical.

Efficiently comparing two files is performed by building up two hash indexes and hashing lines or short blocks from each file, counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very light-weight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself, and the index table, but not per-cell. This offers a massive space savings over using java.util.HashMap.

The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common by the length of the larger of the two input files.

Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents. If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized.

Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N time constraint comes from the sort of the index table that is necessary to perform linear time matching against another SimilarityIndex created for a different file.

To actually perform the rename detection, an S x D matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows.

An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames.

The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash codes. We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files.

Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git. The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix.

Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
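A worked sketch of the score calculation described above (hypothetical helper, not the actual JGit API; the real SimilarityIndex derives the common byte count by matching hashed lines/blocks between the two indexes):

    // Score two files on a 0-100 scale:
    // (approximate bytes in common) / (size of the larger file).
    static int similarityScore(long commonBytes, long srcSize, long dstSize) {
        long larger = Math.max(srcSize, dstSize);
        if (larger == 0)
            return 100; // two empty files are trivially identical
        return (int) ((commonBytes * 100) / larger);
    }

A source/destination pair is then kept as a rename candidate only if this score reaches the rename detector's configured minimum score.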
Handle SSL handshake failures in TransportHttp

When a HTTPS connection could not be established because the SSL handshake was unsuccessful, TransportHttp would unconditionally throw a TransportException. Other HTTPS clients, such as web browsers and some SVN clients, handle this more gracefully. If there's a problem with the server certificate, they inform the user and give them the possibility to connect to the server all the same. In git, this corresponds to dynamically setting http.sslVerify to false for the server.

Implement this using the CredentialsProvider to inform and ask the user. We offer three choices:

1. skip SSL verification for the current git operation, or
2. skip SSL verification for the server always from now on for requests originating from the current repository, or
3. always skip SSL verification for the server from now on.

For (1), we just suppress SSL verification for the current instance of TransportHttp. For (2), we store a http.<uri>.sslVerify = false setting for the original URI in the repo config. For (3), we store the http.<uri>.sslVerify setting in the git user config.

Adapt the SmartClientSmartServerSslTest such that it uses this mechanism instead of setting http.sslVerify up front.

Improve SimpleHttpServer so that it can also be set up with HTTPS support, in anticipation of an EGit SWTbot UI test verifying that cloning via HTTPS from a server whose certificate doesn't validate pops up the correct dialog, and that cloning subsequently proceeds successfully if the user decides to skip SSL verification.

Bug: 374703
Change-Id: Ie1abada9a3d389ad4d8d52c2d5265d2764e3fb0e
Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>
6 years ago
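A minimal sketch (an assumed helper, not the exact code of this change) of how choice (2) can be persisted with JGit's config API; the uri string is illustrative:

    import java.io.IOException;
    import org.eclipse.jgit.lib.Repository;
    import org.eclipse.jgit.lib.StoredConfig;

    // Store http.<uri>.sslVerify = false in the repository's own config,
    // so only requests originating from this repository skip verification.
    static void skipSslVerifyFor(Repository repo, String uri) throws IOException {
        StoredConfig config = repo.getConfig();
        config.setBoolean("http", uri, "sslVerify", false);
        config.save();
    }

For choice (3), the same setting would be written to the git user config instead of the repository config.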
Fix atomic lock file creation on NFS

FS_POSIX.createNewFile(File) failed to properly implement atomic file creation on NFS using the algorithm [1]:

- the name of the hard link must be unique, to prevent two processes using different NFS clients from trying to create the same link. Otherwise nlink would be useless to detect if there was a race.
- the hard link must be retained for the lifetime of the file, since we don't know when the state of the involved NFS clients will be synchronized. This depends on NFS configuration options.

To fix these issues we need to change the signature of createNewFile, which would break the API. Hence deprecate the old method FS.createNewFile(File) and add a new method createNewFileAtomic(File). The new method returns a LockToken which needs to be retained by the caller (LockFile) until all involved NFS clients have synchronized their state. Since we don't know when the NFS caches are synchronized, we need to retain the token until the corresponding file is no longer needed.

The LockToken must be closed after the LockFile using it has been committed or unlocked. On POSIX, if core.supportsAtomicCreateNewFile = false, this will delete the hard link which guarded the atomic creation of the file. When acquiring the lock fails, ensure that the hard link is removed.

[1] https://www.time-travellers.org/shane/papers/NFS_considered_harmful.html
also see file creation flag O_EXCL in http://man7.org/linux/man-pages/man2/open.2.html

Change-Id: I84fcb16143a5f877e9b08c6ee0ff8fa4ea68a90d
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
5 years ago
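A minimal sketch of the calling pattern described above (the surrounding LockFile plumbing is elided; the gitDir variable and the isCreated() accessor reporting whether the file was newly created are assumptions):

    import java.io.File;
    import org.eclipse.jgit.util.FS;

    // Retain the token for the lifetime of the lock; closing it removes
    // the hard link that guarded the atomic creation on NFS.
    File lockFile = new File(gitDir, "index.lock"); // illustrative path
    try (FS.LockToken token = FS.DETECTED.createNewFileAtomic(lockFile)) {
        if (token.isCreated()) {
            // ... write the new content, then commit or unlock the LockFile ...
        }
    }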
#
# Messages with format elements ({0}) are processed using java.text.MessageFormat.
#
abbreviationLengthMustBeNonNegative=Abbreviation length must not be negative.
abortingRebase=Aborting rebase: resetting to {0}
abortingRebaseFailed=Could not abort rebase
abortingRebaseFailedNoOrigHead=Could not abort rebase since ORIG_HEAD is null
advertisementCameBefore=advertisement of {0}^'{}' came before {1}
advertisementOfCameBefore=advertisement of {0}^'{}' came before {1}
amazonS3ActionFailed={0} of ''{1}'' failed: {2} {3}
amazonS3ActionFailedGivingUp={0} of ''{1}'' failed: Giving up after {2} attempts.
ambiguousObjectAbbreviation=Object abbreviation {0} is ambiguous
aNewObjectIdIsRequired=A NewObjectId is required.
anExceptionOccurredWhileTryingToAddTheIdOfHEAD=An exception occurred while trying to add the Id of HEAD
anSSHSessionHasBeenAlreadyCreated=An SSH session has been already created
applyingCommit=Applying {0}
archiveFormatAlreadyAbsent=Archive format already absent: {0}
archiveFormatAlreadyRegistered=Archive format already registered with different implementation: {0}
argumentIsNotAValidCommentString=Invalid comment: {0}
assumeAtomicCreateNewFile=Reading option "core.supportsAtomicFileCreation" failed, fallback to default assuming atomic file creation is supported
atLeastOnePathIsRequired=At least one path is required.
atLeastOnePatternIsRequired=At least one pattern is required.
atLeastTwoFiltersNeeded=At least two filters needed.
atomicPushNotSupported=Atomic push not supported.
atomicRefUpdatesNotSupported=Atomic ref updates not supported
atomicSymRefNotSupported=Atomic symref not supported
authenticationNotSupported=authentication not supported
badBase64InputCharacterAt=Bad Base64 input character at {0} : {1} (decimal)
badEntryDelimiter=Bad entry delimiter
badEntryName=Bad entry name: {0}
badEscape=Bad escape: {0}
badGroupHeader=Bad group header
badObjectType=Bad object type: {0}
badRef=Bad ref: {0}: {1}
badSectionEntry=Bad section entry: {0}
badShallowLine=Bad shallow line: {0}
bareRepositoryNoWorkdirAndIndex=Bare Repository has neither a working tree, nor an index
baseLengthIncorrect=base length incorrect
bitmapMissingObject=Bitmap at {0} is missing {1}.
bitmapsMustBePrepared=Bitmaps must be prepared before they may be written.
blameNotCommittedYet=Not Committed Yet
blockLimitNotMultipleOfBlockSize=blockLimit {0} must be a multiple of blockSize {1}
blockLimitNotPositive=blockLimit must be positive: {0}
blockSizeNotPowerOf2=blockSize must be a power of 2
bothRefTargetsMustNotBeNull=both old and new ref targets must not be null.
branchNameInvalid=Branch name {0} is not allowed
buildingBitmaps=Building bitmaps
cachedPacksPreventsIndexCreation=Using cached packs prevents index creation
cachedPacksPreventsListingObjects=Using cached packs prevents listing objects
cannotAccessLastModifiedForSafeDeletion=Unable to access lastModifiedTime of file {0}, skip deletion since we cannot safely avoid race condition
cannotBeCombined=Cannot be combined.
cannotBeRecursiveWhenTreesAreIncluded=TreeWalk shouldn't be recursive when tree objects are included.
cannotChangeActionOnComment=Cannot change action on comment line in git-rebase-todo file, old action: {0}, new action: {1}.
cannotCheckoutFromUnbornBranch=Cannot checkout from unborn branch
cannotCheckoutOursSwitchBranch=Checking out ours/theirs is only possible when checking out index, not when switching branches.
cannotCombineSquashWithNoff=Cannot combine --squash with --no-ff.
cannotCombineTreeFilterWithRevFilter=Cannot combine TreeFilter {0} with RevFilter {1}.
cannotCommitOnARepoWithState=Cannot commit on a repo with state: {0}
cannotCommitWriteTo=Cannot commit write to {0}
cannotConnectPipes=cannot connect pipes
cannotConvertScriptToText=Cannot convert script to text
cannotCreateConfig=cannot create config
cannotCreateDirectory=Cannot create directory {0}
cannotCreateHEAD=cannot create HEAD
cannotCreateIndexfile=Cannot create an index file with name {0}
cannotCreateTempDir=Cannot create a temp dir
cannotDeleteCheckedOutBranch=Branch {0} is checked out and cannot be deleted
cannotDeleteFile=Cannot delete file: {0}
cannotDeleteObjectsPath=Cannot delete {0}/{1}: {2}
cannotDetermineProxyFor=Cannot determine proxy for {0}
cannotDownload=Cannot download {0}
cannotEnterObjectsPath=Cannot enter {0}/objects: {1}
cannotEnterPathFromParent=Cannot enter {0} from {1}: {2}
cannotExecute=cannot execute: {0}
cannotGet=Cannot get {0}
cannotGetObjectsPath=Cannot get {0}/{1}: {2}
cannotListObjectsPath=Cannot ls {0}/{1}: {2}
cannotListPackPath=Cannot ls {0}/pack: {1}
cannotListRefs=cannot list refs
cannotLock=Cannot lock {0}. Ensure that no other process has an open file handle on the lock file {0}.lock, then you may delete the lock file and retry.
cannotLockPackIn=Cannot lock pack in {0}
cannotMatchOnEmptyString=Cannot match on empty string.
cannotMkdirObjectPath=Cannot create directory {0}/{1}: {2}
cannotMoveIndexTo=Cannot move index to {0}
cannotMovePackTo=Cannot move pack to {0}
cannotOpenService=cannot open {0}
cannotParseDate=The date specification "{0}" could not be parsed with the following formats: {1}
cannotParseGitURIish=Cannot parse Git URI-ish
cannotPullOnARepoWithState=Cannot pull into a repository with state: {0}
cannotRead=Cannot read {0}
cannotReadBackDelta=Cannot read delta type {0}
cannotReadBlob=Cannot read blob {0}
cannotReadCommit=Cannot read commit {0}
cannotReadFile=Cannot read file {0}
cannotReadHEAD=cannot read HEAD: {0} {1}
cannotReadIndex=The index file {0} exists but cannot be read
cannotReadObject=Cannot read object
cannotReadObjectsPath=Cannot read {0}/{1}: {2}
cannotReadTree=Cannot read tree {0}
cannotRebaseWithoutCurrentHead=Can not rebase without a current HEAD
cannotSaveConfig=Cannot save config file ''{0}''
cannotSquashFixupWithoutPreviousCommit=Cannot {0} without previous commit.
cannotStoreObjects=cannot store objects
cannotResolveUniquelyAbbrevObjectId=Could not resolve uniquely the abbreviated object ID
cannotUpdateUnbornBranch=Cannot update unborn branch
cannotWriteObjectsPath=Cannot write {0}/{1}: {2}
canOnlyCherryPickCommitsWithOneParent=Cannot cherry-pick commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported.
canOnlyRevertCommitsWithOneParent=Cannot revert commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported
commitDoesNotHaveGivenParent=The commit ''{0}'' does not have a parent number {1}.
cantFindObjectInReversePackIndexForTheSpecifiedOffset=Can''t find object in (reverse) pack index for the specified offset {0}
channelMustBeInRange1_255=channel {0} must be in range [1, 255]
characterClassIsNotSupported=The character class {0} is not supported.
checkingOutFiles=Checking out files
checkoutConflictWithFile=Checkout conflict with file: {0}
checkoutConflictWithFiles=Checkout conflict with files: {0}
checkoutUnexpectedResult=Checkout returned unexpected result {0}
classCastNotA=Not a {0}
cloneNonEmptyDirectory=Destination path "{0}" already exists and is not an empty directory
closed=closed
closeLockTokenFailed=Closing LockToken ''{0}'' failed
collisionOn=Collision on {0}
commandClosedStderrButDidntExit=Command {0} closed stderr stream but didn''t exit within timeout {1} seconds
commandRejectedByHook=Rejected by "{0}" hook.\n{1}
commandWasCalledInTheWrongState=Command {0} was called in the wrong state
commitMessageNotSpecified=commit message not specified
commitOnRepoWithoutHEADCurrentlyNotSupported=Commit on repo without HEAD currently not supported
commitAmendOnInitialNotPossible=Amending is not possible on initial commit.
compressingObjects=Compressing objects
configSubsectionContainsNewline=config subsection name contains newline
configSubsectionContainsNullByte=config subsection name contains byte 0x00
configValueContainsNullByte=config value contains byte 0x00
configHandleIsStale=config file handle is stale, {0}. retry
connectionFailed=connection failed
connectionTimeOut=Connection time out: {0}
contextMustBeNonNegative=context must be >= 0
corruptionDetectedReReadingAt=Corruption detected re-reading at {0}
corruptObjectBadDate=bad date
corruptObjectBadEmail=bad email
corruptObjectBadStream=bad stream
corruptObjectBadTimezone=bad time zone
corruptObjectDuplicateEntryNames=duplicate entry names
corruptObjectGarbageAfterSize=garbage after size
corruptObjectIncorrectLength=incorrect length
corruptObjectIncorrectSorting=incorrectly sorted
corruptObjectInvalidModeChar=invalid mode character
corruptObjectInvalidModeStartsZero=mode starts with '0'
corruptObjectInvalidMode2=invalid mode {0,number,#}
corruptObjectInvalidMode3=invalid mode {0} for {1} ''{2}'' in {3}.
corruptObjectInvalidName=invalid name '%s'
corruptObjectInvalidNameAux=invalid name 'AUX'
corruptObjectInvalidNameCon=invalid name 'CON'
corruptObjectInvalidNameCom=invalid name 'COM%c'
corruptObjectInvalidNameEnd=invalid name ends with '%c'
corruptObjectInvalidNameIgnorableUnicode=invalid name '%s' contains ignorable Unicode characters
corruptObjectInvalidNameInvalidUtf8=invalid name contains byte sequence ''{0}'' which is not a valid UTF-8 character
corruptObjectInvalidNameLpt=invalid name 'LPT%c'
corruptObjectInvalidNameNul=invalid name 'NUL'
corruptObjectInvalidNamePrn=invalid name 'PRN'
corruptObjectInvalidObject=invalid object
corruptObjectInvalidParent=invalid parent
corruptObjectInvalidTree=invalid tree
corruptObjectInvalidType=invalid type
corruptObjectInvalidType2=invalid type {0}
corruptObjectMissingEmail=missing email
corruptObjectNameContainsByte=name contains byte 0x%x
corruptObjectNameContainsChar=name contains '%c'
corruptObjectNameContainsNullByte=name contains byte 0x00
corruptObjectNameContainsSlash=name contains '/'
corruptObjectNameDot=invalid name '.'
corruptObjectNameDotDot=invalid name '..'
corruptObjectNameZeroLength=zero length name
corruptObjectNegativeSize=negative size
corruptObjectNoAuthor=no author
corruptObjectNoCommitter=no committer
corruptObjectNoHeader=no header
corruptObjectNoObjectHeader=no object header
corruptObjectNoTagHeader=no tag header
corruptObjectNotreeHeader=no tree header
corruptObjectNoTypeHeader=no type header
corruptObjectPackfileChecksumIncorrect=Packfile checksum incorrect.
corruptObjectTruncatedInMode=truncated in mode
corruptObjectTruncatedInName=truncated in name
corruptObjectTruncatedInObjectId=truncated in object id
corruptObjectZeroId=entry points to null SHA-1
corruptUseCnt=close() called when useCnt is already zero for {0}
couldNotGetAdvertisedRef=Remote {0} did not advertise Ref for branch {1}. This Ref may not exist in the remote or may be hidden by permission settings.
couldNotGetRepoStatistics=Could not get repository statistics
couldNotLockHEAD=Could not lock HEAD
couldNotReadObjectWhileParsingCommit=Could not read an object while parsing commit {0}
couldNotRewindToUpstreamCommit=Could not rewind to upstream commit
couldNotURLEncodeToUTF8=Could not URL encode to UTF-8
countingObjects=Counting objects
corruptPack=Pack file {0} is corrupt, removing it from pack list
createBranchFailedUnknownReason=Create branch failed for unknown reason
createBranchUnexpectedResult=Create branch returned unexpected result {0}
createNewFileFailed=Could not create new file {0}
createRequiresZeroOldId=Create requires old ID to be zero
credentialPassword=Password
credentialUsername=Username
daemonAlreadyRunning=Daemon already running
daysAgo={0} days ago
deepenNotWithDeepen=Cannot combine deepen with deepen-not
deepenSinceWithDeepen=Cannot combine deepen with deepen-since
deleteBranchUnexpectedResult=Delete branch returned unexpected result {0}
deleteFileFailed=Could not delete file {0}
deleteRequiresZeroNewId=Delete requires new ID to be zero
deleteTagUnexpectedResult=Delete tag returned unexpected result {0}
deletingNotSupported=Deleting {0} not supported.
destinationIsNotAWildcard=Destination is not a wildcard.
detachedHeadDetected=HEAD is detached
dirCacheDoesNotHaveABackingFile=DirCache does not have a backing file
dirCacheFileIsNotLocked=DirCache {0} not locked
dirCacheIsNotLocked=DirCache is not locked
DIRCChecksumMismatch=DIRC checksum mismatch
DIRCExtensionIsTooLargeAt=DIRC extension {0} is too large at {1} bytes.
DIRCExtensionNotSupportedByThisVersion=DIRC extension {0} not supported by this version.
DIRCHasTooManyEntries=DIRC has too many entries.
DIRCUnrecognizedExtendedFlags=Unrecognized extended flags: {0}
downloadCancelled=Download cancelled
downloadCancelledDuringIndexing=Download cancelled during indexing
duplicateAdvertisementsOf=duplicate advertisements of {0}
duplicateRef=Duplicate ref: {0}
duplicateRemoteRefUpdateIsIllegal=Duplicate remote ref update is illegal. Affected remote name: {0}
duplicateStagesNotAllowed=Duplicate stages not allowed
eitherGitDirOrWorkTreeRequired=One of setGitDir or setWorkTree must be called.
emptyCommit=No changes
emptyPathNotPermitted=Empty path not permitted.
emptyRef=Empty ref: {0}
encryptionError=Encryption error: {0}
encryptionOnlyPBE=Encryption error: only password-based encryption (PBE) algorithms are supported.
endOfFileInEscape=End of file in escape
entryNotFoundByPath=Entry not found by path: {0}
enumValueNotSupported0=Invalid value: {0}
enumValueNotSupported2=Invalid value: {0}.{1}={2}
enumValueNotSupported3=Invalid value: {0}.{1}.{2}={3}
enumValuesNotAvailable=Enumerated values of type {0} not available
errorInPackedRefs=error in packed-refs
errorInvalidProtocolWantedOldNewRef=error: invalid protocol: wanted 'old new ref'
errorListing=Error listing {0}
errorOccurredDuringUnpackingOnTheRemoteEnd=error occurred during unpacking on the remote end: {0}
errorReadingInfoRefs=error reading info/refs
exceptionCaughtDuringExecutionOfHook=Exception caught during execution of "{0}" hook.
exceptionCaughtDuringExecutionOfAddCommand=Exception caught during execution of add command
exceptionCaughtDuringExecutionOfArchiveCommand=Exception caught during execution of archive command
exceptionCaughtDuringExecutionOfCherryPickCommand=Exception caught during execution of cherry-pick command. {0}
exceptionCaughtDuringExecutionOfCommand=Exception caught during execution of command ''{0}'' in ''{1}'', return code ''{2}'', error message ''{3}''
exceptionCaughtDuringExecutionOfCommitCommand=Exception caught during execution of commit command
exceptionCaughtDuringExecutionOfFetchCommand=Exception caught during execution of fetch command
exceptionCaughtDuringExecutionOfLsRemoteCommand=Exception caught during execution of ls-remote command
exceptionCaughtDuringExecutionOfMergeCommand=Exception caught during execution of merge command. {0}
exceptionCaughtDuringExecutionOfPullCommand=Exception caught during execution of pull command
exceptionCaughtDuringExecutionOfPushCommand=Exception caught during execution of push command
exceptionCaughtDuringExecutionOfResetCommand=Exception caught during execution of reset command. {0}
exceptionCaughtDuringExecutionOfRevertCommand=Exception caught during execution of revert command. {0}
exceptionCaughtDuringExecutionOfRmCommand=Exception caught during execution of rm command
exceptionCaughtDuringExecutionOfTagCommand=Exception caught during execution of tag command
exceptionHookExecutionInterrupted=Execution of "{0}" hook interrupted.
exceptionOccurredDuringAddingOfOptionToALogCommand=Exception occurred during adding of {0} as option to a Log command
exceptionOccurredDuringReadingOfGIT_DIR=Exception occurred during reading of $GIT_DIR/{0}. {1}
exceptionWhileReadingPack=Exception caught while accessing pack file {0}, the pack file might be corrupt. Caught {1} consecutive errors while trying to read this pack.
expectedACKNAKFoundEOF=Expected ACK/NAK, found EOF
expectedACKNAKGot=Expected ACK/NAK, got: {0}
expectedBooleanStringValue=Expected boolean string value
expectedCharacterEncodingGuesses=Expected {0} character encoding guesses
expectedDirectoryNotSubmodule=Expected submodule ''{0}'' to be a directory
expectedEOFReceived=expected EOF; received ''{0}'' instead
expectedGot=expected ''{0}'', got ''{1}''
expectedLessThanGot=expected less than ''{0}'', got ''{1}''
expectedPktLineWithService=expected pkt-line with ''# service=-'', got ''{0}''
expectedReceivedContentType=expected Content-Type {0}; received Content-Type {1}
expectedReportForRefNotReceived={0}: expected report for ref {1} not received
failedAtomicFileCreation=Atomic file creation failed, number of hard links to file {0} was not 2 but {1}
failedCreateLockFile=Creating lock file {0} failed
failedToDetermineFilterDefinition=An exception occurred while determining filter definitions
failedUpdatingRefs=failed updating refs
failureDueToOneOfTheFollowing=Failure due to one of the following:
failureUpdatingFETCH_HEAD=Failure updating FETCH_HEAD: {0}
failureUpdatingTrackingRef=Failure updating tracking ref {0}: {1}
fileCannotBeDeleted=File cannot be deleted: {0}
fileIsTooLarge=File is too large: {0}
fileModeNotSetForPath=FileMode not set for path {0}
filterExecutionFailed=Execution of filter command ''{0}'' on file ''{1}'' failed
filterExecutionFailedRc=Execution of filter command ''{0}'' on file ''{1}'' failed with return code ''{2}'', message on stderr: ''{3}''
filterRequiresCapability=filter requires server to advertise that capability
findingGarbage=Finding garbage
flagIsDisposed={0} is disposed.
flagNotFromThis={0} not from this.
flagsAlreadyCreated={0} flags already created.
funnyRefname=funny refname
gcFailed=Garbage collection failed.
gcTooManyUnpruned=Too many loose, unpruneable objects after garbage collection. Consider adjusting gc.auto or gc.pruneExpire.
headRequiredToStash=HEAD required to stash local changes
hoursAgo={0} hours ago
httpConfigCannotNormalizeURL=Cannot normalize URL path {0}: too many .. segments
httpConfigInvalidURL=Cannot parse URL from subsection http.{0} in git config; ignored.
hugeIndexesAreNotSupportedByJgitYet=Huge indexes are not supported by jgit, yet
hunkBelongsToAnotherFile=Hunk belongs to another file
hunkDisconnectedFromFile=Hunk disconnected from file
hunkHeaderDoesNotMatchBodyLineCountOf=Hunk header {0} does not match body line count of {1}
illegalArgumentNotA=Not {0}
illegalCombinationOfArguments=The combination of arguments {0} and {1} is not allowed
illegalHookName=Illegal hook name {0}
illegalPackingPhase=Illegal packing phase {0}
incorrectHashFor=Incorrect hash for {0}; computed {1} as a {2} from {3} bytes.
incorrectOBJECT_ID_LENGTH=Incorrect OBJECT_ID_LENGTH.
indexFileCorruptedNegativeBucketCount=Invalid negative bucket count read from pack v2 index file: {0}
indexFileIsTooLargeForJgit=Index file is too large for jgit
indexWriteException=Modified index could not be written
initFailedBareRepoDifferentDirs=When initializing a bare repo with directory {0} and separate git-dir {1} specified both folders must point to the same location
initFailedDirIsNoDirectory=Cannot set directory to ''{0}'' which is not a directory
initFailedGitDirIsNoDirectory=Cannot set git-dir to ''{0}'' which is not a directory
initFailedNonBareRepoSameDirs=When initializing a non-bare repo with directory {0} and separate git-dir {1} specified both folders should not point to the same location
inMemoryBufferLimitExceeded=In-memory buffer limit exceeded
inputDidntMatchLength=Input did not match supplied length. {0} bytes are missing.
inputStreamMustSupportMark=InputStream must support mark()
integerValueOutOfRange=Integer value {0}.{1} out of range
internalRevisionError=internal revision error
internalServerError=internal server error
interruptedWriting=Interrupted writing {0}
inTheFuture=in the future
invalidAdvertisementOf=invalid advertisement of {0}
invalidAncestryLength=Invalid ancestry length
invalidBooleanValue=Invalid boolean value: {0}.{1}={2}
invalidChannel=Invalid channel {0}
invalidCommitParentNumber=Invalid commit parent number
invalidDepth=Invalid depth: {0}
invalidEncryption=Invalid encryption
invalidExpandWildcard=ExpandFromSource on a refspec that can have mismatched wildcards does not make sense.
invalidFilter=Invalid filter: {0}
invalidGitdirRef = Invalid .git reference in file ''{0}''
invalidGitModules=Invalid .gitmodules file
invalidGitType=invalid git type: {0}
invalidId=Invalid id: {0}
invalidId0=Invalid id
invalidIdLength=Invalid id length {0}; should be {1}
invalidIgnoreParamSubmodule=Found invalid ignore param for submodule {0}.
invalidIgnoreRule=Exception caught while parsing ignore rule ''{0}''.
invalidIntegerValue=Invalid integer value: {0}.{1}={2}
invalidKey=Invalid key: {0}
invalidLineInConfigFile=Invalid line in config file
invalidLineInConfigFileWithParam=Invalid line in config file: {0}
invalidModeFor=Invalid mode {0} for {1} {2} in {3}.
invalidModeForPath=Invalid mode {0} for path {1}
invalidNameContainsDotDot=Invalid name (contains ".."): {0}
invalidObject=Invalid {0} {1}: {2}
invalidOldIdSent=invalid old id sent
invalidPacketLineHeader=Invalid packet line header: {0}
invalidPath=Invalid path: {0}
invalidPurgeFactor=Invalid purgeFactor {0}, values have to be in range between 0 and 1
invalidRedirectLocation=Invalid redirect location {0} -> {1}
invalidRefAdvertisementLine=Invalid ref advertisement line: ''{1}''
invalidReflogRevision=Invalid reflog revision: {0}
invalidRefName=Invalid ref name: {0}
invalidReftableBlock=Invalid reftable block
invalidReftableCRC=Invalid reftable CRC-32
invalidReftableFile=Invalid reftable file
invalidRemote=Invalid remote: {0}
invalidRepositoryStateNoHead=Invalid repository --- cannot read HEAD
invalidShallowObject=invalid shallow object {0}, expected commit
invalidStageForPath=Invalid stage {0} for path {1}
invalidSystemProperty=Invalid system property ''{0}'': ''{1}''; using default value {2}
invalidTagOption=Invalid tag option: {0}
invalidTimeout=Invalid timeout: {0}
invalidTimestamp=Invalid timestamp in {0}
invalidTimeUnitValue2=Invalid time unit value: {0}.{1}={2}
invalidTimeUnitValue3=Invalid time unit value: {0}.{1}.{2}={3}
invalidTreeZeroLengthName=Cannot append a tree entry with zero-length name
invalidURL=Invalid URL {0}
invalidWildcards=Invalid wildcards {0}
invalidRefSpec=Invalid refspec {0}
invalidWindowSize=Invalid window size
isAStaticFlagAndHasNorevWalkInstance={0} is a static flag and has no RevWalk instance
JRELacksMD5Implementation=JRE lacks MD5 implementation
kNotInRange=k {0} not in {1} - {2}
largeObjectExceedsByteArray=Object {0} exceeds 2 GiB byte array limit
largeObjectExceedsLimit=Object {0} exceeds {1} limit, actual size is {2}
largeObjectException={0} exceeds size limit
largeObjectOutOfMemory=Out of memory loading {0}
lengthExceedsMaximumArraySize=Length exceeds maximum array size
lfsHookConflict=LFS built-in hook conflicts with existing pre-push hook in repository {0}. Either remove the pre-push hook or disable built-in LFS support.
listingAlternates=Listing alternates
listingPacks=Listing packs
localObjectsIncomplete=Local objects incomplete.
localRefIsMissingObjects=Local ref {0} is missing object(s).
localRepository=local repository
lockCountMustBeGreaterOrEqual1=lockCount must be >= 1
lockError=lock error: {0}
lockFailedRetry=locking {0} failed after {1} retries
lockOnNotClosed=Lock on {0} not closed.
lockOnNotHeld=Lock on {0} not held.
maxCountMustBeNonNegative=max count must be >= 0
mergeConflictOnNonNoteEntries=Merge conflict on non-note entries: base = {0}, ours = {1}, theirs = {2}
mergeConflictOnNotes=Merge conflict on note {0}. base = {1}, ours = {2}, theirs = {3}
mergeStrategyAlreadyExistsAsDefault=Merge strategy "{0}" already exists as a default strategy
mergeStrategyDoesNotSupportHeads=merge strategy {0} does not support {1} heads to be merged into HEAD
mergeUsingStrategyResultedInDescription=Merge of revisions {0} with base {1} using strategy {2} resulted in: {3}. {4}
mergeRecursiveConflictsWhenMergingCommonAncestors=Multiple common ancestors were found and merging them resulted in a conflict: {0}, {1}
mergeRecursiveTooManyMergeBasesFor = "More than {0} merge bases for:\n a {1}\n b {2} found:\n count {3}"
messageAndTaggerNotAllowedInUnannotatedTags = Unannotated tags cannot have a message or tagger
minutesAgo={0} minutes ago
mismatchOffset=mismatch offset for object {0}
mismatchCRC=mismatch CRC for object {0}
missingAccesskey=Missing accesskey.
missingConfigurationForKey=No value for key {0} found in configuration
missingCRC=missing CRC for object {0}
missingDeltaBase=delta base
missingForwardImageInGITBinaryPatch=Missing forward-image in GIT binary patch
missingObject=Missing {0} {1}
missingPrerequisiteCommits=missing prerequisite commits:
missingRequiredParameter=Parameter "{0}" is missing
missingSecretkey=Missing secretkey.
mixedStagesNotAllowed=Mixed stages not allowed
mkDirFailed=Creating directory {0} failed
mkDirsFailed=Creating directories for {0} failed
month=month
months=months
monthsAgo={0} months ago
multipleMergeBasesFor=Multiple merge bases for:\n {0}\n {1} found:\n {2}\n {3}
nameMustNotBeNullOrEmpty=Ref name must not be null or empty.
need2Arguments=Need 2 arguments
newIdMustNotBeNull=New ID must not be null
newlineInQuotesNotAllowed=Newline in quotes not allowed
noApplyInDelete=No apply in delete
noClosingBracket=No closing {0} found for {1} at index {2}.
noCommitsSelectedForShallow=No commits selected for shallow request
noCredentialsProvider=Authentication is required but no CredentialsProvider has been registered
noHEADExistsAndNoExplicitStartingRevisionWasSpecified=No HEAD exists and no explicit starting revision was specified
noHMACsupport=No {0} support: {1}
noMergeBase=No merge base could be determined. Reason={0}. {1}
noMergeHeadSpecified=No merge head specified
nonBareLinkFilesNotSupported=Link files are not supported with nonbare repos
noPathAttributesFound=No Attributes found for {0}.
noSuchRef=no such ref
noSuchSubmodule=no such submodule {0}
notABoolean=Not a boolean: {0}
notABundle=not a bundle
notADIRCFile=Not a DIRC file.
notAGitDirectory=not a git directory
notAPACKFile=Not a PACK file.
notARef=Not a ref: {0}: {1}
notASCIIString=Not ASCII string: {0}
notAuthorized=not authorized
notAValidPack=Not a valid pack {0}
notFound=not found.
nothingToFetch=Nothing to fetch.
nothingToPush=Nothing to push.
notMergedExceptionMessage=Branch was not deleted as it has not been merged yet; use the force option to delete it anyway
noXMLParserAvailable=No XML parser available.
objectAtHasBadZlibStream=Object at {0} in {1} has bad zlib stream
objectIsCorrupt=Object {0} is corrupt: {1}
objectIsCorrupt3={0}: object {1}: {2}
objectIsNotA=Object {0} is not a {1}.
objectNotFound=Object {0} not found.
objectNotFoundIn=Object {0} not found in {1}.
obtainingCommitsForCherryPick=Obtaining commits that need to be cherry-picked
oldIdMustNotBeNull=Expected old ID must not be null
onlyOneFetchSupported=Only one fetch supported
onlyOneOperationCallPerConnectionIsSupported=Only one operation call per connection is supported.
openFilesMustBeAtLeast1=Open files must be >= 1
openingConnection=Opening connection
operationCanceled=Operation {0} was canceled
outputHasAlreadyBeenStarted=Output has already been started.
overflowedReftableBlock=Overflowed reftable block
packChecksumMismatch=Pack checksum mismatch detected for pack file {0}: .pack has {1} whilst .idx has {2}
packCorruptedWhileWritingToFilesystem=Pack corrupted while writing to filesystem
packedRefsHandleIsStale=packed-refs handle is stale, {0}. retry
packetSizeMustBeAtLeast=packet size {0} must be >= {1}
packetSizeMustBeAtMost=packet size {0} must be <= {1}
packedRefsCorruptionDetected=packed-refs corruption detected: {0}
packfileCorruptionDetected=Packfile corruption detected: {0}
packFileInvalid=Pack file invalid: {0}
packfileIsTruncated=Packfile {0} is truncated.
packfileIsTruncatedNoParam=Packfile is truncated.
packHandleIsStale=Pack file {0} handle is stale, removing it from pack list
packHasUnresolvedDeltas=pack has unresolved deltas
packInaccessible=Failed to access pack file {0}, caught {1} consecutive errors while trying to access this pack.
packingCancelledDuringObjectsWriting=Packing cancelled during objects writing
packObjectCountMismatch=Pack object count mismatch: pack {0} index {1}: {2}
packRefs=Pack refs
packSizeNotSetYet=Pack size not yet set since it has not yet been received
packTooLargeForIndexVersion1=Pack too large for index version 1
packWasDeleted=Pack file {0} was deleted, removing it from pack list
packWriterStatistics=Total {0,number,#0} (delta {1,number,#0}), reused {2,number,#0} (delta {3,number,#0})
panicCantRenameIndexFile=Panic: index file {0} must be renamed to replace {1}; until then repository is corrupt
patchApplyException=Cannot apply: {0}
patchFormatException=Format error: {0}
pathNotConfigured=Submodule path is not configured
peeledLineBeforeRef=Peeled line before ref.
peeledRefIsRequired=Peeled ref is required.
peerDidNotSupplyACompleteObjectGraph=peer did not supply a complete object graph
personIdentEmailNonNull=E-mail address of PersonIdent must not be null.
personIdentNameNonNull=Name of PersonIdent must not be null.
prefixRemote=remote:
problemWithResolvingPushRefSpecsLocally=Problem with resolving push ref specs locally: {0}
progressMonUploading=Uploading {0}
propertyIsAlreadyNonNull=Property is already non null
pruneLoosePackedObjects=Prune loose objects also found in pack files
pruneLooseUnreferencedObjects=Prune loose, unreferenced objects
pullTaskName=Pull
pushCancelled=push cancelled
pushCertificateInvalidField=Push certificate has missing or invalid value for {0}
pushCertificateInvalidFieldValue=Push certificate has missing or invalid value for {0}: {1}
pushCertificateInvalidHeader=Push certificate has invalid header format
pushCertificateInvalidSignature=Push certificate has invalid signature format
pushIsNotSupportedForBundleTransport=Push is not supported for bundle transport
pushNotPermitted=push not permitted
pushOptionsNotSupported=Push options not supported; received {0}
rawLogMessageDoesNotParseAsLogEntry=Raw log message does not parse as log entry
readConfigFailed=Reading config file ''{0}'' failed
readFileStoreAttributesFailed=Reading FileStore attributes from user config failed
readerIsRequired=Reader is required
readingObjectsFromLocalRepositoryFailed=reading objects from local repository failed: {0}
readLastModifiedFailed=Reading lastModified of {0} failed
readTimedOut=Read timed out after {0} ms
receivePackObjectTooLarge1=Object too large, rejecting the pack. Max object size limit is {0} bytes.
receivePackObjectTooLarge2=Object too large ({0} bytes), rejecting the pack. Max object size limit is {1} bytes.
receivePackInvalidLimit=Illegal limit parameter value {0}
receivePackTooLarge=Pack exceeds the limit of {0} bytes, rejecting the pack
receivingObjects=Receiving objects
redirectBlocked=Redirection blocked: redirect {0} -> {1} not allowed
redirectHttp=URI ''{0}'': following HTTP redirect #{1} {2} -> {3}
redirectLimitExceeded=Redirected more than {0} times; aborted at {1} -> {2}
redirectLocationMissing=Invalid redirect: no redirect location for {0}
redirectsOff=Cannot redirect because http.followRedirects is false (HTTP status {0})
refAlreadyExists=already exists
refAlreadyExists1=Ref {0} already exists
reflogEntryNotFound=Entry {0} not found in reflog for ''{1}''
refNotResolved=Ref {0} cannot be resolved
refUpdateReturnCodeWas=RefUpdate return code was: {0}
remoteConfigHasNoURIAssociated=Remote config "{0}" has no URIs associated
remoteDoesNotHaveSpec=Remote does not have {0} available for fetch.
remoteDoesNotSupportSmartHTTPPush=remote does not support smart HTTP push
remoteHungUpUnexpectedly=remote hung up unexpectedly
remoteNameCantBeNull=Remote name can't be null.
renameBranchFailedBecauseTag=Can not rename as Ref {0} is a tag
renameBranchFailedUnknownReason=Rename failed with unknown reason
renameBranchUnexpectedResult=Unexpected rename result {0}
renameCancelled=Rename detection was cancelled
renameFileFailed=Could not rename file {0} to {1}
renamesAlreadyFound=Renames have already been found.
renamesBreakingModifies=Breaking apart modified file pairs
renamesFindingByContent=Finding renames by content similarity
renamesFindingExact=Finding exact renames
renamesRejoiningModifies=Rejoining modified file pairs
repositoryAlreadyExists=Repository already exists: {0}
repositoryConfigFileInvalid=Repository config file {0} invalid {1}
repositoryIsRequired=repository is required
repositoryNotFound=repository not found: {0}
repositoryState_applyMailbox=Apply mailbox
repositoryState_bare=Bare
repositoryState_bisecting=Bisecting
repositoryState_conflicts=Conflicts
repositoryState_merged=Merged
repositoryState_normal=Normal
repositoryState_rebase=Rebase
repositoryState_rebaseInteractive=Rebase interactive
repositoryState_rebaseOrApplyMailbox=Rebase/Apply mailbox
repositoryState_rebaseWithMerge=Rebase w/merge
requiredHashFunctionNotAvailable=Required hash function {0} not available.
resettingHead=Resetting head to {0}
resolvingDeltas=Resolving deltas
resultLengthIncorrect=result length incorrect
rewinding=Rewinding to commit {0}
s3ActionDeletion=Deletion
s3ActionReading=Reading
s3ActionWriting=Writing
saveFileStoreAttributesFailed=Saving measured FileStore attributes to user config failed
searchForReuse=Finding sources
searchForSizes=Getting sizes
secondsAgo={0} seconds ago
selectingCommits=Selecting commits
sequenceTooLargeForDiffAlgorithm=Sequence too large for difference algorithm.
serviceNotEnabledNoName=Service not enabled
serviceNotPermitted={1} not permitted on ''{0}''
sha1CollisionDetected1=SHA-1 collision detected on {0}
shallowCommitsAlreadyInitialized=Shallow commits have already been initialized
shallowPacksRequireDepthWalk=Shallow packs require a DepthWalk
shortCompressedStreamAt=Short compressed stream at {0}
shortReadOfBlock=Short read of block.
shortReadOfOptionalDIRCExtensionExpectedAnotherBytes=Short read of optional DIRC extension {0}; expected another {1} bytes within the section.
shortSkipOfBlock=Short skip of block.
signingNotSupportedOnTag=Signing isn't supported on tag operations yet.
similarityScoreMustBeWithinBounds=Similarity score must be between 0 and 100.
skipMustBeNonNegative=skip must be >= 0
smartHTTPPushDisabled=smart HTTP push disabled
sourceDestinationMustMatch=Source/Destination must match.
sourceIsNotAWildcard=Source is not a wildcard.
sourceRefDoesntResolveToAnyObject=Source ref {0} doesn''t resolve to any object.
sourceRefNotSpecifiedForRefspec=Source ref not specified for refspec: {0}
squashCommitNotUpdatingHEAD=Squash commit -- not updating HEAD
sshCommandFailed=Execution of ssh command ''{0}'' failed with error ''{1}''
sshUserNameError=Jsch error: failed to set SSH user name correctly to ''{0}''; using ''{1}'' picked up from SSH config file.
sslFailureExceptionMessage=Secure connection to {0} could not be established because of SSL problems
sslFailureInfo=A secure connection to {0} could not be established because the server''s certificate could not be validated.
sslFailureCause=SSL reported: {0}
sslFailureTrustExplanation=Do you want to skip SSL verification for this server?
sslTrustAlways=Always skip SSL verification for this server from now on
sslTrustForRepo=Skip SSL verification for git operations for repository {0}
sslTrustNow=Skip SSL verification for this single git operation
sslVerifyCannotSave=Could not save setting for http.sslVerify
staleRevFlagsOn=Stale RevFlags on {0}
startingReadStageWithoutWrittenRequestDataPendingIsNotSupported=Starting read stage without written request data pending is not supported
stashApplyConflict=Applying stashed changes resulted in a conflict
stashApplyFailed=Applying stashed changes did not successfully complete
stashApplyOnUnsafeRepository=Cannot apply stashed commit on a repository with state: {0}
stashApplyWithoutHead=Cannot apply stashed commit in an empty repository or onto an unborn branch
stashCommitIncorrectNumberOfParents=Stashed commit ''{0}'' does have {1} parent commits instead of 2 or 3.
stashDropDeleteRefFailed=Deleting stash reference failed with result: {0}
stashDropFailed=Dropping stashed commit failed
stashDropMissingReflog=Stash reflog does not contain entry ''{0}''
stashDropNotSupported=Dropping stash not supported on this ref backend
stashFailed=Stashing local changes did not successfully complete
stashResolveFailed=Reference ''{0}'' does not resolve to stashed commit
statelessRPCRequiresOptionToBeEnabled=stateless RPC requires {0} to be enabled
storePushCertMultipleRefs=Store push certificate for {0} refs
storePushCertOneRef=Store push certificate for {0}
storePushCertReflog=Store push certificate
submoduleExists=Submodule ''{0}'' already exists in the index
submoduleNameInvalid=Invalid submodule name ''{0}''
submoduleParentRemoteUrlInvalid=Cannot remove segment from remote url ''{0}''
submodulePathInvalid=Invalid submodule path ''{0}''
submoduleUrlInvalid=Invalid submodule URL ''{0}''
supportOnlyPackIndexVersion2=Only support index version 2
systemConfigFileInvalid=System wide config file {0} is invalid {1}
tagAlreadyExists=tag ''{0}'' already exists
tagNameInvalid=tag name {0} is invalid
tagOnRepoWithoutHEADCurrentlyNotSupported=Tag on repository without HEAD currently not supported
theFactoryMustNotBeNull=The factory must not be null
threadInterruptedWhileRunning="Current thread interrupted while running {0}"
timeIsUncertain=Time is uncertain
timerAlreadyTerminated=Timer already terminated
timeoutMeasureFsTimestampResolution=measuring filesystem timestamp resolution for ''{0}'' timed out, fall back to resolution of 2 seconds
tooManyCommands=Too many commands
tooManyFilters=Too many "filter" lines in request
tooManyIncludeRecursions=Too many recursions; circular includes in config file(s)?
topologicalSortRequired=Topological sort required.
transactionAborted=transaction aborted
transportExceptionBadRef=Empty ref: {0}: {1}
transportExceptionEmptyRef=Empty ref: {0}
transportExceptionInvalid=Invalid {0} {1}:{2}
transportExceptionMissingAssumed=Missing assumed {0}
transportExceptionReadRef=read {0}
transportNeedsRepository=Transport needs repository
transportProvidedRefWithNoObjectId=Transport provided ref {0} with no object id
transportProtoBundleFile=Git Bundle File
transportProtoFTP=FTP
transportProtoGitAnon=Anonymous Git
transportProtoHTTP=HTTP
transportProtoLocal=Local Git Repository
transportProtoSFTP=SFTP
transportProtoSSH=SSH
transportProtoTest=Test
transportSSHRetryInterrupt=Interrupted while waiting for retry
treeEntryAlreadyExists=Tree entry "{0}" already exists.
treeFilterMarkerTooManyFilters=Too many markTreeFilters passed, maximum number is {0} (passed {1})
treeWalkMustHaveExactlyTwoTrees=TreeWalk should have exactly two trees.
truncatedHunkLinesMissingForAncestor=Truncated hunk, at least {0} lines missing for ancestor {1}
truncatedHunkNewLinesMissing=Truncated hunk, at least {0} new lines are missing
truncatedHunkOldLinesMissing=Truncated hunk, at least {0} old lines are missing
tSizeMustBeGreaterOrEqual1=tSize must be >= 1
unableToCheckConnectivity=Unable to check connectivity.
unableToCreateNewObject=Unable to create new object: {0}
unableToReadPackfile=Unable to read packfile {0}
unableToRemovePath=Unable to remove path ''{0}''
unableToWrite=Unable to write {0}
unauthorized=Unauthorized
unencodeableFile=Unencodable file: {0}
unexpectedCompareResult=Unexpected metadata comparison result: {0}
unexpectedEndOfConfigFile=Unexpected end of config file
unexpectedEndOfInput=Unexpected end of input
unexpectedEofInPack=Unexpected EOF in partially created pack
unexpectedHunkTrailer=Unexpected hunk trailer
unexpectedOddResult=odd: {0} + {1} - {2}
unexpectedPacketLine=unexpected {0}
unexpectedRefReport={0}: unexpected ref report: {1}
unexpectedReportLine=unexpected report line: {0}
unexpectedReportLine2={0} unexpected report line: {1}
unexpectedSubmoduleStatus=Unexpected submodule status: ''{0}''
unknownOrUnsupportedCommand=Unknown or unsupported command "{0}", only "{1}" is allowed.
unknownDIRCVersion=Unknown DIRC version {0}
unknownHost=unknown host
unknownObject=unknown object
unknownObjectInIndex=unknown object {0} found in index but not in pack file
unknownObjectType=Unknown object type {0}.
unknownObjectType2=unknown
unknownRepositoryFormat=Unknown repository format
unknownRepositoryFormat2=Unknown repository format "{0}"; expected "0".
unknownTransportCommand=unknown command {0}
unknownZlibError=Unknown zlib error.
unlockLockFileFailed=Unlocking LockFile ''{0}'' failed
unmergedPath=Unmerged path: {0}
unmergedPaths=Repository contains unmerged paths
unpackException=Exception while parsing pack stream
unreadablePackIndex=Unreadable pack index: {0}
unrecognizedRef=Unrecognized ref: {0}
unsetMark=Mark not set
unsupportedAlternates=Alternates not supported
unsupportedArchiveFormat=Unknown archive format ''{0}''
unsupportedCommand0=unsupported command 0
unsupportedEncryptionAlgorithm=Unsupported encryption algorithm: {0}
unsupportedEncryptionVersion=Unsupported encryption version: {0}
unsupportedGC=Unsupported garbage collector for repository type: {0}
unsupportedMark=Mark not supported
unsupportedOperationNotAddAtEnd=Not add-at-end: {0}
unsupportedPackIndexVersion=Unsupported pack index version {0}
unsupportedPackVersion=Unsupported pack version {0}.
unsupportedReftableVersion=Unsupported reftable version {0}.
unsupportedRepositoryDescription=Repository description not supported
updateRequiresOldIdAndNewId=Update requires both old ID and new ID to be nonzero
updatingHeadFailed=Updating HEAD failed
updatingReferences=Updating references
updatingRefFailed=Updating the ref {0} to {1} failed. ReturnCode from RefUpdate.update() was {2}
upstreamBranchName=branch ''{0}'' of {1}
uriNotConfigured=Submodule URI not configured
uriNotFound={0} not found
uriNotFoundWithMessage={0} not found: {1}
URINotSupported=URI not supported: {0}
userConfigInvalid=Git config in the user's home directory {0} is invalid {1}
validatingGitModules=Validating .gitmodules files
walkFailure=Walk failure.
wantNoSpaceWithCapabilities=No space between oid and first capability in first want line
wantNotValid=want {0} not valid
weeksAgo={0} weeks ago
windowSizeMustBeLesserThanLimit=Window size must be < limit
windowSizeMustBePowerOf2=Window size must be power of 2
writerAlreadyInitialized=Writer already initialized
writeTimedOut=Write timed out after {0} ms
writingNotPermitted=Writing not permitted
writingNotSupported=Writing {0} not supported.
writingObjects=Writing objects
wrongDecompressedLength=wrong decompressed length
wrongRepositoryState=Wrong Repository State: {0}
year=year
years=years
years0MonthsAgo={0} {1} ago
yearsAgo={0} years ago
yearsMonthsAgo={0} {1}, {2} {3} ago
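As the header comment of this file notes, messages with format elements are expanded with java.text.MessageFormat. A minimal illustration using one of the entries above (the argument values are made up):

    import java.text.MessageFormat;

    // cannotDeleteObjectsPath from this file, expanded with sample values.
    String pattern = "Cannot delete {0}/{1}: {2}";
    String msg = MessageFormat.format(pattern, "objects", "pack", "permission denied");
    // msg == "Cannot delete objects/pack: permission denied"

This is also why literal quotes appear doubled throughout the file: in a MessageFormat pattern a single quote starts quoted text, so ''{0}'' renders as '{0}' with the argument substituted.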