
Support creating pack bitmap indexes in PackWriter

Update the PackWriter to support writing out pack bitmap indexes, a parallel ".bitmap" file to the ".pack" file.

Bitmaps are selected at commits every 1 to 5,000 commits for each unique path from the start. The most recent 100 commits are all bitmapped. The next 19,000 commits have a bitmap every 100 commits. The remaining commits have a bitmap every 5,000 commits. Commits with more than one parent are preferred over ones with one or fewer. Furthermore, previously computed bitmaps are reused if the previous entry had the reuse flag set, which is set when the bitmap was placed at the maximum allowed distance.

Bitmaps are used to speed up the counting phase when packing, for requests that are not shallow. The PackWriterBitmapWalker uses a RevFilter to proactively mark commits with RevFlag.SEEN when they appear in a bitmap. The walker produces the full closure of reachable ObjectIds, given the collection of starting ObjectIds.

For fetch requests, two ObjectWalks are executed to compute the ObjectIds reachable from the haves and from the wants. The ObjectIds that need to be written are determined by taking all the resulting wants AND NOT the haves.

For clone requests, we get cached pack support for "free", since it is possible to determine whether all of the ObjectIds in a pack file are included in the resulting list of ObjectIds to write.
On my machine, the best times for clones and fetches of the linux kernel repository (with about 2.6M objects and 300K commits) are tabulated below:

  Operation                    Index V2               Index VE003
  Clone                        37530ms (524.06 MiB)      82ms (524.06 MiB)
  Fetch (1 commit back)           75ms                  107ms
  Fetch (10 commits back)        456ms (269.51 KiB)     341ms (265.19 KiB)
  Fetch (100 commits back)       449ms (269.91 KiB)     337ms (267.28 KiB)
  Fetch (1000 commits back)     2229ms ( 14.75 MiB)     189ms ( 14.42 MiB)
  Fetch (10000 commits back)    2177ms ( 16.30 MiB)     254ms ( 15.88 MiB)
  Fetch (100000 commits back)  14340ms (185.83 MiB)    1655ms (189.39 MiB)

Change-Id: Icdb0cdd66ff168917fb9ef17b96093990cc6a98d
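The "wants AND NOT haves" step above can be sketched with plain java.util.BitSet, using bit positions as the integer ObjectId mapping; this is an illustrative stand-in for the EWAH-compressed bitmaps, not the JGit API (the class and method names here are hypothetical):

```java
import java.util.BitSet;

public class BitmapCounting {
    // With reachability bitmaps, the counting phase reduces to set
    // arithmetic: objects to send = reachable-from-wants AND NOT
    // reachable-from-haves.
    static BitSet objectsToWrite(BitSet wants, BitSet haves) {
        BitSet result = (BitSet) wants.clone();
        result.andNot(haves); // drop everything the client already has
        return result;
    }

    public static void main(String[] args) {
        BitSet wants = new BitSet();
        wants.set(0, 6);      // objects 0..5 reachable from the wants
        BitSet haves = new BitSet();
        haves.set(0, 3);      // objects 0..2 reachable from the haves
        System.out.println(objectsToWrite(wants, haves)); // {3, 4, 5}
    }
}
```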
11 years ago
blame: Compute the origin of lines in a result file

BlameGenerator digs through history and discovers the origin of each line of some result file. BlameResult consumes the stream of regions created by the generator and lays them out in a table for applications to display alongside source lines.

Applications may optionally push in the working tree copy of a file using the push(String, byte[]) method, allowing the application to receive accurate line annotations for the working tree version. Lines that are uncommitted (a difference between HEAD and the working tree) will show up with the description given by the application as the author, or "Not Committed Yet" as a default string.

Applications may also run the BlameGenerator in reverse mode using the reverse(AnyObjectId, AnyObjectId) method instead of push(). When running in reverse mode the generator annotates lines by the commit they were removed in, rather than the commit they were added in. This allows a user to discover where a line disappeared from when they are looking at an older revision in the repository. For example:

  blame --reverse 16e810b2..master -L 1080, org.eclipse.jgit.test/tst/org/eclipse/jgit/storage/file/RefDirectoryTest.java

           (                                              1080) }
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1081)
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1082)   /**
  2302a6d3 (Christian Halstrick 2011-05-20 11:18:20 +0200 1083)    * Kick the timestamp of a local file.

Above we learn that line 1080 (a closing curly brace of the prior method) still exists in branch master, but the Javadoc comment below it has been removed by Christian Halstrick on May 20th as part of commit 2302a6d3. This result differs considerably from that of C Git's blame --reverse feature. JGit tells the reader which commit performed the delete, while C Git tells the reader the last commit that still contained the line, leaving it an exercise to the reader to discover the descendant that performed the removal.
This is still only a basic implementation. Quite notably, it is missing support for the smart block copy/move detection that the C implementation of `git blame` is well known for. Despite being incremental, the BlameGenerator can only be run once; after the generator runs, it cannot be reused. A better implementation would support applications browsing through history efficiently.

With regard to CQ 5110, only a little of the original code survives.

CQ: 5110
Bug: 306161
Change-Id: I84b8ea4838bb7d25f4fcdd540547884704661b8f
Signed-off-by: Kevin Sawicki <kevin@github.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
13 years ago
PackWriter: Support reuse of entire packs

The most expensive part of packing a repository for transport to another system is enumerating all of the objects in the repository. Once this gets to the size of the linux-2.6 repository (1.8 million objects), enumeration can take several CPU minutes and costs a lot of temporary working set memory.

Teach PackWriter to efficiently reuse an existing "cached pack" by answering a clone request with a thin pack followed by a larger cached pack appended to the end. This requires the repository owner to first construct the cached pack by hand, and record the tip commits inside of $GIT_DIR/objects/info/cached-packs:

  cd $GIT_DIR
  root=$(git rev-parse master)
  tmp=objects/.tmp-$$
  names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
  for n in $names; do
    chmod a-w $tmp-$n.pack $tmp-$n.idx
    touch objects/pack/pack-$n.keep
    mv $tmp-$n.pack objects/pack/pack-$n.pack
    mv $tmp-$n.idx objects/pack/pack-$n.idx
  done
  (echo "+ $root"
   for n in $names; do echo "P $n"; done
   echo) >>objects/info/cached-packs
  git repack -a -d

When a clone request needs to include $root, the corresponding cached pack will be copied as-is, rather than enumerating all of the objects that are reachable from $root.

For a linux-2.6 kernel repository that should be about 376 MiB, the above process creates two packs of 368 MiB and 38 MiB[1]. This is a local disk usage increase of ~26 MiB, due to reduced delta compression between the large cached pack and the smaller recent activity pack. The overhead is similar to one full copy of the compressed project sources.

With this cached pack in hand, JGit daemon completes a clone request in 1m17s less time, but with a slightly larger data transfer (+2.39 MiB):

Before:
  remote: Counting objects: 1861830, done
  remote: Finding sources: 100% (1861830/1861830)
  remote: Getting sizes: 100% (88243/88243)
  remote: Compressing objects: 100% (88184/88184)
  Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
  remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
  Resolving deltas: 100% (1564621/1564621), done.

  real 3m19.005s

After:
  remote: Counting objects: 1601, done
  remote: Counting objects: 1828460, done
  remote: Finding sources: 100% (50475/50475)
  remote: Getting sizes: 100% (18843/18843)
  remote: Compressing objects: 100% (7585/7585)
  remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
  Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
  Resolving deltas: 100% (1559477/1559477), done.

  real 2m2.938s

Repository owners can periodically refresh their cached packs by repacking their repository, folding all newer objects into a larger cached pack. Since repacking is already considered to be a normal Git maintenance activity, this isn't a very big burden.

[1] In this test $root was set back about two weeks.

Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
13 years ago
Added read/write support for pack bitmap index

A pack bitmap index is an additional index of compressed bitmaps of the object graph. Furthermore, a logical API of the index functionality is included, as it is expected to be used by the PackWriter.

Compressed bitmaps are created using the javaewah library, a word-aligned compressed variant of the Java BitSet class based on run-length encoding. The library only works with positive integer values, so the maximum number of ObjectIds in a pack file that this index can currently support is limited to Integer.MAX_VALUE.

Every ObjectId is given an integer mapping. The integer is the position of the ObjectId in the complete ObjectId list for the pack file, sorted by offset. That integer is what the bitmaps use to reference the ObjectId.

Currently, the new index format can only be used with pack files that contain a complete closure of the object graph, e.g. the result of a garbage collection.

The index file includes four bitmaps for the Git object types, i.e. commits, trees, blobs, and tags. In addition, a collection of bitmaps keyed by ObjectId is also included. The bitmap for each entry in the collection represents the full closure of ObjectIds reachable from the keyed ObjectId (including the keyed ObjectId itself). The bitmaps are further compressed by XORing the current bitmap against prior bitmaps in the index and selecting the smallest representation. The entry's actual representation in the index file is the XOR'd bitmap together with the offset from the current entry to the entry whose bitmap it was XORed against. Each entry contains one flag byte, currently used to note whether the bitmap should be blindly reused.

Change-Id: Id328724bf6b4c8366a088233098c18643edcf40f
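The XOR trick above pays off because reachability bitmaps of nearby commits differ in only a few positions. A minimal sketch with java.util.BitSet (standing in for the EWAH-compressed bitmaps; the class name and the cardinality-as-size heuristic are assumptions for illustration):

```java
import java.util.BitSet;

public class XorBitmapEntry {
    // Return the representation with fewer set bits: the bitmap itself,
    // or the bitmap XORed against a prior one. Run-length coders such as
    // javaewah compress sparse bitmaps well, so fewer set bits is used
    // here as a rough proxy for a smaller on-disk entry.
    static BitSet smallerRepresentation(BitSet current, BitSet prior) {
        BitSet xor = (BitSet) current.clone();
        xor.xor(prior);
        return xor.cardinality() < current.cardinality() ? xor : current;
    }

    public static void main(String[] args) {
        BitSet prior = new BitSet();
        prior.set(0, 1000);          // a commit reaching objects 0..999
        BitSet current = (BitSet) prior.clone();
        current.set(1000, 1003);     // a child commit adds three objects
        BitSet stored = smallerRepresentation(current, prior);
        System.out.println(stored.cardinality()); // 3: only the delta remains
    }
}
```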
11 years ago
Merging Git notes

Merging Git notes branches has several differences from merging "normal" branches.

Although Git notes are initially stored as one flat tree, the tree may fan out when the number of notes becomes too large for efficient access. In this case the first two hex digits of the note name are used as a subdirectory name and the remaining 38 hex digits as the file name under that directory. Similarly, when the number of notes decreases, a fanout tree may collapse back into a flat tree. The Git notes merge algorithm must take into account possibly different tree structures in different note branches and must properly match them against each other.

Any conflict on a Git note is, by default, resolved by concatenating the two conflicting versions of the note. A delete-edit conflict is, by default, resolved by keeping the edit version. The note merge logic is pluggable, and the caller may provide a custom note merger that implements a different merging strategy.

Additionally, it is possible to have non-note entries inside a notes tree. The merge algorithm must take this fact into account as well and will try to merge such non-note entries. However, in case of any merge conflict the merge operation will fail; the Git notes merge algorithm currently does not attempt a content merge of non-note entries.

Thanks to Shawn Pearce for patiently answering my questions related to this topic, giving hints and providing code snippets.

Change-Id: I3b2335c76c766fd7ea25752e54087f9b19d69c88
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
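The fanout split described above (two hex digits of directory, 38 of file name) can be sketched in a few lines; NoteFanout and fanoutPath are hypothetical names, not JGit API:

```java
public class NoteFanout {
    // Map a 40-hex-digit note name to its fanned-out path: the first two
    // hex digits become a subdirectory, the remaining 38 the file name.
    static String fanoutPath(String hexName) {
        if (hexName.length() != 40)
            throw new IllegalArgumentException("expected 40 hex digits");
        return hexName.substring(0, 2) + "/" + hexName.substring(2);
    }

    public static void main(String[] args) {
        String name = "ab" + "0".repeat(38);  // any 40-hex-digit note name
        System.out.println(fanoutPath(name)); // ab/000...0 (2 + 1 + 38 chars)
    }
}
```

A merge implementation then has to treat `ab/cd...` in a fanned-out tree and `abcd...` in a flat tree as the same note when matching entries across branches.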
13 years ago
Handle stale file handles on packed-refs file

On a local filesystem the packed-refs file will be orphaned if it is replaced by another client while the current client is reading the old one. However, since NFS servers do not keep track of open files, instead of orphaning the old packed-refs file, such a replacement will cause the old file to be garbage collected instead. A stale file handle exception will be raised on NFS servers if the file is garbage collected (deleted) on the server while it is being read. Since we no longer have access to the old file in these cases, the previous code would just fail. However, in these cases, reopening the file and rereading it will succeed (since it will reopen the new replacement file). Retrying the read is therefore a viable strategy to deal with stale file handles on the packed-refs file, so implement such a strategy.

Since it is possible that the packed-refs file could be replaced again while rereading it (multiple consecutive updates can easily occur with ref deletions), loop on stale file handle exceptions, up to 5 extra times, trying to read the packed-refs file again, until we either read the new file or find that the file no longer exists. The limit of 5 is arbitrary; it provides a safe upper bound to prevent infinite loops consuming resources in a potential unforeseen persistent error condition.

Change-Id: I085c472bafa6e2f32f610a33ddc8368bb4ab1814
Signed-off-by: Martin Fick <mfick@codeaurora.org>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
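The retry loop described above can be sketched generically; StaleException here is a stand-in for the NFS stale-file-handle condition, and the class and method names are illustrative, not the JGit implementation:

```java
import java.util.concurrent.Callable;

public class StaleRetry {
    // Placeholder for the stale-file-handle error raised by NFS when the
    // file being read is replaced and garbage collected on the server.
    static class StaleException extends Exception {}

    // Run the read, retrying up to 5 extra times on staleness. Each retry
    // reopens the file, so it picks up the replacement packed-refs file.
    static <T> T readWithRetries(Callable<T> read) throws Exception {
        final int maxStaleRetries = 5; // arbitrary safe upper bound
        int retries = 0;
        while (true) {
            try {
                return read.call();
            } catch (StaleException e) {
                if (retries++ >= maxStaleRetries)
                    throw e; // persistent error condition: give up
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        // Simulate the file being replaced twice mid-read.
        String refs = readWithRetries(() -> {
            if (attempts[0]++ < 2)
                throw new StaleException();
            return "packed-refs contents";
        });
        System.out.println(refs + " after " + attempts[0] + " attempts");
    }
}
```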
8 years ago
Rewrite push certificate parsing

- Consistently return structured data, such as actual ReceiveCommands, which is more useful for callers that are doing things other than verifying the signature, e.g. recording the set of commands.
- Store the certificate version field, as this is required to be part of the signed payload.
- Add a toText() method to recreate the actual payload for signature verification. This requires keeping track of the un-chomped command strings from the original protocol stream.
- Separate the parser from the certificate itself, so the actual PushCertificate object can be immutable. Make a fair attempt at deep immutability, but this is not possible with the current mutable ReceiveCommand structure.
- Use more detailed error messages that don't involve NON-NLS strings.
- Document null return values more thoroughly. Instead of having the undocumented behavior of throwing NPE from certain methods if they are not first guarded by enabled(), eliminate enabled() and return null from those methods.
- Add tests for parsing a push cert from a section of pkt-line stream using a real live stream captured with Wireshark (which, it should be noted, uncovered several simply incorrect statements in C git's Documentation/technical/pack-protocol.txt).

This is a slightly breaking API change to classes that were technically public and technically released in 4.0. However, it is highly unlikely that people were actually depending on public behavior, since there were no public methods to create PushCertificates with anything other than null field values, or a PushCertificateParser that did anything other than infinite loop or throw exceptions when reading.

Change-Id: I5382193347a8eb1811032d9b32af9651871372d0
9 years ago
maxObjectSizeLimit for receive-pack

ReceivePack (and PackParser) can be configured with a maxObjectSizeLimit in order to prevent users from pushing overly large objects to Git. The limit check is applied to all object types, although it is most likely that a BLOB will exceed the limit. In all cases the size of the object header is excluded from the object size checked against the limit, as this is the size a BLOB object would take in the working tree when checked out as a file.

When an object exceeds the maxObjectSizeLimit, the receive-pack will abort immediately. Delta objects (both offset and ref delta) are also checked against the limit. However, for delta objects we first check the size of the inflated delta block against the maxObjectSizeLimit and abort immediately if it exceeds the limit. In this case we do not even know the exact size of the resolved delta object, but we assume it will be larger than the given maxObjectSizeLimit, as a delta is generally only chosen if it can copy more data from the base object than it needs to insert or to represent the copy ranges. Aborting early in this case avoids unnecessary inflating of the (huge) delta block.

Unfortunately, it is too expensive (especially for a large delta) to compute the SHA-1 of an object that causes the receive-pack to abort. This would decrease the value of this feature, whose main purpose is to protect server resources from users pushing huge objects. Therefore we don't report the SHA-1 in the error message.

Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
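The size policy above reduces to a single comparison applied at two points: to a whole object's size, and to a delta's inflated block size before the delta is resolved. A minimal sketch (ObjectSizeCheck and checkLimit are hypothetical names; JGit's actual check lives inside PackParser):

```java
public class ObjectSizeCheck {
    // Compare a size (already excluding the object header) against the
    // configured limit; a non-positive limit disables the check. For
    // deltas, callers pass the inflated delta block size so a huge delta
    // aborts before it is ever resolved against its base.
    static void checkLimit(long sizeExcludingHeader, long maxObjectSizeLimit) {
        if (maxObjectSizeLimit > 0 && sizeExcludingHeader > maxObjectSizeLimit)
            throw new IllegalStateException("object exceeds size limit");
    }

    public static void main(String[] args) {
        long limit = 50L * 1024 * 1024;          // e.g. a 50 MiB limit
        checkLimit(10L * 1024 * 1024, limit);    // 10 MiB blob: accepted
        try {
            checkLimit(60L * 1024 * 1024, limit); // 60 MiB: push aborts
        } catch (IllegalStateException e) {
            System.out.println("push aborted: " + e.getMessage());
        }
    }
}
```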
12 years ago
Implement similarity based rename detection

Content similarity based rename detection is performed only after a linear time detection is performed using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename, which reduces the space that must be considered.

During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical.

Efficiently comparing two files is performed by building up two hash indexes, hashing lines or short blocks from each file and counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similar to what we use in ObjectIdSubclassMap. This permits us to have a very lightweight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself and the index table, but not per cell. This offers a massive space savings over using java.util.HashMap.

The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common by the length of the larger of the two input files.

Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents.
If memory becomes an issue, we could discard the table and copy all records over to a new array that is properly sized.

Building the index requires O(M + N log N) time, where M is the size of the input file in bytes and N is the number of unique lines/blocks in the file. The N log N time constraint comes from the sort of the index table that is necessary to perform linear time matching against another SimilarityIndex created for a different file.

To actually perform the rename detection, an SxD matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows.

An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames.

The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash values. We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files.

Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git.
The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix.

Bug: 318504
Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
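The two-ints-in-one-long trick described above (line/block key plus byte count per cell, stored in a bare long[] with no per-cell object headers) can be shown in isolation; PackedHashCell and its method names are illustrative, not the SimilarityIndex API:

```java
public class PackedHashCell {
    // Pack the line/block hash key into the high 32 bits and the byte
    // count into the low 32 bits of a single long, so the index table is
    // a plain long[] rather than a map of boxed objects.
    static long pack(int key, int byteCount) {
        return ((long) key << 32) | (byteCount & 0xFFFFFFFFL);
    }

    static int keyOf(long cell)   { return (int) (cell >>> 32); }
    static int countOf(long cell) { return (int) cell; }

    public static void main(String[] args) {
        long cell = pack(0x1234, 4096); // a block hashing to 0x1234, 4096 bytes
        System.out.println(keyOf(cell) + " " + countOf(cell)); // 4660 4096
    }
}
```

Compared with java.util.HashMap<Integer, Integer>, this avoids an Entry object, two boxed Integers, and their headers for every record, which is the "massive space savings" the message refers to.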
14 years ago
Increase core.streamFileThreshold default to 50 MiB

Projects like org.eclipse.mdt contain large XML files about 6 MiB in size. So does the Android project platform/frameworks/base. Doing a clone of either project with JGit takes forever to check out the files into the working directory, because delta decompression tends to be very expensive as we need to constantly reposition the base stream for each copy instruction. This can be made worse by a very bad ordering of offsets, possibly due to an XML editor that doesn't preserve the order of elements in the file very well.

Increasing the threshold to the same limit PackWriter uses when doing delta compression (50 MiB) permits a default configured JGit to decompress these XML file objects using the faster random-access arrays, rather than re-seeking through an inflate stream, significantly reducing checkout time after a clone.

Since this new limit may be dangerously close to the JVM maximum heap size, every allocation attempt is now wrapped in a try/catch so that JGit can degrade by switching to the large object stream mode when the allocation is refused. It will run slower, but the operation will still complete.

The large stream mode will run very well for big objects that aren't delta compressed, and is acceptable for delta compressed objects that use only forward referencing copy instructions. Copies using prior offsets are still going to be horrible, and there is nothing we can do about it except increase core.streamFileThreshold. We might in the future want to consider changing the way the delta generators work in JGit and native C Git to avoid prior offsets once an object reaches a certain size, even if that causes the delta instruction stream to be slightly larger. Unfortunately native C Git won't want to do that until it is also able to stream objects rather than malloc them as contiguous blocks.

Change-Id: Ief7a3896afce15073e80d3691bed90c6a3897307
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
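The degrade-gracefully policy described above (try the fast in-memory array, fall back to streaming when the JVM refuses the allocation) can be sketched as follows; AllocationFallback, loadMode, and the return strings are hypothetical, not JGit's actual classes:

```java
public class AllocationFallback {
    // Decide how to materialize an object: objects over the threshold go
    // straight to stream mode; under it, we attempt an array allocation
    // and degrade to stream mode if the JVM refuses it, instead of failing.
    static String loadMode(long size, int streamFileThreshold) {
        if (size > streamFileThreshold)
            return "stream";
        try {
            byte[] buf = new byte[(int) size]; // may throw OutOfMemoryError
            return "array[" + buf.length + "]";
        } catch (OutOfMemoryError tooBig) {
            return "stream"; // slower large-object stream mode, but completes
        }
    }

    public static void main(String[] args) {
        int threshold = 50 * 1024 * 1024; // the 50 MiB default described above
        System.out.println(loadMode(6L * 1024 * 1024, threshold));
        System.out.println(loadMode(200L * 1024 * 1024, threshold));
    }
}
```

Catching OutOfMemoryError is normally discouraged, but for a single bounded allocation with a well-defined fallback it is exactly the degradation path the commit message describes.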
13 years ago
Implement similarity based rename detection Content similarity based rename detection is performed only after a linear time detection is performed using exact content match on the ObjectIds. Any names which were paired up during that exact match phase are excluded from the inexact similarity based rename, which reduces the space that must be considered. During rename detection two entries cannot be marked as a rename if they are different types of files. This prevents a symlink from being renamed to a regular file, even if their blob content appears to be similar, or is identical. Efficiently comparing two files is performed by building up two hash indexes and hashing lines or short blocks from each file, counting the number of bytes that each line or block represents. Instead of using a standard java.util.HashMap, we use a custom open hashing scheme similiar to what we use in ObjecIdSubclassMap. This permits us to have a very light-weight hash, with very little memory overhead per cell stored. As we only need two ints per record in the map (line/block key and number of bytes), we collapse them into a single long inside of a long array, making very efficient use of available memory when we create the index table. We only need object headers for the index structure itself, and the index table, but not per-cell. This offers a massive space savings over using java.util.HashMap. The score calculation is done by approximating how many bytes are the same between the two inputs (which for a delta would be how much is copied from the base into the result). The score is derived by dividing the approximate number of bytes in common into the length of the larger of the two input files. Right now the SimilarityIndex table should average about 1/2 full, which means we waste about 50% of our memory on empty entries after we are done indexing a file and sort the table's contents. 
If memory becomes an issue we could discard the table and copy all records over to a new array that is properly sized. Building the index requires O(M + N log N) time, where M is the size of the input file in bytes, and N is the number of unique lines/blocks in the file. The N log N time constraint comes from the sort of the index table that is necessary to perform linear time matching against another SimilarityIndex created for a different file. To actually perform the rename detection, a SxD matrix is created, placing the sources (aka deletions) along one dimension and the destinations (aka additions) along the other. A simple O(S x D) loop examines every cell in this matrix. A SimilarityIndex is built along the row and reused for each column compare along that row, avoiding the costly index rebuild at the row level. A future improvement would be to load a smaller square matrix into SimilarityIndexes and process everything in that sub-matrix before discarding the column dimension and moving down to the next sub-matrix block along that same grid of rows. An optional ProgressMonitor is permitted to be passed in, allowing applications to see the progress of the detector as it works through the matrix cells. This provides some indication of current status for very long running renames. The default line/block hash function used by the SimilarityIndex may not be optimal, and may produce too many collisions. It is borrowed from RawText's hash, which is used to quickly skip out of a longer equality test if two lines have different hash functions. We may need to refine this hash in the future, in order to minimize the number of collisions we get on common source files. Based on a handful of test commits in JGit (especially my own recent rename repository refactoring series), this rename detector produces output that is very close to C Git. 
The content similarity scores are sometimes off by 1%, which is most probably caused by our SimilarityIndex type using a different hash function than C Git uses when it computes the delta size between any two objects in the rename matrix. Bug: 318504 Change-Id: I11dff969e8a2e4cf252636d857d2113053bdd9dc Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
#
# Messages with format elements ({0}) are processed using java.text.MessageFormat.
#
abbreviationLengthMustBeNonNegative=Abbreviation length must not be negative.
abortingRebase=Aborting rebase: resetting to {0}
abortingRebaseFailed=Could not abort rebase
abortingRebaseFailedNoOrigHead=Could not abort rebase since ORIG_HEAD is null
advertisementCameBefore=advertisement of {0}^'{}' came before {1}
advertisementOfCameBefore=advertisement of {0}^'{}' came before {1}
amazonS3ActionFailed={0} of ''{1}'' failed: {2} {3}
amazonS3ActionFailedGivingUp={0} of ''{1}'' failed: Giving up after {2} attempts.
ambiguousObjectAbbreviation=Object abbreviation {0} is ambiguous
aNewObjectIdIsRequired=A NewObjectId is required.
anExceptionOccurredWhileTryingToAddTheIdOfHEAD=An exception occurred while trying to add the Id of HEAD
anSSHSessionHasBeenAlreadyCreated=An SSH session has been already created
applyingCommit=Applying {0}
archiveFormatAlreadyAbsent=Archive format already absent: {0}
archiveFormatAlreadyRegistered=Archive format already registered with different implementation: {0}
argumentIsNotAValidCommentString=Invalid comment: {0}
atLeastOnePathIsRequired=At least one path is required.
atLeastOnePatternIsRequired=At least one pattern is required.
atLeastTwoFiltersNeeded=At least two filters needed.
authenticationNotSupported=authentication not supported
badBase64InputCharacterAt=Bad Base64 input character at {0} : {1} (decimal)
badEntryDelimiter=Bad entry delimiter
badEntryName=Bad entry name: {0}
badEscape=Bad escape: {0}
badGroupHeader=Bad group header
badObjectType=Bad object type: {0}
badRef=Bad ref: {0}: {1}
badSectionEntry=Bad section entry: {0}
bareRepositoryNoWorkdirAndIndex=Bare Repository has neither a working tree, nor an index
base64InputNotProperlyPadded=Base64 input not properly padded.
baseLengthIncorrect=base length incorrect
bitmapMissingObject=Bitmap at {0} is missing {1}.
bitmapsMustBePrepared=Bitmaps must be prepared before they may be written.
blameNotCommittedYet=Not Committed Yet
blobNotFound=Blob not found: {0}
blobNotFoundForPath=Blob not found: {0} for path: {1}
branchNameInvalid=Branch name {0} is not allowed
buildingBitmaps=Building bitmaps
cachedPacksPreventsIndexCreation=Using cached packs prevents index creation
cachedPacksPreventsListingObjects=Using cached packs prevents listing objects
cannotBeCombined=Cannot be combined.
cannotBeRecursiveWhenTreesAreIncluded=TreeWalk shouldn't be recursive when tree objects are included.
cannotChangeActionOnComment=Cannot change action on comment line in git-rebase-todo file, old action: {0}, new action: {1}.
cannotChangeToComment=Cannot change a non-comment line to a comment line.
cannotCheckoutOursSwitchBranch=Checking out ours/theirs is only possible when checking out index, not when switching branches.
cannotCombineSquashWithNoff=Cannot combine --squash with --no-ff.
cannotCombineTreeFilterWithRevFilter=Cannot combine TreeFilter {0} with RevFilter {1}.
cannotCommitOnARepoWithState=Cannot commit on a repo with state: {0}
cannotCommitWriteTo=Cannot commit write to {0}
cannotConnectPipes=cannot connect pipes
cannotConvertScriptToText=Cannot convert script to text
cannotCreateConfig=cannot create config
cannotCreateDirectory=Cannot create directory {0}
cannotCreateHEAD=cannot create HEAD
cannotCreateIndexfile=Cannot create an index file with name {0}
cannotCreateTempDir=Cannot create a temp dir
cannotDeleteCheckedOutBranch=Branch {0} is checked out and cannot be deleted
cannotDeleteFile=Cannot delete file: {0}
cannotDeleteObjectsPath=Cannot delete {0}/{1}: {2}
cannotDeleteStaleTrackingRef=Cannot delete stale tracking ref {0}
cannotDeleteStaleTrackingRef2=Cannot delete stale tracking ref {0}: {1}
cannotDetermineProxyFor=Cannot determine proxy for {0}
cannotDownload=Cannot download {0}
cannotEnterObjectsPath=Cannot enter {0}/objects: {1}
cannotEnterPathFromParent=Cannot enter {0} from {1}: {2}
cannotExecute=cannot execute: {0}
cannotGet=Cannot get {0}
cannotGetObjectsPath=Cannot get {0}/{1}: {2}
cannotListObjectsPath=Cannot ls {0}/{1}: {2}
cannotListPackPath=Cannot ls {0}/pack: {1}
cannotListRefs=cannot list refs
cannotLock=Cannot lock {0}
cannotLockPackIn=Cannot lock pack in {0}
cannotMatchOnEmptyString=Cannot match on empty string.
cannotMkdirObjectPath=Cannot mkdir {0}/{1}: {2}
cannotMoveIndexTo=Cannot move index to {0}
cannotMovePackTo=Cannot move pack to {0}
cannotOpenService=cannot open {0}
cannotParseDate=The date specification "{0}" could not be parsed with the following formats: {1}
cannotParseGitURIish=Cannot parse Git URI-ish
cannotPullOnARepoWithState=Cannot pull into a repository with state: {0}
cannotRead=Cannot read {0}
cannotReadBlob=Cannot read blob {0}
cannotReadCommit=Cannot read commit {0}
cannotReadFile=Cannot read file {0}
cannotReadHEAD=cannot read HEAD: {0} {1}
cannotReadObject=Cannot read object
cannotReadObjectsPath=Cannot read {0}/{1}: {2}
cannotReadTree=Cannot read tree {0}
cannotRebaseWithoutCurrentHead=Cannot rebase without a current HEAD
cannotResolveLocalTrackingRefForUpdating=Cannot resolve local tracking ref {0} for updating.
cannotSquashFixupWithoutPreviousCommit=Cannot {0} without previous commit.
cannotStoreObjects=cannot store objects
cannotResolveUniquelyAbbrevObjectId=Could not resolve uniquely the abbreviated object ID
cannotUnloadAModifiedTree=Cannot unload a modified tree.
cannotWorkWithOtherStagesThanZeroRightNow=Cannot work with other stages than zero right now. Won't write corrupt index.
cannotWriteObjectsPath=Cannot write {0}/{1}: {2}
canOnlyCherryPickCommitsWithOneParent=Cannot cherry-pick commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported.
canOnlyRevertCommitsWithOneParent=Cannot revert commit ''{0}'' because it has {1} parents, only commits with exactly one parent are supported
commitDoesNotHaveGivenParent=The commit ''{0}'' does not have a parent number {1}.
cantFindObjectInReversePackIndexForTheSpecifiedOffset=Can''t find object in (reverse) pack index for the specified offset {0}
cantPassMeATree=Can't pass me a tree!
channelMustBeInRange1_255=channel {0} must be in range [1, 255]
characterClassIsNotSupported=The character class {0} is not supported.
checkoutConflictWithFile=Checkout conflict with file: {0}
checkoutConflictWithFiles=Checkout conflict with files: {0}
checkoutUnexpectedResult=Checkout returned unexpected result {0}
classCastNotA=Not a {0}
cloneNonEmptyDirectory=Destination path "{0}" already exists and is not an empty directory
collisionOn=Collision on {0}
commandRejectedByHook=Rejected by "{0}" hook.\n{1}
commandWasCalledInTheWrongState=Command {0} was called in the wrong state
commitAlreadyExists=exists {0}
commitMessageNotSpecified=commit message not specified
commitOnRepoWithoutHEADCurrentlyNotSupported=Commit on repo without HEAD currently not supported
commitAmendOnInitialNotPossible=Amending is not possible on initial commit.
compressingObjects=Compressing objects
connectionFailed=connection failed
connectionTimeOut=Connection time out: {0}
contextMustBeNonNegative=context must be >= 0
corruptionDetectedReReadingAt=Corruption detected re-reading at {0}
corruptObjectBadStream=bad stream
corruptObjectBadStreamCorruptHeader=bad stream, corrupt header
corruptObjectDuplicateEntryNames=duplicate entry names
corruptObjectGarbageAfterSize=garbage after size
corruptObjectIncorrectLength=incorrect length
corruptObjectIncorrectSorting=incorrectly sorted
corruptObjectInvalidAuthor=invalid author
corruptObjectInvalidCommitter=invalid committer
corruptObjectInvalidEntryMode=invalid entry mode
corruptObjectInvalidMode=invalid mode
corruptObjectInvalidModeChar=invalid mode character
corruptObjectInvalidModeStartsZero=mode starts with '0'
corruptObjectInvalidMode2=invalid mode {0,number,#}
corruptObjectInvalidMode3=invalid mode {0} for {1} ''{2}'' in {3}.
corruptObjectInvalidName=invalid name '%s'
corruptObjectInvalidNameAux=invalid name 'AUX'
corruptObjectInvalidNameCon=invalid name 'CON'
corruptObjectInvalidNameCom=invalid name 'COM%c'
corruptObjectInvalidNameEnd=invalid name ends with '%c'
corruptObjectInvalidNameIgnorableUnicode=invalid name '%s' contains ignorable Unicode characters
corruptObjectInvalidNameInvalidUtf8=invalid name contains byte sequence ''{0}'' which is not a valid UTF-8 character
corruptObjectInvalidNameLpt=invalid name 'LPT%c'
corruptObjectInvalidNameNul=invalid name 'NUL'
corruptObjectInvalidNamePrn=invalid name 'PRN'
corruptObjectInvalidObject=invalid object
corruptObjectInvalidParent=invalid parent
corruptObjectInvalidTagger=invalid tagger
corruptObjectInvalidTree=invalid tree
corruptObjectInvalidType=invalid type
corruptObjectInvalidType2=invalid type {0}
corruptObjectMalformedHeader=malformed header: {0}
corruptObjectNameContainsByte=name contains byte 0x%x
corruptObjectNameContainsChar=name contains '%c'
corruptObjectNameContainsNullByte=name contains byte 0x00
corruptObjectNameContainsSlash=name contains '/'
corruptObjectNameDot=invalid name '.'
corruptObjectNameDotDot=invalid name '..'
corruptObjectNameZeroLength=zero length name
corruptObjectNegativeSize=negative size
corruptObjectNoAuthor=no author
corruptObjectNoCommitter=no committer
corruptObjectNoHeader=no header
corruptObjectNoObject=no object
corruptObjectNoObjectHeader=no object header
corruptObjectNoTaggerBadHeader=no tagger/bad header
corruptObjectNoTaggerHeader=no tagger header
corruptObjectNoTagHeader=no tag header
corruptObjectNoTagName=no tag name
corruptObjectNotree=no tree
corruptObjectNotreeHeader=no tree header
corruptObjectNoType=no type
corruptObjectNoTypeHeader=no type header
corruptObjectPackfileChecksumIncorrect=Packfile checksum incorrect.
corruptObjectTruncatedInMode=truncated in mode
corruptObjectTruncatedInName=truncated in name
corruptObjectTruncatedInObjectId=truncated in object id
couldNotCheckOutBecauseOfConflicts=Could not check out because of conflicts
couldNotDeleteLockFileShouldNotHappen=Could not delete lock file. Should not happen
couldNotDeleteTemporaryIndexFileShouldNotHappen=Could not delete temporary index file. Should not happen
couldNotGetAdvertisedRef=Remote {0} did not advertise Ref for branch {1}. This Ref may not exist in the remote or may be hidden by permission settings.
couldNotGetRepoStatistics=Could not get repository statistics
couldNotLockHEAD=Could not lock HEAD
couldNotReadIndexInOneGo=Could not read index in one go, only {0} out of {1} read
couldNotReadObjectWhileParsingCommit=Could not read an object while parsing commit {0}
couldNotRenameDeleteOldIndex=Could not rename delete old index
couldNotRenameTemporaryFile=Could not rename temporary file {0} to new location {1}
couldNotRenameTemporaryIndexFileToIndex=Could not rename temporary index file to index
couldNotRewindToUpstreamCommit=Could not rewind to upstream commit
couldNotURLEncodeToUTF8=Could not URL encode to UTF-8
couldNotWriteFile=Could not write file {0}
countingObjects=Counting objects
corruptPack=Pack file {0} is corrupt, removing it from pack list
createBranchFailedUnknownReason=Create branch failed for unknown reason
createBranchUnexpectedResult=Create branch returned unexpected result {0}
createNewFileFailed=Could not create new file {0}
credentialPassword=Password
credentialUsername=Username
daemonAlreadyRunning=Daemon already running
daysAgo={0} days ago
deleteBranchUnexpectedResult=Delete branch returned unexpected result {0}
deleteFileFailed=Could not delete file {0}
deleteTagUnexpectedResult=Delete tag returned unexpected result {0}
deletingNotSupported=Deleting {0} not supported.
destinationIsNotAWildcard=Destination is not a wildcard.
detachedHeadDetected=HEAD is detached
dirCacheDoesNotHaveABackingFile=DirCache does not have a backing file
dirCacheFileIsNotLocked=DirCache {0} not locked
dirCacheIsNotLocked=DirCache is not locked
DIRCChecksumMismatch=DIRC checksum mismatch
DIRCExtensionIsTooLargeAt=DIRC extension {0} is too large at {1} bytes.
DIRCExtensionNotSupportedByThisVersion=DIRC extension {0} not supported by this version.
DIRCHasTooManyEntries=DIRC has too many entries.
DIRCUnrecognizedExtendedFlags=Unrecognized extended flags: {0}
dirtyFilesExist=Dirty files exist. Refusing to merge
doesNotHandleMode=Does not handle mode {0} ({1})
downloadCancelled=Download cancelled
downloadCancelledDuringIndexing=Download cancelled during indexing
duplicateAdvertisementsOf=duplicate advertisements of {0}
duplicateRef=Duplicate ref: {0}
duplicateRemoteRefUpdateIsIllegal=Duplicate remote ref update is illegal. Affected remote name: {0}
duplicateStagesNotAllowed=Duplicate stages not allowed
eitherGitDirOrWorkTreeRequired=One of setGitDir or setWorkTree must be called.
emptyCommit=No changes
emptyPathNotPermitted=Empty path not permitted.
emptyRef=Empty ref: {0}
encryptionError=Encryption error: {0}
endOfFileInEscape=End of file in escape
entryNotFoundByPath=Entry not found by path: {0}
enumValueNotSupported2=Invalid value: {0}.{1}={2}
enumValueNotSupported3=Invalid value: {0}.{1}.{2}={3}
enumValuesNotAvailable=Enumerated values of type {0} not available
errorDecodingFromFile=Error decoding from file {0}
errorEncodingFromFile=Error encoding from file {0}
errorInBase64CodeReadingStream=Error in Base64 code reading stream.
errorInPackedRefs=error in packed-refs
errorInvalidProtocolWantedOldNewRef=error: invalid protocol: wanted 'old new ref'
errorListing=Error listing {0}
errorOccurredDuringUnpackingOnTheRemoteEnd=error occurred during unpacking on the remote end: {0}
errorReadingInfoRefs=error reading info/refs
exceptionCaughtDuringExecutionOfHook=Exception caught during execution of "{0}" hook.
exceptionCaughtDuringExecutionOfAddCommand=Exception caught during execution of add command
exceptionCaughtDuringExecutionOfArchiveCommand=Exception caught during execution of archive command
exceptionCaughtDuringExecutionOfCherryPickCommand=Exception caught during execution of cherry-pick command. {0}
exceptionCaughtDuringExecutionOfCommitCommand=Exception caught during execution of commit command
exceptionCaughtDuringExecutionOfFetchCommand=Exception caught during execution of fetch command
exceptionCaughtDuringExecutionOfLsRemoteCommand=Exception caught during execution of ls-remote command
exceptionCaughtDuringExecutionOfMergeCommand=Exception caught during execution of merge command. {0}
exceptionCaughtDuringExecutionOfPullCommand=Exception caught during execution of pull command
exceptionCaughtDuringExecutionOfPushCommand=Exception caught during execution of push command
exceptionCaughtDuringExecutionOfResetCommand=Exception caught during execution of reset command. {0}
exceptionCaughtDuringExecutionOfRevertCommand=Exception caught during execution of revert command. {0}
exceptionCaughtDuringExecutionOfRmCommand=Exception caught during execution of rm command
exceptionCaughtDuringExecutionOfTagCommand=Exception caught during execution of tag command
exceptionCaughtDuringExcecutionOfCommand=Exception caught during execution of command {0} in {1}
exceptionHookExecutionInterrupted=Execution of "{0}" hook interrupted.
exceptionOccurredDuringAddingOfOptionToALogCommand=Exception occurred during adding of {0} as option to a Log command
exceptionOccurredDuringReadingOfGIT_DIR=Exception occurred during reading of $GIT_DIR/{0}. {1}
exceptionWhileReadingPack=ERROR: Exception caught while accessing pack file {0}, the pack file might be corrupt
expectedACKNAKFoundEOF=Expected ACK/NAK, found EOF
expectedACKNAKGot=Expected ACK/NAK, got: {0}
expectedBooleanStringValue=Expected boolean string value
expectedCharacterEncodingGuesses=Expected {0} character encoding guesses
expectedEOFReceived=expected EOF; received ''{0}'' instead
expectedGot=expected ''{0}'', got ''{1}''
expectedLessThanGot=expected less than ''{0}'', got ''{1}''
expectedPktLineWithService=expected pkt-line with ''# service=-'', got ''{0}''
expectedReceivedContentType=expected Content-Type {0}; received Content-Type {1}
expectedReportForRefNotReceived={0}: expected report for ref {1} not received
failedUpdatingRefs=failed updating refs
failureDueToOneOfTheFollowing=Failure due to one of the following:
failureUpdatingFETCH_HEAD=Failure updating FETCH_HEAD: {0}
failureUpdatingTrackingRef=Failure updating tracking ref {0}: {1}
fileCannotBeDeleted=File cannot be deleted: {0}
fileIsTooBigForThisConvenienceMethod=File is too big for this convenience method ({0} bytes).
fileIsTooLarge=File is too large: {0}
fileModeNotSetForPath=FileMode not set for path {0}
findingGarbage=Finding garbage
flagIsDisposed={0} is disposed.
flagNotFromThis={0} not from this.
flagsAlreadyCreated={0} flags already created.
funnyRefname=funny refname
gcFailed=Garbage collection failed.
gitmodulesNotFound=.gitmodules not found in tree.
headRequiredToStash=HEAD required to stash local changes
hoursAgo={0} hours ago
hugeIndexesAreNotSupportedByJgitYet=Huge indexes are not supported by jgit, yet
hunkBelongsToAnotherFile=Hunk belongs to another file
hunkDisconnectedFromFile=Hunk disconnected from file
hunkHeaderDoesNotMatchBodyLineCountOf=Hunk header {0} does not match body line count of {1}
illegalArgumentNotA=Not {0}
illegalCombinationOfArguments=The combination of arguments {0} and {1} is not allowed
illegalPackingPhase=Illegal packing phase {0}
illegalStateExists=exists {0}
improperlyPaddedBase64Input=Improperly padded Base64 input.
incorrectHashFor=Incorrect hash for {0}; computed {1} as a {2} from {3} bytes.
incorrectOBJECT_ID_LENGTH=Incorrect OBJECT_ID_LENGTH.
indexFileCorruptedNegativeBucketCount=Invalid negative bucket count read from pack v2 index file: {0}
indexFileIsInUse=Index file is in use
indexFileIsTooLargeForJgit=Index file is too large for jgit
indexSignatureIsInvalid=Index signature is invalid: {0}
indexWriteException=Modified index could not be written
initFailedBareRepoDifferentDirs=When initializing a bare repo with directory {0} and separate git-dir {1} specified both folders must point to the same location
initFailedNonBareRepoSameDirs=When initializing a non-bare repo with directory {0} and separate git-dir {1} specified both folders should not point to the same location
inMemoryBufferLimitExceeded=In-memory buffer limit exceeded
inputDidntMatchLength=Input did not match supplied length. {0} bytes are missing.
inputStreamMustSupportMark=InputStream must support mark()
integerValueOutOfRange=Integer value {0}.{1} out of range
internalRevisionError=internal revision error
internalServerError=internal server error
interruptedWriting=Interrupted writing {0}
inTheFuture=in the future
invalidAdvertisementOf=invalid advertisement of {0}
invalidAncestryLength=Invalid ancestry length
invalidBooleanValue=Invalid boolean value: {0}.{1}={2}
invalidChannel=Invalid channel {0}
invalidCharacterInBase64Data=Invalid character in Base64 data.
invalidCommitParentNumber=Invalid commit parent number
invalidEncryption=Invalid encryption
invalidGitdirRef = Invalid .git reference in file ''{0}''
invalidGitType=invalid git type: {0}
invalidId=Invalid id: {0}
invalidId0=Invalid id
invalidIdLength=Invalid id length {0}; should be {1}
invalidIgnoreParamSubmodule=Found invalid ignore param for submodule {0}.
invalidIgnoreRule=Exception caught while parsing ignore rule ''{0}''.
invalidIntegerValue=Invalid integer value: {0}.{1}={2}
invalidKey=Invalid key: {0}
invalidLineInConfigFile=Invalid line in config file
invalidModeFor=Invalid mode {0} for {1} {2} in {3}.
invalidModeForPath=Invalid mode {0} for path {1}
invalidObject=Invalid {0} {1}: {2}
invalidOldIdSent=invalid old id sent
invalidPacketLineHeader=Invalid packet line header: {0}
invalidPath=Invalid path: {0}
invalidPathContainsSeparator=Invalid path (contains separator ''{0}''): {1}
invalidPathPeriodAtEndWindows=Invalid path (period at end is ignored by Windows): {0}
invalidPathSpaceAtEndWindows=Invalid path (space at end is ignored by Windows): {0}
invalidPathReservedOnWindows=Invalid path (''{0}'' is reserved on Windows): {1}
invalidReflogRevision=Invalid reflog revision: {0}
invalidRefName=Invalid ref name: {0}
invalidRemote=Invalid remote: {0}
invalidShallowObject=invalid shallow object {0}, expected commit
invalidStageForPath=Invalid stage {0} for path {1}
invalidTagOption=Invalid tag option: {0}
invalidTimeout=Invalid timeout: {0}
invalidURL=Invalid URL {0}
invalidWildcards=Invalid wildcards {0}
invalidRefSpec=Invalid refspec {0}
invalidWindowSize=Invalid window size
isAStaticFlagAndHasNorevWalkInstance={0} is a static flag and has no RevWalk instance
JRELacksMD5Implementation=JRE lacks MD5 implementation
kNotInRange=k {0} not in {1} - {2}
largeObjectExceedsByteArray=Object {0} exceeds 2 GiB byte array limit
largeObjectExceedsLimit=Object {0} exceeds {1} limit, actual size is {2}
largeObjectException={0} exceeds size limit
largeObjectOutOfMemory=Out of memory loading {0}
lengthExceedsMaximumArraySize=Length exceeds maximum array size
listingAlternates=Listing alternates
listingPacks=Listing packs
localObjectsIncomplete=Local objects incomplete.
localRefIsMissingObjects=Local ref {0} is missing object(s).
localRepository=local repository
lockCountMustBeGreaterOrEqual1=lockCount must be >= 1
lockError=lock error: {0}
lockOnNotClosed=Lock on {0} not closed.
lockOnNotHeld=Lock on {0} not held.
malformedpersonIdentString=Malformed PersonIdent string (no < was found): {0}
maxCountMustBeNonNegative=max count must be >= 0
mergeConflictOnNonNoteEntries=Merge conflict on non-note entries: base = {0}, ours = {1}, theirs = {2}
mergeConflictOnNotes=Merge conflict on note {0}. base = {1}, ours = {2}, theirs = {3}
mergeStrategyAlreadyExistsAsDefault=Merge strategy "{0}" already exists as a default strategy
mergeStrategyDoesNotSupportHeads=merge strategy {0} does not support {1} heads to be merged into HEAD
mergeUsingStrategyResultedInDescription=Merge of revisions {0} with base {1} using strategy {2} resulted in: {3}. {4}
mergeRecursiveConflictsWhenMergingCommonAncestors=Multiple common ancestors were found and merging them resulted in a conflict: {0}, {1}
mergeRecursiveReturnedNoCommit=Merge returned no commit:\n Depth {0}\n Head one {1}\n Head two {2}
mergeRecursiveTooManyMergeBasesFor = "More than {0} merge bases for:\n a {1}\n b {2} found:\n count {3}"
messageAndTaggerNotAllowedInUnannotatedTags = Unannotated tags cannot have a message or tagger
minutesAgo={0} minutes ago
missingAccesskey=Missing accesskey.
missingConfigurationForKey=No value for key {0} found in configuration
missingDeltaBase=delta base
missingForwardImageInGITBinaryPatch=Missing forward-image in GIT binary patch
missingObject=Missing {0} {1}
missingPrerequisiteCommits=missing prerequisite commits:
missingRequiredParameter=Parameter "{0}" is missing
missingSecretkey=Missing secretkey.
mixedStagesNotAllowed=Mixed stages not allowed
mkDirFailed=Creating directory {0} failed
mkDirsFailed=Creating directories for {0} failed
month=month
months=months
monthsAgo={0} months ago
multipleMergeBasesFor=Multiple merge bases for:\n {0}\n {1} found:\n {2}\n {3}
need2Arguments=Need 2 arguments
needPackOut=need packOut
needsAtLeastOneEntry=Needs at least one entry
needsWorkdir=Needs workdir
newlineInQuotesNotAllowed=Newline in quotes not allowed
noApplyInDelete=No apply in delete
noClosingBracket=No closing {0} found for {1} at index {2}.
noCredentialsProvider=Authentication is required but no CredentialsProvider has been registered
noHEADExistsAndNoExplicitStartingRevisionWasSpecified=No HEAD exists and no explicit starting revision was specified
noHMACsupport=No {0} support: {1}
noMergeBase=No merge base could be determined. Reason={0}. {1}
noMergeHeadSpecified=No merge head specified
noSuchRef=no such ref
notABoolean=Not a boolean: {0}
notABundle=not a bundle
notADIRCFile=Not a DIRC file.
notAGitDirectory=not a git directory
notAPACKFile=Not a PACK file.
notARef=Not a ref: {0}: {1}
notASCIIString=Not ASCII string: {0}
notAuthorized=not authorized
notAValidPack=Not a valid pack {0}
notFound=not found.
nothingToFetch=Nothing to fetch.
nothingToPush=Nothing to push.
notMergedExceptionMessage=Branch was not deleted as it has not been merged yet; use the force option to delete it anyway
noXMLParserAvailable=No XML parser available.
objectAtHasBadZlibStream=Object at {0} in {1} has bad zlib stream
objectAtPathDoesNotHaveId=Object at path "{0}" does not have an id assigned. All object ids must be assigned prior to writing a tree.
objectIsCorrupt=Object {0} is corrupt: {1}
objectIsNotA=Object {0} is not a {1}.
objectNotFound=Object {0} not found.
objectNotFoundIn=Object {0} not found in {1}.
obtainingCommitsForCherryPick=Obtaining commits that need to be cherry-picked
offsetWrittenDeltaBaseForObjectNotFoundInAPack=Offset-written delta base for object not found in a pack
onlyAlreadyUpToDateAndFastForwardMergesAreAvailable=only already-up-to-date and fast forward merges are available
onlyOneFetchSupported=Only one fetch supported
onlyOneOperationCallPerConnectionIsSupported=Only one operation call per connection is supported.
openFilesMustBeAtLeast1=Open files must be >= 1
openingConnection=Opening connection
operationCanceled=Operation {0} was canceled
outputHasAlreadyBeenStarted=Output has already been started.
packChecksumMismatch=Pack checksum mismatch detected for pack file {0}
packCorruptedWhileWritingToFilesystem=Pack corrupted while writing to filesystem
packDoesNotMatchIndex=Pack {0} does not match index
packedRefsHandleIsStale=packed-refs handle is stale, {0}. retry
packetSizeMustBeAtLeast=packet size {0} must be >= {1}
packetSizeMustBeAtMost=packet size {0} must be <= {1}
packfileCorruptionDetected=Packfile corruption detected: {0}
packFileInvalid=Pack file invalid: {0}
packfileIsTruncated=Packfile {0} is truncated.
packfileIsTruncatedNoParam=Packfile is truncated.
packHandleIsStale=Pack file {0} handle is stale, removing it from pack list
packHasUnresolvedDeltas=pack has unresolved deltas
packingCancelledDuringObjectsWriting=Packing cancelled during objects writing
packObjectCountMismatch=Pack object count mismatch: pack {0} index {1}: {2}
packRefs=Pack refs
packSizeNotSetYet=Pack size not yet set since it has not yet been received
packTooLargeForIndexVersion1=Pack too large for index version 1
packWasDeleted=Pack file {0} was deleted, removing it from pack list
packWriterStatistics=Total {0,number,#0} (delta {1,number,#0}), reused {2,number,#0} (delta {3,number,#0})
panicCantRenameIndexFile=Panic: index file {0} must be renamed to replace {1}; until then repository is corrupt
patchApplyException=Cannot apply: {0}
patchFormatException=Format error: {0}
pathIsNotInWorkingDir=Path is not in working dir
pathNotConfigured=Submodule path is not configured
peeledLineBeforeRef=Peeled line before ref.
peerDidNotSupplyACompleteObjectGraph=peer did not supply a complete object graph
personIdentEmailNonNull=E-mail address of PersonIdent must not be null.
personIdentNameNonNull=Name of PersonIdent must not be null.
prefixRemote=remote:
problemWithResolvingPushRefSpecsLocally=Problem with resolving push ref specs locally: {0}
progressMonUploading=Uploading {0}
propertyIsAlreadyNonNull=Property is already non null
pruneLoosePackedObjects=Prune loose objects also found in pack files
pruneLooseUnreferencedObjects=Prune loose, unreferenced objects
pullOnRepoWithoutHEADCurrentlyNotSupported=Pull on repository without HEAD currently not supported
pullTaskName=Pull
pushCancelled=push cancelled
pushCertificateInvalidField=Push certificate has missing or invalid value for {0}
pushCertificateInvalidFieldValue=Push certificate has missing or invalid value for {0}: {1}
pushCertificateInvalidHeader=Push certificate has invalid header format
pushCertificateInvalidSignature=Push certificate has invalid signature format
pushIsNotSupportedForBundleTransport=Push is not supported for bundle transport
pushNotPermitted=push not permitted
rawLogMessageDoesNotParseAsLogEntry=Raw log message does not parse as log entry
readingObjectsFromLocalRepositoryFailed=reading objects from local repository failed: {0}
readTimedOut=Read timed out after {0} ms
receivePackObjectTooLarge1=Object too large, rejecting the pack. Max object size limit is {0} bytes.
receivePackObjectTooLarge2=Object too large ({0} bytes), rejecting the pack. Max object size limit is {1} bytes.
receivePackInvalidLimit=Illegal limit parameter value {0}
receivePackTooLarge=Pack exceeds the limit of {0} bytes, rejecting the pack
receivingObjects=Receiving objects
refAlreadyExists=already exists
refAlreadyExists1=Ref {0} already exists
reflogEntryNotFound=Entry {0} not found in reflog for ''{1}''
refNotResolved=Ref {0} cannot be resolved
refUpdateReturnCodeWas=RefUpdate return code was: {0}
remoteConfigHasNoURIAssociated=Remote config "{0}" has no URIs associated
remoteDoesNotHaveSpec=Remote does not have {0} available for fetch.
remoteDoesNotSupportSmartHTTPPush=remote does not support smart HTTP push
remoteHungUpUnexpectedly=remote hung up unexpectedly
remoteNameCantBeNull=Remote name can't be null.
renameBranchFailedBecauseTag=Cannot rename, as ref {0} is a tag
renameBranchFailedUnknownReason=Rename failed with unknown reason
renameBranchUnexpectedResult=Unexpected rename result {0}
renameFileFailed=Could not rename file {0} to {1}
renamesAlreadyFound=Renames have already been found.
renamesBreakingModifies=Breaking apart modified file pairs
renamesFindingByContent=Finding renames by content similarity
renamesFindingExact=Finding exact renames
renamesRejoiningModifies=Rejoining modified file pairs
repositoryAlreadyExists=Repository already exists: {0}
repositoryConfigFileInvalid=Repository config file {0} invalid {1}
repositoryIsRequired=Repository is required.
repositoryNotFound=repository not found: {0}
repositoryState_applyMailbox=Apply mailbox
repositoryState_bare=Bare
repositoryState_bisecting=Bisecting
repositoryState_conflicts=Conflicts
repositoryState_merged=Merged
repositoryState_normal=Normal
repositoryState_rebase=Rebase
repositoryState_rebaseInteractive=Rebase interactive
repositoryState_rebaseOrApplyMailbox=Rebase/Apply mailbox
repositoryState_rebaseWithMerge=Rebase w/merge
requiredHashFunctionNotAvailable=Required hash function {0} not available.
resettingHead=Resetting head to {0}
resolvingDeltas=Resolving deltas
resultLengthIncorrect=result length incorrect
rewinding=Rewinding to commit {0}
s3ActionDeletion=Deletion
s3ActionReading=Reading
s3ActionWriting=Writing
searchForReuse=Finding sources
searchForSizes=Getting sizes
secondsAgo={0} seconds ago
selectingCommits=Selecting commits
sequenceTooLargeForDiffAlgorithm=Sequence too large for difference algorithm.
serviceNotEnabledNoName=Service not enabled
serviceNotPermitted={0} not permitted
shallowCommitsAlreadyInitialized=Shallow commits have already been initialized
shortCompressedStreamAt=Short compressed stream at {0}
shortReadOfBlock=Short read of block.
shortReadOfOptionalDIRCExtensionExpectedAnotherBytes=Short read of optional DIRC extension {0}; expected another {1} bytes within the section.
shortSkipOfBlock=Short skip of block.
signingNotSupportedOnTag=Signing isn't supported on tag operations yet.
similarityScoreMustBeWithinBounds=Similarity score must be between 0 and 100.
sizeExceeds2GB=Path {0} size {1} exceeds 2 GiB limit.
skipMustBeNonNegative=skip must be >= 0
smartHTTPPushDisabled=smart HTTP push disabled
sourceDestinationMustMatch=Source/Destination must match.
sourceIsNotAWildcard=Source is not a wildcard.
sourceRefDoesntResolveToAnyObject=Source ref {0} doesn''t resolve to any object.
sourceRefNotSpecifiedForRefspec=Source ref not specified for refspec: {0}
squashCommitNotUpdatingHEAD=Squash commit -- not updating HEAD
staleRevFlagsOn=Stale RevFlags on {0}
startingReadStageWithoutWrittenRequestDataPendingIsNotSupported=Starting read stage without written request data pending is not supported
stashApplyConflict=Applying stashed changes resulted in a conflict
stashApplyConflictInIndex=Applying stashed index changes resulted in a conflict. Dropped index changes.
stashApplyFailed=Applying stashed changes did not successfully complete
stashApplyOnUnsafeRepository=Cannot apply stashed commit on a repository with state: {0}
stashApplyWithoutHead=Cannot apply stashed commit in an empty repository or onto an unborn branch
stashCommitIncorrectNumberOfParents=Stashed commit ''{0}'' has {1} parent commits instead of 2 or 3.
stashDropDeleteRefFailed=Deleting stash reference failed with result: {0}
stashDropFailed=Dropping stashed commit failed
stashDropMissingReflog=Stash reflog does not contain entry ''{0}''
stashFailed=Stashing local changes did not successfully complete
stashResolveFailed=Reference ''{0}'' does not resolve to stashed commit
statelessRPCRequiresOptionToBeEnabled=stateless RPC requires {0} to be enabled
storePushCertMultipleRefs=Store push certificate for {0} refs
storePushCertOneRef=Store push certificate for {0}
storePushCertReflog=Store push certificate
submoduleExists=Submodule ''{0}'' already exists in the index
submoduleParentRemoteUrlInvalid=Cannot remove segment from remote url ''{0}''
submodulesNotSupported=Submodules are not supported
supportOnlyPackIndexVersion2=Only pack index version 2 is supported
symlinkCannotBeWrittenAsTheLinkTarget=Symlink "{0}" cannot be written as the link target cannot be read from within Java.
systemConfigFileInvalid=System wide config file {0} is invalid {1}
tagAlreadyExists=tag ''{0}'' already exists
tagNameInvalid=tag name {0} is invalid
tagOnRepoWithoutHEADCurrentlyNotSupported=Tag on repository without HEAD currently not supported
theFactoryMustNotBeNull=The factory must not be null
timerAlreadyTerminated=Timer already terminated
topologicalSortRequired=Topological sort required.
transactionAborted=transaction aborted
transportExceptionBadRef=Bad ref: {0}: {1}
transportExceptionEmptyRef=Empty ref: {0}
transportExceptionInvalid=Invalid {0} {1}:{2}
transportExceptionMissingAssumed=Missing assumed {0}
transportExceptionReadRef=read {0}
transportNeedsRepository=Transport needs repository
transportProtoAmazonS3=Amazon S3
transportProtoBundleFile=Git Bundle File
transportProtoFTP=FTP
transportProtoGitAnon=Anonymous Git
transportProtoHTTP=HTTP
transportProtoLocal=Local Git Repository
transportProtoSFTP=SFTP
transportProtoSSH=SSH
transportProtoTest=Test
transportSSHRetryInterrupt=Interrupted while waiting for retry
treeEntryAlreadyExists=Tree entry "{0}" already exists.
treeFilterMarkerTooManyFilters=Too many markTreeFilters passed, maximum number is {0} (passed {1})
treeIteratorDoesNotSupportRemove=TreeIterator does not support remove()
treeWalkMustHaveExactlyTwoTrees=TreeWalk should have exactly two trees.
truncatedHunkLinesMissingForAncestor=Truncated hunk, at least {0} lines missing for ancestor {1}
truncatedHunkNewLinesMissing=Truncated hunk, at least {0} new lines are missing
truncatedHunkOldLinesMissing=Truncated hunk, at least {0} old lines are missing
tSizeMustBeGreaterOrEqual1=tSize must be >= 1
unableToCheckConnectivity=Unable to check connectivity.
unableToCreateNewObject=Unable to create new object: {0}
unableToStore=Unable to store {0}.
unableToWrite=Unable to write {0}
unauthorized=Unauthorized
unencodeableFile=Unencodable file: {0}
unexpectedCompareResult=Unexpected metadata comparison result: {0}
unexpectedEndOfConfigFile=Unexpected end of config file
unexpectedEndOfInput=Unexpected end of input
unexpectedHunkTrailer=Unexpected hunk trailer
unexpectedOddResult=odd: {0} + {1} - {2}
unexpectedRefReport={0}: unexpected ref report: {1}
unexpectedReportLine=unexpected report line: {0}
unexpectedReportLine2={0} unexpected report line: {1}
unknownOrUnsupportedCommand=Unknown or unsupported command "{0}", only "{1}" is allowed.
unknownDIRCVersion=Unknown DIRC version {0}
unknownHost=unknown host
unknownIndexVersionOrCorruptIndex=Unknown index version (or corrupt index): {0}
unknownObject=unknown object
unknownObjectType=Unknown object type {0}.
unknownObjectType2=unknown
unknownRepositoryFormat=Unknown repository format
unknownRepositoryFormat2=Unknown repository format "{0}"; expected "0".
unknownZlibError=Unknown zlib error.
unmergedPath=Unmerged path: {0}
unmergedPaths=Repository contains unmerged paths
unpackException=Exception while parsing pack stream
unreadablePackIndex=Unreadable pack index: {0}
unrecognizedRef=Unrecognized ref: {0}
unsetMark=Mark not set
unsupportedAlternates=Alternates not supported
unsupportedArchiveFormat=Unknown archive format ''{0}''
unsupportedCommand0=unsupported command 0
unsupportedEncryptionAlgorithm=Unsupported encryption algorithm: {0}
unsupportedEncryptionVersion=Unsupported encryption version: {0}
unsupportedGC=Unsupported garbage collector for repository type: {0}
unsupportedMark=Mark not supported
unsupportedOperationNotAddAtEnd=Not add-at-end: {0}
unsupportedPackIndexVersion=Unsupported pack index version {0}
unsupportedPackVersion=Unsupported pack version {0}.
updatingHeadFailed=Updating HEAD failed
updatingReferences=Updating references
updatingRefFailed=Updating the ref {0} to {1} failed. ReturnCode from RefUpdate.update() was {2}
upstreamBranchName=branch ''{0}'' of {1}
uriNotConfigured=Submodule URI not configured
uriNotFound={0} not found
URINotSupported=URI not supported: {0}
URLNotFound={0} not found
userConfigFileInvalid=User config file {0} invalid {1}
walkFailure=Walk failure.
wantNotValid=want {0} not valid
weeksAgo={0} weeks ago
windowSizeMustBeLesserThanLimit=Window size must be < limit
windowSizeMustBePowerOf2=Window size must be power of 2
writerAlreadyInitialized=Writer already initialized
writeTimedOut=Write timed out after {0} ms
writingNotPermitted=Writing not permitted
writingNotSupported=Writing {0} not supported.
writingObjects=Writing objects
wrongDecompressedLength=wrong decompressed length
wrongRepositoryState=Wrong Repository State: {0}
year=year
years=years
years0MonthsAgo={0} {1} ago
yearsAgo={0} years ago
yearsMonthsAgo={0} {1}, {2} {3} ago
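The values above are `java.text.MessageFormat` patterns, which explains two conventions that look odd at first glance: a literal single quote must be written as two quotes (hence entries like `reflogEntryNotFound=...''{1}''`), and `{0,number,#0}` (as in `packWriterStatistics`) applies a `DecimalFormat` with no grouping separators. A minimal sketch of how such patterns expand, using two pattern strings copied from the entries above (the argument values are made up for illustration):

```java
import java.text.MessageFormat;

public class MessageDemo {
    public static void main(String[] args) {
        // Positional placeholders {0}, {1}; doubled quotes in the
        // pattern come out as one literal single quote.
        String m1 = MessageFormat.format(
                "Entry {0} not found in reflog for ''{1}''",
                "e7", "refs/heads/master");
        System.out.println(m1); // Entry e7 not found in reflog for 'refs/heads/master'

        // {0,number,#0} formats the number without grouping separators.
        String m2 = MessageFormat.format(
                "Total {0,number,#0} (delta {1,number,#0})",
                2669000, 120000);
        System.out.println(m2); // Total 2669000 (delta 120000)
    }
}
```

Note that in a properties file loaded as a resource bundle, only the value after `=` is the pattern; the key before it is looked up from code and must not be translated.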