Commit message | Author | Age | Files | Lines
* JGit 0.11.1 (tag v0.11.1) | Matthias Sohn | 2011-02-11 | 30 files | -192/+192
      Change-Id: I9ac2fdfb4326536502964ba614d37d0bd103f524
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Fix version.sh | Matthias Sohn | 2011-02-11 | 1 file | -1/+18
      Change-Id: Ia010c9cecefbfb90ae54786adc7c8d838525d2f3
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Merge "Fix NPE on reading global config on MAC" into stable-0.11 | Chris Aniszczyk | 2011-02-09 | 1 file | -1/+6
|\
| * Fix NPE on reading global config on MAC | Jens Baumgart | 2011-02-09 | 1 file | -1/+6
      Bug: 336610
      Change-Id: Iefcb85e791723801faa315b3ee45fb19e3ca52fb
      Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
* | Add isOutdated method to DirCache | Jens Baumgart | 2011-02-09 | 1 file | -0/+10
|/
      isOutdated returns true iff the memory state differs from the index
      file.
      Change-Id: If35db06743f5f588ab19d360fd2a18a07c918edb
      Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
* PullCommand: use default remote instead of throwing Exception | Mathias Kinzler | 2011-02-08 | 1 file | -7/+4
      When pulling into a local branch that has no upstream configuration,
      pull should try to use the default remote ("origin") instead of
      throwing an Exception.
      Bug: 336504
      Change-Id: Ife75858e89ea79c0d6d88ba73877fe8400448e34
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
* Remove quoting of command over SSH | Shawn O. Pearce | 2011-02-06 | 1 file | -32/+3
      If the command contains spaces, it needs to be evaluated by the remote
      shell. Quoting the command breaks this, making it impossible to run a
      remote command that needs additional options.
      Bug: 336301
      Change-Id: Ib5d88f0b2151df2d1d2b4e08d51ee979f6da67b5
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* Fix JGit --upload-pack, --receive-pack options | Shawn O. Pearce | 2011-02-05 | 5 files | -16/+78
      JGit did not use sh -c to run the receive-pack or upload-pack programs
      locally, which caused errors if these strings contained spaces and
      needed the local shell to evaluate them.

      Win32 support using cmd.exe /c is completely untested, but seems like
      it should work based on the limited information I could get through
      Google search results.

      Bug: 336301
      Change-Id: I22e5e3492fdebbae092d1ce6b47ad411e57cc1ba
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* In iplog list approved CQs as "active" | Matthias Sohn | 2011-02-06 | 1 file | -1/+8
      Change-Id: I69c60576ae648fea2a730c9e9f042004bccecc90
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Reuse cached SHA-1 when computing from WorkingTreeIterator | Shawn O. Pearce | 2011-02-03 | 1 file | -0/+40
      Change-Id: I2b2170c29017993d8cb7a1d3c8cd94fb16c7dd02
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
* PackWriter: Support reuse of entire packs | Shawn O. Pearce | 2011-02-03 | 18 files | -58/+661
      The most expensive part of packing a repository for transport to
      another system is enumerating all of the objects in the repository.
      Once this gets to the size of the linux-2.6 repository (1.8 million
      objects), enumeration can take several CPU minutes and costs a lot of
      temporary working set memory.

      Teach PackWriter to efficiently reuse an existing "cached pack" by
      answering a clone request with a thin pack followed by a larger cached
      pack appended to the end. This requires the repository owner to first
      construct the cached pack by hand, and record the tip commits inside
      of $GIT_DIR/objects/info/cached-packs:

        cd $GIT_DIR
        root=$(git rev-parse master)
        tmp=objects/.tmp-$$
        names=$(echo $root | git pack-objects --keep-true-parents --revs $tmp)
        for n in $names; do
          chmod a-w $tmp-$n.pack $tmp-$n.idx
          touch objects/pack/pack-$n.keep
          mv $tmp-$n.pack objects/pack/pack-$n.pack
          mv $tmp-$n.idx objects/pack/pack-$n.idx
        done
        (echo "+ $root";
         for n in $names; do echo "P $n"; done;
         echo) >>objects/info/cached-packs
        git repack -a -d

      When a clone request needs to include $root, the corresponding cached
      pack will be copied as-is, rather than enumerating all of the objects
      that are reachable from $root.

      For a linux-2.6 kernel repository that should be about 376 MiB, the
      above process creates two packs of 368 MiB and 38 MiB[1]. This is a
      local disk usage increase of ~26 MiB, due to reduced delta compression
      between the large cached pack and the smaller recent activity pack.
      The overhead is similar to 1 full copy of the compressed project
      sources.

      With this cached pack in hand, JGit daemon completes a clone request
      in 1m17s less time, at the cost of a slightly larger data transfer
      (+2.39 MiB):

        Before:
          remote: Counting objects: 1861830, done
          remote: Finding sources: 100% (1861830/1861830)
          remote: Getting sizes: 100% (88243/88243)
          remote: Compressing objects: 100% (88184/88184)
          Receiving objects: 100% (1861830/1861830), 376.01 MiB | 19.01 MiB/s, done.
          remote: Total 1861830 (delta 4706), reused 1851053 (delta 1553844)
          Resolving deltas: 100% (1564621/1564621), done.

          real  3m19.005s

        After:
          remote: Counting objects: 1601, done
          remote: Counting objects: 1828460, done
          remote: Finding sources: 100% (50475/50475)
          remote: Getting sizes: 100% (18843/18843)
          remote: Compressing objects: 100% (7585/7585)
          remote: Total 1861830 (delta 2407), reused 1856197 (delta 37510)
          Receiving objects: 100% (1861830/1861830), 378.40 MiB | 31.31 MiB/s, done.
          Resolving deltas: 100% (1559477/1559477), done.

          real  2m2.938s

      Repository owners can periodically refresh their cached packs by
      repacking their repository, folding all newer objects into a larger
      cached pack. Since repacking is already considered to be a normal Git
      maintenance activity, this isn't a very big burden.

      [1] In this test $root was set back about two weeks.

      Change-Id: Ib87131d5c4b5e8c5cacb0f4fe16ff4ece554734b
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* PackWriter: Display totals after sending objects | Shawn O. Pearce | 2011-02-02 | 4 files | -4/+54
      CGit pack-objects displays a totals line after the pack data was fully
      written. This can be useful to understand some of the decisions made
      by the packer, and has been a great tool for helping to debug some of
      that code.

      Track some of the basic values, and send it to the client when packing
      is done:

        remote: Counting objects: 1826776, done
        remote: Finding sources: 100% (55121/55121)
        remote: Getting sizes: 100% (25654/25654)
        remote: Compressing objects: 100% (11434/11434)
        remote: Total 1861830 (delta 3926), reused 1854705 (delta 38306)
        Receiving objects: 100% (1861830/1861830), 386.03 MiB | 30.32 MiB/s, done.

      Change-Id: If3b039017a984ed5d5ae80940ce32bda93652df5
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* RefAdvertiser: Avoid object parsing | Shawn O. Pearce | 2011-02-02 | 5 files | -112/+71
      It isn't strictly necessary to validate that every reference's target
      object is reachable in the repository before advertising it to a
      client. This is an expensive operation when there are thousands of
      references, and it's very unlikely that a reference uses a missing
      object, because garbage collection proceeds from the references and
      walks down through the graph. So trying to hide a dangling reference
      from clients is relatively pointless.

      Even if we are trying to avoid giving a client a corrupt repository,
      this simple check isn't sufficient. It is possible for a reference to
      point to a valid commit, but that commit to have a missing blob in its
      root tree. This can be caused by staging a file into the index,
      waiting several weeks, then committing that file while also racing
      against a prune. The prune may delete the blob, since its modification
      time is more than 2 weeks ago, but retain the commit, since its
      modification time is right now.

      Such graph corruption is already caught during PackWriter as it
      enumerates the graph from the client's want list and digs back to the
      roots or common base. Leave the reference validation also for that
      same phase, where we know we have to parse the object to support the
      enumeration.

      Change-Id: Iee70ead0d3ed2d2fcc980417d09d7a69b05f5c2f
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* Merge "Expose some constants needed for reading the Pull configuration" | Chris Aniszczyk | 2011-02-02 | 1 file | -0/+20
|\
| * Expose some constants needed for reading the Pull configuration | Mathias Kinzler | 2011-02-02 | 1 file | -0/+20
      Change-Id: I72cb1cc718800c09366306ab2eebd43cd82023ff
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
* | Merge "Adapt expected commit message in tests" | Chris Aniszczyk | 2011-02-02 | 1 file | -2/+2
|\ \
| * | Adapt expected commit message in tests | Robin Stocker | 2011-02-02 | 1 file | -2/+2
| |/
      Because of change I28ae5713, the commit message lost the "into HEAD"
      and caused the MergeCommandTest to fail. This change fixes it.
      Bug: 336059
      Change-Id: Ifac0138c6c6d66c40d7295b5e11ff3cd98bc9e0c
* / PushCommand: do not set a null credentials provider | Jens Baumgart | 2011-02-02 | 1 file | -1/+2
|/
      PushCommand no longer sets a null credentials provider on Transport,
      because doing so replaced the default provider with null and disabled
      the default mechanism for providing credentials.
      Bug: 336023
      Change-Id: I7a7a9221afcfebe2e1595a5e59641e6c1ae4a207
      Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
* Don't print "into HEAD" when merging refs/heads/master | Robin Stocker | 2011-02-01 | 2 files | -1/+12
      When MergeMessageFormatter was given a symbolic ref HEAD which points
      to refs/heads/master (which is the case when merging a branch in
      EGit), it would result in a merge message like the following:

        Merge branch 'a' into HEAD

      But it should print the following (as C Git does):

        Merge branch 'a'

      The solution is to use the leaf ref when checking for
      refs/heads/master.

      Change-Id: I28ae5713b7e8123a0176fc6d7356e469900e7e97
* PackWriter: Make thin packs more efficient | Shawn O. Pearce | 2011-02-01 | 6 files | -26/+452
      There is no point in pushing all of the files within the edge commits
      into the delta search when making a thin pack. This floods the delta
      search window with objects that are unlikely to be useful bases for
      the objects that will be written out, resulting in lower data
      compression and higher transfer sizes.

      Instead observe the path of a tree or blob that is being pushed into
      the outgoing set, and use that path to locate up to WINDOW ancestor
      versions from the edge commits. Push only those objects into the
      edgeObjects set, reducing the number of objects seen by the search
      window.

      This allows PackWriter to only look at ancestors for the modified
      files, rather than all files in the project. Limiting the search to
      WINDOW size makes sense, because more than WINDOW edge objects will
      just skip through the window search as none of them need to be delta
      compressed.

      To further improve compression, sort edge objects into the front of
      the window list, rather than randomly throughout. This puts non-edges
      later in the window and gives them a better chance at finding their
      base, since they search backwards through the window.

      These changes make a significant difference in the thin-pack:

        Before:
          remote: Counting objects: 144190, done
          remote: Finding sources: 100% (50275/50275)
          remote: Getting sizes: 100% (101405/101405)
          remote: Compressing objects: 100% (7587/7587)
          Receiving objects: 100% (50275/50275), 24.67 MiB | 9.90 MiB/s, done.
          Resolving deltas: 100% (40339/40339), completed with 2218 local objects.

          real  0m30.267s

        After:
          remote: Counting objects: 61549, done
          remote: Finding sources: 100% (50275/50275)
          remote: Getting sizes: 100% (18862/18862)
          remote: Compressing objects: 100% (7588/7588)
          Receiving objects: 100% (50275/50275), 11.04 MiB | 3.51 MiB/s, done.
          Resolving deltas: 100% (43160/43160), completed with 5014 local objects.

          real  0m22.170s

      The resulting pack is 13.63 MiB smaller, even though it contains the
      same exact objects. 82,543 fewer objects had to have their sizes
      looked up, which saved about 8s of server CPU time. 2,796 more objects
      from the client were used as part of the base object set, which
      contributed to the smaller transfer size.

      Change-Id: Id01271950432c6960897495b09deab70e33993a9
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
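      The window-ordering idea above can be sketched as a toy model: edge
      (preexisting) objects go to the front of the window list, new objects
      are appended at the back, so a new object scanning backwards meets
      candidate bases before falling off the window. This is an
      illustration of the ordering only, not JGit's DeltaWindow code, and
      the object names are made up:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;

public class WindowOrderSketch {
    public static void main(String[] args) {
        // Deque stands in for the delta search window.
        Deque<String> window = new ArrayDeque<>();
        window.addLast("new-blob-1");    // non-edge: append at the back
        window.addFirst("edge-blob-v1"); // edge: insert at the front
        window.addLast("new-blob-2");
        window.addFirst("edge-blob-v2");
        // Edges cluster at the front; non-edges searching backwards
        // through the window reach them before running out of entries.
        System.out.println(new ArrayList<>(window));
    }
}
```

      Running this prints the edges first, followed by the new objects in
      insertion order.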
* PackWriter: Cleanup findObjectToPack method | Shawn O. Pearce | 2011-02-01 | 1 file | -32/+20
      Some of this code predates making ObjectId.equals() final and fixing
      RevObject.equals() to match ObjectId.equals(). It was therefore more
      complex than it needs to be, because it tried to work around
      RevObject's broken equals() rules by converting to ObjectId in a
      different collection.

      Also combine the setUpWalker() and findObjectsToPack() methods; these
      can be one method and the code is actually cleaner.

      Change-Id: I0f4cf9997cd66d8b6e7f80873979ef1439e507fe
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* PackWriter: Correct 'Compressing objects' progress message | Shawn O. Pearce | 2011-02-01 | 3 files | -1/+3
      The first 'Compressing objects' progress message is wrong; it's
      actually PackWriter looking up the sizes of each object in the
      ObjectDatabase, so objects can be sorted correctly in the later
      type-size sort that tries to take advantage of "Linus' Law" to
      improve delta compression.

      Rename the progress to say 'Getting sizes', which is an accurate
      description of what it is doing.

      Change-Id: Ida0a052ad2f6e994996189ca12959caab9e556a3
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* Merge "Add git-clone to the Git API" | Chris Aniszczyk | 2011-02-01 | 3 files | -0/+411
|\
| * Add git-clone to the Git API | Chris Aniszczyk | 2011-01-31 | 3 files | -0/+411
      Enhance the Git API to support cloning repositories.
      Bug: 334763
      Change-Id: Ibe1191498dceb9cbd1325aed85b4c403db19f41e
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* | PackWriter: Don't include edges in progress meter | Shawn O. Pearce | 2011-02-01 | 2 files | -4/+7
      When compressing objects, don't include the edges in the progress
      meter. These cost almost no CPU time as they are simply pushed into
      and popped out of the delta search window.
      Change-Id: I7ea19f0263e463c65da34a7e92718c6db1d4a131
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* | Merge "Show resolving deltas progress to push clients" | Chris Aniszczyk | 2011-02-01 | 3 files | -22/+56
|\ \
| * | Show resolving deltas progress to push clients | Shawn O. Pearce | 2011-01-31 | 3 files | -22/+56
      CGit push clients 1.6.6 and later support progress messages on the
      side-band-64k channel during push, as this was introduced to handle
      server side hook errors reported over smart HTTP.

      Since JGit's delta resolution isn't always as fast as CGit's is, a
      user may think the server has crashed and failed to report status if
      the user pushed a lot of content and sees no feedback. Exposing the
      progress monitor during the resolving deltas phase will let the user
      know the server is still making forward progress.

      This also helps BasePackPushConnection, which has a bounded timeout
      on how long it will wait before assuming the remote server is dead.
      Progress messages pushed down the side-band channel will reset the
      read timer, helping the connection to stay alive and avoid timing out
      before the remote side's work is complete.

      Change-Id: I429c825e5a724d2f21c66f95526d9c49edcc6ca9
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* | | Merge "ObjectWalk: Fix reset for non-commit objects" | Chris Aniszczyk | 2011-02-01 | 1 file | -0/+11
|\| |
| * | ObjectWalk: Fix reset for non-commit objects | Shawn O. Pearce | 2011-01-31 | 1 file | -0/+11
      Non-commits are added to a pending queue, but duplicates are removed
      by checking a flag. During a reset that flag must be stripped off the
      old roots, otherwise the caller cannot reuse the old roots after the
      reset. RevWalk already does this correctly for commits, but
      ObjectWalk failed to handle the non-commit case itself.
      Change-Id: I99e1832bf204eac5a424fdb04f327792e8cded4a
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* | | Merge "Revert Ie48d6556" | Chris Aniszczyk | 2011-01-31 | 1 file | -1/+1
|\ \ \
| * | | Revert Ie48d6556 | Chris Aniszczyk | 2011-01-31 | 1 file | -1/+1
| | |/
| |/|
      This was a mistake that was missed due to historical reasons.

      "The first /r/ tells our Apache to redirect the request to Gerrit.
      The second /r/ tells Gerrit that the thing following is a Git SHA-1
      and it should try to locate the changes that use that commit object.
      Nothing I can easily do about it now. The second /r/ is historical
      and comes from Gerrit 1.x days."

      Change-Id: Iec2dbf5e077f29c0e0686cab11ef197ffc705012
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* | | Merge "Proper handling of rebase during pull" | Chris Aniszczyk | 2011-01-31 | 5 files | -43/+56
|\ \ \
| |/ /
|/| |
| * | Proper handling of rebase during pull | Mathias Kinzler | 2011-01-31 | 5 files | -43/+56
| |/
      After consulting with Christian Halstrick, it turned out that the
      handling of rebase during pull was implemented incorrectly.
      Change-Id: I40f03409e080cdfeceb21460150f5e02a016e7f4
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
* / Fix incorrect review url in IP log file | Robert Munteanu | 2011-01-31 | 1 file | -1/+1
|/
      Change-Id: Ie48d655698dc1f4cd4f00606436a57c451c13179
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* Merge changes I3a74cc84,I219f864f | Robin Rosenberg | 2011-01-29 | 3 files | -10/+6
|\
      * changes:
        [findbugs] Do not ignore exceptional return value of createNewFile()
        Do not create files to be updated before checkout of DirCache entry
| * [findbugs] Do not ignore exceptional return value of createNewFile() | Matthias Sohn | 2011-01-28 | 2 files | -9/+6
      Properly handle the return value of java.io.File.createNewFile().
      Change-Id: I3a74cc84cd126ca1a0eaccc77b2944d783ff0747
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
| * Do not create files to be updated before checkout of DirCache entry | Matthias Sohn | 2011-01-28 | 1 file | -1/+0
      DirCacheCheckout.checkoutEntry() prepares the new file content using
      a temporary file and then renames it to the file to be written during
      checkout. For files to be updated, checkout() created each file
      before calling checkoutEntry(). Hence renaming the temporary file
      always failed; this was corrected in the exception handling by
      deleting the just-created file and retrying the rename.
      Change-Id: I219f864f2ed8d68051d7b5955d0659964fa27274
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* | Add setCredentialsProvider to PullCommand | Tomasz Zarna | 2011-01-28 | 1 file | -0/+17
      Bug: 335703
      Change-Id: Id9713a4849c772e030fca23dd64b993264f28366
      Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
* | Merge "ObjectIdSubclassMap: Support duplicate additions" | Chris Aniszczyk | 2011-01-28 | 3 files | -5/+266
|\ \
| * | ObjectIdSubclassMap: Support duplicate additions | Shawn O. Pearce | 2011-01-28 | 3 files | -5/+266
      The new addIfAbsent() method combines get() with add(), but does it
      in a single step so that the common case of get() returning null for
      a new object can immediately insert the object into the map.
      Change-Id: Ib599ab4de13ad67665ccfccf3ece52ba3222bcba
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
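      The addIfAbsent() contract described above can be sketched with a
      plain HashMap. This is an illustrative stand-in, not JGit's actual
      ObjectIdSubclassMap implementation, and the ids and entry names are
      made up:

```java
import java.util.HashMap;
import java.util.Map;

public class AddIfAbsentSketch {
    static final Map<String, String> map = new HashMap<>();

    // Returns the existing entry when the id is already mapped, otherwise
    // inserts and returns the new entry -- one lookup instead of a
    // get() followed by a separate add().
    static String addIfAbsent(String id, String entry) {
        String existing = map.putIfAbsent(id, entry);
        return existing != null ? existing : entry;
    }

    public static void main(String[] args) {
        String first = addIfAbsent("badc0ffe", "commit-A");
        String second = addIfAbsent("badc0ffe", "commit-B"); // duplicate add
        System.out.println(first);  // prints commit-A
        System.out.println(second); // prints commit-A (existing entry wins)
    }
}
```

      The point of the single-step form is that a duplicate addition is
      harmless: the caller always gets back the canonical instance.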
* | | Merge "Make PullCommand work with Rebase" | Chris Aniszczyk | 2011-01-28 | 4 files | -51/+359
|\ \ \
| |/ /
|/| |
| * | Make PullCommand work with Rebase | Mathias Kinzler | 2011-01-28 | 4 files | -51/+359
      Rebase must honor the upstream configuration
      branch.<branchname>.rebase
      Change-Id: Ic94f263d3f47b630ad75bd5412cb4741bb1109ca
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
* | | Merge "RebaseCommand: detect and handle fast-forward properly" | Chris Aniszczyk | 2011-01-28 | 3 files | -7/+139
|\| |
| * | RebaseCommand: detect and handle fast-forward properly | Mathias Kinzler | 2011-01-28 | 3 files | -7/+139
      This bug was hidden by an incomplete test: the current Rebase
      implementation using the "git rebase -i" pattern does not work
      correctly if fast-forwarding is involved. The reason for this is that
      the log command does not return any commits in this case.

      In addition, a check for already merged commits was introduced to
      avoid spurious conflicts.

      Change-Id: Ib9898fe0f982fa08e41f1dca9452c43de715fdb6
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
* | | Revert "Teach PackWriter how to reuse an existing object list" | Shawn O. Pearce | 2011-01-28 | 3 files | -271/+21
|/ /
      This reverts commit f5fe2dca3cb9f57891e1a4b18832fcc158d0c490.

      I regret adding this feature to the public API. Caches aren't always
      the best idea, as they require work to maintain. Here the cache is
      redundant information that must be computed, and when it grows stale
      must be removed. The redundant information takes up more disk space,
      about the same size as the pack-*.idx files are. For the linux-2.6
      repository, that's more than 40 MB for a 400 MB repository. So the
      cache is a 10% increase in disk usage.

      The entire point of this cache is to improve PackWriter performance,
      and only PackWriter performance, and only when sending an initial
      clone to a new client. There may be better ways to optimize this,
      and until we have a solid solution, we shouldn't be using a separate
      cache in JGit.
* / TransportHttp wrongly uses JDK 6 constructor of IOException | Mathias Kinzler | 2011-01-28 | 1 file | -2/+2
|/
      The IOException constructor taking an Exception as parameter is new
      in JDK 6.
      Change-Id: Iec349fc7be9e9fbaeb53841894883c47a98a7b29
      Signed-off-by: Mathias Kinzler <mathias.kinzler@sap.com>
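      The portable idiom this fix implies is to construct the IOException
      with only a message and attach the cause via initCause(), which
      exists in JDK 5. A minimal sketch (the wrap() helper is hypothetical,
      not TransportHttp's actual code):

```java
import java.io.IOException;

public class WrapIOException {
    // JDK 5 has no IOException(String, Throwable) constructor, so the
    // cause must be attached after construction with initCause().
    static IOException wrap(String message, Throwable cause) {
        IOException e = new IOException(message);
        e.initCause(cause);
        return e;
    }

    public static void main(String[] args) {
        IOException e = wrap("cannot open connection", new RuntimeException("boom"));
        System.out.println(e.getMessage());            // cannot open connection
        System.out.println(e.getCause().getMessage()); // boom
    }
}
```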
* [findbugs] Do not ignore exceptional return value of mkdir | Matthias Sohn | 2011-01-28 | 23 files | -82/+96
      java.io.File.mkdir() and mkdirs() report failure by returning false.
      Fix the code which silently ignored this exceptional return value.
      Change-Id: I41244f4b9d66176e68e2c07e2329cf08492f8619
      Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
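      The shape of such a fix is to promote a false return to an exception
      instead of dropping it. A minimal sketch of the pattern, assuming a
      hypothetical helper (not the exact JGit code):

```java
import java.io.File;
import java.io.IOException;

public class MkdirCheck {
    // java.io.File.mkdirs() signals failure only through its boolean
    // return value; ignoring it silently loses the error. Promote false
    // to an exception, tolerating an already existing directory.
    static void mkdirs(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory())
            throw new IOException("Creating directory " + dir + " failed");
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"), "mkdir-demo");
        mkdirs(tmp);
        mkdirs(tmp); // second call is a no-op, not an error
        System.out.println(tmp.isDirectory()); // true
        tmp.delete();
    }
}
```

      The extra isDirectory() check matters because mkdirs() also returns
      false when the directory already exists, which is not an error here.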
* Teach PackWriter how to reuse an existing object list | Shawn O. Pearce | 2011-01-27 | 3 files | -21/+271
      Counting the objects needed for packing is the most expensive part of
      an UploadPack request that has no uninteresting objects (otherwise
      known as an initial clone). During this phase the PackWriter is
      enumerating the entire set of objects in this repository, so they can
      be sent to the client for their new clone.

      Allow the ObjectReader (and therefore the underlying storage system)
      to keep a cached list of all reachable objects from a small number of
      points in the project's history. If one of those points is reached
      during enumeration of the commit graph, most objects are obtained
      from the cached list instead of direct traversal.

      PackWriter uses the list by discarding the current object lists and
      restarting a traversal from all refs but marking the object list name
      as uninteresting. This allows PackWriter to enumerate all objects
      that are more recent than the list creation, or that were on side
      branches that the list does not include. However, ObjectWalk tags all
      of the trees and commits within the list commit as UNINTERESTING,
      which would normally cause PackWriter to construct a thin pack that
      excludes these objects. To avoid that, addObject() was refactored to
      allow this list-based enumeration to always include an object, even
      if it has been tagged UNINTERESTING by the ObjectWalk. This implies
      the list-based enumeration may only be used for initial clones, where
      all objects are being sent.

      The UNINTERESTING labeling occurs because StartGenerator always
      enables the BoundaryGenerator if the walker is an ObjectWalk and a
      commit was marked UNINTERESTING, even if RevSort.BOUNDARY was not
      enabled. This is the default reasonable behavior for an ObjectWalk,
      but isn't desired here in PackWriter with the list-based enumeration.
      Rather than trying to change all of this behavior, PackWriter works
      around it.

      Because the list name commit's immediate files and trees were all
      enumerated before the list enumeration itself starts (and are also
      within the list itself), PackWriter runs the risk of adding the same
      objects to its ObjectIdSubclassMap twice. Since this breaks the
      internal map data structure (and also may cause the object to
      transmit twice), PackWriter needs to use a new "added" RevFlag to
      track whether or not an object has been put into the outgoing list
      yet.

      Change-Id: Ie99ed4d969a6bb20cc2528ac6b8fb91043cee071
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* Allow ObjectReuseAsIs to resort objects during writing | Shawn O. Pearce | 2011-01-27 | 2 files | -3/+6
      It can be very handy for the implementation to resort the object list
      based on data locality, improving prefetch in the operating system's
      buffer cache.

      Export the list to the implementation as a proper List, and document
      that it is mutable and OK to be modified. The only caller in
      PackWriter is already OK with these rules.

      Change-Id: I3f51cf4388898917b2be36670587a5aee902ff10
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* PackWriter: Use TOPO order only for incremental packs | Shawn O. Pearce | 2011-01-27 | 1 file | -1/+4
      When performing an initial clone of a repository there are no
      uninteresting commits, and the resulting pack will be completely
      self-contained. Therefore PackWriter does not need to honor C Git
      standard TOPO ordering as described in JGit commit ba984ba2e0a ("Fix
      checkReferencedIsReachable to use correct base list").

      Switching to COMMIT_TIME_DESC when there are no uninteresting commits
      allows the "Counting objects" phase to emit progress earlier, as the
      RevWalk will not buffer the commit list. When TOPO is set the RevWalk
      enumerates all commits first, before outputting any for PackWriter to
      mark progress updates from.

      Change-Id: If2b6a9903b536c7fb3c45f85d0a67ff6c6e66f22
      Signed-off-by: Shawn O. Pearce <spearce@spearce.org>