
PackWriterTest.java 28KB

Shallow fetch: Respect "shallow" lines When fetching from a shallow clone, the client sends "have" lines to tell the server about objects it already has and "shallow" lines to tell where its local history terminates. In some circumstances, the server fails to honor the shallow lines and fails to return objects that the client needs. UploadPack passes the "have" lines to PackWriter so PackWriter can omit them from the generated pack. UploadPack processes "shallow" lines by calling RevWalk.assumeShallow() with the set of shallow commits. RevWalk creates and caches RevCommits for these shallow commits, clearing out their parents. That way, walks correctly terminate at the shallow commits instead of assuming the client has history going back behind them. UploadPack converts its RevWalk to an ObjectWalk, maintaining the cached RevCommits, and passes it to PackWriter. Unfortunately, to support shallow fetches the PackWriter does the following: if (shallowPack && !(walk instanceof DepthWalk.ObjectWalk)) walk = new DepthWalk.ObjectWalk(reader, depth); That is, when the client sends a "deepen" line (fetch --depth=<n>) and the caller has not passed in a DepthWalk.ObjectWalk, PackWriter throws away the RevWalk that was passed in and makes a new one. The cleared parent lists prepared by RevWalk.assumeShallow() are lost. Fortunately UploadPack intends to pass in a DepthWalk.ObjectWalk. It tries to create it by calling toObjectWalkWithSameObjects() on a DepthWalk.RevWalk. But it doesn't work: because DepthWalk.RevWalk does not override the standard RevWalk#toObjectWalkWithSameObjects implementation, the result is a plain ObjectWalk instead of an instance of DepthWalk.ObjectWalk. The result is that the "shallow" information is thrown away and objects reachable from the shallow commits can be omitted from the pack sent when fetching with --depth from a shallow clone. Multiple factors collude to limit the circumstances under which this bug can be observed: 1. Commits with depth != 0 don't enter DepthGenerator's pending queue. That means a "have" cannot have any effect on DepthGenerator unless it is also a "want". 2. DepthGenerator#next() doesn't call carryFlagsImpl(), so the uninteresting flag is not propagated to ancestors there even if a "have" is also a "want". 3. JGit treats a depth of 1 as "1 past the wants". Because of (2), the only place the UNINTERESTING flag can leak to a shallow commit's parents is in the carryFlags() call from markUninteresting(). carryFlags() only traverses commits that have already been parsed: commits yet to be parsed are supposed to inherit correct flags from their parent in PendingGenerator#next (which doesn't happen here --- that is (2)). So the list of commits that have already been parsed becomes relevant. When we hit the markUninteresting() call, all "want"s, "have"s, and commits to be unshallowed have been parsed. carryFlags() only affects the parsed commits. If the "want" is a direct parent of a "have", then it carryFlags() marks it as uninteresting. If the "have" was also a "shallow", then its parent pointer should have been null and the "want" shouldn't have been marked, so we see the bug. If the "want" is a more distant ancestor then (2) keeps the uninteresting state from propagating to the "want" and we don't see the bug. If the "shallow" is not also a "have" then the shallow commit isn't parsed so (2) keeps the uninteresting state from propagating to the "want so we don't see the bug. 
Here is a reproduction case (time flowing left to right, arrows pointing to parents). "C" must be a commit that the client reports as a "have" during negotiation. That can only happen if the server reports it as an existing branch or tag in the first round of negotiation:

    A <-- B <-- C <-- D

First do

    git clone --depth 1 <repo>

which yields D as a "have" and C as a "shallow" commit. Then try

    git fetch --depth 1 <repo> B:refs/heads/B

Negotiation sets up: have D, shallow C, have C, want B. But due to this bug B is marked as uninteresting and is not sent.

Change-Id: I6e14b57b2f85e52d28cdcf356df647870f475440
Signed-off-by: Terry Parker <tparker@google.com>
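For reference, the negotiation state of that reproduction maps onto the JGit API roughly as below. This is only a sketch: `repo` and the already-parsed RevCommits `b`, `c`, `d` for the history above are assumptions, and the calls mirror the writeShallowPack() helper in the test that follows rather than UploadPack itself:

    // Emulate "want B, have C, have D, shallow C" with --depth 1 and hand the
    // walk to PackWriter; with the bug present, B is wrongly marked
    // uninteresting and omitted from the resulting pack.
    DepthWalk.RevWalk walk = new DepthWalk.RevWalk(repo, 1);
    walk.assumeShallow(Collections.singleton(c)); // shallow C: its parents are cleared
    try (PackWriter pw = new PackWriter(repo)) {
        pw.setShallowPack(1, null);
        ObjectWalk ow = walk.toObjectWalkWithSameObjects(); // must stay a DepthWalk.ObjectWalk
        pw.preparePack(NullProgressMonitor.INSTANCE, ow,
                Collections.singleton(b), // want B
                Sets.of(c, d));           // have C, have D
    }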
/*
 * Copyright (C) 2008, Marek Zawirski <marek.zawirski@gmail.com>
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following
 * disclaimer in the documentation and/or other materials provided
 * with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 * names of its contributors may be used to endorse or promote
 * products derived from this software without specific prior
 * written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.internal.storage.file;

import static org.eclipse.jgit.lib.Constants.OBJ_BLOB;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.eclipse.jgit.internal.storage.pack.PackWriter.NONE;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.eclipse.jgit.errors.MissingObjectException;
import org.eclipse.jgit.internal.storage.file.PackIndex.MutableEntry;
import org.eclipse.jgit.internal.storage.pack.PackWriter;
import org.eclipse.jgit.junit.JGitTestUtil;
import org.eclipse.jgit.junit.TestRepository;
import org.eclipse.jgit.junit.TestRepository.BranchBuilder;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectIdSet;
import org.eclipse.jgit.lib.ObjectInserter;
import org.eclipse.jgit.lib.Sets;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.DepthWalk;
import org.eclipse.jgit.revwalk.ObjectWalk;
import org.eclipse.jgit.revwalk.RevBlob;
import org.eclipse.jgit.revwalk.RevCommit;
import org.eclipse.jgit.revwalk.RevObject;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.storage.pack.PackConfig;
import org.eclipse.jgit.storage.pack.PackStatistics;
import org.eclipse.jgit.test.resources.SampleDataRepositoryTestCase;
import org.eclipse.jgit.transport.PackParser;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
public class PackWriterTest extends SampleDataRepositoryTestCase {
    private static final List<RevObject> EMPTY_LIST_REVS = Collections
            .<RevObject> emptyList();

    private static final Set<ObjectIdSet> EMPTY_ID_SET = Collections
            .<ObjectIdSet> emptySet();

    private PackConfig config;
    private PackWriter writer;
    private ByteArrayOutputStream os;
    private PackFile pack;
    private ObjectInserter inserter;
    private FileRepository dst;
    private RevBlob contentA;
    private RevBlob contentB;
    private RevBlob contentC;
    private RevBlob contentD;
    private RevBlob contentE;
    private RevCommit c1;
    private RevCommit c2;
    private RevCommit c3;
    private RevCommit c4;
    private RevCommit c5;

    @Before
    public void setUp() throws Exception {
        super.setUp();
        os = new ByteArrayOutputStream();
        config = new PackConfig(db);
        dst = createBareRepository();

        File alt = new File(dst.getObjectDatabase().getDirectory(), "info/alternates");
        alt.getParentFile().mkdirs();
        write(alt, db.getObjectDatabase().getDirectory().getAbsolutePath() + "\n");
    }

    @After
    public void tearDown() throws Exception {
        if (writer != null) {
            writer.close();
            writer = null;
        }
        if (inserter != null) {
            inserter.close();
            inserter = null;
        }
        super.tearDown();
    }
    /**
     * Test constructor for exceptions, default settings, initialization.
     *
     * @throws IOException
     */
    @Test
    public void testContructor() throws IOException {
        writer = new PackWriter(config, db.newObjectReader());
        assertFalse(writer.isDeltaBaseAsOffset());
        assertTrue(config.isReuseDeltas());
        assertTrue(config.isReuseObjects());
        assertEquals(0, writer.getObjectCount());
    }

    /**
     * Change default settings and verify them.
     */
    @Test
    public void testModifySettings() {
        config.setReuseDeltas(false);
        config.setReuseObjects(false);
        config.setDeltaBaseAsOffset(false);
        assertFalse(config.isReuseDeltas());
        assertFalse(config.isReuseObjects());
        assertFalse(config.isDeltaBaseAsOffset());

        writer = new PackWriter(config, db.newObjectReader());
        writer.setDeltaBaseAsOffset(true);
        assertTrue(writer.isDeltaBaseAsOffset());
        assertFalse(config.isDeltaBaseAsOffset());
    }

    /**
     * Write empty pack by providing empty sets of interesting/uninteresting
     * objects and check for correct format.
     *
     * @throws IOException
     */
    @Test
    public void testWriteEmptyPack1() throws IOException {
        createVerifyOpenPack(NONE, NONE, false, false);

        assertEquals(0, writer.getObjectCount());
        assertEquals(0, pack.getObjectCount());
        assertEquals("da39a3ee5e6b4b0d3255bfef95601890afd80709", writer
                .computeName().name());
    }

    /**
     * Write empty pack by providing empty iterator of objects to write and
     * check for correct format.
     *
     * @throws IOException
     */
    @Test
    public void testWriteEmptyPack2() throws IOException {
        createVerifyOpenPack(EMPTY_LIST_REVS);

        assertEquals(0, writer.getObjectCount());
        assertEquals(0, pack.getObjectCount());
    }

    /**
     * Try to pass non-existing object as uninteresting, with non-ignoring
     * setting.
     *
     * @throws IOException
     */
    @Test
    public void testNotIgnoreNonExistingObjects() throws IOException {
        final ObjectId nonExisting = ObjectId
                .fromString("0000000000000000000000000000000000000001");
        try {
            createVerifyOpenPack(NONE, haves(nonExisting), false, false);
            fail("Should have thrown MissingObjectException");
        } catch (MissingObjectException x) {
            // expected
        }
    }

    /**
     * Try to pass non-existing object as uninteresting, with ignoring setting.
     *
     * @throws IOException
     */
    @Test
    public void testIgnoreNonExistingObjects() throws IOException {
        final ObjectId nonExisting = ObjectId
                .fromString("0000000000000000000000000000000000000001");
        createVerifyOpenPack(NONE, haves(nonExisting), false, true);
        // shouldn't throw anything
    }

    /**
     * Try to pass non-existing object as uninteresting, with ignoring setting.
     * Use a repo with bitmap indexes because then PackWriter will use
     * PackWriterBitmapWalker which had problems with this situation.
     *
     * @throws IOException
     * @throws ParseException
     */
    @Test
    public void testIgnoreNonExistingObjectsWithBitmaps() throws IOException,
            ParseException {
        final ObjectId nonExisting = ObjectId
                .fromString("0000000000000000000000000000000000000001");
        new GC(db).gc();
        createVerifyOpenPack(NONE, haves(nonExisting), false, true, true);
        // shouldn't throw anything
    }
    /**
     * Create a pack based only on interesting objects, then precisely verify
     * the content. No delta reuse here.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack1() throws IOException {
        config.setReuseDeltas(false);
        writeVerifyPack1();
    }

    /**
     * Test writing pack without object reuse. Pack content/preparation as in
     * {@link #testWritePack1()}.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack1NoObjectReuse() throws IOException {
        config.setReuseDeltas(false);
        config.setReuseObjects(false);
        writeVerifyPack1();
    }

    /**
     * Create a pack based on both interesting and uninteresting objects, then
     * precisely verify the content. No delta reuse here.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack2() throws IOException {
        writeVerifyPack2(false);
    }

    /**
     * Test pack writing with deltas reuse, delta-base first rule. Pack
     * content/preparation as in {@link #testWritePack2()}.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack2DeltasReuseRefs() throws IOException {
        writeVerifyPack2(true);
    }

    /**
     * Test pack writing with delta reuse. Delta bases referenced as offsets.
     * Pack configuration as in {@link #testWritePack2DeltasReuseRefs()}.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack2DeltasReuseOffsets() throws IOException {
        config.setDeltaBaseAsOffset(true);
        writeVerifyPack2(true);
    }

    /**
     * Test pack writing with delta reuse. Raw-data copy (reuse) is made on a
     * pack with CRC32 index. Pack configuration as in
     * {@link #testWritePack2DeltasReuseRefs()}.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack2DeltasCRC32Copy() throws IOException {
        final File packDir = new File(db.getObjectDatabase().getDirectory(), "pack");
        final File crc32Pack = new File(packDir,
                "pack-34be9032ac282b11fa9babdc2b2a93ca996c9c2f.pack");
        final File crc32Idx = new File(packDir,
                "pack-34be9032ac282b11fa9babdc2b2a93ca996c9c2f.idx");
        copyFile(JGitTestUtil.getTestResourceFile(
                "pack-34be9032ac282b11fa9babdc2b2a93ca996c9c2f.idxV2"),
                crc32Idx);
        db.openPack(crc32Pack);

        writeVerifyPack2(true);
    }

    /**
     * Create a pack based on a fixed list of objects, then precisely verify
     * the content. No delta reuse here.
     *
     * @throws IOException
     * @throws MissingObjectException
     *
     */
    @Test
    public void testWritePack3() throws MissingObjectException, IOException {
        config.setReuseDeltas(false);
        final ObjectId forcedOrder[] = new ObjectId[] {
                ObjectId.fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"),
                ObjectId.fromString("c59759f143fb1fe21c197981df75a7ee00290799"),
                ObjectId.fromString("aabf2ffaec9b497f0950352b3e582d73035c2035"),
                ObjectId.fromString("902d5476fa249b7abc9d84c611577a81381f0327"),
                ObjectId.fromString("5b6e7c66c276e7610d4a73c70ec1a1f7c1003259"),
                ObjectId.fromString("6ff87c4664981e4397625791c8ea3bbb5f2279a3") };
        try (final RevWalk parser = new RevWalk(db)) {
            final RevObject forcedOrderRevs[] = new RevObject[forcedOrder.length];
            for (int i = 0; i < forcedOrder.length; i++)
                forcedOrderRevs[i] = parser.parseAny(forcedOrder[i]);

            createVerifyOpenPack(Arrays.asList(forcedOrderRevs));
        }

        assertEquals(forcedOrder.length, writer.getObjectCount());
        verifyObjectsOrder(forcedOrder);
        assertEquals("ed3f96b8327c7c66b0f8f70056129f0769323d86", writer
                .computeName().name());
    }

    /**
     * Another pack creation: based on both interesting and uninteresting
     * objects. No delta reuse is possible here, as this is a specific case
     * where we write only 1 commit, associated with 1 tree and 1 blob.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack4() throws IOException {
        writeVerifyPack4(false);
    }

    /**
     * Test thin pack writing: 1 blob delta base is on the objects edge. Pack
     * configuration as in {@link #testWritePack4()}.
     *
     * @throws IOException
     */
    @Test
    public void testWritePack4ThinPack() throws IOException {
        writeVerifyPack4(true);
    }

    /**
     * Compare sizes of packs created using {@link #testWritePack2()} and
     * {@link #testWritePack2DeltasReuseRefs()}. The pack using deltas should
     * be smaller.
     *
     * @throws Exception
     */
    @Test
    public void testWritePack2SizeDeltasVsNoDeltas() throws Exception {
        testWritePack2();
        final long sizePack2NoDeltas = os.size();
        tearDown();
        setUp();
        testWritePack2DeltasReuseRefs();
        final long sizePack2DeltasRefs = os.size();

        assertTrue(sizePack2NoDeltas > sizePack2DeltasRefs);
    }

    /**
     * Compare sizes of packs created using
     * {@link #testWritePack2DeltasReuseRefs()} and
     * {@link #testWritePack2DeltasReuseOffsets()}. The pack with delta bases
     * written as offsets should be smaller.
     *
     * @throws Exception
     */
    @Test
    public void testWritePack2SizeOffsetsVsRefs() throws Exception {
        testWritePack2DeltasReuseRefs();
        final long sizePack2DeltasRefs = os.size();
        tearDown();
        setUp();
        testWritePack2DeltasReuseOffsets();
        final long sizePack2DeltasOffsets = os.size();

        assertTrue(sizePack2DeltasRefs > sizePack2DeltasOffsets);
    }

    /**
     * Compare sizes of packs created using {@link #testWritePack4()} and
     * {@link #testWritePack4ThinPack()}. Obviously, the thin pack should be
     * smaller.
     *
     * @throws Exception
     */
    @Test
    public void testWritePack4SizeThinVsNoThin() throws Exception {
        testWritePack4();
        final long sizePack4 = os.size();
        tearDown();
        setUp();
        testWritePack4ThinPack();
        final long sizePack4Thin = os.size();

        assertTrue(sizePack4 > sizePack4Thin);
    }

    @Test
    public void testDeltaStatistics() throws Exception {
        config.setDeltaCompress(true);
        FileRepository repo = createBareRepository();
        TestRepository<FileRepository> testRepo = new TestRepository<FileRepository>(repo);
        ArrayList<RevObject> blobs = new ArrayList<>();
        blobs.add(testRepo.blob(genDeltableData(1000)));
        blobs.add(testRepo.blob(genDeltableData(1005)));

        try (PackWriter pw = new PackWriter(repo)) {
            NullProgressMonitor m = NullProgressMonitor.INSTANCE;
            pw.preparePack(blobs.iterator());
            pw.writePack(m, m, os);
            PackStatistics stats = pw.getStatistics();
            assertEquals(1, stats.getTotalDeltas());
            assertTrue("Delta bytes not set.",
                    stats.byObjectType(OBJ_BLOB).getDeltaBytes() > 0);
        }
    }

    // Generate consistent junk data for building files that delta well
    private String genDeltableData(int length) {
        assertTrue("Generated data must have a length > 0", length > 0);
        char[] data = {'a', 'b', 'c', '\n'};
        StringBuilder builder = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            builder.append(data[i % 4]);
        }
        return builder.toString();
    }
    @Test
    public void testWriteIndex() throws Exception {
        config.setIndexVersion(2);
        writeVerifyPack4(false);

        File packFile = pack.getPackFile();
        String name = packFile.getName();
        String base = name.substring(0, name.lastIndexOf('.'));
        File indexFile = new File(packFile.getParentFile(), base + ".idx");

        // Validate that IndexPack came up with the right CRC32 value.
        final PackIndex idx1 = PackIndex.open(indexFile);
        assertTrue(idx1 instanceof PackIndexV2);
        assertEquals(0x4743F1E4L, idx1.findCRC32(ObjectId
                .fromString("82c6b885ff600be425b4ea96dee75dca255b69e7")));

        // Validate that an index written by PackWriter is the same.
        final File idx2File = new File(indexFile.getAbsolutePath() + ".2");
        final FileOutputStream is = new FileOutputStream(idx2File);
        try {
            writer.writeIndex(is);
        } finally {
            is.close();
        }
        final PackIndex idx2 = PackIndex.open(idx2File);
        assertTrue(idx2 instanceof PackIndexV2);
        assertEquals(idx1.getObjectCount(), idx2.getObjectCount());
        assertEquals(idx1.getOffset64Count(), idx2.getOffset64Count());

        for (int i = 0; i < idx1.getObjectCount(); i++) {
            final ObjectId id = idx1.getObjectId(i);
            assertEquals(id, idx2.getObjectId(i));
            assertEquals(idx1.findOffset(id), idx2.findOffset(id));
            assertEquals(idx1.findCRC32(id), idx2.findCRC32(id));
        }
    }

    @Test
    public void testExclude() throws Exception {
        FileRepository repo = createBareRepository();
        TestRepository<FileRepository> testRepo = new TestRepository<FileRepository>(
                repo);
        BranchBuilder bb = testRepo.branch("refs/heads/master");
        contentA = testRepo.blob("A");
        c1 = bb.commit().add("f", contentA).create();
        testRepo.getRevWalk().parseHeaders(c1);
        PackIndex pf1 = writePack(repo, wants(c1), EMPTY_ID_SET);
        assertContent(
                pf1,
                Arrays.asList(c1.getId(), c1.getTree().getId(),
                        contentA.getId()));
        contentB = testRepo.blob("B");
        c2 = bb.commit().add("f", contentB).create();
        testRepo.getRevWalk().parseHeaders(c2);
        PackIndex pf2 = writePack(repo, wants(c2), Sets.of((ObjectIdSet) pf1));
        assertContent(
                pf2,
                Arrays.asList(c2.getId(), c2.getTree().getId(),
                        contentB.getId()));
    }

    private static void assertContent(PackIndex pi, List<ObjectId> expected) {
        assertEquals("Pack index has wrong size.", expected.size(),
                pi.getObjectCount());
        for (int i = 0; i < pi.getObjectCount(); i++)
            assertTrue(
                    "Pack index didn't contain the expected id "
                            + pi.getObjectId(i),
                    expected.contains(pi.getObjectId(i)));
    }

    @Test
    public void testShallowIsMinimal() throws Exception {
        FileRepository repo = setupRepoForShallowFetch();
        PackIndex idx = writeShallowPack(repo, 1, wants(c2), NONE, NONE);
        assertContent(idx,
                Arrays.asList(c1.getId(), c2.getId(), c1.getTree().getId(),
                        c2.getTree().getId(), contentA.getId(),
                        contentB.getId()));

        // Client already has blobs A and B, verify those are not packed.
        idx = writeShallowPack(repo, 1, wants(c5), haves(c1, c2), shallows(c1));
        assertContent(idx,
                Arrays.asList(c4.getId(), c5.getId(), c4.getTree().getId(),
                        c5.getTree().getId(), contentC.getId(),
                        contentD.getId(), contentE.getId()));
    }

    @Test
    public void testShallowFetchShallowParent() throws Exception {
        FileRepository repo = setupRepoForShallowFetch();
        PackIndex idx = writeShallowPack(repo, 1, wants(c5), NONE, NONE);
        assertContent(idx,
                Arrays.asList(c4.getId(), c5.getId(), c4.getTree().getId(),
                        c5.getTree().getId(), contentA.getId(),
                        contentB.getId(), contentC.getId(), contentD.getId(),
                        contentE.getId()));

        idx = writeShallowPack(repo, 1, wants(c3), haves(c4, c5), shallows(c4));
        assertContent(idx, Arrays.asList(c2.getId(), c3.getId(),
                c2.getTree().getId(), c3.getTree().getId()));
    }

    @Test
    public void testShallowFetchShallowAncestor() throws Exception {
        FileRepository repo = setupRepoForShallowFetch();
        PackIndex idx = writeShallowPack(repo, 1, wants(c5), NONE, NONE);
        assertContent(idx,
                Arrays.asList(c4.getId(), c5.getId(), c4.getTree().getId(),
                        c5.getTree().getId(), contentA.getId(),
                        contentB.getId(), contentC.getId(), contentD.getId(),
                        contentE.getId()));

        idx = writeShallowPack(repo, 1, wants(c2), haves(c4, c5), shallows(c4));
        assertContent(idx, Arrays.asList(c1.getId(), c2.getId(),
                c1.getTree().getId(), c2.getTree().getId()));
    }
    private FileRepository setupRepoForShallowFetch() throws Exception {
        FileRepository repo = createBareRepository();
        TestRepository<Repository> r = new TestRepository<Repository>(repo);
        BranchBuilder bb = r.branch("refs/heads/master");
        contentA = r.blob("A");
        contentB = r.blob("B");
        contentC = r.blob("C");
        contentD = r.blob("D");
        contentE = r.blob("E");
        c1 = bb.commit().add("a", contentA).create();
        c2 = bb.commit().add("b", contentB).create();
        c3 = bb.commit().add("c", contentC).create();
        c4 = bb.commit().add("d", contentD).create();
        c5 = bb.commit().add("e", contentE).create();
        r.getRevWalk().parseHeaders(c5); // fully initialize the tip RevCommit
        return repo;
    }

    private static PackIndex writePack(FileRepository repo,
            Set<? extends ObjectId> want, Set<ObjectIdSet> excludeObjects)
            throws IOException {
        RevWalk walk = new RevWalk(repo);
        return writePack(repo, walk, 0, want, NONE, excludeObjects);
    }

    private static PackIndex writeShallowPack(FileRepository repo, int depth,
            Set<? extends ObjectId> want, Set<? extends ObjectId> have,
            Set<? extends ObjectId> shallow) throws IOException {
        // During negotiation, UploadPack would have set up a DepthWalk and
        // marked the client's "shallow" commits. Emulate that here.
        DepthWalk.RevWalk walk = new DepthWalk.RevWalk(repo, depth);
        walk.assumeShallow(shallow);
        return writePack(repo, walk, depth, want, have, EMPTY_ID_SET);
    }

    private static PackIndex writePack(FileRepository repo, RevWalk walk,
            int depth, Set<? extends ObjectId> want,
            Set<? extends ObjectId> have, Set<ObjectIdSet> excludeObjects)
            throws IOException {
        try (PackWriter pw = new PackWriter(repo)) {
            pw.setDeltaBaseAsOffset(true);
            pw.setReuseDeltaCommits(false);
            for (ObjectIdSet idx : excludeObjects) {
                pw.excludeObjects(idx);
            }
            if (depth > 0) {
                pw.setShallowPack(depth, null);
            }
            ObjectWalk ow = walk.toObjectWalkWithSameObjects();

            pw.preparePack(NullProgressMonitor.INSTANCE, ow, want, have);
            String id = pw.computeName().getName();
            File packdir = new File(repo.getObjectsDirectory(), "pack");
            File packFile = new File(packdir, "pack-" + id + ".pack");
            FileOutputStream packOS = new FileOutputStream(packFile);
            pw.writePack(NullProgressMonitor.INSTANCE,
                    NullProgressMonitor.INSTANCE, packOS);
            packOS.close();
            File idxFile = new File(packdir, "pack-" + id + ".idx");
            FileOutputStream idxOS = new FileOutputStream(idxFile);
            pw.writeIndex(idxOS);
            idxOS.close();
            return PackIndex.open(idxFile);
        }
    }
    // TODO: testWritePackDeltasCycle()
    // TODO: testWritePackDeltasDepth()

    private void writeVerifyPack1() throws IOException {
        final HashSet<ObjectId> interestings = new HashSet<ObjectId>();
        interestings.add(ObjectId
                .fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"));
        createVerifyOpenPack(interestings, NONE, false, false);

        final ObjectId expectedOrder[] = new ObjectId[] {
                ObjectId.fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"),
                ObjectId.fromString("c59759f143fb1fe21c197981df75a7ee00290799"),
                ObjectId.fromString("540a36d136cf413e4b064c2b0e0a4db60f77feab"),
                ObjectId.fromString("aabf2ffaec9b497f0950352b3e582d73035c2035"),
                ObjectId.fromString("902d5476fa249b7abc9d84c611577a81381f0327"),
                ObjectId.fromString("4b825dc642cb6eb9a060e54bf8d69288fbee4904"),
                ObjectId.fromString("5b6e7c66c276e7610d4a73c70ec1a1f7c1003259"),
                ObjectId.fromString("6ff87c4664981e4397625791c8ea3bbb5f2279a3") };

        assertEquals(expectedOrder.length, writer.getObjectCount());
        verifyObjectsOrder(expectedOrder);
        assertEquals("34be9032ac282b11fa9babdc2b2a93ca996c9c2f", writer
                .computeName().name());
    }

    private void writeVerifyPack2(boolean deltaReuse) throws IOException {
        config.setReuseDeltas(deltaReuse);
        final HashSet<ObjectId> interestings = new HashSet<ObjectId>();
        interestings.add(ObjectId
                .fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"));
        final HashSet<ObjectId> uninterestings = new HashSet<ObjectId>();
        uninterestings.add(ObjectId
                .fromString("540a36d136cf413e4b064c2b0e0a4db60f77feab"));
        createVerifyOpenPack(interestings, uninterestings, false, false);

        final ObjectId expectedOrder[] = new ObjectId[] {
                ObjectId.fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"),
                ObjectId.fromString("c59759f143fb1fe21c197981df75a7ee00290799"),
                ObjectId.fromString("aabf2ffaec9b497f0950352b3e582d73035c2035"),
                ObjectId.fromString("902d5476fa249b7abc9d84c611577a81381f0327"),
                ObjectId.fromString("5b6e7c66c276e7610d4a73c70ec1a1f7c1003259"),
                ObjectId.fromString("6ff87c4664981e4397625791c8ea3bbb5f2279a3") };

        if (deltaReuse) {
            // objects order influenced (swapped) by delta-base first rule
            ObjectId temp = expectedOrder[4];
            expectedOrder[4] = expectedOrder[5];
            expectedOrder[5] = temp;
        }
        assertEquals(expectedOrder.length, writer.getObjectCount());
        verifyObjectsOrder(expectedOrder);
        assertEquals("ed3f96b8327c7c66b0f8f70056129f0769323d86", writer
                .computeName().name());
    }

    private void writeVerifyPack4(final boolean thin) throws IOException {
        final HashSet<ObjectId> interestings = new HashSet<ObjectId>();
        interestings.add(ObjectId
                .fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"));
        final HashSet<ObjectId> uninterestings = new HashSet<ObjectId>();
        uninterestings.add(ObjectId
                .fromString("c59759f143fb1fe21c197981df75a7ee00290799"));
        createVerifyOpenPack(interestings, uninterestings, thin, false);

        final ObjectId writtenObjects[] = new ObjectId[] {
                ObjectId.fromString("82c6b885ff600be425b4ea96dee75dca255b69e7"),
                ObjectId.fromString("aabf2ffaec9b497f0950352b3e582d73035c2035"),
                ObjectId.fromString("5b6e7c66c276e7610d4a73c70ec1a1f7c1003259") };
        assertEquals(writtenObjects.length, writer.getObjectCount());
        ObjectId expectedObjects[];
        if (thin) {
            expectedObjects = new ObjectId[4];
            System.arraycopy(writtenObjects, 0, expectedObjects, 0,
                    writtenObjects.length);
            expectedObjects[3] = ObjectId
                    .fromString("6ff87c4664981e4397625791c8ea3bbb5f2279a3");
        } else {
            expectedObjects = writtenObjects;
        }

        verifyObjectsOrder(expectedObjects);
        assertEquals("cded4b74176b4456afa456768b2b5aafb41c44fc", writer
                .computeName().name());
    }

    private void createVerifyOpenPack(final Set<ObjectId> interestings,
            final Set<ObjectId> uninterestings, final boolean thin,
            final boolean ignoreMissingUninteresting)
            throws MissingObjectException, IOException {
        createVerifyOpenPack(interestings, uninterestings, thin,
                ignoreMissingUninteresting, false);
    }

    private void createVerifyOpenPack(final Set<ObjectId> interestings,
            final Set<ObjectId> uninterestings, final boolean thin,
            final boolean ignoreMissingUninteresting, boolean useBitmaps)
            throws MissingObjectException, IOException {
        NullProgressMonitor m = NullProgressMonitor.INSTANCE;
        writer = new PackWriter(config, db.newObjectReader());
        writer.setUseBitmaps(useBitmaps);
        writer.setThin(thin);
        writer.setIgnoreMissingUninteresting(ignoreMissingUninteresting);
        writer.preparePack(m, interestings, uninterestings);
        writer.writePack(m, m, os);
        writer.close();
        verifyOpenPack(thin);
    }

    private void createVerifyOpenPack(final List<RevObject> objectSource)
            throws MissingObjectException, IOException {
        NullProgressMonitor m = NullProgressMonitor.INSTANCE;
        writer = new PackWriter(config, db.newObjectReader());
        writer.preparePack(objectSource.iterator());
        assertEquals(objectSource.size(), writer.getObjectCount());
        writer.writePack(m, m, os);
        writer.close();
        verifyOpenPack(false);
    }

    private void verifyOpenPack(final boolean thin) throws IOException {
        final byte[] packData = os.toByteArray();

        if (thin) {
            PackParser p = index(packData);
            try {
                p.parse(NullProgressMonitor.INSTANCE);
                fail("indexer should grumble about missing object");
            } catch (IOException x) {
                // expected
            }
        }

        ObjectDirectoryPackParser p = (ObjectDirectoryPackParser) index(packData);
        p.setKeepEmpty(true);
        p.setAllowThin(thin);
        p.setIndexVersion(2);
        p.parse(NullProgressMonitor.INSTANCE);
        pack = p.getPackFile();
        assertNotNull("have PackFile after parsing", pack);
    }

    private PackParser index(final byte[] packData) throws IOException {
        if (inserter == null)
            inserter = dst.newObjectInserter();
        return inserter.newPackParser(new ByteArrayInputStream(packData));
    }

    private void verifyObjectsOrder(final ObjectId objectsOrder[]) {
        final List<PackIndex.MutableEntry> entries = new ArrayList<PackIndex.MutableEntry>();

        for (MutableEntry me : pack) {
            entries.add(me.cloneEntry());
        }
        Collections.sort(entries, new Comparator<PackIndex.MutableEntry>() {
            public int compare(MutableEntry o1, MutableEntry o2) {
                return Long.signum(o1.getOffset() - o2.getOffset());
            }
        });

        int i = 0;
        for (MutableEntry me : entries) {
            assertEquals(objectsOrder[i++].toObjectId(), me.toObjectId());
        }
    }

    private static Set<ObjectId> haves(ObjectId... objects) {
        return Sets.of(objects);
    }

    private static Set<ObjectId> wants(ObjectId... objects) {
        return Sets.of(objects);
    }

    private static Set<ObjectId> shallows(ObjectId... objects) {
        return Sets.of(objects);
    }
}