
PackParserTest.java 15KB

maxObjectSizeLimit for receive-pack

ReceivePack (and PackParser) can be configured with a maxObjectSizeLimit to prevent users from pushing overly large objects to Git. The limit check is applied to all object types, although a BLOB is the most likely to exceed it. In all cases the size of the object header is excluded from the size that is checked against the limit, since this is the size a BLOB would occupy in the working tree when checked out as a file. When an object exceeds the maxObjectSizeLimit, the receive-pack aborts immediately.

Delta objects (both offset and ref delta) are also checked against the limit. However, for delta objects we first check the size of the inflated delta block against the maxObjectSizeLimit and abort immediately if it exceeds the limit. In this case we do not even know the exact size of the resolved delta object, but we assume it will be larger than the given maxObjectSizeLimit, as a delta is generally only chosen if it can copy more data from the base object than it needs to insert or to represent the copy ranges. Aborting early in this case avoids unnecessarily inflating the (huge) delta block.

Unfortunately, it is too expensive (especially for a large delta) to compute the SHA-1 of an object that causes the receive-pack to abort. Doing so would undermine the main purpose of this feature, which is to protect server resources from users pushing huge objects. Therefore we do not report the SHA-1 in the error message.

Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
12 years ago
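The size check the commit message describes can be sketched in plain Java. This is a simplified, hypothetical illustration (class and method names are made up, not JGit API): the inflated size taken from the pack object header, which excludes the header bytes themselves, is compared against the configured limit before the object body is processed, and a limit of zero or less means unlimited, mirroring `setMaxObjectSizeLimit(0)`.

```java
// Hypothetical, simplified sketch of the maxObjectSizeLimit check
// described above; the real enforcement lives in JGit's PackParser.
public class MaxObjectSizeCheck {
	private final long maxObjectSizeLimit;

	public MaxObjectSizeCheck(long maxObjectSizeLimit) {
		this.maxObjectSizeLimit = maxObjectSizeLimit;
	}

	/**
	 * @param inflatedSize
	 *            size from the pack object header, excluding the header
	 *            itself (the size a BLOB would occupy when checked out)
	 * @return true if the object may be processed, false if the
	 *         receive-pack should abort
	 */
	public boolean isWithinLimit(long inflatedSize) {
		// A limit <= 0 means "no limit"; otherwise an object exactly at
		// the limit is still accepted, only larger objects are rejected.
		return maxObjectSizeLimit <= 0 || inflatedSize <= maxObjectSizeLimit;
	}

	public static void main(String[] args) {
		MaxObjectSizeCheck check = new MaxObjectSizeCheck(10);
		System.out.println(check.isWithinLimit(10)); // true: at the limit
		System.out.println(check.isWithinLimit(11)); // false: would abort
	}
}
```

The tests below (`testMaxObjectSizeFullBlob` and friends) exercise exactly these boundary cases: a limit equal to the object size passes, a limit one byte smaller fails with `TooLargeObjectInPackException`.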
/*
 * Copyright (C) 2009, Google Inc.
 * Copyright (C) 2008, Imran M Yousuf <imyousuf@smartitengineering.com>
 * Copyright (C) 2007-2008, Robin Rosenberg <robin.rosenberg@dewire.com>
 * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org> and others
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Distribution License v. 1.0 which is available at
 * https://www.eclipse.org/org/documents/edl-v10.php.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
package org.eclipse.jgit.transport;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.text.MessageFormat;
import java.util.zip.Deflater;

import org.eclipse.jgit.errors.TooLargeObjectInPackException;
import org.eclipse.jgit.internal.JGitText;
import org.eclipse.jgit.internal.storage.file.ObjectDirectoryPackParser;
import org.eclipse.jgit.internal.storage.file.Pack;
import org.eclipse.jgit.junit.JGitTestUtil;
import org.eclipse.jgit.junit.RepositoryTestCase;
import org.eclipse.jgit.junit.TestRepository;
import org.eclipse.jgit.lib.Constants;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectInserter;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.revwalk.RevBlob;
import org.eclipse.jgit.util.NB;
import org.eclipse.jgit.util.TemporaryBuffer;
import org.eclipse.jgit.util.io.UnionInputStream;
import org.junit.After;
import org.junit.Test;

/**
 * Test indexing of git packs. A pack is read from a stream, copied
 * to a new pack, and an index is created. Then the packs are tested
 * to make sure they contain the expected objects (we don't test for
 * all of them unless the packs are very small).
 */
public class PackParserTest extends RepositoryTestCase {
	/**
	 * Test indexing one of the test packs in the egit repo. It has deltas.
	 *
	 * @throws IOException
	 */
	@Test
	public void test1() throws IOException {
		File packFile = JGitTestUtil.getTestResourceFile("pack-34be9032ac282b11fa9babdc2b2a93ca996c9c2f.pack");
		try (InputStream is = new FileInputStream(packFile)) {
			ObjectDirectoryPackParser p = (ObjectDirectoryPackParser) index(is);
			p.parse(NullProgressMonitor.INSTANCE);
			Pack pack = p.getPack();
			assertTrue(pack.hasObject(ObjectId.fromString("4b825dc642cb6eb9a060e54bf8d69288fbee4904")));
			assertTrue(pack.hasObject(ObjectId.fromString("540a36d136cf413e4b064c2b0e0a4db60f77feab")));
			assertTrue(pack.hasObject(ObjectId.fromString("5b6e7c66c276e7610d4a73c70ec1a1f7c1003259")));
			assertTrue(pack.hasObject(ObjectId.fromString("6ff87c4664981e4397625791c8ea3bbb5f2279a3")));
			assertTrue(pack.hasObject(ObjectId.fromString("82c6b885ff600be425b4ea96dee75dca255b69e7")));
			assertTrue(pack.hasObject(ObjectId.fromString("902d5476fa249b7abc9d84c611577a81381f0327")));
			assertTrue(pack.hasObject(ObjectId.fromString("aabf2ffaec9b497f0950352b3e582d73035c2035")));
			assertTrue(pack.hasObject(ObjectId.fromString("c59759f143fb1fe21c197981df75a7ee00290799")));
		}
	}

	/**
	 * This is just another pack. It so happens that we have two convenient
	 * packs to test with in the repository.
	 *
	 * @throws IOException
	 */
	@Test
	public void test2() throws IOException {
		File packFile = JGitTestUtil.getTestResourceFile("pack-df2982f284bbabb6bdb59ee3fcc6eb0983e20371.pack");
		try (InputStream is = new FileInputStream(packFile)) {
			ObjectDirectoryPackParser p = (ObjectDirectoryPackParser) index(is);
			p.parse(NullProgressMonitor.INSTANCE);
			Pack pack = p.getPack();
			assertTrue(pack.hasObject(ObjectId.fromString("02ba32d3649e510002c21651936b7077aa75ffa9")));
			assertTrue(pack.hasObject(ObjectId.fromString("0966a434eb1a025db6b71485ab63a3bfbea520b6")));
			assertTrue(pack.hasObject(ObjectId.fromString("09efc7e59a839528ac7bda9fa020dc9101278680")));
			assertTrue(pack.hasObject(ObjectId.fromString("0a3d7772488b6b106fb62813c4d6d627918d9181")));
			assertTrue(pack.hasObject(ObjectId.fromString("1004d0d7ac26fbf63050a234c9b88a46075719d3")));
			assertTrue(pack.hasObject(ObjectId.fromString("10da5895682013006950e7da534b705252b03be6")));
			assertTrue(pack.hasObject(ObjectId.fromString("1203b03dc816ccbb67773f28b3c19318654b0bc8")));
			assertTrue(pack.hasObject(ObjectId.fromString("15fae9e651043de0fd1deef588aa3fbf5a7a41c6")));
			assertTrue(pack.hasObject(ObjectId.fromString("16f9ec009e5568c435f473ba3a1df732d49ce8c3")));
			assertTrue(pack.hasObject(ObjectId.fromString("1fd7d579fb6ae3fe942dc09c2c783443d04cf21e")));
			assertTrue(pack.hasObject(ObjectId.fromString("20a8ade77639491ea0bd667bf95de8abf3a434c8")));
			assertTrue(pack.hasObject(ObjectId.fromString("2675188fd86978d5bc4d7211698b2118ae3bf658")));
			// and lots more...
		}
	}

	@Test
	public void testTinyThinPack() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("a");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 0x1, 0x1, 0x1, 'b' });
		digest(pack);

		PackParser p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(true);
		p.parse(NullProgressMonitor.INSTANCE);
	}

	@Test
	public void testPackWithDuplicateBlob() throws Exception {
		final byte[] data = Constants.encode("0123456789abcdefg");
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			assertTrue(db.getObjectDatabase().has(d.blob(data)));
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_BLOB) << 4 | 0x80 | 1);
		pack.write(1);
		deflate(pack, data);
		digest(pack);

		PackParser p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(false);
		p.parse(NullProgressMonitor.INSTANCE);
	}

	@Test
	public void testPackWithTrailingGarbage() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("a");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 0x1, 0x1, 0x1, 'b' });
		digest(pack);

		PackParser p = index(new UnionInputStream(
				new ByteArrayInputStream(pack.toByteArray()),
				new ByteArrayInputStream(new byte[] { 0x7e })));
		p.setAllowThin(true);
		p.setCheckEofAfterPackFooter(true);
		try {
			p.parse(NullProgressMonitor.INSTANCE);
			fail("Pack with trailing garbage was accepted");
		} catch (IOException err) {
			assertEquals(
					MessageFormat.format(JGitText.get().expectedEOFReceived, "\\x7e"),
					err.getMessage());
		}
	}

	@Test
	public void testMaxObjectSizeFullBlob() throws Exception {
		final byte[] data = Constants.encode("0123456789");
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			d.blob(data);
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_BLOB) << 4 | 10);
		deflate(pack, data);
		digest(pack);

		PackParser p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setMaxObjectSizeLimit(11);
		p.parse(NullProgressMonitor.INSTANCE);

		p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setMaxObjectSizeLimit(10);
		p.parse(NullProgressMonitor.INSTANCE);

		p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setMaxObjectSizeLimit(9);
		try {
			p.parse(NullProgressMonitor.INSTANCE);
			fail("PackParser should have failed");
		} catch (TooLargeObjectInPackException e) {
			assertTrue(e.getMessage().contains("10")); // obj size
			assertTrue(e.getMessage().contains("9")); // max obj size
		}
	}

	@Test
	public void testMaxObjectSizeDeltaBlock() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("a");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 14);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 1, 11, 11, 'a', '0', '1', '2', '3', '4',
				'5', '6', '7', '8', '9' });
		digest(pack);

		PackParser p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(true);
		p.setMaxObjectSizeLimit(14);
		p.parse(NullProgressMonitor.INSTANCE);

		p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(true);
		p.setMaxObjectSizeLimit(13);
		try {
			p.parse(NullProgressMonitor.INSTANCE);
			fail("PackParser should have failed");
		} catch (TooLargeObjectInPackException e) {
			assertTrue(e.getMessage().contains("13")); // max obj size
			assertTrue(e.getMessage().contains("14")); // delta size
		}
	}

	@Test
	public void testMaxObjectSizeDeltaResultSize() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("0123456789");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 10, 11, 1, 'a' });
		digest(pack);

		PackParser p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(true);
		p.setMaxObjectSizeLimit(11);
		p.parse(NullProgressMonitor.INSTANCE);

		p = index(new ByteArrayInputStream(pack.toByteArray()));
		p.setAllowThin(true);
		p.setMaxObjectSizeLimit(10);
		try {
			p.parse(NullProgressMonitor.INSTANCE);
			fail("PackParser should have failed");
		} catch (TooLargeObjectInPackException e) {
			assertTrue(e.getMessage().contains("11")); // result obj size
			assertTrue(e.getMessage().contains("10")); // max obj size
		}
	}

	@Test
	public void testNonMarkingInputStream() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("a");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 0x1, 0x1, 0x1, 'b' });
		digest(pack);

		InputStream in = new ByteArrayInputStream(pack.toByteArray()) {
			@Override
			public boolean markSupported() {
				return false;
			}

			@Override
			public void mark(int maxlength) {
				fail("Mark should not be called");
			}
		};

		PackParser p = index(in);
		p.setAllowThin(true);
		p.setCheckEofAfterPackFooter(false);
		p.setExpectDataAfterPackFooter(true);
		try {
			p.parse(NullProgressMonitor.INSTANCE);
			fail("PackParser should have failed");
		} catch (IOException e) {
			assertEquals(JGitText.get().inputStreamMustSupportMark,
					e.getMessage());
		}
	}

	@Test
	public void testDataAfterPackFooterSingleRead() throws Exception {
		RevBlob a;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			a = d.blob("a");
		}
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(32 * 1024);
		packHeader(pack, 1);
		pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
		a.copyRawTo(pack);
		deflate(pack, new byte[] { 0x1, 0x1, 0x1, 'b' });
		digest(pack);

		byte[] packData = pack.toByteArray();
		byte[] streamData = new byte[packData.length + 1];
		System.arraycopy(packData, 0, streamData, 0, packData.length);
		streamData[packData.length] = 0x7e;

		InputStream in = new ByteArrayInputStream(streamData);
		PackParser p = index(in);
		p.setAllowThin(true);
		p.setCheckEofAfterPackFooter(false);
		p.setExpectDataAfterPackFooter(true);
		p.parse(NullProgressMonitor.INSTANCE);
		assertEquals(0x7e, in.read());
	}

	@Test
	public void testDataAfterPackFooterSplitObjectRead() throws Exception {
		final byte[] data = Constants.encode("0123456789");

		// Build a pack ~17k
		int objects = 900;
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(32 * 1024);
		packHeader(pack, objects);
		for (int i = 0; i < objects; i++) {
			pack.write((Constants.OBJ_BLOB) << 4 | 10);
			deflate(pack, data);
		}
		digest(pack);

		byte[] packData = pack.toByteArray();
		byte[] streamData = new byte[packData.length + 1];
		System.arraycopy(packData, 0, streamData, 0, packData.length);
		streamData[packData.length] = 0x7e;

		InputStream in = new ByteArrayInputStream(streamData);
		PackParser p = index(in);
		p.setAllowThin(true);
		p.setCheckEofAfterPackFooter(false);
		p.setExpectDataAfterPackFooter(true);
		p.parse(NullProgressMonitor.INSTANCE);
		assertEquals(0x7e, in.read());
	}

	@Test
	public void testDataAfterPackFooterSplitHeaderRead() throws Exception {
		final byte[] data = Constants.encode("a");
		RevBlob b;
		try (TestRepository<Repository> d = new TestRepository<>(db)) {
			b = d.blob(data);
		}
		int objects = 248;
		TemporaryBuffer.Heap pack = new TemporaryBuffer.Heap(32 * 1024);
		packHeader(pack, objects + 1);
		int offset = 13;
		StringBuilder sb = new StringBuilder();
		for (int i = 0; i < offset; i++)
			sb.append(i);
		offset = sb.toString().length();
		int lenByte = (Constants.OBJ_BLOB) << 4 | (offset & 0x0F);
		offset >>= 4;
		if (offset > 0)
			lenByte |= 1 << 7;
		pack.write(lenByte);
		while (offset > 0) {
			lenByte = offset & 0x7F;
			offset >>= 6;
			if (offset > 0)
				lenByte |= 1 << 7;
			pack.write(lenByte);
		}
		deflate(pack, Constants.encode(sb.toString()));

		for (int i = 0; i < objects; i++) {
			// The last pack header written falls across the 8192 byte boundary
			// between [8189:8210]
			pack.write((Constants.OBJ_REF_DELTA) << 4 | 4);
			b.copyRawTo(pack);
			deflate(pack, new byte[] { 0x1, 0x1, 0x1, 'b' });
		}
		digest(pack);

		byte[] packData = pack.toByteArray();
		byte[] streamData = new byte[packData.length + 1];
		System.arraycopy(packData, 0, streamData, 0, packData.length);
		streamData[packData.length] = 0x7e;

		InputStream in = new ByteArrayInputStream(streamData);
		PackParser p = index(in);
		p.setAllowThin(true);
		p.setCheckEofAfterPackFooter(false);
		p.setExpectDataAfterPackFooter(true);
		p.parse(NullProgressMonitor.INSTANCE);
		assertEquals(0x7e, in.read());
	}

	private static void packHeader(TemporaryBuffer.Heap tinyPack, int cnt)
			throws IOException {
		final byte[] hdr = new byte[8];
		NB.encodeInt32(hdr, 0, 2);
		NB.encodeInt32(hdr, 4, cnt);
		tinyPack.write(Constants.PACK_SIGNATURE);
		tinyPack.write(hdr, 0, 8);
	}

	private static void deflate(TemporaryBuffer.Heap tinyPack,
			final byte[] content) throws IOException {
		final Deflater deflater = new Deflater();
		final byte[] buf = new byte[128];
		deflater.setInput(content, 0, content.length);
		deflater.finish();
		do {
			final int n = deflater.deflate(buf, 0, buf.length);
			if (n > 0)
				tinyPack.write(buf, 0, n);
		} while (!deflater.finished());
	}

	private static void digest(TemporaryBuffer.Heap buf) throws IOException {
		MessageDigest md = Constants.newMessageDigest();
		md.update(buf.toByteArray());
		buf.write(md.digest());
	}

	private ObjectInserter inserter;

	@After
	public void release() {
		if (inserter != null) {
			inserter.close();
		}
	}

	private PackParser index(InputStream in) throws IOException {
		if (inserter == null)
			inserter = db.newObjectInserter();
		return inserter.newPackParser(in);
	}
}
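The object headers these tests hand-build (the `lenByte` loop in `testDataAfterPackFooterSplitHeaderRead`, or expressions like `(Constants.OBJ_BLOB) << 4 | 10`) follow the pack object header encoding: the low 4 bits of the size share the first byte with the 3-bit object type, and any remaining size bits follow in 7-bit groups with bit 7 as a continuation flag. A standalone sketch of that encoding (an illustration; `PackObjectHeader` and `encode` are made-up names, not JGit API):

```java
import java.io.ByteArrayOutputStream;

// Simplified sketch of the pack object header encoding used by the
// tests above: first byte = (type << 4) | low 4 size bits, then
// further size bits in 7-bit groups, bit 7 set while more follow.
public class PackObjectHeader {
	public static byte[] encode(int type, long size) {
		ByteArrayOutputStream out = new ByteArrayOutputStream();
		int b = (type << 4) | (int) (size & 0x0F);
		size >>>= 4;
		while (size > 0) {
			out.write(b | 0x80); // continuation: more size bytes follow
			b = (int) (size & 0x7F);
			size >>>= 7;
		}
		out.write(b);
		return out.toByteArray();
	}

	public static void main(String[] args) {
		// OBJ_BLOB is type 3; a 10-byte blob fits in a single header byte,
		// matching "pack.write((Constants.OBJ_BLOB) << 4 | 10)" above.
		byte[] h = encode(3, 10);
		System.out.println(h.length);            // 1
		System.out.println((h[0] & 0xFF) == 0x3A); // true
	}
}
```

For a 16-byte blob this yields two bytes (0xB0, 0x01), which is exactly what the continuation-bit loop in `testDataAfterPackFooterSplitHeaderRead` writes for its 16-character string.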