
LfsServerTest.java 9.4KB

Support LFS protocol and a file system based LFS storage

Implement LfsProtocolServlet handling the "Git LFS v1 Batch API" protocol [1]. Add a simple file system based LFS content store and the debug-lfs-store command to simplify testing. Introduce a LargeFileRepository interface to enable additional storage implementations while reusing the same protocol implementation.

On the client side we have to configure lfs.url, specify that we use the batch API, and that we don't use authentication:

    [lfs]
        url = http://host:port/lfs
        batch = true
    [lfs "http://host:port/lfs"]
        access = none

The git-lfs client appends "objects/batch" to the lfs.url. An Authorization header is hard-coded in FileLfsRepository.getAction because the git-lfs client will then skip asking for credentials; it simply forwards the Authorization header from the response to the download/upload request.

The FileLfsServlet supports file content storage for a "Large File Storage" (LFS) server as defined by the GitHub LFS API [2]:

- Upload and download of large files is likely network bound, hence an asynchronous servlet is used for good scalability.
- Objects are stored in the file system with a simple 2-level fan-out.
- A LockFile protects writing of large objects against multiple concurrent uploads of the same object.
- To prevent corrupt uploads, an uploaded file is rejected if its hash doesn't match the id given in the URL.

The debug-lfs-store command runs the LfsProtocolServlet and, optionally, the FileLfsServlet, which makes it easier to set up a local test server.

[1] https://github.com/github/git-lfs/blob/master/docs/api/http-v1-batch.md
[2] https://github.com/github/git-lfs/tree/master/docs/api

Bug: 472961
Change-Id: I7378da5575159d2195138d799704880c5c82d5f3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>

8 years ago
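The 2-level fan-out mentioned in the commit message maps each object id to a nested directory path so that no single directory accumulates all objects. A minimal sketch of the idea, using a hypothetical helper (not the actual FileLfsRepository code, whose layout may differ in detail):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: an object whose id starts with "abcd" is stored
// under <root>/ab/cd/<id>, keeping directory sizes bounded.
public class FanOutPath {
	static Path toPath(Path root, String id) {
		return root.resolve(id.substring(0, 2))
				.resolve(id.substring(2, 4))
				.resolve(id);
	}

	public static void main(String[] args) {
		System.out.println(toPath(Paths.get("lfs"), "abcd1234ef"));
	}
}
```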
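The upload integrity check described above (reject the upload if the content's hash doesn't match the id in the URL) can be sketched as follows. This is an illustrative stand-alone example assuming SHA-256 object ids, not the servlet's actual code:

```java
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;

// Illustrative sketch: compute the SHA-256 of an uploaded body while
// draining it, then compare against the oid taken from the request URL.
public class HashCheck {
	static String hexOf(InputStream body) throws Exception {
		MessageDigest md = MessageDigest.getInstance("SHA-256");
		try (DigestInputStream in = new DigestInputStream(body, md)) {
			byte[] buf = new byte[8192];
			while (in.read(buf) != -1) {
				// drain; the digest accumulates as a side effect
			}
		}
		StringBuilder sb = new StringBuilder();
		for (byte b : md.digest()) {
			sb.append(String.format("%02x", b));
		}
		return sb.toString();
	}

	// An upload would be accepted only when this returns true.
	static boolean accept(InputStream body, String oidFromUrl) throws Exception {
		return hexOf(body).equals(oidFromUrl);
	}
}
```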
/*
 * Copyright (C) 2015, Matthias Sohn <matthias.sohn@sap.com>
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.lfs.server.fs;

import static java.nio.charset.StandardCharsets.UTF_8;
import static org.junit.Assert.assertEquals;

import java.io.BufferedInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.security.DigestInputStream;
import java.security.SecureRandom;

import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.StatusLine;
import org.apache.http.client.ClientProtocolException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.InputStreamEntity;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jgit.junit.http.AppServer;
import org.eclipse.jgit.lfs.errors.LfsException;
import org.eclipse.jgit.lfs.lib.AnyLongObjectId;
import org.eclipse.jgit.lfs.lib.Constants;
import org.eclipse.jgit.lfs.lib.LongObjectId;
import org.eclipse.jgit.lfs.server.LargeFileRepository;
import org.eclipse.jgit.lfs.server.LfsProtocolServlet;
import org.eclipse.jgit.lfs.test.LongObjectIdTestUtils;
import org.eclipse.jgit.util.FileUtils;
import org.eclipse.jgit.util.IO;
import org.junit.After;
import org.junit.Before;
public abstract class LfsServerTest {

	private static final long timeout = /* 10 sec */ 10 * 1000;

	protected static final int MiB = 1024 * 1024;

	/** In-memory application server; subclass must start. */
	protected AppServer server;

	private Path tmp;

	private Path dir;

	protected FileLfsRepository repository;

	protected FileLfsServlet servlet;

	public LfsServerTest() {
		super();
	}

	public Path getTempDirectory() {
		return tmp;
	}

	public Path getDir() {
		return dir;
	}

	@Before
	public void setup() throws Exception {
		tmp = Files.createTempDirectory("jgit_test_");
		server = new AppServer();
		ServletContextHandler app = server.addContext("/lfs");
		dir = Paths.get(tmp.toString(), "lfs");
		this.repository = new FileLfsRepository(null, dir);
		servlet = new FileLfsServlet(repository, timeout);
		app.addServlet(new ServletHolder(servlet), "/objects/*");
		LfsProtocolServlet protocol = new LfsProtocolServlet() {
			private static final long serialVersionUID = 1L;

			@Override
			protected LargeFileRepository getLargeFileRepository(
					LfsRequest request, String path, String auth)
					throws LfsException {
				return repository;
			}
		};
		app.addServlet(new ServletHolder(protocol), "/objects/batch");
		server.setUp();
		this.repository.setUrl(server.getURI() + "/lfs/objects/");
	}

	@After
	public void tearDown() throws Exception {
		server.tearDown();
		FileUtils.delete(tmp.toFile(), FileUtils.RECURSIVE | FileUtils.RETRY);
	}

	protected AnyLongObjectId putContent(String s)
			throws IOException, ClientProtocolException {
		AnyLongObjectId id = LongObjectIdTestUtils.hash(s);
		return putContent(id, s);
	}

	protected AnyLongObjectId putContent(AnyLongObjectId id, String s)
			throws ClientProtocolException, IOException {
		try (CloseableHttpClient client = HttpClientBuilder.create().build()) {
			HttpEntity entity = new StringEntity(s,
					ContentType.APPLICATION_OCTET_STREAM);
			String hexId = id.name();
			HttpPut request = new HttpPut(
					server.getURI() + "/lfs/objects/" + hexId);
			request.setEntity(entity);
			try (CloseableHttpResponse response = client.execute(request)) {
				StatusLine statusLine = response.getStatusLine();
				int status = statusLine.getStatusCode();
				if (status >= 400) {
					throw new RuntimeException("Status: " + status + ". "
							+ statusLine.getReasonPhrase());
				}
			}
			return id;
		}
	}

	protected LongObjectId putContent(Path f)
			throws FileNotFoundException, IOException {
		try (CloseableHttpClient client = HttpClientBuilder.create().build()) {
			LongObjectId id1, id2;
			String hexId1, hexId2;
			try (DigestInputStream in = new DigestInputStream(
					new BufferedInputStream(Files.newInputStream(f)),
					Constants.newMessageDigest())) {
				InputStreamEntity entity = new InputStreamEntity(in,
						Files.size(f), ContentType.APPLICATION_OCTET_STREAM);
				id1 = LongObjectIdTestUtils.hash(f);
				hexId1 = id1.name();
				HttpPut request = new HttpPut(
						server.getURI() + "/lfs/objects/" + hexId1);
				request.setEntity(entity);
				HttpResponse response = client.execute(request);
				checkResponseStatus(response);
				id2 = LongObjectId.fromRaw(in.getMessageDigest().digest());
				hexId2 = id2.name();
				assertEquals(hexId1, hexId2);
			}
			return id1;
		}
	}

	private void checkResponseStatus(HttpResponse response) {
		StatusLine statusLine = response.getStatusLine();
		int status = statusLine.getStatusCode();
		if (status >= 400) {
			String error;
			try {
				ByteBuffer buf = IO.readWholeStream(new BufferedInputStream(
						response.getEntity().getContent()), 1024);
				if (buf.hasArray()) {
					error = new String(buf.array(),
							buf.arrayOffset() + buf.position(), buf.remaining(),
							UTF_8);
				} else {
					final byte[] b = new byte[buf.remaining()];
					buf.duplicate().get(b);
					error = new String(b, UTF_8);
				}
			} catch (IOException e) {
				error = statusLine.getReasonPhrase();
			}
			throw new RuntimeException("Status: " + status + " " + error);
		}
		assertEquals(200, status);
	}

	protected long getContent(AnyLongObjectId id, Path f) throws IOException {
		String hexId = id.name();
		return getContent(hexId, f);
	}

	protected long getContent(String hexId, Path f) throws IOException {
		try (CloseableHttpClient client = HttpClientBuilder.create().build()) {
			HttpGet request = new HttpGet(
					server.getURI() + "/lfs/objects/" + hexId);
			HttpResponse response = client.execute(request);
			checkResponseStatus(response);
			HttpEntity entity = response.getEntity();
			long pos = 0;
			try (InputStream in = entity.getContent();
					ReadableByteChannel inChannel = Channels.newChannel(in);
					FileChannel outChannel = FileChannel.open(f,
							StandardOpenOption.CREATE_NEW,
							StandardOpenOption.WRITE)) {
				long transferred;
				do {
					transferred = outChannel.transferFrom(inChannel, pos, MiB);
					pos += transferred;
				} while (transferred > 0);
			}
			return pos;
		}
	}

	/**
	 * Creates a file with random content, repeatedly writing a random string of
	 * 4k length to the file until the file has at least the specified length.
	 *
	 * @param f
	 *            file to fill
	 * @param size
	 *            size of the file to generate
	 * @return length of the generated file in bytes
	 * @throws IOException
	 */
	protected long createPseudoRandomContentFile(Path f, long size)
			throws IOException {
		SecureRandom rnd = new SecureRandom();
		byte[] buf = new byte[4096];
		rnd.nextBytes(buf);
		ByteBuffer bytebuf = ByteBuffer.wrap(buf);
		try (FileChannel outChannel = FileChannel.open(f,
				StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE)) {
			long len = 0;
			do {
				len += outChannel.write(bytebuf);
				if (bytebuf.position() == 4096) {
					bytebuf.rewind();
				}
			} while (len < size);
		}
		return Files.size(f);
	}
}