
ObjectUploadListener.java 5.4KB

Support LFS protocol and a file system based LFS storage

Implement LfsProtocolServlet handling the "Git LFS v1 Batch API" protocol [1]. Add a simple file system based LFS content store and the debug-lfs-store command to simplify testing. Introduce a LargeFileRepository interface to enable additional storage implementations while reusing the same protocol implementation.

On the client side we have to configure the lfs.url, specify that we use the batch API, and disable authentication:

    [lfs]
        url = http://host:port/lfs
        batch = true
    [lfs "http://host:port/lfs"]
        access = none

The git-lfs client appends "objects/batch" to the lfs.url. An Authorization header is hard coded in FileLfsRepository.getAction so that the git-lfs client skips asking for credentials; it simply forwards the Authorization header from the batch response to the download/upload request.

The FileLfsServlet supports file content storage for a "Large File Storage" (LFS) server as defined by the GitHub LFS API [2]:

- upload and download of large files is likely network bound, hence an asynchronous servlet is used for good scalability
- simple object storage in the file system with a 2-level fan-out
- LockFile protects writing large objects against multiple concurrent uploads of the same object
- to prevent corrupt uploads, the uploaded file is rejected if its hash doesn't match the id given in the URL

The debug-lfs-store command runs the LfsProtocolServlet and, optionally, the FileLfsServlet, which makes it easier to set up a local test server.

[1] https://github.com/github/git-lfs/blob/master/docs/api/http-v1-batch.md
[2] https://github.com/github/git-lfs/tree/master/docs/api

Bug: 472961
Change-Id: I7378da5575159d2195138d799704880c5c82d5f3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
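For illustration, the batch exchange handled by LfsProtocolServlet is a single JSON POST to <lfs.url>/objects/batch. Below is a minimal, hypothetical client-side sketch of such a request using only JDK classes; the host, port, oid, and size are placeholder values, and "application/vnd.git-lfs+json" is the media type documented by the LFS API (referenced in the servlet code as Constants.CONTENT_TYPE_GIT_LFS_JSON).

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Hypothetical sketch of a "Git LFS v1 Batch API" request, not part of this change. */
public class LfsBatchRequestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: the configured lfs.url plus "objects/batch".
        URL url = new URL("http://localhost:8080/lfs/objects/batch");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/vnd.git-lfs+json");
        conn.setRequestProperty("Accept", "application/vnd.git-lfs+json");

        // Batch request asking where to download one object (placeholder oid/size).
        String body = "{\"operation\":\"download\",\"objects\":[{\"oid\":\""
                + "0000000000000000000000000000000000000000000000000000000000000000"
                + "\",\"size\":12345}]}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response lists, per object, the href and headers to use for
        // the actual download or upload, which FileLfsServlet then serves.
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
        }
        System.out.println();
    }
}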
/*
 * Copyright (C) 2015, Matthias Sohn <matthias.sohn@sap.com>
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.lfs.server.fs;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.http.HttpStatus;
import org.eclipse.jgit.lfs.errors.CorruptLongObjectException;
import org.eclipse.jgit.lfs.internal.AtomicObjectOutputStream;
import org.eclipse.jgit.lfs.lib.AnyLongObjectId;
import org.eclipse.jgit.lfs.lib.Constants;

/**
 * Handle asynchronous object upload.
 *
 * @since 4.6
 */
public class ObjectUploadListener implements ReadListener {

    private static Logger LOG = Logger
            .getLogger(ObjectUploadListener.class.getName());

    private final AsyncContext context;

    private final HttpServletResponse response;

    private final ServletInputStream in;

    private final ReadableByteChannel inChannel;

    private final AtomicObjectOutputStream out;

    private WritableByteChannel channel;

    private final ByteBuffer buffer = ByteBuffer.allocateDirect(8192);

    /**
     * @param repository
     *            the repository storing large objects
     * @param context
     *            the async context of the upload request
     * @param request
     *            the HTTP request carrying the object content
     * @param response
     *            the HTTP response
     * @param id
     *            id of the object to be uploaded
     * @throws FileNotFoundException
     * @throws IOException
     */
    public ObjectUploadListener(FileLfsRepository repository,
            AsyncContext context, HttpServletRequest request,
            HttpServletResponse response, AnyLongObjectId id)
            throws FileNotFoundException, IOException {
        this.context = context;
        this.response = response;
        this.in = request.getInputStream();
        this.inChannel = Channels.newChannel(in);
        this.out = repository.getOutputStream(id);
        this.channel = Channels.newChannel(out);
        response.setContentType(Constants.CONTENT_TYPE_GIT_LFS_JSON);
    }

    /**
     * Writes all the received data to the output channel
     *
     * @throws IOException
     */
    @Override
    public void onDataAvailable() throws IOException {
        while (in.isReady()) {
            if (inChannel.read(buffer) > 0) {
                // copy the bytes read so far to the object store
                buffer.flip();
                channel.write(buffer);
                buffer.compact();
            } else {
                // end of input: drain the remaining buffered bytes
                buffer.flip();
                while (buffer.hasRemaining()) {
                    channel.write(buffer);
                }
                close();
                return;
            }
        }
    }

    /**
     * @throws IOException
     */
    @Override
    public void onAllDataRead() throws IOException {
        close();
    }

    /**
     * @throws IOException
     */
    protected void close() throws IOException {
        try {
            inChannel.close();
            channel.close();
            // TODO check if status 200 is ok for PUT request, HTTP foresees 204
            // for successful PUT without response body
            if (!response.isCommitted()) {
                response.setStatus(HttpServletResponse.SC_OK);
            }
        } finally {
            context.complete();
        }
    }

    /**
     * @param e
     *            the exception that caused the problem
     */
    @Override
    public void onError(Throwable e) {
        try {
            // discard the partially written object and close both channels
            out.abort();
            inChannel.close();
            channel.close();
            int status;
            if (e instanceof CorruptLongObjectException) {
                status = HttpStatus.SC_BAD_REQUEST;
                LOG.log(Level.WARNING, e.getMessage(), e);
            } else {
                status = HttpStatus.SC_INTERNAL_SERVER_ERROR;
                LOG.log(Level.SEVERE, e.getMessage(), e);
            }
            FileLfsServlet.sendError(response, status, e.getMessage());
        } catch (IOException ex) {
            LOG.log(Level.SEVERE, ex.getMessage(), ex);
        }
    }
}
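ObjectUploadListener is meant to be driven by the Servlet 3.1 non-blocking read API: the servlet switches the PUT request into asynchronous mode and registers the listener on the request's ServletInputStream. The following is a hypothetical wiring sketch under those assumptions; the real FileLfsServlet may differ, and startUpload and timeoutMillis are illustrative names.

package org.eclipse.jgit.lfs.server.fs;

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jgit.lfs.lib.AnyLongObjectId;

class ObjectUploadWiringSketch {
    // Hypothetical helper showing how a servlet would hand the request body
    // over to ObjectUploadListener; not the actual FileLfsServlet code.
    static void startUpload(FileLfsRepository repository,
            HttpServletRequest request, HttpServletResponse response,
            AnyLongObjectId id, long timeoutMillis) throws IOException {
        // Switch to asynchronous mode so no container thread blocks while the
        // large object body trickles in over the network.
        AsyncContext context = request.startAsync();
        context.setTimeout(timeoutMillis);
        // The container invokes onDataAvailable()/onAllDataRead()/onError()
        // on the listener as request body bytes become available.
        request.getInputStream().setReadListener(new ObjectUploadListener(
                repository, context, request, response, id));
    }
}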