
ObjectUploadListener.java 6.7KB

Support LFS protocol and a file system based LFS storage

Implement LfsProtocolServlet handling the "Git LFS v1 Batch API" protocol [1]. Add a simple file system based LFS content store and the debug-lfs-store command to simplify testing. Introduce a LargeFileRepository interface to enable additional storage implementations while reusing the same protocol implementation.

On the client side we have to configure lfs.url, specify that we use the batch API, and disable authentication:

[lfs]
        url = http://host:port/lfs
        batch = true
[lfs "http://host:port/lfs"]
        access = none

The git-lfs client appends "objects/batch" to the lfs.url. Hard-code an Authorization header in FileLfsRepository.getAction so that the git-lfs client skips asking for credentials; it simply forwards the Authorization header from the response to the download/upload request.

The FileLfsServlet supports file content storage for a "Large File Storage" (LFS) server as defined by the GitHub LFS API [2]:

- upload and download of large files is probably network bound, hence an asynchronous servlet is used for good scalability
- simple object storage in the file system with a 2-level fan-out (see the sketch below)
- LockFile protects writing of large objects against multiple concurrent uploads of the same object
- to prevent corrupt uploads, the uploaded file is rejected if its hash doesn't match the id given in the URL

The debug-lfs-store command runs the LfsProtocolServlet and, optionally, the FileLfsServlet, which makes it easier to set up a local test server.

[1] https://github.com/github/git-lfs/blob/master/docs/api/http-v1-batch.md
[2] https://github.com/github/git-lfs/tree/master/docs/api

Bug: 472961
Change-Id: I7378da5575159d2195138d799704880c5c82d5f3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
8 years ago
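The "2 level fan-out" mentioned in the commit message can be pictured with a short sketch. This is a minimal illustration only, assuming objects are keyed by their 64-character SHA-256 hex id; the class name, the root directory and the exact layout are assumptions for illustration, not the actual FileLfsRepository code.

import java.nio.file.Path;

class FanOutLayoutSketch {
	// Map an object id "aabbcc..." to <root>/aa/bb/aabbcc... so that no
	// single directory has to hold all stored objects.
	static Path resolve(Path root, String hexId) {
		return root.resolve(hexId.substring(0, 2))
				.resolve(hexId.substring(2, 4))
				.resolve(hexId);
	}
}

For example, an object whose id starts with "1f86" would land under <root>/1f/86/.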
/*
 * Copyright (C) 2015, Matthias Sohn <matthias.sohn@sap.com>
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.lfs.server.fs;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.http.HttpStatus;
import org.eclipse.jgit.lfs.errors.CorruptLongObjectException;
import org.eclipse.jgit.lfs.internal.AtomicObjectOutputStream;
import org.eclipse.jgit.lfs.lib.AnyLongObjectId;
import org.eclipse.jgit.lfs.lib.Constants;

/**
 * Handle asynchronous object upload.
 *
 * @since 4.6
 */
public class ObjectUploadListener implements ReadListener {

	private static final Logger LOG = Logger
			.getLogger(ObjectUploadListener.class.getName());

	private final AsyncContext context;

	private final HttpServletResponse response;

	private final ServletInputStream in;

	private final ReadableByteChannel inChannel;

	private final AtomicObjectOutputStream out;

	private WritableByteChannel channel;

	private final ByteBuffer buffer = ByteBuffer.allocateDirect(8192);

	private final Path path;

	private long uploaded;

	private Callback callback;

	/**
	 * Callback invoked after object upload completed.
	 *
	 * @since 5.1.7
	 */
	public interface Callback {
		/**
		 * Notified after object upload completed.
		 *
		 * @param path
		 *            path to the object on the backend
		 * @param size
		 *            uploaded size in bytes
		 */
		void uploadCompleted(String path, long size);
	}

	/**
	 * Constructor for ObjectUploadListener.
	 *
	 * @param repository
	 *            the repository storing large objects
	 * @param context
	 *            a {@link javax.servlet.AsyncContext} object.
	 * @param request
	 *            a {@link javax.servlet.http.HttpServletRequest} object.
	 * @param response
	 *            a {@link javax.servlet.http.HttpServletResponse} object.
	 * @param id
	 *            a {@link org.eclipse.jgit.lfs.lib.AnyLongObjectId} object.
	 * @throws java.io.FileNotFoundException
	 * @throws java.io.IOException
	 */
	public ObjectUploadListener(FileLfsRepository repository,
			AsyncContext context, HttpServletRequest request,
			HttpServletResponse response, AnyLongObjectId id)
			throws FileNotFoundException, IOException {
		this.context = context;
		this.response = response;
		this.in = request.getInputStream();
		this.inChannel = Channels.newChannel(in);
		this.out = repository.getOutputStream(id);
		this.channel = Channels.newChannel(out);
		this.path = repository.getPath(id);
		this.uploaded = 0L;
		response.setContentType(Constants.CONTENT_TYPE_GIT_LFS_JSON);
	}

	/**
	 * Set the callback to invoke after upload completed.
	 *
	 * @param callback
	 *            the callback
	 * @return {@code this}.
	 * @since 5.1.7
	 */
	public ObjectUploadListener setCallback(Callback callback) {
		this.callback = callback;
		return this;
	}

	/**
	 * {@inheritDoc}
	 *
	 * Writes all the received data to the output channel
	 */
	@Override
	public void onDataAvailable() throws IOException {
		while (in.isReady()) {
			if (inChannel.read(buffer) > 0) {
				buffer.flip();
				uploaded += Integer.valueOf(channel.write(buffer)).longValue();
				buffer.compact();
			} else {
				buffer.flip();
				while (buffer.hasRemaining()) {
					uploaded += Integer.valueOf(channel.write(buffer))
							.longValue();
				}
				close();
				return;
			}
		}
	}

	/** {@inheritDoc} */
	@Override
	public void onAllDataRead() throws IOException {
		close();
	}

	/**
	 * Close resources held by this listener
	 *
	 * @throws java.io.IOException
	 */
	protected void close() throws IOException {
		try {
			inChannel.close();
			channel.close();
			// TODO check if status 200 is ok for PUT request, HTTP foresees 204
			// for successful PUT without response body
			if (!response.isCommitted()) {
				response.setStatus(HttpServletResponse.SC_OK);
			}
			if (callback != null) {
				callback.uploadCompleted(path.toString(), uploaded);
			}
		} finally {
			context.complete();
		}
	}

	/** {@inheritDoc} */
	@Override
	public void onError(Throwable e) {
		try {
			out.abort();
			inChannel.close();
			channel.close();
			int status;
			if (e instanceof CorruptLongObjectException) {
				status = HttpStatus.SC_BAD_REQUEST;
				LOG.log(Level.WARNING, e.getMessage(), e);
			} else {
				status = HttpStatus.SC_INTERNAL_SERVER_ERROR;
				LOG.log(Level.SEVERE, e.getMessage(), e);
			}
			FileLfsServlet.sendError(response, status, e.getMessage());
		} catch (IOException ex) {
			LOG.log(Level.SEVERE, ex.getMessage(), ex);
		}
	}
}
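For context, here is a hedged sketch of how a Servlet 3.1 container might drive this listener from an asynchronous PUT handler. In JGit this wiring is done by FileLfsServlet; the servlet class below, its registration, the injected repository field, and the way the object id is parsed from the request path are assumptions for illustration, not the actual FileLfsServlet code.

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jgit.lfs.lib.AnyLongObjectId;
import org.eclipse.jgit.lfs.lib.LongObjectId;
import org.eclipse.jgit.lfs.server.fs.FileLfsRepository;
import org.eclipse.jgit.lfs.server.fs.ObjectUploadListener;

/**
 * Hypothetical upload servlet showing the asynchronous wiring; it must be
 * registered with asyncSupported=true. FileLfsServlet plays this role in
 * JGit, but its actual code differs.
 */
public class UploadSketchServlet extends HttpServlet {

	private static final long serialVersionUID = 1L;

	private final FileLfsRepository repository; // assumed to be supplied by the caller

	public UploadSketchServlet(FileLfsRepository repository) {
		this.repository = repository;
	}

	@Override
	protected void doPut(HttpServletRequest req, HttpServletResponse rsp)
			throws IOException {
		// Assume the last path segment is the 64-character SHA-256 hex id.
		String path = req.getPathInfo();
		AnyLongObjectId id = LongObjectId
				.fromString(path.substring(path.lastIndexOf('/') + 1));

		// Switch the request to asynchronous mode; large uploads may exceed
		// the default timeout, so disable it.
		AsyncContext context = req.startAsync();
		context.setTimeout(0);

		// The container now invokes onDataAvailable()/onAllDataRead()/onError()
		// on the listener whenever request data can be read without blocking.
		req.getInputStream().setReadListener(
				new ObjectUploadListener(repository, context, req, rsp, id));
	}
}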