
LfsProtocolServlet.java 7.6KB

Support LFS protocol and a file system based LFS storage

Implement LfsProtocolServlet handling the "Git LFS v1 Batch API"
protocol [1]. Add a simple file system based LFS content store and the
debug-lfs-store command to simplify testing.

Introduce a LargeFileRepository interface to enable additional storage
implementations while reusing the same protocol implementation.

On the client side we have to configure lfs.url, specify that we use
the batch API, and that we don't use authentication:

  [lfs]
          url = http://host:port/lfs
          batch = true
  [lfs "http://host:port/lfs"]
          access = none

The git-lfs client appends "objects/batch" to the lfs.url. Hard-code an
Authorization header in FileLfsRepository.getAction: the git-lfs client
then skips asking for credentials and simply forwards the Authorization
header from the response to the download/upload request.

The FileLfsServlet provides file content storage for a "Large File
Storage" (LFS) server as defined by the GitHub LFS API [2]:

- upload and download of large files is probably network bound, hence
  an asynchronous servlet is used for good scalability
- simple object storage in the file system with a 2-level fan-out
- LockFile is used to protect writes of large objects against multiple
  concurrent uploads of the same object
- to prevent corrupt uploads, an uploaded file is rejected if its hash
  doesn't match the id given in the URL

The debug-lfs-store command runs the LfsProtocolServlet and,
optionally, the FileLfsServlet, which makes it easier to set up a
local test server.

[1] https://github.com/github/git-lfs/blob/master/docs/api/http-v1-batch.md
[2] https://github.com/github/git-lfs/tree/master/docs/api

Bug: 472961
Change-Id: I7378da5575159d2195138d799704880c5c82d5f3
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
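The commit message above introduces the LargeFileRepository interface so other
storage backends can reuse the protocol handling in LfsProtocolServlet (source
below). A minimal sketch of that extension point follows; the class name
SingleRepositoryLfsServlet is hypothetical, and it assumes a single
pre-configured LargeFileRepository is injected rather than resolved per
request path.

import org.eclipse.jgit.lfs.errors.LfsException;
import org.eclipse.jgit.lfs.server.LargeFileRepository;
import org.eclipse.jgit.lfs.server.LfsProtocolServlet;

/** Hypothetical subclass serving one pre-configured LFS store. */
public class SingleRepositoryLfsServlet extends LfsProtocolServlet {
	private static final long serialVersionUID = 1L;

	private final LargeFileRepository repository;

	public SingleRepositoryLfsServlet(LargeFileRepository repository) {
		this.repository = repository;
	}

	@Override
	protected LargeFileRepository getLargeFileRepository(LfsRequest request,
			String path, String auth) throws LfsException {
		// A real implementation would resolve the store from the request
		// path and validate the Authorization header, throwing e.g.
		// LfsRepositoryNotFound or LfsUnauthorized where appropriate.
		return repository;
	}
}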
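The file system store described above uses a 2-level fan-out. The helper
below is only an illustration of one plausible layout keyed by the object id;
the actual FileLfsRepository may lay files out differently, and the chosen
split (first two, then next two hex digits) is an assumption.

import java.nio.file.Path;

/** Illustration of a 2-level fan-out layout for hash-addressed objects. */
final class FanOutLayout {
	private FanOutLayout() {
		// no instances
	}

	static Path pathFor(Path root, String id) {
		// Two directory levels derived from the first four hex digits of
		// the (64-character SHA-256) id keep any single directory small,
		// e.g. root/ab/cd/abcd...  for id "abcd...".
		return root.resolve(id.substring(0, 2))
				.resolve(id.substring(2, 4))
				.resolve(id);
	}
}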
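To prevent corrupt uploads, the commit rejects an uploaded file whose hash
does not match the id in the URL. The sketch below shows the general shape
of such a check with plain JDK SHA-256 hashing; it is not the FileLfsServlet
implementation, and the method name matchesExpectedId is made up for
illustration.

import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustration of verifying an upload against the id from the URL. */
final class UploadIntegrity {
	static boolean matchesExpectedId(InputStream upload, String expectedHexId)
			throws IOException, NoSuchAlgorithmException {
		MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
		try (DigestInputStream in = new DigestInputStream(upload, sha256)) {
			byte[] buffer = new byte[8192];
			while (in.read(buffer) != -1) {
				// Reading drives the digest; in a real server the bytes
				// would be written to a temporary file (under a LockFile)
				// at the same time.
			}
		}
		StringBuilder hex = new StringBuilder();
		for (byte b : sha256.digest()) {
			hex.append(String.format("%02x", b));
		}
		return hex.toString().equalsIgnoreCase(expectedHexId);
	}
}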
/*
 * Copyright (C) 2015, Sasa Zivkov <sasa.zivkov@sap.com> and others
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Distribution License v. 1.0 which is available at
 * https://www.eclipse.org/org/documents/edl-v10.php.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
package org.eclipse.jgit.lfs.server;

import static java.nio.charset.StandardCharsets.UTF_8;
import static org.apache.http.HttpStatus.SC_FORBIDDEN;
import static org.apache.http.HttpStatus.SC_INSUFFICIENT_STORAGE;
import static org.apache.http.HttpStatus.SC_INTERNAL_SERVER_ERROR;
import static org.apache.http.HttpStatus.SC_NOT_FOUND;
import static org.apache.http.HttpStatus.SC_OK;
import static org.apache.http.HttpStatus.SC_SERVICE_UNAVAILABLE;
import static org.apache.http.HttpStatus.SC_UNAUTHORIZED;
import static org.apache.http.HttpStatus.SC_UNPROCESSABLE_ENTITY;
import static org.eclipse.jgit.lfs.lib.Constants.DOWNLOAD;
import static org.eclipse.jgit.lfs.lib.Constants.UPLOAD;
import static org.eclipse.jgit.lfs.lib.Constants.VERIFY;
import static org.eclipse.jgit.util.HttpSupport.HDR_AUTHORIZATION;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.text.MessageFormat;
import java.util.List;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jgit.lfs.errors.LfsBandwidthLimitExceeded;
import org.eclipse.jgit.lfs.errors.LfsException;
import org.eclipse.jgit.lfs.errors.LfsInsufficientStorage;
import org.eclipse.jgit.lfs.errors.LfsRateLimitExceeded;
import org.eclipse.jgit.lfs.errors.LfsRepositoryNotFound;
import org.eclipse.jgit.lfs.errors.LfsRepositoryReadOnly;
import org.eclipse.jgit.lfs.errors.LfsUnauthorized;
import org.eclipse.jgit.lfs.errors.LfsUnavailable;
import org.eclipse.jgit.lfs.errors.LfsValidationError;
import org.eclipse.jgit.lfs.internal.LfsText;
import org.eclipse.jgit.lfs.server.internal.LfsGson;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * LFS protocol handler implementing the LFS batch API [1]
 *
 * [1] https://github.com/github/git-lfs/blob/master/docs/api/v1/http-v1-batch.md
 *
 * @since 4.3
 */
public abstract class LfsProtocolServlet extends HttpServlet {
	private static final Logger LOG = LoggerFactory
			.getLogger(LfsProtocolServlet.class);

	private static final long serialVersionUID = 1L;

	private static final String CONTENTTYPE_VND_GIT_LFS_JSON =
			"application/vnd.git-lfs+json; charset=utf-8"; //$NON-NLS-1$

	private static final int SC_RATE_LIMIT_EXCEEDED = 429;

	private static final int SC_BANDWIDTH_LIMIT_EXCEEDED = 509;

	/**
	 * Get the large file repository for the given request and path.
	 *
	 * @param request
	 *            the request
	 * @param path
	 *            the path
	 * @param auth
	 *            the Authorization HTTP header
	 * @return the large file repository storing large files.
	 * @throws org.eclipse.jgit.lfs.errors.LfsException
	 *             implementations should throw more specific exceptions to
	 *             signal which type of error occurred:
	 *             <dl>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsValidationError}</dt>
	 *             <dd>when there is a validation error with one or more of the
	 *             objects in the request</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsRepositoryNotFound}</dt>
	 *             <dd>when the repository does not exist for the user</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsRepositoryReadOnly}</dt>
	 *             <dd>when the user has read, but not write access. Only
	 *             applicable when the operation in the request is "upload"</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsRateLimitExceeded}</dt>
	 *             <dd>when the user has hit a rate limit with the server</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsBandwidthLimitExceeded}</dt>
	 *             <dd>when the bandwidth limit for the user or repository has
	 *             been exceeded</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsInsufficientStorage}</dt>
	 *             <dd>when there is insufficient storage on the server</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsUnavailable}</dt>
	 *             <dd>when LFS is not available</dd>
	 *             <dt>{@link org.eclipse.jgit.lfs.errors.LfsException}</dt>
	 *             <dd>when an unexpected internal server error occurred</dd>
	 *             </dl>
	 * @since 4.7
	 */
	protected abstract LargeFileRepository getLargeFileRepository(
			LfsRequest request, String path, String auth) throws LfsException;

	/**
	 * LFS request.
	 *
	 * @since 4.5
	 */
	protected static class LfsRequest {
		private String operation;

		private List<LfsObject> objects;

		/**
		 * Get the LFS operation.
		 *
		 * @return the operation
		 */
		public String getOperation() {
			return operation;
		}

		/**
		 * Get the LFS objects.
		 *
		 * @return the objects
		 */
		public List<LfsObject> getObjects() {
			return objects;
		}

		/**
		 * @return true if the operation is upload.
		 * @since 4.7
		 */
		public boolean isUpload() {
			return operation.equals(UPLOAD);
		}

		/**
		 * @return true if the operation is download.
		 * @since 4.7
		 */
		public boolean isDownload() {
			return operation.equals(DOWNLOAD);
		}

		/**
		 * @return true if the operation is verify.
		 * @since 4.7
		 */
		public boolean isVerify() {
			return operation.equals(VERIFY);
		}
	}

	/** {@inheritDoc} */
	@Override
	protected void doPost(HttpServletRequest req, HttpServletResponse res)
			throws ServletException, IOException {
		Writer w = new BufferedWriter(
				new OutputStreamWriter(res.getOutputStream(), UTF_8));
		Reader r = new BufferedReader(
				new InputStreamReader(req.getInputStream(), UTF_8));
		LfsRequest request = LfsGson.fromJson(r, LfsRequest.class);
		String path = req.getPathInfo();

		res.setContentType(CONTENTTYPE_VND_GIT_LFS_JSON);
		LargeFileRepository repo = null;
		try {
			repo = getLargeFileRepository(request, path,
					req.getHeader(HDR_AUTHORIZATION));
			if (repo == null) {
				String error = MessageFormat
						.format(LfsText.get().lfsFailedToGetRepository, path);
				LOG.error(error);
				throw new LfsException(error);
			}
			res.setStatus(SC_OK);
			TransferHandler handler = TransferHandler
					.forOperation(request.operation, repo, request.objects);
			LfsGson.toJson(handler.process(), w);
		} catch (LfsValidationError e) {
			sendError(res, w, SC_UNPROCESSABLE_ENTITY, e.getMessage());
		} catch (LfsRepositoryNotFound e) {
			sendError(res, w, SC_NOT_FOUND, e.getMessage());
		} catch (LfsRepositoryReadOnly e) {
			sendError(res, w, SC_FORBIDDEN, e.getMessage());
		} catch (LfsRateLimitExceeded e) {
			sendError(res, w, SC_RATE_LIMIT_EXCEEDED, e.getMessage());
		} catch (LfsBandwidthLimitExceeded e) {
			sendError(res, w, SC_BANDWIDTH_LIMIT_EXCEEDED, e.getMessage());
		} catch (LfsInsufficientStorage e) {
			sendError(res, w, SC_INSUFFICIENT_STORAGE, e.getMessage());
		} catch (LfsUnavailable e) {
			sendError(res, w, SC_SERVICE_UNAVAILABLE, e.getMessage());
		} catch (LfsUnauthorized e) {
			sendError(res, w, SC_UNAUTHORIZED, e.getMessage());
		} catch (LfsException e) {
			sendError(res, w, SC_INTERNAL_SERVER_ERROR, e.getMessage());
		} finally {
			w.flush();
		}
	}

	private void sendError(HttpServletResponse rsp, Writer writer, int status,
			String message) {
		rsp.setStatus(status);
		LfsGson.toJson(message, writer);
	}
}