author     Julius Härtl <jus@bitgrid.net>                2021-05-06 18:26:42 +0200
committer  Arthur Schiwon <blizzz@arthur-schiwon.de>     2023-03-09 15:31:12 +0100
commit     2664036b57cf807180376d3365e614b9d90f292f (patch)
tree       bfac8c903f5f01b9d78b00d47ef2fe4b53f70439 /build/integration/features/webdav-related.feature
parent     5943d0a715d0da1608cd5c84f53aff3f3e0d01ac (diff)
feat(s3): Use multipart upload for chunked uploading
This allows streaming file chunks directly to S3 during upload.

Signed-off-by: Julius Härtl <jus@bitgrid.net>
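For context, S3 multipart uploading works in three steps: create an upload, stream each chunk as a numbered part, and complete the upload from the collected part numbers and ETags. The sketch below only illustrates that flow; it uses Python/boto3 with hypothetical bucket and key names, not Nextcloud's actual PHP object-store code.

# Illustrative sketch of the S3 multipart upload flow, not Nextcloud code.
# Bucket, key, credentials and chunk contents are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "nextcloud-data", "urn:oid:1234"
chunks = [b"a" * 5 * 1024 * 1024, b"b" * 5 * 1024 * 1024, b"c" * 5 * 1024 * 1024]

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
for number, chunk in enumerate(chunks, start=1):
    part = s3.upload_part(Bucket=bucket, Key=key, PartNumber=number,
                          UploadId=mpu["UploadId"], Body=chunk)
    parts.append({"PartNumber": number, "ETag": part["ETag"]})

# S3 requires every part except the last to be at least 5 MiB; smaller parts
# make the completion fail, which the "too low chunk sizes" scenario in the
# diff below expects the server to surface as a 500.
s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                             MultipartUpload={"Parts": parts})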
Diffstat (limited to 'build/integration/features/webdav-related.feature')
-rw-r--r--    build/integration/features/webdav-related.feature    104
1 file changed, 100 insertions(+), 4 deletions(-)
diff --git a/build/integration/features/webdav-related.feature b/build/integration/features/webdav-related.feature
index 21e195af115..f63ee24527f 100644
--- a/build/integration/features/webdav-related.feature
+++ b/build/integration/features/webdav-related.feature
@@ -191,10 +191,10 @@ Feature: webdav-related
And As an "user1"
And user "user1" created a folder "/testquota"
And as "user1" creating a share with
- | path | testquota |
- | shareType | 0 |
- | permissions | 31 |
- | shareWith | user0 |
+ | path | testquota |
+ | shareType | 0 |
+ | permissions | 31 |
+ | shareWith | user0 |
And user "user0" accepts last share
And As an "user0"
When User "user0" uploads file "data/textfile.txt" to "/testquota/asdf.txt"
@@ -630,3 +630,99 @@ Feature: webdav-related
And As an "user1"
And user "user1" created a folder "/testshare "
Then the HTTP status code should be "400"
+
+ @s3-multipart
+ Scenario: Upload chunked file asc with new chunking v2
+ Given using new dav path
+ And user "user0" exists
+ And user "user0" creates a file locally with "3" x 5 MB chunks
+ And user "user0" creates a new chunking v2 upload with id "chunking-42" and destination "/myChunkedFile1.txt"
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-42"
+ And user "user0" moves new chunk v2 file with id "chunking-42"
+ Then the S3 multipart upload was successful with status "201"
+ When As an "user0"
+ And Downloading file "/myChunkedFile1.txt"
+ Then Downloaded content should be the created file
+
+ @s3-multipart
+ Scenario: Upload chunked file desc with new chunking v2
+ Given using new dav path
+ And user "user0" exists
+ And user "user0" creates a file locally with "3" x 5 MB chunks
+ And user "user0" creates a new chunking v2 upload with id "chunking-42" and destination "/myChunkedFile.txt"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-42"
+ And user "user0" moves new chunk v2 file with id "chunking-42"
+ Then the S3 multipart upload was successful with status "201"
+ When As an "user0"
+ And Downloading file "/myChunkedFile.txt"
+ Then Downloaded content should be the created file
+
+ @s3-multipart
+ Scenario: Upload chunked file with random chunk sizes
+ Given using new dav path
+ And user "user0" exists
+ And user "user0" creates a new chunking v2 upload with id "chunking-random" and destination "/myChunkedFile.txt"
+ And user user0 creates the chunk 1 with a size of 5 MB
+ And user user0 creates the chunk 2 with a size of 7 MB
+ And user user0 creates the chunk 3 with a size of 9 MB
+ And user user0 creates the chunk 4 with a size of 1 MB
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "4" to id "chunking-random"
+ And user "user0" moves new chunk v2 file with id "chunking-random"
+ Then the S3 multipart upload was successful with status "201"
+ When As an "user0"
+ And Downloading file "/myChunkedFile.txt"
+ Then Downloaded content should be the created file
+
+ @s3-multipart
+ Scenario: Upload chunked file with too low chunk sizes
+ Given using new dav path
+ And user "user0" exists
+ And user "user0" creates a new chunking v2 upload with id "chunking-random" and destination "/myChunkedFile.txt"
+ And user user0 creates the chunk 1 with a size of 5 MB
+ And user user0 creates the chunk 2 with a size of 2 MB
+ And user user0 creates the chunk 3 with a size of 5 MB
+ And user user0 creates the chunk 4 with a size of 1 MB
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-random"
+ And user "user0" uploads new chunk v2 file "4" to id "chunking-random"
+ And user "user0" moves new chunk v2 file with id "chunking-random"
+ Then the HTTP status code should be "500"
+
+ @s3-multipart
+ Scenario: Upload chunked file with special characters with new chunking v2
+ Given using new dav path
+ And user "user0" exists
+ And user "user0" creates a file locally with "3" x 5 MB chunks
+ And user "user0" creates a new chunking v2 upload with id "chunking-42" and destination "/äöü.txt"
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-42"
+ And user "user0" moves new chunk v2 file with id "chunking-42"
+ Then the S3 multipart upload was successful with status "201"
+ When As an "user0"
+ And Downloading file "/äöü.txt"
+ Then Downloaded content should be the created file
+
+ @s3-multipart
+ Scenario: Upload chunked file with special characters in path with new chunking v2
+ Given using new dav path
+ And user "user0" exists
+ And User "user0" created a folder "üäöé"
+ And user "user0" creates a file locally with "3" x 5 MB chunks
+ And user "user0" creates a new chunking v2 upload with id "chunking-42" and destination "/üäöé/äöü.txt"
+ And user "user0" uploads new chunk v2 file "1" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "2" to id "chunking-42"
+ And user "user0" uploads new chunk v2 file "3" to id "chunking-42"
+ And user "user0" moves new chunk v2 file with id "chunking-42"
+ Then the S3 multipart upload was successful with status "201"
+ When As an "user0"
+ And Downloading file "/üäöé/äöü.txt"
+ Then Downloaded content should be the created file
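The scenarios above exercise the chunked upload v2 flow from the WebDAV client's point of view: create an upload collection that already names the final destination, PUT numbered chunks in any order, then MOVE the virtual ".file" onto the destination so the server assembles the parts (on S3 via the multipart upload described in the commit message). A minimal client-side sketch of that flow, with a hypothetical server URL, credentials, and upload id:

# Minimal sketch of the chunked upload v2 client flow the scenarios exercise.
# Server URL, credentials, upload id and chunk contents are hypothetical.
import requests

base = "https://cloud.example.com/remote.php/dav"
auth = ("user0", "secret")
upload_url = f"{base}/uploads/user0/chunking-42"
destination = f"{base}/files/user0/myChunkedFile1.txt"
chunks = [b"a" * 5 * 1024 * 1024, b"b" * 5 * 1024 * 1024, b"c" * 5 * 1024 * 1024]

# 1. Create the upload collection; v2 announces the final target up front.
requests.request("MKCOL", upload_url, auth=auth,
                 headers={"Destination": destination})

# 2. Upload the numbered chunks; order does not matter, as the "desc" and
#    "random chunk sizes" scenarios verify.
for number, chunk in enumerate(chunks, start=1):
    requests.put(f"{upload_url}/{number}", data=chunk, auth=auth,
                 headers={"Destination": destination})

# 3. Finalize by moving the virtual ".file" onto the destination; the server
#    then completes the S3 multipart upload and answers 201.
requests.request("MOVE", f"{upload_url}/.file", auth=auth,
                 headers={"Destination": destination})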