
Don't use interruptible pread() to access pack files

The J2SE NIO APIs require that FileChannel close the underlying file descriptor if a thread is interrupted while it is inside a read or write operation on that channel. This is insane, because it means we cannot share the file descriptor between threads. If a thread is in the middle of the FileChannel variant of IO.readFully() and it receives an interrupt, the pack will be automatically closed on us. This causes the other threads trying to use that same FileChannel to receive IOExceptions, which leads to the pack getting marked as invalid. Once the pack is marked invalid, JGit loses access to its entire contents and starts to report MissingObjectExceptions.

Because PackWriter must ensure that the chosen pack file stays available until the current object's data is fully copied to the output, JGit cannot simply reopen the pack when it is automatically closed due to an interrupt being sent at the wrong time. The pack may have been deleted by a concurrent `git gc` process, and that open file descriptor might be the last reference to the inode on disk. Once it is closed, the PackWriter loses access to that object representation, and it cannot complete sending the object to the client.

Fortunately, RandomAccessFile's readFully method does not have this problem. Interrupts during readFully() are ignored. However, it requires us to first seek to the offset we need to read, then issue the read call. This requires locking around the file descriptor to prevent concurrent threads from moving the pointer before the read.

This reduces the concurrency level, as now only one window can be paged in at a time from each pack. However, the WindowCache should already be holding most of the pages required to handle the working set for a process, and its own internal locking was already limiting the number of concurrent loads possible. Provided that most concurrent accesses are getting hits in the WindowCache, or are for different repositories on the same server, we shouldn't see a major performance hit due to the more serialized loading.

I would have preferred to use a pool of RandomAccessFiles for each pack, with threads borrowing an instance dedicated to that thread whenever they needed to page in a window. This would permit much higher levels of concurrency by using multiple file descriptors (and file pointers) for each pack. However, the code became too complex to develop in any reasonable period of time, so I've chosen to retrofit the existing code with more serialization instead.

Bug: 308945
Change-Id: I2e6e11c6e5a105e5aef68871b66200fd725134c9
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
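The retrofit described above comes down to guarding the shared file pointer. A minimal sketch of that pattern — class and method names here are hypothetical illustrations, not JGit's actual WindowCache code:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: serialize seek+read on one shared RandomAccessFile,
// the strategy the commit message describes.
public class SerializedPackRead {
	private final RandomAccessFile file;

	SerializedPackRead(RandomAccessFile file) {
		this.file = file;
	}

	// Lock around the descriptor so no other thread can move the shared
	// file pointer between our seek() and readFully(). Unlike
	// FileChannel.read(), RandomAccessFile.readFully() ignores thread
	// interrupts rather than closing the descriptor.
	byte[] readWindow(long offset, int size) throws IOException {
		byte[] buf = new byte[size];
		synchronized (file) {
			file.seek(offset);
			file.readFully(buf, 0, size);
		}
		return buf;
	}

	public static void main(String[] args) throws IOException {
		Path tmp = Files.createTempFile("pack", ".bin");
		Files.write(tmp, "0123456789".getBytes(StandardCharsets.US_ASCII));
		try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "r")) {
			SerializedPackRead reader = new SerializedPackRead(raf);
			System.out.println(new String(reader.readWindow(3, 4),
					StandardCharsets.US_ASCII)); // prints "3456"
		} finally {
			Files.delete(tmp);
		}
	}
}
```

The pooled-RandomAccessFile alternative the author mentions would replace the single `synchronized` block with a borrow/return cycle against a per-pack pool, trading descriptor count for concurrency.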
14 years ago
/*
 * Copyright (C) 2009-2010, Google Inc. and others
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Distribution License v. 1.0 which is available at
 * https://www.eclipse.org/org/documents/edl-v10.php.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */

package org.eclipse.jgit.http.server;

import static javax.servlet.http.HttpServletResponse.SC_PARTIAL_CONTENT;
import static javax.servlet.http.HttpServletResponse.SC_REQUESTED_RANGE_NOT_SATISFIABLE;
import static org.eclipse.jgit.util.HttpSupport.HDR_ACCEPT_RANGES;
import static org.eclipse.jgit.util.HttpSupport.HDR_CONTENT_LENGTH;
import static org.eclipse.jgit.util.HttpSupport.HDR_CONTENT_RANGE;
import static org.eclipse.jgit.util.HttpSupport.HDR_IF_RANGE;
import static org.eclipse.jgit.util.HttpSupport.HDR_RANGE;

import java.io.EOFException;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.text.MessageFormat;
import java.time.Instant;
import java.util.Enumeration;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.util.FS;

/**
 * Dumps a file over HTTP GET (or its information via HEAD).
 * <p>
 * Supports a single byte range requested via {@code Range} HTTP header. This
 * feature supports a dumb client to resume download of a larger object file.
 */
final class FileSender {
	private final File path;
	private final RandomAccessFile source;
	private final Instant lastModified;
	private final long fileLen;

	private long pos;
	private long end;

	FileSender(File path) throws FileNotFoundException {
		this.path = path;
		this.source = new RandomAccessFile(path, "r");

		try {
			this.lastModified = FS.DETECTED.lastModifiedInstant(path);
			this.fileLen = source.getChannel().size();
			this.end = fileLen;
		} catch (IOException e) {
			try {
				source.close();
			} catch (IOException closeError) {
				// Ignore any error closing the stream.
			}

			final FileNotFoundException r;
			r = new FileNotFoundException(MessageFormat
					.format(HttpServerText.get().cannotGetLengthOf, path));
			r.initCause(e);
			throw r;
		}
	}

	void close() {
		try {
			source.close();
		} catch (IOException e) {
			// Ignore close errors on a read-only stream.
		}
	}

	Instant getLastModified() {
		return lastModified;
	}

	String getTailChecksum() throws IOException {
		final int n = 20;
		final byte[] buf = new byte[n];
		source.seek(fileLen - n);
		source.readFully(buf, 0, n);
		return ObjectId.fromRaw(buf).getName();
	}

	void serve(final HttpServletRequest req, final HttpServletResponse rsp,
			final boolean sendBody) throws IOException {
		if (!initRangeRequest(req, rsp)) {
			rsp.sendError(SC_REQUESTED_RANGE_NOT_SATISFIABLE);
			return;
		}

		rsp.setHeader(HDR_ACCEPT_RANGES, "bytes");
		rsp.setHeader(HDR_CONTENT_LENGTH, Long.toString(end - pos));

		if (sendBody) {
			try (OutputStream out = rsp.getOutputStream()) {
				final byte[] buf = new byte[4096];
				source.seek(pos);
				while (pos < end) {
					final int r = (int) Math.min(buf.length, end - pos);
					final int n = source.read(buf, 0, r);
					if (n < 0) {
						throw new EOFException(MessageFormat.format(
								HttpServerText.get().unexpectedeOFOn, path));
					}
					out.write(buf, 0, n);
					pos += n;
				}
				out.flush();
			}
		}
	}

	private boolean initRangeRequest(final HttpServletRequest req,
			final HttpServletResponse rsp) throws IOException {
		final Enumeration<String> rangeHeaders = getRange(req);
		if (!rangeHeaders.hasMoreElements()) {
			// No range headers, the request is fine.
			return true;
		}

		final String range = rangeHeaders.nextElement();
		if (rangeHeaders.hasMoreElements()) {
			// To simplify the code we support only one range.
			return false;
		}

		final int eq = range.indexOf('=');
		final int dash = range.indexOf('-');
		if (eq < 0 || dash < 0 || !range.startsWith("bytes=")) {
			return false;
		}

		final String ifRange = req.getHeader(HDR_IF_RANGE);
		if (ifRange != null && !getTailChecksum().equals(ifRange)) {
			// If the client asked us to verify the ETag and it's not
			// what they expected we need to send the entire content.
			return true;
		}

		try {
			if (eq + 1 == dash) {
				// "bytes=-500" means last 500 bytes
				pos = Long.parseLong(range.substring(dash + 1));
				pos = fileLen - pos;
			} else {
				// "bytes=500-" (position 500 to end)
				// "bytes=500-1000" (position 500 to 1000)
				pos = Long.parseLong(range.substring(eq + 1, dash));
				if (dash < range.length() - 1) {
					end = Long.parseLong(range.substring(dash + 1));
					end++; // range was inclusive, want exclusive
				}
			}
		} catch (NumberFormatException e) {
			// We probably hit here because of a non-digit such as
			// "," appearing at the end of the first range telling
			// us there is a second range following. To simplify
			// the code we support only one range.
			return false;
		}

		if (end > fileLen) {
			end = fileLen;
		}
		if (pos >= end) {
			return false;
		}

		rsp.setStatus(SC_PARTIAL_CONTENT);
		rsp.setHeader(HDR_CONTENT_RANGE,
				"bytes " + pos + "-" + (end - 1) + "/" + fileLen);
		source.seek(pos);
		return true;
	}

	private static Enumeration<String> getRange(HttpServletRequest req) {
		return req.getHeaders(HDR_RANGE);
	}
}
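The initRangeRequest() method handles the three single-range forms of the Range header and keeps [pos, end) as a half-open interval. The same parsing rules can be isolated in a standalone sketch (a hypothetical helper for illustration, not part of JGit), which may make the pos/end bookkeeping easier to follow:

```java
// Sketch mirroring initRangeRequest()'s parsing: returns {pos, end}
// with end exclusive, or null when the range is unusable.
public class RangeParse {
	static long[] parse(String range, long fileLen) {
		int eq = range.indexOf('=');
		int dash = range.indexOf('-');
		if (eq < 0 || dash < 0 || !range.startsWith("bytes="))
			return null;
		long pos;
		long end = fileLen;
		try {
			if (eq + 1 == dash) {
				// "bytes=-500": the final 500 bytes of the file.
				pos = fileLen - Long.parseLong(range.substring(dash + 1));
			} else {
				// "bytes=500-" and "bytes=500-999".
				pos = Long.parseLong(range.substring(eq + 1, dash));
				if (dash < range.length() - 1) {
					// Last-byte position is inclusive; make it exclusive.
					end = Long.parseLong(range.substring(dash + 1)) + 1;
				}
			}
		} catch (NumberFormatException e) {
			// E.g. a multi-range "bytes=0-5,10-15": only one range
			// is supported, so reject it.
			return null;
		}
		if (end > fileLen)
			end = fileLen;
		return pos >= end ? null : new long[] { pos, end };
	}

	public static void main(String[] args) {
		long[] r = parse("bytes=500-999", 2000);
		System.out.println(r[0] + ".." + r[1]); // prints "500..1000"
		System.out.println(parse("bytes=-500", 2000)[0]); // prints "1500"
		System.out.println(parse("bytes=0-5,10-15", 2000)); // prints "null"
	}
}
```

A null result corresponds to FileSender answering 416 (Requested Range Not Satisfiable), while a successful parse feeds the Content-Range header, e.g. "bytes 500-999/2000".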