
Rewrite reference handling to be abstract and accurate

This commit actually does three major changes to the way references
are handled within JGit. Unfortunately they were easier to do as a
single massive commit than to break them up into smaller units.

Disambiguate symbolic references:
---------------------------------

Reporting a symbolic reference such as HEAD as though it were any
other normal reference like refs/heads/master causes subtle
programming errors. We have been bitten by this error on several
occasions, as have some downstream applications written by myself.

Instead of reporting HEAD as a reference whose name differs from its
"original name", report it as an actual SymbolicRef object whose type
the application can test and whose target it can examine.

With this change, Ref is now an abstract type with different
subclasses for the different types.

In the classical example of "HEAD" being a symbolic reference to
branch "refs/heads/master", the Repository.getAllRefs() method will
now return:

    Map<String, Ref> all = repository.getAllRefs();
    SymbolicRef HEAD = (SymbolicRef) all.get("HEAD");
    ObjectIdRef master = (ObjectIdRef) all.get("refs/heads/master");

    assertSame(master, HEAD.getTarget());
    assertSame(master.getObjectId(), HEAD.getObjectId());

    assertEquals("HEAD", HEAD.getName());
    assertEquals("refs/heads/master", master.getName());

A nice side-effect of this change is that the storage type of the
symbolic reference is no longer ambiguous with the storage type of
the underlying reference it targets. In the above example, if master
was only available in the packed-refs file, then the following is
also true:

    assertSame(Ref.Storage.LOOSE, HEAD.getStorage());
    assertSame(Ref.Storage.PACKED, master.getStorage());

(Prior to this change we returned the ambiguous storage of
LOOSE_PACKED for HEAD, which was confusing since it wasn't actually
true on disk.)

Another nice side-effect of this change is that all intermediate
symbolic references are preserved, and are therefore visible to the
application when it walks the target chain. We can now correctly
inspect chains of symbolic references.

As a result of this change the Ref.getOrigName() method has been
removed from the API. Applications should identify a symbolic
reference by testing isSymbolic() and not by using an arcane string
comparison between properties.

Abstract the RefDatabase storage:
---------------------------------

RefDatabase is now abstract, similar to ObjectDatabase, and a new
concrete implementation called RefDirectory is used for the
traditional on-disk storage layout. In the future we plan to support
additional implementations, such as a pure in-memory RefDatabase for
unit testing purposes.

Optimize RefDirectory:
----------------------

The implementation of the in-memory reference cache, reading, and
update routines has been completely rewritten. Much of the code was
heavily borrowed or cribbed from the prior implementation, so
copyright notices have been left intact as much as possible.

The RefDirectory cache no longer confuses symbolic references with
normal references. This permits the cache to resolve the value of a
symbolic reference as late as possible, ensuring it is always
current, without needing to maintain reverse pointers.

The cache is now 2 sorted RefLists, rather than 3 HashMaps. Using
sorted lists allows the implementation to reduce the in-memory
footprint when storing many refs. Using specialized types for the
elements allows the code to avoid additional map lookups for
auxiliary stat information.

To improve scan time during getRefs(), the lists are returned via a
copy-on-write contract. Most callers of getRefs() do not modify the
returned collections, so the copy-on-write semantics improves access
on repositories with a large number of packed references.
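The copy-on-write contract described above can be sketched roughly as
follows. This is an illustrative, hypothetical cache (the class and
method names are invented for this sketch), not JGit's actual RefList
implementation: readers take an immutable snapshot without locking,
while writers copy, modify, and atomically publish a new snapshot.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical sketch of a copy-on-write sorted ref-name cache. */
class CowRefCache {
	private final AtomicReference<List<String>> snapshot =
			new AtomicReference<>(Collections.<String> emptyList());

	/** Readers receive an immutable snapshot; no locking required. */
	List<String> getRefs() {
		return snapshot.get();
	}

	/** Writers copy, modify, then atomically publish a new snapshot. */
	void putRef(String name) {
		for (;;) {
			List<String> old = snapshot.get();
			List<String> next = new ArrayList<>(old);
			int at = Collections.binarySearch(next, name);
			if (at < 0)
				next.add(-(at + 1), name); // keep the list sorted
			List<String> pub = Collections.unmodifiableList(next);
			if (snapshot.compareAndSet(old, pub))
				return; // published; concurrent readers are unaffected
			// lost a race with another writer: retry against the new snapshot
		}
	}
}
```

Because readers never mutate the published list, a scan that races
with an update simply sees the older snapshot, matching the
"skipped updates get picked up in a future scan" behavior described
for the real cache.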
Iterator traversals of the returned Map<String, Ref> are performed
using a simple merge-join of the two cache lists, ensuring we can
perform the entire traversal in linear time as a function of the
number of references: O(PackedRefs + LooseRefs).

Scans of the loose reference space to update the cache run in
O(LooseRefs log LooseRefs) time, as the directory contents are
sorted before being merged against the in-memory cache. Since the
majority of stable references are kept packed, there typically are
only a handful of reference names to be sorted, so the sorting cost
should not be very high.

Locking is reduced during getRefs() by taking advantage of the
copy-on-write semantics of the improved cache data structure. This
permits concurrent readers to pull back references without blocking
each other. If there is contention updating the cache during a scan,
one or more updates are simply skipped and will get picked up again
in a future scan.

Writing to the $GIT_DIR/packed-refs file during reference delete is
now fully atomic. The file is locked, reparsed fresh, and written
back out if a change is necessary. This avoids all race conditions
with concurrent external updates of the packed-refs file.

The RefLogWriter class has been fully folded into RefDirectory and
is therefore deleted. Maintaining the reference's log is the
responsibility of the database implementation, and not all
implementations will use java.io for access. Future work still
remains to be done to abstract the ReflogReader class away from
local disk IO.

Change-Id: I26b9287c45a4b2d2be35ba2849daa316f5eec85d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
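The merge-join traversal over the two sorted lists can be illustrated
with a small sketch. This is a simplification with invented names (the
real RefDirectory merges RefList<Ref> instances, not name lists); the
key properties shown are the O(PackedRefs + LooseRefs) single pass and
that a loose entry shadows a packed entry of the same name.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: merge-join two sorted name lists in one pass. */
class RefMergeJoin {
	/** Loose entries shadow packed entries with the same name. */
	static List<String> merge(List<String> packed, List<String> loose) {
		List<String> out = new ArrayList<>(packed.size() + loose.size());
		int p = 0, l = 0;
		while (p < packed.size() && l < loose.size()) {
			int cmp = packed.get(p).compareTo(loose.get(l));
			if (cmp < 0)
				out.add(packed.get(p++)); // only packed has this name
			else if (cmp > 0)
				out.add(loose.get(l++)); // only loose has this name
			else {
				out.add(loose.get(l++)); // same name: loose value wins
				p++;
			}
		}
		// drain whichever list still has entries remaining
		while (p < packed.size())
			out.add(packed.get(p++));
		while (l < loose.size())
			out.add(loose.get(l++));
		return out;
	}
}
```

Each comparison advances at least one cursor, so the loop body runs at
most PackedRefs + LooseRefs times, giving the linear traversal bound
quoted above.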
14 years ago
  1. /*
  2. * Copyright (C) 2007, Robin Rosenberg <robin.rosenberg@dewire.com>
  3. * Copyright (C) 2006-2008, Shawn O. Pearce <spearce@spearce.org>
  4. * and other copyright owners as documented in the project's IP log.
  5. *
  6. * This program and the accompanying materials are made available
  7. * under the terms of the Eclipse Distribution License v1.0 which
  8. * accompanies this distribution, is reproduced below, and is
  9. * available at http://www.eclipse.org/org/documents/edl-v10.php
  10. *
  11. * All rights reserved.
  12. *
  13. * Redistribution and use in source and binary forms, with or
  14. * without modification, are permitted provided that the following
  15. * conditions are met:
  16. *
  17. * - Redistributions of source code must retain the above copyright
  18. * notice, this list of conditions and the following disclaimer.
  19. *
  20. * - Redistributions in binary form must reproduce the above
  21. * copyright notice, this list of conditions and the following
  22. * disclaimer in the documentation and/or other materials provided
  23. * with the distribution.
  24. *
  25. * - Neither the name of the Eclipse Foundation, Inc. nor the
  26. * names of its contributors may be used to endorse or promote
  27. * products derived from this software without specific prior
  28. * written permission.
  29. *
  30. * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
  31. * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
  32. * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
  33. * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  34. * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
  35. * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  36. * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
  37. * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
  38. * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  39. * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
  40. * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  41. * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
  42. * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  43. */
  44. package org.eclipse.jgit.storage.file;
  45. import java.io.File;
  46. import java.io.FileInputStream;
  47. import java.io.FileNotFoundException;
  48. import java.io.FileOutputStream;
  49. import java.io.FilenameFilter;
  50. import java.io.IOException;
  51. import java.io.OutputStream;
  52. import java.nio.ByteBuffer;
  53. import java.nio.channels.Channels;
  54. import java.nio.channels.FileChannel;
  55. import java.text.MessageFormat;
  56. import org.eclipse.jgit.JGitText;
  57. import org.eclipse.jgit.lib.Constants;
  58. import org.eclipse.jgit.lib.ObjectId;
  59. import org.eclipse.jgit.util.FS;
  60. import org.eclipse.jgit.util.FileUtils;
  61. /**
  62. * Git style file locking and replacement.
  63. * <p>
  64. * To modify a ref file Git tries to use an atomic update approach: we write the
  65. * new data into a brand new file, then rename it in place over the old name.
  66. * This way we can just delete the temporary file if anything goes wrong, and
  67. * nothing has been damaged. To coordinate access from multiple processes at
  68. * once Git tries to atomically create the new temporary file under a well-known
  69. * name.
  70. */
  71. public class LockFile {
  72. static final String SUFFIX = ".lock"; //$NON-NLS-1$
  73. /** Filter to skip over active lock files when listing a directory. */
  74. static final FilenameFilter FILTER = new FilenameFilter() {
  75. public boolean accept(File dir, String name) {
  76. return !name.endsWith(SUFFIX);
  77. }
  78. };
  79. private final File ref;
  80. private final File lck;
  81. private boolean haveLck;
  82. private FileOutputStream os;
  83. private boolean needSnapshot;
  84. private boolean fsync;
  85. private FileSnapshot commitSnapshot;
  86. private final FS fs;
  87. /**
  88. * Create a new lock for any file.
  89. *
  90. * @param f
  91. * the file that will be locked.
  92. * @param fs
  93. * the file system abstraction which will be necessary to perform
  94. * certain file system operations.
  95. */
  96. public LockFile(final File f, FS fs) {
  97. ref = f;
  98. lck = new File(ref.getParentFile(), ref.getName() + SUFFIX);
  99. this.fs = fs;
  100. }
  101. /**
  102. * Try to establish the lock.
  103. *
  104. * @return true if the lock is now held by the caller; false if it is held
  105. * by someone else.
  106. * @throws IOException
  107. * the temporary output file could not be created. The caller
  108. * does not hold the lock.
  109. */
  110. public boolean lock() throws IOException {
  111. FileUtils.mkdirs(lck.getParentFile(), true);
  112. if (lck.createNewFile()) {
  113. haveLck = true;
  114. try {
  115. os = new FileOutputStream(lck);
  116. } catch (IOException ioe) {
  117. unlock();
  118. throw ioe;
  119. }
  120. }
  121. return haveLck;
  122. }
  123. /**
  124. * Try to establish the lock for appending.
  125. *
  126. * @return true if the lock is now held by the caller; false if it is held
  127. * by someone else.
  128. * @throws IOException
  129. * the temporary output file could not be created. The caller
  130. * does not hold the lock.
  131. */
  132. public boolean lockForAppend() throws IOException {
  133. if (!lock())
  134. return false;
  135. copyCurrentContent();
  136. return true;
  137. }
  138. /**
  139. * Copy the current file content into the temporary file.
  140. * <p>
  141. * This method saves the current file content by inserting it into the
  142. * temporary file, so that the caller can safely append rather than replace
  143. * the primary file.
  144. * <p>
  145. * This method does nothing if the current file does not exist, or exists
  146. * but is empty.
  147. *
  148. * @throws IOException
  149. * the temporary file could not be written, or a read error
  150. * occurred while reading from the current file. The lock is
  151. * released before throwing the underlying IO exception to the
  152. * caller.
  153. * @throws RuntimeException
  154. * the temporary file could not be written. The lock is released
  155. * before throwing the underlying exception to the caller.
  156. */
  157. public void copyCurrentContent() throws IOException {
  158. requireLock();
  159. try {
  160. final FileInputStream fis = new FileInputStream(ref);
  161. try {
  162. if (fsync) {
  163. FileChannel in = fis.getChannel();
  164. long pos = 0;
  165. long cnt = in.size();
  166. while (0 < cnt) {
  167. long r = os.getChannel().transferFrom(in, pos, cnt);
  168. pos += r;
  169. cnt -= r;
  170. }
  171. } else {
  172. final byte[] buf = new byte[2048];
  173. int r;
  174. while ((r = fis.read(buf)) >= 0)
  175. os.write(buf, 0, r);
  176. }
  177. } finally {
  178. fis.close();
  179. }
  180. } catch (FileNotFoundException fnfe) {
  181. // Don't worry about a file that doesn't exist yet, it
  182. // conceptually has no current content to copy.
  183. //
  184. } catch (IOException ioe) {
  185. unlock();
  186. throw ioe;
  187. } catch (RuntimeException ioe) {
  188. unlock();
  189. throw ioe;
  190. } catch (Error ioe) {
  191. unlock();
  192. throw ioe;
  193. }
  194. }
  195. /**
  196. * Write an ObjectId and LF to the temporary file.
  197. *
  198. * @param id
  199. * the id to store in the file. The id will be written in hex,
  200. * followed by a sole LF.
  201. * @throws IOException
  202. * the temporary file could not be written. The lock is released
  203. * before throwing the underlying IO exception to the caller.
  204. * @throws RuntimeException
  205. * the temporary file could not be written. The lock is released
  206. * before throwing the underlying exception to the caller.
  207. */
  208. public void write(final ObjectId id) throws IOException {
  209. byte[] buf = new byte[Constants.OBJECT_ID_STRING_LENGTH + 1];
  210. id.copyTo(buf, 0);
  211. buf[Constants.OBJECT_ID_STRING_LENGTH] = '\n';
  212. write(buf);
  213. }

	/**
	 * Write arbitrary data to the temporary file.
	 *
	 * @param content
	 *            the bytes to store in the temporary file. No additional bytes
	 *            are added, so if the file must end with an LF it must appear
	 *            at the end of the byte array.
	 * @throws IOException
	 *             the temporary file could not be written. The lock is released
	 *             before throwing the underlying IO exception to the caller.
	 * @throws RuntimeException
	 *             the temporary file could not be written. The lock is released
	 *             before throwing the underlying exception to the caller.
	 */
	public void write(final byte[] content) throws IOException {
		requireLock();
		try {
			if (fsync) {
				FileChannel fc = os.getChannel();
				ByteBuffer buf = ByteBuffer.wrap(content);
				while (0 < buf.remaining())
					fc.write(buf);
				fc.force(true);
			} else {
				os.write(content);
			}
			os.close();
			os = null;
		} catch (IOException ioe) {
			unlock();
			throw ioe;
		} catch (RuntimeException ioe) {
			unlock();
			throw ioe;
		} catch (Error ioe) {
			unlock();
			throw ioe;
		}
	}

	/**
	 * Obtain the direct output stream for this lock.
	 * <p>
	 * The stream may only be accessed once, and only after {@link #lock()} has
	 * been successfully invoked and returned true. Callers must close the
	 * stream prior to calling {@link #commit()} to commit the change.
	 *
	 * @return a stream to write to the new file. The stream is unbuffered.
	 */
	public OutputStream getOutputStream() {
		requireLock();

		final OutputStream out;
		if (fsync)
			out = Channels.newOutputStream(os.getChannel());
		else
			out = os;

		return new OutputStream() {
			@Override
			public void write(final byte[] b, final int o, final int n)
					throws IOException {
				out.write(b, o, n);
			}

			@Override
			public void write(final byte[] b) throws IOException {
				out.write(b);
			}

			@Override
			public void write(final int b) throws IOException {
				out.write(b);
			}

			@Override
			public void close() throws IOException {
				try {
					if (fsync)
						os.getChannel().force(true);
					out.close();
					os = null;
				} catch (IOException ioe) {
					unlock();
					throw ioe;
				} catch (RuntimeException ioe) {
					unlock();
					throw ioe;
				} catch (Error ioe) {
					unlock();
					throw ioe;
				}
			}
		};
	}

	private void requireLock() {
		if (os == null) {
			unlock();
			throw new IllegalStateException(MessageFormat.format(JGitText.get().lockOnNotHeld, ref));
		}
	}

	/**
	 * Request that {@link #commit()} remember modification time.
	 * <p>
	 * This is an alias for {@code setNeedSnapshot(true)}.
	 *
	 * @param on
	 *            true if the commit method must remember the modification time.
	 */
	public void setNeedStatInformation(final boolean on) {
		setNeedSnapshot(on);
	}

	/**
	 * Request that {@link #commit()} remember the {@link FileSnapshot}.
	 *
	 * @param on
	 *            true if the commit method must remember the FileSnapshot.
	 */
	public void setNeedSnapshot(final boolean on) {
		needSnapshot = on;
	}

	/**
	 * Request that {@link #commit()} force dirty data to the drive.
	 *
	 * @param on
	 *            true if dirty data should be forced to the drive.
	 */
	public void setFSync(final boolean on) {
		fsync = on;
	}

	/**
	 * Wait until the lock file information differs from the old file.
	 * <p>
	 * This method tests both the length and the last modification date. If both
	 * are the same, this method sleeps until it can force the new lock file's
	 * modification date to be later than the target file.
	 *
	 * @throws InterruptedException
	 *             the thread was interrupted before the last modified date of
	 *             the lock file was different from the last modified date of
	 *             the target file.
	 */
	public void waitForStatChange() throws InterruptedException {
		if (ref.length() == lck.length()) {
			long otime = ref.lastModified();
			long ntime = lck.lastModified();
			while (otime == ntime) {
				Thread.sleep(25 /* milliseconds */);
				lck.setLastModified(System.currentTimeMillis());
				ntime = lck.lastModified();
			}
		}
	}

	/**
	 * Commit this change and release the lock.
	 * <p>
	 * If this method fails (returns false) the lock is still released.
	 *
	 * @return true if the commit was successful and the file contains the new
	 *         data; false if the commit failed and the file remains with the
	 *         old data.
	 * @throws IllegalStateException
	 *             the lock is not held.
	 */
	public boolean commit() {
		if (os != null) {
			unlock();
			throw new IllegalStateException(MessageFormat.format(JGitText.get().lockOnNotClosed, ref));
		}

		saveStatInformation();
		if (lck.renameTo(ref))
			return true;
		if (!ref.exists() || deleteRef())
			if (renameLock())
				return true;
		unlock();
		return false;
	}

	private boolean deleteRef() {
		if (!fs.retryFailedLockFileCommit())
			return ref.delete();

		// File deletion fails on windows if another thread is
		// concurrently reading the same file. So try a few times.
		//
		for (int attempts = 0; attempts < 10; attempts++) {
			if (ref.delete())
				return true;
			try {
				Thread.sleep(100);
			} catch (InterruptedException e) {
				return false;
			}
		}
		return false;
	}

	private boolean renameLock() {
		if (!fs.retryFailedLockFileCommit())
			return lck.renameTo(ref);

		// File renaming fails on windows if another thread is
		// concurrently reading the same file. So try a few times.
		//
		for (int attempts = 0; attempts < 10; attempts++) {
			if (lck.renameTo(ref))
				return true;
			try {
				Thread.sleep(100);
			} catch (InterruptedException e) {
				return false;
			}
		}
		return false;
	}

	private void saveStatInformation() {
		if (needSnapshot)
			commitSnapshot = FileSnapshot.save(lck);
	}

	/**
	 * Get the modification time of the output file when it was committed.
	 *
	 * @return modification time of the lock file right before we committed it.
	 */
	public long getCommitLastModified() {
		return commitSnapshot.lastModified();
	}

	/** @return get the {@link FileSnapshot} just before commit. */
	public FileSnapshot getCommitSnapshot() {
		return commitSnapshot;
	}

	/**
	 * Unlock this file and abort this change.
	 * <p>
	 * The temporary file (if created) is deleted before returning.
	 */
	public void unlock() {
		if (os != null) {
			try {
				os.close();
			} catch (IOException ioe) {
				// Ignore this
			}
			os = null;
		}

		if (haveLck) {
			haveLck = false;
			lck.delete();
		}
	}

	@Override
	public String toString() {
		return "LockFile[" + lck + ", haveLck=" + haveLck + "]";
	}
}
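
The `commit()`/`unlock()` pair above implements the classic lock-file protocol: new content is written to `name.lock`, then renamed over `name`, so readers always see either the complete old file or the complete new file. A minimal, self-contained sketch of that pattern follows — this is not JGit's `LockFile` API; the class and method names here are invented for illustration, and it omits the `lock()` acquisition, fsync, and Windows retry handling shown above.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of the lock-file commit pattern used by LockFile:
// write the new content beside the target as "<name>.lock", then atomically
// rename it into place, deleting the lock file if the rename fails.
public class LockFileSketch {
	public static boolean commit(Path target, byte[] content) throws IOException {
		// The lock file lives next to the target, like JGit's "<ref>.lock".
		Path lock = target.resolveSibling(target.getFileName() + ".lock");
		Files.write(lock, content);
		try {
			// ATOMIC_MOVE plays the role of lck.renameTo(ref): concurrent
			// readers observe either the old or the new content, never a mix.
			Files.move(lock, target, StandardCopyOption.ATOMIC_MOVE);
			return true;
		} catch (IOException e) {
			// Abort: release the lock and leave the old data untouched.
			Files.deleteIfExists(lock);
			return false;
		}
	}

	public static void main(String[] args) throws IOException {
		Path dir = Files.createTempDirectory("lock-demo");
		Path ref = dir.resolve("HEAD");
		Files.write(ref, "old\n".getBytes());
		boolean ok = commit(ref, "new\n".getBytes());
		// On POSIX systems rename replaces the existing file in one step.
		System.out.println(ok + " " + new String(Files.readAllBytes(ref)).trim());
	}
}
```

The real class above adds the pieces this sketch skips: creating the lock file with `createNewFile()` so only one writer can hold it, optional `FileChannel.force(true)` before the rename, and the delete-then-rename retry loop needed where a plain rename cannot replace an open file.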