
TransportSftp.java 14KB

Rewrite reference handling to be abstract and accurate

This commit actually makes three major changes to the way references are handled within JGit. Unfortunately they were easier to do as a single massive commit than to break up into smaller units.

Disambiguate symbolic references:
---------------------------------

Reporting a symbolic reference such as HEAD as though it were any other normal reference like refs/heads/master causes subtle programming errors. We have been bitten by this error on several occasions, as have some downstream applications written by myself.

Instead of reporting HEAD as a reference whose name differs from its "original name", report it as an actual SymbolicRef object whose type the application can test and whose target it can examine.

With this change, Ref is now an abstract type with different subclasses for the different types. In the classical example of "HEAD" being a symbolic reference to branch "refs/heads/master", the Repository.getAllRefs() method will now return:

    Map<String, Ref> all = repository.getAllRefs();
    SymbolicRef HEAD = (SymbolicRef) all.get("HEAD");
    ObjectIdRef master = (ObjectIdRef) all.get("refs/heads/master");

    assertSame(master, HEAD.getTarget());
    assertSame(master.getObjectId(), HEAD.getObjectId());

    assertEquals("HEAD", HEAD.getName());
    assertEquals("refs/heads/master", master.getName());

A nice side-effect of this change is that the storage type of the symbolic reference is no longer ambiguous with the storage type of the underlying reference it targets. In the above example, if master were only available in the packed-refs file, the following would also be true:

    assertSame(Ref.Storage.LOOSE, HEAD.getStorage());
    assertSame(Ref.Storage.PACKED, master.getStorage());

(Prior to this change we returned the ambiguous storage LOOSE_PACKED for HEAD, which was confusing since it wasn't actually true on disk.)
Another nice side-effect of this change is that all intermediate symbolic references are preserved and are therefore visible to the application as it walks the target chain. We can now correctly inspect chains of symbolic references.

As a result of this change, the Ref.getOrigName() method has been removed from the API. Applications should identify a symbolic reference by testing isSymbolic(), not by an arcane string comparison between properties.

Abstract the RefDatabase storage:
---------------------------------

RefDatabase is now abstract, similar to ObjectDatabase, and a new concrete implementation called RefDirectory is used for the traditional on-disk storage layout. In the future we plan to support additional implementations, such as a pure in-memory RefDatabase for unit testing purposes.

Optimize RefDirectory:
----------------------

The implementation of the in-memory reference cache, reading, and update routines has been completely rewritten. Much of the code was heavily borrowed or cribbed from the prior implementation, so copyright notices have been left intact as much as possible.

The RefDirectory cache no longer confuses symbolic references with normal references. This permits the cache to resolve the value of a symbolic reference as late as possible, ensuring it is always current, without needing to maintain reverse pointers.

The cache is now two sorted RefLists rather than three HashMaps. Using sorted lists allows the implementation to reduce the in-memory footprint when storing many refs, and using specialized element types avoids additional map lookups for auxiliary stat information.

To improve scan time during getRefs(), the lists are returned via a copy-on-write contract. Most callers of getRefs() do not modify the returned collections, so the copy-on-write semantics improve access on repositories with a large number of packed references.
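The chain walking enabled by this change can be sketched with a minimal self-contained model. The class and method names below mirror the Ref API described above (isSymbolic(), getTarget()), but these simplified stand-in classes are assumptions for illustration, not the real JGit types:

```java
// Minimal stand-in model of the Ref hierarchy described above.
// The real JGit types carry ObjectIds, storage, and more.
abstract class Ref {
    final String name;
    Ref(String name) { this.name = name; }
    abstract boolean isSymbolic();
    abstract Ref getTarget();
}

// A normal reference pointing directly at an object.
class ObjectIdRef extends Ref {
    ObjectIdRef(String name) { super(name); }
    boolean isSymbolic() { return false; }
    Ref getTarget() { return this; }
}

// A symbolic reference pointing at another reference.
class SymbolicRef extends Ref {
    final Ref target;
    SymbolicRef(String name, Ref target) { super(name); this.target = target; }
    boolean isSymbolic() { return true; }
    Ref getTarget() { return target; }
}

public class RefChainWalk {
    // Follow the chain to its non-symbolic leaf; every intermediate
    // SymbolicRef is visible along the way, as the commit describes.
    static Ref leaf(Ref ref) {
        while (ref.isSymbolic())
            ref = ref.getTarget();
        return ref;
    }

    public static void main(String[] args) {
        Ref master = new ObjectIdRef("refs/heads/master");
        Ref head = new SymbolicRef("HEAD", master);
        System.out.println(leaf(head).name); // refs/heads/master
    }
}
```

Because each link in the chain is a distinct object, an application can also stop at any intermediate SymbolicRef and inspect it, rather than only seeing the flattened leaf.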
Iterator traversals of the returned Map<String, Ref> are performed using a simple merge-join of the two cache lists, ensuring the entire traversal runs in time linear in the number of references: O(PackedRefs + LooseRefs). Scans of the loose reference space to update the cache run in O(LooseRefs log LooseRefs) time, as the directory contents are sorted before being merged against the in-memory cache. Since the majority of stable references are kept packed, there are typically only a handful of reference names to sort, so the sorting cost should not be very high.

Locking is reduced during getRefs() by taking advantage of the copy-on-write semantics of the improved cache data structure. This permits concurrent readers to pull back references without blocking each other. If there is contention updating the cache during a scan, one or more updates are simply skipped and will be picked up again in a future scan.

Writing to $GIT_DIR/packed-refs during reference deletion is now fully atomic. The file is locked, reparsed fresh, and written back out if a change is necessary. This avoids all race conditions with concurrent external updates of the packed-refs file.

The RefLogWriter class has been fully folded into RefDirectory and is therefore deleted. Maintaining the reference's log is the responsibility of the database implementation, and not all implementations will use java.io for access. Future work remains to abstract the ReflogReader class away from local disk IO.

Change-Id: I26b9287c45a4b2d2be35ba2849daa316f5eec85d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
14 years ago
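The merge-join traversal described in the commit message can be sketched as follows. Plain sorted Lists stand in for JGit's RefList type here, and the shadowing rule (a loose entry hiding a packed entry of the same name, as in Git itself) is assumed for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a merge-join over two sorted ref-name lists, as the commit
// describes: one linear pass over packed and loose entries produces the
// union in name order, in O(PackedRefs + LooseRefs) time.
public class RefMergeJoin {
    static List<String> merge(List<String> packed, List<String> loose) {
        List<String> out = new ArrayList<>();
        int p = 0, l = 0;
        while (p < packed.size() && l < loose.size()) {
            int cmp = packed.get(p).compareTo(loose.get(l));
            if (cmp < 0) {
                out.add(packed.get(p++));       // only packed has this name
            } else if (cmp > 0) {
                out.add(loose.get(l++));        // only loose has this name
            } else {
                out.add(loose.get(l++));        // loose shadows its packed twin
                p++;
            }
        }
        // Drain whichever list still has entries.
        while (p < packed.size()) out.add(packed.get(p++));
        while (l < loose.size()) out.add(loose.get(l++));
        return out;
    }

    public static void main(String[] args) {
        List<String> packed = List.of("refs/heads/master", "refs/tags/v1.0");
        List<String> loose = List.of("refs/heads/master", "refs/heads/topic");
        // refs/heads/master, refs/heads/topic, refs/tags/v1.0
        System.out.println(merge(packed, loose));
    }
}
```

Because both inputs stay sorted, no hashing or extra lookups are needed during iteration, which is what makes the two-RefList cache cheaper to traverse than the previous three HashMaps.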
  1. /*
  2. * Copyright (C) 2008, Shawn O. Pearce <spearce@spearce.org>
  3. * and other copyright owners as documented in the project's IP log.
  4. *
  5. * This program and the accompanying materials are made available
  6. * under the terms of the Eclipse Distribution License v1.0 which
  7. * accompanies this distribution, is reproduced below, and is
  8. * available at http://www.eclipse.org/org/documents/edl-v10.php
  9. *
  10. * All rights reserved.
  11. *
  12. * Redistribution and use in source and binary forms, with or
  13. * without modification, are permitted provided that the following
  14. * conditions are met:
  15. *
  16. * - Redistributions of source code must retain the above copyright
  17. * notice, this list of conditions and the following disclaimer.
  18. *
  19. * - Redistributions in binary form must reproduce the above
  20. * copyright notice, this list of conditions and the following
  21. * disclaimer in the documentation and/or other materials provided
  22. * with the distribution.
  23. *
  24. * - Neither the name of the Eclipse Foundation, Inc. nor the
  25. * names of its contributors may be used to endorse or promote
  26. * products derived from this software without specific prior
  27. * written permission.
  28. *
  29. * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
  30. * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
  31. * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
  32. * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  33. * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
  34. * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  35. * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
  36. * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
  37. * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  38. * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
  39. * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  40. * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
  41. * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  42. */
package org.eclipse.jgit.transport;

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.EnumSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

import org.eclipse.jgit.errors.NotSupportedException;
import org.eclipse.jgit.errors.TransportException;
import org.eclipse.jgit.internal.JGitText;
import org.eclipse.jgit.lib.Constants;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectIdRef;
import org.eclipse.jgit.lib.ProgressMonitor;
import org.eclipse.jgit.lib.Ref;
import org.eclipse.jgit.lib.Ref.Storage;
import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.lib.SymbolicRef;

import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.SftpATTRS;
import com.jcraft.jsch.SftpException;
/**
 * Transport over the non-Git aware SFTP (SSH based FTP) protocol.
 * <p>
 * The SFTP transport does not require any specialized Git support on the remote
 * (server side) repository. Object files are retrieved directly through secure
 * shell's FTP protocol, making it possible to copy objects from a remote
 * repository that is available over SSH, but whose remote host does not have
 * Git installed.
 * <p>
 * Unlike the HTTP variant (see {@link TransportHttp}) we rely upon being able
 * to list files in directories, as the SFTP protocol supports this function. By
 * listing files through SFTP we can avoid needing to have current
 * <code>objects/info/packs</code> or <code>info/refs</code> files on the
 * remote repository and access the data directly, much as Git itself would.
 * <p>
 * Concurrent pushing over this transport is not supported. Multiple concurrent
 * push operations may cause confusion in the repository state.
 *
 * @see WalkFetchConnection
 */
public class TransportSftp extends SshTransport implements WalkTransport {
	static final TransportProtocol PROTO_SFTP = new TransportProtocol() {
		public String getName() {
			return JGitText.get().transportProtoSFTP;
		}

		public Set<String> getSchemes() {
			return Collections.singleton("sftp"); //$NON-NLS-1$
		}

		public Set<URIishField> getRequiredFields() {
			return Collections.unmodifiableSet(EnumSet.of(URIishField.HOST,
					URIishField.PATH));
		}

		public Set<URIishField> getOptionalFields() {
			return Collections.unmodifiableSet(EnumSet.of(URIishField.USER,
					URIishField.PASS, URIishField.PORT));
		}

		public int getDefaultPort() {
			return 22;
		}

		public Transport open(URIish uri, Repository local, String remoteName)
				throws NotSupportedException {
			return new TransportSftp(local, uri);
		}
	};

	TransportSftp(final Repository local, final URIish uri) {
		super(local, uri);
	}

	@Override
	public FetchConnection openFetch() throws TransportException {
		final SftpObjectDB c = new SftpObjectDB(uri.getPath());
		final WalkFetchConnection r = new WalkFetchConnection(this, c);
		r.available(c.readAdvertisedRefs());
		return r;
	}

	@Override
	public PushConnection openPush() throws TransportException {
		final SftpObjectDB c = new SftpObjectDB(uri.getPath());
		final WalkPushConnection r = new WalkPushConnection(this, c);
		r.available(c.readAdvertisedRefs());
		return r;
	}

	ChannelSftp newSftp() throws TransportException {
		final int tms = getTimeout() > 0 ? getTimeout() * 1000 : 0;
		try {
			// @TODO: Fix so that this operation is generic and casting to
			// JschSession is no longer necessary.
			final Channel channel = ((JschSession) getSession())
					.getSftpChannel();
			channel.connect(tms);
			return (ChannelSftp) channel;
		} catch (JSchException je) {
			throw new TransportException(uri, je.getMessage(), je);
		}
	}
	class SftpObjectDB extends WalkRemoteObjectDatabase {
		private final String objectsPath;

		private ChannelSftp ftp;

		SftpObjectDB(String path) throws TransportException {
			if (path.startsWith("/~")) //$NON-NLS-1$
				path = path.substring(1);
			if (path.startsWith("~/")) //$NON-NLS-1$
				path = path.substring(2);
			try {
				ftp = newSftp();
				ftp.cd(path);
				ftp.cd("objects"); //$NON-NLS-1$
				objectsPath = ftp.pwd();
			} catch (TransportException err) {
				close();
				throw err;
			} catch (SftpException je) {
				throw new TransportException("Can't enter " + path + "/objects"
						+ ": " + je.getMessage(), je); //$NON-NLS-1$
			}
		}

		SftpObjectDB(final SftpObjectDB parent, final String p)
				throws TransportException {
			try {
				ftp = newSftp();
				ftp.cd(parent.objectsPath);
				ftp.cd(p);
				objectsPath = ftp.pwd();
			} catch (TransportException err) {
				close();
				throw err;
			} catch (SftpException je) {
				throw new TransportException("Can't enter " + p + " from "
						+ parent.objectsPath + ": " + je.getMessage(), je); //$NON-NLS-1$
			}
		}

		@Override
		URIish getURI() {
			return uri.setPath(objectsPath);
		}

		@Override
		Collection<WalkRemoteObjectDatabase> getAlternates() throws IOException {
			try {
				return readAlternates(INFO_ALTERNATES);
			} catch (FileNotFoundException err) {
				return null;
			}
		}

		@Override
		WalkRemoteObjectDatabase openAlternate(final String location)
				throws IOException {
			return new SftpObjectDB(this, location);
		}

		@Override
		Collection<String> getPackNames() throws IOException {
			final List<String> packs = new ArrayList<String>();
			try {
				final Collection<ChannelSftp.LsEntry> list = ftp.ls("pack"); //$NON-NLS-1$
				final HashMap<String, ChannelSftp.LsEntry> files;
				final HashMap<String, Integer> mtimes;
				files = new HashMap<String, ChannelSftp.LsEntry>();
				mtimes = new HashMap<String, Integer>();

				for (final ChannelSftp.LsEntry ent : list)
					files.put(ent.getFilename(), ent);
				for (final ChannelSftp.LsEntry ent : list) {
					final String n = ent.getFilename();
					if (!n.startsWith("pack-") || !n.endsWith(".pack")) //$NON-NLS-1$ //$NON-NLS-2$
						continue;

					final String in = n.substring(0, n.length() - 5) + ".idx"; //$NON-NLS-1$
					if (!files.containsKey(in))
						continue;

					mtimes.put(n, Integer.valueOf(ent.getAttrs().getMTime()));
					packs.add(n);
				}

				Collections.sort(packs, new Comparator<String>() {
					public int compare(final String o1, final String o2) {
						return mtimes.get(o2).intValue()
								- mtimes.get(o1).intValue();
					}
				});
			} catch (SftpException je) {
				throw new TransportException("Can't ls " + objectsPath
						+ "/pack: " + je.getMessage(), je);
			}
			return packs;
		}
		@Override
		FileStream open(final String path) throws IOException {
			try {
				final SftpATTRS a = ftp.lstat(path);
				return new FileStream(ftp.get(path), a.getSize());
			} catch (SftpException je) {
				if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE)
					throw new FileNotFoundException(path);
				throw new TransportException("Can't get " + objectsPath + "/" //$NON-NLS-2$
						+ path + ": " + je.getMessage(), je); //$NON-NLS-1$
			}
		}

		@Override
		void deleteFile(final String path) throws IOException {
			try {
				ftp.rm(path);
			} catch (SftpException je) {
				if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE)
					return;
				throw new TransportException("Can't delete " + objectsPath
						+ "/" + path + ": " + je.getMessage(), je); //$NON-NLS-1$ //$NON-NLS-2$
			}

			// Prune any now empty directories.
			//
			String dir = path;
			int s = dir.lastIndexOf('/');
			while (s > 0) {
				try {
					dir = dir.substring(0, s);
					ftp.rmdir(dir);
					s = dir.lastIndexOf('/');
				} catch (SftpException je) {
					// If we cannot delete it, leave it alone. It may have
					// entries still in it, or maybe we lack write access on
					// the parent. Either way it isn't a fatal error.
					//
					break;
				}
			}
		}

		@Override
		OutputStream writeFile(final String path,
				final ProgressMonitor monitor, final String monitorTask)
				throws IOException {
			try {
				return ftp.put(path);
			} catch (SftpException je) {
				if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE) {
					mkdir_p(path);
					try {
						return ftp.put(path);
					} catch (SftpException je2) {
						je = je2;
					}
				}

				throw new TransportException("Can't write " + objectsPath + "/" //$NON-NLS-2$
						+ path + ": " + je.getMessage(), je); //$NON-NLS-1$
			}
		}

		@Override
		void writeFile(final String path, final byte[] data) throws IOException {
			final String lock = path + ".lock"; //$NON-NLS-1$
			try {
				super.writeFile(lock, data);
				try {
					ftp.rename(lock, path);
				} catch (SftpException je) {
					throw new TransportException("Can't write " + objectsPath
							+ "/" + path + ": " + je.getMessage(), je); //$NON-NLS-1$ //$NON-NLS-2$
				}
			} catch (IOException err) {
				try {
					ftp.rm(lock);
				} catch (SftpException e) {
					// Ignore deletion failure, we are already
					// failing anyway.
				}
				throw err;
			}
		}
		private void mkdir_p(String path) throws IOException {
			final int s = path.lastIndexOf('/');
			if (s <= 0)
				return;

			path = path.substring(0, s);
			try {
				ftp.mkdir(path);
			} catch (SftpException je) {
				if (je.id == ChannelSftp.SSH_FX_NO_SUCH_FILE) {
					mkdir_p(path);
					try {
						ftp.mkdir(path);
						return;
					} catch (SftpException je2) {
						je = je2;
					}
				}

				throw new TransportException("Can't mkdir " + objectsPath + "/"
						+ path + ": " + je.getMessage(), je);
			}
		}

		Map<String, Ref> readAdvertisedRefs() throws TransportException {
			final TreeMap<String, Ref> avail = new TreeMap<String, Ref>();
			readPackedRefs(avail);
			readRef(avail, ROOT_DIR + Constants.HEAD, Constants.HEAD);
			readLooseRefs(avail, ROOT_DIR + "refs", "refs/"); //$NON-NLS-1$ //$NON-NLS-2$
			return avail;
		}

		@SuppressWarnings("unchecked")
		private void readLooseRefs(final TreeMap<String, Ref> avail,
				final String dir, final String prefix)
				throws TransportException {
			final Collection<ChannelSftp.LsEntry> list;
			try {
				list = ftp.ls(dir);
			} catch (SftpException je) {
				throw new TransportException("Can't ls " + objectsPath + "/" //$NON-NLS-2$
						+ dir + ": " + je.getMessage(), je); //$NON-NLS-1$
			}

			for (final ChannelSftp.LsEntry ent : list) {
				final String n = ent.getFilename();
				if (".".equals(n) || "..".equals(n)) //$NON-NLS-1$ //$NON-NLS-2$
					continue;

				final String nPath = dir + "/" + n; //$NON-NLS-1$
				if (ent.getAttrs().isDir())
					readLooseRefs(avail, nPath, prefix + n + "/"); //$NON-NLS-1$
				else
					readRef(avail, nPath, prefix + n);
			}
		}

		private Ref readRef(final TreeMap<String, Ref> avail,
				final String path, final String name) throws TransportException {
			final String line;
			try {
				final BufferedReader br = openReader(path);
				try {
					line = br.readLine();
				} finally {
					br.close();
				}
			} catch (FileNotFoundException noRef) {
				return null;
			} catch (IOException err) {
				throw new TransportException("Cannot read " + objectsPath + "/" //$NON-NLS-2$
						+ path + ": " + err.getMessage(), err); //$NON-NLS-1$
			}

			if (line == null)
				throw new TransportException("Empty ref: " + name);

			if (line.startsWith("ref: ")) { //$NON-NLS-1$
				// A symbolic reference. Resolve its target, reading it from
				// the remote repository if we have not already seen it, and
				// report this name as a SymbolicRef wrapping that target.
				final String target = line.substring("ref: ".length()); //$NON-NLS-1$
				Ref r = avail.get(target);
				if (r == null)
					r = readRef(avail, ROOT_DIR + target, target);
				if (r == null)
					r = new ObjectIdRef.Unpeeled(Ref.Storage.NEW, target, null);
				r = new SymbolicRef(name, r);
				avail.put(r.getName(), r);
				return r;
			}

			if (ObjectId.isId(line)) {
				final Ref r = new ObjectIdRef.Unpeeled(loose(avail.get(name)),
						name, ObjectId.fromString(line));
				avail.put(r.getName(), r);
				return r;
			}

			throw new TransportException("Bad ref: " + name + ": " + line); //$NON-NLS-2$
		}

		private Storage loose(final Ref r) {
			// A ref that was already advertised from packed-refs and is also
			// found loose exists in both storage forms on the remote side.
			if (r != null && r.getStorage() == Storage.PACKED)
				return Storage.LOOSE_PACKED;
			return Storage.LOOSE;
		}

		@Override
		void close() {
			if (ftp != null) {
				try {
					if (ftp.isConnected())
						ftp.disconnect();
				} finally {
					ftp = null;
				}
			}
		}
	}
}
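The `readRef` method above classifies each loose ref file by its first line: a `"ref: "` prefix marks a symbolic reference (reported as a `SymbolicRef` wrapping its resolved target), a 40-character hex string is a direct object id, and anything else is rejected. The following standalone sketch (hypothetical class `RefLineClassifier`, not part of JGit) isolates just that classification step so it can be run without an SFTP server or the JGit library:

```java
// Standalone sketch of the loose-ref line classification performed by
// SftpObjectDB.readRef(). Not JGit code; for illustration only.
public class RefLineClassifier {
	static String classify(final String line) {
		if (line.startsWith("ref: "))
			// Symbolic reference, e.g. HEAD -> refs/heads/master.
			return "symbolic:" + line.substring("ref: ".length());
		if (line.matches("[0-9a-f]{40}"))
			// Direct reference to an object by its SHA-1 id.
			return "object:" + line;
		// Neither form: readRef() would throw TransportException here.
		return "bad";
	}

	public static void main(String[] args) {
		System.out.println(classify("ref: refs/heads/master"));
		// prints symbolic:refs/heads/master
		System.out.println(classify("0123456789abcdef0123456789abcdef01234567"));
		// prints object:0123456789abcdef0123456789abcdef01234567
	}
}
```

Note that the real method also threads a `TreeMap` of already-seen refs through the recursion, so a symbolic ref whose target was advertised via `packed-refs` reuses the existing `Ref` object rather than re-reading it.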
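`getPackNames()` lists `objects/pack` over SFTP, keeps only `pack-*.pack` files that have a matching `.idx`, and sorts them by modification time with the newest pack first, so the walk-based fetch probes the most recently written packs before older ones. A minimal sketch of that ordering (hypothetical class `PackMtimeSort`; the real code reads mtimes from `SftpATTRS` entries):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the pack ordering used by SftpObjectDB.getPackNames():
// sort by modification time, newest first. Illustration only.
public class PackMtimeSort {
	static List<String> newestFirst(final Map<String, Integer> mtimes) {
		final List<String> packs = new ArrayList<String>(mtimes.keySet());
		Collections.sort(packs, new Comparator<String>() {
			public int compare(final String o1, final String o2) {
				// Descending by mtime: larger (newer) timestamps sort first.
				return mtimes.get(o2).intValue() - mtimes.get(o1).intValue();
			}
		});
		return packs;
	}

	public static void main(String[] args) {
		final Map<String, Integer> mtimes = new HashMap<String, Integer>();
		mtimes.put("pack-a.pack", 100);
		mtimes.put("pack-b.pack", 300);
		mtimes.put("pack-c.pack", 200);
		System.out.println(newestFirst(mtimes));
		// prints [pack-b.pack, pack-c.pack, pack-a.pack]
	}
}
```

The subtraction-based comparator is safe here only because SFTP mtimes are non-negative 32-bit values; with arbitrary ints, `Integer.compare` would avoid overflow.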