path: root/org.eclipse.jgit.storage.dht/src
* Delete storage.dht package (Shawn O. Pearce, 2012-09-05; 78 files, -15849/+0)
This experiment proved not to be very useful. I had originally planned to use this on top of Google Bigtable, Apache HBase or Apache Cassandra. Unfortunately the schema is very complex and does not perform well.

The storage.dfs package has much better performance and has been in production at Google for many months now, proving it is a viable storage backend for Git.

As there are no users of the storage.dht schema, either at Google or any other company, nor any valid open source implementations of the storage system, drop the entire package and API from the JGit project. There is no point in trying to maintain code that is simply not used.

Change-Id: Ia8d32f27426d2bcc12e7dc9cc4524c59f4fe4df9
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* [findBugs] Make ChunkKey serializable (Robin Stocker, 2012-07-15; 1 file, -1/+4)
It's used in DhtMissingChunkException, which is serializable.

Change-Id: I2b76bc1bc373efd44214be4598a03c62c681a200
Signed-off-by: Robin Stocker <robin@nibor.org>

* cleanup: Remove unused declarations (Robin Rosenberg, 2012-06-06; 1 file, -1/+0)
Change-Id: I3b54cb9f73cb433c71a441a11ddc74cfecdaa1dc

* Fix loading packed objects >2G (Shawn O. Pearce, 2012-03-28; 1 file, -3/+3)
Parsing the size from a packed object header was incorrectly computing the total inflated length when the length exceeded the range of a Java int. The next 7 bits of size information were shifted left as an int using a shift of 25 bits, placing the higher bits of the size into the sign position. When this size was extended to a long to be added to the current size accumulator, the size went negative, resulting in NegativeArraySizeException being thrown.

Fix all places where this particular pattern of code is used to read a pack size field, or a binary delta header, as they both use the same variable length encoding scheme.

Change-Id: I04008728ed828f18202652c3d5401cf95a441d0a

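To make the overflow concrete, here is a minimal, self-contained sketch of the size decoding described above. It ignores the type bits of the first header byte and is not the actual JGit parsing code; it only shows that shifting the 7-bit groups as an int corrupts sizes above 2 GiB, while widening to a long first keeps them intact.

    public class PackSizeOverflowDemo {
        // Header bytes encoding a 3 GiB size: 4 size bits in the first byte,
        // then 7 bits per continuation byte (type bits ignored for this demo).
        static final int[] HEADER = { 0x80, 0x80, 0x80, 0x80, 0x60 };

        static long decode(int[] bytes, boolean widenBeforeShift) {
            int c = bytes[0];
            long sz = c & 0x0f;
            int shift = 4;
            for (int i = 1; (c & 0x80) != 0; i++) {
                c = bytes[i];
                if (widenBeforeShift) {
                    // Fixed pattern: promote to long before shifting.
                    sz += ((long) (c & 0x7f)) << shift;
                } else {
                    // Buggy pattern: at shift 25 the high bits of this 7-bit
                    // group land in the int sign bit and the sum goes negative.
                    sz += (c & 0x7f) << shift;
                }
                shift += 7;
            }
            return sz;
        }

        public static void main(String[] args) {
            System.out.println("buggy: " + decode(HEADER, false)); // negative
            System.out.println("fixed: " + decode(HEADER, true));  // 3221225472
        }
    }
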
* cleanup: Drop unused parameter on DhtPackParser (Robin Rosenberg, 2012-03-09; 1 file, -4/+4)
Change-Id: I8f2cd0a04cc95a02c49c16dade1b3509cba02e2d

* Fire IndexChangedEvent on DirCache.commit() (Matthias Sohn, 2011-09-30; 1 file, -0/+5)
Since we replaced GitIndex with DirCache, JGit didn't fire IndexChangedEvents anymore. For EGit this still worked, with high latency, since its RepositoryChangeScanner, which is scheduled to run every 10 seconds, fires the event when the index changes. This scanner is meant to detect index changes induced by a different process, e.g. by calling "git add" from native git.

When the index is changed from within the same process we should fire the event synchronously. Compare the index checksum on write to the checksum recorded when the index was read earlier to determine whether the index really changed. Use the IndexChangedListener interface to keep DirCache decoupled from Repository.

Change-Id: Id4311f7a7859ffe8738863b3d86c83c8b5f513af
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

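A minimal sketch of the idea described above, assuming a simplified listener interface; the real IndexChangedListener and DirCache internals differ. The point is only the checksum comparison: remember the checksum seen when the index was read, and fire the event on write when the newly written checksum differs.

    import java.util.Arrays;

    // Simplified stand-in for JGit's IndexChangedListener.
    interface IndexListener {
        void onIndexChanged();
    }

    // Tracks the index checksum across read and write so the change event
    // fires synchronously, and only when the content actually changed.
    class IndexChecksumTracker {
        private final IndexListener listener;
        private byte[] readChecksum; // checksum seen when the index was read

        IndexChecksumTracker(IndexListener listener) {
            this.listener = listener;
        }

        void indexRead(byte[] checksum) {
            readChecksum = checksum.clone();
        }

        void indexWritten(byte[] checksum) {
            if (readChecksum == null || !Arrays.equals(readChecksum, checksum))
                listener.onIndexChanged(); // index really changed: notify now
            readChecksum = checksum.clone();
        }
    }
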
* Reassign symbolic ref list after calling put. (Kevin Sawicki, 2011-08-24; 1 file, -1/+1)
This is required since RefList.put returns a new RefList.

Change-Id: I717d75d6f6154a6e0dc7cde3b72b0a59c68d955c
Signed-off-by: Kevin Sawicki <kevin@github.com>

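The fix is easiest to see with a small stand-alone example. The class below is an illustrative stand-in for an immutable list like RefList, not JGit's actual implementation: put() returns a new instance, so discarding its result silently loses the update.

    import java.util.Arrays;

    // Minimal stand-in for an immutable list: put() never mutates the
    // receiver, it returns a new instance containing the extra element.
    final class ImmutableNames {
        private final String[] names;

        ImmutableNames(String... names) {
            this.names = names;
        }

        ImmutableNames put(String name) {
            String[] copy = Arrays.copyOf(names, names.length + 1);
            copy[names.length] = name;
            return new ImmutableNames(copy);
        }

        @Override
        public String toString() {
            return Arrays.toString(names);
        }
    }

    public class ReassignDemo {
        public static void main(String[] args) {
            ImmutableNames refs = new ImmutableNames("HEAD");
            refs.put("refs/heads/master");        // bug: returned list discarded
            System.out.println(refs);             // [HEAD]
            refs = refs.put("refs/heads/master"); // fix: reassign the result
            System.out.println(refs);             // [HEAD, refs/heads/master]
        }
    }
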
* DHT: Change DhtReader caches to be dynamic by workload (Shawn O. Pearce, 2011-06-09; 6 files, -54/+158)
Instead of fixing the prefetch queue and recent chunk queue at different sizes, allow these to share the same limit but be scaled based on the work being performed.

During walks about 20% of the space will be given to the prefetcher, and the other 80% will be used by the recent chunks cache. This should improve cases where there is bad locality between chunks.

During writing of a pack stream, 90-100% of the space should be made available to the prefetcher, as the prefetch plan is usually very accurate about the order chunks will be needed in.

Change-Id: I1ca7acb4518e66eb9d4138fb753df38e7254704d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

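A minimal sketch of the split described above. The enum and class names are illustrative, not the actual DhtReader API; the ratios come straight from the commit message.

    // One shared byte budget, divided according to the kind of work
    // the reader is doing.
    enum Workload { WALK, PACK_STREAM }

    class CacheBudget {
        final long prefetchBytes;
        final long recentChunkBytes;

        CacheBudget(long totalBytes, Workload workload) {
            if (workload == Workload.WALK) {
                // Walking: locality is poor, keep ~80% for recent chunks.
                prefetchBytes = totalBytes / 5;
                recentChunkBytes = totalBytes - prefetchBytes;
            } else {
                // Pack streaming: the prefetch plan is accurate, so give
                // (nearly) all of the space to the prefetcher.
                prefetchBytes = totalBytes;
                recentChunkBytes = 0;
            }
        }
    }
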
* DHT: Use a proper HashMap for RecentChunk lookups (Shawn O. Pearce, 2011-06-09; 1 file, -12/+16)
A linear search is somewhat acceptable for only 4 recent chunks, but a HashMap based lookup would be better. The table will have 16 slots by default and, given the hashCode() of ChunkKey is derived from the SHA-1 of the chunk, each chunk will fall into its own bucket within the table and thus evaluate only 1 entry during lookup instead of 4.

Some users may also want to devote more memory to the recent chunks, in which case expanding this list to a longer length will help to reduce chunk faults, but would increase search time. Using a HashMap will help this code to scale to larger sizes better.

Change-Id: Ia41b7a1cc69ad27b85749e3b74cbf8d0aa338044
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

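A minimal sketch of the lookup structure, using simplified stand-ins for ChunkKey and the cached chunk type; the real classes carry more state, and the real cache also handles eviction, which is omitted here.

    import java.util.HashMap;
    import java.util.Map;

    // Simplified stand-ins; JGit's ChunkKey hashCode() is derived from the
    // chunk's SHA-1, so keys spread evenly over the table's buckets.
    record ChunkKey(String chunkSha1) { }
    record CachedChunk(ChunkKey key, byte[] data) { }

    class RecentChunksSketch {
        // A HashMap has 16 buckets by default, so with only a handful of
        // well-distributed keys a lookup inspects one entry instead of
        // scanning every recent chunk linearly.
        private final Map<ChunkKey, CachedChunk> byKey = new HashMap<>();

        CachedChunk get(ChunkKey key) {
            return byKey.get(key);
        }

        void put(CachedChunk chunk) {
            byKey.put(chunk.key(), chunk);
        }
    }
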
* DHT: Always have at least one recent chunk in DhtReader (Shawn O. Pearce, 2011-06-09; 1 file, -1/+1)
The RecentChunks cache assumes there is always at least one recent chunk in the maxSize that it receives from the DhtReaderOptions. Ensure that is true by requiring the size to be at least 1.

Running with a recent chunk cache of 0 is a very bad idea: often during commit walking the parents of a commit will be found on the same chunk as the commit that was just accessed. In these cases it's a good idea to keep that last chunk around so the parents can be quickly accessed.

Change-Id: I33b65286e8a4cbf6ef4ced28c547837f173e065d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

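The guard itself amounts to a one-line clamp; the method below is an illustrative sketch, not the actual DhtReaderOptions code.

    class RecentChunkLimit {
        // Never let the recent-chunk cache run with zero slots, so the chunk
        // holding a commit stays cached when its parents are read next.
        static int clamp(int configuredSize) {
            return Math.max(1, configuredSize);
        }
    }
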
* DHT: Fix NPE during prefetch (Shawn O. Pearce, 2011-06-09; 1 file, -1/+1)
The Prefetcher may have loaded a chunk that is a fragment. If the DhtReader is scanning the Prefetcher's chunks for a particular object, fragment chunks will be missing the index and will throw a NullPointerException during the findOffset() call into the index itself.

Change-Id: Ie2823724c289f745655076c5209acec32361a1ea
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

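A minimal sketch of the guard, with illustrative stand-ins for the chunk and index types (not the real PackChunk API): a fragment chunk has no index, so it must be treated as a miss rather than dereferenced.

    // Illustrative stand-ins for a prefetched chunk and its index.
    interface ChunkIndex {
        long findOffset(String objectId);
    }

    record PrefetchedChunk(ChunkIndex index) {
        // Fragments carry no index; guard before calling into it so the
        // scan reports "not here" instead of throwing a NullPointerException.
        long findOffsetOrMiss(String objectId) {
            if (index == null)
                return -1;
            return index.findOffset(objectId);
        }
    }
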
* DHT: Drop leading hash digits from row keys (Shawn O. Pearce, 2011-06-09; 2 files, -20/+12)
Originally I put the first two digits of the object SHA-1 into the start of a row key to try and spread the load of objects around a DHT service. Unfortunately this tends to not work as well as I had hoped.

Servers reading a repository need to contact every node in a DHT cluster if the cluster tries to evenly distribute the object rows. This is a lot of connections, especially if the cluster has many backend storage servers. If the library has an open connection limit (possibly due to JVM file descriptor limitations) it may need to open and close a lot of connections to access a repository, rather than being able to reuse the same connection to a handful of backend servers. This results in a lot of connection thrashing for some DHT type databases, and is inefficient.

Some DHTs are able to operate even if part of the database space is currently unavailable. For example, a DHT service might assign some section of the key space to a node, and then fail that section over to another node when the primary is noticed as being offline. During that failover period that section of the key space is not available, but other sections hosted by other backends are still ready for service. Spreading keys all over the cluster makes it likely that any single backend being temporarily down means the entire cluster is down, rather than only some.

This is a massive schema change, but it should improve reliability and performance for any DHT system.

Change-Id: I6b65bfb4c14b6f7bd323c2bd0638b49d429245be
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

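An illustrative sketch of the two key layouts being contrasted. The exact ChunkKey/ObjectIndexKey formats in JGit differ, and the repository id component here is an assumption for illustration only.

    class RowKeySketch {
        // Old scheme: lead with two hex digits of the object SHA-1 so rows
        // scatter across the whole cluster; reading one repository then
        // touches nearly every backend node.
        static String scattered(int repoId, String objectSha1) {
            return objectSha1.substring(0, 2) + "." + repoId + "." + objectSha1;
        }

        // New scheme: drop the leading digits so a repository's rows group
        // onto a handful of backends and connections can be reused.
        static String grouped(int repoId, String objectSha1) {
            return repoId + "." + objectSha1;
        }
    }
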
* DHT: Support removing a repository name (Shawn O. Pearce, 2011-05-31; 3 files, -0/+43)
The first step to deleting a repository from the DHT storage is to remove the name binding in the RepositoryIndexTable, making the repository unavailable for lookup.

Change-Id: I469bf92f4bf2f555a15949569b21937c14cb142b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

* DHT: Fix thread-safety issue in AbstractWriteBuffer (Shawn O. Pearce, 2011-05-31; 1 file, -7/+18)
There is a data corruption issue with the 'running' list if a background thread schedules something onto the buffer while the application thread is also using it.

Change-Id: I5ba78b98b6632965d677a9c8f209f0cf8320cc3d
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

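A minimal sketch of the kind of guard such a fix needs, assuming an ArrayList-backed 'running' list; the real AbstractWriteBuffer fields and fix may differ. Every access goes through the same monitor, so a background completion cannot interleave with the application thread.

    import java.util.ArrayList;
    import java.util.List;

    class WriteBufferSketch {
        private final List<Runnable> running = new ArrayList<>();

        // Application thread: schedule a pending write.
        void addPending(Runnable task) {
            synchronized (running) {
                running.add(task);
            }
        }

        // Executor/background thread: a write finished, drop it from the list.
        void completed(Runnable task) {
            synchronized (running) {
                running.remove(task);
            }
        }
    }
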
* DHT: Add a sequence number to RefData (Shawn O. Pearce, 2011-05-25; 3 files, -129/+197)
RefData now uses a sequence number as part of the field, ensuring that updates always increase the sequence number by one whenever a reference is modified.

Attaching a sequence number to RefData will help with storing reference log entries during updates. As the sequence number should be unique within the reference name space, log entries can be keyed by the sequence number and remain unique. Making this work over reference delete-create cycles will require an additional RefTable API to return the oldest sequence number previously used in the reference log to seed the recreated reference.

Change-Id: I11cfff2a96ef962e57f29925a3eef41bdbf9f9bb
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>

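An illustrative sketch of the invariant; the record below is a stand-in, not the protobuf-generated RefData message, which carries more fields.

    record RefDataSketch(String name, String targetObjectId, long sequence) {
        // Updating a reference always bumps the sequence by exactly one,
        // giving reference log entries a unique key within the reference.
        RefDataSketch update(String newTargetObjectId) {
            return new RefDataSketch(name, newTargetObjectId, sequence + 1);
        }
    }
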
* DHT: Replace TinyProtobuf with Google Protocol Buffers (Shawn O. Pearce, 2011-05-25; 37 files, -2396/+981)
The standard Google distribution of Protocol Buffers in Java is better maintained than TinyProtobuf, and should be faster for most uses. It does use slightly more memory due to many of our key types being stored as strings in protobuf messages, but this is probably worth the small hit to memory in exchange for better maintained code that is easier to reuse in other applications.

Exposing all of our data members to the underlying implementation makes it easier to develop reporting and data mining tools, or to expand out a nested structure like RefData into a flat format in a SQL database table.

Since the C++ protoc tool is necessary to convert the protobuf script into Java code, the generated files are committed as part of the source repository to make it easier for developers who do not have this tool installed to still build the overall JGit package and make use of it. Reviewers will need to be careful to ensure that any edits made to a *.proto file come in a commit that also updates the generated code to match.

CQ: 5135
Change-Id: I53e11e82c186b9cf0d7b368e0276519e6a0b2893
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>

* DHT: Remove per-process ChunkCache (Shawn O. Pearce, 2011-05-25; 6 files, -626/+23)
Performance testing has indicated the per-process ChunkCache isn't very effective for the DHT storage implementation. If a server is using the DHT storage backend, it is most likely part of a larger cluster where requests are distributed in a round-robin fashion between the member servers. In such a scenario there is insufficient data locality between requests to get a good hit ratio on the per-process ChunkCache.

A low hit ratio means the cache is actually hurting performance by eating up memory that could otherwise be used for transient request data, and increasing pressure on the GC when it needs to find free space.

Remove all of the ChunkCache code. Installations that want to cache (to reduce database usage) should wrap their Database with a CacheDatabase and use a network based CacheServer.

I left the ChunkCache in the original DHT storage commit because I wanted to document in the history of the project that it's probably worth *not* having, but leave open a door for someone to revert this change if they find otherwise at a later date.

Change-Id: I364d0725c46c5a19f7443642a40c89ba4d3fdd29
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>

* Store Git on any DHT (Shawn O. Pearce, 2011-05-05; 82 files, -0/+17638)
jgit.storage.dht is a storage provider implementation for JGit that permits storing the Git repository in a distributed hashtable, NoSQL system, or other database. The actual underlying storage system is undefined, and can be plugged in by implementing 7 small interfaces:

  * Database
  * RepositoryIndexTable
  * RepositoryTable
  * RefTable
  * ChunkTable
  * ObjectIndexTable
  * WriteBuffer

The storage provider interface tries to assume very little about the underlying storage system, and requires only three key features:

  * key -> value lookup (a hashtable is suitable)
  * atomic updates on single rows
  * asynchronous operations (Java's ExecutorService is easy to use)

Most NoSQL database products offer all 3 of these features in their clients, and so does any decent network based cache system like the open source memcache product. Relying only on key equality for data retrieval makes it simple for the storage engine to distribute across multiple machines. Traditional SQL systems could also be used with a JDBC based spi implementation.

Before submitting this change I have implemented six storage systems for the spi layer:

  * Apache HBase [1]
  * Apache Cassandra [2]
  * Google Bigtable [3]
  * an in-memory implementation for unit testing
  * a JDBC implementation for SQL
  * a generic cache provider that can ride on top of memcache

All six systems came in with an spi layer around 1000 lines of code to implement the above 7 interfaces. This is a huge reduction in size compared to prior attempts to implement a new JGit storage layer. As this package shows, a complete JGit storage implementation is more than 17,000 lines of fairly complex code.

A simple cache is provided in storage.dht.spi.cache. Implementers can use CacheDatabase to wrap any other type of Database and perform fast reads against a network based cache service, such as the open source memcached [4]. An implementation of CacheService must be provided to glue this spi onto the network cache.

[1] https://github.com/spearce/jgit_hbase
[2] https://github.com/spearce/jgit_cassandra
[3] http://labs.google.com/papers/bigtable.html
[4] http://memcached.org/

Change-Id: I0aa4072781f5ccc019ca421c036adff2c40c4295
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

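To make the three required storage features listed in this entry concrete, here is a minimal in-memory sketch. It uses simplified String keys and values and illustrative names (InMemoryTableSketch, compareAndPut); it is not the real storage.dht.spi interface set, whose method signatures differ. It shows a key -> value lookup, an atomic single-row update, and asynchronous execution via an ExecutorService.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    class InMemoryTableSketch {
        private final ConcurrentMap<String, String> rows = new ConcurrentHashMap<>();
        private final ExecutorService executor = Executors.newFixedThreadPool(4);

        // Features 1 and 3: key -> value lookup, performed asynchronously.
        Future<String> get(String key) {
            return executor.submit((Callable<String>) () -> rows.get(key));
        }

        // Feature 2: atomic update of a single row. The write applies only if
        // the row still holds the value the caller last read (compare-and-swap
        // semantics, which most NoSQL backends offer per row).
        boolean compareAndPut(String key, String expected, String update) {
            if (expected == null)
                return rows.putIfAbsent(key, update) == null;
            return rows.replace(key, expected, update);
        }

        void shutdown() {
            executor.shutdown();
        }
    }
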