
DhtReaderOptions.java 11KB

Store Git on any DHT

jgit.storage.dht is a storage provider implementation for JGit that permits storing the Git repository in a distributed hashtable, NoSQL system, or other database. The actual underlying storage system is undefined, and can be plugged in by implementing 7 small interfaces:

* Database
* RepositoryIndexTable
* RepositoryTable
* RefTable
* ChunkTable
* ObjectIndexTable
* WriteBuffer

The storage provider interface tries to assume very little about the underlying storage system, and requires only three key features:

* key -> value lookup (a hashtable is suitable)
* atomic updates on single rows
* asynchronous operations (Java's ExecutorService is easy to use)

Most NoSQL database products offer all 3 of these features in their clients, and so does any decent network based cache system like the open source memcache product. Relying only on key equality for data retrieval makes it simple for the storage engine to distribute across multiple machines. Traditional SQL systems could also be used with a JDBC based SPI implementation.

Before submitting this change I have implemented six storage systems for the SPI layer:

* Apache HBase [1]
* Apache Cassandra [2]
* Google Bigtable [3]
* an in-memory implementation for unit testing
* a JDBC implementation for SQL
* a generic cache provider that can ride on top of memcache

All six systems came in with an SPI layer around 1000 lines of code to implement the above 7 interfaces. This is a huge reduction in size compared to prior attempts to implement a new JGit storage layer. As this package shows, a complete JGit storage implementation is more than 17,000 lines of fairly complex code.

A simple cache is provided in storage.dht.spi.cache. Implementers can use CacheDatabase to wrap any other type of Database and perform fast reads against a network based cache service, such as the open source memcached [4]. An implementation of CacheService must be provided to glue this SPI onto the network cache.

[1] https://github.com/spearce/jgit_hbase
[2] https://github.com/spearce/jgit_cassandra
[3] http://labs.google.com/papers/bigtable.html
[4] http://memcached.org/

Change-Id: I0aa4072781f5ccc019ca421c036adff2c40c4295
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

13 years ago
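To make the three required features concrete, here is a minimal sketch of what a key -> value SPI table in this spirit could look like, backed by an in-memory map as in the unit-testing implementation the message mentions. The names and signatures (`KeyValueTable`, `compareAndPut`, `getNow`) are hypothetical illustrations, not the real jgit.storage.dht interfaces:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Hypothetical key -> value SPI; illustrative only. */
interface KeyValueTable {
	/** Asynchronous lookup: the third required feature. */
	Future<Optional<String>> get(String key);

	/** Atomic update of a single row: the second required feature. */
	boolean compareAndPut(String key, String expect, String update);
}

/** In-memory backend, analogous to the unit-testing implementation above. */
class MemoryTable implements KeyValueTable {
	private final Map<String, String> rows = new ConcurrentHashMap<>();
	private final ExecutorService executor = Executors.newFixedThreadPool(2);

	@Override
	public Future<Optional<String>> get(String key) {
		// Key equality is the only lookup primitive, so a plain hashtable works.
		return executor.submit(() -> Optional.ofNullable(rows.get(key)));
	}

	@Override
	public boolean compareAndPut(String key, String expect, String update) {
		// expect == null means "create only if absent"; otherwise swap atomically.
		if (expect == null)
			return rows.putIfAbsent(key, update) == null;
		return rows.replace(key, expect, update);
	}

	/** Convenience for callers that want to block on the asynchronous read. */
	String getNow(String key) {
		try {
			return get(key).get().orElse(null);
		} catch (Exception e) {
			throw new RuntimeException(e);
		}
	}

	void shutdown() {
		executor.shutdown();
	}
}
```

Any backend offering the same three primitives (key lookup, single-row atomic update, async execution) could slot in behind such an interface, which is why the six SPI implementations each stayed near 1000 lines.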
/*
 * Copyright (C) 2011, Google Inc.
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 *   notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 *   copyright notice, this list of conditions and the following
 *   disclaimer in the documentation and/or other materials provided
 *   with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 *   names of its contributors may be used to endorse or promote
 *   products derived from this software without specific prior
 *   written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.storage.dht;

import org.eclipse.jgit.lib.Config;

/** Options controlling how objects are read from a DHT stored repository. */
public class DhtReaderOptions {
	/** 1024 (number of bytes in one kibibyte/kilobyte) */
	public static final int KiB = 1024;

	/** 1024 {@link #KiB} (number of bytes in one mebibyte/megabyte) */
	public static final int MiB = 1024 * KiB;

	private Timeout timeout;
	private boolean prefetchFollowEdgeHints;
	private int chunkLimit;
	private int openQueuePrefetchRatio;
	private int walkCommitsPrefetchRatio;
	private int walkTreesPrefetchRatio;
	private int writeObjectsPrefetchRatio;
	private int objectIndexConcurrentBatches;
	private int objectIndexBatchSize;
	private int deltaBaseCacheSize;
	private int deltaBaseCacheLimit;
	private int recentInfoCacheSize;
	private boolean trackFirstChunkLoad;

	/** Create a default reader configuration. */
	public DhtReaderOptions() {
		setTimeout(Timeout.seconds(5));
		setPrefetchFollowEdgeHints(true);
		setChunkLimit(5 * MiB);
		setOpenQueuePrefetchRatio(20 /* percent */);
		setWalkCommitsPrefetchRatio(20 /* percent */);
		setWalkTreesPrefetchRatio(20 /* percent */);
		setWriteObjectsPrefetchRatio(90 /* percent */);
		setObjectIndexConcurrentBatches(2);
		setObjectIndexBatchSize(512);
		setDeltaBaseCacheSize(1024);
		setDeltaBaseCacheLimit(10 * MiB);
		setRecentInfoCacheSize(4096);
	}
	/** @return default timeout to wait on long operations before aborting. */
	public Timeout getTimeout() {
		return timeout;
	}

	/**
	 * Set the default timeout to wait on long operations.
	 *
	 * @param maxWaitTime
	 *            new wait time.
	 * @return {@code this}
	 */
	public DhtReaderOptions setTimeout(Timeout maxWaitTime) {
		if (maxWaitTime == null || maxWaitTime.getTime() < 0)
			throw new IllegalArgumentException();
		timeout = maxWaitTime;
		return this;
	}

	/** @return if the prefetcher should follow edge hints (experimental) */
	public boolean isPrefetchFollowEdgeHints() {
		return prefetchFollowEdgeHints;
	}

	/**
	 * Enable (or disable) the experimental edge following feature.
	 *
	 * @param follow
	 *            true to follow the edge hints.
	 * @return {@code this}
	 */
	public DhtReaderOptions setPrefetchFollowEdgeHints(boolean follow) {
		prefetchFollowEdgeHints = follow;
		return this;
	}

	/** @return number of bytes to hold within a DhtReader. */
	public int getChunkLimit() {
		return chunkLimit;
	}

	/**
	 * Set the number of bytes to hold within a DhtReader.
	 *
	 * @param maxBytes
	 *            maximum number of bytes to hold; values below 1024 are
	 *            rounded up to 1024.
	 * @return {@code this}
	 */
	public DhtReaderOptions setChunkLimit(int maxBytes) {
		chunkLimit = Math.max(1024, maxBytes);
		return this;
	}

	/** @return percentage of {@link #getChunkLimit()} used for prefetch, 0..100. */
	public int getOpenQueuePrefetchRatio() {
		return openQueuePrefetchRatio;
	}

	/**
	 * Set the prefetch ratio used by the open object queue.
	 *
	 * @param ratio 0..100.
	 * @return {@code this}
	 */
	public DhtReaderOptions setOpenQueuePrefetchRatio(int ratio) {
		openQueuePrefetchRatio = Math.max(0, Math.min(ratio, 100));
		return this;
	}

	/** @return percentage of {@link #getChunkLimit()} used for prefetch, 0..100. */
	public int getWalkCommitsPrefetchRatio() {
		return walkCommitsPrefetchRatio;
	}

	/**
	 * Set the prefetch ratio used when walking commits.
	 *
	 * @param ratio 0..100.
	 * @return {@code this}
	 */
	public DhtReaderOptions setWalkCommitsPrefetchRatio(int ratio) {
		walkCommitsPrefetchRatio = Math.max(0, Math.min(ratio, 100));
		return this;
	}

	/** @return percentage of {@link #getChunkLimit()} used for prefetch, 0..100. */
	public int getWalkTreesPrefetchRatio() {
		return walkTreesPrefetchRatio;
	}

	/**
	 * Set the prefetch ratio used when walking trees.
	 *
	 * @param ratio 0..100.
	 * @return {@code this}
	 */
	public DhtReaderOptions setWalkTreesPrefetchRatio(int ratio) {
		walkTreesPrefetchRatio = Math.max(0, Math.min(ratio, 100));
		return this;
	}

	/** @return percentage of {@link #getChunkLimit()} used for prefetch, 0..100. */
	public int getWriteObjectsPrefetchRatio() {
		return writeObjectsPrefetchRatio;
	}

	/**
	 * Set the prefetch ratio used when writing objects.
	 *
	 * @param ratio 0..100.
	 * @return {@code this}
	 */
	public DhtReaderOptions setWriteObjectsPrefetchRatio(int ratio) {
		writeObjectsPrefetchRatio = Math.max(0, Math.min(ratio, 100));
		return this;
	}
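The setters above all follow the same pattern: clamp the incoming value to a sane range, then return {@code this} so calls can be chained fluently. A tiny standalone sketch of that pattern (the `PrefetchOptions` class here is a hypothetical miniature, not part of JGit):

```java
/** Miniature of the options pattern above: clamp the value, return this. */
class PrefetchOptions {
	private int ratio;

	PrefetchOptions setPrefetchRatio(int percent) {
		// Clamp to 0..100, exactly as the DhtReaderOptions setters do.
		ratio = Math.max(0, Math.min(percent, 100));
		return this; // fluent: callers can chain several set* calls
	}

	int getPrefetchRatio() {
		return ratio;
	}
}
```

Because every setter normalizes its input, a caller can never put the options object into an invalid state, and `new PrefetchOptions().setPrefetchRatio(150)` silently lands on 100 rather than throwing.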
	/** @return number of concurrent reads against ObjectIndexTable. */
	public int getObjectIndexConcurrentBatches() {
		return objectIndexConcurrentBatches;
	}

	/**
	 * Set the number of concurrent readers on ObjectIndexTable.
	 *
	 * @param batches
	 *            number of batches.
	 * @return {@code this}
	 */
	public DhtReaderOptions setObjectIndexConcurrentBatches(int batches) {
		objectIndexConcurrentBatches = Math.max(1, batches);
		return this;
	}

	/** @return number of objects to look up in one batch. */
	public int getObjectIndexBatchSize() {
		return objectIndexBatchSize;
	}

	/**
	 * Set the number of objects to look up at once.
	 *
	 * @param objectCnt
	 *            the number of objects in a lookup batch.
	 * @return {@code this}
	 */
	public DhtReaderOptions setObjectIndexBatchSize(int objectCnt) {
		objectIndexBatchSize = Math.max(1, objectCnt);
		return this;
	}

	/** @return size of the delta base cache hash table, in object entries. */
	public int getDeltaBaseCacheSize() {
		return deltaBaseCacheSize;
	}

	/**
	 * Set the size of the delta base cache hash table.
	 *
	 * @param slotCnt
	 *            number of slots in the hash table.
	 * @return {@code this}
	 */
	public DhtReaderOptions setDeltaBaseCacheSize(int slotCnt) {
		deltaBaseCacheSize = Math.max(1, slotCnt);
		return this;
	}

	/** @return maximum number of bytes to hold in per-reader DeltaBaseCache. */
	public int getDeltaBaseCacheLimit() {
		return deltaBaseCacheLimit;
	}

	/**
	 * Set the maximum number of bytes in the DeltaBaseCache.
	 *
	 * @param maxBytes
	 *            the new limit.
	 * @return {@code this}
	 */
	public DhtReaderOptions setDeltaBaseCacheLimit(int maxBytes) {
		deltaBaseCacheLimit = Math.max(0, maxBytes);
		return this;
	}

	/** @return number of objects to cache information on. */
	public int getRecentInfoCacheSize() {
		return recentInfoCacheSize;
	}

	/**
	 * Set the number of objects to cache information on.
	 *
	 * @param objectCnt
	 *            the number of objects to cache.
	 * @return {@code this}
	 */
	public DhtReaderOptions setRecentInfoCacheSize(int objectCnt) {
		recentInfoCacheSize = Math.max(0, objectCnt);
		return this;
	}

	/**
	 * @return true if {@link DhtReader.Statistics} includes the stack trace for
	 *         the first time a chunk is loaded. Supports debugging DHT code.
	 */
	public boolean isTrackFirstChunkLoad() {
		return trackFirstChunkLoad;
	}

	/**
	 * Set whether or not the initial load of each chunk should be tracked.
	 *
	 * @param track
	 *            true to track the stack trace of the first load.
	 * @return {@code this}.
	 */
	public DhtReaderOptions setTrackFirstChunkLoad(boolean track) {
		trackFirstChunkLoad = track;
		return this;
	}

	/**
	 * Update properties by setting fields from the configuration.
	 * <p>
	 * If a property is not defined in the configuration, then it is left
	 * unmodified.
	 *
	 * @param rc
	 *            configuration to read properties from.
	 * @return {@code this}
	 */
	public DhtReaderOptions fromConfig(Config rc) {
		setTimeout(Timeout.getTimeout(rc, "core", "dht", "timeout", getTimeout()));
		setPrefetchFollowEdgeHints(rc.getBoolean("core", "dht", "prefetchFollowEdgeHints", isPrefetchFollowEdgeHints()));
		setChunkLimit(rc.getInt("core", "dht", "chunkLimit", getChunkLimit()));
		setOpenQueuePrefetchRatio(rc.getInt("core", "dht", "openQueuePrefetchRatio", getOpenQueuePrefetchRatio()));
		setWalkCommitsPrefetchRatio(rc.getInt("core", "dht", "walkCommitsPrefetchRatio", getWalkCommitsPrefetchRatio()));
		setWalkTreesPrefetchRatio(rc.getInt("core", "dht", "walkTreesPrefetchRatio", getWalkTreesPrefetchRatio()));
		setWriteObjectsPrefetchRatio(rc.getInt("core", "dht", "writeObjectsPrefetchRatio", getWriteObjectsPrefetchRatio()));
		setObjectIndexConcurrentBatches(rc.getInt("core", "dht", "objectIndexConcurrentBatches", getObjectIndexConcurrentBatches()));
		setObjectIndexBatchSize(rc.getInt("core", "dht", "objectIndexBatchSize", getObjectIndexBatchSize()));
		setDeltaBaseCacheSize(rc.getInt("core", "dht", "deltaBaseCacheSize", getDeltaBaseCacheSize()));
		setDeltaBaseCacheLimit(rc.getInt("core", "dht", "deltaBaseCacheLimit", getDeltaBaseCacheLimit()));
		setRecentInfoCacheSize(rc.getInt("core", "dht", "recentInfoCacheSize", getRecentInfoCacheSize()));
		setTrackFirstChunkLoad(rc.getBoolean("core", "dht", "debugTrackFirstChunkLoad", isTrackFirstChunkLoad()));
		return this;
	}
}
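Since {@code fromConfig} reads each key from section {@code core}, subsection {@code dht}, a repository's config file could override the defaults like this (a sketch: the key names come from the code above, but the values are purely illustrative, and any key left out keeps its default):

```ini
[core "dht"]
	chunkLimit = 5242880
	openQueuePrefetchRatio = 20
	objectIndexConcurrentBatches = 2
	objectIndexBatchSize = 512
	deltaBaseCacheLimit = 10485760
	debugTrackFirstChunkLoad = true
```

Note that unset keys fall back to the getter's current value, so calling {@code fromConfig} on a freshly constructed {@code DhtReaderOptions} layers the config on top of the constructor defaults.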