
Ensure that GC#deleteOrphans respects pack lock

If pack or index files are guarded by a pack lock (.keep file)
deleteOrphans() should not touch the respective files protected by the
lock file. Otherwise it may interfere with PackInserter concurrently
inserting a new pack file and its index.

The problem was caused by the following race.

All mentioned files are located in "objects/pack/".

File endings relevant in "pack" dir:
  .pack
  .keep
  .idx
  .bitmap

When ReceivePack receives a pack file it executes the following steps:

ReceivePack.service():
  receivePackAndCheckConnectivity():
    receivePack():
      receive the pack
      parse the pack, returns packLock (.keep file)
  PackInserter.flush():
    write tmpPck file: "insert_<random>.pack"
    write tmpIdx file: "insert_<random>.idx"
    real pack name: "pack-<SHA1>.pack"
    real index name: "pack-<SHA1>.idx"
    atomic rename tmpPack to realPack
    atomic rename tmpIdx to realIdx
  execute commands
  unlock pack by removing .keep file
  trigger auto gc if enabled

When PackInserter.flush() renames the temporary pack to the final
"pack-xxx.pack" file, the temporary pack index file "insert_xxx.idx" has
no matching .pack file with the same base name for a short interval. If
deleteOrphans() ran during that interval it deduced the pack index file
was orphaned. Subsequently the missing pack index caused
MissingObjectExceptions since objects contained in the pack couldn't be
looked up anymore.

Bug: https://bugs.chromium.org/p/gerrit/issues/detail?id=13544
Change-Id: I559c81e4b1d7c487f92a751bd78b987d32c98719
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
3 years ago
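The rule the fix enforces can be illustrated with a small, self-contained sketch. This is hypothetical code, not JGit's actual deleteOrphans() implementation: an ".idx" or ".bitmap" file is treated as orphaned only when its base name matches neither a ".pack" file nor a ".keep" lock, so index files guarded by a pack lock survive the scan.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of an orphan scan that respects the pack lock.
// An .idx/.bitmap file is orphaned only if no .pack AND no .keep file
// shares its base name; a .keep signals an inserter is still mid-flush.
public class OrphanScan {
    static List<String> findOrphans(Collection<String> packDirFiles) {
        Set<String> protectedBases = packDirFiles.stream()
                .filter(f -> f.endsWith(".pack") || f.endsWith(".keep"))
                .map(f -> f.substring(0, f.lastIndexOf('.')))
                .collect(Collectors.toSet());
        return packDirFiles.stream()
                .filter(f -> f.endsWith(".idx") || f.endsWith(".bitmap"))
                .filter(f -> !protectedBases
                        .contains(f.substring(0, f.lastIndexOf('.'))))
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList(
                "pack-a.pack", "pack-a.idx", // complete pair: safe
                "pack-b.idx", "pack-b.keep", // mid-flush, locked: safe
                "pack-c.idx");               // truly orphaned
        System.out.println(findOrphans(files)); // prints [pack-c.idx]
    }
}
```

Without the ".keep" clause, "pack-b.idx" would be reported as orphaned during the rename window, which is exactly the race the commit describes.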
Limit the range of commits for which bitmaps are created.

A bitmap index contains bitmaps for a set of commits in a pack file.
Creating a bitmap for every commit is too expensive, so heuristics select
the most "important" commits. The most recent commits are the most
valuable. To clone a repository only those for the branch tips are
needed. When fetching, only commits since the last fetch are needed.

The commit selection heuristics generally work, but for some repositories
the number of selected commits is prohibitively high. One example is the
MSM 3.10 Linux kernel. With over 1 million commits on 2820 branches, the
current heuristics resulted in +36k selected commits. Each uncompressed
bitmap for that repository is ~413k, making it difficult to complete a GC
operation in available memory.

The benefit of creating bitmaps over the entire history of a repository
like the MSM 3.10 Linux kernel isn't clear. For that repository, most
history for the last year appears to be in the last 100k commits.
Limiting bitmap commit selection to just those commits reduces the count
of selected commits from ~36k to ~10.5k. Dropping bitmaps for older
commits does not affect object counting times for clones or for fetches
on clients that are reasonably up-to-date.

This patch defines a new "bitmapCommitRange" PackConfig parameter to
limit the commit selection process when building bitmaps. The range
starts with the most recent commit and walks backwards. A range of 10k
considers only the 10000 most recent commits. A range of zero creates
bitmaps only for branch tips. A range of -1 (the default) does not limit
the range; all commits in the pack are used in the commit selection
process.

Change-Id: Ied92c70cfa0778facc670e0f14a0980bed5e3bfb
Signed-off-by: Terry Parker <tparker@google.com>
8 years ago
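The three-way semantics of the range parameter (-1 = unlimited, 0 = branch tips only, n = the n most recent commits) can be sketched in a self-contained way. This is hypothetical illustration code, not JGit's actual bitmap selection logic; commits are modeled as strings ordered newest-first.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the "bitmapCommitRange" semantics described in
// the commit message (not JGit's real selection code).
public class BitmapRange {
    static List<String> candidates(List<String> newestFirst,
            Set<String> tips, int bitmapCommitRange) {
        if (bitmapCommitRange < 0) {
            return newestFirst; // -1: no limit, consider every commit
        }
        if (bitmapCommitRange == 0) {
            List<String> only = new ArrayList<>(); // 0: branch tips only
            for (String c : newestFirst) {
                if (tips.contains(c)) {
                    only.add(c);
                }
            }
            return only;
        }
        // n > 0: only the n most recent commits enter selection
        return newestFirst.subList(0,
                Math.min(bitmapCommitRange, newestFirst.size()));
    }

    public static void main(String[] args) {
        List<String> commits = Arrays.asList("c5", "c4", "c3", "c2", "c1");
        Set<String> tips = new HashSet<>(Arrays.asList("c5", "c3"));
        System.out.println(candidates(commits, tips, 2));  // [c5, c4]
        System.out.println(candidates(commits, tips, 0));  // [c5, c3]
        System.out.println(candidates(commits, tips, -1).size()); // 5
    }
}
```

Capping the walk at a fixed depth is what bounds memory: selection cost grows with the number of candidates, not with total repository history.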
/*
 * Copyright (C) 2012, Christian Halstrick <christian.halstrick@sap.com>
 * Copyright (C) 2011, Shawn O. Pearce <spearce@spearce.org> and others
 *
 * This program and the accompanying materials are made available under the
 * terms of the Eclipse Distribution License v. 1.0 which is available at
 * https://www.eclipse.org/org/documents/edl-v10.php.
 *
 * SPDX-License-Identifier: BSD-3-Clause
 */
package org.eclipse.jgit.internal.storage.file;

import static org.eclipse.jgit.internal.storage.pack.PackExt.BITMAP_INDEX;
import static org.eclipse.jgit.internal.storage.pack.PackExt.INDEX;
import static org.eclipse.jgit.internal.storage.pack.PackExt.PACK;
import static org.eclipse.jgit.internal.storage.pack.PackExt.KEEP;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.text.MessageFormat;
import java.text.ParseException;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeMap;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.eclipse.jgit.annotations.NonNull;
import org.eclipse.jgit.dircache.DirCacheIterator;
import org.eclipse.jgit.errors.CancelledException;
import org.eclipse.jgit.errors.CorruptObjectException;
import org.eclipse.jgit.errors.IncorrectObjectTypeException;
import org.eclipse.jgit.errors.MissingObjectException;
import org.eclipse.jgit.errors.NoWorkTreeException;
import org.eclipse.jgit.internal.JGitText;
import org.eclipse.jgit.internal.storage.pack.PackExt;
import org.eclipse.jgit.internal.storage.pack.PackWriter;
import org.eclipse.jgit.lib.ConfigConstants;
import org.eclipse.jgit.lib.Constants;
import org.eclipse.jgit.lib.FileMode;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectIdSet;
import org.eclipse.jgit.lib.ObjectLoader;
import org.eclipse.jgit.lib.ObjectReader;
import org.eclipse.jgit.lib.ProgressMonitor;
import org.eclipse.jgit.lib.Ref;
import org.eclipse.jgit.lib.Ref.Storage;
import org.eclipse.jgit.lib.RefDatabase;
import org.eclipse.jgit.lib.ReflogEntry;
import org.eclipse.jgit.lib.ReflogReader;
import org.eclipse.jgit.lib.internal.WorkQueue;
import org.eclipse.jgit.revwalk.ObjectWalk;
import org.eclipse.jgit.revwalk.RevObject;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.storage.pack.PackConfig;
import org.eclipse.jgit.treewalk.TreeWalk;
import org.eclipse.jgit.treewalk.filter.TreeFilter;
import org.eclipse.jgit.util.FileUtils;
import org.eclipse.jgit.util.GitDateParser;
import org.eclipse.jgit.util.SystemReader;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * A garbage collector for git
 * {@link org.eclipse.jgit.internal.storage.file.FileRepository}. Instances of
 * this class are not thread-safe. Don't use the same instance from multiple
 * threads.
 *
 * This class started as a copy of DfsGarbageCollector from Shawn O. Pearce
 * adapted to FileRepositories.
 */
public class GC {
    private static final Logger LOG = LoggerFactory
            .getLogger(GC.class);

    private static final String PRUNE_EXPIRE_DEFAULT = "2.weeks.ago"; //$NON-NLS-1$

    private static final String PRUNE_PACK_EXPIRE_DEFAULT = "1.hour.ago"; //$NON-NLS-1$

    private static final Pattern PATTERN_LOOSE_OBJECT = Pattern
            .compile("[0-9a-fA-F]{38}"); //$NON-NLS-1$

    private static final String PACK_EXT = "." + PackExt.PACK.getExtension();//$NON-NLS-1$

    private static final String BITMAP_EXT = "." //$NON-NLS-1$
            + PackExt.BITMAP_INDEX.getExtension();

    private static final String INDEX_EXT = "." + PackExt.INDEX.getExtension(); //$NON-NLS-1$

    private static final String KEEP_EXT = "." + PackExt.KEEP.getExtension(); //$NON-NLS-1$

    private static final int DEFAULT_AUTOPACKLIMIT = 50;

    private static final int DEFAULT_AUTOLIMIT = 6700;

    private static volatile ExecutorService executor;

    /**
     * Set the executor for running auto-gc in the background. If no executor is
     * set JGit's own WorkQueue will be used.
     *
     * @param e
     *            the executor to be used for running auto-gc
     */
    public static void setExecutor(ExecutorService e) {
        executor = e;
    }

    private final FileRepository repo;

    private ProgressMonitor pm;

    private long expireAgeMillis = -1;

    private Date expire;

    private long packExpireAgeMillis = -1;

    private Date packExpire;

    private PackConfig pconfig;

    /**
     * the refs which existed during the last call to {@link #repack()}. This is
     * needed during {@link #prune(Set)} where we can optimize by looking at the
     * difference between the current refs and the refs which existed during
     * last {@link #repack()}.
     */
    private Collection<Ref> lastPackedRefs;

    /**
     * Holds the starting time of the last repack() execution. This is needed in
     * prune() to inspect only those reflog entries which have been added since
     * last repack().
     */
    private long lastRepackTime;

    /**
     * Whether gc should do automatic housekeeping
     */
    private boolean automatic;

    /**
     * Whether to run gc in a background thread
     */
    private boolean background;

    /**
     * Creates a new garbage collector with default values. An expirationTime of
     * two weeks and <code>null</code> as progress monitor will be used.
     *
     * @param repo
     *            the repo to work on
     */
    public GC(FileRepository repo) {
        this.repo = repo;
        this.pconfig = new PackConfig(repo);
        this.pm = NullProgressMonitor.INSTANCE;
    }
    /**
     * Runs a garbage collector on a
     * {@link org.eclipse.jgit.internal.storage.file.FileRepository}. It will
     * <ul>
     * <li>pack loose references into packed-refs</li>
     * <li>repack all reachable objects into new pack files and delete the old
     * pack files</li>
     * <li>prune all loose objects which are now reachable by packs</li>
     * </ul>
     *
     * If {@link #setAuto(boolean)} was set to {@code true} {@code gc} will
     * first check whether any housekeeping is required; if not, it exits
     * without performing any work.
     *
     * If {@link #setBackground(boolean)} was set to {@code true}
     * {@code collectGarbage} will start the gc in the background, and then
     * return immediately. In this case, errors will not be reported except in
     * gc.log.
     *
     * @return the collection of
     *         {@link org.eclipse.jgit.internal.storage.file.Pack}'s which
     *         are newly created
     * @throws java.io.IOException
     * @throws java.text.ParseException
     *             If the configuration parameter "gc.pruneexpire" couldn't be
     *             parsed
     */
    // TODO(ms): change signature and return Future<Collection<Pack>>
    @SuppressWarnings("FutureReturnValueIgnored")
    public Collection<Pack> gc() throws IOException, ParseException {
        if (!background) {
            return doGc();
        }
        final GcLog gcLog = new GcLog(repo);
        if (!gcLog.lock()) {
            // there is already a background gc running
            return Collections.emptyList();
        }

        Callable<Collection<Pack>> gcTask = () -> {
            try {
                Collection<Pack> newPacks = doGc();
                if (automatic && tooManyLooseObjects()) {
                    String message = JGitText.get().gcTooManyUnpruned;
                    gcLog.write(message);
                    gcLog.commit();
                }
                return newPacks;
            } catch (IOException | ParseException e) {
                try {
                    gcLog.write(e.getMessage());
                    StringWriter sw = new StringWriter();
                    e.printStackTrace(new PrintWriter(sw));
                    gcLog.write(sw.toString());
                    gcLog.commit();
                } catch (IOException e2) {
                    e2.addSuppressed(e);
                    LOG.error(e2.getMessage(), e2);
                }
            } finally {
                gcLog.unlock();
            }
            return Collections.emptyList();
        };
        // TODO(ms): change signature and return the Future
        executor().submit(gcTask);
        return Collections.emptyList();
    }

    private ExecutorService executor() {
        return (executor != null) ? executor : WorkQueue.getExecutor();
    }

    private Collection<Pack> doGc() throws IOException, ParseException {
        if (automatic && !needGc()) {
            return Collections.emptyList();
        }
        pm.start(6 /* tasks */);
        packRefs();
        // TODO: implement reflog_expire(pm, repo);
        Collection<Pack> newPacks = repack();
        prune(Collections.emptySet());
        // TODO: implement rerere_gc(pm);
        return newPacks;
    }
    /**
     * Loosen objects in a pack file which are not also in the newly-created
     * pack files.
     *
     * @param inserter
     * @param reader
     * @param pack
     * @param existing
     * @throws IOException
     */
    private void loosen(ObjectDirectoryInserter inserter, ObjectReader reader,
            Pack pack, HashSet<ObjectId> existing) throws IOException {
        for (PackIndex.MutableEntry entry : pack) {
            ObjectId oid = entry.toObjectId();
            if (existing.contains(oid)) {
                continue;
            }
            existing.add(oid);
            ObjectLoader loader = reader.open(oid);
            inserter.insert(loader.getType(),
                    loader.getSize(),
                    loader.openStream(),
                    true /* create this object even though it's a duplicate */);
        }
    }

    /**
     * Delete old pack files. What is 'old' is defined by specifying a set of
     * old pack files and a set of new pack files. Each pack file contained in
     * old pack files but not contained in new pack files will be deleted. If
     * preserveOldPacks is set, keep a copy of the pack file in the preserve
     * directory. If an expirationDate is set then pack files which are younger
     * than the expirationDate will not be deleted nor preserved.
     * <p>
     * If we're not immediately expiring loose objects, loosen any objects
     * in the old pack files which aren't in the new pack files.
     *
     * @param oldPacks
     * @param newPacks
     * @throws ParseException
     * @throws IOException
     */
    private void deleteOldPacks(Collection<Pack> oldPacks,
            Collection<Pack> newPacks) throws ParseException, IOException {
        HashSet<ObjectId> ids = new HashSet<>();
        for (Pack pack : newPacks) {
            for (PackIndex.MutableEntry entry : pack) {
                ids.add(entry.toObjectId());
            }
        }
        ObjectReader reader = repo.newObjectReader();
        ObjectDirectory dir = repo.getObjectDatabase();
        ObjectDirectoryInserter inserter = dir.newInserter();
        boolean shouldLoosen = !"now".equals(getPruneExpireStr()) && //$NON-NLS-1$
                getExpireDate() < Long.MAX_VALUE;

        prunePreserved();
        long packExpireDate = getPackExpireDate();
        oldPackLoop: for (Pack oldPack : oldPacks) {
            checkCancelled();
            String oldName = oldPack.getPackName();
            // check whether an old pack file is also among the list of new
            // pack files. Then we must not delete it.
            for (Pack newPack : newPacks)
                if (oldName.equals(newPack.getPackName()))
                    continue oldPackLoop;

            if (!oldPack.shouldBeKept()
                    && repo.getFS()
                            .lastModifiedInstant(oldPack.getPackFile())
                            .toEpochMilli() < packExpireDate) {
                if (shouldLoosen) {
                    loosen(inserter, reader, oldPack, ids);
                }
                oldPack.close();
                prunePack(oldPack.getPackFile());
            }
        }

        // close the complete object database. That's my only chance to force
        // rescanning and to detect that certain pack files are now deleted.
        repo.getObjectDatabase().close();
    }
    /**
     * Deletes old pack file, unless 'preserve-oldpacks' is set, in which case
     * it moves the pack file to the preserved directory
     *
     * @param packFile
     * @param deleteOptions
     * @throws IOException
     */
    private void removeOldPack(PackFile packFile, int deleteOptions)
            throws IOException {
        if (pconfig.isPreserveOldPacks()) {
            File oldPackDir = repo.getObjectDatabase().getPreservedDirectory();
            FileUtils.mkdir(oldPackDir, true);

            PackFile oldPackFile = packFile
                    .createPreservedForDirectory(oldPackDir);
            FileUtils.rename(packFile, oldPackFile);
        } else {
            FileUtils.delete(packFile, deleteOptions);
        }
    }

    /**
     * Delete the preserved directory including all pack files within
     */
    private void prunePreserved() {
        if (pconfig.isPrunePreserved()) {
            try {
                FileUtils.delete(repo.getObjectDatabase().getPreservedDirectory(),
                        FileUtils.RECURSIVE | FileUtils.RETRY | FileUtils.SKIP_MISSING);
            } catch (IOException e) {
                // Deletion of the preserved pack files failed. Silently return.
            }
        }
    }

    /**
     * Delete files associated with a single pack file. First try to delete the
     * ".pack" file because on some platforms the ".pack" file may be locked and
     * can't be deleted. In such a case it is better to detect this early and
     * give up on deleting files for this packfile. Otherwise we may delete the
     * ".index" file and when failing to delete the ".pack" file we are left
     * with a ".pack" file without a ".index" file.
     *
     * @param packFile
     */
    private void prunePack(PackFile packFile) {
        try {
            // Delete the .pack file first and if this fails give up on deleting
            // the other files
            int deleteOptions = FileUtils.RETRY | FileUtils.SKIP_MISSING;
            removeOldPack(packFile.create(PackExt.PACK), deleteOptions);

            // The .pack file has been deleted. Delete as many of the other
            // files as you can.
            deleteOptions |= FileUtils.IGNORE_ERRORS;
            for (PackExt ext : PackExt.values()) {
                if (!PackExt.PACK.equals(ext)) {
                    removeOldPack(packFile.create(ext), deleteOptions);
                }
            }
        } catch (IOException e) {
            // Deletion of the .pack file failed. Silently return.
        }
    }
    /**
     * Like "git prune-packed" this method tries to prune all loose objects
     * which can be found in packs. If certain objects can't be pruned (e.g.
     * because the filesystem delete operation fails) this is silently ignored.
     *
     * @throws java.io.IOException
     */
    public void prunePacked() throws IOException {
        ObjectDirectory objdb = repo.getObjectDatabase();
        Collection<Pack> packs = objdb.getPacks();
        File objects = repo.getObjectsDirectory();
        String[] fanout = objects.list();

        if (fanout != null && fanout.length > 0) {
            pm.beginTask(JGitText.get().pruneLoosePackedObjects, fanout.length);
            try {
                for (String d : fanout) {
                    checkCancelled();
                    pm.update(1);
                    if (d.length() != 2)
                        continue;
                    String[] entries = new File(objects, d).list();
                    if (entries == null)
                        continue;
                    for (String e : entries) {
                        checkCancelled();
                        if (e.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
                            continue;
                        ObjectId id;
                        try {
                            id = ObjectId.fromString(d + e);
                        } catch (IllegalArgumentException notAnObject) {
                            // ignoring the file that does not represent loose
                            // object
                            continue;
                        }
                        boolean found = false;
                        for (Pack p : packs) {
                            checkCancelled();
                            if (p.hasObject(id)) {
                                found = true;
                                break;
                            }
                        }
                        if (found)
                            FileUtils.delete(objdb.fileFor(id), FileUtils.RETRY
                                    | FileUtils.SKIP_MISSING
                                    | FileUtils.IGNORE_ERRORS);
                    }
                }
            } finally {
                pm.endTask();
            }
        }
    }
	/**
	 * Like "git prune" this method tries to prune all loose objects which are
	 * unreferenced. If certain objects can't be pruned (e.g. because the
	 * filesystem delete operation fails) this is silently ignored.
	 *
	 * @param objectsToKeep
	 *            a set of objects which should explicitly not be pruned
	 * @throws java.io.IOException
	 * @throws java.text.ParseException
	 *             If the configuration parameter "gc.pruneexpire" couldn't be
	 *             parsed
	 */
	public void prune(Set<ObjectId> objectsToKeep) throws IOException,
			ParseException {
		long expireDate = getExpireDate();

		// Collect all loose objects which are old enough, not referenced from
		// the index and not in objectsToKeep
		Map<ObjectId, File> deletionCandidates = new HashMap<>();
		Set<ObjectId> indexObjects = null;
		File objects = repo.getObjectsDirectory();
		String[] fanout = objects.list();
		if (fanout == null || fanout.length == 0) {
			return;
		}
		pm.beginTask(JGitText.get().pruneLooseUnreferencedObjects,
				fanout.length);
		try {
			for (String d : fanout) {
				checkCancelled();
				pm.update(1);
				if (d.length() != 2)
					continue;
				File dir = new File(objects, d);
				File[] entries = dir.listFiles();
				if (entries == null || entries.length == 0) {
					FileUtils.delete(dir, FileUtils.IGNORE_ERRORS);
					continue;
				}
				for (File f : entries) {
					checkCancelled();
					String fName = f.getName();
					if (fName.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
						continue;
					if (repo.getFS().lastModifiedInstant(f)
							.toEpochMilli() >= expireDate) {
						continue;
					}
					try {
						ObjectId id = ObjectId.fromString(d + fName);
						if (objectsToKeep.contains(id))
							continue;
						if (indexObjects == null)
							indexObjects = listNonHEADIndexObjects();
						if (indexObjects.contains(id))
							continue;
						deletionCandidates.put(id, f);
					} catch (IllegalArgumentException notAnObject) {
						// ignoring the file that does not represent loose
						// object
					}
				}
			}
		} finally {
			pm.endTask();
		}

		if (deletionCandidates.isEmpty()) {
			return;
		}

		checkCancelled();

		// From the set of current refs remove all those which have been
		// handled during last repack(). Only those refs will survive which
		// have been added or modified since the last repack. Only these can
		// save existing loose refs from being pruned.
		Collection<Ref> newRefs;
		if (lastPackedRefs == null || lastPackedRefs.isEmpty())
			newRefs = getAllRefs();
		else {
			Map<String, Ref> last = new HashMap<>();
			for (Ref r : lastPackedRefs) {
				last.put(r.getName(), r);
			}
			newRefs = new ArrayList<>();
			for (Ref r : getAllRefs()) {
				Ref old = last.get(r.getName());
				if (!equals(r, old)) {
					newRefs.add(r);
				}
			}
		}

		if (!newRefs.isEmpty()) {
			// There are new/modified refs! Check which loose objects are now
			// referenced by these modified refs (or their reflog entries).
			// Remove these loose objects from the deletionCandidates. When
			// the last candidate is removed leave this method.
			ObjectWalk w = new ObjectWalk(repo);
			try {
				for (Ref cr : newRefs) {
					checkCancelled();
					w.markStart(w.parseAny(cr.getObjectId()));
				}
				if (lastPackedRefs != null)
					for (Ref lpr : lastPackedRefs) {
						w.markUninteresting(w.parseAny(lpr.getObjectId()));
					}
				removeReferenced(deletionCandidates, w);
			} finally {
				w.dispose();
			}
		}

		if (deletionCandidates.isEmpty())
			return;

		// Since we have not left the method yet there are still
		// deletionCandidates. Last chance for these objects not to be pruned
		// is that they are referenced by reflog entries. Even refs which
		// currently point to the same object as during last repack() may have
		// additional reflog entries not handled during last repack()
		ObjectWalk w = new ObjectWalk(repo);
		try {
			for (Ref ar : getAllRefs())
				for (ObjectId id : listRefLogObjects(ar, lastRepackTime)) {
					checkCancelled();
					w.markStart(w.parseAny(id));
				}
			if (lastPackedRefs != null)
				for (Ref lpr : lastPackedRefs) {
					checkCancelled();
					w.markUninteresting(w.parseAny(lpr.getObjectId()));
				}
			removeReferenced(deletionCandidates, w);
		} finally {
			w.dispose();
		}

		if (deletionCandidates.isEmpty())
			return;

		checkCancelled();

		// Delete all candidates which have survived: these are unreferenced
		// loose objects. Make a last check, though, to avoid deleting objects
		// that could have been referenced while the candidates list was being
		// built (by an incoming push, for example).
		Set<File> touchedFanout = new HashSet<>();
		for (File f : deletionCandidates.values()) {
			if (f.lastModified() < expireDate) {
				f.delete();
				touchedFanout.add(f.getParentFile());
			}
		}

		for (File f : touchedFanout) {
			FileUtils.delete(f,
					FileUtils.EMPTY_DIRECTORIES_ONLY | FileUtils.IGNORE_ERRORS);
		}

		repo.getObjectDatabase().close();
	}
	private long getExpireDate() throws ParseException {
		long expireDate = Long.MAX_VALUE;
		if (expire == null && expireAgeMillis == -1) {
			String pruneExpireStr = getPruneExpireStr();
			if (pruneExpireStr == null)
				pruneExpireStr = PRUNE_EXPIRE_DEFAULT;
			expire = GitDateParser.parse(pruneExpireStr, null,
					SystemReader.getInstance().getLocale());
			expireAgeMillis = -1;
		}
		if (expire != null)
			expireDate = expire.getTime();
		if (expireAgeMillis != -1)
			expireDate = System.currentTimeMillis() - expireAgeMillis;
		return expireDate;
	}

	private String getPruneExpireStr() {
		return repo.getConfig().getString(
				ConfigConstants.CONFIG_GC_SECTION, null,
				ConfigConstants.CONFIG_KEY_PRUNEEXPIRE);
	}

	private long getPackExpireDate() throws ParseException {
		long packExpireDate = Long.MAX_VALUE;
		if (packExpire == null && packExpireAgeMillis == -1) {
			String prunePackExpireStr = repo.getConfig().getString(
					ConfigConstants.CONFIG_GC_SECTION, null,
					ConfigConstants.CONFIG_KEY_PRUNEPACKEXPIRE);
			if (prunePackExpireStr == null)
				prunePackExpireStr = PRUNE_PACK_EXPIRE_DEFAULT;
			packExpire = GitDateParser.parse(prunePackExpireStr, null,
					SystemReader.getInstance().getLocale());
			packExpireAgeMillis = -1;
		}
		if (packExpire != null)
			packExpireDate = packExpire.getTime();
		if (packExpireAgeMillis != -1)
			packExpireDate = System.currentTimeMillis() - packExpireAgeMillis;
		return packExpireDate;
	}
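// The two expiry paths above (an absolute expiry date vs. a relative age in
// milliseconds, with the age taking precedence when set) can be sketched as a
// pure function. The class name below is made up for the example:

```java
import java.util.Date;

public class ExpireDemo {
	// Mirrors the precedence in getExpireDate(): start at Long.MAX_VALUE
	// (nothing expires), let an absolute date override that, and let a
	// relative age in milliseconds override the absolute date.
	static long expireDate(Date expire, long expireAgeMillis, long now) {
		long expireDate = Long.MAX_VALUE;
		if (expire != null) {
			expireDate = expire.getTime();
		}
		if (expireAgeMillis != -1) {
			expireDate = now - expireAgeMillis;
		}
		return expireDate;
	}

	public static void main(String[] args) {
		long now = 1_000_000L;
		// Only an age is set: objects older than now - age are candidates.
		if (expireDate(null, 600_000L, now) != 400_000L) {
			throw new AssertionError();
		}
		// Neither is set: threshold stays at MAX_VALUE, nothing expires.
		if (expireDate(null, -1, now) != Long.MAX_VALUE) {
			throw new AssertionError();
		}
		// An age overrides an absolute date.
		if (expireDate(new Date(5L), 600_000L, now) != 400_000L) {
			throw new AssertionError();
		}
	}
}
```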
	/**
	 * Remove all entries from a map whose key is the id of an object
	 * referenced by the given ObjectWalk.
	 *
	 * @param id2File
	 * @param w
	 * @throws MissingObjectException
	 * @throws IncorrectObjectTypeException
	 * @throws IOException
	 */
	private void removeReferenced(Map<ObjectId, File> id2File,
			ObjectWalk w) throws MissingObjectException,
			IncorrectObjectTypeException, IOException {
		RevObject ro = w.next();
		while (ro != null) {
			checkCancelled();
			if (id2File.remove(ro.getId()) != null && id2File.isEmpty()) {
				return;
			}
			ro = w.next();
		}
		ro = w.nextObject();
		while (ro != null) {
			checkCancelled();
			if (id2File.remove(ro.getId()) != null && id2File.isEmpty()) {
				return;
			}
			ro = w.nextObject();
		}
	}

	private static boolean equals(Ref r1, Ref r2) {
		if (r1 == null || r2 == null) {
			return false;
		}
		if (r1.isSymbolic()) {
			return r2.isSymbolic() && r1.getTarget().getName()
					.equals(r2.getTarget().getName());
		}
		return !r2.isSymbolic()
				&& Objects.equals(r1.getObjectId(), r2.getObjectId());
	}
	/**
	 * Pack ref storage. For a RefDirectory database, this packs all
	 * non-symbolic, loose refs into packed-refs. For Reftable, all of the data
	 * is compacted into a single table.
	 *
	 * @throws java.io.IOException
	 */
	public void packRefs() throws IOException {
		RefDatabase refDb = repo.getRefDatabase();
		if (refDb instanceof FileReftableDatabase) {
			// TODO: abstract this more cleanly.
			pm.beginTask(JGitText.get().packRefs, 1);
			try {
				((FileReftableDatabase) refDb).compactFully();
			} finally {
				pm.endTask();
			}
			return;
		}

		Collection<Ref> refs = refDb.getRefsByPrefix(Constants.R_REFS);
		List<String> refsToBePacked = new ArrayList<>(refs.size());
		pm.beginTask(JGitText.get().packRefs, refs.size());
		try {
			for (Ref ref : refs) {
				checkCancelled();
				if (!ref.isSymbolic() && ref.getStorage().isLoose())
					refsToBePacked.add(ref.getName());
				pm.update(1);
			}
			((RefDirectory) repo.getRefDatabase()).pack(refsToBePacked);
		} finally {
			pm.endTask();
		}
	}
	/**
	 * Packs all objects which are reachable from any of the heads into one
	 * pack file. Additionally all objects which are not reachable from any
	 * head but which are reachable from any of the other refs (e.g. tags),
	 * special refs (e.g. FETCH_HEAD) or index are packed into a separate pack
	 * file. Objects included in pack files which have a .keep file associated
	 * are never repacked. All old pack files which existed before are deleted.
	 *
	 * @return a collection of the newly created pack files
	 * @throws java.io.IOException
	 *             when during reading of refs, index, packfiles, objects,
	 *             reflog-entries or during writing to the packfiles
	 *             {@link java.io.IOException} occurs
	 */
	public Collection<Pack> repack() throws IOException {
		Collection<Pack> toBeDeleted = repo.getObjectDatabase().getPacks();

		long time = System.currentTimeMillis();
		Collection<Ref> refsBefore = getAllRefs();

		Set<ObjectId> allHeadsAndTags = new HashSet<>();
		Set<ObjectId> allHeads = new HashSet<>();
		Set<ObjectId> allTags = new HashSet<>();
		Set<ObjectId> nonHeads = new HashSet<>();
		Set<ObjectId> txnHeads = new HashSet<>();
		Set<ObjectId> tagTargets = new HashSet<>();
		Set<ObjectId> indexObjects = listNonHEADIndexObjects();

		for (Ref ref : refsBefore) {
			checkCancelled();
			nonHeads.addAll(listRefLogObjects(ref, 0));
			if (ref.isSymbolic() || ref.getObjectId() == null) {
				continue;
			}
			if (isHead(ref)) {
				allHeads.add(ref.getObjectId());
			} else if (isTag(ref)) {
				allTags.add(ref.getObjectId());
			} else {
				nonHeads.add(ref.getObjectId());
			}
			if (ref.getPeeledObjectId() != null) {
				tagTargets.add(ref.getPeeledObjectId());
			}
		}

		List<ObjectIdSet> excluded = new LinkedList<>();
		for (Pack p : repo.getObjectDatabase().getPacks()) {
			checkCancelled();
			if (p.shouldBeKept())
				excluded.add(p.getIndex());
		}

		// Don't exclude tags that are also branch tips
		allTags.removeAll(allHeads);
		allHeadsAndTags.addAll(allHeads);
		allHeadsAndTags.addAll(allTags);

		// Hoist all branch tips and tags earlier in the pack file
		tagTargets.addAll(allHeadsAndTags);
		nonHeads.addAll(indexObjects);

		// Combine the GC_REST objects into the GC pack if requested
		if (pconfig.getSinglePack()) {
			allHeadsAndTags.addAll(nonHeads);
			nonHeads.clear();
		}

		List<Pack> ret = new ArrayList<>(2);
		Pack heads = null;
		if (!allHeadsAndTags.isEmpty()) {
			heads = writePack(allHeadsAndTags, PackWriter.NONE, allTags,
					tagTargets, excluded);
			if (heads != null) {
				ret.add(heads);
				excluded.add(0, heads.getIndex());
			}
		}
		if (!nonHeads.isEmpty()) {
			Pack rest = writePack(nonHeads, allHeadsAndTags, PackWriter.NONE,
					tagTargets, excluded);
			if (rest != null)
				ret.add(rest);
		}
		if (!txnHeads.isEmpty()) {
			Pack txn = writePack(txnHeads, PackWriter.NONE, PackWriter.NONE,
					null, excluded);
			if (txn != null)
				ret.add(txn);
		}
		try {
			deleteOldPacks(toBeDeleted, ret);
		} catch (ParseException e) {
			// TODO: the exception has to be wrapped into an IOException
			// because throwing the ParseException directly would break the
			// API, instead we should throw a ConfigInvalidException
			throw new IOException(e);
		}
		prunePacked();
		if (repo.getRefDatabase() instanceof RefDirectory) {
			// TODO: abstract this more cleanly.
			deleteEmptyRefsFolders();
		}
		deleteOrphans();
		deleteTempPacksIdx();

		lastPackedRefs = refsBefore;
		lastRepackTime = time;
		return ret;
	}
	private static boolean isHead(Ref ref) {
		return ref.getName().startsWith(Constants.R_HEADS);
	}

	private static boolean isTag(Ref ref) {
		return ref.getName().startsWith(Constants.R_TAGS);
	}

	private void deleteEmptyRefsFolders() throws IOException {
		Path refs = repo.getDirectory().toPath().resolve(Constants.R_REFS);
		// Avoid deleting a folder that was created after the threshold so
		// that concurrent operations trying to create a reference are not
		// impacted
		Instant threshold = Instant.now().minus(30, ChronoUnit.SECONDS);
		try (Stream<Path> entries = Files.list(refs)
				.filter(Files::isDirectory)) {
			Iterator<Path> iterator = entries.iterator();
			while (iterator.hasNext()) {
				try (Stream<Path> s = Files.list(iterator.next())) {
					s.filter(path -> canBeSafelyDeleted(path, threshold))
							.forEach(this::deleteDir);
				}
			}
		}
	}

	private boolean canBeSafelyDeleted(Path path, Instant threshold) {
		try {
			return Files.getLastModifiedTime(path).toInstant()
					.isBefore(threshold);
		} catch (IOException e) {
			LOG.warn(MessageFormat.format(
					JGitText.get().cannotAccessLastModifiedForSafeDeletion,
					path), e);
			return false;
		}
	}

	private void deleteDir(Path dir) {
		try (Stream<Path> dirs = Files.walk(dir)) {
			dirs.filter(this::isDirectory).sorted(Comparator.reverseOrder())
					.forEach(this::delete);
		} catch (IOException e) {
			LOG.error(e.getMessage(), e);
		}
	}

	private boolean isDirectory(Path p) {
		return p.toFile().isDirectory();
	}

	private void delete(Path d) {
		try {
			Files.delete(d);
		} catch (DirectoryNotEmptyException e) {
			// Don't log
		} catch (IOException e) {
			LOG.error(MessageFormat.format(JGitText.get().cannotDeleteFile, d),
					e);
		}
	}
	/**
	 * Deletes orphans.
	 * <p>
	 * A file is considered an orphan if it is either a bitmap or an index file
	 * and its corresponding pack file is missing in the list. Files guarded by
	 * a pack lock (a ".keep" file with the same base name) are not treated as
	 * orphans, so a PackInserter concurrently renaming its temporary pack and
	 * index files into place is not interfered with.
	 * </p>
	 */
	private void deleteOrphans() {
		Path packDir = repo.getObjectDatabase().getPackDirectory().toPath();
		List<String> fileNames = null;
		try (Stream<Path> files = Files.list(packDir)) {
			fileNames = files.map(path -> path.getFileName().toString())
					.filter(name -> (name.endsWith(PACK_EXT)
							|| name.endsWith(BITMAP_EXT)
							|| name.endsWith(INDEX_EXT)
							|| name.endsWith(KEEP_EXT)))
					// sort files with same base name in the order:
					// .pack, .keep, .index, .bitmap to avoid look ahead
					.sorted(Collections.reverseOrder())
					.collect(Collectors.toList());
		} catch (IOException e) {
			LOG.error(e.getMessage(), e);
			return;
		}
		if (fileNames == null) {
			return;
		}

		String latestId = null;
		for (String n : fileNames) {
			PackFile pf = new PackFile(packDir.toFile(), n);
			PackExt ext = pf.getPackExt();
			if (ext.equals(PACK) || ext.equals(KEEP)) {
				latestId = pf.getId();
			}
			if (latestId == null || !pf.getId().equals(latestId)) {
				// no pack or keep for this id
				try {
					FileUtils.delete(pf,
							FileUtils.RETRY | FileUtils.SKIP_MISSING);
					LOG.warn(JGitText.get().deletedOrphanInPackDir, pf);
				} catch (IOException e) {
					LOG.error(e.getMessage(), e);
				}
			}
		}
	}
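// The reverse-sorted scan above can be illustrated with plain strings. In
// reverse lexicographic order ".pack" and ".keep" sort before ".idx" and
// ".bitmap" for the same base name, so a single forward scan suffices. The
// class, helper method and sample file names below are made up for the
// example and are not JGit API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrphanScanDemo {
	// Returns the names a deleteOrphans()-style scan would treat as orphans.
	// A ".keep" protects companion files just like a ".pack" does, which is
	// the guarantee the pack lock relies on during PackInserter.flush().
	static List<String> findOrphans(List<String> names) {
		List<String> sorted = new ArrayList<>(names);
		// reverse order puts ".pack"/".keep" before ".idx"/".bitmap"
		// for the same base name, so no look-ahead is needed
		sorted.sort(Collections.reverseOrder());
		List<String> orphans = new ArrayList<>();
		String latestId = null;
		for (String n : sorted) {
			String base = n.substring(0, n.lastIndexOf('.'));
			if (n.endsWith(".pack") || n.endsWith(".keep")) {
				latestId = base;
			} else if (!base.equals(latestId)) {
				orphans.add(n); // no pack or keep for this id
			}
		}
		return orphans;
	}

	public static void main(String[] args) {
		// "pack-b.idx" is protected by its .keep even though pack-b.pack has
		// not been renamed into place yet (the PackInserter race); only the
		// genuinely unguarded "pack-c.idx" is reported.
		List<String> names = List.of(
				"pack-a.pack", "pack-a.idx",
				"pack-b.keep", "pack-b.idx",
				"pack-c.idx");
		System.out.println(findOrphans(names)); // [pack-c.idx]
	}
}
```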
	private void deleteTempPacksIdx() {
		Path packDir = repo.getObjectDatabase().getPackDirectory().toPath();
		Instant threshold = Instant.now().minus(1, ChronoUnit.DAYS);
		if (!Files.exists(packDir)) {
			return;
		}
		try (DirectoryStream<Path> stream =
				Files.newDirectoryStream(packDir, "gc_*_tmp")) { //$NON-NLS-1$
			stream.forEach(t -> {
				try {
					Instant lastModified = Files.getLastModifiedTime(t)
							.toInstant();
					if (lastModified.isBefore(threshold)) {
						Files.deleteIfExists(t);
					}
				} catch (IOException e) {
					LOG.error(e.getMessage(), e);
				}
			});
		} catch (IOException e) {
			LOG.error(e.getMessage(), e);
		}
	}
	/**
	 * @param ref
	 *            the ref whose log should be inspected
	 * @param minTime
	 *            only reflog entries not older than this time are processed
	 * @return the {@link ObjectId}s contained in the reflog
	 * @throws IOException
	 */
	private Set<ObjectId> listRefLogObjects(Ref ref, long minTime)
			throws IOException {
		ReflogReader reflogReader = repo.getReflogReader(ref.getName());
		if (reflogReader == null) {
			return Collections.emptySet();
		}
		List<ReflogEntry> rlEntries = reflogReader.getReverseEntries();
		if (rlEntries == null || rlEntries.isEmpty())
			return Collections.emptySet();
		Set<ObjectId> ret = new HashSet<>();
		for (ReflogEntry e : rlEntries) {
			if (e.getWho().getWhen().getTime() < minTime)
				break;
			ObjectId newId = e.getNewId();
			if (newId != null && !ObjectId.zeroId().equals(newId))
				ret.add(newId);
			ObjectId oldId = e.getOldId();
			if (oldId != null && !ObjectId.zeroId().equals(oldId))
				ret.add(oldId);
		}
		return ret;
	}
	/**
	 * Returns a collection of all refs and additional refs.
	 *
	 * Additional refs which don't start with "refs/" are not returned because
	 * they should not save objects from being garbage collected. Examples for
	 * such references are ORIG_HEAD, MERGE_HEAD, FETCH_HEAD and
	 * CHERRY_PICK_HEAD.
	 *
	 * @return a collection of refs pointing to live objects.
	 * @throws IOException
	 */
	private Collection<Ref> getAllRefs() throws IOException {
		RefDatabase refdb = repo.getRefDatabase();
		Collection<Ref> refs = refdb.getRefs();
		List<Ref> addl = refdb.getAdditionalRefs();
		if (!addl.isEmpty()) {
			List<Ref> all = new ArrayList<>(refs.size() + addl.size());
			all.addAll(refs);
			// add additional refs which start with refs/
			for (Ref r : addl) {
				checkCancelled();
				if (r.getName().startsWith(Constants.R_REFS)) {
					all.add(r);
				}
			}
			return all;
		}
		return refs;
	}
	/**
	 * Return a list of those objects in the index which differ from what's in
	 * HEAD.
	 *
	 * @return a set of ObjectIds of changed objects in the index
	 * @throws IOException
	 * @throws CorruptObjectException
	 * @throws NoWorkTreeException
	 */
	private Set<ObjectId> listNonHEADIndexObjects()
			throws CorruptObjectException, IOException {
		if (repo.isBare()) {
			return Collections.emptySet();
		}
		try (TreeWalk treeWalk = new TreeWalk(repo)) {
			treeWalk.addTree(new DirCacheIterator(repo.readDirCache()));
			ObjectId headID = repo.resolve(Constants.HEAD);
			if (headID != null) {
				try (RevWalk revWalk = new RevWalk(repo)) {
					treeWalk.addTree(revWalk.parseTree(headID));
				}
			}

			treeWalk.setFilter(TreeFilter.ANY_DIFF);
			treeWalk.setRecursive(true);
			Set<ObjectId> ret = new HashSet<>();

			while (treeWalk.next()) {
				checkCancelled();
				ObjectId objectId = treeWalk.getObjectId(0);
				switch (treeWalk.getRawMode(0) & FileMode.TYPE_MASK) {
				case FileMode.TYPE_MISSING:
				case FileMode.TYPE_GITLINK:
					continue;
				case FileMode.TYPE_TREE:
				case FileMode.TYPE_FILE:
				case FileMode.TYPE_SYMLINK:
					ret.add(objectId);
					continue;
				default:
					throw new IOException(MessageFormat.format(
							JGitText.get().corruptObjectInvalidMode3,
							String.format("%o", //$NON-NLS-1$
									Integer.valueOf(treeWalk.getRawMode(0))),
							(objectId == null) ? "null" : objectId.name(), //$NON-NLS-1$
							treeWalk.getPathString(), //
							repo.getIndexFile()));
				}
			}
			return ret;
		}
	}
	private Pack writePack(@NonNull Set<? extends ObjectId> want,
			@NonNull Set<? extends ObjectId> have, @NonNull Set<ObjectId> tags,
			Set<ObjectId> tagTargets, List<ObjectIdSet> excludeObjects)
			throws IOException {
		checkCancelled();
		File tmpPack = null;
		Map<PackExt, File> tmpExts = new TreeMap<>((o1, o2) -> {
			// INDEX entries must be returned last, so the pack
			// scanner does not pick up the new pack until all the
			// PackExt entries have been written.
			if (o1 == o2) {
				return 0;
			}
			if (o1 == PackExt.INDEX) {
				return 1;
			}
			if (o2 == PackExt.INDEX) {
				return -1;
			}
			return Integer.signum(o1.hashCode() - o2.hashCode());
		});
		try (PackWriter pw = new PackWriter(
				pconfig,
				repo.newObjectReader())) {
			// prepare the PackWriter
			pw.setDeltaBaseAsOffset(true);
			pw.setReuseDeltaCommits(false);
			if (tagTargets != null) {
				pw.setTagTargets(tagTargets);
			}
			if (excludeObjects != null)
				for (ObjectIdSet idx : excludeObjects)
					pw.excludeObjects(idx);
			pw.preparePack(pm, want, have, PackWriter.NONE, tags);
			if (pw.getObjectCount() == 0)
				return null;
			checkCancelled();

			// create temporary files
			ObjectId id = pw.computeName();
			File packdir = repo.getObjectDatabase().getPackDirectory();
			packdir.mkdirs();
			tmpPack = File.createTempFile("gc_", ".pack_tmp", packdir); //$NON-NLS-1$ //$NON-NLS-2$
			final String tmpBase = tmpPack.getName()
					.substring(0, tmpPack.getName().lastIndexOf('.'));
			File tmpIdx = new File(packdir, tmpBase + ".idx_tmp"); //$NON-NLS-1$
			tmpExts.put(INDEX, tmpIdx);
			if (!tmpIdx.createNewFile())
				throw new IOException(MessageFormat.format(
						JGitText.get().cannotCreateIndexfile,
						tmpIdx.getPath()));

			// write the packfile
			try (FileOutputStream fos = new FileOutputStream(tmpPack);
					FileChannel channel = fos.getChannel();
					OutputStream channelStream = Channels
							.newOutputStream(channel)) {
				pw.writePack(pm, pm, channelStream);
				channel.force(true);
			}

			// write the packindex
			try (FileOutputStream fos = new FileOutputStream(tmpIdx);
					FileChannel idxChannel = fos.getChannel();
					OutputStream idxStream = Channels
							.newOutputStream(idxChannel)) {
				pw.writeIndex(idxStream);
				idxChannel.force(true);
			}

			if (pw.prepareBitmapIndex(pm)) {
				File tmpBitmapIdx = new File(packdir, tmpBase + ".bitmap_tmp"); //$NON-NLS-1$
				tmpExts.put(BITMAP_INDEX, tmpBitmapIdx);
				if (!tmpBitmapIdx.createNewFile())
					throw new IOException(MessageFormat.format(
							JGitText.get().cannotCreateIndexfile,
							tmpBitmapIdx.getPath()));
				try (FileOutputStream fos = new FileOutputStream(tmpBitmapIdx);
						FileChannel idxChannel = fos.getChannel();
						OutputStream idxStream = Channels
								.newOutputStream(idxChannel)) {
					pw.writeBitmapIndex(idxStream);
					idxChannel.force(true);
				}
			}

			// rename the temporary files to real files
			File packDir = repo.getObjectDatabase().getPackDirectory();
			PackFile realPack = new PackFile(packDir, id, PackExt.PACK);
			repo.getObjectDatabase().closeAllPackHandles(realPack);
			tmpPack.setReadOnly();
			FileUtils.rename(tmpPack, realPack,
					StandardCopyOption.ATOMIC_MOVE);
			for (Map.Entry<PackExt, File> tmpEntry : tmpExts.entrySet()) {
				File tmpExt = tmpEntry.getValue();
				tmpExt.setReadOnly();

				PackFile realExt = new PackFile(packDir, id,
						tmpEntry.getKey());
				try {
					FileUtils.rename(tmpExt, realExt,
							StandardCopyOption.ATOMIC_MOVE);
				} catch (IOException e) {
					File newExt = new File(realExt.getParentFile(),
							realExt.getName() + ".new"); //$NON-NLS-1$
					try {
						FileUtils.rename(tmpExt, newExt,
								StandardCopyOption.ATOMIC_MOVE);
					} catch (IOException e2) {
						newExt = tmpExt;
						e = e2;
					}
					throw new IOException(MessageFormat.format(
							JGitText.get().panicCantRenameIndexFile, newExt,
							realExt), e);
				}
			}
			boolean interrupted = false;
			try {
				FileSnapshot snapshot = FileSnapshot.save(realPack);
				if (pconfig.doWaitPreventRacyPack(snapshot.size())) {
					snapshot.waitUntilNotRacy();
				}
			} catch (InterruptedException e) {
				interrupted = true;
			}
			try {
				return repo.getObjectDatabase().openPack(realPack);
			} finally {
				if (interrupted) {
					// Re-set interrupted flag
					Thread.currentThread().interrupt();
				}
			}
		} finally {
			if (tmpPack != null && tmpPack.exists())
				tmpPack.delete();
			for (File tmpExt : tmpExts.values()) {
				if (tmpExt.exists())
					tmpExt.delete();
			}
		}
	}
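// The tmpExts comparator above guarantees the ".idx" entry is iterated last,
// so the pack index is renamed into place only after all other companion
// files exist. A standalone sketch of that ordering follows; the class name
// and the Ext enum are stand-ins for the real PackExt, made up for the
// example:

```java
import java.util.Map;
import java.util.TreeMap;

public class IndexLastDemo {
	enum Ext { PACK, BITMAP_INDEX, INDEX } // stand-in for PackExt

	public static void main(String[] args) {
		// Same idea as the tmpExts comparator: INDEX sorts after everything
		// else, so iterating the map visits the index entry last.
		Map<Ext, String> tmp = new TreeMap<>((o1, o2) -> {
			if (o1 == o2) {
				return 0;
			}
			if (o1 == Ext.INDEX) {
				return 1;
			}
			if (o2 == Ext.INDEX) {
				return -1;
			}
			return Integer.signum(o1.hashCode() - o2.hashCode());
		});
		tmp.put(Ext.INDEX, "pack-x.idx");
		tmp.put(Ext.BITMAP_INDEX, "pack-x.bitmap");

		String last = null;
		for (Map.Entry<Ext, String> e : tmp.entrySet()) {
			last = e.getValue(); // remember the final iteration order slot
		}
		if (!"pack-x.idx".equals(last)) {
			throw new AssertionError("INDEX must be iterated last");
		}
		System.out.println("ok");
	}
}
```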
	private void checkCancelled() throws CancelledException {
		if (pm.isCancelled() || Thread.currentThread().isInterrupted()) {
			throw new CancelledException(JGitText.get().operationCanceled);
		}
	}

	/**
	 * A class holding statistical data for a FileRepository regarding how many
	 * objects are stored as loose or packed objects
	 */
	public static class RepoStatistics {
		/**
		 * The number of objects stored in pack files. If the same object is
		 * stored in multiple pack files then it is counted as often as it
		 * occurs in pack files.
		 */
		public long numberOfPackedObjects;

		/**
		 * The number of pack files
		 */
		public long numberOfPackFiles;

		/**
		 * The number of objects stored as loose objects.
		 */
		public long numberOfLooseObjects;

		/**
		 * The sum of the sizes of all files used to persist loose objects.
		 */
		public long sizeOfLooseObjects;

		/**
		 * The sum of the sizes of all pack files.
		 */
		public long sizeOfPackedObjects;

		/**
		 * The number of loose refs.
		 */
		public long numberOfLooseRefs;

		/**
		 * The number of refs stored in pack files.
		 */
		public long numberOfPackedRefs;

		/**
		 * The number of bitmaps in the bitmap indices.
		 */
		public long numberOfBitmaps;

		@Override
		public String toString() {
			final StringBuilder b = new StringBuilder();
			b.append("numberOfPackedObjects=").append(numberOfPackedObjects); //$NON-NLS-1$
			b.append(", numberOfPackFiles=").append(numberOfPackFiles); //$NON-NLS-1$
			b.append(", numberOfLooseObjects=").append(numberOfLooseObjects); //$NON-NLS-1$
			b.append(", numberOfLooseRefs=").append(numberOfLooseRefs); //$NON-NLS-1$
			b.append(", numberOfPackedRefs=").append(numberOfPackedRefs); //$NON-NLS-1$
			b.append(", sizeOfLooseObjects=").append(sizeOfLooseObjects); //$NON-NLS-1$
			b.append(", sizeOfPackedObjects=").append(sizeOfPackedObjects); //$NON-NLS-1$
			b.append(", numberOfBitmaps=").append(numberOfBitmaps); //$NON-NLS-1$
			return b.toString();
		}
	}
  1228. /**
  1229. * Returns information about objects and pack files for a FileRepository.
  1230. *
  1231. * @return information about objects and pack files for a FileRepository
  1232. * @throws java.io.IOException
  1233. */
	public RepoStatistics getStatistics() throws IOException {
		RepoStatistics ret = new RepoStatistics();
		Collection<Pack> packs = repo.getObjectDatabase().getPacks();
		for (Pack p : packs) {
			ret.numberOfPackedObjects += p.getIndex().getObjectCount();
			ret.numberOfPackFiles++;
			ret.sizeOfPackedObjects += p.getPackFile().length();
			if (p.getBitmapIndex() != null)
				ret.numberOfBitmaps += p.getBitmapIndex().getBitmapCount();
		}
		File objDir = repo.getObjectsDirectory();
		String[] fanout = objDir.list();
		if (fanout != null && fanout.length > 0) {
			for (String d : fanout) {
				if (d.length() != 2)
					continue;
				File[] entries = new File(objDir, d).listFiles();
				if (entries == null)
					continue;
				for (File f : entries) {
					if (f.getName()
							.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
						continue;
					ret.numberOfLooseObjects++;
					ret.sizeOfLooseObjects += f.length();
				}
			}
		}
		RefDatabase refDb = repo.getRefDatabase();
		for (Ref r : refDb.getRefs()) {
			Storage storage = r.getStorage();
			if (storage == Storage.LOOSE || storage == Storage.LOOSE_PACKED)
				ret.numberOfLooseRefs++;
			if (storage == Storage.PACKED || storage == Storage.LOOSE_PACKED)
				ret.numberOfPackedRefs++;
		}
		return ret;
	}
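The fanout scan in getStatistics() can be illustrated with a self-contained sketch (the class name FanoutScan and the directory layout built in the usage below are hypothetical, for illustration only): a loose object named by 40 hex digits is stored as objects/&lt;2-hex-char-dir&gt;/&lt;38-hex-char-file&gt;, which is why the scan checks for a 2-character directory name and a name of length Constants.OBJECT_ID_STRING_LENGTH - 2.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical stand-alone sketch of the loose-object scan above: a loose
// object "abc123..." is stored as objects/ab/c123..., so its directory name
// is 2 hex chars and its file name the remaining 38 (40 - 2) hex chars.
public class FanoutScan {
	static int countLooseObjects(Path objDir) throws IOException {
		int count = 0;
		try (DirectoryStream<Path> fanout = Files.newDirectoryStream(objDir)) {
			for (Path d : fanout) {
				// Skip "pack", "info" and anything else that is not a
				// two-character fanout bucket.
				if (!Files.isDirectory(d)
						|| d.getFileName().toString().length() != 2)
					continue;
				try (DirectoryStream<Path> entries = Files
						.newDirectoryStream(d)) {
					for (Path f : entries) {
						if (f.getFileName().toString().length() == 38)
							count++;
					}
				}
			}
		}
		return count;
	}
}
```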

	/**
	 * Set the progress monitor used for garbage collection methods.
	 *
	 * @param pm
	 *            a {@link org.eclipse.jgit.lib.ProgressMonitor} object
	 * @return this
	 */
	public GC setProgressMonitor(ProgressMonitor pm) {
		this.pm = (pm == null) ? NullProgressMonitor.INSTANCE : pm;
		return this;
	}

	/**
	 * During gc() or prune() each unreferenced, loose object which has been
	 * created or modified in the last <code>expireAgeMillis</code> milliseconds
	 * will not be pruned. Only older objects may be pruned. If set to 0 then
	 * every object is a candidate for pruning.
	 *
	 * @param expireAgeMillis
	 *            minimal age of objects to be pruned in milliseconds
	 */
	public void setExpireAgeMillis(long expireAgeMillis) {
		this.expireAgeMillis = expireAgeMillis;
		expire = null;
	}

	/**
	 * During gc() or prune() packfiles which are created or modified in the
	 * last <code>packExpireAgeMillis</code> milliseconds will not be deleted.
	 * Only older packfiles may be deleted. If set to 0 then every packfile is a
	 * candidate for deletion.
	 *
	 * @param packExpireAgeMillis
	 *            minimal age of packfiles to be deleted in milliseconds
	 */
	public void setPackExpireAgeMillis(long packExpireAgeMillis) {
		this.packExpireAgeMillis = packExpireAgeMillis;
		expire = null;
	}

	/**
	 * Set the PackConfig used when (re-)writing packfiles. This can be used to
	 * influence how packs are written and to implement behavior similar to
	 * "git gc --aggressive".
	 *
	 * @param pconfig
	 *            the {@link org.eclipse.jgit.storage.pack.PackConfig} used when
	 *            writing packs
	 */
	public void setPackConfig(@NonNull PackConfig pconfig) {
		this.pconfig = pconfig;
	}

	/**
	 * During gc() or prune() each unreferenced, loose object which has been
	 * created or modified after or at <code>expire</code> will not be pruned.
	 * Only older objects may be pruned. If set to null then every object is a
	 * candidate for pruning.
	 *
	 * @param expire
	 *            instant in time which defines object expiration: objects with
	 *            modification time before this instant are expired; objects
	 *            with modification time newer than or equal to this instant are
	 *            not expired
	 */
	public void setExpire(Date expire) {
		this.expire = expire;
		expireAgeMillis = -1;
	}

	/**
	 * During gc() or prune() packfiles which are created or modified after or
	 * at <code>packExpire</code> will not be deleted. Only older packfiles may
	 * be deleted. If set to null then every packfile is a candidate for
	 * deletion.
	 *
	 * @param packExpire
	 *            instant in time which defines packfile expiration
	 */
	public void setPackExpire(Date packExpire) {
		this.packExpire = packExpire;
		packExpireAgeMillis = -1;
	}

	/**
	 * Set the {@code gc --auto} option.
	 * <p>
	 * With this option, gc checks whether any housekeeping is required; if not,
	 * it exits without performing any work. Some JGit commands run
	 * {@code gc --auto} after performing operations that could create many
	 * loose objects.
	 * <p>
	 * Housekeeping is required if there are too many loose objects or too many
	 * packs in the repository. If the number of loose objects exceeds the value
	 * of the {@code gc.auto} option, JGit GC consolidates all existing packs
	 * into a single pack (equivalent to the {@code -A} option), whereas
	 * git-core would combine all loose objects into a single pack using
	 * {@code repack -d -l}. Setting the value of {@code gc.auto} to 0 disables
	 * automatic packing of loose objects.
	 * <p>
	 * If the number of packs exceeds the value of {@code gc.autoPackLimit},
	 * then existing packs (except those marked with a .keep file) are
	 * consolidated into a single pack by using the {@code -A} option of repack.
	 * Setting {@code gc.autoPackLimit} to 0 disables automatic consolidation of
	 * packs.
	 * <p>
	 * Like git, the following JGit commands run auto gc:
	 * <ul>
	 * <li>fetch</li>
	 * <li>merge</li>
	 * <li>rebase</li>
	 * <li>receive-pack</li>
	 * </ul>
	 * The auto gc for receive-pack can be suppressed by setting the config
	 * option {@code receive.autogc = false}.
	 *
	 * @param auto
	 *            defines whether gc should do automatic housekeeping
	 */
	public void setAuto(boolean auto) {
		this.automatic = auto;
	}

	/**
	 * @param background
	 *            whether to run the gc in a background thread
	 */
	void setBackground(boolean background) {
		this.background = background;
	}

	private boolean needGc() {
		if (tooManyPacks()) {
			addRepackAllOption();
		} else {
			return tooManyLooseObjects();
		}
		// TODO run pre-auto-gc hook, if it fails return false
		return true;
	}

	private void addRepackAllOption() {
		// TODO: if JGit GC is enhanced to support repack's option -l this
		// method needs to be implemented
	}

	/**
	 * @return {@code true} if number of packs > gc.autopacklimit (default 50)
	 */
	boolean tooManyPacks() {
		int autopacklimit = repo.getConfig().getInt(
				ConfigConstants.CONFIG_GC_SECTION,
				ConfigConstants.CONFIG_KEY_AUTOPACKLIMIT,
				DEFAULT_AUTOPACKLIMIT);
		if (autopacklimit <= 0) {
			return false;
		}
		// JGit always creates two packfiles, one for the objects reachable from
		// branches, and another one for the rest
		return repo.getObjectDatabase().getPacks().size() > (autopacklimit + 1);
	}
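The comparison in tooManyPacks() against autopacklimit + 1 can be restated as a minimal stand-alone sketch (class and method names hypothetical): the + 1 tolerates one pack beyond the configured limit because JGit always keeps two packs.

```java
// Hypothetical sketch of the tooManyPacks() comparison above. JGit keeps two
// packs (objects reachable from branches, and the rest), so one pack beyond
// the configured gc.autopacklimit is tolerated before auto gc repacks.
public class PackLimitCheck {
	static boolean tooManyPacks(int packCount, int autoPackLimit) {
		if (autoPackLimit <= 0)
			return false; // 0 disables the pack-count trigger
		return packCount > autoPackLimit + 1;
	}
}
```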

	/**
	 * Quickly estimate the number of loose objects. Since SHA-1 object names
	 * are distributed evenly, counting the objects in a single fanout
	 * directory (bucket "17") is sufficient.
	 *
	 * @return {@code true} if number of loose objects > gc.auto (default 6700)
	 */
	boolean tooManyLooseObjects() {
		int auto = getLooseObjectLimit();
		if (auto <= 0) {
			return false;
		}
		int n = 0;
		int threshold = (auto + 255) / 256;
		Path dir = repo.getObjectsDirectory().toPath().resolve("17"); //$NON-NLS-1$
		if (!dir.toFile().exists()) {
			return false;
		}
		try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir,
				file -> {
					Path fileName = file.getFileName();
					return file.toFile().isFile() && fileName != null
							&& PATTERN_LOOSE_OBJECT.matcher(fileName.toString())
									.matches();
				})) {
			for (Iterator<Path> iter = stream.iterator(); iter.hasNext(); iter
					.next()) {
				if (++n > threshold) {
					return true;
				}
			}
		} catch (IOException e) {
			LOG.error(e.getMessage(), e);
		}
		return false;
	}
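The per-bucket threshold used in tooManyLooseObjects(), (auto + 255) / 256, is an integer ceiling division: with objects spread evenly over 256 fanout buckets, finding more than ceil(auto / 256) objects in one bucket suggests the repository total exceeds gc.auto. A minimal sketch of that arithmetic (class and method names hypothetical):

```java
// Hypothetical sketch of the sampling arithmetic in tooManyLooseObjects():
// SHA-1 names are uniform over the 256 fanout buckets, so one bucket holding
// more than ceil(auto / 256) objects implies roughly auto objects in total.
public class LooseObjectEstimate {
	// Integer ceiling division, identical to (auto + 255) / 256 above.
	static int perBucketThreshold(int auto) {
		return (auto + 255) / 256;
	}

	// Scale a single-bucket count back up to an estimated repository total.
	static int estimateTotal(int countInOneBucket) {
		return countInOneBucket * 256;
	}
}
```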

	private int getLooseObjectLimit() {
		return repo.getConfig().getInt(ConfigConstants.CONFIG_GC_SECTION,
				ConfigConstants.CONFIG_KEY_AUTO, DEFAULT_AUTOLIMIT);
	}
}