
GC.java 50KB

Limit the range of commits for which bitmaps are created.

A bitmap index contains bitmaps for a set of commits in a pack file. Creating a bitmap for every commit is too expensive, so heuristics select the most "important" commits. The most recent commits are the most valuable. To clone a repository only those for the branch tips are needed. When fetching, only commits since the last fetch are needed.

The commit selection heuristics generally work, but for some repositories the number of selected commits is prohibitively high. One example is the MSM 3.10 Linux kernel. With over 1 million commits on 2820 branches, the current heuristics resulted in +36k selected commits. Each uncompressed bitmap for that repository is ~413k, making it difficult to complete a GC operation in available memory.

The benefit of creating bitmaps over the entire history of a repository like the MSM 3.10 Linux kernel isn't clear. For that repository, most history for the last year appears to be in the last 100k commits. Limiting bitmap commit selection to just those commits reduces the count of selected commits from ~36k to ~10.5k. Dropping bitmaps for older commits does not affect object counting times for clones or for fetches on clients that are reasonably up-to-date.

This patch defines a new "bitmapCommitRange" PackConfig parameter to limit the commit selection process when building bitmaps. The range starts with the most recent commit and walks backwards. A range of 10k considers only the 10000 most recent commits. A range of zero creates bitmaps only for branch tips. A range of -1 (the default) does not limit the range--all commits in the pack are used in the commit selection process.

Change-Id: Ied92c70cfa0778facc670e0f14a0980bed5e3bfb
Signed-off-by: Terry Parker <tparker@google.com>
8 years ago
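To illustrate the parameter described above, here is a minimal, hypothetical sketch of driving this GC class from JGit application code with such a bitmap range. The "pack.bitmapcommitrange" key name is taken from the commit message and may differ in the merged change; the repository path is also an assumption, and note that GC lives in JGit's internal package.

import java.io.File;
import org.eclipse.jgit.internal.storage.file.FileRepository;
import org.eclipse.jgit.internal.storage.file.GC;
import org.eclipse.jgit.storage.file.FileRepositoryBuilder;

public class BitmapRangeGcSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical repository location; adjust to a real repository.
        try (FileRepository repo = (FileRepository) new FileRepositoryBuilder()
                .setGitDir(new File("/path/to/repo/.git")).setMustExist(true).build()) {
            // Assumed key from the commit message: limit bitmap commit selection
            // to the 10000 most recent commits (0 = branch tips only, -1 = no limit).
            repo.getConfig().setInt("pack", null, "bitmapcommitrange", 10000);
            GC gc = new GC(repo);
            gc.gc(); // pack refs, repack (writing bitmaps), then prune
        }
    }
}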
  1. /*
  2. * Copyright (C) 2012, Christian Halstrick <christian.halstrick@sap.com>
  3. * Copyright (C) 2011, Shawn O. Pearce <spearce@spearce.org>
  4. * and other copyright owners as documented in the project's IP log.
  5. *
  6. * This program and the accompanying materials are made available
  7. * under the terms of the Eclipse Distribution License v1.0 which
  8. * accompanies this distribution, is reproduced below, and is
  9. * available at http://www.eclipse.org/org/documents/edl-v10.php
  10. *
  11. * All rights reserved.
  12. *
  13. * Redistribution and use in source and binary forms, with or
  14. * without modification, are permitted provided that the following
  15. * conditions are met:
  16. *
  17. * - Redistributions of source code must retain the above copyright
  18. * notice, this list of conditions and the following disclaimer.
  19. *
  20. * - Redistributions in binary form must reproduce the above
  21. * copyright notice, this list of conditions and the following
  22. * disclaimer in the documentation and/or other materials provided
  23. * with the distribution.
  24. *
  25. * - Neither the name of the Eclipse Foundation, Inc. nor the
  26. * names of its contributors may be used to endorse or promote
  27. * products derived from this software without specific prior
  28. * written permission.
  29. *
  30. * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
  31. * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
  32. * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
  33. * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
  34. * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
  35. * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
  36. * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
  37. * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
  38. * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
  39. * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
  40. * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
  41. * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
  42. * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  43. */
  44. package org.eclipse.jgit.internal.storage.file;
  45. import static org.eclipse.jgit.internal.storage.pack.PackExt.BITMAP_INDEX;
  46. import static org.eclipse.jgit.internal.storage.pack.PackExt.INDEX;
  47. import java.io.File;
  48. import java.io.FileOutputStream;
  49. import java.io.IOException;
  50. import java.io.OutputStream;
  51. import java.io.PrintWriter;
  52. import java.io.StringWriter;
  53. import java.nio.channels.Channels;
  54. import java.nio.channels.FileChannel;
  55. import java.nio.file.DirectoryNotEmptyException;
  56. import java.nio.file.DirectoryStream;
  57. import java.nio.file.Files;
  58. import java.nio.file.Path;
  59. import java.nio.file.StandardCopyOption;
  60. import java.text.MessageFormat;
  61. import java.text.ParseException;
  62. import java.time.Instant;
  63. import java.time.temporal.ChronoUnit;
  64. import java.util.ArrayList;
  65. import java.util.Collection;
  66. import java.util.Collections;
  67. import java.util.Comparator;
  68. import java.util.Date;
  69. import java.util.HashMap;
  70. import java.util.HashSet;
  71. import java.util.Iterator;
  72. import java.util.LinkedList;
  73. import java.util.List;
  74. import java.util.Map;
  75. import java.util.Objects;
  76. import java.util.Set;
  77. import java.util.TreeMap;
  78. import java.util.concurrent.Callable;
  79. import java.util.concurrent.ExecutorService;
  80. import java.util.regex.Pattern;
  81. import java.util.stream.Collectors;
  82. import java.util.stream.Stream;
  83. import org.eclipse.jgit.annotations.NonNull;
  84. import org.eclipse.jgit.dircache.DirCacheIterator;
  85. import org.eclipse.jgit.errors.CancelledException;
  86. import org.eclipse.jgit.errors.CorruptObjectException;
  87. import org.eclipse.jgit.errors.IncorrectObjectTypeException;
  88. import org.eclipse.jgit.errors.MissingObjectException;
  89. import org.eclipse.jgit.errors.NoWorkTreeException;
  90. import org.eclipse.jgit.internal.JGitText;
  91. import org.eclipse.jgit.internal.storage.pack.PackExt;
  92. import org.eclipse.jgit.internal.storage.pack.PackWriter;
  93. import org.eclipse.jgit.internal.storage.reftree.RefTreeNames;
  94. import org.eclipse.jgit.lib.ConfigConstants;
  95. import org.eclipse.jgit.lib.Constants;
  96. import org.eclipse.jgit.lib.FileMode;
  97. import org.eclipse.jgit.lib.NullProgressMonitor;
  98. import org.eclipse.jgit.lib.ObjectId;
  99. import org.eclipse.jgit.lib.ObjectIdSet;
  100. import org.eclipse.jgit.lib.ObjectLoader;
  101. import org.eclipse.jgit.lib.ObjectReader;
  102. import org.eclipse.jgit.lib.ProgressMonitor;
  103. import org.eclipse.jgit.lib.Ref;
  104. import org.eclipse.jgit.lib.Ref.Storage;
  105. import org.eclipse.jgit.lib.RefDatabase;
  106. import org.eclipse.jgit.lib.ReflogEntry;
  107. import org.eclipse.jgit.lib.ReflogReader;
  108. import org.eclipse.jgit.lib.internal.WorkQueue;
  109. import org.eclipse.jgit.revwalk.ObjectWalk;
  110. import org.eclipse.jgit.revwalk.RevObject;
  111. import org.eclipse.jgit.revwalk.RevWalk;
  112. import org.eclipse.jgit.storage.pack.PackConfig;
  113. import org.eclipse.jgit.treewalk.TreeWalk;
  114. import org.eclipse.jgit.treewalk.filter.TreeFilter;
  115. import org.eclipse.jgit.util.FileUtils;
  116. import org.eclipse.jgit.util.GitDateParser;
  117. import org.eclipse.jgit.util.SystemReader;
  118. import org.slf4j.Logger;
  119. import org.slf4j.LoggerFactory;
  120. /**
  121. * A garbage collector for git
  122. * {@link org.eclipse.jgit.internal.storage.file.FileRepository}. Instances of
  123. * this class are not thread-safe. Don't use the same instance from multiple
  124. * threads.
  125. *
  126. * This class started as a copy of DfsGarbageCollector from Shawn O. Pearce
  127. * adapted to FileRepositories.
  128. */
  129. public class GC {
  130. private final static Logger LOG = LoggerFactory
  131. .getLogger(GC.class);
  132. private static final String PRUNE_EXPIRE_DEFAULT = "2.weeks.ago"; //$NON-NLS-1$
  133. private static final String PRUNE_PACK_EXPIRE_DEFAULT = "1.hour.ago"; //$NON-NLS-1$
  134. private static final Pattern PATTERN_LOOSE_OBJECT = Pattern
  135. .compile("[0-9a-fA-F]{38}"); //$NON-NLS-1$
  136. private static final String PACK_EXT = "." + PackExt.PACK.getExtension();//$NON-NLS-1$
  137. private static final String BITMAP_EXT = "." //$NON-NLS-1$
  138. + PackExt.BITMAP_INDEX.getExtension();
  139. private static final String INDEX_EXT = "." + PackExt.INDEX.getExtension(); //$NON-NLS-1$
  140. private static final int DEFAULT_AUTOPACKLIMIT = 50;
  141. private static final int DEFAULT_AUTOLIMIT = 6700;
  142. private static volatile ExecutorService executor;
  143. /**
  144. * Set the executor for running auto-gc in the background. If no executor is
  145. * set JGit's own WorkQueue will be used.
  146. *
  147. * @param e
  148. * the executor to be used for running auto-gc
  149. */
  150. public static void setExecutor(ExecutorService e) {
  151. executor = e;
  152. }
  153. private final FileRepository repo;
  154. private ProgressMonitor pm;
  155. private long expireAgeMillis = -1;
  156. private Date expire;
  157. private long packExpireAgeMillis = -1;
  158. private Date packExpire;
  159. private PackConfig pconfig;
  160. /**
  161. * the refs which existed during the last call to {@link #repack()}. This is
  162. * needed during {@link #prune(Set)} where we can optimize by looking at the
  163. * difference between the current refs and the refs which existed during
  164. * last {@link #repack()}.
  165. */
  166. private Collection<Ref> lastPackedRefs;
  167. /**
  168. * Holds the starting time of the last repack() execution. This is needed in
  169. * prune() to inspect only those reflog entries which have been added since
  170. * last repack().
  171. */
  172. private long lastRepackTime;
  173. /**
  174. * Whether gc should do automatic housekeeping
  175. */
  176. private boolean automatic;
  177. /**
  178. * Whether to run gc in a background thread
  179. */
  180. private boolean background;
  181. /**
  182. * Creates a new garbage collector with default values. An expirationTime of
  183. * two weeks and a <code>NullProgressMonitor</code> as progress monitor will be used.
  184. *
  185. * @param repo
  186. * the repo to work on
  187. */
  188. public GC(FileRepository repo) {
  189. this.repo = repo;
  190. this.pconfig = new PackConfig(repo);
  191. this.pm = NullProgressMonitor.INSTANCE;
  192. }
  193. /**
  194. * Runs a garbage collector on a
  195. * {@link org.eclipse.jgit.internal.storage.file.FileRepository}. It will
  196. * <ul>
  197. * <li>pack loose references into packed-refs</li>
  198. * <li>repack all reachable objects into new pack files and delete the old
  199. * pack files</li>
  200. * <li>prune all loose objects which are now stored in packs</li>
  201. * </ul>
  202. *
  203. * If {@link #setAuto(boolean)} was set to {@code true} {@code gc} will
  204. * first check whether any housekeeping is required; if not, it exits
  205. * without performing any work.
  206. *
  207. * If {@link #setBackground(boolean)} was set to {@code true}
  208. * {@code collectGarbage} will start the gc in the background, and then
  209. * return immediately. In this case, errors will not be reported except in
  210. * gc.log.
  211. *
  212. * @return the collection of
  213. * {@link org.eclipse.jgit.internal.storage.file.PackFile}'s which
  214. * are newly created
  215. * @throws java.io.IOException
  216. * @throws java.text.ParseException
  217. * If the configuration parameter "gc.pruneexpire" couldn't be
  218. * parsed
  219. */
  220. // TODO(ms): change signature and return Future<Collection<PackFile>>
  221. @SuppressWarnings("FutureReturnValueIgnored")
  222. public Collection<PackFile> gc() throws IOException, ParseException {
  223. if (!background) {
  224. return doGc();
  225. }
  226. final GcLog gcLog = new GcLog(repo);
  227. if (!gcLog.lock()) {
  228. // there is already a background gc running
  229. return Collections.emptyList();
  230. }
  231. Callable<Collection<PackFile>> gcTask = () -> {
  232. try {
  233. Collection<PackFile> newPacks = doGc();
  234. if (automatic && tooManyLooseObjects()) {
  235. String message = JGitText.get().gcTooManyUnpruned;
  236. gcLog.write(message);
  237. gcLog.commit();
  238. }
  239. return newPacks;
  240. } catch (IOException | ParseException e) {
  241. try {
  242. gcLog.write(e.getMessage());
  243. StringWriter sw = new StringWriter();
  244. e.printStackTrace(new PrintWriter(sw));
  245. gcLog.write(sw.toString());
  246. gcLog.commit();
  247. } catch (IOException e2) {
  248. e2.addSuppressed(e);
  249. LOG.error(e2.getMessage(), e2);
  250. }
  251. } finally {
  252. gcLog.unlock();
  253. }
  254. return Collections.emptyList();
  255. };
  256. // TODO(ms): change signature and return the Future
  257. executor().submit(gcTask);
  258. return Collections.emptyList();
  259. }
  260. private ExecutorService executor() {
  261. return (executor != null) ? executor : WorkQueue.getExecutor();
  262. }
  263. private Collection<PackFile> doGc() throws IOException, ParseException {
  264. if (automatic && !needGc()) {
  265. return Collections.emptyList();
  266. }
  267. pm.start(6 /* tasks */);
  268. packRefs();
  269. // TODO: implement reflog_expire(pm, repo);
  270. Collection<PackFile> newPacks = repack();
  271. prune(Collections.emptySet());
  272. // TODO: implement rerere_gc(pm);
  273. return newPacks;
  274. }
  275. /**
  276. * Loosen objects in a pack file which are not also in the newly-created
  277. * pack files.
  278. *
  279. * @param inserter
  280. * @param reader
  281. * @param pack
  282. * @param existing
  283. * @throws IOException
  284. */
  285. private void loosen(ObjectDirectoryInserter inserter, ObjectReader reader, PackFile pack, HashSet<ObjectId> existing)
  286. throws IOException {
  287. for (PackIndex.MutableEntry entry : pack) {
  288. ObjectId oid = entry.toObjectId();
  289. if (existing.contains(oid)) {
  290. continue;
  291. }
  292. existing.add(oid);
  293. ObjectLoader loader = reader.open(oid);
  294. inserter.insert(loader.getType(),
  295. loader.getSize(),
  296. loader.openStream(),
  297. true /* create this object even though it's a duplicate */);
  298. }
  299. }
  300. /**
  301. * Delete old pack files. What is 'old' is defined by specifying a set of
  302. * old pack files and a set of new pack files. Each pack file contained in
  303. * old pack files but not contained in new pack files will be deleted. If
  304. * preserveOldPacks is set, keep a copy of the pack file in the preserve
  305. * directory. If an expirationDate is set then pack files which are younger
  306. * than the expirationDate will not be deleted nor preserved.
  307. * <p>
  308. * If we're not immediately expiring loose objects, loosen any objects
  309. * in the old pack files which aren't in the new pack files.
  310. *
  311. * @param oldPacks
  312. * @param newPacks
  313. * @throws ParseException
  314. * @throws IOException
  315. */
  316. private void deleteOldPacks(Collection<PackFile> oldPacks,
  317. Collection<PackFile> newPacks) throws ParseException, IOException {
  318. HashSet<ObjectId> ids = new HashSet<>();
  319. for (PackFile pack : newPacks) {
  320. for (PackIndex.MutableEntry entry : pack) {
  321. ids.add(entry.toObjectId());
  322. }
  323. }
  324. ObjectReader reader = repo.newObjectReader();
  325. ObjectDirectory dir = repo.getObjectDatabase();
  326. ObjectDirectoryInserter inserter = dir.newInserter();
  327. boolean shouldLoosen = !"now".equals(getPruneExpireStr()) && //$NON-NLS-1$
  328. getExpireDate() < Long.MAX_VALUE;
  329. prunePreserved();
  330. long packExpireDate = getPackExpireDate();
  331. oldPackLoop: for (PackFile oldPack : oldPacks) {
  332. checkCancelled();
  333. String oldName = oldPack.getPackName();
  334. // check whether an old pack file is also among the list of new
  335. // pack files. If so, we must not delete it.
  336. for (PackFile newPack : newPacks)
  337. if (oldName.equals(newPack.getPackName()))
  338. continue oldPackLoop;
  339. if (!oldPack.shouldBeKept()
  340. && repo.getFS()
  341. .lastModifiedInstant(oldPack.getPackFile())
  342. .toEpochMilli() < packExpireDate) {
  343. oldPack.close();
  344. if (shouldLoosen) {
  345. loosen(inserter, reader, oldPack, ids);
  346. }
  347. prunePack(oldName);
  348. }
  349. }
  350. // close the complete object database. That's my only chance to force
  351. // rescanning and to detect that certain pack files are now deleted.
  352. repo.getObjectDatabase().close();
  353. }
  354. /**
  355. * Deletes an old pack file, unless 'preserve-oldpacks' is set, in which case it
  356. * moves the pack file to the preserved directory
  357. *
  358. * @param packFile
  359. * @param packName
  360. * @param ext
  361. * @param deleteOptions
  362. * @throws IOException
  363. */
  364. private void removeOldPack(File packFile, String packName, PackExt ext,
  365. int deleteOptions) throws IOException {
  366. if (pconfig.isPreserveOldPacks()) {
  367. File oldPackDir = repo.getObjectDatabase().getPreservedDirectory();
  368. FileUtils.mkdir(oldPackDir, true);
  369. String oldPackName = "pack-" + packName + ".old-" + ext.getExtension(); //$NON-NLS-1$ //$NON-NLS-2$
  370. File oldPackFile = new File(oldPackDir, oldPackName);
  371. FileUtils.rename(packFile, oldPackFile);
  372. } else {
  373. FileUtils.delete(packFile, deleteOptions);
  374. }
  375. }
  376. /**
  377. * Delete the preserved directory including all pack files within
  378. */
  379. private void prunePreserved() {
  380. if (pconfig.isPrunePreserved()) {
  381. try {
  382. FileUtils.delete(repo.getObjectDatabase().getPreservedDirectory(),
  383. FileUtils.RECURSIVE | FileUtils.RETRY | FileUtils.SKIP_MISSING);
  384. } catch (IOException e) {
  385. // Deletion of the preserved pack files failed. Silently return.
  386. }
  387. }
  388. }
  389. /**
  390. * Delete files associated with a single pack file. First try to delete the
  391. * ".pack" file because on some platforms the ".pack" file may be locked and
  392. * can't be deleted. In such a case it is better to detect this early and
  393. * give up on deleting files for this packfile. Otherwise we may delete the
  394. * ".index" file and when failing to delete the ".pack" file we are left
  395. * with a ".pack" file without a ".index" file.
  396. *
  397. * @param packName
  398. */
  399. private void prunePack(String packName) {
  400. PackExt[] extensions = PackExt.values();
  401. try {
  402. // Delete the .pack file first and if this fails give up on deleting
  403. // the other files
  404. int deleteOptions = FileUtils.RETRY | FileUtils.SKIP_MISSING;
  405. for (PackExt ext : extensions)
  406. if (PackExt.PACK.equals(ext)) {
  407. File f = nameFor(packName, "." + ext.getExtension()); //$NON-NLS-1$
  408. removeOldPack(f, packName, ext, deleteOptions);
  409. break;
  410. }
  411. // The .pack file has been deleted. Delete as many of the other
  412. // files as you can.
  413. deleteOptions |= FileUtils.IGNORE_ERRORS;
  414. for (PackExt ext : extensions) {
  415. if (!PackExt.PACK.equals(ext)) {
  416. File f = nameFor(packName, "." + ext.getExtension()); //$NON-NLS-1$
  417. removeOldPack(f, packName, ext, deleteOptions);
  418. }
  419. }
  420. } catch (IOException e) {
  421. // Deletion of the .pack file failed. Silently return.
  422. }
  423. }
  424. /**
  425. * Like "git prune-packed" this method tries to prune all loose objects
  426. * which can be found in packs. If certain objects can't be pruned (e.g.
  427. * because the filesystem delete operation fails) this is silently ignored.
  428. *
  429. * @throws java.io.IOException
  430. */
  431. public void prunePacked() throws IOException {
  432. ObjectDirectory objdb = repo.getObjectDatabase();
  433. Collection<PackFile> packs = objdb.getPacks();
  434. File objects = repo.getObjectsDirectory();
  435. String[] fanout = objects.list();
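// Loose objects live under objects/ as a two-hex-char fanout directory plus a 38-hex-char file name.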
  436. if (fanout != null && fanout.length > 0) {
  437. pm.beginTask(JGitText.get().pruneLoosePackedObjects, fanout.length);
  438. try {
  439. for (String d : fanout) {
  440. checkCancelled();
  441. pm.update(1);
  442. if (d.length() != 2)
  443. continue;
  444. String[] entries = new File(objects, d).list();
  445. if (entries == null)
  446. continue;
  447. for (String e : entries) {
  448. checkCancelled();
  449. if (e.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
  450. continue;
  451. ObjectId id;
  452. try {
  453. id = ObjectId.fromString(d + e);
  454. } catch (IllegalArgumentException notAnObject) {
  455. // ignoring the file that does not represent loose
  456. // object
  457. continue;
  458. }
  459. boolean found = false;
  460. for (PackFile p : packs) {
  461. checkCancelled();
  462. if (p.hasObject(id)) {
  463. found = true;
  464. break;
  465. }
  466. }
  467. if (found)
  468. FileUtils.delete(objdb.fileFor(id), FileUtils.RETRY
  469. | FileUtils.SKIP_MISSING
  470. | FileUtils.IGNORE_ERRORS);
  471. }
  472. }
  473. } finally {
  474. pm.endTask();
  475. }
  476. }
  477. }
  478. /**
  479. * Like "git prune" this method tries to prune all loose objects which are
  480. * unreferenced. If certain objects can't be pruned (e.g. because the
  481. * filesystem delete operation fails) this is silently ignored.
  482. *
  483. * @param objectsToKeep
  484. * a set of objects which should explicitly not be pruned
  485. * @throws java.io.IOException
  486. * @throws java.text.ParseException
  487. * If the configuration parameter "gc.pruneexpire" couldn't be
  488. * parsed
  489. */
  490. public void prune(Set<ObjectId> objectsToKeep) throws IOException,
  491. ParseException {
  492. long expireDate = getExpireDate();
  493. // Collect all loose objects which are old enough, not referenced from
  494. // the index and not in objectsToKeep
  495. Map<ObjectId, File> deletionCandidates = new HashMap<>();
  496. Set<ObjectId> indexObjects = null;
  497. File objects = repo.getObjectsDirectory();
  498. String[] fanout = objects.list();
  499. if (fanout == null || fanout.length == 0) {
  500. return;
  501. }
  502. pm.beginTask(JGitText.get().pruneLooseUnreferencedObjects,
  503. fanout.length);
  504. try {
  505. for (String d : fanout) {
  506. checkCancelled();
  507. pm.update(1);
  508. if (d.length() != 2)
  509. continue;
  510. File dir = new File(objects, d);
  511. File[] entries = dir.listFiles();
  512. if (entries == null || entries.length == 0) {
  513. FileUtils.delete(dir, FileUtils.IGNORE_ERRORS);
  514. continue;
  515. }
  516. for (File f : entries) {
  517. checkCancelled();
  518. String fName = f.getName();
  519. if (fName.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
  520. continue;
  521. if (repo.getFS().lastModifiedInstant(f)
  522. .toEpochMilli() >= expireDate) {
  523. continue;
  524. }
  525. try {
  526. ObjectId id = ObjectId.fromString(d + fName);
  527. if (objectsToKeep.contains(id))
  528. continue;
  529. if (indexObjects == null)
  530. indexObjects = listNonHEADIndexObjects();
  531. if (indexObjects.contains(id))
  532. continue;
  533. deletionCandidates.put(id, f);
  534. } catch (IllegalArgumentException notAnObject) {
  535. // ignoring the file that does not represent loose
  536. // object
  537. }
  538. }
  539. }
  540. } finally {
  541. pm.endTask();
  542. }
  543. if (deletionCandidates.isEmpty()) {
  544. return;
  545. }
  546. checkCancelled();
  547. // From the set of current refs remove all those which have been handled
  548. // during last repack(). Only those refs will survive which have been
  549. // added or modified since the last repack. Only these can save existing
  550. // loose refs from being pruned.
  551. Collection<Ref> newRefs;
  552. if (lastPackedRefs == null || lastPackedRefs.isEmpty())
  553. newRefs = getAllRefs();
  554. else {
  555. Map<String, Ref> last = new HashMap<>();
  556. for (Ref r : lastPackedRefs) {
  557. last.put(r.getName(), r);
  558. }
  559. newRefs = new ArrayList<>();
  560. for (Ref r : getAllRefs()) {
  561. Ref old = last.get(r.getName());
  562. if (!equals(r, old)) {
  563. newRefs.add(r);
  564. }
  565. }
  566. }
  567. if (!newRefs.isEmpty()) {
  568. // There are new/modified refs! Check which loose objects are now
  569. // referenced by these modified refs (or their reflogentries).
  570. // Remove these loose objects
  571. // from the deletionCandidates. When the last candidate is removed
  572. // leave this method.
  573. ObjectWalk w = new ObjectWalk(repo);
  574. try {
  575. for (Ref cr : newRefs) {
  576. checkCancelled();
  577. w.markStart(w.parseAny(cr.getObjectId()));
  578. }
  579. if (lastPackedRefs != null)
  580. for (Ref lpr : lastPackedRefs) {
  581. w.markUninteresting(w.parseAny(lpr.getObjectId()));
  582. }
  583. removeReferenced(deletionCandidates, w);
  584. } finally {
  585. w.dispose();
  586. }
  587. }
  588. if (deletionCandidates.isEmpty())
  589. return;
  590. // Since we have not left the method yet there are still
  591. // deletionCandidates. Last chance for these objects not to be pruned is
  592. // that they are referenced by reflog entries. Even refs which currently
  593. // point to the same object as during last repack() may have
  594. // additional reflog entries not handled during last repack()
  595. ObjectWalk w = new ObjectWalk(repo);
  596. try {
  597. for (Ref ar : getAllRefs())
  598. for (ObjectId id : listRefLogObjects(ar, lastRepackTime)) {
  599. checkCancelled();
  600. w.markStart(w.parseAny(id));
  601. }
  602. if (lastPackedRefs != null)
  603. for (Ref lpr : lastPackedRefs) {
  604. checkCancelled();
  605. w.markUninteresting(w.parseAny(lpr.getObjectId()));
  606. }
  607. removeReferenced(deletionCandidates, w);
  608. } finally {
  609. w.dispose();
  610. }
  611. if (deletionCandidates.isEmpty())
  612. return;
  613. checkCancelled();
  614. // delete all candidates which have survived: these are unreferenced
  615. // loose objects. Make a last check, though, to avoid deleting objects
  616. // that could have been referenced while the candidates list was being
  617. // built (by an incoming push, for example).
  618. Set<File> touchedFanout = new HashSet<>();
  619. for (File f : deletionCandidates.values()) {
  620. if (f.lastModified() < expireDate) {
  621. f.delete();
  622. touchedFanout.add(f.getParentFile());
  623. }
  624. }
  625. for (File f : touchedFanout) {
  626. FileUtils.delete(f,
  627. FileUtils.EMPTY_DIRECTORIES_ONLY | FileUtils.IGNORE_ERRORS);
  628. }
  629. repo.getObjectDatabase().close();
  630. }
  631. private long getExpireDate() throws ParseException {
  632. long expireDate = Long.MAX_VALUE;
  633. if (expire == null && expireAgeMillis == -1) {
  634. String pruneExpireStr = getPruneExpireStr();
  635. if (pruneExpireStr == null)
  636. pruneExpireStr = PRUNE_EXPIRE_DEFAULT;
  637. expire = GitDateParser.parse(pruneExpireStr, null, SystemReader
  638. .getInstance().getLocale());
  639. expireAgeMillis = -1;
  640. }
  641. if (expire != null)
  642. expireDate = expire.getTime();
  643. if (expireAgeMillis != -1)
  644. expireDate = System.currentTimeMillis() - expireAgeMillis;
  645. return expireDate;
  646. }
  647. private String getPruneExpireStr() {
  648. return repo.getConfig().getString(
  649. ConfigConstants.CONFIG_GC_SECTION, null,
  650. ConfigConstants.CONFIG_KEY_PRUNEEXPIRE);
  651. }
  652. private long getPackExpireDate() throws ParseException {
  653. long packExpireDate = Long.MAX_VALUE;
  654. if (packExpire == null && packExpireAgeMillis == -1) {
  655. String prunePackExpireStr = repo.getConfig().getString(
  656. ConfigConstants.CONFIG_GC_SECTION, null,
  657. ConfigConstants.CONFIG_KEY_PRUNEPACKEXPIRE);
  658. if (prunePackExpireStr == null)
  659. prunePackExpireStr = PRUNE_PACK_EXPIRE_DEFAULT;
  660. packExpire = GitDateParser.parse(prunePackExpireStr, null,
  661. SystemReader.getInstance().getLocale());
  662. packExpireAgeMillis = -1;
  663. }
  664. if (packExpire != null)
  665. packExpireDate = packExpire.getTime();
  666. if (packExpireAgeMillis != -1)
  667. packExpireDate = System.currentTimeMillis() - packExpireAgeMillis;
  668. return packExpireDate;
  669. }
  670. /**
  671. * Remove all entries from a map whose key is the id of an object referenced
  672. * by the given ObjectWalk
  673. *
  674. * @param id2File
  675. * @param w
  676. * @throws MissingObjectException
  677. * @throws IncorrectObjectTypeException
  678. * @throws IOException
  679. */
  680. private void removeReferenced(Map<ObjectId, File> id2File,
  681. ObjectWalk w) throws MissingObjectException,
  682. IncorrectObjectTypeException, IOException {
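// First pass: ObjectWalk.next() returns the reachable commits; drop any matching candidates.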
  683. RevObject ro = w.next();
  684. while (ro != null) {
  685. checkCancelled();
  686. if (id2File.remove(ro.getId()) != null && id2File.isEmpty()) {
  687. return;
  688. }
  689. ro = w.next();
  690. }
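// Second pass: nextObject() returns the remaining reachable trees, blobs and annotated tags.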
  691. ro = w.nextObject();
  692. while (ro != null) {
  693. checkCancelled();
  694. if (id2File.remove(ro.getId()) != null && id2File.isEmpty()) {
  695. return;
  696. }
  697. ro = w.nextObject();
  698. }
  699. }
  700. private static boolean equals(Ref r1, Ref r2) {
  701. if (r1 == null || r2 == null) {
  702. return false;
  703. }
  704. if (r1.isSymbolic()) {
  705. return r2.isSymbolic() && r1.getTarget().getName()
  706. .equals(r2.getTarget().getName());
  707. }
  708. return !r2.isSymbolic()
  709. && Objects.equals(r1.getObjectId(), r2.getObjectId());
  710. }
  711. /**
  712. * Packs all non-symbolic, loose refs into packed-refs.
  713. *
  714. * @throws java.io.IOException
  715. */
  716. public void packRefs() throws IOException {
  717. Collection<Ref> refs = repo.getRefDatabase()
  718. .getRefsByPrefix(Constants.R_REFS);
  719. List<String> refsToBePacked = new ArrayList<>(refs.size());
  720. pm.beginTask(JGitText.get().packRefs, refs.size());
  721. try {
  722. for (Ref ref : refs) {
  723. checkCancelled();
  724. if (!ref.isSymbolic() && ref.getStorage().isLoose())
  725. refsToBePacked.add(ref.getName());
  726. pm.update(1);
  727. }
  728. ((RefDirectory) repo.getRefDatabase()).pack(refsToBePacked);
  729. } finally {
  730. pm.endTask();
  731. }
  732. }
  733. /**
  734. * Packs all objects which are reachable from any of the heads into one pack
  735. * file. Additionally all objects which are not reachable from any head but
  736. * which are reachable from any of the other refs (e.g. tags), special refs
  737. * (e.g. FETCH_HEAD) or index are packed into a separate pack file. Objects
  738. * included in pack files which have a .keep file associated are never
  739. * repacked. All old pack files which existed before are deleted.
  740. *
  741. * @return a collection of the newly created pack files
  742. * @throws java.io.IOException
  743. * when during reading of refs, index, packfiles, objects,
  744. * reflog-entries or during writing to the packfiles
  745. * {@link java.io.IOException} occurs
  746. */
  747. public Collection<PackFile> repack() throws IOException {
  748. Collection<PackFile> toBeDeleted = repo.getObjectDatabase().getPacks();
  749. long time = System.currentTimeMillis();
  750. Collection<Ref> refsBefore = getAllRefs();
  751. Set<ObjectId> allHeadsAndTags = new HashSet<>();
  752. Set<ObjectId> allHeads = new HashSet<>();
  753. Set<ObjectId> allTags = new HashSet<>();
  754. Set<ObjectId> nonHeads = new HashSet<>();
  755. Set<ObjectId> txnHeads = new HashSet<>();
  756. Set<ObjectId> tagTargets = new HashSet<>();
  757. Set<ObjectId> indexObjects = listNonHEADIndexObjects();
  758. RefDatabase refdb = repo.getRefDatabase();
  759. for (Ref ref : refsBefore) {
  760. checkCancelled();
  761. nonHeads.addAll(listRefLogObjects(ref, 0));
  762. if (ref.isSymbolic() || ref.getObjectId() == null) {
  763. continue;
  764. }
  765. if (isHead(ref)) {
  766. allHeads.add(ref.getObjectId());
  767. } else if (isTag(ref)) {
  768. allTags.add(ref.getObjectId());
  769. } else if (RefTreeNames.isRefTree(refdb, ref.getName())) {
  770. txnHeads.add(ref.getObjectId());
  771. } else {
  772. nonHeads.add(ref.getObjectId());
  773. }
  774. if (ref.getPeeledObjectId() != null) {
  775. tagTargets.add(ref.getPeeledObjectId());
  776. }
  777. }
  778. List<ObjectIdSet> excluded = new LinkedList<>();
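// Objects already in packs marked with a .keep file are never repacked; exclude their indexes.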
  779. for (PackFile f : repo.getObjectDatabase().getPacks()) {
  780. checkCancelled();
  781. if (f.shouldBeKept())
  782. excluded.add(f.getIndex());
  783. }
  784. // Don't exclude tags that are also branch tips
  785. allTags.removeAll(allHeads);
  786. allHeadsAndTags.addAll(allHeads);
  787. allHeadsAndTags.addAll(allTags);
  788. // Hoist all branch tips and tags earlier in the pack file
  789. tagTargets.addAll(allHeadsAndTags);
  790. nonHeads.addAll(indexObjects);
  791. // Combine the GC_REST objects into the GC pack if requested
  792. if (pconfig.getSinglePack()) {
  793. allHeadsAndTags.addAll(nonHeads);
  794. nonHeads.clear();
  795. }
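// Write up to three packs: heads and tags, the remaining refs/index objects, and ref-tree (txn) heads.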
  796. List<PackFile> ret = new ArrayList<>(2);
  797. PackFile heads = null;
  798. if (!allHeadsAndTags.isEmpty()) {
  799. heads = writePack(allHeadsAndTags, PackWriter.NONE, allTags,
  800. tagTargets, excluded);
  801. if (heads != null) {
  802. ret.add(heads);
  803. excluded.add(0, heads.getIndex());
  804. }
  805. }
  806. if (!nonHeads.isEmpty()) {
  807. PackFile rest = writePack(nonHeads, allHeadsAndTags, PackWriter.NONE,
  808. tagTargets, excluded);
  809. if (rest != null)
  810. ret.add(rest);
  811. }
  812. if (!txnHeads.isEmpty()) {
  813. PackFile txn = writePack(txnHeads, PackWriter.NONE, PackWriter.NONE,
  814. null, excluded);
  815. if (txn != null)
  816. ret.add(txn);
  817. }
  818. try {
  819. deleteOldPacks(toBeDeleted, ret);
  820. } catch (ParseException e) {
  821. // TODO: the exception has to be wrapped into an IOException because
  822. // throwing the ParseException directly would break the API, instead
  823. // we should throw a ConfigInvalidException
  824. throw new IOException(e);
  825. }
  826. prunePacked();
  827. deleteEmptyRefsFolders();
  828. deleteOrphans();
  829. deleteTempPacksIdx();
  830. lastPackedRefs = refsBefore;
  831. lastRepackTime = time;
  832. return ret;
  833. }
  834. private static boolean isHead(Ref ref) {
  835. return ref.getName().startsWith(Constants.R_HEADS);
  836. }
  837. private static boolean isTag(Ref ref) {
  838. return ref.getName().startsWith(Constants.R_TAGS);
  839. }
  840. private void deleteEmptyRefsFolders() throws IOException {
  841. Path refs = repo.getDirectory().toPath().resolve(Constants.R_REFS);
  842. // Avoid deleting a folder that was created after the threshold so that concurrent
  843. // operations trying to create a reference are not impacted
  844. Instant threshold = Instant.now().minus(30, ChronoUnit.SECONDS);
  845. try (Stream<Path> entries = Files.list(refs)
  846. .filter(Files::isDirectory)) {
  847. Iterator<Path> iterator = entries.iterator();
  848. while (iterator.hasNext()) {
  849. try (Stream<Path> s = Files.list(iterator.next())) {
  850. s.filter(path -> canBeSafelyDeleted(path, threshold)).forEach(this::deleteDir);
  851. }
  852. }
  853. }
  854. }
  855. private boolean canBeSafelyDeleted(Path path, Instant threshold) {
  856. try {
  857. return Files.getLastModifiedTime(path).toInstant().isBefore(threshold);
  858. }
  859. catch (IOException e) {
  860. LOG.warn(MessageFormat.format(
  861. JGitText.get().cannotAccessLastModifiedForSafeDeletion,
  862. path), e);
  863. return false;
  864. }
  865. }
  866. private void deleteDir(Path dir) {
  867. try (Stream<Path> dirs = Files.walk(dir)) {
  868. dirs.filter(this::isDirectory).sorted(Comparator.reverseOrder())
  869. .forEach(this::delete);
  870. } catch (IOException e) {
  871. LOG.error(e.getMessage(), e);
  872. }
  873. }
  874. private boolean isDirectory(Path p) {
  875. return p.toFile().isDirectory();
  876. }
  877. private void delete(Path d) {
  878. try {
  879. Files.delete(d);
  880. } catch (DirectoryNotEmptyException e) {
  881. // Don't log
  882. } catch (IOException e) {
  883. LOG.error(MessageFormat.format(JGitText.get().cannotDeleteFile, d),
  884. e);
  885. }
  886. }
  887. /**
  888. * Deletes orphans
  889. * <p>
  890. * A file is considered an orphan if it is either a "bitmap" or an index
  891. * file, and its corresponding pack file is missing in the list.
  892. * </p>
  893. */
  894. private void deleteOrphans() {
  895. Path packDir = repo.getObjectDatabase().getPackDirectory().toPath();
  896. List<String> fileNames = null;
  897. try (Stream<Path> files = Files.list(packDir)) {
  898. fileNames = files.map(path -> path.getFileName().toString())
  899. .filter(name -> (name.endsWith(PACK_EXT)
  900. || name.endsWith(BITMAP_EXT)
  901. || name.endsWith(INDEX_EXT)))
  902. .sorted(Collections.reverseOrder())
  903. .collect(Collectors.toList());
  904. } catch (IOException e1) {
  905. // ignore
  906. }
  907. if (fileNames == null) {
  908. return;
  909. }
  910. String base = null;
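// Reverse-sorted names guarantee a pack's ".pack" file is seen before its ".idx" and ".bitmap"
// files, so any idx/bitmap whose base does not match the last ".pack" seen is an orphan.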
  911. for (String n : fileNames) {
  912. if (n.endsWith(PACK_EXT)) {
  913. base = n.substring(0, n.lastIndexOf('.'));
  914. } else {
  915. if (base == null || !n.startsWith(base)) {
  916. try {
  917. Files.delete(packDir.resolve(n));
  918. } catch (IOException e) {
  919. LOG.error(e.getMessage(), e);
  920. }
  921. }
  922. }
  923. }
  924. }
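// Remove gc_*_tmp pack/index leftovers older than one day, presumably from interrupted GC runs.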
  925. private void deleteTempPacksIdx() {
  926. Path packDir = repo.getObjectDatabase().getPackDirectory().toPath();
  927. Instant threshold = Instant.now().minus(1, ChronoUnit.DAYS);
  928. if (!Files.exists(packDir)) {
  929. return;
  930. }
  931. try (DirectoryStream<Path> stream =
  932. Files.newDirectoryStream(packDir, "gc_*_tmp")) { //$NON-NLS-1$
  933. stream.forEach(t -> {
  934. try {
  935. Instant lastModified = Files.getLastModifiedTime(t)
  936. .toInstant();
  937. if (lastModified.isBefore(threshold)) {
  938. Files.deleteIfExists(t);
  939. }
  940. } catch (IOException e) {
  941. LOG.error(e.getMessage(), e);
  942. }
  943. });
  944. } catch (IOException e) {
  945. LOG.error(e.getMessage(), e);
  946. }
  947. }
  948. /**
  949. * @param ref
  950. * the ref whose log should be inspected
  951. * @param minTime only reflog entries not older than this time are processed
  952. * @return the {@link ObjectId}s contained in the reflog
  953. * @throws IOException
  954. */
  955. private Set<ObjectId> listRefLogObjects(Ref ref, long minTime) throws IOException {
  956. ReflogReader reflogReader = repo.getReflogReader(ref.getName());
  957. if (reflogReader == null) {
  958. return Collections.emptySet();
  959. }
  960. List<ReflogEntry> rlEntries = reflogReader
  961. .getReverseEntries();
  962. if (rlEntries == null || rlEntries.isEmpty())
  963. return Collections.emptySet();
  964. Set<ObjectId> ret = new HashSet<>();
  965. for (ReflogEntry e : rlEntries) {
  966. if (e.getWho().getWhen().getTime() < minTime)
  967. break;
  968. ObjectId newId = e.getNewId();
  969. if (newId != null && !ObjectId.zeroId().equals(newId))
  970. ret.add(newId);
  971. ObjectId oldId = e.getOldId();
  972. if (oldId != null && !ObjectId.zeroId().equals(oldId))
  973. ret.add(oldId);
  974. }
  975. return ret;
  976. }
  977. /**
  978. * Returns a collection of all refs and additional refs.
  979. *
  980. * Additional refs which don't start with "refs/" are not returned because
  981. * they should not save objects from being garbage collected. Examples for
  982. * such references are ORIG_HEAD, MERGE_HEAD, FETCH_HEAD and
  983. * CHERRY_PICK_HEAD.
  984. *
  985. * @return a collection of refs pointing to live objects.
  986. * @throws IOException
  987. */
  988. private Collection<Ref> getAllRefs() throws IOException {
  989. RefDatabase refdb = repo.getRefDatabase();
  990. Collection<Ref> refs = refdb.getRefs();
  991. List<Ref> addl = refdb.getAdditionalRefs();
  992. if (!addl.isEmpty()) {
  993. List<Ref> all = new ArrayList<>(refs.size() + addl.size());
  994. all.addAll(refs);
  995. // add additional refs which start with refs/
  996. for (Ref r : addl) {
  997. checkCancelled();
  998. if (r.getName().startsWith(Constants.R_REFS)) {
  999. all.add(r);
  1000. }
  1001. }
  1002. return all;
  1003. }
  1004. return refs;
  1005. }
  1006. /**
  1007. * Return a list of those objects in the index which differ from what's in
  1008. * HEAD
  1009. *
  1010. * @return a set of ObjectIds of changed objects in the index
  1011. * @throws IOException
  1012. * @throws CorruptObjectException
  1013. * @throws NoWorkTreeException
  1014. */
  1015. private Set<ObjectId> listNonHEADIndexObjects()
  1016. throws CorruptObjectException, IOException {
  1017. if (repo.isBare()) {
  1018. return Collections.emptySet();
  1019. }
  1020. try (TreeWalk treeWalk = new TreeWalk(repo)) {
  1021. treeWalk.addTree(new DirCacheIterator(repo.readDirCache()));
  1022. ObjectId headID = repo.resolve(Constants.HEAD);
  1023. if (headID != null) {
  1024. try (RevWalk revWalk = new RevWalk(repo)) {
  1025. treeWalk.addTree(revWalk.parseTree(headID));
  1026. }
  1027. }
  1028. treeWalk.setFilter(TreeFilter.ANY_DIFF);
  1029. treeWalk.setRecursive(true);
  1030. Set<ObjectId> ret = new HashSet<>();
  1031. while (treeWalk.next()) {
  1032. checkCancelled();
  1033. ObjectId objectId = treeWalk.getObjectId(0);
  1034. switch (treeWalk.getRawMode(0) & FileMode.TYPE_MASK) {
  1035. case FileMode.TYPE_MISSING:
  1036. case FileMode.TYPE_GITLINK:
  1037. continue;
  1038. case FileMode.TYPE_TREE:
  1039. case FileMode.TYPE_FILE:
  1040. case FileMode.TYPE_SYMLINK:
  1041. ret.add(objectId);
  1042. continue;
  1043. default:
  1044. throw new IOException(MessageFormat.format(
  1045. JGitText.get().corruptObjectInvalidMode3,
  1046. String.format("%o", //$NON-NLS-1$
  1047. Integer.valueOf(treeWalk.getRawMode(0))),
  1048. (objectId == null) ? "null" : objectId.name(), //$NON-NLS-1$
  1049. treeWalk.getPathString(), //
  1050. repo.getIndexFile()));
  1051. }
  1052. }
  1053. return ret;
  1054. }
  1055. }
  1056. private PackFile writePack(@NonNull Set<? extends ObjectId> want,
  1057. @NonNull Set<? extends ObjectId> have, @NonNull Set<ObjectId> tags,
  1058. Set<ObjectId> tagTargets, List<ObjectIdSet> excludeObjects)
  1059. throws IOException {
  1060. checkCancelled();
  1061. File tmpPack = null;
  1062. Map<PackExt, File> tmpExts = new TreeMap<>((o1, o2) -> {
  1063. // INDEX entries must be returned last, so the pack
  1064. // scanner does not pick up the new pack until all the
  1065. // PackExt entries have been written.
  1066. if (o1 == o2) {
  1067. return 0;
  1068. }
  1069. if (o1 == PackExt.INDEX) {
  1070. return 1;
  1071. }
  1072. if (o2 == PackExt.INDEX) {
  1073. return -1;
  1074. }
  1075. return Integer.signum(o1.hashCode() - o2.hashCode());
  1076. });
  1077. try (PackWriter pw = new PackWriter(
  1078. pconfig,
  1079. repo.newObjectReader())) {
  1080. // prepare the PackWriter
  1081. pw.setDeltaBaseAsOffset(true);
  1082. pw.setReuseDeltaCommits(false);
  1083. if (tagTargets != null) {
  1084. pw.setTagTargets(tagTargets);
  1085. }
  1086. if (excludeObjects != null)
  1087. for (ObjectIdSet idx : excludeObjects)
  1088. pw.excludeObjects(idx);
  1089. pw.preparePack(pm, want, have, PackWriter.NONE, tags);
  1090. if (pw.getObjectCount() == 0)
  1091. return null;
  1092. checkCancelled();
  1093. // create temporary files
  1094. String id = pw.computeName().getName();
  1095. File packdir = repo.getObjectDatabase().getPackDirectory();
  1096. tmpPack = File.createTempFile("gc_", ".pack_tmp", packdir); //$NON-NLS-1$ //$NON-NLS-2$
  1097. final String tmpBase = tmpPack.getName()
  1098. .substring(0, tmpPack.getName().lastIndexOf('.'));
  1099. File tmpIdx = new File(packdir, tmpBase + ".idx_tmp"); //$NON-NLS-1$
  1100. tmpExts.put(INDEX, tmpIdx);
  1101. if (!tmpIdx.createNewFile())
  1102. throw new IOException(MessageFormat.format(
  1103. JGitText.get().cannotCreateIndexfile, tmpIdx.getPath()));
  1104. // write the packfile
  1105. try (FileOutputStream fos = new FileOutputStream(tmpPack);
  1106. FileChannel channel = fos.getChannel();
  1107. OutputStream channelStream = Channels
  1108. .newOutputStream(channel)) {
  1109. pw.writePack(pm, pm, channelStream);
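// make sure the pack data has reached stable storage before the file
// is renamed into place below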
  1110. channel.force(true);
  1111. }
  1112. // write the packindex
  1113. try (FileOutputStream fos = new FileOutputStream(tmpIdx);
  1114. FileChannel idxChannel = fos.getChannel();
  1115. OutputStream idxStream = Channels
  1116. .newOutputStream(idxChannel)) {
  1117. pw.writeIndex(idxStream);
  1118. idxChannel.force(true);
  1119. }
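// prepareBitmapIndex() runs the bitmap commit selection; it returns
// false when no bitmap index should be written for this pack.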
  1120. if (pw.prepareBitmapIndex(pm)) {
  1121. File tmpBitmapIdx = new File(packdir, tmpBase + ".bitmap_tmp"); //$NON-NLS-1$
  1122. tmpExts.put(BITMAP_INDEX, tmpBitmapIdx);
  1123. if (!tmpBitmapIdx.createNewFile())
  1124. throw new IOException(MessageFormat.format(
  1125. JGitText.get().cannotCreateIndexfile,
  1126. tmpBitmapIdx.getPath()));
  1127. try (FileOutputStream fos = new FileOutputStream(tmpBitmapIdx);
  1128. FileChannel idxChannel = fos.getChannel();
  1129. OutputStream idxStream = Channels
  1130. .newOutputStream(idxChannel)) {
  1131. pw.writeBitmapIndex(idxStream);
  1132. idxChannel.force(true);
  1133. }
  1134. }
  1135. // rename the temporary files to real files
  1136. File realPack = nameFor(id, ".pack"); //$NON-NLS-1$
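// release any cached handles on the destination file first, otherwise
// the atomic rename below can fail on platforms (such as Windows) that
// do not allow replacing an open file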
  1137. repo.getObjectDatabase().closeAllPackHandles(realPack);
  1138. tmpPack.setReadOnly();
  1139. FileUtils.rename(tmpPack, realPack, StandardCopyOption.ATOMIC_MOVE);
  1140. for (Map.Entry<PackExt, File> tmpEntry : tmpExts.entrySet()) {
  1141. File tmpExt = tmpEntry.getValue();
  1142. tmpExt.setReadOnly();
  1143. File realExt = nameFor(id,
  1144. "." + tmpEntry.getKey().getExtension()); //$NON-NLS-1$
  1145. try {
  1146. FileUtils.rename(tmpExt, realExt,
  1147. StandardCopyOption.ATOMIC_MOVE);
  1148. } catch (IOException e) {
  1149. File newExt = new File(realExt.getParentFile(),
  1150. realExt.getName() + ".new"); //$NON-NLS-1$
  1151. try {
  1152. FileUtils.rename(tmpExt, newExt,
  1153. StandardCopyOption.ATOMIC_MOVE);
  1154. } catch (IOException e2) {
  1155. newExt = tmpExt;
  1156. e = e2;
  1157. }
  1158. throw new IOException(MessageFormat.format(
  1159. JGitText.get().panicCantRenameIndexFile, newExt,
  1160. realExt), e);
  1161. }
  1162. }
  1163. boolean interrupted = false;
  1164. try {
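// optionally wait until the new pack is no longer "racy", i.e. until its
// modification time can be distinguished from later updates given the
// filesystem's timestamp resolution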
  1165. FileSnapshot snapshot = FileSnapshot.save(realPack);
  1166. if (pconfig.doWaitPreventRacyPack(snapshot.size())) {
  1167. snapshot.waitUntilNotRacy();
  1168. }
  1169. } catch (InterruptedException e) {
  1170. interrupted = true;
  1171. }
  1172. try {
  1173. return repo.getObjectDatabase().openPack(realPack);
  1174. } finally {
  1175. if (interrupted) {
1176. // Restore the thread's interrupted flag
  1177. Thread.currentThread().interrupt();
  1178. }
  1179. }
  1180. } finally {
  1181. if (tmpPack != null && tmpPack.exists())
  1182. tmpPack.delete();
  1183. for (File tmpExt : tmpExts.values()) {
  1184. if (tmpExt.exists())
  1185. tmpExt.delete();
  1186. }
  1187. }
  1188. }
  1189. private File nameFor(String name, String ext) {
  1190. File packdir = repo.getObjectDatabase().getPackDirectory();
  1191. return new File(packdir, "pack-" + name + ext); //$NON-NLS-1$
  1192. }
  1193. private void checkCancelled() throws CancelledException {
  1194. if (pm.isCancelled() || Thread.currentThread().isInterrupted()) {
  1195. throw new CancelledException(JGitText.get().operationCanceled);
  1196. }
  1197. }
  1198. /**
  1199. * A class holding statistical data for a FileRepository regarding how many
  1200. * objects are stored as loose or packed objects
  1201. */
  1202. public static class RepoStatistics {
  1203. /**
  1204. * The number of objects stored in pack files. If the same object is
1205. * stored in multiple pack files, it is counted once for each pack
1206. * file in which it occurs.
  1207. */
  1208. public long numberOfPackedObjects;
  1209. /**
  1210. * The number of pack files
  1211. */
  1212. public long numberOfPackFiles;
  1213. /**
  1214. * The number of objects stored as loose objects.
  1215. */
  1216. public long numberOfLooseObjects;
  1217. /**
  1218. * The sum of the sizes of all files used to persist loose objects.
  1219. */
  1220. public long sizeOfLooseObjects;
  1221. /**
  1222. * The sum of the sizes of all pack files.
  1223. */
  1224. public long sizeOfPackedObjects;
  1225. /**
  1226. * The number of loose refs.
  1227. */
  1228. public long numberOfLooseRefs;
  1229. /**
  1230. * The number of refs stored in pack files.
  1231. */
  1232. public long numberOfPackedRefs;
  1233. /**
  1234. * The number of bitmaps in the bitmap indices.
  1235. */
  1236. public long numberOfBitmaps;
  1237. @Override
  1238. public String toString() {
  1239. final StringBuilder b = new StringBuilder();
  1240. b.append("numberOfPackedObjects=").append(numberOfPackedObjects); //$NON-NLS-1$
  1241. b.append(", numberOfPackFiles=").append(numberOfPackFiles); //$NON-NLS-1$
  1242. b.append(", numberOfLooseObjects=").append(numberOfLooseObjects); //$NON-NLS-1$
  1243. b.append(", numberOfLooseRefs=").append(numberOfLooseRefs); //$NON-NLS-1$
  1244. b.append(", numberOfPackedRefs=").append(numberOfPackedRefs); //$NON-NLS-1$
  1245. b.append(", sizeOfLooseObjects=").append(sizeOfLooseObjects); //$NON-NLS-1$
  1246. b.append(", sizeOfPackedObjects=").append(sizeOfPackedObjects); //$NON-NLS-1$
  1247. b.append(", numberOfBitmaps=").append(numberOfBitmaps); //$NON-NLS-1$
  1248. return b.toString();
  1249. }
  1250. }
  1251. /**
  1252. * Returns information about objects and pack files for a FileRepository.
  1253. *
  1254. * @return information about objects and pack files for a FileRepository
  1255. * @throws java.io.IOException
  1256. */
  1257. public RepoStatistics getStatistics() throws IOException {
  1258. RepoStatistics ret = new RepoStatistics();
  1259. Collection<PackFile> packs = repo.getObjectDatabase().getPacks();
  1260. for (PackFile f : packs) {
  1261. ret.numberOfPackedObjects += f.getIndex().getObjectCount();
  1262. ret.numberOfPackFiles++;
  1263. ret.sizeOfPackedObjects += f.getPackFile().length();
  1264. if (f.getBitmapIndex() != null)
  1265. ret.numberOfBitmaps += f.getBitmapIndex().getBitmapCount();
  1266. }
  1267. File objDir = repo.getObjectsDirectory();
  1268. String[] fanout = objDir.list();
  1269. if (fanout != null && fanout.length > 0) {
  1270. for (String d : fanout) {
  1271. if (d.length() != 2)
  1272. continue;
  1273. File[] entries = new File(objDir, d).listFiles();
  1274. if (entries == null)
  1275. continue;
  1276. for (File f : entries) {
  1277. if (f.getName().length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
  1278. continue;
  1279. ret.numberOfLooseObjects++;
  1280. ret.sizeOfLooseObjects += f.length();
  1281. }
  1282. }
  1283. }
  1284. RefDatabase refDb = repo.getRefDatabase();
  1285. for (Ref r : refDb.getRefs()) {
  1286. Storage storage = r.getStorage();
  1287. if (storage == Storage.LOOSE || storage == Storage.LOOSE_PACKED)
  1288. ret.numberOfLooseRefs++;
  1289. if (storage == Storage.PACKED || storage == Storage.LOOSE_PACKED)
  1290. ret.numberOfPackedRefs++;
  1291. }
  1292. return ret;
  1293. }
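// Illustrative use of the statistics API (a sketch, not part of this
// class; "repository" is assumed to be an existing FileRepository):
//
//   RepoStatistics stats = new GC(repository).getStatistics();
//   if (stats.numberOfLooseObjects > 6700) {
//       // a repack is probably worthwhile
//   }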
  1294. /**
  1295. * Set the progress monitor used for garbage collection methods.
  1296. *
  1297. * @param pm a {@link org.eclipse.jgit.lib.ProgressMonitor} object.
  1298. * @return this
  1299. */
  1300. public GC setProgressMonitor(ProgressMonitor pm) {
  1301. this.pm = (pm == null) ? NullProgressMonitor.INSTANCE : pm;
  1302. return this;
  1303. }
  1304. /**
  1305. * During gc() or prune() each unreferenced, loose object which has been
  1306. * created or modified in the last <code>expireAgeMillis</code> milliseconds
  1307. * will not be pruned. Only older objects may be pruned. If set to 0 then
  1308. * every object is a candidate for pruning.
  1309. *
  1310. * @param expireAgeMillis
  1311. * minimal age of objects to be pruned in milliseconds.
  1312. */
  1313. public void setExpireAgeMillis(long expireAgeMillis) {
  1314. this.expireAgeMillis = expireAgeMillis;
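// an age in milliseconds replaces any previously configured absolute
// expiration instant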
  1315. expire = null;
  1316. }
  1317. /**
  1318. * During gc() or prune() packfiles which are created or modified in the
  1319. * last <code>packExpireAgeMillis</code> milliseconds will not be deleted.
  1320. * Only older packfiles may be deleted. If set to 0 then every packfile is a
  1321. * candidate for deletion.
  1322. *
  1323. * @param packExpireAgeMillis
  1324. * minimal age of packfiles to be deleted in milliseconds.
  1325. */
  1326. public void setPackExpireAgeMillis(long packExpireAgeMillis) {
  1327. this.packExpireAgeMillis = packExpireAgeMillis;
  1328. expire = null;
  1329. }
  1330. /**
1331. * Set the PackConfig used when (re-)writing packfiles. This makes it
1332. * possible to influence how packs are written and to implement something
1333. * similar to "git gc --aggressive".
  1334. *
  1335. * @param pconfig
  1336. * the {@link org.eclipse.jgit.storage.pack.PackConfig} used when
  1337. * writing packs
  1338. */
  1339. public void setPackConfig(@NonNull PackConfig pconfig) {
  1340. this.pconfig = pconfig;
  1341. }
  1342. /**
  1343. * During gc() or prune() each unreferenced, loose object which has been
  1344. * created or modified after or at <code>expire</code> will not be pruned.
  1345. * Only older objects may be pruned. If set to null then every object is a
  1346. * candidate for pruning.
  1347. *
  1348. * @param expire
1349. * instant in time which defines object expiration:
1350. * objects with a modification time before this instant are expired;
1351. * objects with a modification time newer than or equal to this
1352. * instant are not expired
  1353. */
  1354. public void setExpire(Date expire) {
  1355. this.expire = expire;
  1356. expireAgeMillis = -1;
  1357. }
  1358. /**
  1359. * During gc() or prune() packfiles which are created or modified after or
  1360. * at <code>packExpire</code> will not be deleted. Only older packfiles may
  1361. * be deleted. If set to null then every packfile is a candidate for
  1362. * deletion.
  1363. *
  1364. * @param packExpire
  1365. * instant in time which defines packfile expiration
  1366. */
  1367. public void setPackExpire(Date packExpire) {
  1368. this.packExpire = packExpire;
  1369. packExpireAgeMillis = -1;
  1370. }
  1371. /**
  1372. * Set the {@code gc --auto} option.
  1373. *
  1374. * With this option, gc checks whether any housekeeping is required; if not,
  1375. * it exits without performing any work. Some JGit commands run
  1376. * {@code gc --auto} after performing operations that could create many
  1377. * loose objects.
  1378. * <p>
  1379. * Housekeeping is required if there are too many loose objects or too many
  1380. * packs in the repository. If the number of loose objects exceeds the value
1381. * of the gc.auto option, JGit GC consolidates all existing packs into a
1382. * single pack (equivalent to the {@code -A} option), whereas git-core would
  1383. * combine all loose objects into a single pack using {@code repack -d -l}.
  1384. * Setting the value of {@code gc.auto} to 0 disables automatic packing of
  1385. * loose objects.
  1386. * <p>
  1387. * If the number of packs exceeds the value of {@code gc.autoPackLimit},
  1388. * then existing packs (except those marked with a .keep file) are
  1389. * consolidated into a single pack by using the {@code -A} option of repack.
  1390. * Setting {@code gc.autoPackLimit} to 0 disables automatic consolidation of
  1391. * packs.
  1392. * <p>
1393. * Like git, the following JGit commands run auto gc:
  1394. * <ul>
  1395. * <li>fetch</li>
  1396. * <li>merge</li>
  1397. * <li>rebase</li>
  1398. * <li>receive-pack</li>
  1399. * </ul>
  1400. * The auto gc for receive-pack can be suppressed by setting the config
1401. * option {@code receive.autogc = false}.
  1402. *
  1403. * @param auto
  1404. * defines whether gc should do automatic housekeeping
  1405. */
  1406. public void setAuto(boolean auto) {
  1407. this.automatic = auto;
  1408. }
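// Illustrative use of the auto option (a sketch, not part of this class;
// "repository" is assumed to be a FileRepository instance):
//
//   GC gc = new GC(repository);
//   gc.setAuto(true);
//   gc.gc(); // performs housekeeping only if needGc() finds work to do
//
// The thresholds are read from gc.auto and gc.autoPackLimit as described
// above.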
  1409. /**
  1410. * @param background
  1411. * whether to run the gc in a background thread.
  1412. */
  1413. void setBackground(boolean background) {
  1414. this.background = background;
  1415. }
  1416. private boolean needGc() {
  1417. if (tooManyPacks()) {
  1418. addRepackAllOption();
  1419. } else {
  1420. return tooManyLooseObjects();
  1421. }
  1422. // TODO run pre-auto-gc hook, if it fails return false
  1423. return true;
  1424. }
  1425. private void addRepackAllOption() {
  1426. // TODO: if JGit GC is enhanced to support repack's option -l this
  1427. // method needs to be implemented
  1428. }
  1429. /**
  1430. * @return {@code true} if number of packs > gc.autopacklimit (default 50)
  1431. */
  1432. boolean tooManyPacks() {
  1433. int autopacklimit = repo.getConfig().getInt(
  1434. ConfigConstants.CONFIG_GC_SECTION,
  1435. ConfigConstants.CONFIG_KEY_AUTOPACKLIMIT,
  1436. DEFAULT_AUTOPACKLIMIT);
  1437. if (autopacklimit <= 0) {
  1438. return false;
  1439. }
  1440. // JGit always creates two packfiles, one for the objects reachable from
  1441. // branches, and another one for the rest
  1442. return repo.getObjectDatabase().getPacks().size() > (autopacklimit + 1);
  1443. }
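// Example with the default gc.autoPackLimit of 50: a repack is only
// suggested once the repository contains more than 51 pack files,
// accounting for the extra pack JGit itself creates.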
  1444. /**
1445. * Quickly estimate the number of loose objects. SHA-1 hashes are evenly
1446. * distributed, so counting the objects in one directory (bucket 17) suffices.
  1447. *
  1448. * @return {@code true} if number of loose objects > gc.auto (default 6700)
  1449. */
  1450. boolean tooManyLooseObjects() {
  1451. int auto = getLooseObjectLimit();
  1452. if (auto <= 0) {
  1453. return false;
  1454. }
  1455. int n = 0;
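// loose objects fan out over 256 directories; with the default gc.auto
// of 6700 this yields a per-directory threshold of (6700 + 255) / 256 = 27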
  1456. int threshold = (auto + 255) / 256;
  1457. Path dir = repo.getObjectsDirectory().toPath().resolve("17"); //$NON-NLS-1$
  1458. if (!dir.toFile().exists()) {
  1459. return false;
  1460. }
  1461. try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, file -> {
  1462. Path fileName = file.getFileName();
  1463. return file.toFile().isFile() && fileName != null
  1464. && PATTERN_LOOSE_OBJECT.matcher(fileName.toString())
  1465. .matches();
  1466. })) {
  1467. for (Iterator<Path> iter = stream.iterator(); iter.hasNext(); iter
  1468. .next()) {
  1469. if (++n > threshold) {
  1470. return true;
  1471. }
  1472. }
  1473. } catch (IOException e) {
  1474. LOG.error(e.getMessage(), e);
  1475. }
  1476. return false;
  1477. }
  1478. private int getLooseObjectLimit() {
  1479. return repo.getConfig().getInt(ConfigConstants.CONFIG_GC_SECTION,
  1480. ConfigConstants.CONFIG_KEY_AUTO, DEFAULT_AUTOLIMIT);
  1481. }
  1482. }