
GC.java 32KB

Limit the range of commits for which bitmaps are created.

A bitmap index contains bitmaps for a set of commits in a pack file. Creating a bitmap for every commit is too expensive, so heuristics select the most "important" commits. The most recent commits are the most valuable. To clone a repository only those for the branch tips are needed. When fetching, only commits since the last fetch are needed.

The commit selection heuristics generally work, but for some repositories the number of selected commits is prohibitively high. One example is the MSM 3.10 Linux kernel. With over 1 million commits on 2820 branches, the current heuristics resulted in +36k selected commits. Each uncompressed bitmap for that repository is ~413k, making it difficult to complete a GC operation in available memory.

The benefit of creating bitmaps over the entire history of a repository like the MSM 3.10 Linux kernel isn't clear. For that repository, most history for the last year appears to be in the last 100k commits. Limiting bitmap commit selection to just those commits reduces the count of selected commits from ~36k to ~10.5k. Dropping bitmaps for older commits does not affect object counting times for clones or for fetches on clients that are reasonably up-to-date.

This patch defines a new "bitmapCommitRange" PackConfig parameter to limit the commit selection process when building bitmaps. The range starts with the most recent commit and walks backwards. A range of 10k considers only the 10000 most recent commits. A range of zero creates bitmaps only for branch tips. A range of -1 (the default) does not limit the range: all commits in the pack are used in the commit selection process.

Change-Id: Ied92c70cfa0778facc670e0f14a0980bed5e3bfb
Signed-off-by: Terry Parker <tparker@google.com>
8 years ago
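The range semantics described in the commit message (a positive N limits selection to the N most recent commits, zero means branch tips only, -1 means unlimited) can be sketched as below. This is an illustrative sketch only, not JGit's implementation; the class and method names are hypothetical.

```java
// Illustrative sketch of the bitmapCommitRange semantics described in the
// commit message above. NOT JGit code; eligibleCount is a hypothetical helper.
public class BitmapRangeSketch {
    /**
     * How many of the newest commits take part in bitmap commit selection,
     * given the total commit count and the configured bitmapCommitRange.
     */
    public static int eligibleCount(int totalCommits, int bitmapCommitRange) {
        if (bitmapCommitRange < 0)
            return totalCommits; // -1 (default): consider all commits
        // 0: tips only (no window); N: the N most recent commits,
        // capped at the number of commits actually present
        return Math.min(bitmapCommitRange, totalCommits);
    }
}
```

For the MSM 3.10 kernel example above, a range of 100000 over roughly one million commits would restrict selection to the newest 100k commits.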
/*
 * Copyright (C) 2012, Christian Halstrick <christian.halstrick@sap.com>
 * Copyright (C) 2011, Shawn O. Pearce <spearce@spearce.org>
 * and other copyright owners as documented in the project's IP log.
 *
 * This program and the accompanying materials are made available
 * under the terms of the Eclipse Distribution License v1.0 which
 * accompanies this distribution, is reproduced below, and is
 * available at http://www.eclipse.org/org/documents/edl-v10.php
 *
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or
 * without modification, are permitted provided that the following
 * conditions are met:
 *
 * - Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *
 * - Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following
 * disclaimer in the documentation and/or other materials provided
 * with the distribution.
 *
 * - Neither the name of the Eclipse Foundation, Inc. nor the
 * names of its contributors may be used to endorse or promote
 * products derived from this software without specific prior
 * written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
 * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
 * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
 * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
 * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package org.eclipse.jgit.internal.storage.file;

import static org.eclipse.jgit.internal.storage.pack.PackExt.BITMAP_INDEX;
import static org.eclipse.jgit.internal.storage.pack.PackExt.INDEX;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.StandardCopyOption;
import java.text.MessageFormat;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.TreeMap;

import org.eclipse.jgit.dircache.DirCacheIterator;
import org.eclipse.jgit.errors.CorruptObjectException;
import org.eclipse.jgit.errors.IncorrectObjectTypeException;
import org.eclipse.jgit.errors.MissingObjectException;
import org.eclipse.jgit.errors.NoWorkTreeException;
import org.eclipse.jgit.internal.JGitText;
import org.eclipse.jgit.internal.storage.pack.PackExt;
import org.eclipse.jgit.internal.storage.pack.PackWriter;
import org.eclipse.jgit.internal.storage.reftree.RefTreeNames;
import org.eclipse.jgit.lib.ConfigConstants;
import org.eclipse.jgit.lib.Constants;
import org.eclipse.jgit.lib.FileMode;
import org.eclipse.jgit.lib.NullProgressMonitor;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.lib.ObjectIdSet;
import org.eclipse.jgit.lib.ProgressMonitor;
import org.eclipse.jgit.lib.Ref;
import org.eclipse.jgit.lib.Ref.Storage;
import org.eclipse.jgit.lib.RefDatabase;
import org.eclipse.jgit.lib.ReflogEntry;
import org.eclipse.jgit.lib.ReflogReader;
import org.eclipse.jgit.revwalk.ObjectWalk;
import org.eclipse.jgit.revwalk.RevObject;
import org.eclipse.jgit.revwalk.RevWalk;
import org.eclipse.jgit.storage.pack.PackConfig;
import org.eclipse.jgit.treewalk.TreeWalk;
import org.eclipse.jgit.treewalk.filter.TreeFilter;
import org.eclipse.jgit.util.FileUtils;
import org.eclipse.jgit.util.GitDateParser;
import org.eclipse.jgit.util.SystemReader;
/**
 * A garbage collector for git {@link FileRepository}. Instances of this class
 * are not thread-safe. Don't use the same instance from multiple threads.
 *
 * This class started as a copy of DfsGarbageCollector from Shawn O. Pearce
 * adapted to FileRepositories.
 */
public class GC {
    private static final String PRUNE_EXPIRE_DEFAULT = "2.weeks.ago"; //$NON-NLS-1$

    private final FileRepository repo;

    private ProgressMonitor pm;

    private long expireAgeMillis = -1;

    private Date expire;

    private PackConfig pconfig = null;

    /**
     * the refs which existed during the last call to {@link #repack()}. This is
     * needed during {@link #prune(Set)} where we can optimize by looking at the
     * difference between the current refs and the refs which existed during
     * last {@link #repack()}.
     */
    private Collection<Ref> lastPackedRefs;

    /**
     * Holds the starting time of the last repack() execution. This is needed in
     * prune() to inspect only those reflog entries which have been added since
     * last repack().
     */
    private long lastRepackTime;

    /**
     * Creates a new garbage collector with default values. An expirationTime of
     * two weeks and <code>null</code> as progress monitor will be used.
     *
     * @param repo
     *            the repo to work on
     */
    public GC(FileRepository repo) {
        this.repo = repo;
        this.pm = NullProgressMonitor.INSTANCE;
    }
    /**
     * Runs a garbage collector on a {@link FileRepository}. It will
     * <ul>
     * <li>pack loose references into packed-refs</li>
     * <li>repack all reachable objects into new pack files and delete the old
     * pack files</li>
     * <li>prune all loose objects which are now reachable by packs</li>
     * </ul>
     *
     * @return the collection of {@link PackFile}'s which are newly created
     * @throws IOException
     * @throws ParseException
     *             If the configuration parameter "gc.pruneexpire" couldn't be
     *             parsed
     */
    public Collection<PackFile> gc() throws IOException, ParseException {
        pm.start(6 /* tasks */);
        packRefs();
        // TODO: implement reflog_expire(pm, repo);
        Collection<PackFile> newPacks = repack();
        prune(Collections.<ObjectId> emptySet());
        // TODO: implement rerere_gc(pm);
        return newPacks;
    }
    /**
     * Delete old pack files. What is 'old' is defined by specifying a set of
     * old pack files and a set of new pack files. Each pack file contained in
     * old pack files but not contained in new pack files will be deleted. If an
     * expirationDate is set then pack files which are younger than the
     * expirationDate will not be deleted.
     *
     * @param oldPacks
     * @param newPacks
     * @throws ParseException
     */
    private void deleteOldPacks(Collection<PackFile> oldPacks,
            Collection<PackFile> newPacks) throws ParseException {
        long expireDate = getExpireDate();
        oldPackLoop: for (PackFile oldPack : oldPacks) {
            String oldName = oldPack.getPackName();
            // check whether an old pack file is also among the list of new
            // pack files. Then we must not delete it.
            for (PackFile newPack : newPacks)
                if (oldName.equals(newPack.getPackName()))
                    continue oldPackLoop;

            if (!oldPack.shouldBeKept()
                    && oldPack.getPackFile().lastModified() < expireDate) {
                oldPack.close();
                prunePack(oldName);
            }
        }
        // close the complete object database. That's my only chance to force
        // rescanning and to detect that certain pack files are now deleted.
        repo.getObjectDatabase().close();
    }
    /**
     * Delete files associated with a single pack file. First try to delete the
     * ".pack" file because on some platforms the ".pack" file may be locked and
     * can't be deleted. In such a case it is better to detect this early and
     * give up on deleting files for this packfile. Otherwise we may delete the
     * ".index" file and when failing to delete the ".pack" file we are left
     * with a ".pack" file without a ".index" file.
     *
     * @param packName
     */
    private void prunePack(String packName) {
        PackExt[] extensions = PackExt.values();
        try {
            // Delete the .pack file first and if this fails give up on
            // deleting the other files
            int deleteOptions = FileUtils.RETRY | FileUtils.SKIP_MISSING;
            for (PackExt ext : extensions)
                if (PackExt.PACK.equals(ext)) {
                    File f = nameFor(packName, "." + ext.getExtension()); //$NON-NLS-1$
                    FileUtils.delete(f, deleteOptions);
                    break;
                }
            // The .pack file has been deleted. Delete as many of the other
            // files as you can.
            deleteOptions |= FileUtils.IGNORE_ERRORS;
            for (PackExt ext : extensions) {
                if (!PackExt.PACK.equals(ext)) {
                    File f = nameFor(packName, "." + ext.getExtension()); //$NON-NLS-1$
                    FileUtils.delete(f, deleteOptions);
                }
            }
        } catch (IOException e) {
            // Deletion of the .pack file failed. Silently return.
        }
    }
    /**
     * Like "git prune-packed" this method tries to prune all loose objects
     * which can be found in packs. If certain objects can't be pruned (e.g.
     * because the filesystem delete operation fails) this is silently ignored.
     *
     * @throws IOException
     */
    public void prunePacked() throws IOException {
        ObjectDirectory objdb = repo.getObjectDatabase();
        Collection<PackFile> packs = objdb.getPacks();
        File objects = repo.getObjectsDirectory();
        String[] fanout = objects.list();

        if (fanout != null && fanout.length > 0) {
            pm.beginTask(JGitText.get().pruneLoosePackedObjects, fanout.length);
            try {
                for (String d : fanout) {
                    pm.update(1);
                    if (d.length() != 2)
                        continue;
                    String[] entries = new File(objects, d).list();
                    if (entries == null)
                        continue;
                    for (String e : entries) {
                        if (e.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
                            continue;
                        ObjectId id;
                        try {
                            id = ObjectId.fromString(d + e);
                        } catch (IllegalArgumentException notAnObject) {
                            // ignoring the file that does not represent loose
                            // object
                            continue;
                        }
                        boolean found = false;
                        for (PackFile p : packs)
                            if (p.hasObject(id)) {
                                found = true;
                                break;
                            }
                        if (found)
                            FileUtils.delete(objdb.fileFor(id), FileUtils.RETRY
                                    | FileUtils.SKIP_MISSING
                                    | FileUtils.IGNORE_ERRORS);
                    }
                }
            } finally {
                pm.endTask();
            }
        }
    }
    /**
     * Like "git prune" this method tries to prune all loose objects which are
     * unreferenced. If certain objects can't be pruned (e.g. because the
     * filesystem delete operation fails) this is silently ignored.
     *
     * @param objectsToKeep
     *            a set of objects which should explicitly not be pruned
     *
     * @throws IOException
     * @throws ParseException
     *             If the configuration parameter "gc.pruneexpire" couldn't be
     *             parsed
     */
    public void prune(Set<ObjectId> objectsToKeep) throws IOException,
            ParseException {
        long expireDate = getExpireDate();

        // Collect all loose objects which are old enough, not referenced from
        // the index and not in objectsToKeep
        Map<ObjectId, File> deletionCandidates = new HashMap<ObjectId, File>();
        Set<ObjectId> indexObjects = null;
        File objects = repo.getObjectsDirectory();
        String[] fanout = objects.list();
        if (fanout != null && fanout.length > 0) {
            pm.beginTask(JGitText.get().pruneLooseUnreferencedObjects,
                    fanout.length);
            try {
                for (String d : fanout) {
                    pm.update(1);
                    if (d.length() != 2)
                        continue;
                    File[] entries = new File(objects, d).listFiles();
                    if (entries == null)
                        continue;
                    for (File f : entries) {
                        String fName = f.getName();
                        if (fName.length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
                            continue;
                        if (f.lastModified() >= expireDate)
                            continue;
                        try {
                            ObjectId id = ObjectId.fromString(d + fName);
                            if (objectsToKeep.contains(id))
                                continue;
                            if (indexObjects == null)
                                indexObjects = listNonHEADIndexObjects();
                            if (indexObjects.contains(id))
                                continue;
                            deletionCandidates.put(id, f);
                        } catch (IllegalArgumentException notAnObject) {
                            // ignoring the file that does not represent loose
                            // object
                            continue;
                        }
                    }
                }
            } finally {
                pm.endTask();
            }
        }

        if (deletionCandidates.isEmpty())
            return;

        // From the set of current refs remove all those which have been handled
        // during last repack(). Only those refs will survive which have been
        // added or modified since the last repack. Only these can save existing
        // loose refs from being pruned.
        Collection<Ref> newRefs;
        if (lastPackedRefs == null || lastPackedRefs.isEmpty())
            newRefs = getAllRefs();
        else {
            Map<String, Ref> last = new HashMap<>();
            for (Ref r : lastPackedRefs) {
                last.put(r.getName(), r);
            }
            newRefs = new ArrayList<>();
            for (Ref r : getAllRefs()) {
                Ref old = last.get(r.getName());
                if (!equals(r, old)) {
                    newRefs.add(r);
                }
            }
        }

        if (!newRefs.isEmpty()) {
            // There are new/modified refs! Check which loose objects are now
            // referenced by these modified refs (or their reflog entries).
            // Remove these loose objects from the deletionCandidates. When the
            // last candidate is removed leave this method.
            ObjectWalk w = new ObjectWalk(repo);
            try {
                for (Ref cr : newRefs)
                    w.markStart(w.parseAny(cr.getObjectId()));
                if (lastPackedRefs != null)
                    for (Ref lpr : lastPackedRefs)
                        w.markUninteresting(w.parseAny(lpr.getObjectId()));
                removeReferenced(deletionCandidates, w);
            } finally {
                w.dispose();
            }
        }

        if (deletionCandidates.isEmpty())
            return;

        // Since we have not left the method yet there are still
        // deletionCandidates. Last chance for these objects not to be pruned is
        // that they are referenced by reflog entries. Even refs which currently
        // point to the same object as during last repack() may have
        // additional reflog entries not handled during last repack()
        ObjectWalk w = new ObjectWalk(repo);
        try {
            for (Ref ar : getAllRefs())
                for (ObjectId id : listRefLogObjects(ar, lastRepackTime))
                    w.markStart(w.parseAny(id));
            if (lastPackedRefs != null)
                for (Ref lpr : lastPackedRefs)
                    w.markUninteresting(w.parseAny(lpr.getObjectId()));
            removeReferenced(deletionCandidates, w);
        } finally {
            w.dispose();
        }

        if (deletionCandidates.isEmpty())
            return;

        // delete all candidates which have survived: these are unreferenced
        // loose objects
        for (File f : deletionCandidates.values())
            f.delete();

        repo.getObjectDatabase().close();
    }
    private long getExpireDate() throws ParseException {
        long expireDate = Long.MAX_VALUE;
        if (expire == null && expireAgeMillis == -1) {
            String pruneExpireStr = repo.getConfig().getString(
                    ConfigConstants.CONFIG_GC_SECTION, null,
                    ConfigConstants.CONFIG_KEY_PRUNEEXPIRE);
            if (pruneExpireStr == null)
                pruneExpireStr = PRUNE_EXPIRE_DEFAULT;
            expire = GitDateParser.parse(pruneExpireStr, null, SystemReader
                    .getInstance().getLocale());
            expireAgeMillis = -1;
        }
        if (expire != null)
            expireDate = expire.getTime();
        if (expireAgeMillis != -1)
            expireDate = System.currentTimeMillis() - expireAgeMillis;
        return expireDate;
    }
    /**
     * Remove all entries from a map whose key is the id of an object referenced
     * by the given ObjectWalk
     *
     * @param id2File
     * @param w
     * @throws MissingObjectException
     * @throws IncorrectObjectTypeException
     * @throws IOException
     */
    private void removeReferenced(Map<ObjectId, File> id2File,
            ObjectWalk w) throws MissingObjectException,
            IncorrectObjectTypeException, IOException {
        RevObject ro = w.next();
        while (ro != null) {
            if (id2File.remove(ro.getId()) != null)
                if (id2File.isEmpty())
                    return;
            ro = w.next();
        }
        ro = w.nextObject();
        while (ro != null) {
            if (id2File.remove(ro.getId()) != null)
                if (id2File.isEmpty())
                    return;
            ro = w.nextObject();
        }
    }

    private static boolean equals(Ref r1, Ref r2) {
        if (r1 == null || r2 == null)
            return false;
        if (r1.isSymbolic()) {
            if (!r2.isSymbolic())
                return false;
            return r1.getTarget().getName().equals(r2.getTarget().getName());
        } else {
            if (r2.isSymbolic()) {
                return false;
            }
            return Objects.equals(r1.getObjectId(), r2.getObjectId());
        }
    }
    /**
     * Packs all non-symbolic, loose refs into packed-refs.
     *
     * @throws IOException
     */
    public void packRefs() throws IOException {
        Collection<Ref> refs = repo.getRefDatabase().getRefs(Constants.R_REFS).values();
        List<String> refsToBePacked = new ArrayList<String>(refs.size());
        pm.beginTask(JGitText.get().packRefs, refs.size());
        try {
            for (Ref ref : refs) {
                if (!ref.isSymbolic() && ref.getStorage().isLoose())
                    refsToBePacked.add(ref.getName());
                pm.update(1);
            }
            ((RefDirectory) repo.getRefDatabase()).pack(refsToBePacked);
        } finally {
            pm.endTask();
        }
    }
  483. /**
  484. * Packs all objects which reachable from any of the heads into one pack
  485. * file. Additionally all objects which are not reachable from any head but
  486. * which are reachable from any of the other refs (e.g. tags), special refs
  487. * (e.g. FETCH_HEAD) or index are packed into a separate pack file. Objects
  488. * included in pack files which have a .keep file associated are never
  489. * repacked. All old pack files which existed before are deleted.
  490. *
  491. * @return a collection of the newly created pack files
  492. * @throws IOException
  493. * when during reading of refs, index, packfiles, objects,
  494. * reflog-entries or during writing to the packfiles
  495. * {@link IOException} occurs
  496. */
	public Collection<PackFile> repack() throws IOException {
		Collection<PackFile> toBeDeleted = repo.getObjectDatabase().getPacks();

		long time = System.currentTimeMillis();
		Collection<Ref> refsBefore = getAllRefs();

		Set<ObjectId> allHeads = new HashSet<ObjectId>();
		Set<ObjectId> nonHeads = new HashSet<ObjectId>();
		Set<ObjectId> txnHeads = new HashSet<ObjectId>();
		Set<ObjectId> tagTargets = new HashSet<ObjectId>();
		Set<ObjectId> indexObjects = listNonHEADIndexObjects();
		RefDatabase refdb = repo.getRefDatabase();

		for (Ref ref : refsBefore) {
			nonHeads.addAll(listRefLogObjects(ref, 0));
			if (ref.isSymbolic() || ref.getObjectId() == null)
				continue;
			if (ref.getName().startsWith(Constants.R_HEADS))
				allHeads.add(ref.getObjectId());
			else if (RefTreeNames.isRefTree(refdb, ref.getName()))
				txnHeads.add(ref.getObjectId());
			else
				nonHeads.add(ref.getObjectId());
			if (ref.getPeeledObjectId() != null)
				tagTargets.add(ref.getPeeledObjectId());
		}

		List<ObjectIdSet> excluded = new LinkedList<ObjectIdSet>();
		for (final PackFile f : repo.getObjectDatabase().getPacks())
			if (f.shouldBeKept())
				excluded.add(f.getIndex());

		tagTargets.addAll(allHeads);
		nonHeads.addAll(indexObjects);

		List<PackFile> ret = new ArrayList<PackFile>(2);
		PackFile heads = null;
		if (!allHeads.isEmpty()) {
			heads = writePack(allHeads, Collections.<ObjectId> emptySet(),
					tagTargets, excluded);
			if (heads != null) {
				ret.add(heads);
				excluded.add(0, heads.getIndex());
			}
		}
		if (!nonHeads.isEmpty()) {
			PackFile rest = writePack(nonHeads, allHeads, tagTargets, excluded);
			if (rest != null)
				ret.add(rest);
		}
		if (!txnHeads.isEmpty()) {
			PackFile txn = writePack(txnHeads, null, null, excluded);
			if (txn != null)
				ret.add(txn);
		}
		try {
			deleteOldPacks(toBeDeleted, ret);
		} catch (ParseException e) {
			// TODO: the exception has to be wrapped into an IOException
			// because throwing the ParseException directly would break the
			// API; instead we should throw a ConfigInvalidException
			throw new IOException(e);
		}
		prunePacked();

		lastPackedRefs = refsBefore;
		lastRepackTime = time;
		return ret;
	}

	/**
	 * @param ref
	 *            the ref whose log should be inspected
	 * @param minTime
	 *            only reflog entries not older than this time are processed
	 * @return the {@link ObjectId}s contained in the reflog
	 * @throws IOException
	 */
	private Set<ObjectId> listRefLogObjects(Ref ref, long minTime)
			throws IOException {
		ReflogReader reflogReader = repo.getReflogReader(ref.getName());
		if (reflogReader == null) {
			return Collections.emptySet();
		}
		List<ReflogEntry> rlEntries = reflogReader.getReverseEntries();
		if (rlEntries == null || rlEntries.isEmpty())
			return Collections.<ObjectId> emptySet();
		Set<ObjectId> ret = new HashSet<ObjectId>();
		for (ReflogEntry e : rlEntries) {
			if (e.getWho().getWhen().getTime() < minTime)
				break;
			ObjectId newId = e.getNewId();
			if (newId != null && !ObjectId.zeroId().equals(newId))
				ret.add(newId);
			ObjectId oldId = e.getOldId();
			if (oldId != null && !ObjectId.zeroId().equals(oldId))
				ret.add(oldId);
		}
		return ret;
	}

	/**
	 * Returns all refs plus additional refs (e.g. FETCH_HEAD, MERGE_HEAD,
	 * ...)
	 *
	 * @return a collection of all refs and additional refs
	 * @throws IOException
	 */
	private Collection<Ref> getAllRefs() throws IOException {
		Collection<Ref> refs = RefTreeNames.allRefs(repo.getRefDatabase());
		List<Ref> addl = repo.getRefDatabase().getAdditionalRefs();
		if (!addl.isEmpty()) {
			List<Ref> all = new ArrayList<>(refs.size() + addl.size());
			all.addAll(refs);
			all.addAll(addl);
			return all;
		}
		return refs;
	}

	/**
	 * Return a list of those objects in the index which differ from what is
	 * in HEAD
	 *
	 * @return a set of ObjectIds of changed objects in the index
	 * @throws IOException
	 * @throws CorruptObjectException
	 * @throws NoWorkTreeException
	 */
	private Set<ObjectId> listNonHEADIndexObjects()
			throws CorruptObjectException, IOException {
		if (repo.isBare()) {
			return Collections.emptySet();
		}
		try (TreeWalk treeWalk = new TreeWalk(repo)) {
			treeWalk.addTree(new DirCacheIterator(repo.readDirCache()));
			ObjectId headID = repo.resolve(Constants.HEAD);
			if (headID != null) {
				try (RevWalk revWalk = new RevWalk(repo)) {
					treeWalk.addTree(revWalk.parseTree(headID));
				}
			}

			treeWalk.setFilter(TreeFilter.ANY_DIFF);
			treeWalk.setRecursive(true);
			Set<ObjectId> ret = new HashSet<ObjectId>();

			while (treeWalk.next()) {
				ObjectId objectId = treeWalk.getObjectId(0);
				switch (treeWalk.getRawMode(0) & FileMode.TYPE_MASK) {
				case FileMode.TYPE_MISSING:
				case FileMode.TYPE_GITLINK:
					continue;
				case FileMode.TYPE_TREE:
				case FileMode.TYPE_FILE:
				case FileMode.TYPE_SYMLINK:
					ret.add(objectId);
					continue;
				default:
					throw new IOException(MessageFormat.format(
							JGitText.get().corruptObjectInvalidMode3,
							String.format("%o", //$NON-NLS-1$
									Integer.valueOf(treeWalk.getRawMode(0))),
							(objectId == null) ? "null" : objectId.name(), //$NON-NLS-1$
							treeWalk.getPathString(), //
							repo.getIndexFile()));
				}
			}
			return ret;
		}
	}

	private PackFile writePack(Set<? extends ObjectId> want,
			Set<? extends ObjectId> have, Set<ObjectId> tagTargets,
			List<ObjectIdSet> excludeObjects) throws IOException {
		File tmpPack = null;
		Map<PackExt, File> tmpExts = new TreeMap<PackExt, File>(
				new Comparator<PackExt>() {
					public int compare(PackExt o1, PackExt o2) {
						// INDEX entries must be returned last, so the pack
						// scanner does not pick up the new pack until all
						// the PackExt entries have been written.
						if (o1 == o2)
							return 0;
						if (o1 == PackExt.INDEX)
							return 1;
						if (o2 == PackExt.INDEX)
							return -1;
						return Integer.signum(o1.hashCode() - o2.hashCode());
					}
				});
		try (PackWriter pw = new PackWriter(
				(pconfig == null) ? new PackConfig(repo) : pconfig,
				repo.newObjectReader())) {
			// prepare the PackWriter
			pw.setDeltaBaseAsOffset(true);
			pw.setReuseDeltaCommits(false);
			if (tagTargets != null)
				pw.setTagTargets(tagTargets);
			if (excludeObjects != null)
				for (ObjectIdSet idx : excludeObjects)
					pw.excludeObjects(idx);
			pw.preparePack(pm, want, have);
			if (pw.getObjectCount() == 0)
				return null;

			// create temporary files
			String id = pw.computeName().getName();
			File packdir = new File(repo.getObjectsDirectory(), "pack"); //$NON-NLS-1$
			tmpPack = File.createTempFile("gc_", ".pack_tmp", packdir); //$NON-NLS-1$ //$NON-NLS-2$
			final String tmpBase = tmpPack.getName()
					.substring(0, tmpPack.getName().lastIndexOf('.'));
			File tmpIdx = new File(packdir, tmpBase + ".idx_tmp"); //$NON-NLS-1$
			tmpExts.put(INDEX, tmpIdx);
			if (!tmpIdx.createNewFile())
				throw new IOException(MessageFormat.format(
						JGitText.get().cannotCreateIndexfile,
						tmpIdx.getPath()));

			// write the packfile
			FileOutputStream fos = new FileOutputStream(tmpPack);
			FileChannel channel = fos.getChannel();
			OutputStream channelStream = Channels.newOutputStream(channel);
			try {
				pw.writePack(pm, pm, channelStream);
			} finally {
				channel.force(true);
				channelStream.close();
				fos.close();
			}

			// write the packindex
			fos = new FileOutputStream(tmpIdx);
			FileChannel idxChannel = fos.getChannel();
			OutputStream idxStream = Channels.newOutputStream(idxChannel);
			try {
				pw.writeIndex(idxStream);
			} finally {
				idxChannel.force(true);
				idxStream.close();
				fos.close();
			}

			if (pw.prepareBitmapIndex(pm)) {
				File tmpBitmapIdx = new File(packdir, tmpBase + ".bitmap_tmp"); //$NON-NLS-1$
				tmpExts.put(BITMAP_INDEX, tmpBitmapIdx);
				if (!tmpBitmapIdx.createNewFile())
					throw new IOException(MessageFormat.format(
							JGitText.get().cannotCreateIndexfile,
							tmpBitmapIdx.getPath()));

				fos = new FileOutputStream(tmpBitmapIdx);
				idxChannel = fos.getChannel();
				idxStream = Channels.newOutputStream(idxChannel);
				try {
					pw.writeBitmapIndex(idxStream);
				} finally {
					idxChannel.force(true);
					idxStream.close();
					fos.close();
				}
			}

			// rename the temporary files to real files
			File realPack = nameFor(id, ".pack"); //$NON-NLS-1$

			// if the packfile already exists (because we are rewriting a
			// packfile for the same set of objects maybe with different
			// PackConfig) then make sure we get rid of all handles on the
			// file. Windows will not allow for rename otherwise.
			if (realPack.exists())
				for (PackFile p : repo.getObjectDatabase().getPacks())
					if (realPack.getPath().equals(p.getPackFile().getPath())) {
						p.close();
						break;
					}
			tmpPack.setReadOnly();

			FileUtils.rename(tmpPack, realPack, StandardCopyOption.ATOMIC_MOVE);
			for (Map.Entry<PackExt, File> tmpEntry : tmpExts.entrySet()) {
				File tmpExt = tmpEntry.getValue();
				tmpExt.setReadOnly();

				File realExt = nameFor(id,
						"." + tmpEntry.getKey().getExtension()); //$NON-NLS-1$
				try {
					FileUtils.rename(tmpExt, realExt,
							StandardCopyOption.ATOMIC_MOVE);
				} catch (IOException e) {
					File newExt = new File(realExt.getParentFile(),
							realExt.getName() + ".new"); //$NON-NLS-1$
					try {
						FileUtils.rename(tmpExt, newExt,
								StandardCopyOption.ATOMIC_MOVE);
					} catch (IOException e2) {
						newExt = tmpExt;
						e = e2;
					}
					throw new IOException(MessageFormat.format(
							JGitText.get().panicCantRenameIndexFile, newExt,
							realExt), e);
				}
			}

			return repo.getObjectDatabase().openPack(realPack);
		} finally {
			if (tmpPack != null && tmpPack.exists())
				tmpPack.delete();
			for (File tmpExt : tmpExts.values()) {
				if (tmpExt.exists())
					tmpExt.delete();
			}
		}
	}

	private File nameFor(String name, String ext) {
		File packdir = new File(repo.getObjectsDirectory(), "pack"); //$NON-NLS-1$
		return new File(packdir, "pack-" + name + ext); //$NON-NLS-1$
	}

	/**
	 * A class holding statistical data for a FileRepository regarding how
	 * many objects are stored as loose or packed objects
	 */
	public class RepoStatistics {
		/**
		 * The number of objects stored in pack files. If the same object is
		 * stored in multiple pack files then it is counted as often as it
		 * occurs in pack files.
		 */
		public long numberOfPackedObjects;

		/**
		 * The number of pack files
		 */
		public long numberOfPackFiles;

		/**
		 * The number of objects stored as loose objects.
		 */
		public long numberOfLooseObjects;

		/**
		 * The sum of the sizes of all files used to persist loose objects.
		 */
		public long sizeOfLooseObjects;

		/**
		 * The sum of the sizes of all pack files.
		 */
		public long sizeOfPackedObjects;

		/**
		 * The number of loose refs.
		 */
		public long numberOfLooseRefs;

		/**
		 * The number of refs stored in pack files.
		 */
		public long numberOfPackedRefs;

		/**
		 * The number of bitmaps in the bitmap indices.
		 */
		public long numberOfBitmaps;

		@Override
		public String toString() {
			final StringBuilder b = new StringBuilder();
			b.append("numberOfPackedObjects=").append(numberOfPackedObjects); //$NON-NLS-1$
			b.append(", numberOfPackFiles=").append(numberOfPackFiles); //$NON-NLS-1$
			b.append(", numberOfLooseObjects=").append(numberOfLooseObjects); //$NON-NLS-1$
			b.append(", numberOfLooseRefs=").append(numberOfLooseRefs); //$NON-NLS-1$
			b.append(", numberOfPackedRefs=").append(numberOfPackedRefs); //$NON-NLS-1$
			b.append(", sizeOfLooseObjects=").append(sizeOfLooseObjects); //$NON-NLS-1$
			b.append(", sizeOfPackedObjects=").append(sizeOfPackedObjects); //$NON-NLS-1$
			b.append(", numberOfBitmaps=").append(numberOfBitmaps); //$NON-NLS-1$
			return b.toString();
		}
	}

	/**
	 * Returns information about objects and pack files for a FileRepository.
	 *
	 * @return information about objects and pack files for a FileRepository
	 * @throws IOException
	 */
	public RepoStatistics getStatistics() throws IOException {
		RepoStatistics ret = new RepoStatistics();
		Collection<PackFile> packs = repo.getObjectDatabase().getPacks();
		for (PackFile f : packs) {
			ret.numberOfPackedObjects += f.getIndex().getObjectCount();
			ret.numberOfPackFiles++;
			ret.sizeOfPackedObjects += f.getPackFile().length();
			if (f.getBitmapIndex() != null)
				ret.numberOfBitmaps += f.getBitmapIndex().getBitmapCount();
		}

		File objDir = repo.getObjectsDirectory();
		String[] fanout = objDir.list();
		if (fanout != null && fanout.length > 0) {
			for (String d : fanout) {
				if (d.length() != 2)
					continue;
				File[] entries = new File(objDir, d).listFiles();
				if (entries == null)
					continue;
				for (File f : entries) {
					if (f.getName().length() != Constants.OBJECT_ID_STRING_LENGTH - 2)
						continue;
					ret.numberOfLooseObjects++;
					ret.sizeOfLooseObjects += f.length();
				}
			}
		}

		RefDatabase refDb = repo.getRefDatabase();
		for (Ref r : refDb.getRefs(RefDatabase.ALL).values()) {
			Storage storage = r.getStorage();
			if (storage == Storage.LOOSE || storage == Storage.LOOSE_PACKED)
				ret.numberOfLooseRefs++;
			if (storage == Storage.PACKED || storage == Storage.LOOSE_PACKED)
				ret.numberOfPackedRefs++;
		}
		return ret;
	}

	/**
	 * Set the progress monitor used for garbage collection methods.
	 *
	 * @param pm
	 *            the progress monitor to use; if null, a
	 *            {@link NullProgressMonitor} is used instead
	 * @return this
	 */
	public GC setProgressMonitor(ProgressMonitor pm) {
		this.pm = (pm == null) ? NullProgressMonitor.INSTANCE : pm;
		return this;
	}

	/**
	 * During gc() or prune() each unreferenced, loose object which has been
	 * created or modified in the last <code>expireAgeMillis</code>
	 * milliseconds will not be pruned. Only older objects may be pruned. If
	 * set to 0 then every object is a candidate for pruning.
	 *
	 * @param expireAgeMillis
	 *            minimal age, in milliseconds, of objects to be pruned
	 */
	public void setExpireAgeMillis(long expireAgeMillis) {
		this.expireAgeMillis = expireAgeMillis;
		expire = null;
	}

	/**
	 * Set the PackConfig used when (re-)writing packfiles. This makes it
	 * possible to influence how packs are written and to implement something
	 * similar to "git gc --aggressive".
	 *
	 * @since 3.6
	 * @param pconfig
	 *            the {@link PackConfig} used when writing packs
	 */
	public void setPackConfig(PackConfig pconfig) {
		this.pconfig = pconfig;
	}

	/**
	 * During gc() or prune() each unreferenced, loose object which has been
	 * created or modified after or at <code>expire</code> will not be pruned.
	 * Only older objects may be pruned. If set to null then every object is a
	 * candidate for pruning.
	 *
	 * @param expire
	 *            instant in time which defines object expiration: objects
	 *            with modification time before this instant are expired;
	 *            objects with modification time newer than or equal to this
	 *            instant are not expired
	 */
	public void setExpire(Date expire) {
		this.expire = expire;
		expireAgeMillis = -1;
	}
}
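The ordering trick in writePack above (the temporary-file map sorts the INDEX extension last, so a concurrent pack scanner cannot observe the new pack before its companion files exist) can be illustrated with a small standalone sketch. The class `IndexLastOrderDemo` and its `Ext` enum are hypothetical stand-ins for JGit's `PackExt`, not part of JGit; the tie-break uses `ordinal()` instead of the original's `hashCode()` purely for deterministic output.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Standalone sketch; Ext is a stand-in for org.eclipse.jgit PackExt.
class IndexLastOrderDemo {
	enum Ext { PACK, BITMAP_INDEX, INDEX }

	// Same rule as GC.writePack's comparator: INDEX compares greater than
	// every other extension, so it always sorts (and gets renamed) last.
	static final Comparator<Ext> INDEX_LAST = (o1, o2) -> {
		if (o1 == o2)
			return 0;
		if (o1 == Ext.INDEX)
			return 1;
		if (o2 == Ext.INDEX)
			return -1;
		// arbitrary but consistent order among the remaining extensions
		return Integer.signum(o1.ordinal() - o2.ordinal());
	};

	public static void main(String[] args) {
		List<Ext> exts = new ArrayList<>(
				List.of(Ext.INDEX, Ext.PACK, Ext.BITMAP_INDEX));
		exts.sort(INDEX_LAST);
		System.out.println(exts); // prints [PACK, BITMAP_INDEX, INDEX]
	}
}
```

Because the rename loop in writePack iterates the TreeMap in this comparator's order, the .idx file appears only after the .pack and .bitmap files are already in place, which is what makes the swap safe for concurrent readers.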