path: root/org.eclipse.jgit
Commit message | Author | Date | Files | Lines
* Fix bad test fix from 0bff481 "Limit receive commands" | Shawn Pearce | 2017-02-20 | 1 | -9/+14

    In 0bff481d45db74db81a3b1b86f7401443a60d970, to apply the two limits
    accurately, it was necessary to move the LimitedInputStream out of the
    PacketLineIn and further down to the PackParser. Unfortunately this
    didn't survive review, as a buggy test failed and the "fix" was to drop
    this part of the code.

    The maxPackSizeLimit should apply to the pack stream, not the pkt-line
    framing used to send commands to control the ReceivePack instance. The
    commands are controlled using a different limit.

    The failing test allowed too many bytes in the pack and was only failing
    because it was including the command framing. The correct fix for the
    test was simply to drop the limit lower, to more closely match the
    actual pack size.

    Change-Id: I47d3885b9d7d527e153df7ac9c62fc2865ceecf4

* Add some more missing @Override annotations | David Pursehouse | 2017-02-20 | 1 | -0/+1

    Change-Id: Ic13160920b986edde87c928c473240cc9c034f50
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* Enable and fix 'Should be tagged with @Override' warning | David Pursehouse | 2017-02-19 | 217 | -1/+633

    Set missingOverrideAnnotation=warning in Eclipse compiler preferences
    which enables the warning:

      The method <method> of type <type> should be tagged with @Override
      since it actually overrides a superclass method

    Justification for this warning is described in:

      http://stackoverflow.com/a/94411/381622

    Enabling this causes in excess of 1000 warnings across the entire
    code-base. They are very easy to fix automatically with Eclipse's
    "Quick Fix" tool.

    Fix all of them except 2 which cause compilation failure when the
    project is built with mvn; add TODO comments on those for further
    investigation.

    Change-Id: I5772061041fd361fe93137fd8b0ad356e748a29c
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* Fix typo in @since | Thomas Wolf | 2017-02-19 | 1 | -1/+1

    Change-Id: I266b0c72d2827bcf2b86ddc6c1892d1a46c548eb
    Signed-off-by: Thomas Wolf <thomas.wolf@paranor.ch>

* PullCommand: Allow to set tag behavior | David Pursehouse | 2017-02-18 | 1 | -3/+18

    Add a new method setTagOpt which sets the annotated tag behavior during
    fetch. Pass the option to the fetch command.

    No explicit tests are added; the fetch-with-tags functionality is
    already covered by the tests of the fetch command.

    Change-Id: I131e1f68d8fcced178d8fa48abf7ffab17f8e173
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

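A minimal sketch of how a caller might use the new option, assuming the standard Git/TagOpt API; the repository path is a placeholder:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.PullResult;
import org.eclipse.jgit.transport.TagOpt;

public class PullWithTags {
    public static void main(String[] args) throws Exception {
        // Open an existing work tree and pull, fetching all annotated tags.
        try (Git git = Git.open(new File("/path/to/repo"))) {
            PullResult result = git.pull()
                    .setRemote("origin")
                    .setTagOpt(TagOpt.FETCH_TAGS) // option wired through by this change
                    .call();
            System.out.println(result.isSuccessful());
        }
    }
}
```
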
* Set commit time to ZipArchiveEntry | Naoki Takezoe | 2017-02-18 | 2 | -5/+40

    Archived zip files for the same commit have different MD5 hashes
    because mdate and mtime in the header of the zip entries are not
    specified. In that case Commons Compress sets the time of archiving.
    The original git implementation sets the commit time instead:

    https://github.com/git/git/blob/e2b2d6a172b76d44cb7b1ddb12ea5bfac9613a44/archive.c#L378

    With this fix, the archive command sets the commit time on the
    ZipArchiveEntry when a RevCommit is given as the archiving target.

    Change-Id: I30dd8710e910cdf42d57742f8709e9803930a123
    Signed-off-by: Naoki Takezoe <takezoe@gmail.com>
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

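For illustration only, a hedged sketch of carrying the commit time onto a zip entry; the helper name is invented, and the real change lives in JGit's archive code:

```java
import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
import org.eclipse.jgit.revwalk.RevCommit;

class ZipEntryTimeSketch {
    // Hypothetical helper: stamp a zip entry with the commit time.
    static ZipArchiveEntry newEntry(String path, RevCommit tip) {
        ZipArchiveEntry entry = new ZipArchiveEntry(path);
        // RevCommit#getCommitTime() is seconds since the epoch;
        // ZipArchiveEntry#setTime(long) expects milliseconds.
        entry.setTime(tip.getCommitTime() * 1000L);
        return entry;
    }
}
```
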
* GC: don't loosen doomed objects | David Turner | 2017-02-17 | 1 | -4/+10

    If the pruneexpire config is set to "now", then any unreferenced loose
    objects are immediately eligible for gc. So there is no need to
    actually write the loose objects.

    Users who run hosting services which sometimes accept large, entirely
    garbage packs might set the following configurations:

      gc.pruneExpire = now
      gc.prunePackExpire = 2.weeks

    Then garbage objects will be kept around in packs, but after two weeks
    the packs themselves will get deleted.

    For client-side users of jgit, the default settings will loosen garbage
    objects, and, after an hour, delete the old packs in which they
    resided.

    Change-Id: I8f686ac60b40181b1ee92ac6c313c3f33b55c44c
    Signed-off-by: David Turner <dturner@twosigma.com>

* Update name of InsecureCipherMode error-prone pattern | Jonathan Nieder | 2017-02-15 | 1 | -1/+1

    Without this, using bazel 0.4.4 to build fails:

      ERROR: jgit/org.eclipse.jgit/BUILD:29:1: Java compilation in rule
      '//org.eclipse.jgit:insecure_cipher_factory' failed: Worker process
      sent response with exit code: 1.
      jgit/src/org/eclipse/jgit/transport/InsecureCipherFactory.java:63:
      error: [InsecureCryptoUsage] Insecure usage of a crypto API: the
      transformation is not a compile-time constant expression.
              return Cipher.getInstance(algo);
                     ^
      (see http://errorprone.info/bugpattern/InsecureCryptoUsage)

    Change-Id: I7f9a3a5117e42cb68544674f5312df0368aa3674

* Add missing skip garbage pack logic in DfsReader | Zhen Chen | 2017-02-15 | 1 | -4/+6

    * Missing garbage pack check in getObjectSize(AnyObjectId, int)
    * Missing `last` pack check in has(AnyObjectId) and open(AnyObjectId, int)

    Change-Id: Idd1b9dd8db34c92d7da546fef1936ec9b2728718
    Signed-off-by: Zhen Chen <czhen@google.com>

* Skip first pack if avoid garbage is set and it is a garbage pack | Zhen Chen | 2017-02-13 | 1 | -8/+10

    At the beginning of the OBJECT_SCAN loop the reader first checks
    whether the object exists in the last pack; however, it forgot to avoid
    the garbage pack for that first iteration.

    Change-Id: I8a99c0f439218d19c49cd4dae891b8cc4a57099d
    Signed-off-by: Zhen Chen <czhen@google.com>

* Refactor skip garbage pack logic into a method | Zhen Chen | 2017-02-13 | 1 | -19/+19

    There are multiple places in DfsReader that skip a garbage pack if both
    of the following conditions are satisfied:

    * The AvoidUnreachable flag is set
    * The pack is a garbage pack

    Refactor them into a shared private method.

    Change-Id: I67d6bb601db55f904437c807c6a3c36f0a723265
    Signed-off-by: Zhen Chen <czhen@google.com>

* Limit receive commands | Shawn Pearce | 2017-02-11 | 4 | -25/+132

    Place a configurable upper bound on the amount of command data received
    from clients during `git push`. The limit is applied to the encoded
    wire protocol format, not the JGit in-memory representation. This
    allows clients to flexibly use the limit; shorter reference names allow
    for more commands, longer reference names permit fewer commands per
    batch.

    Based on data gathered from many repositories at $DAY_JOB, the average
    reference name is well under 200 bytes when encoded in UTF-8 (the wire
    encoding). The new 3 MiB default receive.maxCommandBytes allows about
    11,155 references in a single `git push` invocation. A Gerrit Code
    Review system with six-digit change numbers could still encode 29,399
    references in the 3 MiB maxCommandBytes limit.

    Change-Id: I84317d396d25ab1b46820e43ae2b73943646032c
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

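A hedged sketch of how a server operator might raise or lower the new receive.maxCommandBytes limit through JGit's config API; the helper class is illustrative only:

```java
import java.io.IOException;

import org.eclipse.jgit.lib.Repository;
import org.eclipse.jgit.lib.StoredConfig;

class ReceiveLimitConfig {
    // Persist a custom receive.maxCommandBytes value for one repository.
    static void setMaxCommandBytes(Repository repo, long bytes) throws IOException {
        StoredConfig cfg = repo.getConfig();
        cfg.setLong("receive", null, "maxCommandBytes", bytes);
        cfg.save();
    }
}
```
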
* BlameGenerator: Annotate #getRenameDetector as Nullable | David Pursehouse | 2017-02-09 | 1 | -3/+6

    The renameDetector member returned by this method will be null when
    following file renames has been disabled by previously calling
    setFollowFileRenames(false).

    Annotate it as @Nullable and update the Javadoc to explicitly document
    the null return.

    Change-Id: I9bdf443a64cf3c45352d3ab023051a2e11f7426d
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

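A small hedged sketch of the calling pattern the annotation implies; the rename-score tweak is just an arbitrary example of touching the detector:

```java
import org.eclipse.jgit.blame.BlameGenerator;
import org.eclipse.jgit.diff.RenameDetector;

class BlameRenameTuning {
    static void tune(BlameGenerator gen, boolean followRenames) {
        gen.setFollowFileRenames(followRenames);
        RenameDetector rd = gen.getRenameDetector();
        if (rd != null) {
            // Only reachable while rename following is enabled.
            rd.setRenameScore(60);
        }
    }
}
```
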
* RefLeaseSpec: Fix Eclipse errors | David Pursehouse | 2017-02-09 | 1 | -8/+8

    - Remove unused import
    - Remove unused private constructor
    - Add Javadoc for public constructor

    Change-Id: I1253e9fe863ca0f63182461ee87357fbf726ea2e
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* Merge "push: support per-ref force-with-lease"Shawn Pearce2017-02-083-6/+213
|\
| * push: support per-ref force-with-lease | David Turner | 2017-02-08 | 3 | -6/+213

    When rebasing, force-pushing has a race condition: someone else might
    have pushed a commit since the one you just rewrote. The
    force-with-lease option prevents this by ensuring that the ref's old
    value is the one that you expected.

    Change-Id: I97ca9f8395396c76332bdd07c486e60549ca4401
    Signed-off-by: David Turner <dturner@twosigma.com>

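A hedged sketch of a force-with-lease push from the API side, assuming the PushCommand#setRefLeaseSpecs hook introduced here; the branch name and expected object id are placeholders:

```java
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.RefLeaseSpec;
import org.eclipse.jgit.transport.RefSpec;

class ForceWithLeasePush {
    // Only rewrite refs/heads/topic if the remote still points at the
    // commit we recorded before rebasing (expectedOldId).
    static void push(Git git, String expectedOldId) throws Exception {
        git.push()
                .setRemote("origin")
                .setRefSpecs(new RefSpec("+refs/heads/topic:refs/heads/topic"))
                .setRefLeaseSpecs(new RefLeaseSpec("refs/heads/topic", expectedOldId))
                .call();
    }
}
```
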
* | Assume GC_REST and GC_TXN also attempted deltas during packing | Shawn Pearce | 2017-02-08 | 1 | -3/+8

    In a DFS repository the DfsGarbageCollector will typically attempt
    delta compression while creating the three main pack files: GC, GC_REST
    and GC_TXN. Include all of these in the wasDeltaAttempted() decision so
    that future packers can bypass delta compression of non-delta objects.

    Change-Id: Ic2330c69fab0c494b920b4df0a290f3c2e1a03d7

* | Prefer smaller GC files during DFS garbage collection | Shawn Pearce | 2017-02-08 | 2 | -1/+55

    In 8ac65d33ed7a94f77cb066271669feebf9b882fc PackWriter changed its
    behavior to always prefer the last object representation presented to
    it by the ObjectReuseAsIs implementation. This was a fix to avoid delta
    chain cycles. Unfortunately it can lead to suboptimal compression when
    concurrent GCs are run on the same repository.

    One case is automatic GC running (with default settings) in parallel to
    a manual GC that has disabled delta reuse in order to generate new
    smaller deltas for the entire history of the repository. Running GC
    with no-reuse generally requires more CPU time, which also translates
    to a longer running time. This can lead to a race where the automatic
    GC completes before the no-reuse GC, leaving the repository in a state
    such as:

      no-reuse GC:  size 1 GiB, mtime = 18:45
      auto GC:      size 8 GiB, mtime = 17:30

    With the default sort ordering, the smaller no-reuse GC pack is sorted
    earlier in the pack list, due to its more recent mtime. During object
    reuse in a future GC, these smaller representations are considered
    first by PackWriter, but are all discarded when the auto GC file from
    17:30 is examined second (due to its older mtime).

    Work around this in two ways.

    Well formed DFS repositories should have at most 1 GC pack. If 2 or
    more GC packs exist, break the sorting tie by selecting the smaller
    file earlier in the pack list. This allows all normal read code paths
    to favor the smaller file, which places less pressure on the
    DfsBlockCache. If any GC race happens, readers serving clone requests
    will prefer the file that is smaller.

    During object reuse, flip this ordering so that the smaller file is
    last. This allows PackWriter to see smaller deltas last, replacing
    larger representations that were previously considered from other pack
    files.

    Change-Id: I0b7dc8bb9711c82abd6bd16643f518cfccc6d31a

* | Fix missing deltas near type boundaries | Shawn Pearce | 2017-02-08 | 1 | -6/+10

    Delta search was discarding discovered deltas if an object appeared
    near a type boundary in the delta search window. This has caused JGit
    to produce larger pack files than other implementations of the packing
    algorithm.

    Delta search works by pushing prior objects into a search window, an
    ordered list of objects to attempt to delta compress the next object
    against. (The window size is bounded, avoiding O(N^2) behavior.)

    For implementation reasons multiple object types can appear in the
    input list, and in the window. PackWriter commonly passes both trees
    and blobs in the input list handed to the DeltaWindow algorithm. The
    pack file format requires an object to only delta compress against the
    same type, so the DeltaWindow algorithm must stop doing comparisons if
    a blob would be compared to a tree.

    Because the input list is sorted by object type and the window is
    recently considered prior objects, once a wrong type is discovered in
    the window the search algorithm stops and uses the current result.

    Unfortunately the termination condition was discarding any found delta
    by setting deltaBase and deltaBuf to null when it was trying to break
    the window search.

    When this bug occurs, the state of the DeltaWindow looks like this:

                                             current
                                                |
                                               \ /
            input list:  tree0  tree1  blob1  blob2

            window:      blob1  tree1  tree0
                          / \
                           |
                        res.prev

    As the loop iterates to the right across the window, it first finds
    that blob1 is a suitable delta base for blob2, and temporarily holds
    this in the bestDelta/deltaBuf fields. It then considers tree1, but
    tree1 has the wrong type (blob != tree), so the window loop must give
    up and fall through the remaining code.

    Moving the condition up and discarding the window contents allows the
    bestDelta/deltaBuf to be kept, letting the final file delta compress
    blob1 against blob0.

    The impact of this bug (and its fix) on real world repositories is
    likely minimal. The boundary from blob to tree happens approximately
    once in the search, as the input list is sorted by type. Only the first
    window size worth of blobs (e.g. 10 or 250) were failing to produce a
    delta in the final file.

    This bug fix does produce significantly different results for small
    test repositories created in the unit test suite, such as when a pack
    may contain 6 objects (2 commits, 2 trees, 2 blobs). Packing test cases
    can now better sample different output pack file sizes depending on
    delta compression and object reuse flags in PackConfig.

    Change-Id: Ibec09398d0305d4dbc0c66fce1daaf38eb71148f

* | Merge "Reintroduce garbage pack coalescing when ttl > 0."Shawn Pearce2017-02-081-10/+70
|\ \
| * | Reintroduce garbage pack coalescing when ttl > 0. | Thirumala Reddy Mutchukota | 2017-02-07 | 1 | -10/+70

    Disabling garbage pack coalescing when garbageTtl > 0 can result in a
    lot of garbage packs if they are created within the garbageTtl time.

    To avoid a large number of garbage packs, re-introduce coalescing for
    garbage packs that are created within a single calendar day (when the
    garbageTtl is more than one day) or within one third of the garbageTtl.

    Change-Id: If969716aeb55fb4fd0ff71d75f41a07638cd5a69
    Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>

* | | Merge "Branch normalizer should not normalize already valid branch names"David Pursehouse2017-02-071-5/+11
|\ \ \
| * | | Branch normalizer should not normalize already valid branch names | Matthias Sohn | 2017-02-07 | 1 | -5/+11
| |/ /

    Change-Id: Ib746655e32a37c4ad323f1d12ac0817de8fa56cf
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* / / [infer] Fix ObjectWalk leak in PackWriter.preparePack() | Matthias Sohn | 2017-02-07 | 1 | -7/+8
|/ /

    Change-Id: I5d2455404e507faa717e9d916e9b6cd80aa91473
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | Follow redirects in transport | Bo Zhang | 2017-02-02 | 4 | -2/+57

    Bug: 465167
    Change-Id: I6da19c8106201c2a1ac69002bd633b7387f25d96
    Signed-off-by: Bo Zhang <zhangbodut@gmail.com>
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | Merge branch 'stable-4.6' | Matthias Sohn | 2017-02-02 | 1 | -31/+41
|\ \

    * stable-4.6:
      GC: delete empty directories after purging loose objects
      GC.prune(Set<ObjectId>): return early if objects directory is empty

    Change-Id: I3d6cacf80d3b4c69ba108e970855963bd9f6ee78
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

| * | GC: delete empty directories after purging loose objects | Matthias Sohn | 2017-02-01 | 1 | -0/+7

    In order to limit the number of directories we check for emptiness,
    only consider fanout directories which contained unreferenced loose
    objects that we deleted in the same gc run.

    Change-Id: Idf8d512867ee1c8ed40bd55752122ce83a98ffa2
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

| * | GC.prune(Set<ObjectId>): return early if objects directory is empty | Matthias Sohn | 2017-01-30 | 1 | -30/+33

    Change-Id: Id56b102604c4e0437230e3e7c59c0a3a1b676256
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | Organize imports | David Pursehouse | 2017-02-01 | 2 | -3/+1

    Change-Id: I97044f69d220fc2d3f9fe890fdfec542454f02d2
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* | | Detect stale-file-handle error in causal chain | Hongkai Liu | 2017-01-30 | 3 | -2/+23

    Cover the case where the exception is wrapped up as a cause, e.g.,
    PackIndex#open(File).

    Change-Id: I0df5b1e9c2ff886bdd84dee3658b6a50866699d1
    Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>

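A hedged sketch of the idea: walk the cause chain instead of looking only at the top-level exception. The message-substring check below is an assumption for illustration; JGit's own helpers may detect the condition differently:

```java
import java.io.IOException;
import java.util.Locale;

class StaleHandleCheck {
    // Returns true if any cause in the chain looks like an NFS
    // "stale file handle" error.
    static boolean isStaleFileHandleInCausalChain(Throwable t) {
        while (t != null) {
            if (t instanceof IOException && String.valueOf(t.getMessage())
                    .toLowerCase(Locale.ROOT).contains("stale file handle")) {
                return true;
            }
            t = t.getCause();
        }
        return false;
    }
}
```
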
* | | Merge branch 'stable-4.6' | David Pursehouse | 2017-01-31 | 1 | -0/+53
|\| |

    * stable-4.6:
      Clean up orphan files in GC

    Change-Id: I4fb6b4cd03d032535a9c04ede784bea880b4536b
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

| * | Clean up orphan files in GC | Hongkai Liu | 2017-01-30 | 1 | -0/+53

    An orphan file is a bitmap or an idx file in the pack folder whose
    corresponding pack file is missing.

    Change-Id: I3c4cb1f7aa99dd7b398bdb8d513f528d7761edff
    Signed-off-by: Hongkai Liu <hongkai.liu@ericsson.com>
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | Merge "Don't rely on default locale when using toUpperCase() and toLowerCase()"David Pursehouse2017-01-3011-15/+32
|\ \ \
| * | | Don't rely on default locale when using toUpperCase() and toLowerCase() | Matthias Sohn | 2017-01-28 | 11 | -15/+32

    Otherwise these methods may produce unexpected results if used for
    strings that are intended to be interpreted locale independently.
    Examples are programming language identifiers, protocol keys, and HTML
    tags. For instance, "TITLE".toLowerCase() in a Turkish locale returns
    "t\u0131tle", where '\u0131' is the LATIN SMALL LETTER DOTLESS I
    character.

    See
    https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toLowerCase--
    http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html

    Bug: 511238
    Change-Id: Id8d8f37d84d62239c918b81f8d883ed798d87656
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

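A small demonstration of the pitfall; run with -Duser.language=tr -Duser.country=TR to see the default-locale call produce the dotless-i form:

```java
import java.util.Locale;

public class LocaleLowerCase {
    public static void main(String[] args) {
        String s = "TITLE";
        System.out.println(s.toLowerCase());            // locale dependent
        System.out.println(s.toLowerCase(Locale.ROOT)); // always "title"
    }
}
```
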
* | | | Make GC cancellable when called programmatically | Hector Caballero | 2017-01-29 | 2 | -6/+103

    Sometimes it is necessary to cancel a garbage collection operation.
    When GC is called using the standalone executable, i.e. from a command
    line, Control-C-ing the process does the trick. When calling GC
    programmatically, though, there is no mechanism to do it.

    Add checks in the GC process so that a custom cancellable progress
    monitor can be passed in order to cancel the operation at specific
    points. In that case the calling process sets the cancel flag in the
    progress monitor, and the GC process throws an exception that can be
    caught and handled by the caller accordingly.

    Change-Id: Ieaecf3dbdf244539ec734939c065735f6785aacf
    Signed-off-by: Hector Caballero <hector.caballero@ericsson.com>

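A minimal sketch of such a cancellable monitor, assuming EmptyProgressMonitor as a convenience base class; a GC run that polls isCancelled() at its checkpoints would abort once the flag flips:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.eclipse.jgit.lib.EmptyProgressMonitor;

public class CancellableMonitor extends EmptyProgressMonitor {
    private final AtomicBoolean cancelled = new AtomicBoolean();

    // Called by the controlling thread to request cancellation.
    public void cancel() {
        cancelled.set(true);
    }

    @Override
    public boolean isCancelled() {
        return cancelled.get();
    }
}
```
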
* | | | RepoCommand#readFile: Don't call Git#getRepository() in try-with-resource | David Pursehouse | 2017-01-28 | 1 | -3/+2
|/ / /

    Using try-with-resource means that close() will automatically be called
    on the Repository object. However, according to the javadoc of
    Git#close():

      If the repository was opened by a static factory method in this
      class, then this method calls Repository#close() on the underlying
      repository instance.

    This means that Repository#close() is called twice, by Git.close() and
    in the outer try-with-resource, leading to a corrupt use count.

    Change-Id: I37ba517eb2cc67d1cd36813598772c70208d0bc9
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

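An illustrative before/after of the pattern being fixed (not the actual RepoCommand code); the directory argument is a placeholder:

```java
import java.io.File;

import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.Repository;

class UseCountExample {
    // Problematic shape: the Repository is closed by the try-with-resources
    // block AND again when the Git wrapper closes, corrupting the use count.
    static void doubleClose(File dir) throws Exception {
        try (Git git = Git.open(dir);
                Repository repo = git.getRepository()) {
            System.out.println(repo.getBranch());
        }
    }

    // Safer shape: close only the Git wrapper; it closes the repository it
    // opened.
    static void singleClose(File dir) throws Exception {
        try (Git git = Git.open(dir)) {
            System.out.println(git.getRepository().getBranch());
        }
    }
}
```
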
* | | Repository: Include repository name when logging corrupt use count | David Pursehouse | 2017-01-27 | 2 | -4/+5

    Logging the repository name makes it easier to track down what is
    incorrectly closing a repository.

    Change-Id: I42a8bdf766c0e67f100adbf76d9616584e367ac2
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* | | Record the estimated size of the pack files. | Thirumala Reddy Mutchukota | 2017-01-26 | 4 | -8/+97

    The Compacter and Garbage Collector will record the estimated size of
    the compact, gc or garbage packs that are about to be created. This
    information can be used by clients to make a better call on how to
    actually store the pack, based on the approximate expected size.

    Added a new protected method DfsObjDatabase.newPack(PackSource
    packSource, long estimatedPackSize), so that clients can override this
    method to make use of the estimatedPackSize while creating a new
    PackDescription object. The default implementation of this method is
    equivalent to newPack(packSource).setEstimatedPackSize(estimatedPackSize).
    I didn't make it abstract because that would force all the existing
    subclasses of DfsObjDatabase to implement this method. Due to this
    default implementation, the estimatedPackSize is added to
    DfsPackDescription using a setter instead of a constructor parameter
    (even though a constructor parameter would be a better choice, as this
    value is set only during object creation).

    Change-Id: Iade1122633ea774c2e842178a6a6cbb4a57b598b
    Signed-off-by: Thirumala Reddy Mutchukota <thirumala@google.com>

* | | Fixes Javadoc error in org.eclipse.jgit created with I59539ac | Lars Vogel | 2017-01-25 | 2 | -6/+27

    Adds the missing @param information to the private method. These tags
    were generated via the Eclipse tooltip to resolve the compile errors.

    Bug: 511043
    Change-Id: I9ba551978eab750326d1a067b296e3ae93925871
    Signed-off-by: Lars Vogel <Lars.Vogel@vogella.com>

* | | Remove @since tags from internal packages | Jonathan Nieder | 2017-01-24 | 18 | -41/+0

    These packages don't use @since tags because they are not part of the
    stable public API. Some @since tags snuck in, though. Remove them to
    make the convention easier to find for new contributors and the
    expectations clearer for users.

    Change-Id: I6c17d3cfc93657f1b33cf5c5708f2b1c712b0d31

* | | gc: loosen unreferenced objects | David Turner | 2017-01-24 | 2 | -6/+69

    An unreferenced object might appear in a pack. This could only happen
    because it was previously referenced, and then later that reference was
    removed. When we gc, we copy the referenced objects into a new pack,
    and delete the old pack. This would remove the unreferenced object.
    Now we first create a loose object from any unreferenced object in the
    doomed pack. This kicks off the two-week grace period for that object,
    after which it will be collected if it's not referenced.

    This matches the behavior of regular git.

    Change-Id: I59539aca1d0d83622c41aa9bfbdd72fa868ee9fb
    Signed-off-by: David Turner <dturner@twosigma.com>
    Signed-off-by: Jonathan Nieder <jrn@google.com>

* | | [infer] Mark ManifestParser.getFilteredProjects non-null | Matthias Sohn | 2017-01-23 | 1 | -1/+2

    Change-Id: I05653df7a0337443d2c8e53f47f4e95ec9ca1a9c
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | [infer] Fix potential NPE in DiffFormatter | Matthias Sohn | 2017-01-23 | 1 | -1/+1

    Change-Id: Ia33e2af9ce3393d9173ca0dc7efefd86c965d8c8
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | [infer] Fix potential NPE in CloneCommand | Matthias Sohn | 2017-01-23 | 1 | -6/+18

    Change-Id: Ie7eeba3ae719ff207c7535d535a9e0bd6c9e99e6
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | Format Bazel files with buildifier | David Pursehouse | 2017-01-22 | 1 | -19/+23

    Change-Id: I934114315d2c7cab917f1011b8e55c52367d429f
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* | | Change StreamGobbler to Runnable to avoid unused Future | Shawn Pearce | 2017-01-21 | 1 | -12/+15

    It can be considered a programming error to create a Future<T> but do
    nothing with that object. There is an async computation happening and
    without holding and checking the Future for done or exception the
    caller has no idea if it has completed.

    FS doesn't really care about these StreamGobblers finishing. Instead
    use Runnable with execute(Runnable), which doesn't return a Future.

    Change-Id: I93b66d1f6c869e66be5c1169d8edafe781e601f6

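For illustration, the API difference being leaned on here; the executor and gobbler names are placeholders:

```java
import java.util.concurrent.ExecutorService;

class GobblerSubmission {
    static void run(ExecutorService pool, Runnable gobbler) {
        // pool.submit(gobbler); // returns a Future<?> nobody would check
        pool.execute(gobbler);   // no Future; fire-and-forget intent is explicit
    }
}
```
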
* | | Add missing @since tags on new API constants | Matthias Sohn | 2017-01-19 | 1 | -2/+2

    Change-Id: Ia8b861da07fba99644ccc9eb5578a46cc39600a1
    Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>

* | | gc: Add options to preserve and prune old pack files | James Melvin | 2017-01-19 | 4 | -6/+163

    The new --preserve-oldpacks option moves old pack files into the
    preserved subdirectory instead of deleting them after repacking.

    The new --prune-preserved option prunes old pack files from the
    preserved subdirectory after repacking, but before potentially moving
    the latest old packfiles to this subdirectory.

    These options are designed to prevent stale file handle exceptions
    during git operations, which can happen for users of NFS repos when
    repacking is done on them. The strategy is to preserve old pack files
    around until the next repack, with the hope that they will become
    unreferenced by then and not cause any exceptions to running processes
    when they are finally deleted (pruned).

    Change-Id: If3f729f0d9ce920ee2c3e6acdde46f2068be61d2
    Signed-off-by: James Melvin <jmelvin@codeaurora.org>

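A hedged usage sketch, assuming the flags are exposed on the `jgit gc` command-line wrapper as the message implies:

      $ jgit gc --preserve-oldpacks
      $ jgit gc --prune-preserved --preserve-oldpacks
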
* | | Implement initial framework of Bazel build | David Ostrovsky | 2017-01-18 | 1 | -0/+29

    The initial implementation only builds the packages consumed by Gerrit
    Code Review. Test build and execution is not implemented.

    We prefer to consume the maven_jar custom rule from the bazlets
    repository, for the same reasons as in the Gerrit project:

    * Caching artifacts across different clones and projects
    * Exposing source classifiers and neverlink artifact

    TEST PLAN:

      $ bazel build :all
      $ unzip -t bazel-genfiles/all.zip
      Archive:  bazel-genfiles/all.zip
          testing: libjgit-archive.jar      OK
          testing: libjgit-servlet.jar      OK
          testing: libjgit.jar              OK
          testing: libjunit.jar             OK
      No errors detected in compressed data of bazel-genfiles/all.zip.

    Change-Id: Ia837ce95d9829fe2515f37b7a04a71a4598672a0
    Signed-off-by: David Ostrovsky <david@ostrovsky.org>
    Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>

* | | Normalizer creating a valid branch name from a string | Wim Jongman | 2017-01-18 | 1 | -0/+37

    Generic normalization method for a possibly invalid branch name. The
    method compresses dividers between spaces, then replaces spaces and
    non-word characters with underscores.

    This method is needed in preparation for subsequent EGit changes.

    Bug: 509878
    Change-Id: Ic0d12f098f90f912a45bcc5693d6accf751d4e58
    Signed-off-by: Wim Jongman <wim.jongman@remainsoftware.com>

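A hedged example of calling the normalizer, assuming it is exposed as a static Repository.normalizeBranchName; the input string and printed results are only indicative:

```java
import org.eclipse.jgit.lib.Repository;

public class BranchNameDemo {
    public static void main(String[] args) {
        String raw = "feature/  my &% new   idea ";
        String normalized = Repository.normalizeBranchName(raw);
        System.out.println(normalized);
        // The result should pass JGit's own ref-name validation.
        System.out.println(Repository.isValidRefName("refs/heads/" + normalized));
    }
}
```
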