Commit message [Author, Date, Files changed, Lines -deleted/+added]
* Enforce the use of Java5 APIs only (with a few exceptions) [Robin Rosenberg, 2011-12-16, 31 files, -32/+1522]
  This only works with Eclipse 3.6 and newer and requires installation of a new
  package. The documentation is not very good, but there is a blog about it here:
  http://eclipseandjazz.blogspot.com/2011/10/of-invalid-references-to-system.html
  API checking is especially useful on OS X, where Java5 is not readily available.
  Change-Id: I3c0ad460874a21c073f5ac047146cbf5d31992b4
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Merge "Fix MergeCommandTest to pass if File.executable is not supported" ↵Matthias Sohn2011-12-151-4/+12
|\ | | | | | | into stable-1.2
* Fix MergeCommandTest to pass if File.executable is not supported [Robin Rosenberg, 2011-12-15, 1 file, -4/+12]
  Change-Id: If11080ed6e53d9df88a1ae42f48ee8914d54669b
* Add API checking using clirr [Matthias Sohn, 2011-12-15, 6 files, -0/+115]
  In order to generate API reports run: mvn clirr:clirr
  The reports are generated to the folder target/site/clirr-report.html under the
  respective project.
  In order to check API compatibility and fail the build on incompatible changes
  run: mvn clirr:check
  For now we compare the API against the latest release 1.1.0.201109151100-r.
  Bug: 336849
  Change-Id: I21baaf3a6883c5b4db263f712705cc7b8ab6d888
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
  Signed-off-by: Kevin Sawicki <kevin@github.com>
* Fix ResolveMerger not to add paths with FileMode 0 [Christian Halstrick, 2011-12-11, 2 files, -7/+150]
  When ResolveMerger finds a path where it has to do a content merge it will try
  the content merge and if that succeeds it'll add the newly produced content to
  the index. For the FileMode of this new index entry it blindly copies the
  FileMode it finds for that path in the common base tree. If by chance the common
  base tree does not contain this path it'll try to add FileMode 0 (MISSING) to
  the index.

  One could argue that this can't happen: how can the ResolveMerger successfully
  (with no conflicts) merge two contents if there is no common base? This was due
  to another bug in ResolveMerger. It failed to find out that for two files which
  differ only in the FileMode (e.g. 644 vs. 755) it should not try a content
  merge.
  Change-Id: I7a00fe1a6c610679be475cab8a3f8aa4c08811a1
  Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
  Signed-off-by: Robin Rosenberg <robin.rosenberg@dewire.com>
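  For illustration, a rough sketch of the two guards described above, using
  hypothetical helper names; this is not the actual ResolveMerger code:

      // Hypothetical sketch only; names and structure do not match ResolveMerger.
      import org.eclipse.jgit.lib.FileMode;
      import org.eclipse.jgit.treewalk.TreeWalk;

      class ModeMergeSketch {
          static final int T_BASE = 0, T_OURS = 1, T_THEIRS = 2;

          // Entries whose blobs are identical differ at most in mode: no content merge.
          static boolean contentMergeNeeded(TreeWalk tw) {
              return !tw.idEqual(T_OURS, T_THEIRS);
          }

          // Never copy FileMode.MISSING (0) from an absent base entry into the index.
          static FileMode mergedMode(TreeWalk tw) {
              FileMode base = tw.getFileMode(T_BASE);
              if (base.getBits() == FileMode.MISSING.getBits())
                  return tw.getFileMode(T_OURS);
              return base;
          }
      }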
* Fix version.sh [Matthias Sohn, 2011-12-10, 1 file, -1/+1]
  Change-Id: Icdf5d9ea3ca62839cbf7de13dfee9682056b7cef
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Throw API exception when MergeCommand hits checkout conflicts [Matthias Sohn, 2011-12-08, 2 files, -3/+24]
  When MergeCommand hit checkout conflicts it threw the internal JGit exception
  org.eclipse.jgit.errors.CheckoutConflictException instead of
  org.eclipse.jgit.api.errors.CheckoutConflictException, which it declares to
  throw. Hence translate the internal exception to the exception declared in the
  API.
  Bug: 327573
  Change-Id: I1efcd93a43ecbf4a40583e0fc9d8d53cffc98cae
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
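  From the caller's side the declared API exception can now actually be caught; a
  minimal usage sketch (the branch name "side" is hypothetical):

      import java.io.IOException;
      import org.eclipse.jgit.api.Git;
      import org.eclipse.jgit.api.MergeResult;
      import org.eclipse.jgit.api.errors.CheckoutConflictException;
      import org.eclipse.jgit.api.errors.GitAPIException;

      class MergeConflictSketch {
          static void mergeSide(Git git) throws IOException, GitAPIException {
              try {
                  MergeResult result = git.merge()
                          .include(git.getRepository().getRef("refs/heads/side"))
                          .call();
                  System.out.println(result.getMergeStatus());
              } catch (CheckoutConflictException e) {
                  // Dirty working tree files blocked the checkout step of the merge;
                  // before this fix an internal, undeclared exception leaked out here.
                  System.err.println("Checkout conflicts: " + e.getMessage());
              }
          }
      }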
* Add methods for configuring platform emulation [Robin Rosenberg, 2011-12-07, 1 file, -0/+30]
  Specifically, we support setting system properties for Windows, generic Unix and
  the current test platform.
  Change-Id: Ib02be417c4915350dfec64fda3face1138552871
* Fix history rendering not to occupy too many lanes [Christian Halstrick, 2011-12-06, 1 file, -0/+2]
  There was a bug in history rendering which caused JGit to use too many lanes
  when lanes get repositioned. Looking at commit
  90c11cbaeb83ee9b02238cbd2c0e5bcf68068772 in JGit was one example. Vadim Dmitriev
  found the problem and the solution.
  Bug: 365460
  Change-Id: I6024265b7a593dcfd4fc612d0baf6652a0092ff4
  Also-by: Vadim Dmitriev <dmgloss@mail.ru>
  Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
* Fix History rendering [Christian Halstrick, 2011-12-04, 1 file, -30/+42]
  There was the possibility that during history rendering we draw a lane "through"
  a passed commit. Vadim Dmitriev found that out in bug 335818. I added the needed
  check to that block of code where it was missing.
  Bug: 335818
  Change-Id: Ic944193b2aca55ff3eb0235d46afa60b7896aa0f
  Signed-off-by: Christian Halstrick <christian.halstrick@sap.com>
* Fix HTTP unit tests [Shawn O. Pearce, 2011-11-30, 1 file, -2/+1]
  I modified the way errors are returned, and this particular test is now getting
  a different access denied response. The new text happens to be what I intended
  to have here, so update the test.
  Change-Id: I53f8410ca0a52755d80473cd5cbcdb4d8502febf
* Merge "RepositoryState: add method canAmend"Christian Halstrick2011-11-301-0/+18
|\
* RepositoryState: add method canAmend [Jens Baumgart, 2011-11-28, 1 file, -0/+18]
  The method canAmend was added to RepositoryState. It returns true if amending
  the HEAD commit is allowed in the current repository state.
  Change-Id: Idd0c4eea83a23c41340789b7b877959b457d951e
  Signed-off-by: Jens Baumgart <jens.baumgart@sap.com>
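  A minimal usage sketch of the new method (assuming an already opened Repository
  named repo):

      import org.eclipse.jgit.lib.Repository;
      import org.eclipse.jgit.lib.RepositoryState;

      class AmendCheck {
          static boolean amendAllowed(Repository repo) {
              RepositoryState state = repo.getRepositoryState();
              // true only if amending HEAD is permitted in the current state
              return state.canAmend();
          }
      }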
* | Merge "Always checkout master when it matches the advertised HEAD"Shawn Pearce2011-11-282-0/+22
|\ \
* Always checkout master when it matches the advertised HEAD [Kevin Sawicki, 2011-11-28, 2 files, -0/+22]
  This parallels the CGit behavior of always using refs/heads/master when it
  matches the remote advertised HEAD commit.
  Change-Id: I5a5cd1516b58d116e334056aba1ef7990697ec30
* | | Merge "Update maven plugin versions"Shawn Pearce2011-11-282-12/+12
|\ \ \
* Update maven plugin versions [Matthias Sohn, 2011-11-29, 2 files, -12/+12]
  Change-Id: I7400e08a1059f57c85a53aebe2719f81c00f58e8
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* | | Merge "Implement Serializable interface in ReflogEntry"Shawn Pearce2011-11-281-1/+5
|\ \ \
* Implement Serializable interface in ReflogEntry [Kevin Sawicki, 2011-11-28, 1 file, -1/+5]
  Change-Id: Idf798dd3981bef3dc9e17c13c12809f89089e96f
* Remove calls to printStackTrace in catch blocks [Kevin Sawicki, 2011-11-28, 3 files, -6/+1]
  Change-Id: I7a4179f10a4841e80b6546e1e7662cab71eac5e9
* | Merge "Reset SSH connection and credentials on "Auth fail""Shawn Pearce2011-11-262-20/+55
|\ \
* Reset SSH connection and credentials on "Auth fail" [Matthias Sohn, 2011-11-27, 2 files, -20/+55]
  When SSH user/password authentication failed, this may have been caused by
  changed credentials on the server side. When the SSH credentials of a user
  change, the SSH connection needs to be re-established, and credentials which may
  have been stored by the credentials provider need to be reset in order to enable
  prompting for the new credentials.
  Bug: 356233
  Change-Id: I7d64c5f39b68a9687c858bb68a961616eabbc751
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
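  Nothing changes for callers: commands keep supplying a credentials provider, and
  after this fix an "Auth fail" leads to a fresh prompt instead of a permanently
  broken session. A rough sketch with a hypothetical remote URL and credentials:

      import java.io.File;
      import org.eclipse.jgit.api.Git;
      import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;

      class SshCloneSketch {
          static Git cloneOverSsh(File workDir) throws Exception {
              return Git.cloneRepository()
                      .setURI("ssh://git@example.com/project.git") // hypothetical remote
                      .setDirectory(workDir)
                      .setCredentialsProvider(
                              new UsernamePasswordCredentialsProvider("user", "secret"))
                      .call();
          }
      }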
* Don't iterate over advertised refs when HEAD is null [Kevin Sawicki, 2011-11-26, 1 file, -3/+3]
  Moves the check from inside the loop to outside the loop and returns immediately
  if the HEAD advertised ref is null.
  Change-Id: I539da6cafb4f73610b8e00259e32bd4d57f4f4cc
* Merge "tools/release: Handle v1.0.0.201106090707-r-NN-gdeadbeef"Matthias Sohn2011-11-241-1/+1
|\
* tools/release: Handle v1.0.0.201106090707-r-NN-gdeadbeef [Shawn O. Pearce, 2011-06-24, 1 file, -1/+1]
  The 1.0.0 release tags have a new suffix. Account for this.
  Change-Id: Ic6f260b6a5ba353af3b312b722f576155208eaa0
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* | Merge "Adapt merge message to C Git for remote-tracking branch"Shawn Pearce2011-11-232-4/+4
|\ \
* Adapt merge message to C Git for remote-tracking branch [Robin Stocker, 2011-11-23, 2 files, -4/+4]
  Commit 13931236b9ee2895a98ffdbdacbd0f895956d8a8 in C Git (2011-11-02) changed
  the message format:
    -Merge remote branch 'origin/foo'
    +Merge remote-tracking branch 'origin/foo'
  This change does the same in EGit to be consistent.
  Change-Id: I7d9c5afa95771dbfe6079b5f89a10b248fee0172
  Signed-off-by: Robin Stocker <robin@nibor.org>
* Merge changes I828ac2de,I80e5b7cf [Shawn O. Pearce, 2011-11-23, 8 files, -98/+357]
  * changes:
    Add utilities for smart HTTP error handling
    Strip leading slashes in RepositoryFilter
* Add utilities for smart HTTP error handling [Shawn O. Pearce, 2011-11-22, 8 files, -96/+355]
  The GitSmartHttpTools class started as utility functions to help report useful
  error messages to users of the android.googlesource.com service. Now that the
  GitServlet and GitFilter classes support filters before a git-upload-pack or
  git-receive-pack request, server implementors may find these routines helpful to
  report custom messages to clients. Using the sendError() method to return an
  HTTP 200 OK with error text embedded in the payload prevents native Git clients
  from retrying the action with a dumb Git or WebDAV HTTP request.

  Refactor some of the existing code to use these new error functions and protocol
  constants. The new sendError() function is very close to being identical to the
  old error handling code in RepositoryFilter; however, we now use the POST
  Content-Type rather than the Accept HTTP header to check if the client will
  accept the error data in the response body rather than using the HTTP status
  code. This is a more reliable way of checking for native Git clients, as the
  Accept header was not always populated with the correct string in older versions
  of Git smart HTTP.
  Change-Id: I828ac2deb085af12b6689c10f86662ddd39bd1a2
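  For example, a servlet filter placed in front of git-receive-pack can reject a
  request in a form native clients will display; a rough sketch with a
  hypothetical policy check:

      import java.io.IOException;
      import javax.servlet.*;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import org.eclipse.jgit.http.server.GitSmartHttpTools;

      public class PushPolicyFilter implements Filter {
          public void init(FilterConfig config) {}
          public void destroy() {}

          public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                  throws IOException, ServletException {
              HttpServletRequest http = (HttpServletRequest) req;
              if (!pushAllowed(http)) { // hypothetical policy check
                  // Delivers the text over the Git protocol so smart HTTP clients show it,
                  // instead of a plain error body that native Git would swallow.
                  GitSmartHttpTools.sendError(http, (HttpServletResponse) res,
                          HttpServletResponse.SC_FORBIDDEN, "pushes are disabled on this mirror");
                  return;
              }
              chain.doFilter(req, res);
          }

          private boolean pushAllowed(HttpServletRequest req) {
              return false; // placeholder policy
          }
      }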
* Strip leading slashes in RepositoryFilter [Shawn O. Pearce, 2011-11-22, 1 file, -2/+2]
  If removing the leading slash results in an empty string, return with an HTTP
  404 error before trying to use the RepositoryResolver. Moving this into a loop
  ahead of the length check ensures there is no empty string passed into the
  resolver.
  Change-Id: I80e5b7cf25ae9f2164b5c396a29773e5c7d7286e
* Guard against null branch in PullCommand [Kevin Sawicki, 2011-11-22, 4 files, -1/+19]
  Throw a NoHeadException when Repository.getFullBranch returns null.
  Bug: 351543
  Change-Id: I666cd5b67781508a293ae553c6fe5c080c8f4d99
  Signed-off-by: Kevin Sawicki <kevin@github.com>
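  Callers can handle that case explicitly; a minimal sketch:

      import org.eclipse.jgit.api.Git;
      import org.eclipse.jgit.api.PullResult;
      import org.eclipse.jgit.api.errors.GitAPIException;
      import org.eclipse.jgit.api.errors.NoHeadException;

      class PullSketch {
          static void pull(Git git) throws GitAPIException {
              try {
                  PullResult result = git.pull().call();
                  System.out.println("pull successful: " + result.isSuccessful());
              } catch (NoHeadException e) {
                  // Repository.getFullBranch() returned null: no branch to pull into.
              }
          }
      }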
* Support adding all refs to LogCommand [Kevin Sawicki, 2011-11-22, 2 files, -0/+110]
  Bug: 353310
  Change-Id: Ifa2e7ed58c7f2bdfe3aafbd500b5a38c1f94c2ec
  Signed-off-by: Kevin Sawicki <kevin@github.com>
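  A minimal usage sketch of the new option:

      import java.io.IOException;
      import org.eclipse.jgit.api.Git;
      import org.eclipse.jgit.api.errors.GitAPIException;
      import org.eclipse.jgit.revwalk.RevCommit;

      class LogAllSketch {
          static void printAll(Git git) throws IOException, GitAPIException {
              // Walk from every ref (all branches and tags), not just HEAD.
              for (RevCommit c : git.log().all().call())
                  System.out.println(c.getId().name() + " " + c.getShortMessage());
          }
      }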
* | | Merge "Provide merge result when revert command fails"Shawn Pearce2011-11-221-0/+22
|\ \ \
* Provide merge result when revert command fails [Kevin Sawicki, 2011-11-21, 1 file, -0/+22]
  This allows callers to determine why the revert did not complete successfully.
  Change-Id: Ie44bb8523cac388b63748bc69ebdd3c3a3665d06
  Signed-off-by: Kevin Sawicki <kevin@github.com>
* maxObjectSizeLimit for receive-pack [Sasa Zivkov, 2011-11-22, 6 files, -2/+242]
  ReceivePack (and PackParser) can be configured with the maxObjectSizeLimit in
  order to prevent users from pushing too large objects to Git.

  The limit check is applied to all object types, although it is most likely that
  a BLOB will exceed the limit. In all cases the size of the object header is
  excluded from the object size which is checked against the limit, as this is the
  size a BLOB object would take in the working tree when checked out as a file.

  When an object exceeds the maxObjectSizeLimit the receive-pack will abort
  immediately.

  Delta objects (both offset and ref delta) are also checked against the limit.
  However, for delta objects we will first check the size of the inflated delta
  block against the maxObjectSizeLimit and abort immediately if it exceeds the
  limit. In this case we do not even know the exact size of the resolved delta
  object, but we assume it will be larger than the given maxObjectSizeLimit, as a
  delta is generally only chosen if the delta can copy more data from the base
  object than the delta needs to insert or needs to represent the copy ranges.
  Aborting early, in this case, avoids unnecessary inflating of the (huge) delta
  block.

  Unfortunately, it is too expensive (especially for a large delta) to compute the
  SHA-1 of an object that causes the receive-pack to abort. This would decrease
  the value of this feature, whose main purpose is to protect server resources
  from users pushing huge objects. Therefore we don't report the SHA-1 in the
  error message.
  Change-Id: I177ef24553faacda444ed5895e40ac8925ca0d1e
  Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
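  Server-side configuration sketch (the 10 MiB limit is an arbitrary example):

      import org.eclipse.jgit.lib.Repository;
      import org.eclipse.jgit.transport.ReceivePack;

      class ReceiveLimitSketch {
          static ReceivePack create(Repository repo) {
              ReceivePack rp = new ReceivePack(repo);
              // Abort the push as soon as any single object would exceed ~10 MiB.
              rp.setMaxObjectSizeLimit(10 * 1024 * 1024);
              return rp;
          }
      }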
* | Merge "Add missing '' characters around quoted variables"Shawn Pearce2011-11-182-7/+7
|\ \
* Add missing '' characters around quoted variables [Kevin Sawicki, 2011-11-18, 2 files, -7/+7]
  Double ' characters are needed for variables to appear in single quotes.
  Variables surrounded with a single ' will not be replaced when formatted.
  Change-Id: I0182c1f679ba879ca19dd81bf46924f415dc6003
  Signed-off-by: Kevin Sawicki <kevin@github.com>
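  This is the standard java.text.MessageFormat rule: a single ' opens a quoted
  (literal) section, so a placeholder inside it is not substituted, while '' emits
  one literal quote character. For example:

      import java.text.MessageFormat;

      class QuoteDemo {
          public static void main(String[] args) {
              // A single quote turns {0} into literal text, so nothing is substituted:
              System.out.println(MessageFormat.format("cannot resolve '{0}'", "HEAD"));
              // prints: cannot resolve {0}

              // Doubled quotes keep substitution and print literal quote characters:
              System.out.println(MessageFormat.format("cannot resolve ''{0}''", "HEAD"));
              // prints: cannot resolve 'HEAD'
          }
      }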
* Fix duplicate objects in "thin+cached" packs from DFS [Shawn O. Pearce, 2011-11-18, 1 file, -25/+18]
  The DfsReader must offer every representation of an object that exists on the
  local repository when PackWriter asks for them. This is necessary to identify
  objects in the thin pack part that are also in the cached pack that will be
  appended onto the end of the stream. Without looking at all alternatives,
  PackWriter may pack the same object twice (once in the thin section, again in
  the cached base pack).

  This may cause the command line C version to go into an infinite loop when
  repacking the resulting repository, as it may see a delta chain cycle with one
  of those duplicate copies of the object.

  Previously the DfsReader tried to avoid looking at packs that it might not care
  about, but this is insufficient, as all versions must be considered during pack
  generation.
  Change-Id: Ibf4a3e8ea5c42aef16404ffc42a5781edd97b18e
* Do not write edge objects to the pack stream [Shawn O. Pearce, 2011-11-18, 1 file, -3/+3]
  Consider two objects A->B where A uses B as a delta base, and these are in the
  same source pack file ordered as "A B". If cached packs are enabled and B is
  also in the cached pack that will be appended onto the end of the thin pack, and
  both A and B are supposed to be in the thin pack, PackWriter must consider the
  fact that A's base B is an edge object that claims to be part of the new pack,
  but is actually "external" and cannot be written first.

  If the object reuse system considered B candidates first this bug does not
  arise, as B will be marked as edge due to it existing in the cached pack. When
  the A candidates are later examined, A sees a valid delta base is available as
  an edge, and will not later try to "write base first" during the writing phase.

  However, when the reuse system considers A candidates first they see that B will
  be in the outgoing pack, as it is still part of the thin pack, and arrange for A
  to be written first. Later when A switches from being in-pack to being an edge
  object (as it is part of the cached pack) the pointer in B does not get its type
  changed from ObjectToPack to ObjectId, so B thinks A is non-edge.

  We work around this case by also checking that the delta base B is non-edge
  before writing the object to the pack. Later when A writes its object header,
  delta base B's ObjectToPack will have an offset == 0, which makes isWritten() =
  false, and the OBJ_REF delta format will be used for A's header. This will be
  resolved by the client to the copy of B that appears in the later cached pack.
  Change-Id: Ifab6bfdf3c0aa93649468f49bcf91d67f90362ca
* Use long for more object counts in PackWriter [Shawn O. Pearce, 2011-11-18, 1 file, -5/+5]
  Packs can contain up to 2^32-1 objects, which exceeds the range of a Java int.
  Try harder to accept higher object counts in some cases by using long more often
  when we are working with the object count value. This is a trivial refactoring;
  we may have to make even more changes to the object handling code to support
  more than 2^31-1 objects.
  Change-Id: I8cd8146e97cd1c738ad5b48fa9e33804982167e7
* Search for annotated tag reuse first [Shawn O. Pearce, 2011-11-18, 1 file, -2/+2]
  Annotated tags are relatively rare and currently are scheduled in a pack file
  near the commits, decreasing the time it takes to resolve client requests
  reading tags as part of a history traversal. Putting them first before the
  commits allows the storage system to page in the tag area, and have it
  relatively hot in the LRU when the nearby commit area gets examined too. Later
  looking at the tree and blob data will pollute the cache, making it more likely
  the tags are not loaded and would require file IO.
  Change-Id: I425f1f63ef937b8447c396939222ea20fdda290f
* Correct progress monitor on "Getting sizes:" phase [Shawn O. Pearce, 2011-11-18, 1 file, -2/+2]
  This counter was always running 1 higher, because it incremented after the queue
  was exhausted (and every object was processed). Move increments to be after the
  queue has provided a result, to ensure we do not show a higher in-progress count
  than total count.
  Change-Id: I97f815a0492c0957300475af409b6c6260008463
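  The general shape of the fix, as a standalone sketch rather than the actual
  PackWriter code: only bump the counter once the queue has actually produced an
  item.

      import java.util.Queue;

      class ProgressSketch {
          interface Monitor { void update(int completed); }

          static <T> void drain(Queue<T> queue, Monitor monitor) {
              for (;;) {
                  T item = queue.poll();
                  if (item == null)
                      break;         // queue exhausted: do not bump the counter here
                  monitor.update(1); // count only after a result was obtained
                  process(item);
              }
          }

          static void process(Object item) { /* work on the item */ }
      }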
* Refactor DfsReader selection of cached packs [Shawn O. Pearce, 2011-11-18, 1 file, -3/+6]
  Make the code clearer with a simple refactoring of the boolean logic into a
  method that describes the condition we are looking for on each pack file. A
  cached pack is possible if there exists a tips collection, and the collection is
  non-empty.
  Change-Id: I4ac42b0622b39d159a0f4f223e291c35c71f672c
* Merge changes I366435e2,I64577f8f [Shawn Pearce, 2011-11-18, 2 files, -1/+8]
  * changes:
    [findBugs] Silence returning null for StringUtils.toBooleanOrNull()
    [findBugs] Prefer short-cut logic as it's more performant
* [findBugs] Silence returning null for StringUtils.toBooleanOrNull() [Matthias Sohn, 2011-11-16, 1 file, -0/+7]
  As the method name and its javadoc clearly state that this method can return
  null, we can ignore this FindBugs warning.
  Change-Id: I366435e26eda5d910f5d1a907db51f08efd4bb8c
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
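  A minimal usage sketch of the helper in question:

      import org.eclipse.jgit.util.StringUtils;

      class BoolParseSketch {
          static boolean isEnabled(String value, boolean defaultValue) {
              // Boolean.TRUE/FALSE for recognized values, null when the text
              // cannot be interpreted as a boolean.
              Boolean parsed = StringUtils.toBooleanOrNull(value);
              return parsed != null ? parsed.booleanValue() : defaultValue;
          }
      }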
* [findBugs] Prefer short-cut logic as it's more performant [Matthias Sohn, 2011-11-16, 1 file, -1/+1]
  Change-Id: I64577f8fd19ee0d2d407479cc70e521adc367f37
  Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
* Keep track of a static collection of all PackWriter instances [Dave Borowitz, 2011-11-14, 1 file, -0/+52]
  Stored in a weak concurrent hash map, which we clean up while iterating. Usually
  the weak reference behavior should not be necessary because PackWriters should
  be released with release(), but we still want to avoid leaks when dealing with
  broken client code.
  Change-Id: I337abb952ac6524f7f920fedf04065edf84d01d2
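  The general idea as a standalone sketch (not the actual PackWriter field): hold
  weak references in a concurrent map and prune cleared entries whenever the set
  is walked.

      import java.lang.ref.WeakReference;
      import java.util.Iterator;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      class InstanceRegistry<T> {
          interface Visitor<T> { void visit(T instance); }

          private final Map<WeakReference<T>, Boolean> instances =
                  new ConcurrentHashMap<WeakReference<T>, Boolean>();

          void register(T instance) {
              instances.put(new WeakReference<T>(instance), Boolean.TRUE);
          }

          void forEachLive(Visitor<T> visitor) {
              for (Iterator<WeakReference<T>> it = instances.keySet().iterator(); it.hasNext();) {
                  T instance = it.next().get();
                  if (instance == null)
                      it.remove();          // clean up while iterating, as described above
                  else
                      visitor.visit(instance);
              }
          }
      }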
* Estimate the amount of memory used by a PackWriter [Dave Borowitz, 2011-11-14, 1 file, -3/+42]
  Memory usage is dominated by three terms:
  - The maximum memory allocated to each delta window.
  - The maximum size of a single file held in memory during delta search.
  - ObjectToPack instances owned by the writer.
  For the first two terms, rather than doing complex instrumentation of the
  DeltaWindows, we just overestimate based on the config parameters (though we may
  underestimate if the maximum size is not set). For the ObjectToPack instances,
  we do some rough byte accounting of the underlying Java object representation.
  Change-Id: I23fe3cf9d260a91f1aeb6ea22d75af8ddb9b1939
* Add an object encapsulating the state of a PackWriter [Dave Borowitz, 2011-11-14, 3 files, -14/+108]
  Exposes essentially the same state machine to the programmer as is exposed to
  the client via a ProgressMonitor, using a wrapper around beginTask()/endTask().
  Change-Id: Ic3622b4acea65d2b9b3551c668806981fa7293e3
* | Merge "Implement DirCacheEntry.toString() to ease debugging"Christian Halstrick2011-11-111-0/+10
|\ \