Add the "compression-level" option to all ArchiveCommand formats
Different archive formats support a compression level in the range
[0-9]. The value 0 is the lowest compression and 9 the highest. Higher
levels produce smaller output files but require more memory to do the
compression.
This change allows passing a "compression-level" option to the git
archive command and implements it for the different file formats.
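A minimal usage sketch, assuming the option is passed through
setFormatOptions() in the same way as the "level" option shown later in
this log (repo, tree and out are placeholders):
  Map<String, Object> options = new HashMap<>();
  options.put("compression-level", 5);
  new ArchiveCommand(repo)
      .setFormat("zip")
      .setFormatOptions(options)
      .setTree(tree)
      .setOutputStream(out)
      .call();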
Change-Id: I5758f691c37ba630dbac24db67bb7da827bbc8e1
Signed-off-by: Youssef Elghareeb <ghareeb@google.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Archived zip files for the same commit have different MD5 hashes because
the mtime and mdate in the header of zip entries are not specified. In
this case, Commons Compress sets the time of archiving instead.
In the original git implementation, it is set to the commit time:
e2b2d6a172/archive.c (L378)
With this fix, the archive command sets the commit time on the
ZipArchiveEntry when a RevCommit is given as the archiving target.
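A minimal sketch of the idea using the Commons Compress API (tree, path
and out are placeholders; getCommitTime() is in seconds, so it is scaled
to milliseconds for ZipArchiveEntry.setTime()):
  ZipArchiveEntry entry = new ZipArchiveEntry(path);
  if (tree instanceof RevCommit)
      entry.setTime(((RevCommit) tree).getCommitTime() * 1000L);
  out.putArchiveEntry(entry);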
Change-Id: I30dd8710e910cdf42d57742f8709e9803930a123
Signed-off-by: Naoki Takezoe <takezoe@gmail.com>
Signed-off-by: David Pursehouse <david.pursehouse@gmail.com>
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
ArchiveCommand: Allow passing options to the underlying stream
The current ArchiveCommand design doesn't allow passing options to the
underlying stream implementations. To overcome this, a client has to
implement a custom format implementation (it cannot be derived from
the existing one, because the classes are marked final) and set the
options using a ThreadLocal before the method
ArchiveOutputStream createArchiveOutputStream(OutputStream s)
gets called.
This change extends ArchiveCommand.Format by allowing an option map to
be passed when the ArchiveOutputStream is created. ArchiveCommand is
extended correspondingly. That way the client can easily pass options
to the underlying streams:
  Map<String, Object> level = ImmutableMap.<String, Object> of(
      "level", new Integer(9));
  new ArchiveCommand(repo)
      .setFormat("zip")
      .setFormatOptions(level)
      .setTree(tree)
      .setPaths(paths)
      .setPrefix(prefix)
      .setOutputStream(sidebandOut)
      .call();
Change-Id: I1d92a1e5249117487da39d19c7593e4b812ad97a
Signed-off-by: David Ostrovsky <david@ostrovsky.org>
This should make it easier to modify ArchiveCommand to allow an
archive format to be registered twice while still noticing if
different callers try to register different implementations for
the same format.
Change-Id: I32261bc8dc1877a853b49e0da0a6e78921791812
Signed-off-by: Jonathan Nieder <jrn@google.com>
Pick default archive format based on filename suffix
Introduce a setFilename() method for ArchiveCommand so callers can
specify the intended filename of the produced archive. If the
filename ends with .tar, the format will default to tar; if .zip, zip;
if .tar.gz, gzip-compressed tar; and so on.
This doesn't affect "jgit archive" because it doesn't support the
--output=<file> option yet. A later patch might do that.
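A minimal usage sketch (repo, tree and out are placeholders; the format
is inferred from the filename since setFormat() is not called):
  new ArchiveCommand(repo)
      .setFilename("snapshot.tar.gz")
      .setTree(tree)
      .setOutputStream(out)
      .call();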
Change-Id: Ic0236a70f7aa7f2271c3ef11083b21ee986b4df5
Document archive formats, the archive format interface, and the
parameters of the GitAPIException constructors. Noticed by Eclipse.
Reported-by: Dani Megert <Daniel_Megert@ch.ibm.com>
Change-Id: I22b5f9d4c0358bbe867c1906feec7c279e214273
Allow use of ArchiveCommand without depending on the jgit command-line
tools.
To avoid complicating the process of installing and upgrading JGit,
this does not add a dependency by the org.eclipse.jgit bundle on
commons-compress. Instead, the caller is responsible for registering
any formats they want to use by calling ArchiveCommand.registerFormat.
This patch puts functionality that requires an archiver into a
separate org.eclipse.jgit.archive bundle for people who want it. One
can use it by calling ArchiveCommand.registerFormat directly to
register its formats or by relying on OSGi class loading to load
org.eclipse.jgit.archive.FormatActivator, which takes care of
registration automatically.
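For example, a caller not running under OSGi might register the bundled
formats directly (a minimal sketch; TarFormat and ZipFormat are the
format classes provided by org.eclipse.jgit.archive):
  ArchiveCommand.registerFormat("tar", new TarFormat());
  ArchiveCommand.registerFormat("zip", new ZipFormat());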
Once the appropriate formats are registered, you can make a tar or zip
from a git tree object as follows:
  ArchiveCommand cmd = git.archive();
  try {
      cmd.setTree(tree).setFormat(fmt).setOutputStream(out).call();
  } finally {
      cmd.release();
  }
Change-Id: I418e7e7d76422dc6f010d0b3b624d7bec3b20c6e
Remove dependency by ArchiveCommand on archive formats
Provide static registerFormat and unregisterFormat methods to allow
formats to register themselves without the ArchiveCommand code being
aware of them.
Register the basic "zip" and "tar" support at bundle activation time
(and deregister them when unloading the bundle). For anyone using
this code as an OSGi plugin it should continue to just work.
The jgit program does not load org.eclipse.jgit.pgm as an OSGi bundle,
so let the Archive command register the formats it uses explicitly
with registerFormat.
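A minimal sketch of the activation side as a plain OSGi BundleActivator
(the shape is illustrative, not necessarily the exact activator in this
change):
  public class FormatActivator implements BundleActivator {
      @Override
      public void start(BundleContext context) {
          ArchiveCommand.registerFormat("tar", new TarFormat());
          ArchiveCommand.registerFormat("zip", new ZipFormat());
      }
      @Override
      public void stop(BundleContext context) {
          ArchiveCommand.unregisterFormat("zip");
          ArchiveCommand.unregisterFormat("tar");
      }
  }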
Change-Id: Id39c03ea6923d0aed8316ed7b6bd04d5ced570a7
Drop dependency by ArchiveCommand.Format interface on commons-compress
Otherwise, anyone trying to implement a new format would have to
depend on commons-compress, even if using a different underlying
library to write the archive.
Change-Id: I301a1997e3b48aa7e32d693fd8f4b2d436c9b3a7
ArchiveCommand.Format: pass output stream as first argument to putEntry
This is more consistent with other APIs where the output side is the
first parameter, analogous to the left-hand side of an assignment.
Change-Id: Iec46bd50bc973a38b77d8367296adf5474ba515f
The size should be passed to BufferedOutputStream's constructor.
All callers seem to use the default, but that could change.
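A minimal sketch of the point, using a hypothetical wrapper class that
forwards the requested size instead of silently using the default:
  class SizedBufferedOutputStream extends BufferedOutputStream {
      SizedBufferedOutputStream(OutputStream out, int size) {
          super(out, size); // forward the size rather than calling super(out)
      }
  }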
Change-Id: I874afee6a9114698805e36813781547e6aa328a5
Make sure all bytes are written to files on close, or get an error.
Java's BufferedOutputStream swallows any errors that occur when flushing
the buffer in close().
This class overrides close to make sure an error during the final
flush is reported back to the caller.
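A minimal sketch of the override, assuming the fix is to flush
explicitly before delegating so the IOException reaches the caller (the
class name is illustrative):
  class SafeBufferedOutputStream extends BufferedOutputStream {
      SafeBufferedOutputStream(OutputStream out) {
          super(out);
      }
      @Override
      public void close() throws IOException {
          flush(); // an error here is now reported to the caller
          super.close();
      }
  }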
Change-Id: I74a82b31505fadf8378069c5f6554f1033c28f9b
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
In practice the DHT storage layer has not been performing as well as
large scale server environments want to see from a Git server.
The performance of the DHT schema degrades rapidly as small changes
are pushed into the repository due to the chunk size being less than
1/3 of the pushed pack size. Small chunks cause poor prefetch
performance during reading, and require significantly longer prefetch
lists inside of the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as the
way objects were written into the pack stream that was chunked and
stored on the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can cause the DHT code to be unable to
enumerate the objects of the linux-2.6 repository within any
reasonable amount of time.
Performing a clone from a DHT stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale; even at 5000 rows/second the lookup stage alone takes 6 minutes
(on a local filesystem, this is almost too fast to bother measuring).
Some servers like Apache Cassandra just fall over and cannot complete
the 2 million lookups in rapid fire.
On a ~400 MiB repository, the DHT schema has an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before you consider the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data to be accessed than the native filesystem
format, but it needs to come over a much smaller pipe (local Ethernet
typically) than the local SATA disk drive.
I also never got around to writing the "repack" support for the DHT
schema, as it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the amount of changes
made to the database, due to very common limitations on database
mutation rates.
This new DFS storage layer fixes a lot of those issues by taking the
simple approach of storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed by an in-process
buffer cache, similar to the WindowCache used by the local filesystem
storage layer. Unlike local file IO, the code assumes that the storage
system has relatively high latency and no concept of "file handles".
Instead it looks at the file more like HTTP byte range requests, where
a read channel is simply a thunk to trigger a read request over the
network.
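A hypothetical sketch of what such a byte-range style read channel
might look like (an illustration of the idea, not the actual DFS
interface):
  interface RangeReadChannel extends Closeable {
      // Read up to dst.remaining() bytes starting at position, e.g. by
      // issuing a single ranged request against the backing store.
      int read(long position, ByteBuffer dst) throws IOException;
      long size() throws IOException;
  }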
The DFS code in this change is still abstract; it does not store on
any particular filesystem, but is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this due to how well tuned the local filesystem code currently is.
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb
Adds a class which can be used to calculate a SHA-1 of the diff
associated with a patch, similar to git patch-id.
In this version whitespace is not ignored.
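A minimal sketch of the underlying idea, hashing the formatted diff
with SHA-1 (this illustrates the concept rather than the exact new
class; diffOutput is a placeholder for the bytes written by the diff
formatter):
  MessageDigest md = MessageDigest.getInstance("SHA-1");
  md.update(diffOutput); // whitespace is hashed as-is, not ignored
  ObjectId patchId = ObjectId.fromRaw(md.digest());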
Change-Id: I421d15ea905e23df543082786786841cbe3ef10d
Signed-off-by: Stefan Lay <stefan.lay@sap.com>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
The strings are externalized into the root resource bundles.
The resource bundles are stored under the new "resources" source
folder to get a proper Maven build.
Strings from tests are, in general, not externalized. They were
externalized only where necessary to make a test pass, typically where
e.getMessage() was used in an assert and the exception message changed
slightly due to reuse of the externalized strings.
Change-Id: Ic0f29c80b9a54fcec8320d8539a3e112852a1f7b
Signed-off-by: Sasa Zivkov <sasa.zivkov@sap.com>
During fetch over http:// clients now try to take advantage of
the info/refs?service=git-upload-pack URL to determine if the
remote side will support a standard upload-pack command stream.
If so, each block of 32 'have' lines is sent in one POST request,
prefixed by all of the 'want' lines and any previously discovered
common bases as 'have' lines.
During push over http:// clients now try to take advantage of
the info/refs?service=git-receive-pack URL to determine if the
remote side will support a standard receive-pack command stream.
If so, commands are sent along with their pack in a single HTTP
POST request.
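A minimal sketch of the discovery step on the fetch side, assuming a
plain HttpURLConnection (remoteUrl is a placeholder; a smart server is
recognized by the advertised Content-Type):
  URL u = new URL(remoteUrl + "/info/refs?service=git-upload-pack");
  HttpURLConnection c = (HttpURLConnection) u.openConnection();
  boolean smart = "application/x-git-upload-pack-advertisement"
      .equals(c.getContentType());
  // If smart, negotiation continues with POSTs to remoteUrl + "/git-upload-pack";
  // otherwise the client falls back to the dumb protocol.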
Bug: 291002
Change-Id: I8c69b16ac15c442e1a4c3bd60b4ea1a47882b851
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
This is a simple HTTP server that provides the minimum server side
support required for dumb (non-git aware) transport clients.
We produce the info/refs and objects/info/packs files on the fly
from the local repository state, but otherwise serve data as raw
files from the on-disk structure.
In the future we could better optimize the FileSender class and the
servlets that use it to take advantage of direct file to network
APIs in more advanced servlet containers like Jetty.
Our glue package borrows the idea of a micro embedded DSL from
Google Guice and uses it to configure a collection of Filters
and HttpServlets, all of which are matched against requests using
regular expressions. If a subgroup exists in the pattern, it is
extracted and used for the path info component of the request.
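A hypothetical sketch of the shape of such a binder (illustrative
only; the actual glue classes and method names may differ):
  serve("^/(.*)/info/refs$")
      .with(new InfoRefsServlet());
  // The subgroup captured by "(.*)" becomes the path info of the
  // request, identifying which repository is being accessed.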
Change-Id: Ia0f1a425d07d035e344ae54faf8aeb04763e7487
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Per CQ 3448 this is the initial contribution of the JGit project
to eclipse.org. It is derived from the historical JGit repository
at commit 3a2dd9921c.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>