Remove the 'final' keyword from:
* package-private functions
* try blocks
* for loops
This was done with the following Python script:
$ cat f.py
import sys
import re
import os

def replaceFinal(m):
    return m.group(1) + "(" + m.group(2).replace('final ', '') + ")"

methodDecl = re.compile(r"^([\t ]*[a-zA-Z_ ]+)\(([^)]*)\)")

def subst(fn):
    input = open(fn)
    os.rename(fn, fn + "~")
    dest = open(fn, 'w')
    for l in input:
        l = methodDecl.sub(replaceFinal, l)
        dest.write(l)
    dest.close()

for root, dirs, files in os.walk(".", topdown=False):
    for f in files:
        if not f.endswith('.java'):
            continue
        full = os.path.join(root, f)
        print full
        subst(full)
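For illustration, the rewrite turns a hypothetical declaration such as

    void add(final String name, final int value) {

into

    void add(String name, int value) {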
Change-Id: If533a75a417594fc893e7c669d2c1f0f6caeb7ca
Signed-off-by: Han-Wen Nienhuys <hanwen@google.com>
Allow SHA1 instances to be reused to compute another hash value, and
resume caching them in ObjectInserter and PackParser. This shaves a
small amount of running time off parsing git.git's pack file:
before after
------ ------
25.25s 25.55s
25.48s 25.06s
25.26s 24.94s
Almost noise, but recycling the instances reduces pressure on the
memory allocator, which otherwise has to find the two 80-word message
block arrays needed for hashing and collision detection.
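A minimal sketch of the reuse pattern, assuming the pure-Java SHA1
class (org.eclipse.jgit.util.sha1.SHA1) exposes the reset() method
this change introduces:

    SHA1 md = SHA1.newInstance();
    md.update(first, 0, first.length);
    ObjectId a = md.toObjectId();
    md.reset();                          // recycle the message block arrays
    md.update(second, 0, second.length);
    ObjectId b = md.toObjectId();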
Change-Id: I4af88a720e81460293bc5c5d1d3db1a831e7e228
Generate names for objects using only the pure Java SHA1
implementation, but continue using MessageDigest in tests.
This opens the possibility of changing the hashing function
to incorporate additional safety measures, such as those
used in sha1dc[1].
Since MessageDigest has higher throughput, continue using
MessageDigest for computing pack, idx and DirCache trailers.
These are less likely to be sensitive to SHAttered[2] types
of attacks, as Git uses them to detect random bit flips
during transfer, and not for content identity.
[1] https://github.com/cr-marcstevens/sha1collisiondetection
[2] https://shattered.it/
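A hedged sketch of the split described above (SHA1 is
org.eclipse.jgit.util.sha1.SHA1, MessageDigest is
java.security.MessageDigest; the checked NoSuchAlgorithmException is
omitted for brevity):

    // Object names: pure Java SHA1, which can later grow
    // sha1dc-style collision detection.
    SHA1 h = SHA1.newInstance();
    h.update(raw, 0, raw.length);
    ObjectId name = h.toObjectId();

    // Pack, idx and DirCache trailers: MessageDigest, for throughput.
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    byte[] trailer = md.digest(raw);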
Change-Id: If6da98334201f7f20cb916e46f782c45f373784e
Fixes Javadoc errors in org.eclipse.jgit introduced with I59539ac.
Adds the missing @param information to the private method. These tags
were generated via the Eclipse tooltip to resolve the compile errors.
Bug: 511043
Change-Id: I9ba551978eab750326d1a067b296e3ae93925871
Signed-off-by: Lars Vogel <Lars.Vogel@vogella.com>
An unreferenced object might appear in a pack. This could only happen
because it was previously referenced, and then later that reference
was removed. When we gc, we copy the referenced objects into a new
pack, and delete the old pack. This would remove the unreferenced
object. Now we first create a loose object from any unreferenced
object in the doomed pack. This kicks off the two-week grace period
for that object, after which it will be collected if it's not
referenced.
This matches the behavior of regular git.
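A minimal sketch of the idea, with hypothetical helper names (the
actual change lives in JGit's GC code):

    // Before deleting the doomed pack, write its unreferenced objects
    // out loosely so the normal expiry grace period applies to them.
    for (ObjectId id : objectsIn(doomedPack)) {   // hypothetical helper
        if (!isReachable(id))                     // hypothetical helper
            writeLooseObject(id);                 // starts the grace period
    }
    delete(doomedPack);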
Change-Id: I59539aca1d0d83622c41aa9bfbdd72fa868ee9fb
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Jonathan Nieder <jrn@google.com>
Expose the ObjectInserter that created an ObjectReader
We've found in Gerrit Code Review that it is common to pass around
both an ObjectReader (or more commonly a RevWalk wrapping one) and an
ObjectInserter. These code paths often assume that the ObjectReader
can read back any objects created by the ObjectInserter without
flushing. However, we previously had no way to enforce that constraint
programmatically, leading to hard-to-spot problems.
Provide a solution by exposing the ObjectInserter that created an
ObjectReader, when known. Callers can either continue passing both
objects and check:
    reader.getCreatedFromInserter() == inserter
or they can just pass around ObjectReader and extract the inserter
when it's needed (checking that it's not null at usage time).
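A hedged usage sketch (getCreatedFromInserter() returns null for
readers that were not created by an inserter):

    ObjectInserter ins = repo.newObjectInserter();
    ObjectReader reader = ins.newReader();
    if (reader.getCreatedFromInserter() == ins) {
        // reader can see objects inserted by ins without flushing
    }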
Change-Id: Ibbf5d1968b506f6b47030ab1b046ffccb47352ea
Add a method to ObjectInserter to read back inserted objects
In the DFS implementation, flushing an inserter writes a new pack to
the storage system and is potentially very slow, but was the only way
to ensure previously-inserted objects were available. For some tasks,
like performing a series of three-way merges, the total size of all
inserted objects may be small enough to avoid flushing the in-memory
buffered data.
DfsOutputStream already provides a read method to read back from the
not-yet-flushed data, so use this to provide an ObjectReader in the
DFS case.
In the file-backed case, objects are written out loosely on the fly,
so the implementation can just return the existing WindowCursor.
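A hedged sketch of the resulting pattern, using method names from
current JGit:

    try (ObjectInserter ins = repo.newObjectInserter();
         ObjectReader reader = ins.newReader()) {
        ObjectId id = ins.insert(Constants.OBJ_BLOB, content);
        ObjectLoader ldr = reader.open(id); // readable without ins.flush()
    }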
Change-Id: I454fdfb88f4d215e31b7da2b2a069853b197b3dd
Some of our Windows users have reported sporadic file system access
problems related to the ObjectDirectory(Inserter) file deletion code
in combination with antivirus/firewall tools. For one of these users
the problem was fairly reproducible, and changing the deletion to
RETRY solved it.
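A hedged example of the retrying deletion, using the option flags in
org.eclipse.jgit.util.FileUtils:

    FileUtils.delete(tmp, FileUtils.RETRY | FileUtils.SKIP_MISSING);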
Change-Id: I1e4001d5557fca693b7bac401268599467cb0c9e
Signed-off-by: Marc Strapetz <marc.strapetz@syntevo.com>
JGit 3.0: move internal classes into an internal subpackage
This breaks all existing callers once. Applications are not supposed
to build against the internal storage API unless they can accept API
churn and make necessary updates as versions change.
Change-Id: I2ab1327c202ef2003565e1b0770a583970e432e9
A few classes such as Constants are marked with @SuppressWarnings, as
are toString() methods with many literals, but otherwise $NON-NLS-n$
is used for strings containing text that should not be translated. A
few literals may fall into the gray zone, but mostly I've tried to
only tag the obvious ones.
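For reference, the Eclipse marker tags each non-translatable literal
by its position on the line:

    String refPrefix = "refs/heads/"; //$NON-NLS-1$
    sb.append("[").append(name).append("]"); //$NON-NLS-1$ //$NON-NLS-2$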
Change-Id: I22e50a77e2bf9e0b842a66bdf674e8fa1692f590
For streams that should not be closed, i.e. that don't own an
underlying stream, and for in-memory streams that do not need to be
closed, we just suppress the warning. This mostly applies to test
cases; garbage collection is enough. For streams backed by external
resources (e.g. files) we add the necessary call to close().
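A hedged illustration of the two cases:

    // In-memory stream: no external resource, GC is enough.
    @SuppressWarnings("resource")
    ByteArrayOutputStream buf = new ByteArrayOutputStream();

    // File-backed stream: must be closed explicitly.
    FileOutputStream out = new FileOutputStream(file);
    try {
        out.write(data);
    } finally {
        out.close();
    }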
Change-Id: I4d883ba2e7d07f199fe57ccb3459ece00441a570
This reverts commit 88fe2836ed.
Auto CRLF isn't special enough to be screwing around with the buffers
used for raw byte processing of the ObjectInserter API. If it needs a
buffer to process a file that is bigger than the buffer allocated by
an ObjectInserter, it needs to do its own buffer management.
Change-Id: Ida4aaa80d0f9f78035f3d2a9ebdde904c980f89a
Auto CRLF only works for small files, where small means the size of
the buffer, i.e. about 8K. This quick-and-dirty fix reallocates the
buffer to be large enough.
Bug: 369780
Change-Id: Ifc34ad204fbf5986b257a5c616e4a8c601e8261a
This patch introduces CRLF handling to DirCacheCheckout and
WorkingTreeIterator, supporting AutoCRLF for add, checkout, reset and
status, and hopefully some other places that depend on the underlying
logic of the affected APIs.
The patch includes test cases for the Status command provided by
Tomasz Zarna for bug 353867.
The core.eol and core.safecrlf options are not yet supported.
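A hedged example of enabling the option through JGit's config API
(names as in current JGit):

    StoredConfig config = repo.getConfig();
    config.setEnum(ConfigConstants.CONFIG_CORE_SECTION, null,
            ConfigConstants.CONFIG_KEY_AUTOCRLF, CoreConfig.AutoCRLF.TRUE);
    config.save();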
Bug: 301775
Bug: 353867
Change-Id: I2280a2dc0698829475de6a662a6c6e80b1df7663
Improve performance when writing trees and small blobs
ObjectDirectoryInserter was always creating a temporary file,
writing the complete compressed contents of a tree, fsync()'ing
that to stable storage, and only then checking to see if there
was already an object with the same SHA-1 in the repository.
For commits this strategy makes some sense: a commit is very unlikely
to already exist in the repository, as there are embedded timestamps
which change with each commit.
However for trees coming out of DirCache, it is more common for the
tree to already exist in the repository. Most subdirectories are
not modified in any given commit. Doing all of this local file IO
for things that already exist is very slow.
Try to detect cases where the object is "small enough" that it can
be processed entirely in memory, and avoid doing disk IO entirely
if the object already exists.
Also increase the size of the output buffer for the deflation.
This should boost the average write(2) syscall size from 512 bytes
to 8192 bytes, making streaming of large compressed contents to
disk slightly more efficient.
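A hedged sketch of the fast path, with hypothetical helper names (the
real logic lives in ObjectDirectoryInserter):

    ObjectId insert(int type, byte[] data) throws IOException {
        ObjectId id = idFor(type, data);      // hash entirely in memory
        if (alreadyExists(id))                // hypothetical existence probe
            return id;                        // no temp file, no fsync
        return writeToTempAndRename(type, data, id); // hypothetical slow path
    }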
Change-Id: I1d40364e8725468522435814631916d73174c92b
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Refactor IndexPack to not require local filesystem
By moving the logic that parses a pack stream from the network (or
a bundle) into a type that can be constructed by an ObjectInserter,
repository implementations have a chance to inject their own logic
for storing object data received into the destination repository.
The API isn't completely generic yet; there are still quite a few
assumptions that the PackParser subclass is storing the data onto
the local filesystem as a single file. But it's about the simplest
split of IndexPack I can come up with without completely ripping
the code apart.
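A hedged usage sketch of the resulting API (method names as they
appear in later JGit releases):

    try (ObjectInserter ins = repo.newObjectInserter()) {
        PackParser parser = ins.newPackParser(in);
        parser.parse(NullProgressMonitor.INSTANCE);
        ins.flush();
    }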
Change-Id: I5b167c9cc6d7a7c56d0197c62c0fd0036a83ec6c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Chris Aniszczyk <caniszczyk@gmail.com>
java.io.File.delete() reports failure by returning false. Fix the
code which silently ignored this exceptional return value. Also
remove some duplicate deletion helper methods.
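A minimal example of the checked pattern:

    static void deleteChecked(File f) throws IOException {
        if (!f.delete())
            throw new IOException("Cannot delete " + f.getAbsolutePath());
    }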
Change-Id: I80ed20ca1f07a2bc6e779957a4ad0c713789c5be
Signed-off-by: Matthias Sohn <matthias.sohn@sap.com>
Some repositories may be on really unstable filesystems, but still
want to have good reliability when objects are written to disk. If
core.fsyncObjectFiles is set to true, request the JVM to ensure the
data is written before returning success to the caller of insert.
The option defaults to false because it should be useless on any
filesystem that orders writes and metadata, such as ext3 mounted with
data=ordered (or data=journal). But it may be useful on some systems
(especially HFS+) where file content may flush to the disk
independently of filesystem structure changes.
Because FileChannel.force(boolean) only claims to ensure data is
written if it was written using the write(ByteBuffer) method of
FileChannel, redirect all writes when using fsyncObjectFiles to go
through the FileChannel interface instead of through the older style
OutputStream interface. This may not be necessary on all JVMs, but
it's more portable to follow the definition than the common behavior.
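A hedged sketch of the FileChannel path (plain java.nio, not JGit's
exact code):

    static void writeSynced(FileChannel ch, byte[] data) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(data);
        while (buf.hasRemaining())
            ch.write(buf);     // write through FileChannel, per its contract
        ch.force(true);        // flush file data and metadata to disk
    }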
Change-Id: I57f6b6bb7e403c07fbae989dbf3758eaf5edbc78
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Update CachedObjectDirectory when inserting objects
If an ObjectInserter is created from a CachedObjectDirectory, we need
to ensure the cache is updated whenever a new loose object is actually
added to the loose objects directory, otherwise a future read from an
ObjectReader on the CachedObjectDirectory might not be able to open
the newly created object.
We mostly had the infrastructure in place to implement this due to the
injection of unpacked large deltas, but we didn't have a way to pass
the ObjectId from ObjectDirectoryInserter to CachedObjectDirectory,
because the inserter was using the underlying ObjectDirectory and not
the CachedObjectDirectory. Redirecting to CachedObjectDirectory
ensures the cache is updated.
Change-Id: I1f7bdfacc7ad77ebdb885f655e549cc570652225
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
When running IndexPack we use a CachedObjectDirectory, which
knows what objects are loose and tries to avoid stat(2) calls for
objects that do not exist in the repository, as stat(2) on Win32
is very slow.
However large delta objects found in a pack file are expanded into
a loose object, in order to avoid costly delta chain processing
when that object is used as a base for another delta.
If this expand occurs while working with the CachedObjectDirectory,
we need to update the cached directory data to include this new
object, otherwise it won't be available when we try to open it
during the object verify phase.
Bug: 324868
Change-Id: Idf0c76d4849d69aa415ead32e46a435622395d68
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Instead of spooling large delta bases into temporary files and then
immediately deleting them afterwards, spool the large delta out to
a normal loose object. Later any requests for that large delta can
be answered by reading from the loose object, which is much easier
to stream efficiently for readers.
Since the object is now duplicated, once in the pack as a delta and
again as a loose object, any future prune-packed will automatically
delete the loose object variant, releasing the wasted disk space.
As prune-packed is run automatically during either repack or gc, and
gc --auto triggers automatically based on the number of loose objects,
we get automatic cache management for free. Large objects that were
unpacked will be periodically cleared out, and will simply be restored
later if they are needed again.
After a short offline discussion with Junio Hamano today, we may want
to propose a change to prune-packed to hold onto larger loose objects
which also exist in pack files as deltas, if the loose object was
recently accessed or modified in the last 2 days.
Change-Id: I3668a3967c807010f48cd69f994dcbaaf582337c
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Remember loose objects and fast-track their lookup
Recently created objects are usually what branches point to, and
are usually written out as loose objects. But due to the high cost
of asking the operating system if a file exists, these are the last
thing that ObjectDirectory examines when looking for an object by
its ObjectId.
Caching recently seen loose objects permits the opening code to
jump directly to the loose object, accelerating lookup for branch
heads that are accessed often.
To avoid exploding the cache it's limited to approximately 2048
entries. When more ids are added, the table is simply cleared
and reset in size.
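A hedged, simplified sketch of the clear-on-overflow behavior (not
JGit's actual data structure, which hashes ids into a fixed table):

    private static final int MAX_ENTRIES = 2048;
    private final Set<ObjectId> recentLoose = new HashSet<>();

    void remember(ObjectId id) {
        if (recentLoose.size() >= MAX_ENTRIES)
            recentLoose.clear(); // reset rather than evict one-by-one
        recentLoose.add(id);
    }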
Change-Id: I18f483217412b102f754ffd496c87061d592e535
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Similar to what we did on Repository, the openObject method
already implied we wanted to open an object, given its main
argument was of type AnyObjectId. Simplify the method name
to just the action, has or open.
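A hedged before/after illustration of the rename:

    // before
    db.hasObject(id);
    db.openObject(id);

    // after
    db.has(id);
    db.open(id);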
Change-Id: If055e5e0d8de0e2424c18a773f6d2bc2f66054f4
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Move FileRepository to storage.file.FileRepository
This move isolates all of the local file specific implementation code
into a single package, where their package-private methods and support
classes are properly hidden away from the rest of the core library.
Because of the sheer number of files impacted, I have limited this
change to only the renames and the updated imports.
Change-Id: Icca4884e1a418f83f8b617d0c4c78b73d8a4bd17
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Refactor object writing responsibilities to ObjectDatabase
The ObjectInserter API permits ObjectDatabase implementations to
control their own object insertion behavior, rather than forcing
it to always be a new loose file created in the local filesystem.
Inserted objects can also be queued and written asynchronously to
the main application, such as by appending into a pack file that
is later closed and added to the repository.
This change also starts to open the door to non-file based object
storage, such as an in-memory HashMap for unit testing, or a more
complex system built on top of a distributed hash table.
To help existing application code port to the newer interface we
are keeping ObjectWriter as a delegation wrapper to the new API.
Each ObjectWriter instance holds a reference to an ObjectInserter
for the Repository's top-level ObjectDatabase, and it flushes and
releases that instance on each object processed.
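A hedged usage sketch of the new interface (names as in JGit at the
time; later releases replace release() with close()):

    ObjectInserter ins = repo.newObjectInserter();
    try {
        ObjectId blobId = ins.insert(Constants.OBJ_BLOB, data);
        ins.flush(); // make inserted objects visible to readers
    } finally {
        ins.release();
    }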
Change-Id: I413224fb95563e7330c82748deb0aada4e0d6ace
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>