DFS: A storage layer for JGit
In practice the DHT storage layer has not performed as well as
large-scale server environments expect from a Git server.
Performance of the DHT schema degrades rapidly as small changes are
pushed into the repository, because the chunk size is less than 1/3
of the pushed pack size. Small chunks cause poor prefetch performance
during reading, and require significantly longer prefetch lists inside
the chunk meta field to work around the small size.
The DHT code is very complex (>17,000 lines of code) and is very
sensitive to the underlying database round-trip time, as well as to
the way objects were written into the pack stream that was chunked and
stored in the database. A poor pack layout (from any version of C Git
prior to Junio reworking it) can leave the DHT code unable to
enumerate the objects of the linux-2.6 repository in any reasonable
amount of time.
Performing a clone from a DHT-stored repository of 2 million objects
takes 2 million row lookups in the DHT to locate the OBJECT_INDEX row
for each object being cloned. This is very difficult for some DHTs to
scale to; even at 5000 rows/second the lookup stage alone takes more
than 6 minutes (on a local filesystem, this is almost too fast to
bother measuring). Some servers like Apache Cassandra simply fall over
and cannot complete the 2 million lookups in rapid fire.
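The back-of-envelope arithmetic behind that estimate can be checked directly; the figures (2 million objects, 5000 rows/second) come from the text above, while the class and method names here are mine:

```java
/** Sketch only: verifies the lookup-cost estimate quoted in the text. */
public class CloneLookupCost {
	/** Seconds spent on OBJECT_INDEX row lookups alone. */
	static long lookupSeconds(long objects, long rowsPerSecond) {
		return objects / rowsPerSecond;
	}

	public static void main(String[] args) {
		long s = lookupSeconds(2_000_000L, 5_000L);
		// 400 seconds, i.e. 6 minutes 40 seconds, before any pack data moves.
		System.out.println(s / 60 + " min " + s % 60 + " sec");
	}
}
```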
On a ~400 MiB repository, the DHT schema carries an extra 25 MiB of
redundant data that gets downloaded to the JGit process, and that is
before considering the cost of the OBJECT_INDEX table also being
fully loaded, which is at least 223 MiB of data for the linux kernel
repository. In the DHT schema, answering a `git clone` of the ~400 MiB
linux kernel needs to load 248 MiB of "index" data from the DHT, in
addition to the ~400 MiB of pack data that gets sent to the client.
This is 193 MiB more data than the native filesystem format needs to
access, and it must come over a much smaller pipe (typically local
Ethernet) than a local SATA disk drive.
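The 248 MiB and 193 MiB figures above fit together as follows; note the ~55 MiB native index size is only implied by subtraction (248 - 193), not stated directly in the text:

```java
/** Sketch only: arithmetic behind the index-overhead figures above. */
public class IndexOverhead {
	public static void main(String[] args) {
		int redundant = 25;       // MiB of redundant DHT data downloaded
		int objectIndex = 223;    // MiB of OBJECT_INDEX rows, fully loaded
		int dhtIndex = redundant + objectIndex;  // 248 MiB of "index" data
		int nativeIndex = dhtIndex - 193;        // ~55 MiB implied for native .idx
		System.out.println(dhtIndex + " MiB DHT vs ~" + nativeIndex + " MiB native");
	}
}
```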
I also never got around to writing "repack" support for the DHT
schema; it turns out to be fairly complex to safely repack data in
the repository while also trying to minimize the number of changes
made to the database, due to very common limits on database
mutation rates.
This new DFS storage layer fixes many of those issues by taking the
simple approach of storing relatively standard Git pack and index
files on an abstract filesystem. Packs are accessed through an
in-process buffer cache, similar to the WindowCache used by the local
filesystem storage layer. Unlike local file IO, the code assumes the
storage system has relatively high latency and no concept of "file
handles". Instead it treats the file more like HTTP byte range
requests, where a read channel is simply a thunk that triggers a read
request over the network.
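The "read channel as a thunk" idea can be sketched as below. This is not the actual JGit interface; the names and shape here are my own illustration of ranged reads over a high-latency store:

```java
import java.nio.ByteBuffer;

/**
 * Sketch only: a read channel modeled as a thunk over byte-range
 * requests, rather than an open OS file handle. A real backend would
 * also declare IOException and issue a network request per read().
 */
interface RangedReader {
	/** Read bytes starting at position into dst; -1 at end of file. */
	int read(long position, ByteBuffer dst);
}

/** In-memory stand-in for a remote object, for illustration. */
class InMemoryRangedReader implements RangedReader {
	private final byte[] data;

	InMemoryRangedReader(byte[] data) {
		this.data = data;
	}

	@Override
	public int read(long position, ByteBuffer dst) {
		if (position >= data.length)
			return -1;
		// Serve only the requested range, like an HTTP Range response.
		int n = (int) Math.min(dst.remaining(), data.length - position);
		dst.put(data, (int) position, n);
		return n;
	}
}
```

Because each read names its own position, callers hold no server-side state between requests, which is what makes a buffer cache in front of the channel so important.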
The DFS code in this change is still abstract; it does not store on
any particular filesystem, but it is fairly well suited to Amazon S3
or Apache Hadoop HDFS. Storing packs directly on HDFS rather than in
HBase removes a layer of abstraction, as most HBase row reads turn
into an HDFS read anyway.
Most of the DFS code in this change was blatantly copied from the
local filesystem code. Most parts should be refactored to be shared
between the two storage systems, but right now I am hesitant to do
this given how well tuned the local filesystem code currently is.
Change-Id: Iec524abdf172e9ec5485d6c88ca6512cd8a6eafb
- /*
- * Copyright (C) 2008-2011, Google Inc.
- * Copyright (C) 2007, Robin Rosenberg <robin.rosenberg@dewire.com>
- * Copyright (C) 2006-2008, Shawn O. Pearce <spearce@spearce.org>
- * and other copyright owners as documented in the project's IP log.
- *
- * This program and the accompanying materials are made available
- * under the terms of the Eclipse Distribution License v1.0 which
- * accompanies this distribution, is reproduced below, and is
- * available at http://www.eclipse.org/org/documents/edl-v10.php
- *
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * - Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- *
- * - Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following
- * disclaimer in the documentation and/or other materials provided
- * with the distribution.
- *
- * - Neither the name of the Eclipse Foundation, Inc. nor the
- * names of its contributors may be used to endorse or promote
- * products derived from this software without specific prior
- * written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
- * CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
- * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
- * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
- * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
- * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
- package org.eclipse.jgit.storage.dfs;
-
- import java.io.BufferedInputStream;
- import java.io.EOFException;
- import java.io.IOException;
- import java.io.InputStream;
- import java.nio.channels.Channels;
- import java.text.MessageFormat;
- import java.util.Set;
- import java.util.zip.CRC32;
- import java.util.zip.DataFormatException;
- import java.util.zip.Inflater;
-
- import org.eclipse.jgit.errors.CorruptObjectException;
- import org.eclipse.jgit.errors.LargeObjectException;
- import org.eclipse.jgit.errors.MissingObjectException;
- import org.eclipse.jgit.errors.PackInvalidException;
- import org.eclipse.jgit.errors.StoredObjectRepresentationNotAvailableException;
- import org.eclipse.jgit.internal.JGitText;
- import org.eclipse.jgit.lib.AbbreviatedObjectId;
- import org.eclipse.jgit.lib.AnyObjectId;
- import org.eclipse.jgit.lib.Constants;
- import org.eclipse.jgit.lib.ObjectId;
- import org.eclipse.jgit.lib.ObjectLoader;
- import org.eclipse.jgit.storage.file.PackIndex;
- import org.eclipse.jgit.storage.file.PackReverseIndex;
- import org.eclipse.jgit.storage.pack.BinaryDelta;
- import org.eclipse.jgit.storage.pack.PackOutputStream;
- import org.eclipse.jgit.storage.pack.StoredObjectRepresentation;
- import org.eclipse.jgit.util.IO;
- import org.eclipse.jgit.util.LongList;
-
- /**
- * A Git version 2 pack file representation. A pack file contains Git objects in
- * delta packed format yielding high compression of lots of object where some
- * objects are similar.
- */
- public final class DfsPackFile {
- /**
- * File offset used to cache {@link #index} in {@link DfsBlockCache}.
- * <p>
- * To better manage memory, the forward index is stored as a single block in
- * the block cache under this file position. A negative value is used
- * because it cannot occur in a normal pack file, and it is less likely to
- * collide with a valid data block from the file as the high bits will all
- * be set when treated as an unsigned long by the cache code.
- */
- private static final long POS_INDEX = -1;
-
- /** Offset used to cache {@link #reverseIndex}. See {@link #POS_INDEX}. */
- private static final long POS_REVERSE_INDEX = -2;
-
- /** Cache that owns this pack file and its data. */
- private final DfsBlockCache cache;
-
- /** Description of the pack file's storage. */
- private final DfsPackDescription packDesc;
-
- /** Unique identity of this pack while in-memory. */
- final DfsPackKey key;
-
- /**
- * Total number of bytes in this pack file.
- * <p>
- * This field is initialized from the pack description, or to -1 when the
- * size is unknown; it is populated once a block is loaded.
- */
- volatile long length;
-
- /**
- * Preferred alignment for loading blocks from the backing file.
- * <p>
- * It is initialized to 0 and filled in on the first read made from the
- * file. Block sizes may be odd, e.g. 4091, caused by the underlying DFS
- * storing 4091 user bytes and 5 bytes block metadata into a lower level
- * 4096 byte block on disk.
- */
- private volatile int blockSize;
-
- /** True once corruption has been detected that cannot be worked around. */
- private volatile boolean invalid;
-
- /**
- * Lock for initialization of {@link #index} and {@link #corruptObjects}.
- * <p>
- * This lock ensures only one thread can perform the initialization work.
- */
- private final Object initLock = new Object();
-
- /** Index mapping {@link ObjectId} to position within the pack stream. */
- private volatile DfsBlockCache.Ref<PackIndex> index;
-
- /** Reverse version of {@link #index} mapping position to {@link ObjectId}. */
- private volatile DfsBlockCache.Ref<PackReverseIndex> reverseIndex;
-
- /**
- * Objects we have tried to read, and discovered to be corrupt.
- * <p>
- * The list is allocated after the first corruption is found, and filled in
- * as more entries are discovered. Typically this list is never used, as
- * pack files do not usually contain corrupt objects.
- */
- private volatile LongList corruptObjects;
-
- /**
- * Construct a reader for an existing pack file.
- *
- * @param cache
- * cache that owns the pack data.
- * @param desc
- * description of the pack within the DFS.
- * @param key
- * interned key used to identify blocks in the block cache.
- */
- DfsPackFile(DfsBlockCache cache, DfsPackDescription desc, DfsPackKey key) {
- this.cache = cache;
- this.packDesc = desc;
- this.key = key;
-
- length = desc.getPackSize();
- if (length <= 0)
- length = -1;
- }
-
- /** @return description that was originally used to configure this pack file. */
- public DfsPackDescription getPackDescription() {
- return packDesc;
- }
-
- /** @return bytes cached in memory for this pack, excluding the index. */
- public long getCachedSize() {
- return key.cachedSize.get();
- }
-
- private String getPackName() {
- return packDesc.getPackName();
- }
-
- void setBlockSize(int newSize) {
- blockSize = newSize;
- }
-
- void setPackIndex(PackIndex idx) {
- long objCnt = idx.getObjectCount();
- int recSize = Constants.OBJECT_ID_LENGTH + 8;
- int sz = (int) Math.min(objCnt * recSize, Integer.MAX_VALUE);
- index = cache.put(key, POS_INDEX, sz, idx);
- }
-
- PackIndex getPackIndex(DfsReader ctx) throws IOException {
- return idx(ctx);
- }
-
- private PackIndex idx(DfsReader ctx) throws IOException {
- DfsBlockCache.Ref<PackIndex> idxref = index;
- if (idxref != null) {
- PackIndex idx = idxref.get();
- if (idx != null)
- return idx;
- }
-
- if (invalid)
- throw new PackInvalidException(getPackName());
-
- synchronized (initLock) {
- idxref = index;
- if (idxref != null) {
- PackIndex idx = idxref.get();
- if (idx != null)
- return idx;
- }
-
- PackIndex idx;
- try {
- ReadableChannel rc = ctx.db.openPackIndex(packDesc);
- try {
- InputStream in = Channels.newInputStream(rc);
- int wantSize = 8192;
- int bs = rc.blockSize();
- if (0 < bs && bs < wantSize)
- bs = (wantSize / bs) * bs;
- else if (bs <= 0)
- bs = wantSize;
- in = new BufferedInputStream(in, bs);
- idx = PackIndex.read(in);
- } finally {
- rc.close();
- }
- } catch (EOFException e) {
- invalid = true;
- IOException e2 = new IOException(MessageFormat.format(
- DfsText.get().shortReadOfIndex, packDesc.getIndexName()));
- e2.initCause(e);
- throw e2;
- } catch (IOException e) {
- invalid = true;
- IOException e2 = new IOException(MessageFormat.format(
- DfsText.get().cannotReadIndex, packDesc.getIndexName()));
- e2.initCause(e);
- throw e2;
- }
-
- setPackIndex(idx);
- return idx;
- }
- }
-
- private PackReverseIndex getReverseIdx(DfsReader ctx) throws IOException {
- DfsBlockCache.Ref<PackReverseIndex> revref = reverseIndex;
- if (revref != null) {
- PackReverseIndex revidx = revref.get();
- if (revidx != null)
- return revidx;
- }
-
- synchronized (initLock) {
- revref = reverseIndex;
- if (revref != null) {
- PackReverseIndex revidx = revref.get();
- if (revidx != null)
- return revidx;
- }
-
- PackReverseIndex revidx = new PackReverseIndex(idx(ctx));
- reverseIndex = cache.put(key, POS_REVERSE_INDEX,
- packDesc.getReverseIndexSize(), revidx);
- return revidx;
- }
- }
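Both `idx()` and `getReverseIdx()` above follow the same double-checked locking pattern: an unsynchronized read of a volatile reference, then a second check under `initLock` before performing the expensive load exactly once. A minimal standalone sketch of that pattern (`LazyIndex` and `loadIndex()` are hypothetical stand-ins for illustration, not JGit API):

```java
/**
 * Sketch of the double-checked lazy initialization used by idx() and
 * getReverseIdx(). LazyIndex and loadIndex() are hypothetical names.
 */
class LazyIndex {
	private final Object initLock = new Object();
	private volatile long[] index; // stands in for the cached PackIndex
	int loads; // counts how often the expensive load actually ran

	long[] get() {
		long[] idx = index; // first check, no lock taken
		if (idx != null)
			return idx;
		synchronized (initLock) {
			idx = index; // second check, now under the lock
			if (idx == null) {
				idx = loadIndex(); // expensive read happens at most once
				index = idx;
			}
			return idx;
		}
	}

	private long[] loadIndex() {
		loads++;
		return new long[] { 1, 2, 3 }; // placeholder for PackIndex.read(...)
	}
}
```

The volatile field makes the fast path lock-free for every call after the first, while the second check under the lock prevents two racing threads from both loading the index.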
-
- boolean hasObject(DfsReader ctx, AnyObjectId id) throws IOException {
- final long offset = idx(ctx).findOffset(id);
- return 0 < offset && !isCorrupt(offset);
- }
-
- /**
- * Get an object from this pack.
- *
- * @param ctx
- * temporary working space associated with the calling thread.
- * @param id
- * the object to obtain from the pack. Must not be null.
- * @return the object loader for the requested object if it is contained in
- * this pack; null if the object was not found.
- * @throws IOException
- * the pack file or the index could not be read.
- */
- ObjectLoader get(DfsReader ctx, AnyObjectId id)
- throws IOException {
- long offset = idx(ctx).findOffset(id);
- return 0 < offset && !isCorrupt(offset) ? load(ctx, offset) : null;
- }
-
- long findOffset(DfsReader ctx, AnyObjectId id) throws IOException {
- return idx(ctx).findOffset(id);
- }
-
- void resolve(DfsReader ctx, Set<ObjectId> matches, AbbreviatedObjectId id,
- int matchLimit) throws IOException {
- idx(ctx).resolve(matches, id, matchLimit);
- }
-
- /** Release all memory used by this DfsPackFile instance. */
- public void close() {
- cache.remove(this);
- index = null;
- reverseIndex = null;
- }
-
- /**
- * Obtain the total number of objects available in this pack. This method
- * relies on the pack index, which gives the number of effectively available
- * objects.
- *
- * @param ctx
- * current reader for the calling thread.
- * @return number of objects in the index of this pack, and thus in the pack
- * @throws IOException
- * the index file cannot be loaded into memory.
- */
- long getObjectCount(DfsReader ctx) throws IOException {
- return idx(ctx).getObjectCount();
- }
-
- /**
- * Search for object id with the specified start offset in associated pack
- * (reverse) index.
- *
- * @param ctx
- * current reader for the calling thread.
- * @param offset
- * start offset of object to find
- * @return object id for this offset, or null if no object was found
- * @throws IOException
- * the index file cannot be loaded into memory.
- */
- ObjectId findObjectForOffset(DfsReader ctx, long offset) throws IOException {
- return getReverseIdx(ctx).findObject(offset);
- }
-
- private byte[] decompress(long position, int sz, DfsReader ctx)
- throws IOException, DataFormatException {
- byte[] dstbuf;
- try {
- dstbuf = new byte[sz];
- } catch (OutOfMemoryError noMemory) {
- // The size may be larger than our heap allows, return null to
- // let the caller know allocation isn't possible and it should
- // use the large object streaming approach instead.
- //
- // For example, this can occur when sz is 640 MB, and JRE
- // maximum heap size is only 256 MB. Even if the JRE has
- // 200 MB free, it cannot allocate a 640 MB byte array.
- return null;
- }
-
- if (ctx.inflate(this, position, dstbuf, false) != sz)
- throw new EOFException(MessageFormat.format(
- JGitText.get().shortCompressedStreamAt,
- Long.valueOf(position)));
- return dstbuf;
- }
-
- void copyPackAsIs(PackOutputStream out, boolean validate, DfsReader ctx)
- throws IOException {
- // Pin the first window, this ensures the length is accurate.
- ctx.pin(this, 0);
- ctx.copyPackAsIs(this, length, validate, out);
- }
-
- void copyAsIs(PackOutputStream out, DfsObjectToPack src,
- boolean validate, DfsReader ctx) throws IOException,
- StoredObjectRepresentationNotAvailableException {
- final CRC32 crc1 = validate ? new CRC32() : null;
- final CRC32 crc2 = validate ? new CRC32() : null;
- final byte[] buf = out.getCopyBuffer();
-
- // Rip apart the header so we can discover the size.
- //
- try {
- readFully(src.offset, buf, 0, 20, ctx);
- } catch (IOException ioError) {
- StoredObjectRepresentationNotAvailableException gone;
- gone = new StoredObjectRepresentationNotAvailableException(src);
- gone.initCause(ioError);
- throw gone;
- }
- int c = buf[0] & 0xff;
- final int typeCode = (c >> 4) & 7;
- long inflatedLength = c & 15;
- int shift = 4;
- int headerCnt = 1;
- while ((c & 0x80) != 0) {
- c = buf[headerCnt++] & 0xff;
- inflatedLength += ((long) (c & 0x7f)) << shift;
- shift += 7;
- }
-
- if (typeCode == Constants.OBJ_OFS_DELTA) {
- do {
- c = buf[headerCnt++] & 0xff;
- } while ((c & 128) != 0);
- if (validate) {
- crc1.update(buf, 0, headerCnt);
- crc2.update(buf, 0, headerCnt);
- }
- } else if (typeCode == Constants.OBJ_REF_DELTA) {
- if (validate) {
- crc1.update(buf, 0, headerCnt);
- crc2.update(buf, 0, headerCnt);
- }
-
- readFully(src.offset + headerCnt, buf, 0, 20, ctx);
- if (validate) {
- crc1.update(buf, 0, 20);
- crc2.update(buf, 0, 20);
- }
- headerCnt += 20;
- } else if (validate) {
- crc1.update(buf, 0, headerCnt);
- crc2.update(buf, 0, headerCnt);
- }
-
- final long dataOffset = src.offset + headerCnt;
- final long dataLength = src.length;
- final long expectedCRC;
- final DfsBlock quickCopy;
-
- // Verify the object isn't corrupt before sending. If it is,
- // we report it missing instead.
- //
- try {
- quickCopy = ctx.quickCopy(this, dataOffset, dataLength);
-
- if (validate && idx(ctx).hasCRC32Support()) {
- // Index has the CRC32 code cached, validate the object.
- //
- expectedCRC = idx(ctx).findCRC32(src);
- if (quickCopy != null) {
- quickCopy.crc32(crc1, dataOffset, (int) dataLength);
- } else {
- long pos = dataOffset;
- long cnt = dataLength;
- while (cnt > 0) {
- final int n = (int) Math.min(cnt, buf.length);
- readFully(pos, buf, 0, n, ctx);
- crc1.update(buf, 0, n);
- pos += n;
- cnt -= n;
- }
- }
- if (crc1.getValue() != expectedCRC) {
- setCorrupt(src.offset);
- throw new CorruptObjectException(MessageFormat.format(
- JGitText.get().objectAtHasBadZlibStream,
- Long.valueOf(src.offset), getPackName()));
- }
- } else if (validate) {
- // We don't have a CRC32 code in the index, so compute it
- // now while inflating the raw data to get zlib to tell us
- // whether or not the data is safe.
- //
- Inflater inf = ctx.inflater();
- byte[] tmp = new byte[1024];
- if (quickCopy != null) {
- quickCopy.check(inf, tmp, dataOffset, (int) dataLength);
- } else {
- long pos = dataOffset;
- long cnt = dataLength;
- while (cnt > 0) {
- final int n = (int) Math.min(cnt, buf.length);
- readFully(pos, buf, 0, n, ctx);
- crc1.update(buf, 0, n);
- inf.setInput(buf, 0, n);
- while (inf.inflate(tmp, 0, tmp.length) > 0)
- continue;
- pos += n;
- cnt -= n;
- }
- }
- if (!inf.finished() || inf.getBytesRead() != dataLength) {
- setCorrupt(src.offset);
- throw new EOFException(MessageFormat.format(
- JGitText.get().shortCompressedStreamAt,
- Long.valueOf(src.offset)));
- }
- expectedCRC = crc1.getValue();
- } else {
- expectedCRC = -1;
- }
- } catch (DataFormatException dataFormat) {
- setCorrupt(src.offset);
-
- CorruptObjectException corruptObject = new CorruptObjectException(
- MessageFormat.format(
- JGitText.get().objectAtHasBadZlibStream,
- Long.valueOf(src.offset), getPackName()));
- corruptObject.initCause(dataFormat);
-
- StoredObjectRepresentationNotAvailableException gone;
- gone = new StoredObjectRepresentationNotAvailableException(src);
- gone.initCause(corruptObject);
- throw gone;
-
- } catch (IOException ioError) {
- StoredObjectRepresentationNotAvailableException gone;
- gone = new StoredObjectRepresentationNotAvailableException(src);
- gone.initCause(ioError);
- throw gone;
- }
-
- if (quickCopy != null) {
- // The entire object fits into a single byte array window slice,
- // and we have it pinned. Write this out without copying.
- //
- out.writeHeader(src, inflatedLength);
- quickCopy.write(out, dataOffset, (int) dataLength, null);
-
- } else if (dataLength <= buf.length) {
- // Tiny optimization: Lots of objects are very small deltas or
- // deflated commits that are likely to fit in the copy buffer.
- //
- if (!validate) {
- long pos = dataOffset;
- long cnt = dataLength;
- while (cnt > 0) {
- final int n = (int) Math.min(cnt, buf.length);
- readFully(pos, buf, 0, n, ctx);
- pos += n;
- cnt -= n;
- }
- }
- out.writeHeader(src, inflatedLength);
- out.write(buf, 0, (int) dataLength);
- } else {
- // Now we are committed to sending the object. As we spool it out,
- // check its CRC32 code to make sure there wasn't corruption between
- // the verification we did above, and us actually outputting it.
- //
- out.writeHeader(src, inflatedLength);
- long pos = dataOffset;
- long cnt = dataLength;
- while (cnt > 0) {
- final int n = (int) Math.min(cnt, buf.length);
- readFully(pos, buf, 0, n, ctx);
- if (validate)
- crc2.update(buf, 0, n);
- out.write(buf, 0, n);
- pos += n;
- cnt -= n;
- }
- if (validate && crc2.getValue() != expectedCRC) {
- throw new CorruptObjectException(MessageFormat.format(
- JGitText.get().objectAtHasBadZlibStream,
- Long.valueOf(src.offset), getPackName()));
- }
- }
- }
-
- boolean invalid() {
- return invalid;
- }
-
- void setInvalid() {
- invalid = true;
- }
-
- private void readFully(long position, byte[] dstbuf, int dstoff, int cnt,
- DfsReader ctx) throws IOException {
- if (ctx.copy(this, position, dstbuf, dstoff, cnt) != cnt)
- throw new EOFException();
- }
-
- long alignToBlock(long pos) {
- int size = blockSize;
- if (size == 0)
- size = cache.getBlockSize();
- return (pos / size) * size;
- }
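`alignToBlock()` truncates a byte position down to the start of its containing block using integer division, which works even for the odd block sizes described above (e.g. 4091 bytes of user data per 4096-byte disk block). A free-standing copy of the arithmetic, not the method itself:

```java
// Truncate a file position down to the start of its containing block,
// mirroring the integer arithmetic in DfsPackFile.alignToBlock().
static long alignToBlock(long pos, int blockSize) {
	return (pos / blockSize) * blockSize;
}
```

For example, with a 4091-byte block size, position 10000 falls in the third block, so it aligns down to 2 * 4091 = 8182.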
-
- DfsBlock getOrLoadBlock(long pos, DfsReader ctx) throws IOException {
- return cache.getOrLoad(this, pos, ctx);
- }
-
- DfsBlock readOneBlock(long pos, DfsReader ctx)
- throws IOException {
- if (invalid)
- throw new PackInvalidException(getPackName());
-
- boolean close = true;
- ReadableChannel rc = ctx.db.openPackFile(packDesc);
- try {
- // If the block alignment is not yet known, discover it. Prefer the
- // larger size from either the cache or the file itself.
- int size = blockSize;
- if (size == 0) {
- size = rc.blockSize();
- if (size <= 0)
- size = cache.getBlockSize();
- else if (size < cache.getBlockSize())
- size = (cache.getBlockSize() / size) * size;
- blockSize = size;
- pos = (pos / size) * size;
- }
-
- // If the size of the file is not yet known, try to discover it.
- // Channels may choose to return -1 to indicate they don't
- // know the length yet, in this case read up to the size unit
- // given by the caller, then recheck the length.
- long len = length;
- if (len < 0) {
- len = rc.size();
- if (0 <= len)
- length = len;
- }
-
- if (0 <= len && len < pos + size)
- size = (int) (len - pos);
- if (size <= 0)
- throw new EOFException(MessageFormat.format(
- DfsText.get().shortReadOfBlock, Long.valueOf(pos),
- getPackName(), Long.valueOf(0), Long.valueOf(0)));
-
- byte[] buf = new byte[size];
- rc.position(pos);
- int cnt = IO.read(rc, buf, 0, size);
- if (cnt != size) {
- if (0 <= len) {
- throw new EOFException(MessageFormat.format(
- DfsText.get().shortReadOfBlock,
- Long.valueOf(pos),
- getPackName(),
- Integer.valueOf(size),
- Integer.valueOf(cnt)));
- }
-
- // Assume the entire thing was read in a single shot, compact
- // the buffer to only the space required.
- byte[] n = new byte[cnt];
- System.arraycopy(buf, 0, n, 0, n.length);
- buf = n;
- } else if (len < 0) {
- // With no length at the start of the read, the channel should
- // have the length available at the end.
- length = len = rc.size();
- }
-
- DfsBlock v = new DfsBlock(key, pos, buf);
- if (v.end < len)
- close = !cache.readAhead(rc, key, size, v.end, len, ctx);
- return v;
- } finally {
- if (close)
- rc.close();
- }
- }
-
- ObjectLoader load(DfsReader ctx, long pos)
- throws IOException {
- try {
- final byte[] ib = ctx.tempId;
- Delta delta = null;
- byte[] data = null;
- int type = Constants.OBJ_BAD;
- boolean cached = false;
-
- SEARCH: for (;;) {
- readFully(pos, ib, 0, 20, ctx);
- int c = ib[0] & 0xff;
- final int typeCode = (c >> 4) & 7;
- long sz = c & 15;
- int shift = 4;
- int p = 1;
- while ((c & 0x80) != 0) {
- c = ib[p++] & 0xff;
- sz += ((long) (c & 0x7f)) << shift;
- shift += 7;
- }
-
- switch (typeCode) {
- case Constants.OBJ_COMMIT:
- case Constants.OBJ_TREE:
- case Constants.OBJ_BLOB:
- case Constants.OBJ_TAG: {
- if (delta != null) {
- data = decompress(pos + p, (int) sz, ctx);
- type = typeCode;
- break SEARCH;
- }
-
- if (sz < ctx.getStreamFileThreshold()) {
- data = decompress(pos + p, (int) sz, ctx);
- if (data != null)
- return new ObjectLoader.SmallObject(typeCode, data);
- }
- return new LargePackedWholeObject(typeCode, sz, pos, p, this, ctx.db);
- }
-
- case Constants.OBJ_OFS_DELTA: {
- c = ib[p++] & 0xff;
- long base = c & 127;
- while ((c & 128) != 0) {
- base += 1;
- c = ib[p++] & 0xff;
- base <<= 7;
- base += (c & 127);
- }
- base = pos - base;
- delta = new Delta(delta, pos, (int) sz, p, base);
- if (sz != delta.deltaSize)
- break SEARCH;
-
- DeltaBaseCache.Entry e = ctx.getDeltaBaseCache().get(key, base);
- if (e != null) {
- type = e.type;
- data = e.data;
- cached = true;
- break SEARCH;
- }
- pos = base;
- continue SEARCH;
- }
-
- case Constants.OBJ_REF_DELTA: {
- readFully(pos + p, ib, 0, 20, ctx);
- long base = findDeltaBase(ctx, ObjectId.fromRaw(ib));
- delta = new Delta(delta, pos, (int) sz, p + 20, base);
- if (sz != delta.deltaSize)
- break SEARCH;
-
- DeltaBaseCache.Entry e = ctx.getDeltaBaseCache().get(key, base);
- if (e != null) {
- type = e.type;
- data = e.data;
- cached = true;
- break SEARCH;
- }
- pos = base;
- continue SEARCH;
- }
-
- default:
- throw new IOException(MessageFormat.format(
- JGitText.get().unknownObjectType, Integer.valueOf(typeCode)));
- }
- }
-
- // At this point there is at least one delta to apply to data.
- // (Whole objects with no deltas to apply return early above.)
-
- if (data == null)
- throw new LargeObjectException();
-
- do {
- // Cache only the base immediately before desired object.
- if (cached)
- cached = false;
- else if (delta.next == null)
- ctx.getDeltaBaseCache().put(key, delta.basePos, type, data);
-
- pos = delta.deltaPos;
-
- byte[] cmds = decompress(pos + delta.hdrLen, delta.deltaSize, ctx);
- if (cmds == null) {
- data = null; // Discard base in case of OutOfMemoryError
- throw new LargeObjectException();
- }
-
- final long sz = BinaryDelta.getResultSize(cmds);
- if (Integer.MAX_VALUE <= sz)
- throw new LargeObjectException.ExceedsByteArrayLimit();
-
- final byte[] result;
- try {
- result = new byte[(int) sz];
- } catch (OutOfMemoryError tooBig) {
- data = null; // Discard base in case of OutOfMemoryError
- cmds = null;
- throw new LargeObjectException.OutOfMemory(tooBig);
- }
-
- BinaryDelta.apply(data, cmds, result);
- data = result;
- delta = delta.next;
- } while (delta != null);
-
- return new ObjectLoader.SmallObject(type, data);
-
- } catch (DataFormatException dfe) {
- CorruptObjectException coe = new CorruptObjectException(
- MessageFormat.format(
- JGitText.get().objectAtHasBadZlibStream, Long.valueOf(pos),
- getPackName()));
- coe.initCause(dfe);
- throw coe;
- }
- }
-
- private long findDeltaBase(DfsReader ctx, ObjectId baseId)
- throws IOException, MissingObjectException {
- long ofs = idx(ctx).findOffset(baseId);
- if (ofs < 0)
- throw new MissingObjectException(baseId,
- JGitText.get().missingDeltaBase);
- return ofs;
- }
-
- private static class Delta {
- /** Child that applies onto this object. */
- final Delta next;
-
- /** Offset of the delta object. */
- final long deltaPos;
-
- /** Size of the inflated delta stream. */
- final int deltaSize;
-
- /** Total size of the delta's pack entry header (including base). */
- final int hdrLen;
-
- /** Offset of the base object this delta applies onto. */
- final long basePos;
-
- Delta(Delta next, long ofs, int sz, int hdrLen, long baseOffset) {
- this.next = next;
- this.deltaPos = ofs;
- this.deltaSize = sz;
- this.hdrLen = hdrLen;
- this.basePos = baseOffset;
- }
- }
-
- byte[] getDeltaHeader(DfsReader wc, long pos)
- throws IOException, DataFormatException {
- // The delta stream starts as two variable length integers. If we
- // assume they are 64 bits each, we need 16 bytes to encode them,
- // plus 2 extra bytes for the variable length overhead. So 18 is
- // the longest delta instruction header.
- //
- final byte[] hdr = new byte[18];
- wc.inflate(this, pos, hdr, true /* header only */);
- return hdr;
- }
-
- int getObjectType(DfsReader ctx, long pos) throws IOException {
- final byte[] ib = ctx.tempId;
- for (;;) {
- readFully(pos, ib, 0, 20, ctx);
- int c = ib[0] & 0xff;
- final int type = (c >> 4) & 7;
-
- switch (type) {
- case Constants.OBJ_COMMIT:
- case Constants.OBJ_TREE:
- case Constants.OBJ_BLOB:
- case Constants.OBJ_TAG:
- return type;
-
- case Constants.OBJ_OFS_DELTA: {
- int p = 1;
- while ((c & 0x80) != 0)
- c = ib[p++] & 0xff;
- c = ib[p++] & 0xff;
- long ofs = c & 127;
- while ((c & 128) != 0) {
- ofs += 1;
- c = ib[p++] & 0xff;
- ofs <<= 7;
- ofs += (c & 127);
- }
- pos = pos - ofs;
- continue;
- }
-
- case Constants.OBJ_REF_DELTA: {
- int p = 1;
- while ((c & 0x80) != 0)
- c = ib[p++] & 0xff;
- readFully(pos + p, ib, 0, 20, ctx);
- pos = findDeltaBase(ctx, ObjectId.fromRaw(ib));
- continue;
- }
-
- default:
- throw new IOException(MessageFormat.format(
- JGitText.get().unknownObjectType, Integer.valueOf(type)));
- }
- }
- }
-
- long getObjectSize(DfsReader ctx, AnyObjectId id) throws IOException {
- final long offset = idx(ctx).findOffset(id);
- return 0 < offset ? getObjectSize(ctx, offset) : -1;
- }
-
- long getObjectSize(DfsReader ctx, long pos)
- throws IOException {
- final byte[] ib = ctx.tempId;
- readFully(pos, ib, 0, 20, ctx);
- int c = ib[0] & 0xff;
- final int type = (c >> 4) & 7;
- long sz = c & 15;
- int shift = 4;
- int p = 1;
- while ((c & 0x80) != 0) {
- c = ib[p++] & 0xff;
- sz += ((long) (c & 0x7f)) << shift;
- shift += 7;
- }
-
- long deltaAt;
- switch (type) {
- case Constants.OBJ_COMMIT:
- case Constants.OBJ_TREE:
- case Constants.OBJ_BLOB:
- case Constants.OBJ_TAG:
- return sz;
-
- case Constants.OBJ_OFS_DELTA:
- c = ib[p++] & 0xff;
- while ((c & 128) != 0)
- c = ib[p++] & 0xff;
- deltaAt = pos + p;
- break;
-
- case Constants.OBJ_REF_DELTA:
- deltaAt = pos + p + 20;
- break;
-
- default:
- throw new IOException(MessageFormat.format(
- JGitText.get().unknownObjectType, Integer.valueOf(type)));
- }
-
- try {
- return BinaryDelta.getResultSize(getDeltaHeader(ctx, deltaAt));
- } catch (DataFormatException dfe) {
- CorruptObjectException coe = new CorruptObjectException(
- MessageFormat.format(
- JGitText.get().objectAtHasBadZlibStream, Long.valueOf(pos),
- getPackName()));
- coe.initCause(dfe);
- throw coe;
- }
- }
-
- void representation(DfsReader ctx, DfsObjectRepresentation r)
- throws IOException {
- final long pos = r.offset;
- final byte[] ib = ctx.tempId;
- readFully(pos, ib, 0, 20, ctx);
- int c = ib[0] & 0xff;
- int p = 1;
- final int typeCode = (c >> 4) & 7;
- while ((c & 0x80) != 0)
- c = ib[p++] & 0xff;
-
- long len = (getReverseIdx(ctx).findNextOffset(pos, length - 20) - pos);
- switch (typeCode) {
- case Constants.OBJ_COMMIT:
- case Constants.OBJ_TREE:
- case Constants.OBJ_BLOB:
- case Constants.OBJ_TAG:
- r.format = StoredObjectRepresentation.PACK_WHOLE;
- r.length = len - p;
- return;
-
- case Constants.OBJ_OFS_DELTA: {
- c = ib[p++] & 0xff;
- long ofs = c & 127;
- while ((c & 128) != 0) {
- ofs += 1;
- c = ib[p++] & 0xff;
- ofs <<= 7;
- ofs += (c & 127);
- }
- ofs = pos - ofs;
- r.format = StoredObjectRepresentation.PACK_DELTA;
- r.baseId = findObjectForOffset(ctx, ofs);
- r.length = len - p;
- return;
- }
-
- case Constants.OBJ_REF_DELTA: {
- len -= p;
- len -= Constants.OBJECT_ID_LENGTH;
- readFully(pos + p, ib, 0, 20, ctx);
- ObjectId id = ObjectId.fromRaw(ib);
- r.format = StoredObjectRepresentation.PACK_DELTA;
- r.baseId = id;
- r.length = len;
- return;
- }
-
- default:
- throw new IOException(MessageFormat.format(
- JGitText.get().unknownObjectType, Integer.valueOf(typeCode)));
- }
- }
-
- private boolean isCorrupt(long offset) {
- LongList list = corruptObjects;
- if (list == null)
- return false;
- synchronized (list) {
- return list.contains(offset);
- }
- }
-
- private void setCorrupt(long offset) {
- LongList list = corruptObjects;
- if (list == null) {
- synchronized (initLock) {
- list = corruptObjects;
- if (list == null) {
- list = new LongList();
- corruptObjects = list;
- }
- }
- }
- synchronized (list) {
- list.add(offset);
- }
- }
- }
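The header parsing loop that recurs in copyAsIs(), load(), getObjectType(), and getObjectSize() reads Git's pack entry header: the low 4 bits of the first byte carry the size, bits 4-6 the type code, and the high bit marks continuation bytes, each contributing 7 more size bits. A self-contained sketch of that decoding (`decodeHeader` is a hypothetical helper for illustration, not part of this class):

```java
// Decode a pack entry header from the start of buf.
// Returns {typeCode, inflatedSize, headerLength}.
static long[] decodeHeader(byte[] buf) {
	int c = buf[0] & 0xff;
	int typeCode = (c >> 4) & 7; // OBJ_COMMIT=1, OBJ_TREE=2, ...
	long sz = c & 15;            // low 4 bits of the size
	int shift = 4;
	int p = 1;
	while ((c & 0x80) != 0) {    // high bit set: another size byte follows
		c = buf[p++] & 0xff;
		sz += ((long) (c & 0x7f)) << shift;
		shift += 7;
	}
	return new long[] { typeCode, sz, p };
}
```

For instance, the two bytes `0x95 0x0b` decode to type code 1 (a commit) with an inflated size of 5 + (11 << 4) = 181 bytes, consuming 2 header bytes.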