author     Nicolas Pitre <nico@fluxnic.net>
           Sun, 21 Feb 2010 20:48:06 +0000 (15:48 -0500)
committer  Junio C Hamano <gitster@pobox.com>
           Mon, 22 Feb 2010 06:33:25 +0000 (22:33 -0800)
commit     748af44c63ea6fec12690f1693f3dddd963e88d5
tree       ea8e162830bacb7803bb4c397c57e795260d94c9
parent     9892bebafe0865d8f4f3f18d60a1cfa2d1447cd7
sha1_file: be paranoid when creating loose objects
We don't want the data being deflated and stored into loose objects
to be different from what we expect. While the deflated data is
protected by a CRC which is good enough for safe data retrieval
operations, we still want to be doubly sure that the source data used
at object creation time is still what we expected once that data has
been deflated and its CRC32 computed.
The most plausible data corruption may occur if the source file is
modified while Git is deflating and writing it out in a loose object.
Or Git itself could have a bug causing memory corruption. Or even bad
RAM could cause trouble. So it is best to make sure everything is
coherent and checksum-protected from beginning to end.
To do so we compute the SHA1 of the data being deflated _after_ the
deflate operation has consumed that data, and make sure it matches
the expected SHA1. This way we can rely on the CRC32 checked by the
inflate operation to provide a good indication that the data is still
coherent with its SHA1 hash. One pathological case we ignore is when
the data is modified before (or during) the deflate call, but changed
back before it is hashed.
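To illustrate the idea, here is a minimal, self-contained sketch of that
pattern. It is not the actual sha1_file.c change: it uses plain zlib and
OpenSSL's SHA-1 API, simplifies error handling, and the helper name
deflate_and_verify() is made up for this example. The key point is that
SHA1_Update() is only ever fed the bytes that deflate() has just
consumed, so the final digest describes exactly what was compressed:

    /*
     * Sketch only: deflate 'buf' into 'out', hashing each chunk of input
     * only after deflate() has consumed it, then compare the result with
     * the SHA-1 that was computed up front.
     */
    #include <string.h>
    #include <zlib.h>
    #include <openssl/sha.h>

    static long deflate_and_verify(const unsigned char *buf, size_t len,
                                   unsigned char *out, size_t outlen,
                                   const unsigned char expected[SHA_DIGEST_LENGTH])
    {
        z_stream stream;
        SHA_CTX c;
        unsigned char parano_sha1[SHA_DIGEST_LENGTH];
        int ret;

        memset(&stream, 0, sizeof(stream));
        if (deflateInit(&stream, Z_BEST_SPEED) != Z_OK)
            return -1;
        SHA1_Init(&c);

        stream.next_in = (unsigned char *)buf;   /* assumes len fits in uInt */
        stream.avail_in = len;
        stream.next_out = out;
        stream.avail_out = outlen;

        do {
            unsigned char *in0 = stream.next_in;

            ret = deflate(&stream, Z_FINISH);
            /* hash exactly the bytes deflate just consumed */
            SHA1_Update(&c, in0, stream.next_in - in0);
        } while (ret == Z_OK);

        deflateEnd(&stream);
        SHA1_Final(parano_sha1, &c);

        /* source changed under us (or memory went bad): refuse to go on */
        if (ret != Z_STREAM_END ||
            memcmp(parano_sha1, expected, SHA_DIGEST_LENGTH))
            return -1;

        return (long)(outlen - stream.avail_out);
    }

If the recomputed digest disagrees with the one computed earlier from
the same buffer, the caller can refuse to store the loose object rather
than create one whose name no longer matches its content.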
There is some overhead, of course. Using 'git add' on a set of large files:
Before:

    real    0m25.210s
    user    0m23.783s
    sys     0m1.408s

After:

    real    0m26.537s
    user    0m25.175s
    sys     0m1.358s
The overhead is around 5% for a full data coherency guarantee.
Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
sha1_file.c