author    Avery Pennarun <apenwarr@gmail.com>
          Sat, 28 Jun 2008 23:33:56 +0000 (19:33 -0400)
committer Junio C Hamano <gitster@pobox.com>
          Sun, 29 Jun 2008 02:57:22 +0000 (19:57 -0700)
Commit ffe256f9bac8a40ff751a9341a5869d98f72c285 ("git-svn: Speed up fetch")
introduced changes that create a temporary file for each object fetched by
svn. These files should be deleted automatically, but perl apparently
doesn't do this until the process exits (or perhaps when its garbage
collector runs).
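
For illustration only (this is a minimal standalone sketch, not the git-svn code; the loop and filenames are made up): File::Temp's UNLINK => 1 only promises that the temp file goes away when the process exits, so a long-running loop has to unlink() each file itself to get the space back right away.

    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    for my $i (1 .. 3) {
            # UNLINK => 1 defers deletion until the process exits...
            my ($fh, $name) = tempfile(UNLINK => 1);
            print $fh "object $i\n";
            close $fh or die "close $name: $!";
            # ... hand $name to whatever consumes it ...
            # ... then unlink() explicitly so the space is freed now.
            unlink($name) or warn "unlink $name: $!";
    }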
This means that on a large fetch, especially with lots of branches, we
sometimes fill up /tmp completely, which prevents the next temp file from
being written in full. This is aggravated by the fact that a new temp
file is created for each updated file, even if that update produces a file
identical to one already in git. Thus, it can happen even if there's lots
of disk space to store the finished repository.
We weren't adequately checking for write errors, so this would result in an
invalid file getting committed, which caused git-svn to fail later with an
invalid checksum.
This patch adds a check to syswrite() so similar problems don't lead to
corruption in the future. It also unlink()'s each temp file explicitly
when we're done with it, so the disk doesn't need to fill up.
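
For background, syswrite() returns undef on error and can report a short write (for example when the filesystem runs out of space), so its return value has to be compared against the number of bytes requested. A minimal standalone sketch of that checked-write loop (copy_checked and its arguments are illustrative names, not from git-svn):

    use strict;
    use warnings;
    use Carp qw(croak);

    # Copy $src_fh to $dst_fh in 1024-byte chunks, dying loudly if any
    # write errors out or comes back short (e.g. because the disk is full).
    sub copy_checked {
            my ($src_fh, $dst_fh, $dst_name) = @_;
            my $read;
            while ($read = sysread($src_fh, my $buf, 1024)) {
                    my $wrote = syswrite($dst_fh, $buf, $read);
                    defined($wrote) && $wrote == $read
                            or croak("write $dst_name: $!\n");
            }
            defined $read or croak "read: $!";
    }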
Signed-off-by: Avery Pennarun <apenwarr@gmail.com>
Tested-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
diff --git a/git-svn.perl b/git-svn.perl
index 4c9c59bc3ffb9ed2f8e808cf6849562669adfd18..50ace22da24be76d731dfdba852f967ca80569aa 100755
--- a/git-svn.perl
+++ b/git-svn.perl
 		my ($tmp_fh, $tmp_filename) = File::Temp::tempfile(UNLINK => 1);
 		my $result;
 		while ($result = sysread($fh, my $string, 1024)) {
-			syswrite($tmp_fh, $string, $result);
+			my $wrote = syswrite($tmp_fh, $string, $result);
+			defined($wrote) && $wrote == $result
+				or croak("write $tmp_filename: $!\n");
 		}
 		defined $result or croak $!;
 		close $tmp_fh or croak $!;
 		close $fh or croak $!;
 		$hash = $::_repository->hash_and_insert_object($tmp_filename);
+		unlink($tmp_filename);
 		$hash =~ /^[a-f\d]{40}$/ or die "not a sha1: $hash\n";
 		close $fb->{base} or croak $!;
 	} else {