author     Shawn O. Pearce <spearce@spearce.org>
           Sun, 11 Nov 2007 07:29:47 +0000 (02:29 -0500)
committer  Junio C Hamano <gitster@pobox.com>
           Mon, 12 Nov 2007 01:09:55 +0000 (17:09 -0800)
commit     4191c35671f6392173221bea3994f8b305f4f3a8
tree       25d2b9c69692a73416eb6004663649a3ce8f2d6b
parent     27350891de59608d4db689cf0851f7e49158a6e3

git-fetch: avoid local fetching from alternate (again)

Back in e3c6f240fd9c5bdeb33f2d47adc859f37935e2df Junio taught
git-fetch to avoid copying objects when we are fetching from
a repository that is already registered as an alternate object
database. In such a case there is no reason to copy any objects
as we can already obtain them through the alternate.

However, we need to ensure the objects are all reachable, so we
run `git rev-list --objects $theirs --not --all` to verify this.
If any object is missing or unreadable then we need to fetch/copy
the objects from the remote. When a missing object is detected
the git-rev-list process will exit with a non-zero exit status,
making this condition quite easy to detect.
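
As a rough illustration only (this is not the code in builtin-fetch.c),
that check boils down to an exec-style argument vector along these
lines, where "theirs" stands in for the hex object name of a tip the
remote offered us:

    /*
     * Hypothetical sketch: the reachability check as an exec-style
     * argument vector.  "theirs" is a stand-in for a remote tip's hex
     * object name, not a real variable from builtin-fetch.c.
     */
    const char *rev_list_argv[] = {
        "git", "rev-list", "--objects",
        theirs,             /* tip(s) we were offered by the remote */
        "--not", "--all",   /* minus everything we can already reach */
        NULL
    };
    /*
     * Exit status 0 means every object behind "theirs" is present and
     * readable locally; any other status means a real fetch is needed.
     */
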
Although git-fetch is currently a builtin (and so is rev-list),
we cannot invoke the traverse_objects() API at this point in the
transport code. The object walker within traverse_objects() calls
die() as soon as it finds an object it cannot read. If that happens,
we want to resume the fetch process by calling do_fetch_pack().

To get around this we spawn git-rev-list into a background process
to prevent a die() from killing the foreground fetch process,
thus allowing the fetch process to resume into do_fetch_pack()
if copying is necessary.

We aren't interested in the output of rev-list (a list of SHA-1
object names that are reachable) or in its errors (a "spurious"
error about an object not being found, since we still need to
copy it), so we redirect both stdout and stderr to /dev/null.
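
A minimal sketch of that arrangement, written here in plain POSIX C
rather than through git's own process-spawning helpers, and taking an
argument vector like the one shown above; check_everything_local() is
an illustrative name, not the function actually added to
builtin-fetch.c:

    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /*
     * Hypothetical sketch: run git-rev-list in a child process so that
     * a die() inside rev-list only kills the child, and send both its
     * stdout and stderr to /dev/null.  Returns the child's exit status:
     * 0 means everything is already local, anything else means a real
     * fetch is still needed.
     */
    static int check_everything_local(const char **rev_list_argv)
    {
        pid_t pid;
        int status;

        pid = fork();
        if (pid < 0)
            return -1;                /* treat fork failure as "must fetch" */
        if (!pid) {
            int null_fd = open("/dev/null", O_WRONLY);
            if (null_fd < 0)
                _exit(128);
            dup2(null_fd, 1);         /* discard the listed object names */
            dup2(null_fd, 2);         /* discard "missing object" errors */
            close(null_fd);
            execvp(rev_list_argv[0], (char *const *)rev_list_argv);
            _exit(128);               /* exec itself failed */
        }
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
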
We run this git-rev-list based check before any fetch, as we may
already have the necessary objects locally from a prior fetch. If
we don't, then it is very likely that the first $theirs object
listed on the command line won't exist locally and git-rev-list
will die very quickly, allowing us to start the network transfer.
Running this test even for remote URLs may save bandwidth if
someone runs `git pull origin`,
sees a merge conflict, resets out, then redoes the same pull just
a short time later. If the remote hasn't changed between the two
pulls and the local repository hasn't had git-gc run in it then
there is probably no need to perform network transfer as all of
the objects are local.
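
In outline, the fetch path would then try this cheap local check
first and only fall back to the network when it fails;
check_everything_local() is the illustrative helper sketched above,
not the name of the real function:

    if (!check_everything_local(rev_list_argv))
        return 0;   /* every object is already local; skip the transfer */
    /* otherwise fall through to the normal do_fetch_pack() network path */
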
Documentation for the new quickfetch function was suggested and
written by Junio, based on his original comment in git-fetch.sh.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

builtin-fetch.c
t/t5502-quickfetch.sh