Corrected BNF input documentation for fast-import.
Now that fast-import uses uintmax_t (the largest available unsigned
integer type) for marks we don't want to say it's an unsigned 32
bit integer in ASCII base 10 notation. It could be much larger,
especially on 64 bit systems, and especially if a frontend uses
a very large number of marks (1 per file revision on a very, very
large import).
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Print out the edge commits for each packfile in fast-import.
To help callers repack very large repositories into a series of
packfiles fast-import now outputs the last commits/tags it wrote to
a packfile when it prints out the packfile name. This information
can be fed to pack-objects --revs to repack. For the first pack
of an initial import this is pretty easy (just feed those SHA1s on
stdin) but for subsequent packs you want to feed the subsequent
pack's final SHA1s but also all prior packs' SHA1s prefixed with
the negation operator. This way the prior packs' data does not
get included into the subsequent pack.
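As a sketch of the expected stdin format (the function and array
names here are illustrative; only the '^' negation prefix is actual
pack-objects --revs syntax):

```c
#include <stdio.h>

/* Write a revs stream for `git pack-objects --revs`: include
 * history reachable from the newest pack's edge commits, and
 * exclude what the prior packs already contain. */
static void emit_revs(FILE *out,
                      const char **new_edges, int n_new,
                      const char **old_edges, int n_old)
{
    int i;
    for (i = 0; i < n_new; i++)
        fprintf(out, "%s\n", new_edges[i]);
    for (i = 0; i < n_old; i++)
        fprintf(out, "^%s\n", old_edges[i]); /* negation operator */
}
```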
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Correct object_count type and stat output in fast-import.
Since object_count is limited to 'unsigned long' (really an
unsigned 32 bit integer value) by the pack file format we may as
well use exactly that type here in fast-import for that counter.
An earlier change by me incorrectly made it uintmax_t.
But since object_count is a counter for the current packfile only,
we don't want to output its value at the end. Instead we should
sum up the individual type counters and report that total, as that
will cover all of the packfiles.
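A minimal sketch of the intended final report, assuming a per-type
counter table (names illustrative, not the exact fast-import
variables):

```c
#include <stdint.h>

#define TYPE_COUNT 9 /* assumed size of the per-type table */

static uintmax_t object_count_by_type[TYPE_COUNT];

static uintmax_t total_objects(void)
{
    uintmax_t total = 0;
    int i;
    for (i = 0; i < TYPE_COUNT; i++)
        total += object_count_by_type[i]; /* spans every packfile */
    return total;
}
```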
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Correct max_packsize default in fast-import.
Apparently amd64 has defined 'unsigned long' to be a 64 bit value,
which means -1 was way over the 4 GiB packfile limit. Whoops.
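The fix amounts to defaulting to an explicit 4 GiB ceiling rather
than relying on (unsigned long)-1; roughly:

```c
/* On LP64 platforms such as amd64, (unsigned long)-1 is 2^64 - 1,
 * far beyond the pack format's limit. Default to the real ceiling. */
static unsigned long max_packsize = (1LL << 32) - 1; /* 4 GiB - 1 */
```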
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Remove unnecessary pack_fd global in fast-import.
Much like pack_sha1, pack_fd is an unnecessary global variable;
we already have the fd stored in our struct packed_git *pack_data
so that the core library functions in sha1_file.c are able to
look up and decompress object data that we have previously
written. Keeping an extra copy of this value in our own variable
is just a hold-over from earlier versions of fast-import and is
now completely unnecessary.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Ensure we close the packfile after creating it in fast-import.
Because we are renaming the packfile into its file destination we
need to be sure it's not open when the rename is called, otherwise
some operating systems (e.g. Windows) may prevent the rename from
occurring.
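A minimal sketch of the ordering, with illustrative names:

```c
#include <stdio.h>
#include <unistd.h>

/* Close the pack's descriptor before renaming it into place; some
 * systems (e.g. Windows) refuse to rename a file that is open. */
static int install_pack(int pack_fd, const char *tmp_name,
                        const char *final_name)
{
    if (close(pack_fd))
        return -1;
    return rename(tmp_name, final_name);
}
```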
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Use .keep files in fast-import during processing.
Because fast-import automatically updates all references (heads
and tags) at the end of its run the repository is corrupt unless
the objects are available in the .git/objects/pack directory prior
to the refs being modified. The easiest way to ensure that is true
is to move the packfile and its associated index directly into the
.git/objects/pack directory as soon as we have finished output to it.
But the only safe way to do this is to create a temporary .keep
file for that pack, so we use the same tricks that index-pack uses
when it's being invoked by receive-pack.
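Roughly the following, where the keep-file path and the lock_pack
name are illustrative:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create pack-<sha1>.keep before the pack is installed so that a
 * concurrent repack cannot prune the pack out from under us. */
static int lock_pack(const char *keep_path, const char *why)
{
    int fd = open(keep_path, O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, why, strlen(why)) < 0) { /* note who holds it */
        close(fd);
        return -1;
    }
    return close(fd);
}
```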
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Reuse sha1 in packed_git in fast-import.
Rather than maintaining our own packfile-level sha1 variable we
can make use of the one already available in struct packed_git.
It's meant for the SHA1 of the index but it can also hold the
SHA1 of the packfile itself between final checksumming of the
packfile and creation of the index.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Replace redundant yread() with read_in_full() in fast-import.
Prior to git having read_in_full() fast-import used its own private
function yread to perform the header reading task. No sense in
keeping that around now that read_in_full is a public, stable
function.
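A typical call site now looks like this sketch (read_in_full and
die come from git's cache.h):

```c
#include "cache.h" /* git-internal: declares read_in_full() and die() */

/* How the old yread() call sites look now: die on errors and on
 * short reads alike. */
static void read_header(int fd, void *hdr, size_t sz)
{
    if (read_in_full(fd, hdr, sz) != (ssize_t)sz)
        die("unexpected end of file reading pack header");
}
```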
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Use uintmax_t for marks in fast-import.
If a frontend wants to use a mark per file revision and per commit
and is doing a truly huge import (such as a 32 GiB SVN repository)
we may need more than 2**32 unique mark values, especially if the
frontend is unable (or unwilling) to recycle mark values. For mark
idnums we should use the largest unsigned integer type available,
hoping that will be at least 64 bits when we are compiled as a 64
bit executable. This way we may consume huge amounts of memory
storing our mark table, but we'll at least be able to process
the entire import without failing.
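Marks in the input stream can then be parsed with the matching C99
routine; a sketch (the parse_mark name is illustrative):

```c
#include <inttypes.h>

/* Parse a mark reference such as ":42" into the widest unsigned
 * type; strtoumax is the C99 companion to uintmax_t. */
static uintmax_t parse_mark(const char *s)
{
    return strtoumax(s + 1, NULL, 10); /* skip the leading ':' */
}
```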
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Corrected buffer overflow during automatic checkpoint in fast-import.
If we previously were using a delta but we needed to checkpoint the
current packfile and switch to a new packfile we need to throw away
the delta and compress the raw object by itself, as delta chains
cannot span non-thin packfiles. Unfortunately the output buffer
in this case needs to grow, as the size of the compressed object
may be quite a bit larger than the size of the compressed delta.
I've also avoided recompressing the object if we are checkpointing
and we didn't use a delta. In this case the output buffer is the
correct size and has already been populated with the right data,
we just need to close out the current packfile and open a new one.
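The buffer sizing follows zlib's worst case; a sketch using zlib's
deflateBound (the helper name is illustrative):

```c
#include <zlib.h>

/* When the delta is thrown away at a checkpoint the raw object gets
 * recompressed, so the output buffer must be sized for the worst
 * case of the whole object, not the (smaller) delta. */
static unsigned long whole_object_bound(z_stream *s, unsigned long raw_len)
{
    return deflateBound(s, raw_len); /* zlib's worst-case output size */
}
```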
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Print the packfile names to stdout from fast-import.
Caller scripts may want to know what packfiles the fast-import
process just wrote out for them. This is now output to stdout,
one packfile name per line, after we checkpoint each packfile.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented automatic checkpoints within fast-import.
When the number of objects or number of bytes gets close to the limit
allowed by the packfile format (or configured on the command line by
our caller) we should automatically checkpoint the current packfile
and start a new one before writing the object out. This does however
require that we abandon the delta (if we had one) as it's not valid
in a new packfile.
I also added the simple rule that if we got a delta back but the
delta itself is the same size as or larger than the uncompressed
object, we ignore the delta and just store the object data. This
should avoid some really bad behavior caused by our current delta
strategy.
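A sketch of that rule, with illustrative names:

```c
#include <stdlib.h>

/* A delta that is no smaller than the object it encodes buys
 * nothing; drop it and let the caller store the object whole. */
static void *maybe_drop_delta(void *delta, size_t delta_len, size_t raw_len)
{
    if (delta && delta_len >= raw_len) {
        free(delta);
        return NULL;
    }
    return delta;
}
```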
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Optimize index creation on large object sets in fast-import.
When we are generating multiple packfiles at once we only need
to scan the blocks of object_entry structs which contain objects
for the current packfile. Because the most recent blocks are at
the front of the linked list, and because all new objects going
into the current file are allocated from the front of that list,
we can stop scanning for objects as soon as we identify one which
doesn't belong to the current packfile.
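A sketch of the early-out scan; the structure and field names are
illustrative, not fast-import's exact declarations:

```c
struct entry { unsigned pack_id; /* ... object data ... */ };

struct block {
    struct block *next;       /* newest block first */
    struct entry *next_free;  /* one past the last used slot */
    struct entry slots[1024];
};

/* Scan newest-to-oldest and stop at the first entry written to an
 * older pack; everything after it is older still. */
static unsigned count_current(struct block *head, unsigned pack_id)
{
    unsigned n = 0;
    struct block *b;
    struct entry *e;
    for (b = head; b; b = b->next)
        for (e = b->next_free; e-- != b->slots;) {
            if (e->pack_id != pack_id)
                return n;
            n++;
        }
    return n;
}
```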
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Don't create a final empty packfile in fast-import.
If the last packfile is going to be empty (has 0 objects) then it
shouldn't be kept after the import has terminated, as there is no
point to the packfile. So rather than hashing it and making the
index file, just delete the packfile.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented manual packfile switching in fast-import.
To help importers which are dealing with massive amounts of data
fast-import needs to be able to close the packfile it is currently
writing to and open a new packfile for any additional data that
will be received. A new 'checkpoint' command has been introduced
which can be used by the frontend import process to force this
to occur at any time. This may be useful to ensure a very long
running import doesn't lose any work due to unexpected failures.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Remove unnecessary duplicate_count in fast-import.
There is little reason to be keeping a global duplicate_count
value when we also keep it per object type. The global counter can
easily be computed at the end, once all processing has completed.
This saves us a couple of machine instructions in an unimportant
part of code. But it looks slightly better to me to not keep
two counters around.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Restructure fast-import to support creating multiple packfiles.
Now that we are starting to see some really large projects (such
as KDE or a fork of FreeBSD) get imported into Git we're running
into the upper limit on packfile object count as well as overall
byte length. The KDE and FreeBSD projects are both likely to
require more than 4 GiB to store their current history, which means
we really need multiple packfiles to handle their content.
This is a fairly simple restructuring of the internal code to help
us support creating multiple packfiles from within fast-import.
We are now adding a 5 digit incrementing suffix to the end of the
basename supplied to us by the caller, permitting up to 99,999
packs to be generated in a single fast-import run.
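A sketch of the naming scheme, with an illustrative helper:

```c
#include <stdio.h>

/* A 5-digit serial appended to the caller-supplied basename
 * permits up to 99,999 packs per run. */
static void pack_name(char *buf, size_t sz, const char *base,
                      unsigned serial)
{
    snprintf(buf, sz, "%s%05u.pack", base, serial);
}
```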
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Misc. type cleanups within fast-import.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Improve reuse of sha1_file library within fast-import.
Now that the sha1_file.c library routines use the sliding mmap
routines to perform efficient access to portions of a packfile
I can remove that code from fast-import.c and just invoke it.
One benefit is we now have reloading support for any packfile which
uses OBJ_OFS_DELTA. Another is we have significantly less code
to maintain.
This code reuse change *requires* that fast-import generate only
an OBJ_OFS_DELTA format packfile, as there is absolutely no index
available to perform OBJ_REF_DELTA lookup in while unpacking
an object. This is probably reasonable to require as the delta
offsets result in smaller packfiles and are faster to unpack,
as no index searching is required. It's also only a temporary
requirement as users could always repack without offsets before
making the import available to older versions of Git.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Merge branch 'master' into sp/fast-import
I'm bringing master in early so that the OBJ_OFS_DELTA implementation
is available as part of the topic. This way git-fast-import can
learn about this new slightly smaller and faster packfile format,
and can generate them directly rather than needing to have them be
repacked with git-pack-objects.
Due to the API changes in master during the period of development
of git-fast-import, a few minor tweaks to fast-import.c are needed
to produce a working merge. I've done them here as part of the
merge to ensure bisection always works.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Allow creating branches without committing in fast-import.
Some importers may want to create a branch long before they actually
commit to it, or in some cases they may never commit to the branch
but they still need the ref to be created in the repository after
the import is complete.
This extends the 'reset' command to automatically create a new
branch if the supplied reference isn't already known as a branch.
While I'm at it I also modified the syntax of the reset command
to terminate with an empty line, like commit and tag operate.
This just makes the command set more consistent.
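As a hedged illustration of the stream a frontend might emit (the
helper name and the ":42" mark are hypothetical):

```c
#include <stdio.h>

/* Emit a reset that creates (or rewinds) a branch without a commit;
 * the blank line terminates the command like commit and tag. */
static void emit_reset(const char *ref, const char *from)
{
    printf("reset %s\n", ref);
    if (from)
        printf("from %s\n", from); /* e.g. ":42" or a full SHA1 */
    printf("\n");
}
```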
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Support creation of merge commits in fast-import.
Some importers are able to determine when branch merges occurred
within their source data. In these cases they will want to supply
the correct commits to fast-import so that a proper merge commit
will exist in Git. This is now supported by supplying a 'merge'
command after the commit message and optional 'from' command.
A merge is not actually performed by fast-import; it's assumed that
the frontend performed any sort of merging activity already and
that fast-import should simply be storing its result.
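As a hedged illustration of where 'merge' sits in the stream (the
helper, identity and timestamp are hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Emit a two-parent commit: 'merge' follows the message's data and
 * the optional 'from' command. */
static void emit_merge_commit(const char *ref, const char *msg,
                              const char *from, const char *other_parent)
{
    printf("commit %s\n", ref);
    printf("committer A U Thor <author@example.com> 1170000000 +0000\n");
    printf("data %zu\n%s\n", strlen(msg), msg);
    if (from)
        printf("from %s\n", from);
    printf("merge %s\n", other_parent); /* the second parent */
    printf("\n");
}
```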
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Fix repository corruption when using marks for modified blobs.
Apparently we did not copy the blob SHA1 into the stack variable
'sha1' when a mark is used to refer to a prior blob. This code
was not previously tested as the Mozilla CVS -> git-fast-import
program always fed us full SHA1s for modified blobs and did not
use the mark feature there.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Additional fast-import tree delta corruption cleanups.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Correct tree corruption problems in fast-import.
The new tree delta implementation caused blob SHA1s to be used
instead of a tree SHA1 when a tree was written out. This really
only appeared to happen when converting an existing file to a tree,
but may have been possible in some other situations.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Replace ywrite in fast-import with the standard write_or_die.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Reuse the same buffer for all commits/tags in fast-import.
Since most commits and tag objects are around the same size and we
only generate one at a time we can reuse the same buffer rather than
xmalloc'ing and free'ing the buffer every time we generate a commit.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Recycle data buffers for tree generation in fast-import.
We only ever generate at most two tree streams at a time. Since most
trees are around the same size we can simply recycle the buffers from
one tree generation to the next rather than constantly xmalloc'ing
and free'ing them. This should perform slightly better when handling
a large number of trees as malloc has less work to do.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented tree delta compression in fast-import.
We now store for every tree entry two modes and two sha1 values;
the base (aka "version 0") and the current/new (aka "version 1").
When we generate a tree object we also regenerate the prior version
object and use that as our base object for a delta. This strategy
saves a significant amount of memory as we can continue to use the
atom pool for file/directory names and only increases each tree
entry by an additional 24 bytes of memory.
Branches should automatically delta against their ancestor tree,
unless the ancestor tree is already at the delta chain limit.
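A sketch of the per-entry layout described above; field names follow
the description rather than the exact fast-import declarations:

```c
#include <stdint.h>

struct tree_content; /* subtree payload, declared elsewhere */
struct atom_str;     /* interned name from the atom pool */

struct tree_entry_ms {
    uint16_t mode;
    unsigned char sha1[20];
};

struct tree_entry {
    struct tree_content *tree;        /* NULL when not loaded */
    struct atom_str *name;
    struct tree_entry_ms versions[2]; /* 0 = base, 1 = current */
};
```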
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Converted hash memcpy/memcmp to new hashcpy/hashcmp/hashclr.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Don't crash fast-import if no branch log was requested.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added 'reset' command to clear a branch's tree.
Sometimes an import frontend may need to work with a temporary branch
which will actually contain many different branches over the life
of the import. This is especially useful when the frontend needs
to create a tag from a set of file versions which are otherwise
never a commit.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Map only part of the generated pack file at any point in time.
When generating a very large pack file (for example close to 1 GB
in size) it may be impossible for the kernel to find a contiguous
free range within a 32 bit address space for the mapping to be
located at. This is especially problematic on large imports where
there is a lot of malloc activity occurring within the same process
and the malloc'd regions may straddle the previously mapped regions,
thereby creating large holes in the address space.
So instead we map only 128 MB of the pack at any given time.
This will likely increase the number of times the file gets mapped
(with additional system time required to update the page tables
more frequently) but will allow the program to handle packs up to
4 GB in size.
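A sketch of the windowing, with an illustrative helper and window
constant:

```c
#include <sys/mman.h>

#define WINDOW_SZ (128UL * 1024 * 1024) /* 128 MiB map window */

/* Map only the window containing 'offset' rather than the whole
 * pack, so no multi-GiB contiguous mapping is ever required. */
static void *map_window(int fd, unsigned long offset, unsigned long *base)
{
    *base = offset & ~(WINDOW_SZ - 1); /* align to window start */
    return mmap(NULL, WINDOW_SZ, PROT_READ, MAP_PRIVATE, fd, *base);
}
```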
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Fixed compile error in fast-import.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Fixed GPF in fast-import caused by unterminated linked list.
fast-import was encountering a GPF when it ran out of free tree_entry
objects but didn't know this was the cause because the last
tree_entry wasn't terminated with a NULL pointer. The missing NULL
pointer occurred when we allocated additional entries via xmalloc
but didn't set the last tree_entry's "next" pointer to NULL.
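The shape of the fix, with illustrative names:

```c
#include <stdlib.h>

struct tree_entry_stub { struct tree_entry_stub *next; };

/* Thread a fresh block of entries onto the free list; the final
 * entry's next pointer must be NULL or the list walker runs off
 * the end of the allocation. */
static struct tree_entry_stub *grow_free_list(size_t n)
{
    struct tree_entry_stub *e = malloc(n * sizeof(*e));
    size_t i;
    if (!e)
        return NULL;
    for (i = 0; i + 1 < n; i++)
        e[i].next = &e[i + 1];
    e[n - 1].next = NULL; /* the terminator that was missing */
    return e;
}
```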
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added --branch-log option to fast-import.
This option can be used to record every commit, the mark (if
supplied) and the branch name of the commit into a log file as
each commit is generated. This log can be useful to verify the
results of an import as the commits can be compared to some source
repository matching commits through the mark value.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added option to export the marks table when fast-import terminates.
The marks table can be used by the frontend to load any commit after
the import and compare it to whatever data the frontend knows about
that commit. If the mark idnums can be easily correlated to some
reference source then it's relatively trivial to compare the Git
tree to the reference to verify the accuracy of the import.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Account for tree entry memory costs in fast-import.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Moved the 'from' command to after 'data' to help cvs2svn.
cvs2svn has three phases: begin_commit, middle_commit, end_commit.
The ancestor is computed in the middle_commit phase, so it's easier
to generate a stream if the 'from' command appears after the commit
message itself but before the file change commands.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Remove branch creation command from fast-import.
Jon Smirl was finding it difficult to alter cvs2svn to generate
branch commands prior to the first commit of the same branch.
This change moves the 'from' command to be an optional parameter of
the 'commit' command, thereby allowing a new branch to be defined
at the moment it gets used to create the first commit on that branch.
This change makes it impossible to create a branch with no commits
on it as at least one commit is needed to register the branch.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Round out memory pool allocations in fast-import to pointer sizes.
Some architectures (e.g. SPARC) would require that we access pointers
only on pointer-sized alignments. So ensure the pool allocator
rounds out non-pointer-sized allocations to the next pointer
boundary so we don't generate bad memory addresses. This could
have occurred if
we had previously allocated an atom whose string was not a whole
multiple of the pointer size, for example.
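The rounding itself is a one-liner; a sketch:

```c
#include <stddef.h>

/* Round each request up to the next multiple of the pointer size
 * before carving it out of the pool. */
static size_t align_up(size_t len)
{
    return (len + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
}
```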
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented tree reloading in fast-import.
Tree reloading allows fast-import to swap out the least-recently used
branch by simply deallocating the data structures from memory that
were associated with that branch. Later if the branch becomes active
again it can lazily recreate those structures on demand by reloading
the necessary trees from the pack file it originally wrote them to.
The reloading process is implemented by mmap'ing the pack into
memory and using a much tighter variant of the pack reading code
contained in sha1_file.c. This was a blatant copy from sha1_file.c
but the unpacking functions were significantly simplified and are
actually now in a form that should make it easier to map only the
necessary regions of a pack rather than the entire file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented 'tag' command in fast-import.
Tags received from the frontend are generated in memory in a simple
linked list in the order that the tag commands were sent by the
frontend. If multiple different tag objects for the same tag name
get generated the last one sent by the frontend will be the one
that gets written out at termination. Multiple tag objects for
the same name will cause all older tags of the same name to be lost.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added branch load counter to fast-import.
If the branch load count exceeds the number of branches created then
the frontend is causing fast-import to page branches into and out of
memory due to the way it's ordering its commits. Performance can
likely be increased if the frontend were to alter its commit
sequence such that it stays on one branch before switching to another
branch, then never returns to the prior branch.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added mark store/find to fast-import.
Marks are now saved when the mark directive gets used by the frontend
and may be used in place of a SHA1 expression to locate a previous
SHA1 which fast-import may have generated. This is particularly
useful with commits where the frontend does not (easily) have the
ability to compute the SHA1 for an arbitrary commit but needs it
to generate a branch or tag from that commit.
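A toy sketch of the idea (fast-import's real mark table may be
structured differently; names are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A flat table grown on demand, mapping mark idnums to the 20-byte
 * SHA1 fast-import generated for them. */
static unsigned char (*marks)[20];
static uintmax_t marks_alloc;

static void set_mark(uintmax_t idnum, const unsigned char *sha1)
{
    if (idnum >= marks_alloc) {
        uintmax_t n = idnum + 1024;
        marks = realloc(marks, n * sizeof(*marks));
        memset(marks + marks_alloc, 0, (n - marks_alloc) * sizeof(*marks));
        marks_alloc = n;
    }
    memcpy(marks[idnum], sha1, 20);
}
```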
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Converted fast-import to accept standard command line parameters.
The following command line options are now accepted before the
pack name:
--objects=n # replaces the object count after the pack name
--depth=n # delta chain depth to use (default is 10)
--active-branches=n # maximum number of branches to keep in memory
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Fixed segfault in fast-import after growing a tree.
Growing a tree caused all subtrees to be deallocated and put back
into the free list yet those subtrees' contents were still actively
in use. Consequently they were doled out again and got stomped
on elsewhere. Releasing a tree is now performed in two parts,
either releasing only the content array or releasing the content
array and recursively releasing the subtree(s).
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Allow symlink blobs in trees during fast-import.
If a frontend is smart enough to import a symlink then we should
let them do so. We'll assume that they were smart enough to first
generate a blob to hold the link target, as that's how symlinks
get represented in Git.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Changed fast-import's pack header creation to use pack.h
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Converted fast-import to a text based protocol.
Frontend clients can now send a text stream to fast-import rather
than a binary stream. This should facilitate developing frontend
software as the data stream is easier to view, manipulate and debug
by hand and Mark-I eyeball.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implement blob ID validation in fast-import.
When accepting revision SHA1 IDs from the frontend verify the SHA1
actually refers to a blob and is known to exist. It's an error
to use a SHA1 in a tree if the blob doesn't exist as this would
cause git-fsck-objects to report a missing blob should the pack get
closed without the blob being appended into it or a subsequent pack.
So right now we'll just ask that the frontend "pre-declare" any
blobs it wants to use in a tree before it can use them.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added tree and commit writing to fast-import.
The tree of the current commit can be altered by file_change commands
before the commit gets written to the pack. The file changes are
rather primitive as they simply allow removal of a tree entry or
setting/adding a tree entry.
Currently trees and commits aren't being deltafied when written to
the pack and branch reloading from the current pack doesn't work,
so at most 5 branches can be worked with at any one time.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Implemented branch handling and basic tree support in fast-import.
This provides the basic data structures needed to store trees in
memory while we are processing them for a branch. What we are
attempting to do is track one complete tree for each branch that
the frontend has registered with us through the 'newb' (new_branch)
command. When the frontend edits that tree through 'updf' or 'delf'
commands we'll mark the affected tree(s) as being dirty and recompute
their objects during 'comt' (commit).
Currently the protocol is decidedly _not_ user friendly. I crashed
fast-import by giving it bad input data from Perl. I may try to
improve upon it, or at least upon its error handling.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added basic command handler to fast-import.
Moved the new_blob logic off into a new subroutine and
invoked it when getting the 'blob' command.
Added statistics dump to STDERR when the program terminates listing
what it did at a high level. This is somewhat interesting.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Refactored fast-import's internals for future additions.
Too many global variables were being used and not enough
code was reusable to process trees and commits, so this is
a simple refactoring of the existing blob processing code
to get into a state that will be easier to handle trees
and commits in.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Cleaned up memory allocation for object_entry structs.
Although it's easy to ask the user to tell us how many objects they
will need, it's probably better to dynamically grow the object table
in large units. But if the user can give us a hint as to roughly
how many objects then we can still use it during startup.
Also stopped printing the SHA1 strings to stdout as no user is
currently making use of that facility.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Added automatic index generation to fast-import.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Created fast-import, a tool to quickly generate a pack from blobs.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
git-commit documentation: -a adds and also removes
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-remote: no longer silent on unknown commands.
Signed-off-by: Quy Tonthat <qtonthat@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-svn: fix tests to work with older svn
Some of the recent changes and shortcuts to the tests broke
things for people using older versions of svn:
t9104-git-svn-follow-parent.sh:
  v1.2.3 (from SuSE 10.0 as reported by riddochc on #git
  (thanks!)) required an extra 'svn up'. I was also able to
  reproduce this with v1.1.4 (Debian Sarge).

lib-git-svn.sh:
  SVN::Repos bindings in versions up to and including 1.1.4
  (Sarge again) do not pass fs-config options to the underlying
  library. BerkeleyDB repositories also seem completely broken
  on all my Sarge machines; so not using FSFS does not seem to
  be an option for most people.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make git-prune-packed a bit more chatty.
Steven Grimm noticed that git-repack's verbosity is inconsistent
because pack-objects is chatty and prune-packed is not. This
makes the latter a bit more chatty and gives a -q option to
squelch it.
Signed-off-by: Junio C Hamano <junkio@cox.net>
glossary typofix
Pointed out by Paul Witt <paul.witt@oxix.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
use 'init' instead of 'init-db' for shipped docs and tools
While 'init-db' still is and probably will always remain a valid git
command for obvious backward compatibility reasons, it would be a good
idea to move shipped tools and docs to using 'init' instead.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Explain "Not a git repository: '.git'".
Andy Parkins noticed that the error message some "whole tree"
oriented commands emit is worded misleadingly when they refuse
to run from a subdirectory.
We could probably allow some of them to work from a subdirectory
but that is a semantic change that could have unintended side
effects, so let's start at first by rewording the error message
to be easier to read without doing anything else to be safe.
Signed-off-by: Junio C Hamano <junkio@cox.net>
merge-recursive: do not report the resulting tree object name
It is not available in the outermost merge, and it is only
useful for debugging merge-recursive in the inner merges.
Sergey Vlasov noticed that the old code accesses an
uninitialized location.
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-revert: Fix die before git-sh-setup defines it.
The code previously checked its own name and called 'die' upon
an error. However 'die' was not yet defined because git-sh-setup
had not been sourced yet. Instead simply write the error message
to stderr and exit with an error as was originally desired.
Signed-off-by: Bob Proulx <bob@proulx.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
fix documentation for git-commit --no-verify
Despite what the documentation claims, git-commit does not check
commits for suspicious lines: all hooks are disabled by default,
and the pre-commit hook could be changed to do something else.
Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Fix up totally buggered read_or_die()
The "read_or_die()" function would silently NOT die for a partial read,
and since it was of type "void" it obviously couldn't even return the
partial number of bytes read.
IOW, it was totally broken. This hopefully fixes it up.
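The shape of the fix, as a sketch built on git's read_in_full:

```c
#include "cache.h" /* git-internal: read_in_full() and die() */

/* Die on errors AND short reads, which the void-returning original
 * silently swallowed. */
static void read_or_die(int fd, void *buf, size_t count)
{
    if (read_in_full(fd, buf, count) != (ssize_t)count)
        die("read error or unexpected end of file");
}
```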
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
The "read_or_die()" function would silently NOT die for a partial read,
and since it was of type "void" it obviously couldn't even return the
partial number of bytes read.
IOW, it was totally broken. This hopefully fixes it up.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Clean up write_in_full() users
With the new-and-improved write_in_full() semantics, where a partial write
simply always returns a real error (and always sets 'errno' when that
happens, including for the disk full case), a lot of the callers of
write_in_full() were just unnecessarily complex.
In particular, there's no reason to ever check for a zero length or
return: if the length was zero, we'll return zero, otherwise, if a disk
full resulted in the actual write() system call returning zero the
write_in_full() logic would have correctly turned that into a negative
return value, with 'errno' set to ENOSPC.
I really wish every "write_in_full()" user would just check against "<0"
now, but this fixes the nasty and stupid ones.
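The preferred caller pattern now reduces to a single sign check; a
sketch (the helper name is illustrative):

```c
#include <errno.h>
#include <string.h>
#include "cache.h" /* git-internal: write_in_full() and die() */

/* The whole caller pattern collapses to one sign check; errno is
 * reliable even for the disk-full case. */
static void write_checked(int fd, const void *buf, size_t len)
{
    if (write_in_full(fd, buf, len) < 0)
        die("write failure: %s", strerror(errno));
}
```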
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
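For illustration, the pattern the cleaned-up callers reduce to (a
sketch assuming git's write_in_full() and die() helpers; the message
text is invented):

    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>

    extern ssize_t write_in_full(int fd, const void *buf, size_t count);
    extern void die(const char *err, ...);

    /* One check suffices: a zero-length request returns zero, and a
     * short write has already been turned into -1 with errno set. */
    static void write_buf_or_die(int fd, const void *buf, size_t len)
    {
        if (write_in_full(fd, buf, len) < 0)
            die("write failed: %s", strerror(errno));
    }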
reflog-expire: brown paper bag fix.
When --stale-fix is not passed, the code did not initialize the
two commit objects properly.
Signed-off-by: Junio C Hamano <junkio@cox.net>
GIT v1.5.0-rc1
Signed-off-by: Junio C Hamano <junkio@cox.net>
plug a few leaks in revision walking used in describe.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Choose better tag names in git-describe after merges.
Recently git.git itself encountered a situation on its master and
next branches where git-describe stopped reporting 'v1.5.0-rc0-gN'
and instead started reporting 'v1.4.4.4-gN'. This appeared to be
a backward jump in version numbering.
maint  o-----------------4
        \                 \
master   o-o-o-o-o-o-o-5-o-C-o-W
The issue is that commit C in the diagram claims it is version
1.5.0, as the tag v1.5.0 is placed on commit 5. Yet commit W
claims it is version 1.4.4.4 as the tag v1.5.0 has an older tag
date than the v1.4.4.4 tag.
As it turns out this situation is very common. A bug fix applied
to maint and later merged into master occurs frequently enough that
it should Just Work Right(tm).
Rather than taking the first tag that gets found, git-describe will
now generate a list of all possible tags and select the one which
has the greatest number of commits in common with HEAD (or whatever
revision the user requested the description of).
This rule is based on the principle shown in the diagram above.
There are a large number of commits on the primary development branch
'master' which do not appear in the 'maint' branch, and many of
these are already tagged as part of v1.5.0-rc0. Additionally these
commits are not in v1.4.4.4, as they are part of the v1.5.0 release
still being developed. The v1.5.0-rc0 tag is more descriptive of
W than v1.4.4.4 is, and therefore should be used.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
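As a rough illustration of the selection rule (all names here are
invented; the real command walks the revision graph to compute the
counts):

    /* Candidate tags that can describe the commit, each annotated
     * with how much history it shares with that commit. */
    struct candidate {
        const char *tag_name;
        unsigned long commits_in_common;
    };

    /* Pick the tag sharing the most commits with the commit being
     * described, per the rule explained above. */
    static const char *select_tag(const struct candidate *c, int n)
    {
        int i, best = 0;

        if (!n)
            return NULL;
        for (i = 1; i < n; i++)
            if (c[i].commits_in_common > c[best].commits_in_common)
                best = i;
        return c[best].tag_name;
    }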
Merge branch 'jc/bare'
* jc/bare:
Disallow working directory commands in a bare repository.
git-fetch: allow updating the current branch in a bare repository.
Introduce is_bare_repository() and core.bare configuration variable
Move initialization of log_all_ref_updates
Merge branch 'ar/merge-recursive'
* ar/merge-recursive:
merge-recursive: do not use on-file index when not needed.
Speed-up recursive by flushing index only once for all entries
Merge branch 'jc/detached-head'
* jc/detached-head:
git-checkout: handle local changes sanely when detaching HEAD
git-checkout: safety check for detached HEAD checks existing refs
git-checkout: fix branch name output from the command
git-checkout: safety when coming back from the detached HEAD state.
git-checkout: rewording comments regarding detached HEAD.
git-checkout: do not warn detaching HEAD when it is already detached.
Detached HEAD (experimental)
git-branch: show detached HEAD
git-status: show detached HEAD
git-status: wording update to deal with deleted files.
If you do:
$ /bin/rm foo
$ git status
we used to say "git add ... to add content to commit". But
suggesting "git add" to record the deletion of a file is simply
insane.
So this rewords various things:
- The section header is now "Changed but not updated",
instead of "Changed but not added";
- Suggestion is "git add ... to update what will be committed",
instead of "... to add content to commit";
- If there are removed paths, the above suggestion becomes "git
add/rm ... to update what will be committed";
- For untracked files, the suggestion is "git add ... to
include in what will be committed".
Signed-off-by: Junio C Hamano <junkio@cox.net>
git-rm: do not fail on already removed file.
Often the user would do "/bin/rm foo" before telling git, but
then want to tell git about it. "git rm foo" however would fail
because it cannot unlink(2) foo.
Treat ENOENT error return from unlink(2) as if a successful
removal happened.
Signed-off-by: Junio C Hamano <junkio@cox.net>
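The heart of the change, as a sketch (the function name is invented
and error() is git's helper):

    #include <errno.h>
    #include <unistd.h>

    extern int error(const char *err, ...);  /* git's helper */

    /* ENOENT means the user already removed the file themselves;
     * treat that the same as a successful removal. */
    static int remove_path_quietly(const char *path)
    {
        if (unlink(path) && errno != ENOENT)
            return error("git-rm: unable to remove %s", path);
        return 0;
    }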
Avoid errors and warnings when attempting to do I/O on zero bytes
Unfortunately, while {read,write}_in_full do take into account
zero-sized reads/writes, their die and whine variants do not.
I have a repository where zero-sized files in the history were
triggering these errors and warnings.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
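A sketch of the shape of the fix for one of the variants (assuming
git's write_in_full(); the real wrapper may differ in detail): the
result is compared against the requested count rather than against
zero, so a zero-byte request no longer looks like a failure.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    extern ssize_t write_in_full(int fd, const void *buf, size_t count);

    static int write_or_whine(int fd, const void *buf, size_t count,
                              const char *msg)
    {
        ssize_t written = write_in_full(fd, buf, count);

        if (written < 0 || (size_t)written != count) {
            fprintf(stderr, "%s: write error (%s)\n",
                    msg, strerror(errno));
            return 0;
        }
        return 1;
    }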
Better error messages for corrupt databases
This fixes another problem that Andy's case showed: git-fsck-objects
reports nonsensical results for corrupt objects.
There were actually two independent and confusing problems:
- when we had a zero-sized file and used map_sha1_file, mmap() would
return EINVAL, and git-fsck-objects would report that as an insane and
confusing error. I don't know when this was introduced, it might have
been there forever.
- when "parse_object()" returned NULL, fsck would say "object not found",
which can be very confusing, since obviously the object might "exist",
it's just unparseable because it's totally corrupt.
So this just makes "xmmap()" return NULL for a zero-sized object (which
is a valid result, exactly the same way "malloc()" can return NULL for
a zero-sized allocation). That fixes the first problem (but we could have
fixed it in the caller too - I don't personally much care whichever way it
goes, but maybe somebody should check that the NO_MMAP case does
something sane in this case too?).
And the second problem is solved by just making the error message slightly
clearer - the failure to parse an object may be because it's missing or
corrupt, not necessarily because it's not "found".
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
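For the first fix, a minimal sketch of the idea (the real xmmap()
also retries after releasing pack memory, which is omitted here):

    #include <errno.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    extern void die(const char *err, ...);  /* git's helper */

    /* NULL for a zero-sized map, mirroring how malloc(0) may return
     * NULL, instead of letting mmap() fail with EINVAL. */
    static void *xmmap(void *start, size_t length, int prot, int flags,
                       int fd, off_t offset)
    {
        void *ret;

        if (!length)
            return NULL;
        ret = mmap(start, length, prot, flags, fd, offset);
        if (ret == MAP_FAILED)
            die("mmap failed: %s", strerror(errno));
        return ret;
    }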
config-set: check write-in-full returns in set_multivar
Signed-off-by: Junio C Hamano <junkio@cox.net>
index-pack: write-or-die instead of unchecked write-in-full.
Signed-off-by: Junio C Hamano <junkio@cox.net>
write_in_full: really write in full or return error on disk full.
Signed-off-by: Junio C Hamano <junkio@cox.net>
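The semantics being fixed, sketched (illustrative; the real code
routes the syscall through a retrying wrapper):

    #include <errno.h>
    #include <unistd.h>

    /* Write everything or return -1 with errno set; a zero-byte
     * write() -- the disk-full case -- is mapped to ENOSPC. */
    static ssize_t write_in_full(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t written = write(fd, p, count);
            if (written < 0) {
                if (errno == EINTR || errno == EAGAIN)
                    continue;
                return -1;
            }
            if (!written) {
                errno = ENOSPC;
                return -1;
            }
            count -= written;
            p += written;
            total += written;
        }
        return total;
    }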
Document git-init
These days, the command does a lot more than just initialise the
object database (such as setting default config-variables,
installing template hooks...), and "git init" is actually a more
sensible name nowadays.
Signed-off-by: Junio C Hamano <junkio@cox.net>
write-cache: do not leak the serialized cache-tree data.
It is not used after being written, and just leaks every time
we write the index out.
Signed-off-by: Junio C Hamano <junkio@cox.net>
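A sketch of the pattern (serialize_cache_tree() and the other names
are invented stand-ins, not the actual write-cache code):

    #include <stdlib.h>
    #include <sys/types.h>

    struct cache_tree;  /* opaque here */
    extern void *serialize_cache_tree(struct cache_tree *, unsigned long *);
    extern ssize_t write_in_full(int fd, const void *buf, size_t count);

    /* The serialized data is needed only for the write itself, so
     * free it immediately instead of leaking it on every write-out. */
    static int write_cache_tree_data(int fd, struct cache_tree *tree)
    {
        unsigned long size;
        void *data = serialize_cache_tree(tree, &size);
        int err = (write_in_full(fd, data, size) < 0);

        free(data);
        return err ? -1 : 0;
    }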
Disallow working directory commands in a bare repository.
If the user tries to run a porcelainish command which requires
a working directory in a bare repository they may get unexpected
results which are difficult to predict and may differ from command
to command.
Instead we should detect that the current repository is a bare
repository and refuse to run the command there, as there is no
working directory associated with it.
[jc: updated Shawn's original somewhat -- bugs are mine.]
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
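Sketched, the guard such commands gain (assuming the
is_bare_repository() helper this series introduces; the message text
is invented):

    extern int is_bare_repository(void);    /* from this series */
    extern void die(const char *err, ...);  /* git's helper */

    static void require_work_tree(const char *cmd)
    {
        if (is_bare_repository())
            die("%s cannot be used in a bare repository", cmd);
    }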
merge-recursive: do not use on-file index when not needed.
This revamps the merge-recursive implementation following the
outline in:
Message-ID: <7v8xgileza.fsf@assigned-by-dhcp.cox.net>
There is no need to write out the index until the very end just
once from merge-recursive. Also there is no need to write out
the resulting tree object for the simple case of merging with a
single merge base.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Speed-up recursive by flushing index only once for all entries
The merge-recursive implementation in C inherited from the original
Python implementation the invariant that the on-file index file is
written out and later read back after any index operation and after
writing trees. But that was only because the original implementation
worked at the scripting level.
There is no need to write out the index file after handling
every path.
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
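The flush-once shape, sketched with invented names:

    struct merge_entry;                               /* opaque here */
    extern int process_entry(struct merge_entry *);   /* hypothetical */
    extern int write_index_file(int fd);              /* hypothetical */

    /* Handle every path against the in-core index, then write the
     * index to disk exactly once, at the very end. */
    static int process_all_entries(struct merge_entry **e, int nr, int fd)
    {
        int i;

        for (i = 0; i < nr; i++)
            if (process_entry(e[i]) < 0)
                return -1;
        return write_index_file(fd);
    }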
Provide better feedback for the untracked-only case in status output
Since 98bf8a47c296f51ea9722fef4bb81dbfb70cd4bb, status would claim that
git-commit could be useful even if there are no changes except untracked
files. Since wt-status is already computing all the information needed,
go the whole way and actually track the (non-)emptiness of all three
sections separately, unify the code, and provide useful messages for
each individual case.
Thanks to Junio and Michael Loeffler for suggestions.
Signed-off-by: Jürgen Rühle <j-r@online.de>
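A sketch of the idea (field names and message strings are invented
approximations, not the actual wt-status code):

    /* Track each section's (non-)emptiness separately so the closing
     * message can match the actual state of the working tree. */
    struct wt_status_summary {
        int updated;    /* changes staged for commit */
        int changed;    /* changed but not updated */
        int untracked;  /* untracked files present */
    };

    static const char *closing_message(const struct wt_status_summary *s)
    {
        if (s->updated)
            return "";  /* something to commit; no warning needed */
        if (s->changed)
            return "no changes added to commit";
        if (s->untracked)
            return "nothing added to commit but untracked files present";
        return "nothing to commit";
    }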
Merge branch 'js/reflog'
* js/reflog:
Sanitize for_each_reflog_ent()
Makefile: remove $foo when $foo.exe is built/installed.
On Cygwin, newly built builtins are not recognized, because there exist both
the executable binaries (with .exe extension) _and_ the now-obsolete
scripts (without extension), but the script is executed.
Signed-off-by: Junio C Hamano <junkio@cox.net>
send-email: work around double encoding of in-body From field.
git-send-email sends out the message taken from format-patch
output without quoting or encoding. When copying the From:
line to form the in-body From: field, it should not copy it
verbatim, because the From: for the header is quoted according
to RFC 2047 when not ASCII.
The original came from Jürgen Rühle, but I moved the
string munging into a separate function so that later other
people can tweak it more easily. Bugs introduced during the
translation are mine.
Signed-off-by: Junio C Hamano <junkio@cox.net>
Add git-init documentation.
Oops. Commit 515377ea9ec6192f82a2fa5c5b5b7651d9d6cf6c missed one
file, the git-init documentation.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Fix t1410 for core.filemode==false
Since c869753e, core.filemode is hardwired to false on Cygwin.
So this test had no chance to succeed: an early commit (changing
just the filemode) failed, and therefore so did all subsequent
tests.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Make git-describe a builtin.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
Don't save the commit buffer in git-describe.
The commit buffer (message of the commit) is not actually
used by the git-describe process. We can save some memory
by not keeping it around.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
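A sketch of how a builtin can opt out (save_commit_buffer is a real
global in git of this era; the function body is illustrative):

    extern int save_commit_buffer;

    int cmd_describe(int argc, const char **argv, const char *prefix)
    {
        /* describe only needs the commit graph, not message bodies */
        save_commit_buffer = 0;

        /* ... parse options, walk history, print the name ... */
        (void)argc; (void)argv; (void)prefix;
        return 0;
    }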
Fix warnings in sha1_file.c - use C99 printf format if available
Signed-off-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
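The usual shape of such a fix, sketched (the macro and helper names
are illustrative):

    #include <stdio.h>

    /* Use C99's %zu for size_t when available, with a cast-based
     * fallback for older compilers. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #  define SZ_FMT "%zu"
    static size_t sz_fmt(size_t s) { return s; }
    #else
    #  define SZ_FMT "%lu"
    static unsigned long sz_fmt(size_t s) { return s; }
    #endif

    static void report_size(size_t sz)
    {
        fprintf(stderr, "object size: " SZ_FMT "\n", sz_fmt(sz));
    }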
-u is now the default for 'git-mailinfo'.
Originally from David Woodhouse; this also adjusts the callers of
mailinfo to the new default.
Signed-off-by: Junio C Hamano <junkio@cox.net>