lzma_stream_encoder() and lzma_stream_encoder_mt() always assumed
this. Before this patch, failing lzma_filters_copy() could result
in free(invalid_pointer) or invalid memory reads in stream_encoder.c
or stream_encoder_mt.c.
To trigger this, allocating memory for a filter options structure
has to fail. These are tiny allocations so in practice they very
rarely fail.
Certain badness in the filter chain array could also make
lzma_filters_copy() fail but both stream_encoder.c and
stream_encoder_mt.c validate the filter chain before
trying to copy it, so the crash cannot occur this way.
The documentation in src/liblzma/api/lzma/index.h suggests that
both the unpadded (compressed) size and the uncompressed size
are checked for overflow, but only the unpadded size was checked.
The uncompressed size check is done first since that overflow is
more likely to occur than an overflow of the unpadded size or the
Index field size.
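A rough sketch of the kind of check that was added (illustrative
only, not copied from index.c; index->uncompressed_size stands for
the running total tracked by the Index):

    if (uncompressed_size > LZMA_VLI_MAX - index->uncompressed_size)
        return LZMA_DATA_ERROR;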
RHEL/CentOS 7 shipped with 5.1.2alpha, including the threaded
encoder that is behind #ifdef LZMA_UNSTABLE in the API headers.
In 5.1.2alpha these symbols are under XZ_5.1.2alpha in liblzma.map.
API/ABI compatibility tracking isn't done between development
releases so newer releases didn't have XZ_5.1.2alpha anymore.
Later RHEL/CentOS 7 updated xz to 5.2.2 but they wanted to keep
the exported symbols compatible with 5.1.2alpha. After checking
the ABI changes it turned out that >= 5.2.0 ABI is backward
compatible with the threaded encoder functions from 5.1.2alpha
(but not vice versa as fixes and extensions to these functions
were made between 5.1.2alpha and 5.2.0).
In RHEL/CentOS 7, XZ Utils 5.2.2 was patched with
xz-5.2.2-compat-libs.patch to modify liblzma.map:
- XZ_5.1.2alpha was added with lzma_stream_encoder_mt and
lzma_stream_encoder_mt_memusage. This matched XZ Utils 5.1.2alpha.
- XZ_5.2 was replaced with XZ_5.2.2. It is clear that this was
an error; the intention was to keep using XZ_5.2 (XZ_5.2.2
has never been used in XZ Utils). So XZ_5.2.2 lists all
symbols that were listed under XZ_5.2 before the patch.
lzma_stream_encoder_mt and _mt_memusage are included too so
they are listed both here and under XZ_5.1.2alpha.
The patch didn't add any __asm__(".symver ...") lines to the .c
files. Thus the resulting liblzma.so exports the threaded encoder
functions under XZ_5.1.2alpha only. Listing the two functions
also under XZ_5.2.2 in liblzma.map has no effect without
matching .symver lines.
The lack of XZ_5.2 in RHEL/CentOS 7 means that binaries linked
against unpatched XZ Utils 5.2.x won't run on RHEL/CentOS 7.
This is unfortunate but this alone isn't too bad as the problem
is contained within RHEL/CentOS 7 and doesn't affect users
of other distributions. It could also be fixed internally in
RHEL/CentOS 7.
The second problem is more serious: In XZ Utils 5.2.2 the API
headers don't have #ifdef LZMA_UNSTABLE for obvious reasons.
This is true in the RHEL/CentOS 7 version too. Thus now programs
using new APIs can be compiled without an extra #define. However,
the programs end up depending on symbol version XZ_5.1.2alpha
(and possibly also XZ_5.2.2) instead of XZ_5.2 as they would
with an unpatched XZ Utils 5.2.2. This means that such binaries
won't run on other distributions shipping XZ Utils >= 5.2.0 as
they don't provide XZ_5.1.2alpha or XZ_5.2.2; they only provide
XZ_5.2 (and XZ_5.0). (This includes RHEL/CentOS 8 as the patch
luckily isn't included there anymore with XZ Utils 5.2.4.)
Binaries built by RHEL/CentOS 7 users get distributed and then
people wonder why they don't run on some other distribution.
It seems that people have found out about the patch and have been
copying it to some build scripts, seemingly curing the symptoms but
actually spreading the illness further and outside RHEL/CentOS 7.
The ill patch seems to be from late 2016 (RHEL 7.3) and in 2017 it
had spread at least to EasyBuild. I heard about the events only
recently. :-(
This commit splits liblzma.map into two versions: one for
GNU/Linux and another for other OSes that can use symbol versioning
(FreeBSD, Solaris, maybe others). The Linux-specific file and the
matching additions to .c files add full compatibility with binaries
that have been built against a RHEL/CentOS-patched liblzma. Builds
for OSes other than GNU/Linux won't get the vaccine as they should
be immune to the problem (I really hope that no build script uses
the RHEL/CentOS 7 patch outside GNU/Linux).
The RHEL/CentOS compatibility symbols XZ_5.1.2alpha and XZ_5.2.2
are intentionally put *after* XZ_5.2 in liblzma_linux.map. This way
if one forgets to #define HAVE_SYMBOL_VERSIONS_LINUX when building,
the resulting liblzma.so.5 will have lzma_stream_encoder_mt@@XZ_5.2
since XZ_5.2 {...} is the first one that lists that function.
Without HAVE_SYMBOL_VERSIONS_LINUX @XZ_5.1.2alpha and @XZ_5.2.2
will be missing but that's still a minor problem compared to
only having lzma_stream_encoder_mt@@XZ_5.1.2alpha!
The "local: *;" line was moved to XZ_5.0 so that it doesn't need
to be moved around. It doesn't matter where it is put.
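For illustration, the resulting ordering in liblzma_linux.map is
roughly of this shape (a heavily trimmed sketch, not the actual
file contents):

    XZ_5.0 {
    global:
            lzma_code;
            /* ... all other 5.0 symbols ... */
    local:
            *;
    };

    XZ_5.2 {
    global:
            lzma_stream_encoder_mt;
            lzma_stream_encoder_mt_memusage;
            /* ... other 5.2 symbols ... */
    };

    XZ_5.1.2alpha {
    global:
            lzma_stream_encoder_mt;
            lzma_stream_encoder_mt_memusage;
    };

    XZ_5.2.2 {
    global:
            lzma_stream_encoder_mt;
            lzma_stream_encoder_mt_memusage;
            /* ... the rest of the compatibility symbols ... */
    };

Since XZ_5.2 is the first node that lists the threaded encoder
functions, it provides the @@ default version as described above.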
Having two similar liblzma_*.map files is a bit silly as it is,
at least for now, easily possible to generate the generic one
from the Linux-specific file. But that adds extra steps and
increases the risk of mistakes when supporting more than one
build system. So I'd rather maintain two files in parallel and let
validate_map.sh check that they are in sync when "make mydist"
is run.
This adds .symver lines for lzma_stream_encoder_mt@XZ_5.2.2 and
lzma_stream_encoder_mt_memusage@XZ_5.2.2 even though these
weren't exported by RHEL/CentOS 7 (only @@XZ_5.1.2alpha was
for these two). I added these anyway because someone might
misunderstand the RHEL/CentOS 7 patch and think that @XZ_5.2.2
(@@XZ_5.2.2) versions were exported too.
At a glance one could suggest using __typeof__ to copy the function
prototypes when making aliases. However, this doesn't work trivially
because __typeof__ won't copy attributes (lzma_nothrow, lzma_pure)
and it won't change symbol visibility from hidden to default (done
by LZMA_API()). Attributes could be copied with __copy__ attribute
but that needs GCC 9 and a fallback method would be needed anyway.
This uses __symver__ attribute with GCC >= 10 and
__asm__(".symver ...") with everything else. The attribute method
is required for LTO (-flto) support with GCC. Using -flto with
GCC older than 10 is now broken on GNU/Linux and will not be fixed
(can silently result in a broken liblzma build that has dangerously
incorrect symbol versions). LTO builds with Clang seem to work
with the traditional __asm__(".symver ...") method.
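To illustrate the mechanics, here is a sketch only: the _512a alias
name is made up, and the real .c files wrap this in macros and also
take care of changing symbol visibility via LZMA_API().

    #include <lzma.h>

    #if defined(__GNUC__) && __GNUC__ >= 10
    /* GCC >= 10: the attribute keeps working with -flto. */
    __attribute__((__symver__("lzma_stream_encoder_mt@XZ_5.1.2alpha")))
    #endif
    lzma_ret
    lzma_stream_encoder_mt_512a(lzma_stream *strm, const lzma_mt *options)
    {
        return lzma_stream_encoder_mt(strm, options);
    }

    #if !(defined(__GNUC__) && __GNUC__ >= 10)
    /* Older GCC and Clang: the traditional assembler directive. */
    __asm__(".symver lzma_stream_encoder_mt_512a,"
            "lzma_stream_encoder_mt@XZ_5.1.2alpha");
    #endif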
Thanks to Boud Roukema for reporting the problem and discussing
the details and testing the fix.
This documents the changes made in commits
6c6da57ae2,
cad299008c, and
898faa9728.
The --info-memory bit hasn't been finished yet
even though it's already mentioned in this commit
under --memlimit-mt-decompress and --threads.
It will now return LZMA_DATA_ERROR (not LZMA_OK or LZMA_BUF_ERROR)
if LZMA_FINISH is used and there isn't enough input to finish
decoding the Block Header or the Block. The use of LZMA_DATA_ERROR
is simpler and less risky than LZMA_BUF_ERROR but this might
be changed before 5.4.0.
xzgrep wouldn't exit on SIGPIPE or SIGQUIT when it clearly
should have. It's quite possible that it's still not perfect
but at least it's much better.
If multiple exit statuses compete, now it tries to pick
the largest value.
Some comments were added.
The exit status handling of signals is still broken if the shell
uses values larger than 255 in $? to indicate that a process
died due to a signal ***and*** its "exit" command doesn't take
this into account. This seems to work well with the ksh and yash
versions I tried. However, there is a report in gzip/zgrep that
OpenSolaris 5.11 (not 5.10) has a problem with "exit" truncating
the argument to 8 bits:
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=22900#25
Such a bug would break xzgrep but I didn't add a workaround
at least for now. 5.11 is old and I don't know if the problem
exists in modern descendants, or if the problem exists in other
ksh implementations in use.
I don't know if this can make a difference in the real world
but it looked kind of suspicious (what happens with sed
implementations that cannot process very long lines?).
At least this commit shouldn't make it worse.
It avoids the use of sed for prefixing filenames to output lines.
Using sed for that is slower and prone to security bugs so now
the sed method is only used as a fallback.
This also fixes an actual bug: When grepping a binary file,
GNU grep nowadays prints its diagnostics to stderr instead of
stdout and thus the sed-method for prefixing the filename doesn't
work. So with this commit grepping binary files gives reasonable
output with GNU grep now.
This was inspired by zgrep but the implementation is different.
Also replace one use of expr with printf.
The rationale for LC_ALL=C was already mentioned in
69d1b3fc29 that fixed a security
issue. However, unrelated uses weren't changed in that commit yet.
POSIX says that with sed and such tools one should use LC_ALL=C
to ensure predictable behavior when strings contain byte sequences
that aren't valid multibyte characters in the current locale. See
under "Application usage" in here:
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/sed.html
With GNU sed invalid multibyte strings would work without this;
it's documented in its Texinfo manual. Some other implementations
aren't so forgiving.
Fix handling of "xzgrep -25 foo" (in GNU grep "grep -25 foo" is
an alias for "grep -C25 foo"). xzgrep would treat "foo" as filename
instead of as a pattern. This bug was fixed in zgrep in gzip in 2012.
Add -E, -F, -G, and -P to the "no argument required" list.
Add -X to "argument required" list. It is an
intentionally-undocumented GNU grep option so this isn't
an important option for xzgrep but it seems that other grep
implementations (well, those that I checked) don't support -X
so I hope this change is an improvement still.
grep -d (grep --directories=ACTION) requires an argument. In
contrast to zgrep, I kept -d in the "no argument required" list
because it's not supported in xzgrep (or zgrep). This way
"xzgrep -d" gives an error about option being unsupported instead
of telling that it requires an argument. Both zgrep and xzgrep
tell that it's unsupported if an argument is specified.
Add comments.
Turns out that this is needed for .lzma files as the spec in
LZMA SDK says that end marker may be present even if the size
is stored in the header. Such files are rare but exist in the
real world. The code in liblzma is so old that the spec didn't
exist in LZMA SDK back then and I had understood that such
files weren't possible (the lzma tool in LZMA SDK didn't
create such files).
This modifies the internal API so that LZMA decoder can be told
if EOPM is allowed even when the uncompressed size is known.
It's allowed with .lzma and not with other uses.
Thanks to Karl Beldan for reporting the problem.
The SIZE_MAX / 3 was 1365 MiB. 1400 MiB gives a little more room
and it looks like a round (artificial) number in --info-memory
once --info-memory is made to display it.
Also, using #if avoids useless code on 64-bit builds.
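As a sketch of the #if idea (the macro name and the way the cap is
applied are illustrative, not the actual xz code):

    #include <stdint.h>

    /* The cap is only needed when size_t is 32 bits; on 64-bit
     * builds the preprocessor drops the code completely. */
    #if SIZE_MAX <= UINT32_MAX
    #   define MEMLIMIT_CAP (UINT64_C(1400) << 20)   /* 1400 MiB */
    #endif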
This is a soft limit in the sense that it only affects the number of
threads. It never makes xz fail and it never makes xz change
settings that would affect the compressed output.
The idea is to make -T0 have more reasonable behavior when
the system has very many cores or when memory-hungry
compression options are used. This also helps with 32-bit xz,
preventing it from running out of address space.
The downside of this commit is that now the number of threads
might become too low compared to what the user expected. I
hope this to be an acceptable compromise as the old behavior
has been a source of well-argued complaints for a long time.
The main problem with the old behavior is that the compressed
output is different on single-core systems vs. multicore systems.
This commit fixes it by making -T0 one thread in multithreaded mode
on single-core systems.
The downside of this is that it uses more memory. However, if
--memlimit-compress is used, xz can (thanks to the previous commit)
drop to the single-threaded mode still.
In single-threaded mode, --memlimit-compress can make xz scale down
the LZMA2 dictionary size to meet the memory usage limit. This
obviously affects the compressed output. However, if xz was in
threaded mode, --memlimit-compress could make xz reduce the number
of threads but it wouldn't make xz switch from multithreaded mode
to single-threaded mode or scale down the LZMA2 dictionary size.
This seemed illogical and there was even a "FIXME?" about it.
Now --memlimit-compress can make xz switch to single-threaded
mode if one thread in multithreaded mode uses too much memory.
If memory usage is still too high, then the LZMA2 dictionary
size can be scaled down too.
The option --no-adjust was also changed so that it no longer
prevents xz from scaling down the number of threads as that
doesn't affect compressed output (only performance). After
this commit --no-adjust only prevents adjustments that affect
compressed output, that is, with --no-adjust xz won't switch
from multithreaded mode to single-threaded mode and won't
scale down the LZMA2 dictionary size.
The man page wasn't updated yet.
--memlimit-mt-decompress allows specifying the limit for
multithreaded decompression. This matches memlimit_threading in
liblzma. This limit can only affect the number of threads being
used; it will never prevent xz from decompressing a file. The
old --memlimit-decompress option is still used at the same time.
If the value of --memlimit-decompress (the default value or
one specified by the user) is less than the value of
--memlimit-mt-decompress, then --memlimit-mt-decompress is
reduced to match --memlimit-decompress.
Man page wasn't updated yet.
In most cases if the input file is corrupt the application won't
care about the uncompressed content at all. With this new flag
the threaded decoder will return an error as soon as any thread
has detected an error; it won't first copy out the data that
precedes the location of the error.
I don't plan to use this in xz to keep the behavior consistent
between single-threaded and multi-threaded modes.
This makes it possible to call lzma_code() in a loop that only
reads new input when lzma_code() didn't fill the output buffer
completely. That isn't the calling style suggested by the
liblzma example program 02_decompress.c so perhaps the usefulness
of this feature is limited.
Also, it is possible to write such a loop so that it works
with the single-threaded decoder but not with the threaded
decoder even after this commit, or so that it works only if
lzma_mt.timeout = 0.
The zlib tutorial <https://zlib.net/zlib_how.html> is a well-known
example of a loop where more input is read only when output isn't
full. Porting this as is to liblzma would work with the
single-threaded decoder (if LZMA_CONCATENATED isn't used) but it
wouldn't work with threaded decoder even after this commit because
the loop assumes that no more output is possible when it cannot
read more input ("if (strm.avail_in == 0) break;"). This cannot
be fixed at liblzma side; the loop has to be modified at least
a little.
I'm adding this in any case because the actual code is simple
and short and should have no harmful side-effects in other
situations.
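For illustration, here is roughly the kind of loop this makes
possible (a sketch with made-up names: strm is assumed to be an
initialized decoder, infile an open FILE *, and error reporting is
omitted):

    uint8_t inbuf[BUFSIZ], outbuf[BUFSIZ];
    lzma_action action = LZMA_RUN;

    strm.next_out = outbuf;
    strm.avail_out = sizeof(outbuf);

    for (;;) {
        /* Read more input only when the previous call left room in
         * the output buffer, that is, it stopped due to lack of
         * input rather than lack of output space. */
        if (strm.avail_in == 0 && strm.avail_out > 0
                && action == LZMA_RUN) {
            strm.next_in = inbuf;
            strm.avail_in = fread(inbuf, 1, sizeof(inbuf), infile);
            if (feof(infile))
                action = LZMA_FINISH;
        }

        lzma_ret ret = lzma_code(&strm, action);

        if (strm.avail_out == 0 || ret == LZMA_STREAM_END) {
            fwrite(outbuf, 1, sizeof(outbuf) - strm.avail_out, stdout);
            strm.next_out = outbuf;
            strm.avail_out = sizeof(outbuf);
        }

        if (ret == LZMA_STREAM_END)
            break;

        if (ret != LZMA_OK)
            break;   /* a real program would report the error */
    }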
Malicious filenames can make xzgrep write to arbitrary files
or (with a GNU sed extension) lead to arbitrary code execution.
xzgrep from XZ Utils versions up to and including 5.2.5 is
affected. 5.3.1alpha and 5.3.2alpha are affected as well.
This patch works for all of them.
This bug was inherited from gzip's zgrep. gzip 1.12 includes
a fix for zgrep.
The issue with the old sed script is that with multiple newlines,
the N-command will read the second line of input, then the
s-commands will be skipped because it's not the end of the
file yet, then a new sed cycle starts and the pattern space
is printed and emptied. So only the last line or two get escaped.
One way to fix this would be to read all lines into the pattern
space first. However, the included fix is even simpler: All lines
except the last line get a backslash appended at the end. To ensure
that shell command substitution doesn't eat a possible trailing
newline, a colon is appended to the filename before escaping.
The colon is later used to separate the filename from the grep
output so it is fine to add it here instead of a few lines later.
The old code also wasn't POSIX compliant as it used \n in the
replacement section of the s-command. Using \<newline> is the
POSIX compatible method.
LC_ALL=C was added to the two critical sed commands. POSIX sed
manual recommends it when using sed to manipulate pathnames
because in other locales invalid multibyte sequences might
cause issues with some sed implementations. In case of GNU sed,
these particular sed scripts wouldn't have such problems but some
other scripts could have, see:
info '(sed)Locale Considerations'
This vulnerability was discovered by:
cleemy desu wayo working with Trend Micro Zero Day Initiative
Thanks to Jim Meyering and Paul Eggert for discussing the different
ways to fix this and for coordinating the patch release schedule
with gzip.
If a worker thread has consumed all input so far and it's
waiting on thr->cond and then the main thread enables
partial update for that thread, the code used to deadlock.
This commit allows one dummy decoding pass to occur in this
situation which then also does the partial update.
As part of the fix, this moves thr->progress_* updates to
avoid the second thr->mutex locking.
Thanks to Jia Tan for finding, debugging, and reporting the bug.
LZMA_TIMED_OUT is not an error and thus stopping threads on
LZMA_TIMED_OUT breaks the decoder badly.
Thanks to Jia Tan for finding the bug and for the patch.
If threading support is enabled at build time, this will
use lzma_stream_decoder_mt() even for single-threaded mode.
With memlimit_threading=0 the behavior should be identical.
This needs some work like adding --memlimit-threading=LIMIT.
The original patch from Sebastian Andrzej Siewior included
a method to get currently available RAM on Linux. It might
be one way to go but as it is Linux-only, the available-RAM
approach needs work for portability or using a fallback method
on other OSes.
The man page wasn't updated yet.
I realize that this is about a decade late.
Big thanks to Sebastian Andrzej Siewior for the original patch.
I made a bunch of smaller changes but after a while quite a few
things got rewritten. So any bugs in the commit were created by me.
Add lzma_outq_clear_cache2() which may leave one buffer allocated
in the cache.
Add lzma_outq_outbuf_memusage() to get the memory needed for
a single lzma_outbuf. This is now used internally in outqueue.c too.
Track both the total amount of memory allocated and the amount of
memory that is in active use (not in cache).
In lzma_outbuf, allow storing the current input position that
matches the current output position. This way the main thread
can notice when no more output is possible without first providing
more input.
Allow specifying return code for lzma_outq_read() in a finished
lzma_outbuf.
If lzma_index_append() failed (most likely memory allocation failure)
it could have gone unnoticed and the resulting .xz file would have
an incorrect Index. Decompressing such a file would produce the
correct uncompressed data but then an error would occur when
verifying the Index field.
Now it limits the input and output buffer sizes that are
passed to a raw decoder. This way there's no need to check
if the sizes can grow too big or overflow when updating
Compressed Size and Uncompressed Size counts. This also means
that a corrupt file cannot cause the raw decoder to process
useless extra input or output that would exceed the size info
in Block Header (and thus cause LZMA_DATA_ERROR anyway).
More importantly, now the size information is verified more
carefully in case raw decoder returns LZMA_OK. This doesn't
really matter with the current single-threaded .xz decoder
as the errors would be detected slightly later anyway. But
this helps avoid corner cases in the upcoming threaded
decompressor, and it might help other Block decoder uses
outside liblzma too.
The test files bad-1-lzma2-{9,10,11}.xz test these conditions.
With the single-threaded .xz decoder the only difference is
that LZMA_DATA_ERROR is detected in a different place now.
Previously lzma_lzma_props_encode() and lzma_lzma2_props_encode()
assumed that the options pointers must be non-NULL because
with these filters the API says it must never be NULL. It is
good to do these checks anyway.
This broke 32-bit builds due to a pointer type mismatch.
This bug was introduced with the output-size-limited encoding
in 625f4c7c99.
Thanks to huangqinjin for the bug report.
OpenBSD does not allow to change the group of a file if the user
does not belong to this group. In contrast to Linux, OpenBSD also
fails if the new group is the same as the old one. Do not call
fchown(2) in this case, it would change nothing anyway.
This fixes an issue with Perl Alien::Build module.
https://github.com/PerlAlien/Alien-Build/issues/62
Sometimes the version number from "less -V" contains a dot,
sometimes not. xzless failed to detect the version number when
it does contain a dot. This fixes it.
Thanks to nick87720z for reporting this. Apparently it had been
reported here <https://bugs.gentoo.org/489362> in 2013.
Due to architectural limitations, address space available to a single
userspace process on MIPS32 is limited to 2 GiB, not 4, even on systems
that have more physical RAM -- e.g. 64-bit systems with 32-bit
userspace, or systems that use XPA (an extension similar to x86's PAE).
So, for MIPS32, we have to impose stronger memory limits. I've chosen
2000MiB to give the process some headroom.
When the uncompressed size is known to be exact, then after decompressing
the stream exactly comp_size bytes of input must have been consumed.
This is a minor improvement to error detection.
The caller must still not specify an uncompressed size bigger
than the actual uncompressed size.
As a downside, this now needs the exact compressed size.
Right now this is just a planned extra-compact format for use
in the EROFS file system in Linux. At this point it's possible
that the format will either change or be abandoned and removed
completely.
The special thing about the encoder is that it uses the
output-size-limited encoding added in the previous commit.
EROFS uses fixed-sized blocks (e.g. 4 KiB) to hold compressed
data so the compressors must be able to create valid streams
that fill the given block size.
With this it is possible to encode LZMA1 data without EOPM so that
the encoder will encode as much input as it can without exceeding
the specified output size limit. The resulting LZMA1 stream will
be a normal LZMA1 stream without EOPM. The actual uncompressed size
will be available to the caller via the uncomp_size pointer.
One missing thing is that the LZMA layer doesn't inform the LZ layer
when the encoding is finished and thus the LZ may read more input
when it won't be used. However, this doesn't matter if encoding is
done with a single call (which is the planned use case for now).
For proper multi-call encoding this should be improved.
This commit only adds the functionality for internal use.
Nothing uses it yet.
Previously this required using --force but that has other
effects too which might be undesirable. Changing the behavior
of --keep has a small risk of breaking existing scripts but
since this is a fairly special corner case I expect the
likelihood of breakage to be low enough.
I think the new behavior is more logical. The only reason for
the old behavior was to be consistent with gzip and bzip2.
Thanks to Vincent Lefevre and Sebastian Andrzej Siewior.
Omit the -q option from xz, gzip, and bzip2. With xz this shouldn't
matter. With gzip it's important because -q makes gzip replace SIGPIPE
with exit status 2. With bzip2 it's important because with -q bzip2
is completely silent if input is corrupt while other decompressors
still give an error message.
Avoiding exit status 2 from gzip is important because bzip2 uses
exit status 2 to indicate corrupt input. Before this commit xzgrep
didn't recognize corrupt .bz2 files because xzgrep was treating
exit status 2 as SIGPIPE for gzip compatibility.
zstd still needs -q because otherwise it is noisy in normal
operation.
The code to detect real SIGPIPE didn't check if the exit status
was due to a signal (>= 128) and so could ignore some other exit
status too.
This is a minor fix since this affects only the situation when
the files differ and the exit status is something else than 0.
In such case there could be SIGPIPE from a decompression tool
and that would result in exit status of 2 from xzdiff/xzcmp
while the correct behavior would be to return 1 or whatever
else diff or cmp may have returned.
This commit omits the -q option from xz/gzip/bzip2/lzop arguments.
I'm not sure why the -q was used in the first place, perhaps it
hides warnings in some situation that I cannot see at the moment.
Hopefully the removal won't introduce a new bug.
With gzip the -q option was harmful because it made gzip return 2
instead of >= 128 with SIGPIPE. Ignoring exit status 2 (warning
from gzip) isn't practical because bzip2 uses exit status 2 to
indicate corrupt input file. It's better if SIGPIPE results in
exit status >= 128.
With bzip2 the removal of -q seems to be good because with -q
it prints nothing if input is corrupt. The other tools aren't
silent in this situation even with -q. On the other hand, if
zstd support is added, it will need -q since otherwise it's
noisy in normal situations.
Thanks to Étienne Mollier and Sebastian Andrzej Siewior.
Before this commit all output queue buffers were allocated as
a single big allocation. Now each buffer is allocated separately
when needed. Used buffers are cached to avoid reallocation
overhead but the cache will keep only one buffer size at a time.
This should make things work OK in the decompression where most
of the time the buffer sizes will be the same but with some less
common files the buffer sizes may vary.
While this should work fine, it's still a bit preliminary
and may even get reverted if it turns out to be useless for
decompression.
When Intel CET is enabled, we need to include <cet.h> in assembly codes
to mark Intel CET support and add _CET_ENDBR to indirect jump targets.
Tested on Intel Tiger Lake under CET enabled Linux.
I don't want to use \c in macro arguments but groff_man(7)
suggests that \f has better portability. \f would be needed
for the .TP strings for portability reasons anyway.
Thanks to Bjarni Ingi Gislason.
A few are simply omitted, most are converted to "for example"
and surrounded with commas. It sounds like this is better
style, for example, man-pages(7) recommends avoiding such
abbreviations except in parenthesis.
Thanks to Bjarni Ingi Gislason.
Docs of ancient troff/nroff mention \(em (em-dash) but not \(en
and \- was used for both minus and en-dash. I don't know how
portable \(en is nowadays but it can be changed back if someone
complains. At least GNU groff and OpenBSD's mandoc support it.
Thanks to Bjarni Ingi Gislason for the patch.
Output is from: test-groff -b -e -mandoc -T utf8 -rF0 -t -w w -z
[ "test-groff" is a developmental version of "groff" ]
Input file is ./src/scripts/xzgrep.1
<src/scripts/xzgrep.1>:20 (macro RB): only 1 argument, but more are expected
<src/scripts/xzgrep.1>:23 (macro RB): only 1 argument, but more are expected
<src/scripts/xzgrep.1>:26 (macro RB): only 1 argument, but more are expected
<src/scripts/xzgrep.1>:29 (macro RB): only 1 argument, but more are expected
<src/scripts/xzgrep.1>:32 (macro RB): only 1 argument, but more are expected
"abc..." does not mean the same as "abc ...".
The output from nroff and troff is unchanged except for the space
between "file" and "...".
Signed-off-by: Bjarni Ingi Gislason <bjarniig@rhi.hi.is>
Summary:
mandoc -T lint xzgrep.1 :
mandoc: xzgrep.1:79:2: WARNING: skipping paragraph macro: PP empty
There is no change in the output of "nroff" and "troff".
Signed-off-by: Bjarni Ingi Gislason <bjarniig@rhi.hi.is>
Output is from: test-groff -b -e -mandoc -T utf8 -rF0 -t -w w -z
[ "test-groff" is a developmental version of "groff" ]
Input file is ./src/xz/xz.1
<src/xz/xz.1>:408 (macro BR): only 1 argument, but more are expected
<src/xz/xz.1>:1009 (macro BR): only 1 argument, but more are expected
<src/xz/xz.1>:1743 (macro BR): only 1 argument, but more are expected
<src/xz/xz.1>:1920 (macro BR): only 1 argument, but more are expected
<src/xz/xz.1>:2213 (macro BR): only 1 argument, but more are expected
Output from nroff and troff is unchanged, except for a font change of a
full stop (.).
Signed-off-by: Bjarni Ingi Gislason <bjarniig@rhi.hi.is>
DJGPP 2.05 added support for thousands separators but it's
broken at least under WinXP with Finnish locale that uses
a non-breaking space as the thousands separator. Workaround
by disabling thousands separators for DJGPP builds.
The comment didn't match the value of RC_SYMBOLS_MAX and the value
itself was slightly larger than actually needed. The only harm
about this was that memory usage was a few bytes larger.
strerror() needs <string.h> which happened to be included via
tuklib_common.h -> tuklib_config.h -> sysdefs.h if HAVE_CONFIG_H
was defined. This wasn't tested without config.h before so it
had worked fine.
The previous commit broke crc32_tablegen.c.
If the whole package is built without config.h (with defines
set on the compiler command line) this should still work fine
as long as these headers conform to C99 well enough.
string.h is used unconditionally elsewhere in the project and
configure has always stopped if limits.h is missing, so these
headers must have been always available even on the weirdest
systems.
The dependency on po4a is optional. It's never required to install
the translated man pages when xz is built from a release tarball.
If po4a is missing when building from xz.git, the translated man
pages won't be generated but otherwise the build will work normally.
The translations are only updated automatically by autogen.sh and
by "make mydist". This makes it easy to keep po4a as an optional
dependency and ensures that I won't forget to put updated
translations to a release tarball.
The translated man pages aren't installed if --disable-nls is used.
The installation of translated man pages abuses Automake internals
by calling "install-man" with redefined dist_man_MANS and man_MANS.
This makes the hairy script code slightly less hairy. If it breaks
some day, this code needs to be fixed; don't blame Automake developers.
Also, this adds more quotes to the existing shell script code in
the Makefile.am "-hook"s.
Perhaps it's too drastic but on the other hand it will let me
learn about possible problems if people report the errors.
This won't be backported to the v5.2 branch.
See the code comment for reasoning. It's far from perfect but
hopefully good enough for certain cases while hopefully doing
nothing bad in other situations.
At presets -5 ... -9, 4020 MiB vs. 4096 MiB makes no difference
on how xz scales down the number of threads.
The limit has to be a few MiB below 4096 MiB because otherwise
things like "xz --lzma2=dict=500MiB" won't scale down the dict
size enough and xz cannot allocate enough memory. With
"ulimit -v $((4096 * 1024))" on x86-64, the limit in xz had
to be no more than 4085 MiB. Some safety margin is good though.
This is a hack but it should be useful when running 32-bit xz on
a 64-bit kernel that gives full 4 GiB address space to xz.
Hopefully this is enough to solve this:
https://bugzilla.redhat.com/show_bug.cgi?id=1196786
FreeBSD has a patch that limits the result in tuklib_physmem()
to SIZE_MAX on 32-bit systems. While I think it's not the way
to do it, the results on --memlimit-compress have been good. This
commit should achieve practically identical results for compression
while leaving decompression and tuklib_physmem() and thus
lzma_physmem() unaffected.
xz --flush-timeout=2000, old version:
1. xz is started. The next flush will happen after two seconds.
2. No input for one second.
3. A burst of a few kilobytes of input.
4. No input for one second.
5. Two seconds have passed and flushing starts.
The first second counted towards the flush-timeout even though
there was no pending data. This can cause flushing to occur more
often than needed.
xz --flush-timeout=2000, after this commit:
1. xz is started.
2. No input for one second.
3. A burst of a few kilobytes of input. The next flush will
happen after two seconds counted from the time when the
first bytes of the burst were read.
4. No input for one second.
5. No input for another second.
6. Two seconds have passed and flushing starts.
The same code sequence repeats so it's nicer as a separate function.
Note that in one case there was no test for opt_mode != MODE_TEST,
but that was only because that condition would always be true, so
this commit doesn't change the behavior there.
When input blocked, xz --flush-timeout=1 would wake up every
millisecond and initiate flushing which would have nothing to
flush and thus would just waste CPU time. The fix disables the
timeout when no input has been seen since the previous flush.
Using the aligned methods requires more care to ensure that
the address really is aligned, so it's nicer if the aligned
methods are prefixed. The next commit will remove the unaligned_
prefix from the unaligned methods which in liblzma are used in
more places than the aligned ones.
Add a configure option --enable-unsafe-type-punning to get the
old non-conforming memory access methods. It can be useful with
old compilers or in some other less typical situations but
shouldn't normally be used.
Omit the packed struct trick for unaligned access. While it's
best in some cases, this is simpler. If the memcpy trick doesn't
work, one can request unsafe type punning from configure.
Because CRC32/CRC64 code needs fast aligned reads, if no very
safe way to do it is found, type punning is used as a fallback.
This sucks but since it currently works in practice, it seems to
be the least bad option. It's never needed with GCC >= 4.7 or
Clang >= 3.6 since these support __builtin_assume_aligned and
thus fast aligned access can be done with the memcpy trick.
Other things:
- Support GCC/Clang __builtin_bswapXX
- Cleaner bswap fallback macros
- Minor cleanups
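As an illustration of the memcpy trick for the aligned case (a
sketch only; the function name just mirrors the new aligned_ prefix,
and the real tuklib_integer.h has more variants and fallbacks):

    #include <stdint.h>
    #include <string.h>

    static inline uint32_t
    aligned_read32ne(const uint8_t *buf)
    {
        uint32_t num;
    #if defined(__GNUC__) || defined(__clang__)
        /* Promise the compiler that buf really is 4-byte aligned so
         * a single aligned load can be emitted
         * (needs GCC >= 4.7 or Clang >= 3.6). */
        buf = __builtin_assume_aligned(buf, 4);
    #endif
        memcpy(&num, buf, sizeof(num));
        return num;
    }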
This adds a configure option --enable-path-for-scripts=PREFIX
which defaults to empty except on Solaris it is /usr/xpg4/bin
to make POSIX grep and others available. The Solaris case had
been documented in INSTALL with a manual fix but it's better
to do this automatically since it is needed on most Solaris
systems anyway.
Thanks to Daniel Richard G.
LZMA_TIMED_OUT is *internally* used as a value for lzma_ret
enumeration. Previously it was #defined to 32 and cast to lzma_ret.
That way it wasn't visible in the public API, but this was hackish.
Now the public API has eight LZMA_RET_INTERNALx members and
LZMA_TIMED_OUT is #defined to LZMA_RET_INTERNAL1. This way
the code is cleaner overall although the public API has a few
extra mysterious enum members.
Or any off_t which isn't very big (like the signed 64-bit integer
that most systems have). A small off_t could overflow if the
file being decompressed had a long enough run of zero bytes,
which would result in corrupt output.
Now memcpy() or GNU C packed structs for unaligned access instead
of type punning. See the comment in this commit for details.
Avoiding type punning with unaligned access is needed to
silence gcc -fsanitize=undefined.
New functions: unaligned_readXXne and unaligned_writeXXne where
XX is 16, 32, or 64.
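Roughly, the two methods look like this (an illustrative sketch,
not the exact tuklib_integer.h code):

    #include <stdint.h>
    #include <string.h>

    /* memcpy method */
    static inline uint32_t
    unaligned_read32ne(const uint8_t *buf)
    {
        uint32_t num;
        memcpy(&num, buf, sizeof(num));
        return num;
    }

    /* GNU C packed struct method; may_alias keeps it valid under
     * strict aliasing rules. */
    struct unaligned_u32 {
        uint32_t v;
    } __attribute__((__packed__, __may_alias__));

    static inline uint32_t
    unaligned_read32ne_struct(const uint8_t *buf)
    {
        return ((const struct unaligned_u32 *)buf)->v;
    }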
I should have always known this but I didn't. Here is an example
as a reminder to myself:
    int mycopy(void *dest, void *src, size_t n)
    {
        memcpy(dest, src, n);
        return dest == NULL;
    }
In the example, a compiler may assume that dest != NULL because
passing NULL to memcpy() would be undefined behavior. Testing
with GCC 8.2.1, mycopy(NULL, NULL, 0) returns 1 with -O0 and -O1.
With -O2 the return value is 0 because the compiler infers that
dest cannot be NULL because it was already used with memcpy()
and thus the test for NULL gets optimized out.
In liblzma, if a null-pointer was passed to memcpy(), there were
no checks for NULL *after* the memcpy() call, so I cautiously
suspect that it shouldn't have caused bad behavior in practice,
but it's hard to be sure, and the problematic cases had to be
fixed anyway.
Thanks to Jeffrey Walton.
Now the widths of the check names are used to adjust the width
of the Check column. This way there no longer is a need to restrict
the widths of the check names to be at most ten terminal columns.
"xz -dcfv not_an_xz_file" crashed (all four options are
required to trigger it). It caused xz to call
lzma_get_progress(&strm, ...) when no coder was initialized
in strm. In this situation strm.internal is NULL which leads
to a crash in lzma_get_progress().
The bug was introduced when xz started using lzma_get_progress()
to get progress info for multi-threaded compression, so the
bug is present in versions 5.1.3alpha and higher.
Thanks to Filip Palian <Filip.Palian@pjwstk.edu.pl> for
the bug report.
FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is #defined when liblzma
is being built for fuzz testing.
Most fuzzed inputs would normally get rejected because of incorrect
CRC32 and the actual header decoding code wouldn't get fuzzed.
Disabling CRC32 checks avoids this problem. The fuzzer program
must still use LZMA_IGNORE_CHECK flag to disable verification of
integrity checks of uncompressed data.
It ended up printing an uninitialized char-array when trying to
print the check names (column 7) on the "totals" line.
This also changes the column 12 (minimum xz version) to
50000002 (xz 5.0.0) instead of 0 when there are no valid
input files.
Thanks to kidmin for the bug report.
The 0 got treated specially in a buggy way and as a result
the function did nothing. The API doc said that 0 was supposed
to return LZMA_PROG_ERROR but it didn't.
Now 0 is treated as if 1 had been specified. This is done because
0 is already used to indicate an error from lzma_memlimit_get()
and lzma_memusage().
In addition, lzma_memlimit_set() no longer checks that the new
limit is at least LZMA_MEMUSAGE_BASE. It's counter-productive
for the Index decoder and was actually needed only by the
auto decoder. Auto decoder has now been modified to check for
LZMA_MEMUSAGE_BASE.
It returned LZMA_PROG_ERROR, which was done to avoid zero as
the limit (because it's a special value elsewhere), but using
LZMA_PROG_ERROR is simply inconvenient and can cause bugs.
The fix/workaround is to treat 0 as if it were 1 byte. It's
effectively the same thing. The only weird consequence is
that then lzma_memlimit_get() will return 1 even when 0 was
specified as the limit.
This fixes a very rare corner case in xz --list where a specific
memory usage limit and a multi-stream file could print the
error message "Internal error (bug)" instead of saying that
the memory usage limit is too low.
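For example, with an already-initialized decoder stream the
following should now hold (the assert is mine, just stating the
behavior described above):

    lzma_memlimit_set(&strm, 0);             /* treated as 1 byte */
    assert(lzma_memlimit_get(&strm) == 1);   /* not 0 */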
Only one definition was visible in a translation unit.
It avoided a few casts and temp variables but it seems that
this hack doesn't work with link-time optimizations in compilers
as it's not C99/C11 compliant.
Fixes:
http://www.mail-archive.com/xz-devel@tukaani.org/msg00279.html
It's available in glibc (GNU/Linux, GNU/kFreeBSD). It's better
than sysconf(_SC_NPROCESSORS_ONLN) because sched_getaffinity()
gives the number of cores available to the process instead of
the total number of cores online.
As a side effect, this commit fixes a bug on GNU/kFreeBSD where
configure would detect the FreeBSD-specific cpuset_getaffinity()
but it wouldn't actually work because on GNU/kFreeBSD it requires
using -lfreebsd-glue when linking. Now the glibc-specific function
will be used instead.
Thanks to Sebastian Andrzej Siewior for the original patch
and testing.
This is the sane thing to do. The conflict with OpenSSL
on some OSes and especially that the OS-provided versions
can be significantly slower makes it clear that it was
a mistake to have the external SHA-256 support enabled by
default.
Those who want it can now pass --enable-external-sha256 to
configure. INSTALL was updated with notes about OSes where
this can be a bad idea.
The SHA-256 detection code in configure.ac had some bugs that
could lead to a build failure in some situations. These were
fixed, although it doesn't matter that much now that the
external SHA-256 is disabled by default.
MINIX >= 3.2.0 uses NetBSD's libc and thus has SHA256_Init
in libc instead of libutil. Support for the libutil version
was removed.
When optimizing, GCC can reorder code so that an uninitialized
value gets used in a comparison, which makes Valgrind unhappy.
It doesn't happen when compiled with -O0, which I tend to use
when running Valgrind.
Thanks to Rich Prohaska. I remember this being mentioned long
ago by someone else but nothing was done back then.
The patch is quite long but it's mostly about adding new #ifdefs
to omit code when encoders or decoders have been disabled.
This adds two new #defines to config.h: HAVE_ENCODERS and
HAVE_DECODERS.
People shouldn't rely on the presets when decoding raw streams,
but xz uses the presets as the starting point for raw decoder
options anyway.
lzma_encoder_presets.c was renamed to lzma_presets.c to
make it clear it's not used solely by the encoder code.
lzma_index_dup() calls index_dup_stream() which, in case of
an error, calls index_stream_end() to free memory allocated
by index_stream_init(). However, it illogically didn't
actually free the memory. To make it logical, the tree
handling code was modified a bit in addition to changing
index_stream_end().
Thanks to Evan Nemerson for the bug report.
This reverts commit 7a11c4a8e5.
It is a problem when libc has pipe2() but the kernel is too
old to have pipe2() and thus pipe2() fails. In xz it's pointless
to have a fallback for non-functioning pipe2(); it's better to
avoid pipe2() completely.
Thanks to Michael Fox for the bug report.
The sandboxing is used conditionally as described in main.c.
This isn't optimal but it was much easier to implement than
a full sandboxing solution and it still covers the most common
use cases where xz is writing to standard output. This should
have practically no effect on performance even with small files
as fork() isn't needed.
C and locale libraries can open files as needed. This has been
fine in the past, but it's a problem with things like Capsicum.
io_sandbox_enter() tries to ensure that various locale-related
files have been loaded before cap_enter() is called, but it's
possible that there are other similar problems which haven't
been seen yet.
Currently Capsicum is available on FreeBSD 10 and later
and there is a port to Linux too.
Thanks to Loganaden Velvindron for help.
The earlier version compiled but didn't actually work
since sysconf(_SC_PHYS_PAGES) always fails (or so I was told).
Thanks to Ole André Vadla Ravnås for the patch and testing.
In FreeBSD, cpuset_getaffinity() is the preferred way to get
the number of available cores.
Thanks to Rui Paulo for the patch. I edited it slightly, but
hopefully I didn't break anything.
It's a problem at least on OpenBSD which doesn't support
O_NONBLOCK on e.g. /dev/null. I'm not surprised if it's
a problem on other OSes too since this behavior is allowed
in POSIX.1-2008.
The code relying on this behavior was committed in June 2013
and included in 5.1.3alpha released on 2013-10-26. Clearly
the development releases only get limited testing.
I know that soname != app version, but I skip AGE=1
in -version-info to make the soname match the liblzma
version anyway. It doesn't hurt anything as long as
it doesn't conflict with library versioning rules.
This way an invalid filter chain is detected at the Stream
encoder initialization instead of delaying it to the first
call to lzma_code() which triggers the initialization of
the actual filter encoder(s).
This avoids the possibility of "File name too long" when
creating a temp file when the input file name is very long.
This also means that other users on the system can no longer
see the input file names in /tmp (or whatever $TMPDIR is)
since the temporary directory will have a generic name. This
usually doesn't matter since on many systems one can see
the arguments given to all processes anyway.
The number of X chars passed to mktemp was increased from 6 to 10.
Note that with some shells temp files or dirs won't be used at all.
The behavior of grep -ql varies:
- GNU grep behaves like grep -q.
- OpenBSD grep behaves like grep -l.
POSIX doesn't make it 100 % clear what behavior is expected.
Anyway, using both -q and -l at the same time makes no sense
so both options simply should never be used at the same time.
Thanks to Christian Weisgerber.
POSIX supports $< only in inference rules (suffix rules).
Using it elsewhere is a GNU make extension and doesn't
work e.g. with OpenBSD make.
Thanks to Christian Weisgerber for the patch.
Note that this slightly changes how lzma_block_header_decode()
has been documented. Earlier it said that the .version is set
to the lowest required value, but now it says that the .version
field is kept unchanged if possible. In practice this doesn't
affect any old code, because before this commit the only
possible .version was 0.
The Maj macro is used where multiple things are added
together, so making Maj a sum of two expressions allows
some extra freedom for the compiler to schedule the
instructions.
I learned this trick from
<http://www.hackersdelight.org/corres.txt>.
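As an illustration (macro names and the exact form are mine, not
necessarily what sha256.c uses; the '+' variant is valid because
the two terms can never have a common set bit, so addition computes
the same result as XOR or OR):

    #define MAJ_XOR(x, y, z) (((x) & (y)) ^ ((z) & ((x) ^ (y))))
    #define MAJ_ADD(x, y, z) (((x) & (y)) + ((z) & ((x) ^ (y))))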
This looks weird because the rotations become sequential,
but it helps quite a bit on both 32-bit and 64-bit x86:
- It requires fewer instructions on two-operand
instruction sets like x86.
- It requires one register less which matters especially
on 32-bit x86.
I hope this doesn't hurt other archs.
I didn't invent this idea myself, but I don't remember where
I saw it first.
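Applied to, say, the SHA-256 S0 transform, the idea looks like this
(a sketch; the macro names and the rotr32() helper are mine):

    static inline uint32_t
    rotr32(uint32_t x, unsigned n)   /* valid for 0 < n < 32 */
    {
        return (x >> n) | (x << (32 - n));
    }

    /* Three independent rotations of x ... */
    #define S0_A(x) (rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22))

    /* ... versus sequential rotations of intermediate results. */
    #define S0_B(x) rotr32(rotr32(rotr32(x, 9) ^ (x), 11) ^ (x), 2)

Both forms compute rotr(x,2) ^ rotr(x,13) ^ rotr(x,22) because
rotation distributes over XOR and nested rotation amounts add up.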
This way a branch isn't needed for each operation
to choose between blk0 and blk2, and still the code
doesn't grow as much as it would with full unrolling.
This commit just adds the function. Its uses will be in
separate commits.
This hasn't been tested much yet and it's perhaps a bit early
to commit it but if there are bugs they should get found quite
quickly.
Thanks to Jun I Jin from Intel for help and for pointing out
that string comparison needs to be optimized in liblzma.
Mimic the original grep behavior and return a success exit status
when at least one xz-compressed file matches the given pattern.
Original bugreport:
https://bugzilla.redhat.com/show_bug.cgi?id=1108085
Thanks to Pavel Raiskup for the patch.
This avoids a memzero() call for newly-allocated memory,
which can be expensive when encoding small streams with
an over-sized dictionary.
To avoid using lzma_alloc_zero() for memory that doesn't
need to be zeroed, lzma_mf.son is now allocated separately,
which requires handling it separately in normalize() too.
Thanks to Vincenzo Innocente for reporting the problem.
In this case "make install" could fail if the man page directory
didn't already exist at the destination. If it did exist, a
dangling symlink was created there. Now the link is omitted
instead. This isn't the best fix but it's better than the old
behavior.
I don't know the details but I have an impression that there's
no problem in practice if using GCC since people have built xz
with GCC (without patching xz), but renaming the variable cannot
hurt either.
Thanks to Mark Ashley.
Previously, --block-list and --block-size only worked together
in threaded mode. Boundaries are specified by --block-list, but
--block-size specifies the maximum size for a Block. Now this
works in single-threaded mode too.
Thanks to James M Leddy for the original patch.
Now if --block-list is used in threaded mode, the encoder
won't need to flush at each Block boundary specified via
--block-list. This improves performance a lot, making
threading helpful with --block-list.
The flush timer was reset after LZMA_FULL_FLUSH but since
LZMA_FULL_BARRIER doesn't flush, resetting the timer is
no longer done.
Now --block-list=SIZES works in the threaded mode too,
although the performance is still bad due to the use of
LZMA_FULL_FLUSH instead of the new LZMA_FULL_BARRIER.
Now liblzma only uses "mythread" functions and types
which are defined in mythread.h matching the desired
threading method.
Before Windows Vista, there is no direct equivalent to
pthread condition variables. Since this package doesn't
use pthread_cond_broadcast(), pre-Vista threading can
still be kept quite simple. The pre-Vista code doesn't
use anything that wasn't already available in Windows 95,
so the binaries should run even on Windows 95 if someone
happens to care.
Previously it was done in configure, but doing that goes
against the Autoconf manual. Autoconf requires that it is
possible to override e.g. prefix after running configure
and that doesn't work correctly if liblzma.pc is created
by configure.
A potential downside of this change is that now e.g.
libdir in liblzma.pc is a standalone string instead of
being defined via ${prefix}, so if one overrides prefix
when running pkg-config the libdir won't get the new value.
I don't know if this matters in practice.
Thanks to Vincent Torri.
When --flush-timeout=TIMEOUT is used, xz will use
LZMA_SYNC_FLUSH if read() would block and at least
TIMEOUT milliseconds has elapsed since the previous flush.
This can be useful in realtime-like use cases where the
data is simultaneously decompressed by another process
(possibly on a different computer). If new uncompressed
input data is produced slowly, without this option xz could
buffer the data for a long time until it would become
decompressible from the output.
If TIMEOUT is 0, the feature is disabled. This is the default.
This commit affects the compression side. Using xz for
the decompression side for the above purpose doesn't work
yet so well because there is quite a bit of input and
output buffering when decompressing.
Neither --long-help nor the man page was updated yet.
The details of this feature may change.
Testing for end of file was no longer correct after full flushing
became possible with --block-size=SIZE and --block-list=SIZES.
There was no bug in practice though because xz just made a few
unneeded zero-byte reads.
This switches units from microseconds to milliseconds.
New clock_gettime(CLOCK_MONOTONIC) will be used if available.
There is still a fallback to gettimeofday().
The man pages of lzmainfo, xzmore, and xzdec had similar
constructs as the man page of xz had before the commit
eb6ca9854b. Eric S. Raymond
didn't mention these man pages in his bug report, but
it's nice to be consistent.
Now both reading and writing should be without
race conditions with signals.
There might still be signal handling issues left.
Signals are blocked during many operations to avoid
EINTR but it may cause problems e.g. if writing to
stderr blocks when trying to display an error message.
It is possible that a signal to set user_abort arrives right
before a blocking system call is made. In this case the call
may block until another signal arrives, while the wanted
behavior is to make xz clean up and exit as soon as possible.
After this commit, the race condition is avoided with the
input side which already uses non-blocking I/O. The output
side still uses blocking I/O and thus has the race condition.
POSIX says that fcntl(fd, F_SETFL, flags) returns -1 on
error and "other than -1" on success. This is how it is
documented e.g. on OpenBSD too. On Linux, success with
F_SETFL is always 0 (at least according to fcntl(2)
from man-pages 3.51).
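So the portable check is to compare only against -1 (a trivial
sketch; handle_error() is just a placeholder):

    if (fcntl(fd, F_SETFL, flags) == -1)
        handle_error();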
Due to a wrong variable name, when writing a sparse file
to standard output, *all* file status flags were cleared
(to the extent the operating system allowed it) instead of
only clearing the O_APPEND flag. In practice this worked
fine in the common situations on GNU/Linux, but I didn't
check how it behaved elsewhere.
The original flags were still restored correctly. I still
changed the code to use a separate boolean variable to
indicate when the flags should be restored instead of
relying on a special value in stdout_flags.
Input file can be a FIFO or something else that doesn't
support posix_fadvise() so don't check the return value
even with an assertion. Nothing bad happens if the call
to posix_fadvise() fails.
It is a no-op for now, but if an old xz version is used
together with a newer liblzma that supports something new,
then this check becomes important and will stop the old xz
from trying to parse files that it won't understand.
This affects only "xz -lvv". Normal decompression with xz
already detected if Block Header and Index had mismatched
Uncompressed Size fields. So this just makes "xz -lvv"
show such files as corrupt instead of showing the
Uncompressed Size from Index.
Now the interaction of presets and custom filter chains
is described correctly. Earlier it contradicted itself.
Thanks to DevHC who reported these issues on IRC to me
on 2012-12-14.
There was somewhat illogical behavior when --extreme was
specified and mixed with custom filter chains.
Before this commit, "xz -9 --lzma2 -e" was equivalent
to "xz --lzma2". After it is equivalent to "xz -6e"
(all earlier preset options get forgotten when a custom
filter chain is specified and the default preset is 6
to which -e is applied). I find this less illogical.
This also affects the meaning of "xz -9e --lzma2 -7".
Earlier it was equivalent to "xz -7e" (the -e specified
before a custom filter chain wasn't forgotten). Now it
is "xz -7". Note that "xz -7e" still is the same as "xz -e7".
Hopefully very few cared about this in the first place,
so pretty much no one should even notice this change.
Thanks to Conley Moorhous.
The options are now ordered in the same order as in xz's help
message.
Descriptions were added to the options that are ignored.
I left them in parenthesis even if it looks a bit weird
because I find it easier to spot the ignored vs. non-ignored
options from the list that way.
To avoid false positives when detecting .lzma files,
rare values in dictionary size and uncompressed size fields
were rejected. They will still be rejected if .lzma files
are decoded with lzma_auto_decoder(), but when using
lzma_alone_decoder() directly, such files will now be accepted.
Hopefully this is an OK compromise.
This doesn't affect xz because xz still has its own file
format detection code. This does affect lzmadec though.
So after this commit lzmadec will accept files that xz or
xz-emulating-lzma doesn't.
NOTE: lzma_alone_decoder() still won't decode all .lzma files
because liblzma's LZMA decoder doesn't support lc + lp > 4.
Reported here:
http://sourceforge.net/projects/lzmautils/forums/forum/708858/topic/7068827
This race condition could cause a deadlock if lzma_end() was
called before finishing the encoding. This can happen with
xz with debugging enabled (non-debugging version doesn't
call lzma_end() before exiting).
Use "read" instead of "awk" in xzless to get the version
number of "less". The need for awk was introduced in
the commit db5c1817fa.
Thanks to Ariel P for the patch.
This adds lzma_get_progress() to liblzma and takes advantage
of it in xz.
lzma_get_progress() collects progress information from
the thread-specific structures so that fairly accurate
progress information is available to applications. Adding
a new function seemed to be a better way than making the
information directly available in lzma_stream (like total_in
and total_out are) because collecting the information requires
locking mutexes. It's waste of time to do it more often than
the up to date information is actually needed by an application.
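A sketch of how an application might poll it between lzma_code()
calls (strm is an active coder; to my understanding the function
takes two uint64_t pointers, but check lzma/base.h; PRIu64 needs
<inttypes.h>):

    uint64_t progress_in;
    uint64_t progress_out;
    lzma_get_progress(&strm, &progress_in, &progress_out);
    printf("%" PRIu64 " bytes in, %" PRIu64 " bytes out\n",
           progress_in, progress_out);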
In v4.999.9beta~30 (xzless: Support compressed standard input,
2009-08-09), xzless learned to parse ‘less -V’ output to figure out
whether less is new enough to handle $LESSOPEN settings starting
with “|-”. That worked well for a while, but the version string from
‘less’ versions 448 (June, 2012) is misparsed, producing a warning:
$ xzless /tmp/test.xz; echo $?
/usr/bin/xzless: line 49: test: 456 (GNU regular expressions): \
integer expression expected
0
More precisely, modern ‘less’ lists the regexp implementation along
with its version number, and xzless passes the entire version number
with attached parenthetical phrase as a number to "test $a -gt $b",
producing the above confusing message.
$ less-444 -V | head -1
less 444
$ less -V | head -1
less 456 (no regular expressions)
So relax the pattern matched --- instead of expecting "less <number>",
look for a line of the form "less <number>[ (extra parenthetical)]".
While at it, improve the behavior when no matching line is found ---
instead of producing a cryptic message, we can fall back on a LESSPIPE
setting that is supported by all versions of ‘less’.
The implementation uses "awk" for simplicity. Hopefully that’s
portable enough.
Reported-by: Jörg-Volker Peetz <jvpeetz@web.de>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
There is a tiny risk of causing breakage: If an application
assigns lzma_stream.allocator to a non-const pointer, such
code won't compile anymore. I don't know why anyone would do
such a thing though, so in practice this shouldn't cause trouble.
Thanks to Jan Kratochvil for the patch.
lzma_code() could incorrectly return LZMA_BUF_ERROR if
all of the following were true:
- The caller knows how many bytes of output to expect
and only provides that much output space.
- When the last output bytes are decoded, the
caller-provided input buffer ends right before
the LZMA2 end of payload marker. So LZMA2 won't
provide more output anymore, but it won't know it
yet and thus won't return LZMA_STREAM_END yet.
- A BCJ filter is in use and it hasn't left any
unfiltered bytes in the temp buffer. This can happen
with any BCJ filter, but in practice it's more likely
with filters other than the x86 BCJ.
The bug could also be triggered if the uncompressed size
was zero bytes and no output space
is provided. In this case the decompression can fail even
if the whole input file is given to lzma_code().
A similar bug was fixed in XZ Embedded on 2011-09-19.
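A hedged sketch of the first calling pattern above (decode_exact()
and its parameters are made-up names; the actual trigger additionally
needs a BCJ filter and the input ending right before the LZMA2 end
marker):

    #include <stdint.h>
    #include <lzma.h>

    /* Sketch: decode a complete .xz buffer into an output buffer whose
     * size equals the known uncompressed size. */
    static lzma_ret
    decode_exact(const uint8_t *in, size_t in_size,
                 uint8_t *out, size_t out_size)
    {
        lzma_stream strm = LZMA_STREAM_INIT;
        lzma_ret ret = lzma_stream_decoder(&strm, UINT64_MAX, 0);
        if (ret != LZMA_OK)
            return ret;

        strm.next_in = in;
        strm.avail_in = in_size;
        strm.next_out = out;
        strm.avail_out = out_size;  // Exactly the expected amount of output

        // All the input is given at once, so LZMA_FINISH can be used
        // right away. Keep calling until the decoder reports the end
        // of the stream or an error.
        do {
            ret = lzma_code(&strm, LZMA_FINISH);
        } while (ret == LZMA_OK);

        lzma_end(&strm);
        return ret;  // LZMA_STREAM_END on success
    }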
When grepping binary files, grep may exit before it has
read all the input. In this case, gzip -q returns 2 (eating
SIGPIPE), but xz and bzip2 show SIGPIPE as the exit status
(e.g. 141). This causes a wrong exit status when grepping
xz- or bzip2-compressed binary files.
The fix checks for the special exit status that indicates SIGPIPE.
It uses kill -l which should be supported everywhere since it
is in both SUSv2 (1997) and POSIX.1-2008.
Thanks to James Buren for the bug report.
xzdiff was clobbering the exit status from diff in a case
statement used to analyze the exit statuses from "xz" when
its operands were two compressed files. Save and restore
diff's exit status to fix this.
The bug is inherited from zdiff in GNU gzip and was fixed
there on 2009-10-09.
Thanks to Jonathan Nieder for the patch and
to Peter Pallinger for reporting the bug.
Symbol versioning is enabled by default on GNU/Linux,
other GNU-based systems, and FreeBSD.
I'm not sure how stable this is, so it may need
backward-incompatible changes before the next release.
The idea is that alpha and beta symbols are considered
unstable and require recompiling the applications that
use those symbols. Once a symbol is stable, it may get
extended with new features in ways that don't break
compatibility with older ABI & API.
The mydist target runs validate_map.sh which should
catch some probable problems in liblzma.map. Otherwise
I would forget to update the map file for new releases.
If the operating system libc or other base libraries
provide SHA-256, use that instead of our own copy.
Note that this doesn't use OpenSSL or libgcrypt or
such libraries to avoid creating dependencies to
other packages.
This supports at least FreeBSD, NetBSD, OpenBSD, Solaris,
MINIX, and Darwin. They all provide similar but not
identical SHA-256 APIs; each one is a little different.
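A hedged sketch of the idea; HAVE_COMMONCRYPTO, sha256_fallback.h and
the my_sha256_* names are made up, and only the Darwin (CommonCrypto)
branch is shown:

    /* Sketch only: HAVE_COMMONCRYPTO stands in for a configure macro. */
    #if defined(HAVE_COMMONCRYPTO)
    #   include <CommonCrypto/CommonDigest.h>   /* Darwin */
    typedef CC_SHA256_CTX my_sha256_ctx;
    #   define my_sha256_init(ctx)            CC_SHA256_Init(ctx)
    #   define my_sha256_update(ctx, buf, n)  CC_SHA256_Update(ctx, buf, n)
    #   define my_sha256_final(digest, ctx)   CC_SHA256_Final(digest, ctx)
    #else
    /* Fall back to a bundled implementation with the same shape. */
    #   include "sha256_fallback.h"            /* hypothetical */
    #endif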
Thanks to Wim Lewis for the original patch, improvements,
and testing.
Now the following works as you would expect:
echo foo | xz > foo.xz
echo bar | xz >> foo.xz
( xz -dc --single-stream ; xz -dc --single-stream ) < foo.xz
Note that it doesn't work if the input is not seekable
or if there is Stream Padding between the concatenated
.xz Streams.
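The closest liblzma-level equivalent (my illustration, not from this
commit) is simply initializing the .xz decoder without
LZMA_CONCATENATED:

    #include <stdint.h>
    #include <lzma.h>

    /* Sketch: without LZMA_CONCATENATED the decoder returns
     * LZMA_STREAM_END after the first Stream; any following Stream
     * stays in the unread input for another decoder instance. */
    static lzma_ret
    init_single_stream_decoder(lzma_stream *strm)
    {
        // For normal xz behavior one would pass LZMA_CONCATENATED here.
        return lzma_stream_decoder(strm, UINT64_MAX, 0);
    }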
Use gettimeofday() if clock_gettime() isn't available
(e.g. Darwin).
The test for availability of pthread_condattr_setclock()
and CLOCK_MONOTONIC was incorrect. Instead of fixing the
#ifdefs, use an Autoconf test. That way if there exists a
system that supports them but doesn't specify the matching
POSIX #defines, the features will still get detected.
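A hedged sketch of what the detected feature ends up being used for;
HAVE_PTHREAD_CONDATTR_SETCLOCK and my_cond_init() are made-up names
standing in for whatever the Autoconf test actually defines:

    #include <pthread.h>
    #include <time.h>

    /* Sketch: when pthread_condattr_setclock() is available, make
     * timed waits use CLOCK_MONOTONIC; otherwise keep the default
     * clock (and compute the timeouts from gettimeofday() instead). */
    static int
    my_cond_init(pthread_cond_t *cond)
    {
    #ifdef HAVE_PTHREAD_CONDATTR_SETCLOCK
        pthread_condattr_t attr;
        if (pthread_condattr_init(&attr) != 0)
            return -1;

        pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
        const int ret = pthread_cond_init(cond, &attr);
        pthread_condattr_destroy(&attr);
        return ret == 0 ? 0 : -1;
    #else
        return pthread_cond_init(cond, NULL) == 0 ? 0 : -1;
    #endif
    }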
Don't try to use pthread_sigmask() on OpenVMS. It doesn't
have that function.
Guard mythread.h against being #included multiple times.
Reported-by: Diego Elio Pettenò <flameeyes@gentoo.org>
Signed-off-by: Martin Väth <vaeth@mathematik.uni-wuerzburg.de>
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Spot candidates by running these commands:
git ls-files |xargs perl -0777 -n \
-e 'while (/\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b/gims)' \
-e '{$n=($` =~ tr/\n/\n/ + 1); ($v=$&)=~s/\n/\\n/g; print "$ARGV:$n:$v\n"}'
Thanks to Jim Meyering for the original patch.
This is the simplest method to do threading, which splits
the uncompressed data into blocks and compresses them
independently from each other. There's room for improvement
especially to reduce the memory usage, but nevertheless,
this is a good start.
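For context, a minimal sketch of setting up the threaded encoder
through the liblzma API (my own illustration; init_threaded_encoder()
is a made-up name and the option values are just plausible defaults):

    #include <stdint.h>
    #include <lzma.h>

    /* Sketch: each worker compresses its own Block independently,
     * which is why a block size matters (0 lets liblzma choose). */
    static lzma_ret
    init_threaded_encoder(lzma_stream *strm, uint32_t threads)
    {
        lzma_mt mt = {
            .flags = 0,
            .threads = threads,
            .block_size = 0,            // 0 = let liblzma decide
            .timeout = 0,
            .preset = LZMA_PRESET_DEFAULT,
            .filters = NULL,            // Use the preset, not a custom chain
            .check = LZMA_CHECK_CRC64,
        };
        return lzma_stream_encoder_mt(strm, &mt);
    }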
An empty Block was created if the input buffer was empty.
An empty Block wastes a few bytes of space, but more importantly
it triggers a bug in XZ Utils 5.0.1 and older when trying
to decompress such a file. 5.0.1 and older consider such
files to be corrupt. When releasing 5.0.2 I thought that
no encoder creates empty Blocks, but I was wrong.
The biggest problem was that the integrity check type
wasn't validated, and e.g. lzma_easy_buffer_encode()
would create a corrupt .xz Stream if given an unsupported
Check ID. Luckily applications don't usually try to use
an unsupported Check ID, so this bug is unlikely to cause
many real-world problems.
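A hedged sketch of the caller-side validation that this fix makes
less important (encode_buffer() is a made-up wrapper; the liblzma
calls themselves are real):

    #include <stdint.h>
    #include <lzma.h>

    /* Sketch: before this fix, a careful caller had to verify the
     * Check ID itself; otherwise lzma_easy_buffer_encode() could emit
     * a corrupt .xz Stream when the check was unsupported. */
    static lzma_ret
    encode_buffer(lzma_check check, const uint8_t *in, size_t in_size,
                  uint8_t *out, size_t *out_pos, size_t out_size)
    {
        if (!lzma_check_is_supported(check))
            return LZMA_UNSUPPORTED_CHECK;

        return lzma_easy_buffer_encode(LZMA_PRESET_DEFAULT, check, NULL,
                                       in, in_size, out, out_pos, out_size);
    }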
This adds:
- mythread_sync() macro to create synchronized blocks
- mythread_cond structure and related functions
and macros for condition variables with timed
waiting using a relative timeout
- mythread_create() to create a thread with all
signals blocked
Some of these wouldn't need to be inline functions,
but I'll keep them this way for now for simplicity.
For timed waiting on a condition variable, librt is
now required on some systems to use clock_gettime().
configure.ac was updated to handle this.
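A hedged sketch of how such a synchronized-block macro can be built
on plain pthreads (my_sync() is a simplified stand-in, not the actual
mythread.h code):

    #include <pthread.h>

    /* Sketch: the for-loop trick locks the mutex before the block and
     * unlocks it after the block has run exactly once. Note that in
     * this simplified form a break or return inside the block would
     * skip the unlock. */
    #define my_sync(mutex) \
        for (int my_sync_done_ = (pthread_mutex_lock(&(mutex)), 0); \
                !my_sync_done_; \
                my_sync_done_ = (pthread_mutex_unlock(&(mutex)), 1))

    /* Usage (names are illustrative):
     *     my_sync(coder_mutex) {
     *         progress_in += in_used;
     *     }
     */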
This is incompatible with the 8.3 support patch made by
Juan Manuel Guerrero. I think this one is nicer, but
I need to get feedback from DOS users before saying
that this is the final version of 8.3 filename support.
Try to avoid overwriting the source file if --force is
used and the generated destination filename refers to
the source file. This can happen with 8.3 filenames where
extra characters are ignored.
If the generated output file refers to a special file
like "con" or "prn", refuse to write to it even if --force
is used.
It leaks old filter options structures (a hundred bytes or so)
every time the lzma_stream is reinitialized. With the xz tool,
this happens when compressing multiple files.
The decoder considered empty LZMA2 streams to be corrupt.
This shouldn't matter much with .xz files, because no encoder
creates empty LZMA2 streams in .xz. This bug is more likely
to cause problems in applications that use raw LZMA2 streams.
It didn't work at all. It tried to use the -q option
for grep, but it appended it after "--". This commit works
around the problem by redirecting the output to /dev/null. The downside
is that this can be slower with big files compared
to proper use of "grep -q".
Thanks to Gregory Margo.
xz didn't compress setuid/setgid/sticky files and files
with multiple hard links even with --force. This bug was
introduced in 23ac2c44c3.
Thanks to Charles Wilson.
This has no semantic changes. I find the new names slightly
more logical and they match the names that are already used
in XZ Embedded.
The name fastpos wasn't changed (not worth the hassle).
If any of the reserved members in lzma_stream are non-zero
or non-NULL, LZMA_OPTIONS_ERROR is returned. It is possible
that a new feature in the future is indicated by just setting
a reserved member to some other value, so old liblzma
versions need to catch it as an unsupported feature.
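For application code nothing changes as long as the structure is
initialized the documented way; a one-line reminder (my note, not
from the commit):

    #include <lzma.h>

    /* LZMA_STREAM_INIT zeroes everything, including the reserved
     * members, so a correctly initialized lzma_stream passes the
     * new check. */
    lzma_stream strm = LZMA_STREAM_INIT;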
The non-standard ones from msvcrt.dll appear to work
most of the time with XZ Utils, but there are some
corner cases where things may go very wrong. So it's
good to use the better replacements provided by
MinGW(-w64) runtime.
Calling raise() to kill xz when the user has pressed C-c
is a bit verbose on OS/2 and DOS/DJGPP. Instead of
calling raise(), set only the exit status to 1.
Those are the same thing, and the former makes it a bit
easier to build the code with other build systems, because
one doesn't need to update the version number in a custom
config.h.
This change affects only lzmainfo. Other tools were already
using LZMA_VERSION_STRING.
Most distros want xz linked against shared liblzma, so
it doesn't help much to require --enable-dynamic for that.
Those who want to avoid PIC on x86-32 to get better
performance can still do it, e.g. by using --disable-shared
to compile xz and then doing another pass to compile shared liblzma.
Part of these static/dynamic tricks were needed for Windows
in the past. Nowadays we rely on GCC and binutils to do the
right thing with auto-import. If the Autotooled build system
needs to support some other toolchain on Windows in the future,
this may need some rethinking.
Lots of content was updated on the xz man page.
Technical improvements:
- Start a new sentence on a new line.
- Use fairly short lines.
- Use constant-width font for examples (where supported).
- Some minor cleanups.
Thanks to Jonathan Nieder for some language fixes.
The code assumed that printing numbers with thousand separators
and decimal points would always produce only US-ASCII characters.
This was used for buffer sizes (with snprintf(), no overflows)
and aligning columns of the progress indicator and --list. That
assumption was wrong (e.g. LC_ALL=fi_FI.UTF-8 with glibc), so
multibyte character support was added in this commit. The old
way is used if the operating system doesn't have enough multibyte
support (e.g. lacks wcwidth()).
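A hedged sketch of the technique (display_width() is a made-up
helper, not the xz code): measure the on-screen width with mbrtowc()
and wcwidth(), and fall back to plain byte counting when the
conversion fails or wcwidth() isn't available.

    #define _XOPEN_SOURCE 700   /* for wcwidth() on glibc */

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <wchar.h>

    /* Returns the number of terminal columns the multibyte string
     * needs, or -1 if it cannot be converted in the current locale
     * (the caller would then fall back to counting bytes). */
    static int
    display_width(const char *str)
    {
        mbstate_t state;
        memset(&state, 0, sizeof(state));

        size_t len = strlen(str);
        int width = 0;

        while (len > 0) {
            wchar_t wc;
            const size_t ret = mbrtowc(&wc, str, len, &state);
            if (ret == (size_t)-1 || ret == (size_t)-2)
                return -1;  // Invalid or incomplete multibyte sequence

            const int w = wcwidth(wc);
            if (w < 0)
                return -1;  // Non-printable character

            width += w;
            str += ret;
            len -= ret;
        }

        return width;
    }

    int
    main(void)
    {
        setlocale(LC_ALL, "");
        // For pure ASCII the width equals the byte count; in locales
        // whose thousand separator is a multibyte character the two
        // differ, which is what broke the column alignment.
        printf("%d\n", display_width("1234.5 MiB"));
        return 0;
    }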
The sizes of buffers were increased to accommodate multibyte
characters. I don't know how big they should be exactly, but
they aren't used for anything critical, so it's not too bad.
If they still aren't big enough, I will hopefully get a bug report.
snprintf() takes care of avoiding buffer overflows.
Some static buffers were replaced with buffers allocated on
the stack. double_to_str() was removed. uint64_to_str() and
uint64_to_nicestr() now share the static buffer and test
for thousand separator support.
Integrity check names "None" and "Unknown-N" (2 <= N <= 15)
were marked to be translated. I had forgotten these, plus they
wouldn't have worked correctly anyway before this commit,
because printing tables with multibyte strings didn't work.
Thanks to Marek Černocký for reporting the bug about
misaligned table columns in --list output.
This should reduce the cases where --extreme makes
compression worse. On the other hand, some other
files may now benefit slightly less from --extreme.
It was 8 + nice_len / 4, now it is 4 + nice_len / 4.
This allows faster settings at lower nice_len values,
even though it seems that I won't use automatic depth
calculation with HC3 and HC4 in the presets.
For several people, the limiter causes bigger problems than
it solves, so it is better to have it disabled by default.
Those who want to have a limiter by default need to enable
it via the environment variable XZ_DEFAULTS.
Support for environment variable XZ_DEFAULTS was added. It is
parsed before XZ_OPT and is technically identical to it. The
intended uses differ quite a bit though; see the man page.
The memory usage limit can now be set separately for
compression and decompression using --memlimit-compress and
--memlimit-decompress. To set both at once, -M or --memlimit
can be used. --memory was retained as a legacy alias for
--memlimit for backwards compatibility.
The semantics of --info-memory were changed in a
backwards-incompatible way. Compatibility wasn't meaningful due to
changes in the memory usage limiter functionality.
The memory usage limiter info is no longer shown at the
bottom of xz --long-help.
The memory usage limiter support was removed completely from xzdec.
xz's man page was updated to match the above changes. Various
unrelated fixes were also made to the man page.