The convention is that
lzma_filter filters[LZMA_FILTERS_MAX + 1];
contains the filters of a single filter chain.
It was so here as well before the commit
d6af7f3470, which changed "filters" into
a ten-element array of filter chains.
It's clearer to call this array-of-arrays "chains".
This also renames "filter_idx" to "chain_idx" which is used
as an index as in chains[chain_idx].
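Roughly, as a sketch (the actual declarations and the array length
constant in xz may differ; the ten is just the count mentioned above):
```
#include <lzma.h>

// Before: a single filter chain, terminated by LZMA_VLI_UNKNOWN.
static lzma_filter filters[LZMA_FILTERS_MAX + 1];

// After: an array of ten filter chains, indexed as chains[chain_idx].
static lzma_filter chains[10][LZMA_FILTERS_MAX + 1];
```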
opt_mode == MODE_COMPRESS isn't possible when HAVE_ENCODERS isn't
defined. Thus, when *encoding*, the message about *decoder* memory
usage can be shown only when both the encoder and the decoder have
been built.
Since the message is shown only at V_DEBUG, skip the memusage
calculation if the verbosity level isn't high enough.
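As a rough sketch of the idea (the helper name and parameters are
made up; only lzma_raw_decoder_memusage() is a real liblzma call):
```
#include <lzma.h>
#include <stdbool.h>
#include <stdint.h>

// Calculate the decoder memory usage of a raw filter chain only when
// the message can actually be shown, i.e. when both coders were built
// and the verbosity level is high enough (V_DEBUG in xz).
static uint64_t
maybe_decoder_memusage(const lzma_filter *chain, bool debug_verbosity)
{
#if defined(HAVE_ENCODERS) && defined(HAVE_DECODERS)
    if (debug_verbosity)
        return lzma_raw_decoder_memusage(chain);
#endif
    (void)chain;
    (void)debug_verbosity;
    return UINT64_MAX; // Not calculated
}
```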
Fixes: 5f0c5a0438
This was added to xz in 02e3505991
but I forgot to do the same in xzdec.
The Landlock sandbox in xzdec could be stricter as now it's
active only for the last file being decompressed. In xz,
a read-only sandbox is used for the multi-file case. On the other
hand, xz doesn't go to the strictest mode when processing the last
file if more than one file was specified; xzdec does.
Clang 17 with -fsanitize=address,undefined:
src/liblzma/common/filter_common.c:366:8: runtime error:
call to function encoder_find through pointer to incorrect
function type 'const lzma_filter_coder *(*)(unsigned long)'
src/liblzma/common/filter_encoder.c:187: note:
encoder_find defined here
Use a wrapper function to get the correct type neatly.
This reduces the number of casts needed too.
This issue could be a problem with control flow integrity (CFI)
methods that check the function type on indirect function calls.
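A minimal sketch of the wrapper pattern (the type and function names
here are illustrative stand-ins, not the internal liblzma ones):
```
#include <lzma.h>

typedef struct {
    lzma_vli id;
} lzma_filter_coder_sketch;

typedef struct {
    lzma_vli id;
    // ... encoder-specific members ...
} lzma_filter_encoder_sketch;

static const lzma_filter_encoder_sketch *
encoder_find(lzma_vli id)
{
    (void)id;
    return NULL; // The real function looks up the encoder by ID.
}

// Wrapper whose type matches the generic function pointer used by the
// shared filter code, so the indirect call goes through a pointer of
// the correct function type. Casting the returned object pointer is
// fine; calling through a mistyped function pointer is not.
static const lzma_filter_coder_sketch *
coder_find(lzma_vli id)
{
    return (const lzma_filter_coder_sketch *)encoder_find(id);
}
```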
Fixes: 3b34851de1
It's undefined behavior. The result was never used as it occurred
in the last iteration of a loop.
Clang 17 with -fsanitize=address,undefined:
$ src/xz/xz --block-list=123
src/xz/args.c:164:12: runtime error: applying non-zero offset 1
to null pointer
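The problematic pattern and one way to avoid it, as a simplified
sketch (not the actual args.c code):
```
#include <stddef.h>
#include <string.h>

// Count the entries in a comma-separated list such as "123,456".
static size_t
count_entries(const char *str)
{
    size_t count = 1;

    // Bad: "p = strchr(p, ',') + 1;" in a loop applies +1 to NULL
    // on the last iteration even though the result is never used.
    // Instead, advance the pointer only after a comma was found.
    for (const char *p = strchr(str, ','); p != NULL;
            p = strchr(p + 1, ','))
        ++count;

    return count;
}
```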
Fixes: 88ccf47205
Co-authored-by: Sam James <sam@gentoo.org>
If the arguments to lzma_index_decoder() or lzma_index_buffer_decode()
were such that LZMA_PROG_ERROR was returned, the lzma_index **i
argument wasn't touched even though the API docs say that *i = NULL
is done if an error occurs. This obviously won't be done even now
if i == NULL but otherwise it is best to do it due to the wording
in the API docs.
In practice this matters very little: The problem can occur only
if the functions are called with invalid arguments, that is,
the calling application must already have a bug.
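A sketch of the argument-handling pattern (simplified; not the actual
liblzma code):
```
#include <lzma.h>

static lzma_ret
index_decoder_sketch(lzma_index **i, uint64_t *memlimit)
{
    // If i itself is NULL, *i obviously cannot be set.
    if (i == NULL)
        return LZMA_PROG_ERROR;

    // Per the API docs, set *i = NULL on every error path,
    // including the argument-validation errors below.
    *i = NULL;

    if (memlimit == NULL)
        return LZMA_PROG_ERROR;

    // ... the real initialization would follow here ...
    return LZMA_OK;
}
```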
The __builtin_bswapXX from GCC and Clang are preferred when
they are available. This can allow compilers to emit the x86 MOVBE
instruction instead of doing a load + byteswap as two instructions
(which would happen if the byteswapping is done in inline asm).
bswap16, bswap32, and bswap64 exist in system headers on *BSDs
and Darwin. #defining bswap16 on NetBSD results in a warning about
macro redefinition. It's safest to avoid this namespace conflict
completely.
No OS supported by tuklib_integer.h uses byteswapXX names and
a web search doesn't immediately find any obvious danger of
namespace conflicts. So let's try these still-pretty-short names
for the macros.
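For example, the 32-bit variant could look roughly like this
(simplified from the idea above; not the exact tuklib_integer.h code):
```
#include <stdint.h>

// With the built-in, compilers can combine a load and the byteswap
// into a single x86 MOVBE instruction.
#if defined(__GNUC__) || defined(__clang__)
#  define byteswap32(n) __builtin_bswap32(n)
#else
#  define byteswap32(n) ((uint32_t)( \
          (((uint32_t)(n) & UINT32_C(0x000000FF)) << 24) \
        | (((uint32_t)(n) & UINT32_C(0x0000FF00)) << 8) \
        | (((uint32_t)(n) & UINT32_C(0x00FF0000)) >> 8) \
        | (((uint32_t)(n) & UINT32_C(0xFF000000)) >> 24)))
#endif
```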
Thanks to Sam James for pointing out the compiler warning on
NetBSD 10.0.
The API docs clearly say that if error_pos isn't NULL then *error_pos
is always set on any error. However, it wasn't touched if str == NULL
or filters == NULL or unsupported flags were specified.
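The fix follows this kind of pattern (a simplified sketch, not the
actual lzma_str_to_filters() code):
```
#include <stddef.h>

static const char *
str_parse_sketch(const char *str, int *error_pos)
{
    // Set *error_pos first so that it is set no matter which
    // error path is taken below.
    if (error_pos != NULL)
        *error_pos = 0;

    if (str == NULL)
        return "Invalid arguments";

    // ... the real parsing would follow here ...
    return NULL; // Success
}
```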
Fixes: cedeeca2ea
It is logical that it cannot know for sure that the value has
to be at most 4 if it is less than 16.
The x86 filter is based on a very old LZMA SDK version. Newer
ones have quite a different implementation for the same filter.
Thanks to Sam James.
On macOS, we get:
```
signals.c: In function 'signals_init':
signals.c:76:17: error: conversion to 'sigset_t' {aka 'unsigned int'} from 'int' may change the sign of the result [-Werror=sign-conversion]
76 | sigaddset(&hooked_signals, sigs[i]);
| ^~~~~~~~~
signals.c:81:17: error: conversion to 'sigset_t' {aka 'unsigned int'} from 'int' may change the sign of the result [-Werror=sign-conversion]
81 | sigaddset(&hooked_signals, message_progress_sigs[i]);
| ^~~~~~~~~
signals.c:86:9: error: conversion to 'sigset_t' {aka 'unsigned int'} from 'int' may change the sign of the result [-Werror=sign-conversion]
86 | sigaddset(&hooked_signals, SIGTSTP);
| ^~~~~~~~~
```
We use `int` for `hooked_signals`, but we can't just cast to whatever
`sigset_t` is because `sigset_t` is an opaque type. On macOS it's an
unsigned int, and `sigaddset` is implemented as a macro there.
Just suppress -Wsign-conversion for `signals_init` on macOS given
that there's no really nice way of fixing this.
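The suppression can be done with GCC's diagnostic pragmas; a sketch
(the real change wraps xz's signals_init(); the function below is only
an illustration):
```
#include <signal.h>

#ifdef __APPLE__
#  pragma GCC diagnostic push
#  pragma GCC diagnostic ignored "-Wsign-conversion"
#endif

static void
signals_init_sketch(void)
{
    sigset_t hooked_signals;
    sigemptyset(&hooked_signals);
    sigaddset(&hooked_signals, SIGTERM); // This is what warned on macOS.
}

#ifdef __APPLE__
#  pragma GCC diagnostic pop
#endif
```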
This is *NOT* done for security reasons even though the backdoor
relied on the ifunc code. Instead, the reason is that in this
project ifunc provides little benefit but it's quite a bit of
extra code to support it. The only case where ifunc *might* matter
for performance is if the CRC functions are used directly by an
application. In normal compression use it's completely irrelevant.
While the backdoor was inactive (and thus harmless) without inserting
a small trigger code into the build system when the source package was
created, it's good to remove this anyway:
- The executable payloads were embedded as binary blobs in
the test files. This was a blatant violation of the
Debian Free Software Guidelines.
- On machines that see lots of bots poking at the SSH port, the backdoor
noticeably increased CPU load, resulting in degraded user experience
and thus overwhelmingly negative user feedback.
- The maintainer who added the backdoor has disappeared.
- Backdoors are bad for security.
This reverts the following without making any other changes:
6e636819 Tests: Update two test files.
a3a29bbd Tests: Test --single-stream can decompress bad-3-corrupt_lzma2.xz.
0b4ccc91 Tests: Update RISC-V test files.
8c9b8b20 liblzma: Fix typos in crc32_fast.c and crc64_fast.c.
82ecc538 liblzma: Fix false Valgrind error report with GCC.
cf44e4b7 Tests: Add a few test files.
3060e107 Tests: Use smaller dictionary size in RISC-V test files.
e2870db5 Tests: Add two RISC-V Filter test files.
The RISC-V test files also have real content that tests the filter
but the real content would fit into much smaller files. A generator
program would need to be available as well.
Thanks to Andres Freund for finding and reporting it and making
it public quickly so others could act without a delay.
See: https://www.openwall.com/lists/oss-security/2024/03/29/4
NVHPC compiler has several issues that make it impossible to
build liblzma:
- the compiler cannot handle unions that contain pointers that
are not the first members;
- the compiler cannot handle the assembler code in range_decoder.h
(LZMA_RANGE_DECODER_CONFIG has to be set to zero);
- the compiler fails to produce valid code for delta_decode if the
vectorization is enabled, which results in failed tests.
This introduces NVHPC-specific workarounds that address the issues.
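For the range decoder part, the workaround can be a compiler check
like this (a sketch; __NVCOMPILER is NVHPC's predefined macro, and the
non-NVHPC value shown is just a placeholder):
```
#ifndef LZMA_RANGE_DECODER_CONFIG
#  ifdef __NVCOMPILER
     // NVHPC cannot handle the inline assembly in range_decoder.h,
     // so disable those code paths.
#    define LZMA_RANGE_DECODER_CONFIG 0
#  else
     // Placeholder: the real header picks a per-arch default.
#    define LZMA_RANGE_DECODER_CONFIG 0x1F0
#  endif
#endif
```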
With GCC and a certain combination of flags, Valgrind will falsely
trigger an invalid write. This appears to be due to the omission of
instructions to properly save, set up, and restore the frame pointer.
The IFUNC resolver is a leaf function since it only calls a function
that is inlined. So sometimes GCC omits the frame pointer instructions
in the resolver unless this optimization is explicitly disabled.
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=2267598.
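One way to force GCC to keep the frame pointer in such a leaf resolver
is a per-function optimize attribute; a sketch with names modeled on
the description above (not necessarily the exact change in the commit):
```
#include <stddef.h>
#include <stdint.h>

typedef uint32_t (*crc32_func_type)(const uint8_t *buf, size_t size,
        uint32_t crc);

uint32_t crc32_generic(const uint8_t *buf, size_t size, uint32_t crc);
uint32_t crc32_arch_optimized(const uint8_t *buf, size_t size,
        uint32_t crc);
int is_arch_extension_supported(void);

// Keep the frame pointer instructions in this leaf function even when
// GCC would otherwise omit them, so that Valgrind can read the stack
// correctly.
__attribute__((__optimize__("-fno-omit-frame-pointer")))
static crc32_func_type
crc32_resolve(void)
{
    return is_arch_extension_supported()
            ? &crc32_arch_optimized : &crc32_generic;
}
```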
Now that multi-threaded encoding is the default, users do not need to
see a warning message every time the number of threads is reduced. On
some machines, this could happen very often. It is not unreasonable for
users to need to set double verbose mode to see this kind of
information.
To see these warning messages -vv or --verbose --verbose must be passed
to set xz into the highest possible verbosity mode.
These warnings had caused automated testing frameworks to fail when they
expected no output to stderr.
Thanks to Sebastian Andrzej Siewior for reporting this and for the
initial version of the patch.
The previous Linux Landlock feature test assumed that having the
linux/landlock.h header file was enough. The new feature test also
requires that prctl() and the required Landlock system calls are
supported.
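The feature test amounts to compiling something like this program
(a sketch of the idea, not the literal configure test):
```
#include <linux/landlock.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int
main(void)
{
    // The mere presence of linux/landlock.h isn't enough;
    // prctl() and the Landlock syscall numbers must exist too.
    (void)prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    (void)syscall(SYS_landlock_create_ruleset, (void *)0, 0, 0);
    (void)syscall(SYS_landlock_restrict_self, -1, 0);
    return 0;
}
```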
Like 5.5.0alpha, 5.7.0alpha won't be released; it just marks that
the branch is not stable.
Once again there is no API/ABI stability for new features in devel
versions. The major soname won't be bumped even if API/ABI of new
features breaks between devel releases.
If xz is given a directory, it should look like this:
$ xz /usr/bin
xz: /usr/bin: Is a directory, skipping
The Landlock rules didn't allow opening directories for reading:
$ xz /usr/bin
xz: /usr/bin: Permission denied
The simplest fix was to allow opening directories for reading.
While it's a bit silly to allow it solely for the error message,
it shouldn't make the sandbox significantly weaker.
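Conceptually, the relaxed sandbox just leaves the read-related rights
out of the set of restricted filesystem actions; a sketch (illustrative
only, the real sandbox.c handles many more access flags and ABI
fallbacks):
```
#include <linux/landlock.h>
#include <stdint.h>

static uint64_t
restricted_fs_actions(void)
{
    // Forbid writing, creating, removing, and executing files...
    uint64_t handled = LANDLOCK_ACCESS_FS_WRITE_FILE
            | LANDLOCK_ACCESS_FS_MAKE_REG
            | LANDLOCK_ACCESS_FS_REMOVE_FILE
            | LANDLOCK_ACCESS_FS_EXECUTE;

    // ...but don't restrict LANDLOCK_ACCESS_FS_READ_FILE or
    // LANDLOCK_ACCESS_FS_READ_DIR, so open() on a directory
    // still succeeds and xz can print its error message.
    return handled;
}
```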
The single-file use case (like when called from GNU tar) is
still as strict as possible: all Landlock restrictions are
enabled before (de)compression starts.
This makes these sandboxing methods stricter when no files are
created or deleted. That is, it's a middle ground between the
initial sandbox and the strictest single-file-to-stdout sandbox:
this allows opening files for reading but output has to go to stdout.
Linux 6.7 added support for ABI version 4, which can restrict
TCP connections. xz doesn't need them, so they can be forbidden
now. Since the ABI version is handled at runtime, supporting
version 4 won't cause any compatibility issues.
Note that new enough kernel headers are required to get
version 4 support enabled at build time.
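A sketch of the runtime ABI check for the TCP restrictions
(illustrative; the real sandbox.c does more than this):
```
#include <linux/landlock.h>
#include <sys/syscall.h>
#include <unistd.h>

static void
add_net_restrictions(struct landlock_ruleset_attr *attr)
{
#ifdef LANDLOCK_ACCESS_NET_BIND_TCP
    // Needs new enough kernel headers at build time. At runtime,
    // ask the kernel which Landlock ABI version it supports.
    const long abi = syscall(SYS_landlock_create_ruleset,
            (void *)0, 0, LANDLOCK_CREATE_RULESET_VERSION);

    if (abi >= 4)
        attr->handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP
                | LANDLOCK_ACCESS_NET_CONNECT_TCP;
#else
    (void)attr;
#endif
}
```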
Landlock is now always used just like pledge(2) is: first in a more
permissive mode and later (under certain common conditions) in
a strict mode that doesn't allow opening more files.
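On OpenBSD the same two-phase idea with pledge(2) looks roughly like
this (the promise strings are illustrative, not necessarily xz's
exact ones):
```
#include <unistd.h>

// Early, permissive phase: reading and creating files is still needed.
static void
sandbox_init_sketch(void)
{
    (void)pledge("stdio rpath wpath cpath fattr", "");
}

// Later, under the common single-file conditions: only basic I/O
// on already-open descriptors is allowed.
static void
sandbox_strict_sketch(void)
{
    (void)pledge("stdio", "");
}
```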
I put pledge(2) first in sandbox.c because it's the simplest API
to use and still somewhat fine-grained for basic applications.
So it's the simplest thing to understand for anyone reading sandbox.c.
Also explicitly initialize progress_automatic to make it clear
that it can be read before message_init() sets it. The static
variable was already initialized to false by default, so this is
only for clarity.
GCC docs promise that it works, and a few other compilers do too.
For Clang/LLVM it is documented in the source code only, but
unsurprisingly it behaves the same as the others, on x86-64 at
least. But the certainly-portable way is good enough here, so use
that.
The x32 port has an x86-64 ABI in terms of registers but uses only
32-bit pointers like x86-32. The assembly optimisation fails to
compile on x32. Given the state of x32 I suggest excluding it from
the optimisation rather than trying to fix it.
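A sketch of the kind of guard that excludes x32 (the macro name is
made up; __ILP32__ is what GCC and Clang define for the x32 ABI):
```
// Enable the x86-64 inline assembly only when pointers are 64-bit
// too, which excludes the x32 ABI.
#if defined(__x86_64__) && !defined(__ILP32__)
#  define USE_X86_64_ASM 1
#else
#  define USE_X86_64_ASM 0
#endif
```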
Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
It's used only for basic bittrees and fixed-size reverse bittree
because those showed a clear benefit on x86-64 with GCC and Clang.
The other methods were more mixed and thus are commented out but
they should be tested on other archs.
Now extra buffer space is reserved so that repeating bytes for
any single match will never need to copy from two places (both
the beginning and the end of the buffer). This simplifies
dict_repeat() and helps a little with speed.
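In simplified form, the copy loop in dict_repeat() can then assume a
contiguous source (a sketch, not the actual code; computing the source
position from the circular buffer state is omitted):
```
#include <stddef.h>
#include <stdint.h>

// Copy len (>= 1) bytes of already-decoded data, where src is the
// start of the match and dest is the current write position. Thanks
// to the reserved extra buffer space the source is always contiguous,
// so no wrap-around handling is needed. Byte-by-byte copying is still
// required because the ranges may overlap when the distance < len.
static void
dict_repeat_sketch(uint8_t *buf, size_t dest, size_t src, uint32_t len)
{
    do {
        buf[dest++] = buf[src++];
    } while (--len > 0);
}
```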
This seems to reduce .lzma decompression time by about 2 %, so
with .xz and CRC it could be slightly less. The small things
add up still.
It's not completely obvious if this is better in the decoder.
It should be good if the compiler can avoid creating a branch
(like using CMOV on x86).
This also makes lzma_encoder.c use the new macros.
The new decoder resumes the first decoder loop in the Resumable mode.
Then, the code executes in Non-resumable mode until it detects that it
cannot guarantee to have enough input/output to decode another symbol.
The Resumable mode is how the decoder has always worked. Before decoding
every input bit, it checks if there is enough space and will save its
location to be resumed later. When the decoder has more input/output,
it jumps back to the correct sequence in the Resumable mode code.
When the input/output buffers are large, the Resumable mode is much
slower than the Non-resumable mode because it has more branches and
is harder
for the compiler to optimize since it is in a large switch block.
Early benchmarking shows significant time improvement (8-10% on gcc and
clang x86) by using the Non-resumable code as much as possible.
The new "safe" range decoder mode is the same as old range decoder, but
now the default behavior of the range decoder will not check if there is
enough input or output to complete the operation. When the buffers are
close to fully consumed, the "safe" operations must be used instead. This
will improve speed because it will reduce the number of branches needed
for most of the range decoder operations.
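A conceptual sketch of the difference between the two normalization
paths (made-up names and greatly simplified; the real range decoder
keeps more state and the Resumable mode is a switch-based coroutine):
```
#include <stdint.h>

typedef struct {
    uint32_t range;
    uint32_t code;
} rc_sketch;

#define RC_SHIFT_BITS 8
#define RC_TOP_VALUE (UINT32_C(1) << 24)

// Non-resumable/fast path: the caller has already verified that
// enough input is available, so no per-byte check is needed.
#define rc_normalize_fast(rc, in, in_pos) \
do { \
    if ((rc).range < RC_TOP_VALUE) { \
        (rc).range <<= RC_SHIFT_BITS; \
        (rc).code = ((rc).code << RC_SHIFT_BITS) | (in)[(in_pos)++]; \
    } \
} while (0)

// "Safe" path: used near the end of the buffers. If input runs out,
// execute on_eof, which in the real decoder saves the position so the
// Resumable mode can continue from the same spot later.
#define rc_normalize_safe(rc, in, in_pos, in_size, on_eof) \
do { \
    if ((rc).range < RC_TOP_VALUE) { \
        if ((in_pos) == (in_size)) { on_eof; } \
        (rc).range <<= RC_SHIFT_BITS; \
        (rc).code = ((rc).code << RC_SHIFT_BITS) | (in)[(in_pos)++]; \
    } \
} while (0)
```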
The main reason is a kind of silly one:
xz-man.pot contains strings from all man pages in XZ Utils.
The man pages of xzdiff, xzgrep, and xzmore were under GPLv2
and the rest under 0BSD. Thus xz-man.pot contained strings
under two licences. po4a creates the translated man pages
from the combined 0BSD+GPLv2 xz-man.pot.
I haven't liked this mixing in xz-man.pot but the
Translation Project requires that all man pages must be
in the same .pot file. So a separate xz-man-gpl.pot
wasn't an option.
Since these man pages are short, rewriting them was quick enough.
Now xz-man.pot is entirely under 0BSD and marking the per-file
licenses is simpler.
As a bonus, some wording hopefully is now slightly better
although it's perhaps a matter of taste.
NOTE: In xzgrep.1, the EXIT STATUS section was written by me
in the commit d796b6d7fd so that's
why that section could be taken as is from the old xzgrep.1.
Perhaps the generated files aren't even copyrightable but
using the same license for them as for the rest of liblzma
keeps things more consistent for tools that look for license info.