It requires fast unaligned access to 64-bit integers
and a fast instruction to count trailing zeros in
a 64-bit integer (__builtin_ctzll()). This perhaps
should be enabled on some other archs too.
Thanks to Chenxi Mao for the original patch:
https://github.com/tukaani-project/xz/pull/75 (the first commit)
According to the numbers there, this may improve encoding
speed by about 3-5 %.
This enables the 8-byte method on MSVC ARM64 too, which
should work but wasn't tested.
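As a rough illustration of the 8-byte comparison idea, here is a
little-endian sketch with made-up names, not the actual memcmplen()
code in liblzma; it assumes the buffers may be safely read a few bytes
past the limit:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static size_t
memcmplen_sketch(const uint8_t *a, const uint8_t *b,
		size_t len, size_t limit)
{
	while (len < limit) {
		uint64_t x, y;
		memcpy(&x, a + len, sizeof(x));  // unaligned 64-bit loads
		memcpy(&y, b + len, sizeof(y));

		const uint64_t diff = x ^ y;
		if (diff != 0) {
			// The lowest set bit lies in the first
			// differing byte.
			len += (size_t)__builtin_ctzll(diff) / 8;
			return len < limit ? len : limit;
		}

		len += sizeof(x);
	}

	return limit;
}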
The sandbox is now enabled for xzdec as well, so it no longer belongs
in just the xz section. xz and xzdec are always built, except for older
MSVC versions, so there isn't a need to conditionally show the sandbox
configuration. CMake will do a little unnecessary work on older MSVC
versions that can't build xz or xzdec, but this is a very small
downside.
A very strict sandbox is used when the last file is decompressed. The
likely most common use case of xzdec is to decompress a single file.
The Pledge sandbox is applied to the entire process with slightly more
relaxed promises, until the last file is processed.
Thanks to Christian Weisgerber for the initial patch adding Pledge
sandboxing.
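For illustration, the pledge(2) pattern described above looks roughly
like the following sketch; the promise strings and function names are
examples, not necessarily what xz or xzdec actually use:

#include <stdlib.h>
#include <unistd.h>

// Relaxed sandbox: more files may still need to be opened or created.
static void
sandbox_relaxed(void)
{
	if (pledge("stdio rpath wpath cpath", NULL) != 0)
		abort();
}

// Strict sandbox for the last file: only already-open descriptors
// and the standard streams are needed.
static void
sandbox_strict(void)
{
	if (pledge("stdio", NULL) != 0)
		abort();
}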
This fixes the recent change to lzma_lz_encoder that used memzero()
instead of assigning NULL. Zeroing the bytes is not guaranteed to
produce a null pointer: C only guarantees that a null pointer compares
unequal to a pointer to any valid object, not that its representation
is all-bits-zero.
Later code compares these pointers to NULL, so we must initialize them
with NULL instead of zeroed memory to guarantee correctness.
The first member of lzma_lz_encoder doesn't necessarily need to be set
to NULL since it will always be set before anything tries to use it.
However, the function pointer members must be set to NULL since other
functions compare them against NULL to determine whether the
corresponding feature is supported.
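A simplified illustration of the difference (the struct and function
names here are made up, not the actual lzma_lz_encoder definition):

#include <stddef.h>

typedef struct {
	void *coder;               // always assigned before use
	void (*end)(void *coder);  // later compared against NULL
} example_encoder;

static void
example_init(example_encoder *enc)
{
	// memset(enc, 0, sizeof(*enc)) fills the bytes with zeros, but
	// C does not guarantee that all-bits-zero is a null pointer
	// representation. Assigning NULL always yields a null pointer.
	enc->end = NULL;
}

static void
example_end(example_encoder *enc)
{
	if (enc->end != NULL)  // this comparison must be reliable
		enc->end(enc->coder);
}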
This fixes a somewhat serious bug where the options_update() and
set_out_limit() function pointers were not set to NULL. This seems to
have been forgotten since these function pointers were added many years
after the original two (code() and end()).
The problem is that by not setting these to NULL we are relying on the
memory allocation to zero things out if lzma_filters_update() is called
on an LZMA1 encoder. The missing set_out_limit() initialization is less
serious because no API function can call it in an incorrect way:
set_out_limit() is only called by the MicroLZMA encoder, which must use
LZMA1, where set_out_limit() is always set. It is currently not
possible to call set_out_limit() on an LZMA2 encoder.
So calling lzma_filters_update() on an LZMA1 encoder had undefined
behavior since it is possible that memory could be manipulated so that
the options_update member pointed to a different instruction sequence.
This is unlikely to affect existing applications since it requires
calling lzma_filters_update() on an LZMA1 encoder in the first place.
For instance, it does not affect xz because lzma_filters_update() can
only be used when encoding to the .xz format.
This is fixed by using memzero() to set all members of lzma_lz_encoder
to NULL after it is allocated. This ensures this mistake will not occur
here in the future if any additional function pointers are added.
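For reference, a hedged sketch of the kind of call sequence that could
reach the uninitialized options_update pointer (error handling and the
actual encoding calls are omitted; this is not code from xz itself):

#include <lzma.h>

static lzma_ret
update_lzma1_options(lzma_stream *strm)
{
	lzma_options_lzma opt;
	if (lzma_lzma_preset(&opt, LZMA_PRESET_DEFAULT))
		return LZMA_OPTIONS_ERROR;

	lzma_filter filters[2] = {
		{ .id = LZMA_FILTER_LZMA1, .options = &opt },
		{ .id = LZMA_VLI_UNKNOWN, .options = NULL },
	};

	// Initialize a raw encoder that uses the LZMA1 filter.
	const lzma_ret ret = lzma_raw_encoder(strm, filters);
	if (ret != LZMA_OK)
		return ret;

	// Before the fix, this could reach an options_update function
	// pointer that was never explicitly initialized.
	return lzma_filters_update(strm, filters);
}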
lzma_raw_encoder() and lzma_raw_encoder_init() used "options" as the
parameter name instead of "filters" (used by the declaration). "filters"
is more clear since the parameter represents the list of filters passed
to the raw encoder, each of which contains filter options.
lzma_encoder_init() did not check for NULL options, but
lzma2_encoder_init() did. This is more of a code style improvement than
anything else to help make lzma_encoder_init() and lzma2_encoder_init()
more similar.
Since GCC version 10, GCC no longer complains about simple implicit
integer conversions with arithmetic operators. For instance:
uint8_t a = 5;
uint32_t b = a + 5;
gives a warning on GCC 9 and earlier but not on GCC 10+, while this:
uint8_t a = 5;
uint32_t b = (a + 5) * 2;
gives a warning even with GCC 10+.
Most of these fixes are small typos and tweaks. A few were caused by bad
advice from me. Here is the summary of what is changed:
- Author line edits
- Small comment changes/additions
- Using the return value in the error messages in the fuzz targets'
coder initialization code
- Removed fuzz_encode_stream.options. This set a max length, which may
prevent some worthwhile code paths from being properly exercised.
- Removed the max_len option from fuzz_decode_stream.options for the
same reason as fuzz_encode_stream. The alone decoder fuzz target still
has this restriction.
- Altered the dictionary contents for fuzz_lzma.dict. Instead of keeping
the properties static and varying the dictionary size, the properties
are varied and the dictionary size is kept small. The dictionary size
doesn't have much impact on the code paths but the properties do.
Closes: https://github.com/tukaani-project/xz/pull/73
This fuzz target handles .xz stream encoding. The first byte of input
is used to dynamically set the preset level in order to increase the
fuzz coverage of complex critical code paths.
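A hedged sketch of such a fuzz target (the real fuzz_encode_stream.c
may use the streaming API and a different preset mapping; the buffer
handling here is illustrative):

#include <stdint.h>
#include <stdlib.h>
#include <lzma.h>

int
LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	if (size < 1)
		return 0;

	// Derive the preset from the first byte so the fuzzer reaches
	// the code paths of the different compression levels.
	uint32_t preset = data[0] % 10;
	if (data[0] & 0x80)
		preset |= LZMA_PRESET_EXTREME;

	const size_t out_size = lzma_stream_buffer_bound(size - 1);
	uint8_t *out = malloc(out_size);
	if (out == NULL)
		return 0;

	// The return value is ignored; the goal is only to exercise
	// the encoder with varying presets and inputs.
	size_t out_pos = 0;
	lzma_easy_buffer_encode(preset, LZMA_CHECK_CRC64, NULL,
			data + 1, size - 1, out, &out_pos, out_size);

	free(out);
	return 0;
}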
This fuzz target handles LZMA alone decoding. A new fuzz
dictionary (.dict) was also created with common LZMA header values to
help speed up the discovery of valid headers.
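A hedged sketch of such a decoding target (the memory limit, output
cap, and buffer sizes are illustrative, not the values used in the
actual fuzz target):

#include <stdint.h>
#include <stdlib.h>
#include <lzma.h>

int
LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	lzma_stream strm = LZMA_STREAM_INIT;
	if (lzma_alone_decoder(&strm, 64U << 20) != LZMA_OK)
		return 0;

	uint8_t out[4096];
	size_t total = 0;

	strm.next_in = data;
	strm.avail_in = size;

	lzma_ret ret;
	do {
		// The decoded data is discarded; a cap on the total
		// output keeps highly expanding inputs fast.
		strm.next_out = out;
		strm.avail_out = sizeof(out);
		ret = lzma_code(&strm, LZMA_FINISH);
		total += sizeof(out) - strm.avail_out;
	} while (ret == LZMA_OK && total < (1U << 20));

	lzma_end(&strm);
	return 0;
}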
All .c files can be built as separate fuzz targets. This simplifies
the Makefile by allowing us to use wildcards instead of having a
Makefile target for each fuzz target.
Some compilers support __attribute__((__ifunc__())) even on systems
where the dynamic linker does not. The compiler is able to create the
binary, but it will fail at startup. So it is not enough to just test
whether the attribute is accepted by the compiler.
The default value for enable_ifunc is now auto, which will attempt
to compile a test program using __attribute__((__ifunc__())). This test
program includes additional checks for whether glibc is being used or
whether it is running on FreeBSD.
Setting --enable-ifunc will skip this test and always enable
__attribute__((__ifunc__())), even if it is not supported.
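The test program is along these lines (the names here are illustrative;
the real check also looks at glibc and FreeBSD as noted above):

static void my_func_impl(void) { }

// The resolver runs at load time and returns the function to use.
static void (*my_func_resolver(void))(void)
{
	return &my_func_impl;
}

void my_func(void) __attribute__((__ifunc__("my_func_resolver")));

int
main(void)
{
	my_func();
	return 0;
}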
The new is_tty() will report if a file descriptor is a terminal or not.
On POSIX systems, it is a wrapper around isatty(). However, the native
Windows implementation of isatty() will return true for all character
devices, not just terminals. So is_tty() has a special case for
Windows that uses alternative Windows API functions to determine
whether a file descriptor refers to a terminal.
This fixes a bug with MSVC and MinGW-w64 builds that refused to read
from or write to non-terminal character devices because xz thought the
device was a terminal. For instance:
xz foo -c > /dev/null
would fail because /dev/null was assumed to be a terminal.
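One common way to implement such a check (this sketch may differ from
the actual is_tty() in xz):

#include <stdbool.h>
#ifdef _WIN32
#	include <io.h>
#	include <windows.h>
#else
#	include <unistd.h>
#endif

static bool
is_tty_sketch(int fd)
{
#ifdef _WIN32
	// _isatty() returns non-zero for every character device
	// (e.g. NUL), so ask the console API instead.
	HANDLE h = (HANDLE)_get_osfhandle(fd);
	DWORD mode;
	return h != INVALID_HANDLE_VALUE
			&& GetConsoleMode(h, &mode) != 0;
#else
	return isatty(fd) != 0;
#endif
}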
This tests some complicated interactions with the --suffix= option.
A suffix must be set with --format=raw unless writing to standard out,
and --suffix can also be used to override the default .xz suffix.
This test also verifies that some recent bugs have been fixed and
should help avoid further regressions in the future.
The following command caused a segmentation fault:
xz -Fraw --lzma1 --files=foo
when foo was a valid file. The usage of --files or --files0 was not
being checked when compressing or decompressing in raw mode without a
suffix. The suffix checking code was meant to validate that all files
to be processed are "-" (if not writing to standard out), meaning the
data is only coming from standard in. In this case, there were no file
names to check since --files and --files0 store their file name in a
different place.
Later code assumed the suffix was set and caused a segmentation fault.
Now, the above command results in an error.
The previous version set opt_stdout, but this caused an issue with
copying an input file to standard out when decompressing an unknown file
type. The following needs to result in an error:
echo foo | xz -df
since -c, --stdout is not used. This fixes the previous error by not
setting opt_stdout.
This fixes a bug introduced in cc5aa9ab13
when the suffix check was initially moved. It caused a command that
previously worked:
echo foo | xz -Fraw --lzma1 | wc -c
to fail because the old code knew that this would write to standard
out, so a suffix was not needed.
If the -c, --stdout argument is not used, then we can still detect when
the data will be written to standard out if all of the provided
filenames are "-" (denoting standard in) or if no filenames are
provided.
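A simplified sketch of that check (the function and variable names are
made up, not the actual ones in xz's args.c):

#include <stdbool.h>
#include <string.h>

static bool
writes_only_to_stdout(bool opt_stdout,
		char **filenames, size_t count)
{
	if (opt_stdout)
		return true;

	// With no filenames, or when every filename is "-", the data
	// is read from standard in and written to standard out.
	for (size_t i = 0; i < count; ++i)
		if (strcmp(filenames[i], "-") != 0)
			return false;

	return true;
}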