Currently crc32 is always enabled, so COND_CHECK_CRC32 must always be
set. This makes the recent change to conditionally compile
check/crc_clmul.c appear wrong, since that file has CLMUL
implementations for both CRC32 and CRC64.
Forcing crc_simd_body() to always be inlined caused
-fsanitize=address to fail for lzma_crc32_clmul() and
lzma_crc64_clmul(). The __no_sanitize_address__ attribute was added
to lzma_crc32_clmul() and lzma_crc64_clmul(), but not removed from
crc_simd_body(). The behavior of ASAN with inline functions has
changed over the years in GCC specifically, so while not strictly
required, we will keep __attribute__((__no_sanitize_address__)) on
crc_simd_body() in case this becomes a requirement in the future.
Older GCC versions refuse to inline a function with ASAN if the
caller and callee do not agree on sanitization flags
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89124#c3). If the
function is forced to be inlined, it will not compile when the
callee has __no_sanitize_address__ but the caller does not.
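As a hedged sketch, the mismatch that old GCC rejects looks
roughly like this (names are made up, not liblzma's actual code):

    #include <stdint.h>

    // An always-inline callee that opts out of ASAN while its
    // caller does not. Older GCC refuses to compile this with
    // -fsanitize=address.
    static inline
    __attribute__((__always_inline__, __no_sanitize_address__))
    uint32_t
    callee(uint32_t x)
    {
        return x ^ (x >> 16);
    }

    uint32_t
    caller(uint32_t x)
    {
        return callee(x); // error: inlining failed
    }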
Testing a 32-bit Release build on MSVC showed that only
lzma_crc64_clmul() has the bug; crc_simd_body() and
lzma_crc32_clmul() do not need the optimizations disabled.
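A hedged sketch of a targeted workaround using MSVC's optimize
pragma (the exact mechanism used in liblzma may differ):

    #include <stddef.h>
    #include <stdint.h>

    // Disable optimizations only around the affected function
    // when building 32-bit x86 with MSVC.
    #if defined(_MSC_VER) && !defined(__clang__) && defined(_M_IX86)
    #  pragma optimize("", off)
    #endif

    uint64_t
    crc64_clmul_example(const uint8_t *buf, size_t size, uint64_t crc)
    {
        // Stand-in body; the real function uses CLMUL intrinsics.
        for (size_t i = 0; i < size; ++i)
            crc = (crc >> 8) ^ buf[i];
        return crc;
    }

    #if defined(_MSC_VER) && !defined(__clang__) && defined(_M_IX86)
    #  pragma optimize("", on)
    #endif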
Forcing this function to be inlined gives a significant speed
improvement at the cost of a few repeated instructions. The
compilers tested did not inline it on their own since it is large
and is used twice in the same translation unit.
This macro must be used instead of the inline keyword. On MSVC, it
expands to __forceinline, an MSVC-specific keyword that should not
be combined with inline (MSVC issues a warning if it is).
It does not use a build system check to determine if
__attribute__((__always_inline__)) is supported, since all compilers
that can use CLMUL extensions (except the special case of MSVC)
should support this attribute. If this assumption is incorrect, it
will result in a bug report instead of silently producing slow code.
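A minimal sketch of such a macro (the macro name here is
illustrative, not necessarily the one used in liblzma):

    #include <stdint.h>

    #ifdef _MSC_VER
       // MSVC: __forceinline replaces the inline keyword entirely.
    #  define crc_always_inline __forceinline
    #else
       // Everyone else: inline plus the always_inline attribute.
    #  define crc_always_inline \
              inline __attribute__((__always_inline__))
    #endif

    static crc_always_inline uint32_t
    double_it(uint32_t x)
    {
        return x + x;
    }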
A detailed description of the three dispatch methods was added. Also,
duplicated comments now only appear in crc32_fast.c or were removed from
both crc32_fast.c and crc64_fast.c if they appeared in crc_clmul.c.
Both crc32_clmul() and crc64_clmul() are now exported from
crc_clmul.c as lzma_crc32_clmul() and lzma_crc64_clmul(). This
ensures that is_clmul_supported() (now lzma_is_clmul_supported()) is
not duplicated between crc32_fast.c and crc64_fast.c.
Also, it encapsulates the complexity of the CLMUL implementations into a
single file and reduces the complexity of crc32_fast.c and crc64_fast.c.
Before, CLMUL code was present in crc32_fast.c, crc64_fast.c, and
crc_common.h.
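The shared part boils down to declarations of this shape (a
sketch; the exact prototypes live in liblzma's internal headers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    // Sketch of what crc_common.h can now share between
    // crc32_fast.c and crc64_fast.c.
    extern uint32_t lzma_crc32_clmul(const uint8_t *buf,
            size_t size, uint32_t crc);
    extern uint64_t lzma_crc64_clmul(const uint8_t *buf,
            size_t size, uint64_t crc);
    extern bool lzma_is_clmul_supported(void);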
During the conversion, various cleanups were applied to code (thanks to
Lasse Collin) including:
- Require using semicolons with the MASK_L/MASK_H/MASK_LH macros.
- Variable typing and const handling improvements.
- Improvements to comments.
- Fixes to the pragmas used.
- Removed unneeded variables.
- Whitespace improvements.
- Fixed CRC_USE_GENERIC_FOR_SMALL_INPUTS handling.
- Silenced warnings and removed the need for some #pragmas.
The C standards don't allow an empty translation unit, which can be
avoided by declaring something without exporting any symbols.
When I committed f644473a211394447824ea00518d0a214ff3f7f2 I had
a feeling that some specific toolchain somewhere didn't like
empty object files (assembler or maybe "ar" complained) but
I cannot find anything to confirm this now. Quite likely I
remembered nonsense. I leave this here as a note to my future self. :-)
When the generic fast CRC64 method is used, we omit
lzma_crc64_table[][]. Similar to
d9166b52cf3458a4da3eb92224837ca8fc208d79, we can avoid compiler
warnings with -Wempty-translation-unit (Clang) or -pedantic (GCC) by
creating a never-used typedef instead of an extra symbol.
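A hedged example of the trick (the typedef name is made up):

    // Keeps the translation unit non-empty when
    // lzma_crc64_table[][] is omitted, without exporting a symbol.
    typedef void lzma_crc64_dummy;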
Clang 16.0.0 and earlier have a bug where the ifunc resolver
function triggers the -Wunused-function warning. The resolver
function is static and only "used" by __attribute__((__ifunc__())).
At this time, the bug is still unresolved, but has been reported:
https://github.com/llvm/llvm-project/issues/63957
This is not a problem in GCC.
The ifunc method avoids indirection via the function pointer
crc64_func. This works on GNU/Linux and probably on FreeBSD too.
The previous __attribute__((__constructor__)) method is kept for
compatibility with ELF platforms which do not support ifunc.
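A minimal sketch of the ifunc pattern (names and stub bodies are
illustrative, not liblzma's exact code):

    #include <stddef.h>
    #include <stdint.h>

    static int is_clmul_supported(void) { return 0; } // stub

    static uint64_t
    crc64_generic(const uint8_t *buf, size_t size, uint64_t crc)
    { (void)buf; (void)size; return crc; } // stub

    static uint64_t
    crc64_clmul(const uint8_t *buf, size_t size, uint64_t crc)
    { (void)buf; (void)size; return crc; } // stub

    typedef uint64_t (*crc64_func_type)(const uint8_t *buf,
            size_t size, uint64_t crc);

    // The resolver runs once at load time; the dynamic linker then
    // binds lzma_crc64 directly to the chosen function, so there is
    // no pointer indirection on each call.
    static crc64_func_type
    crc64_resolve(void)
    {
        return is_clmul_supported() ? &crc64_clmul : &crc64_generic;
    }

    uint64_t lzma_crc64(const uint8_t *buf, size_t size, uint64_t crc)
            __attribute__((__ifunc__("crc64_resolve")));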
The ifunc method has some limitations, for example, building
liblzma with -fsanitize=address will result in segfaults.
The configure option --disable-ifunc must be used for such builds.
Thanks to Hans Jansen for the original patch.
Closes: https://github.com/tukaani-project/xz/pull/53
It gives C4146 here since unary minus with an unsigned integer
is still unsigned (which is the intention here). Doing it with
subtraction makes it clearer and avoids the warning.
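Roughly, the change is of this shape (variable names made up):

    #include <stdint.h>

    uint32_t
    mask_from_low_bit(uint32_t v)
    {
        // Before: return -(v & 1); triggers C4146 because unary
        // minus on an unsigned value stays unsigned.
        // After: the same all-ones/all-zeros mask, no warning.
        return 0U - (v & 1);
    }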
Thanks to Nathan Moinvaziri for reporting this.
This affects only 32-bit x86 builds. x86-64 is OK as is.
I still cannot easily test this myself. The reporter has tested
this and it passes the tests included in the CMake build and
performance is good: raw CRC64 is 2-3 times faster than the
C version of the slice-by-four method. (Note that liblzma doesn't
include a MSVC-compatible version of the 32-bit x86 assembly code
for the slice-by-four method.)
Thanks to Iouri Kharon for figuring out a fix, testing, and
benchmarking.
This reverts commit 36edc65ab4cf10a131f239acbd423b4510ba52d5.
It was reported that it wasn't a good enough fix and MSVC
still produced (a different kind of) bad code when building
for 32-bit x86 with optimizations enabled.
Thanks to Iouri Kharon.
I haven't tested with MSVC myself and there doesn't seem to be
information about the problem online, so I'm relying on the bug report.
Thanks to Iouri Kharon for the bug report and the patch.
It also works on E2K as it supports these intrinsics.
On x86-64 runtime detection is used so the code keeps working on
older processors too. A CLMUL-only build can be done by using
-msse4.1 -mpclmul in CFLAGS and this will reduce the library
size since the generic implementation and its 8 KiB lookup table
will be omitted.
On 32-bit x86 this isn't used by default for now because by default
on 32-bit x86 the separate assembly file crc64_x86.S is used.
If --disable-assembler is used then this new CLMUL code is used
the same way as on 64-bit x86. However, a CLMUL-only build
(-msse4.1 -mpclmul) won't omit the 8 KiB lookup table on
32-bit x86 due to a currently-missing check for disabled
assembler usage.
The configure.ac check should be such that the code won't be
built if something in the toolchain doesn't support it, but the
--disable-clmul-crc option can be used to unconditionally
disable this feature.
CLMUL speeds up decompression of files that have compressed very
well (assuming CRC64 is used as a check type). It is known that
the CLMUL code is significantly slower than the generic code for
tiny inputs (especially 1-8 bytes, but up to 16 bytes). If that
is a real-world problem then there is already a commented-out
variant that uses the generic version for small inputs.
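The commented-out variant is along these lines (a sketch; the
real threshold and function names live in the liblzma sources):

    #include <stddef.h>
    #include <stdint.h>

    extern uint64_t crc64_generic(const uint8_t *buf,
            size_t size, uint64_t crc);
    extern uint64_t crc64_clmul(const uint8_t *buf,
            size_t size, uint64_t crc);

    uint64_t
    crc64_dispatch(const uint8_t *buf, size_t size, uint64_t crc)
    {
    #ifdef CRC_USE_GENERIC_FOR_SMALL_INPUTS
        // Tiny buffers: the table-based code wins, so skip CLMUL.
        if (size <= 16)
            return crc64_generic(buf, size, crc);
    #endif
        return crc64_clmul(buf, size, crc);
    }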
Thanks to Ilya Kurdyukov for the original patch which was
derived from a white paper from Intel [1] (published in 2009)
and public domain code from [2] (released in 2016).
[1] https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
[2] https://github.com/rawrunprotected/crc
This uses __attribute__((__constructor__)) for CRC table
initializations when using --disable-small. It avoids
mythread_once() overhead. It also means that --disable-small
--disable-threads is then thread-safe if this attribute is
supported.
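A hedged sketch of the approach (names are made up):

    #include <stdint.h>

    static uint32_t crc32_table[256];

    // Runs at load time, before main() and before any library user
    // can call in, so the hot path never needs mythread_once().
    __attribute__((__constructor__))
    static void
    crc32_table_init(void)
    {
        for (uint32_t i = 0; i < 256; ++i) {
            uint32_t r = i;
            for (int j = 0; j < 8; ++j)
                r = (r >> 1) ^ (0xEDB88320U & (0U - (r & 1)));
            crc32_table[i] = r;
        }
    }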
When Intel CET is enabled, we need to include <cet.h> in assembly
code to mark Intel CET support and add _CET_ENDBR to indirect jump
targets.
Tested on Intel Tiger Lake under CET enabled Linux.
Using the aligned methods requires more care to ensure that
the address really is aligned, so it's nicer if the aligned
methods are prefixed. The next commit will remove the unaligned_
prefix from the unaligned methods which in liblzma are used in
more places than the aligned ones.
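After the rename, the convention looks roughly like this (a
sketch following tuklib_integer.h's style):

    #include <stdint.h>
    #include <string.h>

    // No alignment requirement: memcpy lets the compiler emit
    // whatever unaligned load the target supports.
    static inline uint32_t
    read32ne(const uint8_t *buf)
    {
        uint32_t v;
        memcpy(&v, buf, sizeof(v));
        return v;
    }

    // The aligned_ prefix signals that buf really must be
    // 4-byte aligned.
    static inline uint32_t
    aligned_read32ne(const uint8_t *buf)
    {
        return *(const uint32_t *)buf;
    }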
This is the sane thing to do. The conflict with OpenSSL
on some OSes and especially that the OS-provided versions
can be significantly slower makes it clear that it was
a mistake to have the external SHA-256 support enabled by
default.
Those who want it can now pass --enable-external-sha256 to
configure. INSTALL was updated with notes about OSes where
this can be a bad idea.
The SHA-256 detection code in configure.ac had some bugs that
could lead to a build failure in some situations. These were
fixed, although it doesn't matter that much now that the
external SHA-256 is disabled by default.
MINIX >= 3.2.0 uses NetBSD's libc and thus has SHA256_Init
in libc instead of libutil. Support for the libutil version
was removed.
The Maj macro is used where multiple things are added
together, so making Maj a sum of two expressions allows
some extra freedom for the compiler to schedule the
instructions.
I learned this trick from
<http://www.hackersdelight.org/corres.txt>.
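Concretely, the identity turns the usual three-term Maj into a
sum of two independent subexpressions (whether liblzma uses
exactly this form is an assumption):

    // Usual form:  Maj(x,y,z) = (x & y) ^ (x & z) ^ (y & z)
    // (x & y) and (z & (x ^ y)) never have set bits in the same
    // position, so the XORs can be replaced by an addition, giving
    // the compiler two halves it can schedule independently:
    #define Maj(x, y, z) (((x) & (y)) + ((z) & ((x) ^ (y))))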
This looks weird because the rotations become sequential,
but it helps quite a bit on both 32-bit and 64-bit x86:
- It requires fewer instructions on two-operand
instruction sets like x86.
- It requires one register less which matters especially
on 32-bit x86.
I hope this doesn't hurt other archs.
I didn't invent this idea myself, but I don't remember where
I saw it first.
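For example, SHA-256's Sigma0, rotr(x,2) ^ rotr(x,13) ^
rotr(x,22), can be rewritten with sequential rotations (a sketch;
the shift amounts follow from 22 - 13 = 9 and 13 - 2 = 11):

    #include <stdint.h>

    static inline uint32_t
    rotr32(uint32_t x, unsigned n)
    {
        // n is always 1..31 here, so the shifts are well defined.
        return (x >> n) | (x << (32 - n));
    }

    // Three independent rotations of x...
    #define S0_PLAIN(x) \
            (rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22))

    // ...computed as rotations of partial results instead. Each
    // step reuses the previous value, which needs one register
    // less and fewer instructions on two-operand ISAs like x86.
    #define S0_SEQ(x) \
            rotr32(rotr32(rotr32((x), 9) ^ (x), 11) ^ (x), 2)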
This way a branch isn't needed for each operation
to choose between blk0 and blk2, and still the code
doesn't grow as much as it would with full unrolling.
If the operating system libc or other base libraries
provide SHA-256, use that instead of our own copy.
Note that this doesn't use OpenSSL or libgcrypt or
such libraries to avoid creating dependencies to
other packages.
This supports at least FreeBSD, NetBSD, OpenBSD, Solaris,
MINIX, and Darwin. They all provide similar but not
identical SHA-256 APIs; everyone is a little different.
Thanks to Wim Lewis for the original patch, improvements,
and testing.
This replaces bswap.h and integer.h.
The tuklib module uses <byteswap.h> on GNU,
<sys/endian.h> on *BSDs and <sys/byteorder.h>
on Solaris, which may contain optimized code
like inline assembly.
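The header selection is roughly of this shape (a sketch; the
HAVE_* macros are the usual Autoconf style and the fallback is
plain C):

    #include <stdint.h>

    #if defined(HAVE_BYTESWAP_H)
    #  include <byteswap.h>       // GNU
    #  define my_bswap32(n) bswap_32(n)
    #elif defined(HAVE_SYS_ENDIAN_H)
    #  include <sys/endian.h>     // *BSDs
    #  define my_bswap32(n) bswap32(n)
    #elif defined(HAVE_SYS_BYTEORDER_H)
    #  include <sys/byteorder.h>  // Solaris
    #  define my_bswap32(n) BSWAP_32(n)
    #else
       // Portable fallback when no optimized header is available.
    #  define my_bswap32(n) ((uint32_t)( \
              ((uint32_t)(n) << 24) \
            | (((uint32_t)(n) & UINT32_C(0xFF00)) << 8) \
            | (((uint32_t)(n) >> 8) & UINT32_C(0xFF00)) \
            | ((uint32_t)(n) >> 24)))
    #endif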
It seems to be a problem in some cases if the same version of
XZ Utils produces different output on different endiannesses,
so this commit fixes that problem. The output
will still vary between different XZ Utils versions, but I
cannot avoid that for now.
This commit bloats the code on big endian systems by 1 KiB,
which should be OK since liblzma is bloated already. ;-)