Update default hash settings to 64MiB:3:4 for Argon2 in -m 70000, following RFC 9106 recommendations.
Add option OPTS_TYPE_THREAD_MULTI_DISABLE: allows plugin developers to disable scaling the password candidate batch size based on device thread count. This can be useful for super slow hash algorithms that utilize threads differently, e.g., when the algorithm allows parallelization. Note: thread count for the device can still be set normally.
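A minimal sketch of how a plugin might opt in, assuming the usual module layout (u64 is hashcat's unsigned 64-bit typedef); only OPTS_TYPE_THREAD_MULTI_DISABLE is the new flag, the surrounding flags are illustrative:

    u64 module_opts_type (MAYBE_UNUSED const hashconfig_t *hashconfig, MAYBE_UNUSED const user_options_t *user_options, MAYBE_UNUSED const user_options_extra_t *user_options_extra)
    {
      const u64 opts_type = OPTS_TYPE_STOCK_MODULE
                          | OPTS_TYPE_PT_GENERATE_LE
                          | OPTS_TYPE_THREAD_MULTI_DISABLE; // keep the candidate batch size independent of the device thread count

      return opts_type;
    }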
Add options OPTI_TYPE_SLOW_HASH_DIMY_INIT/LOOP/COMP: enable 2D launches for the slow-hash init/loop/comp kernels with dimensions X and Y. The Y value must be set via the salt->salt_dimy attribute.
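A hedged sketch of a plugin opting into a 2D loop-kernel launch; the flag combination and the exact place where salt_dimy is assigned are assumptions for illustration:

    u32 module_opti_type (MAYBE_UNUSED const hashconfig_t *hashconfig, MAYBE_UNUSED const user_options_t *user_options, MAYBE_UNUSED const user_options_extra_t *user_options_extra)
    {
      const u32 opti_type = OPTI_TYPE_ZERO_BYTE
                          | OPTI_TYPE_SLOW_HASH_DIMY_LOOP; // launch the loop kernel with both an X and a Y dimension

      return opti_type;
    }

    // later, for example in module_hash_decode(), the Y dimension is supplied per salt:
    // salt->salt_dimy = parallelism; // hypothetical value parsed from the hash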
Change the autotune kernel-loops start value to the smallest factor of the target hash iteration count (the value that, when repeatedly doubled, reaches the count exactly), if kernel_loops_min permits.
Fixed a bug in autotune where kernel_threads_max was not respected during initial init and loop-prepare kernel runs.
Since loop values increase by doubling in autotune, a slow hash-mode
with, for example, 1000 iterations can end up with a suboptimal -u count.
Currently, autotuning starts at 1 and doubles (2, 4, 8, ..., 512, 1024).
If the maximum is 1000, autotune stops at 512, resulting in two kernel
calls: one with 512 iterations and another with 488.
The tweak attempts to find the smallest factor that, when repeatedly
doubled, reaches the target exactly. For 1000, this would be 125
and for 1024, it would be 1.
However, this logic doesn’t align well with how hashcat handles slow
hash iterations. For instance, PBKDF2-based plugins typically set the
iteration count to N-1, since the first iteration is handled by the
`_init` kernel. So, a plugin might set 1023 instead of 1024, and in such
cases, the logic would incorrectly assume 1023 is the minimum factor,
which leads to suboptimal tuning.
To work around this, the factor-finder is executed twice: once with
the original iteration count and once with `iteration count + 1`.
The configuration that results in a lower starting point is used.
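For illustration, a compact sketch of that factor-finder (not the exact autotune code; u32 is hashcat's unsigned 32-bit typedef): strip powers of two from the iteration count, do the same for count + 1, and keep whichever yields the lower starting point:

    static u32 smallest_doubling_factor (u32 n)
    {
      while (n > 1 && (n & 1) == 0) n >>= 1; // e.g. 1000 -> 125, 1024 -> 1, 1023 -> 1023

      return n;
    }

    static u32 kernel_loops_start (const u32 iter_count, const u32 kernel_loops_min)
    {
      const u32 f1 = smallest_doubling_factor (iter_count);
      const u32 f2 = smallest_doubling_factor (iter_count + 1); // covers plugins that store N-1 iterations

      u32 f = (f1 < f2) ? f1 : f2;

      if (f < kernel_loops_min) f = kernel_loops_min; // only if kernel_loops_min permits

      return f;
    }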
Other stuff:
- Fixed a critical bug in the autotuner
This bug was introduced a few days ago. The autotuner has the ability
to overtune the maximum allowed thread count under certain conditions.
For example, in unoptimized -a 0 cracking mode when using rules.
Several parts of the hashcat core require strict adherence to this limit,
especially when shared memory is involved.
To resolve this while retaining overtuning for compatible modes,
a new attribute `device_param->overtune_unfriendly` was introduced.
When set to true, it prevents the autotuner from modifying
`kernel_threads_max` and `kernel_accel_max`.
Four sections in `backend.c` have been updated to set this flag,
though additional areas may also require it.
- Moved the code that aligns `kernel_accel` to a multiple of the compute
unit count into the overtune section.
- Fixed a bug in the HIP dynloader. It now reports actual error strings,
provided the API returns them.
shuffle() present in some OpenCL runtimes
- Updated autotune logic: if the best kernel-loops value has not yet been
found and the current kernel-loops setting results in a kernel runtime
that is already above a certain threshold, do not skip to the
kernel-threads or kernel-accel section if no variance is possible
- Revised all plugin module_unstable_warning() checks for
AMD Radeon Pro W5700X GPU on Metal: rechecked with the latest
Metal version and removed those now fixed
- Inform the user on startup when backend runtimes and devices are
initialized
- Fixed some file permissions in the tools/ folder
Improved shared memory handling for -m 10700. Removed the hard-coded limit of 256 threads; the code now dynamically checks the device's shared memory pool and adapts the thread count accordingly.
Implemented a feature request to display non-default session names early during startup.
Added a check for the number of registers required by a kernel (CUDA and HIP only). This allows us to estimate the max threads per block before entering the auto-tune engine and make pre-adjustments.
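As a hedged sketch of the idea (CUDA driver API shown; HIP has analogous calls; names and the warp rounding are illustrative), the per-thread register count of the compiled kernel and the device's per-block register limit bound the usable threads per block:

    #include <cuda.h>

    static int estimate_max_threads_per_block (CUfunction func, CUdevice dev)
    {
      int regs_per_thread = 0;
      int regs_per_block  = 0;

      if (cuFuncGetAttribute (&regs_per_thread, CU_FUNC_ATTRIBUTE_NUM_REGS, func) != CUDA_SUCCESS) return -1;
      if (cuDeviceGetAttribute (&regs_per_block, CU_DEVICE_ATTRIBUTE_MAX_REGISTERS_PER_BLOCK, dev) != CUDA_SUCCESS) return -1;

      if (regs_per_thread == 0) return -1; // attribute unavailable, fall back to defaults

      int max_threads = regs_per_block / regs_per_thread;

      max_threads -= max_threads % 32; // round down to a full warp

      return max_threads;
    }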
Fixed Metal command encoder argument to work with the new auto-tuner's extra kernel invocation.
Fixed incorrect host memory calculation logic during automatic kernel-accel reduction for scrypt-based algorithms. This ensures memory constraints are respected.
Improved several plugins by setting maximum loop counts, and others by using the OPTS_TYPE_NATIVE_THREADS option.
Fixed compilation on Apple platforms by excluding '#include <sys/sysinfo.h>'.
- Integrated occupancy hints from vendor APIs (CUDA, HIP) to set a
dynamic threads-per-block limit per kernel instead of using static
values.
- Added `find_tuning_function()` to identify the relevant kernel.
- Autotuner now runs in three stages: threads -> loops -> accel. The
first two stages now stop increasing when the tested kernel runtime
gets too close to the target runtime (96ms for `-w 3`), leaving
headroom for the next stage to make finer-grained adjustments.
- Accel tuning now uses a capped floating-point multiplier instead of
powers of two.
- Removed workarounds for missing thread autotuning in plugins.
- Removed the hardcoded 4GiB host memory limit for accel. Added a
cross-platform `get_free_memory()` to check actual free RAM during GPU
initialization, preventing underutilization of high-end GPUs like the
4090 (a sketch of such a helper follows this list). If needed, users can
still cap memory usage with `-T` or `-n`.
- Updated enums for ROCm 6.4.x and CUDA 12.9.
- Added code to detect kernel register spilling. This is relevant so we
can keep enough global memory free for the runtime to handle spills
efficiently.
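A minimal cross-platform sketch of such a free-RAM check; the function name follows the changelog, but the body is an assumption (Linux path shown, other platforms would use their native equivalents such as sysctl or GlobalMemoryStatusEx):

    #include <stdint.h>

    #if defined (__linux__)
    #include <sys/sysinfo.h>

    static uint64_t get_free_memory (void)
    {
      struct sysinfo info;

      if (sysinfo (&info) != 0) return 0;

      return (uint64_t) info.freeram * info.mem_unit; // bytes
    }
    #else
    static uint64_t get_free_memory (void)
    {
      return 0; // "unknown" on platforms not sketched here
    }
    #endif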
Add hipDeviceProp_t and bindings for hipGetDeviceProperties(), which is required to retrieve gcnArchName[].
Add gcnArchName[] to select the correct --gpu-architecture value for a specific device when using hiprtc.
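A short sketch (assuming the HIP runtime API, error handling trimmed) of how gcnArchName[] can be turned into the hiprtc --gpu-architecture option:

    #include <hip/hip_runtime.h>
    #include <stdio.h>

    static int build_gpu_arch_option (const int device_id, char *out, const size_t out_sz)
    {
      hipDeviceProp_t prop;

      if (hipGetDeviceProperties (&prop, device_id) != hipSuccess) return -1;

      // gcnArchName is something like "gfx1030", possibly with feature suffixes
      return snprintf (out, out_sz, "--gpu-architecture=%s", prop.gcnArchName);
    }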
Include sm_major and sm_minor for CUDA and gcnArchName for HIP in the kernel filename hash.
Update nvrtc_options[] and hiprtc_options[] to avoid unused variables, eliminating the use of --restrict as a placeholder and preventing nvrtc from aborting.
Add check_file_suffix() and remove_file_suffix() helper functions.
General:
The logic for calculating the SCRYPT workload has been moved
from module_extra_buffer_size() to module_extra_tuningdb_block().
Previously, this function just returned values from a static
tuning file. Now, it actually computes tuning values on the fly
based on the device's resources and SCRYPT parameters. This
was always possible; it just wasn't used that way until now.
After running the calculation, the calculated kernel_accel value
is injected into the tuning database as if it had come from a
file. The tmto value is stored internally.
Users can still override kernel-threads, kernel-accel, and
scrypt-tmto via the command line or via tuningdb file.
module_extra_tuningdb_block():
This is now where kernel_accel and tmto are automatically
calculated.
The logic for accel and tmto is now separated and more
flexible. Whether the user is using defaults, tuningdb entries, or
manual command line overrides, the code logic will try to make
smart choices based on what's actually available on the device.
First, it tries to find a kernel_accel value that fits into
available memory. It starts with a base value and simulates
tmto=1 or 2 (which is typically a good choice on GPUs).
It also leaves room for other buffers (like pws[], tmps[], etc.).
If the result is close to the actual processor count,
it gets clamped.
This value is then added to the tuning database, so hashcat can pick
it up during startup.
Once that's set, it derives tmto using available memory, thread
count, and the actual SCRYPT parameters.
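A very rough sketch of the accel-fitting idea described above; the names, the sizing, and the clamp are assumptions rather than the real module_extra_tuningdb_block() code:

    #include <stdint.h>

    static uint32_t find_kernel_accel (const uint64_t free_mem,   // device memory left after pws[], tmps[], etc.
                                       const uint64_t scrypt_N,
                                       const uint64_t scrypt_r,
                                       const uint32_t kernel_threads,
                                       const uint32_t device_processors)
    {
      for (uint32_t tmto = 1; tmto <= 2; tmto++) // simulate tmto=1 or 2, typically good on GPUs
      {
        const uint64_t size_per_pw = (128 * scrypt_r * scrypt_N) >> tmto; // V[] scratchpad per password

        uint32_t accel = (uint32_t) (free_mem / (size_per_pw * kernel_threads * device_processors));

        if (accel == 0) continue; // does not fit with this tmto, try a higher one

        if (accel > device_processors) accel = device_processors; // illustrative clamp near the processor count

        return accel;
      }

      return 1; // fall back to the minimum
    }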
module_extra_buffer_size():
This function now just returns the size of the SCRYPT B[] buffer,
based on the tmto that was already calculated.
kernel_threads:
Defaults are now set to 32 threads in most cases. On AMD GPUs,
64 threads might give a slight performance bump, but 32 is more
consistent and reliable.
For very memory-heavy algorithms (like Ethereum Wallet), it
scales down the thread count.
Here's a rough reference for other SCRYPT-based modes:
- 64 MiB: 16 threads
- 256 MiB: 4 threads
Tuning files:
All built-in tuningdb entries have been removed, because they
shouldn’t be needed anymore. But you can still add custom entries
if needed. There’s even a commented-out example in the tuningdb
file for mode 22700.
Free memory handling:
Getting the actual amount of free GPU memory is critical for
this to work right. Unfortunately, none of the common GPGPU APIs
give reliable numbers. We now query low-level interfaces like
SYSFS (AMD) and NVML (NVIDIA). Support for those APIs is in
place already, except for ADL, which still needs to be added.
Because of this, hwmon support (which handles those low-level
queries) can no longer be disabled.
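As an illustration of those low-level queries (error handling trimmed; paths and indices are examples, not hashcat's actual device mapping):

    #include <nvml.h>
    #include <stdio.h>

    static unsigned long long nvidia_free_vram (const unsigned int nvml_index)
    {
      nvmlDevice_t dev;
      nvmlMemory_t mem;

      if (nvmlInit_v2 () != NVML_SUCCESS) return 0; // nvmlShutdown() omitted in this sketch
      if (nvmlDeviceGetHandleByIndex_v2 (nvml_index, &dev) != NVML_SUCCESS) return 0;
      if (nvmlDeviceGetMemoryInfo (dev, &mem) != NVML_SUCCESS) return 0;

      return mem.free; // bytes
    }

    static unsigned long long amd_free_vram_sysfs (const char *card) // e.g. "card0"
    {
      char path[256];
      unsigned long long total = 0, used = 0;

      snprintf (path, sizeof (path), "/sys/class/drm/%s/device/mem_info_vram_total", card);
      FILE *fp = fopen (path, "r");
      if (fp == NULL || fscanf (fp, "%llu", &total) != 1) { if (fp) fclose (fp); return 0; }
      fclose (fp);

      snprintf (path, sizeof (path), "/sys/class/drm/%s/device/mem_info_vram_used", card);
      fp = fopen (path, "r");
      if (fp == NULL || fscanf (fp, "%llu", &used) != 1) { if (fp) fclose (fp); return 0; }
      fclose (fp);

      return (total > used) ? total - used : 0;
    }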
Remove existing tuningdb entries due to salsa_r() core
refactor. Update tuningdb engine to prefer file entries,
when available, over automatic discovery.
Improve memory-free detection per device; the default
--backend-device-keepfree is now set to 0.
Old brute-force OpenCL behavior can be restored using
--backend-device-keepfree 100.
- Skip memory-free detection on MS OpenCL platform to avoid crashes
- Improve salt usage of 70100/70200, use decoder/kernels from 8900
- Add REPLACE bridge type support (e.g., BRIDGE_TYPE_REPLACE_LOOP)
- Switch 70000, 70100 and 70200 to BRIDGE_TYPE_REPLACE_LOOP
- Add synchronization barriers on d2h copy when using bridges
- Improve speed status display updates when using bridges
- Set AMD_DIRECT_DISPATCH=0 to reduce the CPU-burning busy loop on AMD backends
- Set benchmark/selftest hash on 70100/70200 to 16:8:1
Various types included in zlib.h such as gzFile are implicitly included
with minizip. Get rid of this behavior and explicitly include zlib.h.
Signed-off-by: Rosen Penev <rosenp@gmail.com>