mirror of https://github.com/hashcat/hashcat.git synced 2025-07-23 06:58:31 +00:00
Commit Graph

1556 Commits

Author SHA1 Message Date
Jens Steube
a66e667c90
Merge pull request #3724 from matrix/hashInfo2int
User Options: assigned -H to --hash-info && Hash-Info: show more details using -HH
2025-07-07 19:55:48 +02:00
Gabriele Gristina
f663abee44
Added a workaround to get rid of internal runtime memory leaks
From now on, especially in benchmark mode, hashcat no longer creates and destroys a context and command-queue for each enabled device every time it switches from one hash-mode to the next.
Specifically, when using OpenCL with an NVIDIA device it was not possible to complete the benchmark, because clCreateContext leaks memory that slowly consumes all available GPU memory until hashcat can no longer activate a new context and disables the device.

Avoid deprecated HIP functions

All hipCtx* functions have been declared deprecated, so we replaced them with their modern equivalents, which also fixes a critical bug in handling multiple AMD devices in the same system.
2025-07-06 21:28:37 +02:00
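A minimal sketch of the workaround idea described in the commit above, assuming a hypothetical per-device struct: the OpenCL context and command-queue are created once and then reused across hash-modes instead of being torn down and recreated each time (the actual hashcat backend code is more involved).

```c
#include <stddef.h>
#include <CL/cl.h>

// Hypothetical per-device runtime handles, created once and reused.
typedef struct
{
  cl_context       ctx;
  cl_command_queue cq;
} device_rt_t;

static int device_rt_get (device_rt_t *rt, cl_device_id dev)
{
  cl_int err;

  if (rt->ctx == NULL)
  {
    rt->ctx = clCreateContext (NULL, 1, &dev, NULL, NULL, &err);

    if (err != CL_SUCCESS) return -1;

    rt->cq = clCreateCommandQueue (rt->ctx, dev, 0, &err);

    if (err != CL_SUCCESS) return -1;
  }

  return 0; // later hash-modes reuse rt->ctx / rt->cq
}
```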
Gabriele Gristina
fba89b6888
Merge branch 'master' into hashInfo2int 2025-07-06 07:54:05 +02:00
Jens Steube
0df156e4c1
Merge branch 'master' into totalcandidates 2025-07-05 22:51:27 +02:00
Gabriele Gristina
bcc351068f
Metal Backend:
- added support for 2D/3D compute
- improved compute workload calculation
Makefile:
- updated MACOSX_DEPLOYMENT_TARGET to 15.0
Unit tests:
- updated install_modules.sh with Crypt::Argon2

Argon2 now works with Apple Metal
2025-07-03 22:06:32 +02:00
Gabriele Gristina
4d39f881fd
support 2D/3D kernel invocation with Metal 2025-07-03 10:26:51 +02:00
Jens Steube
696fa3b2ad Modified the automatic kernel-accel count reduction routine to also reduce kernel-thread count if insufficient device or host memory is available.
Reduced the fixed memory reservation size from 1GiB to 64MiB as a result.
Added a warning when the user sets a thread count on the command line higher than recommended by the runtime (based on available registers and shared memory).
Added host-side logic to detect true funnel shift support and disable kernels using it if not supported on the device.
Updated more plugins to limit register count to 128 on NVIDIA GPUs.
2025-06-30 19:38:54 +02:00
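A hedged sketch of the reduction routine described in the commit above (names and the memory model are simplified assumptions): kernel_accel is halved first, then kernel_threads, until the projected buffers fit within both the device and host memory limits.

```c
#include <stdint.h>
#include <stdbool.h>

// Hypothetical reduction routine: shrink kernel_accel first, then
// kernel_threads, until the projected buffer footprint fits into the
// smaller of the device and host memory limits.
static bool fit_accel_and_threads (uint32_t *accel, uint32_t *threads,
                                   const uint64_t bytes_per_workitem,
                                   const uint64_t device_mem_avail,
                                   const uint64_t host_mem_avail)
{
  const uint64_t limit = (device_mem_avail < host_mem_avail) ? device_mem_avail : host_mem_avail;

  while (((uint64_t) *accel) * (*threads) * bytes_per_workitem > limit)
  {
    if      (*accel   > 1) *accel   >>= 1;
    else if (*threads > 1) *threads >>= 1;
    else return false; // cannot fit even with minimal settings
  }

  return true;
}
```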
Gabriele Gristina
3b12c6b79d
Merge branch 'master' into hashInfo2int 2025-06-30 13:42:29 +02:00
Jens Steube
f8df94f457 Switched all async and non-blocking calls to synchronous and blocking ones. Kept the original async bindings intact. This avoids race conditions like the one fixed in the previous commit, with no performance impact.
Fixed a typedef issue for clEnqueueReadBuffer().
Updated Python/hcshared.py with missing entry for new salt_dimy attribute in salt_t struct.
Fixed a bug in the autotuner when determining the starting value for kernel loops, in cases where the iteration count is N-1 and not a multiple of 1024.
Updated additional plugins to use OPTI_TYPE_REGISTER_LIMIT.
2025-06-30 11:26:05 +02:00
Jens Steube
0c2ed0d199 Update plugins that benefit from an artificially limited register count (NVIDIA).
Update default hash settings to 64MiB:3:4 for Argon2 in -m 70000, following RFC 9106 recommendations.
Add option OPTS_TYPE_THREAD_MULTI_DISABLE: allows plugin developers to disable scaling the password candidate batch size based on device thread count. This can be useful for super slow hash algorithms that utilize threads differently, e.g., when the algorithm allows parallelization. Note: thread count for the device can still be set normally.
Add options OPTI_TYPE_SLOW_HASH_DIMY_INIT/LOOP/COMP: enable 2D launches for slow hash init/loop/comp kernel with dimensions X and Y. The Y value must be set via salt->salt_dimy attribute.
Change autotune kernel-loops start value to the lowest multiple of the target hash iteration count, if kernel_loops_min permits.
Fixed a bug in autotune where kernel_threads_max was not respected during initial init and loop-prepare kernel runs.
2025-06-29 14:39:14 +02:00
Gabriele Gristina
904da431ae
Merge remote-tracking branch 'upstream/master' into hashInfo2int 2025-06-28 11:13:45 +02:00
Jens Steube
d5050d1f32
Merge pull request #4218 from matrix/ripemd320
Added hash-modes: RIPEMD-320, HMAC-RIPEMD320 (key = $pass), HMAC-RIPEMD320 (key = $salt)
2025-06-27 22:05:50 +02:00
Jens Steube
974934dcdf Trying out a tweak to autotune behavior related to -u loop tuning.
Since loop values increase by doubling in autotune, a slow hash-mode
with, for example, 1000 iterations can end up with a suboptimal -u count.
Currently, autotuning starts at 1 and doubles (2, 4, 8, ..., 512, 1024).
If the maximum is 1000, autotune stops at 512, resulting in two kernel
calls: one with 512 iterations and another with 488.

The tweak attempts to find the smallest factor that, when repeatedly
doubled, reaches the target exactly.  For 1000, this would be 125
and for 1024, it would be 1.

However, this logic doesn’t align well with how hashcat handles slow
hash iterations. For instance, PBKDF2-based plugins typically set the
iteration count to N-1, since the first iteration is handled by the
`_init` kernel. So, a plugin might set 1023 instead of 1024, and in such
cases, the logic would incorrectly assume 1023 is the minimum factor,
which leads to suboptimal tuning.

To work around this, the factor-finder is executed twice: once with
the original iteration count and once with `iteration count + 1`.
The configuration that results in a lower starting point is used.

Other stuff:

- Fixed a critical bug in the autotuner

This bug was introduced a few days ago. The autotuner has the ability
to overtune the maximum allowed thread count under certain conditions.
For example, in unoptimized -a 0 cracking mode when using rules.
Several parts of the hashcat core require strict adherence to this limit,
especially when shared memory is involved.
To resolve this while retaining overtuning for compatible modes,
a new attribute `device_param->overtune_unfriendly` was introduced.
When set to true, it prevents the autotuner from modifying
`kernel_threads_max` and `kernel_accel_max`.
Four sections in `backend.c` have been updated to set this flag,
though additional areas may also require it.

- Moved the code that aligns `kernel_accel` to a multiple of the compute
  unit count into the overtune section.

- Fixed a bug in the HIP dynloader. It now reports actual error strings,
  provided the API returns them.
2025-06-27 21:52:57 +02:00
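A rough sketch of the factor-finder described in the commit above (function names are hypothetical): strip all factors of two from the iteration count, do the same for count + 1 to cover the N-1 convention used by PBKDF2-style plugins, and take the smaller result as the -u starting point.

```c
#include <stdint.h>

// Hypothetical factor-finder: the smallest value that reaches the target
// exactly by repeated doubling is the target with all factors of two
// stripped (1000 -> 125, 1024 -> 1).
static uint32_t odd_core (uint32_t v)
{
  while ((v > 1) && ((v & 1) == 0)) v >>= 1;

  return v;
}

// Run the search twice, with the plugin's iteration count and with
// count + 1 (PBKDF2-style plugins typically store N - 1), and take the
// lower starting point.
static uint32_t loops_start (const uint32_t iter_count)
{
  const uint32_t a = odd_core (iter_count);
  const uint32_t b = odd_core (iter_count + 1);

  return (a < b) ? a : b;
}
```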
Gabriele Gristina
a3afca56b8
Merge remote-tracking branch 'upstream/master' into ripemd320 2025-06-26 21:37:40 +02:00
Jens Steube
3182af1bc9 - Renamed shuffle() in inc_hash_scrypt.cl to avoid name collision with
shuffle() present in some OpenCL runtimes
- Updated autotune logic: if the best kernel-loop has not yet been found
  and the current kernel-loops setting results in a kernel runtime that
  is already above a certain threshold, do not skip to the kernel-threads
  or kernel-accel section if no variance is possible
- Revised all plugin module_unstable_warning() checks for
  AMD Radeon Pro W5700X GPU on Metal: rechecked with the latest
  Metal version and removed those now fixed
- Inform the user on startup when backend runtimes and devices are
  initialized
- Fixed some file permissions in the tools/ folder
2025-06-26 19:36:06 +02:00
Jens Steube
58fa783095 Enhanced the auto-tune engine: when a kernel runs with a single thread and no accel, it should finish quickly (ideally under 1 ms). If it doesn't, the kernel is likely overloaded with code. If such a kernel also uses barriers (e.g., to load shared storage with multiple threads), high iteration counts cause unnecessary thread waiting. To address this, we now skip increasing the loop count if the runtime exceeds either 1/8 of the target time (based on the -w setting) or a hard-coded threshold of 4 ms.
Improved shared memory handling for -m 10700. Removed the hard-coded limit of 256 threads and now dynamically check the device's shared memory pool to adapt threads accordingly.
Implemented a feature request to display non-default session names early during startup.
Added a check for the number of registers required by a kernel (CUDA and HIP only). This allows us to estimate the max threads per block before entering the auto-tune engine and make pre-adjustments.
Fixed Metal command encoder argument to work with the new auto-tuner's extra kernel invocation.
Fixed incorrect host memory calculation logic during automatic kernel-accel reduction for scrypt-based algorithms. This ensures memory constraints are respected.
Improved several plugins by setting maximum loop counts and others using the OPTS_TYPE_NATIVE_THREADS option.
Fixed compilation on Apple platforms by excluding '#include <sys/sysinfo.h>'.
2025-06-25 22:10:29 +02:00
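A minimal sketch of the loop-increase guard described in the first point above, assuming hypothetical names: the loop count stops growing once the measured runtime exceeds 1/8 of the target time or the hard 4 ms threshold.

```c
#include <stdbool.h>

// Hypothetical guard: stop increasing the loop count once a single-thread,
// no-accel run already exceeds 1/8 of the target time (from -w) or the
// hard-coded 4 ms threshold.
static bool skip_loop_increase (const double runtime_ms, const double target_ms)
{
  return (runtime_ms > (target_ms / 8.0)) || (runtime_ms > 4.0);
}
```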
Jens Steube
62a5a85dd6 Added 'next_power_of_two()' and moved both 'next_power_of_two()' and 'previous_power_of_two()' to 'shared.c'
Improved autotuner tweak logic and added boundary checks for accel and threads
Fixed available host memory detection on Windows
Fixed compilation error in MSYS2 native shell
Introduced an 8 GiB host memory usage limit per GPU, even if more is available
Replaced fixed-size host memory detection per GPU with a dynamic kernel-accel based method (similar to GPU memory detection)
Disabled hash-mode autodetection in the python bridge
Removed default invocation of 'rocm-smi' in 'benchmark_deep.pl' to avoid skewed initial results
Reduced default runtime in 'benchmark_deep.pl' scripts due to improved benchmark accuracy in hashcat in general
2025-06-25 11:21:51 +02:00
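A hedged sketch of the two power-of-two helpers mentioned above; the actual implementations in shared.c may handle edge cases (such as v = 0 or values near the top of the range) differently.

```c
#include <stdint.h>

// Largest power of two not exceeding v (sketch).
static uint32_t previous_power_of_two (const uint32_t v)
{
  uint32_t r = 1;

  while ((r << 1) <= v) r <<= 1;

  return r;
}

// Smallest power of two not below v (sketch).
static uint32_t next_power_of_two (const uint32_t v)
{
  uint32_t r = 1;

  while (r < v) r <<= 1;

  return r;
}
```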
Jens Steube
69a585fa4a Autotune refactoring II: dynamic threads-per-block
- Integrated occupancy hints from vendor APIs (CUDA, HIP) to set a
  dynamic threads-per-block limit per kernel instead of using static
  values.
- Added `find_tuning_function()` to identify the relevant kernel.
- Autotuner now runs in three stages: threads -> loops -> accel. The
  first two stages now stop increasing when the tested kernel runtime
  gets too close to the target runtime (96ms for `-w 3`), leaving
  headroom for the next stage to make finer adjustments.
- Accel tuning now uses a capped floating-point multiplier instead of
  powers of two.
- Removed workarounds for missing thread autotuning in plugins.
- Removed the hardcoded 4GiB host memory limit for accel. Added a
  cross-platform `get_free_memory()` to check actual free RAM during GPU
  initialization, preventing underutilization of high-end GPUs like the
  4090. If needed, users can still cap memory usage with `-T` or `-n`.
- Updated enums for ROCm 6.4.x and CUDA 12.9.
- Added code to detect kernel register spilling. That's relevant so we
  can keep enough global memory free for the runtime to handle spills
  efficiently.
2025-06-24 20:19:42 +02:00
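A hedged sketch of how an occupancy-based threads-per-block limit could be queried via the CUDA driver API, as described in the first bullet above; the real backend.c logic also folds in register and shared-memory data, and the HIP path uses the analogous hipModuleOccupancyMaxPotentialBlockSize.

```c
#include <cuda.h>

// Derive a per-kernel threads-per-block limit from the occupancy API
// (sketch only; the fallback value of 32 matches the conservative default
// thread count mentioned elsewhere in this log).
static int occupancy_threads_per_block (CUfunction fn)
{
  int min_grid_size = 0;
  int block_size    = 0;

  if (cuOccupancyMaxPotentialBlockSize (&min_grid_size, &block_size, fn, NULL, 0, 0) != CUDA_SUCCESS)
  {
    return 32; // conservative fallback
  }

  return block_size;
}
```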
Jens Steube
ed10e6a913 Autotune and Benchmark refactoring
This change affects three key areas, each improving autotuning:

- Autotune refactoring itself

The main autotune algorithm had become too complex to maintain and has
now been rewritten from scratch. The engine is now closer to the old
v6.0.0 version, using a much more straightforward approach.

Additionally, the backend is now informed when the autotune engine runs
its operations and runs an extra invisible kernel invocation. This
significantly improves runtime accuracy because the same caching
mechanisms that kick in during normal cracking sessions now also apply during
autotuning. This leads to more consistent and reliable automatic
workload tuning.

- Benchmarking and '--speed-only' accuracy bugs fixed

Benchmark runtimes had become too short, especially since the default
benchmark mask changed from '?b?b?b?b?b?b?b' to '?a?a?a?a?a?a?a?a'. For
very fast hashes like NTLM, benchmarks often stopped immediately when
base words needed to be regenerated, producing highly inaccurate
results.

This issue also misled users tuning '-n' values, as manually
oversubscribing kernels could mask the problem, creating the impression
that increasing '-n' had a larger impact on performance than it truly
does. While '-n' still has an effect, it’s not as significant. With this
fix, users achieve the same speed without needing to tune '-n' manually.

The bug was fixed by enforcing a minimum benchmark runtime of 4 seconds,
regardless of kernel runtime or kernel type. This ensures more stable
and realistic benchmark results, but typically increases the benchmark
duration by up to 4 seconds.

- Kernel-Threads set to 32 and plugin configuration cleanup

Some plugin configurations existed solely to work around the old
benchmarking bug and can now be removed. For example,
'OPTS_TYPE_MAXIMUM_THREADS' is no longer required and has been removed
from all plugins, although the parameter itself remains to avoid
breaking custom plugins.

Because increasing threads beyond 32 no longer offers meaningful
performance gains, the default is now capped at 32 (unless overridden
with '-T'). This simplifies GPU memory management. Currently, work-item
counts are indirectly limited by buffer sizes (e.g., 'pws_buf[]'), which
must not exceed 4 GiB (a hard-coded limit). This buffer size depends on
the product of 'kernel-accel', 'kernel-threads', and the device’s
compute units. By reducing the default threads from 1024 to 32, there is
now more space available for base words.
2025-06-22 20:17:52 +02:00
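A hedged arithmetic sketch of the buffer-size constraint described in the last paragraph above (names are hypothetical; the real accounting lives in backend.c).

```c
#include <stdint.h>

// pws buffer footprint: work-items = accel * threads * compute units,
// each holding one password entry; the product must stay below 4 GiB.
static uint64_t pws_buf_size (const uint32_t kernel_accel,
                              const uint32_t kernel_threads,
                              const uint32_t compute_units,
                              const uint64_t entry_size)
{
  return (uint64_t) kernel_accel * kernel_threads * compute_units * entry_size;
}
// Dropping the default threads from 1024 to 32 shrinks this product 32x,
// leaving far more headroom below the 4 GiB cap for base words.
```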
Jens Steube
b7c8fcf27c Removed shared-memory based optimization for SCRYPT on HIP, because the shared-memory buffer is incompatible with TMTO, which limits SCRYPT-R to a maximum of 8. This change also simplifies the code, allowing removal of large sections of duplicated code. Removed the section in scrypt_module_extra_tuningdb_block() that increased TMTO when there was insufficient shared memory, as this is no longer applicable.
Refactored inc_hash_scrypt.cl almost completely and improved macro names in inc_hash_scrypt.h. Adapted all existing SCRYPT-based plugins to the new standard. If you have custom SCRYPT-based plugins, use hash-mode 8900 as a reference.
Fixed some compiler warnings in inc_platform.cl.
Cleaned up code paths in inc_vendor.h for finding values for HC_ATTR_SEQ and DECLSPEC.
Removed option --device-as-default-execution-space from nvrtc for hiprtc compatibility. As a result, added __device__ back to DECLSPEC.
Removed option --restrict from nvrtc compile options since we actually alias some buffers.
Added --gpu-max-threads-per-block to hiprtc options.
Added -D MAX_THREADS_PER_BLOCK to OpenCL options (currently unused).
Removed all OPTS_TYPE_MP_MULTI_DISABLE entries for SNMPv3-based plugins.
These plugins consume large amounts of memory and, for this reason, kernel_accel max is limited to 256. This may still be high, but hashcat will automatically tune down kernel_accel if insufficient memory is detected.
Removed command `rocm-smi --resetprofile --resetclocks --resetfans` from benchmark_deep.pl, since some AMD GPUs become artificially slow for a while after running these commands.
Replaced load_source() with file_to_buffer() from shared.c, which does the exact same operations.
Moved suppress_stderr() and restore_stderr() to shared.c and reused them in both Python bridges and opencl_test_instruction(), where the same type of code existed.
2025-06-21 07:09:20 +02:00
Jens Steube
c033873e4b Update hipDeviceAttribute_t for ROCm 6.x
Add hipDeviceProp_t and bindings for hipGetDeviceProperties(), which is required to retrieve gcnArchName[].
Add gcnArchName[] to select the correct --gpu-architecture value for a specific device when using hiprtc.
Include sm_major and sm_minor for CUDA and gcnArchName for HIP in the kernel filename hash.
Update nvrtc_options[] and hiprtc_options[] to avoid unused variables, eliminating the use of --restrict as a placeholder and preventing nvrtc from aborting.
Add check_file_suffix() and remove_file_suffix() helper functions.
2025-06-18 18:29:47 +02:00
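A hedged sketch of how the --gpu-architecture value could be derived from gcnArchName[], as described above; note that hashcat loads the HIP entry points dynamically rather than linking the runtime directly, so this direct-call form is an assumption for illustration.

```c
#include <stdio.h>
#include <hip/hip_runtime.h>

// Build the hiprtc --gpu-architecture option from the device's
// gcnArchName[] (direct-call form for illustration only).
static int build_arch_option (const int dev, char *out, const size_t out_sz)
{
  hipDeviceProp_t prop;

  if (hipGetDeviceProperties (&prop, dev) != hipSuccess) return -1;

  snprintf (out, out_sz, "--gpu-architecture=%s", prop.gcnArchName);

  return 0;
}
```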
Jens Steube
07395626fa Introduce hashes_init_stage5() and call module_extra_tmp_size() there. At this stage, the self-test hash is initialized and its values can be used.
Remove hard-coded SCRYPT N, R, and P values in modules, except where they are intentionally hardcoded.
Fix a bug that always caused a TMTO value of 1, even when it was not needed.
Respect device_available_mem and device_maxmem_alloc values even if a reliable low-level free memory API is present, and always select the lowest of all available limits.
Fix benchmark_deep.pl mask to avoid UTF-8 rejects.
Improve error messages when the check verifying that all SCRYPT configuration settings across all hashes are identical is triggered.
Also improve the error message shown when the SCRYPT configuration of the self-test hash does not match that of the target hash.
Fix a bug where a low-tuned SCRYPT hash combined with a TMTO could result in fewer than 1024 iterations, which breaks the hard-coded minimum of 1024 iterations in the SCRYPT kernel.
2025-06-15 14:13:48 +02:00
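A minimal sketch of the limit selection described above, under the assumption that each limit is expressed in bytes and a value of 0 means "unknown" for the low-level free-memory query.

```c
#include <stdint.h>

// Usable memory per device is the minimum of every known limit; a value
// of 0 for the low-level query means "unknown" and is ignored.
static uint64_t usable_device_mem (const uint64_t device_available_mem,
                                   const uint64_t device_maxmem_alloc,
                                   const uint64_t lowlevel_free_mem)
{
  uint64_t limit = device_available_mem;

  if (device_maxmem_alloc < limit) limit = device_maxmem_alloc;

  if ((lowlevel_free_mem > 0) && (lowlevel_free_mem < limit)) limit = lowlevel_free_mem;

  return limit;
}
```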
Gabriele Gristina
1096d961a1
Backend: Updated filename chksum format to prevent invalid cache on Apple Silicon when switching arch 2025-06-10 23:19:12 +02:00
Jens Steube
c87a87f992 Improvements to SCRYPT autotuning strategy
General:

The logic for calculating the SCRYPT workload has been moved
from module_extra_buffer_size() to module_extra_tuningdb_block().
Previously, this function just returned values from a static
tuning file. Now, it actually computes tuning values on the fly
based on the device's resources and SCRYPT parameters. This
was always possible; it just wasn't used that way until now.

After running the calculation, the calculated kernel_accel value
is injected into the tuning database as if it had come from a
file. The tmto value is stored internally.

Users can still override kernel-threads, kernel-accel, and
scrypt-tmto via the command line or via tuningdb file.

module_extra_tuningdb_block():

This is now where kernel_accel and tmto are automatically
calculated.

The logic for accel and tmto is now separated and more
flexible. Whether the user is using defaults, tuningdb entries, or
manual command line overrides, the code logic will try to make
smart choices based on what's actually available on the device.

First, it tries to find a kernel_accel value that fits into
available memory. It starts with a base value and simulates
tmto=1 or 2 (which is typically a good choice on GPUs).

It also leaves room for other buffers (like pws[], tmps[], etc.).
If the result is close to the actual processor count,
it gets clamped.

This value is then added to the tuning database, so hashcat can pick
it up during startup.

Once that's set, it derives tmto using available memory, thread
count, and the actual SCRYPT parameters.

module_extra_buffer_size():

This function now just returns the size of the SCRYPT B[] buffer,
based on the tmto that was already calculated.

kernel_threads:

Defaults are now set to 32 threads in most cases. On AMD GPUs,
64 threads might give a slight performance bump, but 32 is more
consistent and reliable.

For very memory-heavy algorithms (like Ethereum Wallet), it
scales down the thread count.

Here's a rough reference for other SCRYPT-based modes:

- 64 MiB: 16 threads
- 256 MiB: 4 threads

Tuning files:

All built-in tuningdb entries have been removed, because they
shouldn’t be needed anymore. But you can still add custom entries
if needed. There’s even a commented-out example in the tuningdb
file for mode 22700.

Free memory handling:

Getting the actual amount of free GPU memory is critical for
this to work right. Unfortunately, none of the common GPGPU APIs
give reliable numbers. We now query low-level interfaces like
SYSFS (AMD) and NVML (NVIDIA). Support for those APIs is in
place already, except for ADL, which still needs to be added.

Because of this, hwmon support (which handles those low-level
queries) can no longer be disabled.
2025-06-09 11:02:34 +02:00
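A simplified, hedged sketch of the kernel_accel search outlined above: grow accel while the SCRYPT B[] footprint (reduced by the simulated tmto) still fits into free device memory; the real module_extra_tuningdb_block() also reserves room for pws[], tmps[] and other buffers and clamps the result near the processor count.

```c
#include <stdint.h>

// Per work-item SCRYPT B[] footprint, reduced by the simulated tmto.
static uint64_t scrypt_b_size (const uint64_t N, const uint64_t r, const uint32_t tmto)
{
  return (128ULL * r * N) >> tmto;
}

// Grow kernel_accel while the total B[] footprint still fits into the
// free memory reported for the device.
static uint32_t pick_kernel_accel (const uint64_t free_mem,
                                   const uint32_t kernel_threads,
                                   const uint32_t compute_units,
                                   const uint64_t N, const uint64_t r,
                                   const uint32_t tmto)
{
  const uint64_t per_workitem = scrypt_b_size (N, r, tmto);

  uint32_t accel = 1;

  while (per_workitem * kernel_threads * compute_units * (accel + 1) <= free_mem)
  {
    accel++;
  }

  return accel;
}
```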
Jens Steube
ed6e967425 Add experimental SCRYPT N-parameter auto-discovery
Remove existing tuningdb entries due to salsa_r() core
refactor. Update tuningdb engine to prefer file entries,
when available, over automatic discovery.

Improve memory-free detection per device, default
--backend-devices-keepfree is now set to 0.

Old brute-force OpenCL behavior can be restored using
--backend-devices-keepfree 100.
2025-06-08 07:32:32 +02:00
Jens Steube
ac2ed9f402 - Remove old iconv patches (replaced by cmake)
- Replace Queues in hcmp/hcsp and make code more pythonic
- Synchronize python thread count in hcmp with detected cores
- Move setting PYTHON_GIL to shared.c
- Fix allocating and freeing aligned memory
- Update BUILD guides for WSL and macOS
- Fix python plugin documentation for macOS
2025-06-05 06:56:38 +02:00
Jens Steube
c8d81016ca Fix compile error on apple silicon 2025-06-04 10:41:24 +02:00
Jens Steube
d60658102b Added option --backend-devices-keepfree to configure X percentage of device memory available to keep free 2025-06-04 10:13:29 +02:00
Jens Steube
3d4901a60c - Add CPU SIMD detection at runtime, relevant for bridge plugins
- Update BUILD_WSL.md document, add preparation for python bridge
2025-06-04 10:09:44 +02:00
Jens Steube
b02b1b5033 - Add code to recognize Microsoft's OpenCL D3D12 platform
- Skip memory-free detection on MS OpenCL platform to avoid crashes
- Improve salt usage of 70100/70200, use decoder/kernels from 8900
- Add REPLACE bridge type support (e.g. BRIDGE_TYPE_REPLACE_LOOP)
- Switch 70000, 70100 and 70200 to BRIDGE_TYPE_REPLACE_LOOP
- Add synchronization barriers on d2h copy when using bridges
- Improve speed status display updates when using bridges
- Set AMD_DIRECT_DISPATCH=0 to reduce CPU burning loop on AMD backends
- Set benchmark/selftest hash on 70100/70200 to 16:8:1
2025-06-02 06:59:36 +02:00
Chick3nman
310e9ee79a Add --total-candidates flag and functionality 2025-05-30 14:13:43 -05:00
Jens Steube
ceb5ff5641 The Assimilation Bridge (Framework) 2025-05-29 15:38:13 +02:00
Gabriele Gristina
31a19b9acf Added hash-modes: RIPEMD-320, HMAC-RIPEMD320 (key = $pass), HMAC-RIPEMD320 (key = $salt) 2025-05-26 20:28:13 +02:00
Jens Steube
b9938f34d6
Merge pull request #4201 from matrix/RC4_KPA
Added hash-modes: RC4 40-bit DropN, RC4 72-bit DropN, RC4 104-bit DropN
2025-05-18 20:39:24 +02:00
Gabriele Gristina
55f53ba076 assigned -H to --hash-info 2025-05-11 17:16:03 +02:00
Gabriele Gristina
120e758be9 Added options --benchmark-min and --benchmark-max to set a hash-mode range to be used during the benchmark 2025-05-07 18:46:51 +02:00
Gabriele Gristina
ff6185e9b4 Added hash-modes: RC4 40-bit DropN, RC4 72-bit DropN, RC4 104-bit DropN 2025-05-06 20:44:50 +02:00
fse-a
f8c0899670 Increased the virtual backend limit
2024-01-25 10:27:38 +01:00
jsteube
dad46090d7 Fix variable name dynamicx_t in hashinfo_t 2023-11-16 19:01:25 +00:00
jsteube
7faa6a94a5 Add dynamicx_t type for later use in potfile.c and outfile.c and also add to hashinfo_t 2023-11-15 17:06:37 +00:00
jsteube
2029be782e Refactor extract_dynamic_x() to extract_dynamicx_hash() and add code 2023-11-09 15:04:32 +00:00
jsteube
2d3ebf1d4e Add global dynamic-x hash mode extraction function 2023-11-08 14:43:45 +00:00
jsteube
b66527f0d2 Prepare commandline parameter for upcoming partial support for $dynamic_X$ 2023-11-01 16:12:44 +00:00
jsteube
b1b349e8c9 Prepare commandline parameter for upcoming partial support for $dynamic_X$ 2023-10-31 10:38:10 +00:00
jsteube
adbcef6909 Refactor optimized_kernel_enable variable to just optimized_kernel, same idea as outfile_autohex variable 2023-10-30 15:09:36 +00:00
jsteube
e55b331058 Refactor potfile_disable variable to just potfile, same idea as outfile_autohex variable 2023-10-21 16:42:29 +00:00
jsteube
b597a96328 Refactor restore_disable variable to restore_enable, try to make variable names always positive for easier handling 2023-10-20 12:16:18 +00:00
jsteube
ff94f4c9d0 Refactor self_test_disable variable to just self_test, same idea as outfile_autohex variable 2023-10-19 06:08:24 +00:00
jsteube
7b52dad8c9 Refactor multiply_accel_disable variable to just multiply_accel, same idea as outfile_autohex variable 2023-10-18 12:41:09 +00:00
jsteube
bf04a158aa Refactor markov_disable variable to just markov, same idea as outfile_autohex variable 2023-10-17 12:17:05 +00:00