mirror of https://github.com/hashcat/hashcat.git synced 2025-07-22 14:38:55 +00:00
Commit Graph

9858 Commits

Author SHA1 Message Date
hashcat-bot
c4c01868d0
Merge pull request #4103 from matrix/stdout_slow-candidates
do not allow --stdout and --slow-candidates
2025-07-09 09:50:09 +02:00
hashcat-bot
71e3c87e01
Merge pull request #4264 from DhruvTheDev1/patch-4
Fix: correct typo and change "Hashcat" to "hashcat"
2025-07-09 09:47:28 +02:00
hashcat-bot
4654eccbc4
Merge pull request #4265 from DhruvTheDev1/patch-5
Update hashcat-assimilation-bridge.md
2025-07-09 09:47:00 +02:00
hashcat-bot
470954de71
Merge pull request #4296 from matrix/argon2_metal_moreStable
got stable cracking with argon2 on Apple Metal
2025-07-09 09:46:07 +02:00
hashcat-bot
b5856d793c
Merge pull request #4149 from Sc00bz/totp
Added support for multiple TOTP codes
2025-07-09 09:44:43 +02:00
Jens Steube
290360ee55 Merge PR #4189 with minor edits 2025-07-09 09:14:21 +02:00
luke
5f41bfa3f4 Updated to latest hashcat base 2025-07-09 09:14:09 +02:00
Jens Steube
c4c4a9fdc5 Merge PR #4194 2025-07-09 09:10:33 +02:00
Gabriele Gristina
adbf9d175b
using simd_shuffle on Apple Metal for argon2 2025-07-09 08:16:00 +02:00
Gabriele Gristina
5210ccd50d
got more stable cracking with argon2 on Apple Metal 2025-07-08 22:44:23 +02:00
Jens Steube
853b149561 Argon2: add early support for multihash mixed mode cracking
This commit introduces initial support for mixed mode multihash cracking
in Argon2. Although I was skeptical at first, the final solution turned
out better than expected with only a minimal speed loss (1711H/s ->
1702H/s).

Unit tests have been updated to generate random combinations of
Argon2-I/D/ID with randomized m, t, and p values. So far, results look
solid.

Note: This is a complex change and may have undiscovered edge cases.

Some optimization opportunities remain. JIT-based optimizations are not
fully removed. We could also detect single-hash scenarios at runtime
and disable self-tests to re-enable JIT. Currently, the kernel workload
is sized based on the largest hash to avoid out-of-bound memory access.
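
A minimal sketch of that sizing step, assuming a hypothetical per-hash memory_cost[] array (the Argon2 m parameter, in KiB); the names are illustrative, not the actual hashcat structures:

  #include <stdint.h>

  // Sketch: size the per-candidate scratch area from the largest hash in the
  // list, so no work-item can read or write past the end of its buffer, even
  // when hashes with smaller m values run in the same kernel launch.
  static uint64_t scratch_per_password (const uint32_t *memory_cost, const uint32_t hashes_cnt)
  {
    uint64_t need_max = 0;

    for (uint32_t i = 0; i < hashes_cnt; i++)
    {
      const uint64_t need = (uint64_t) memory_cost[i] * 1024;

      if (need > need_max) need_max = need;
    }

    return need_max;
  }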
2025-07-08 20:46:16 +02:00
Jens Steube
91d51b0df2
Merge pull request #4270 from roycewilliams/master
show max length in Kernel.Feature status
2025-07-08 13:25:16 +02:00
Jens Steube
b98d5d5f8a Fixed out-of-boundary read for -a 9 when using the new OPTS_TYPE_THREAD_MULTI_DISABLE parameter. This only affected Argon2.
Fixed compiler warnings in inc_hash_argon2.cl.
Moved argon2_tmp_t and argon2_extra_t typedefs from argon2_common.c back to the module to allow plugin developers to modify them when using Argon2 as a primitive.
Slightly improved autotune behavior for edge cases such as 8700 and 18600, where some algorithms started with a theoretically excessive value, leaving no room for proper tuning.
Removed argon2_module_kernel_threads_min() and argon2_module_kernel_threads_max() from argon2_common.c. Switched to using OPTS_TYPE_NATIVE_THREADS instead. Plugin developers can still use it. This simplifies CPU integration, as CPUs typically run with a single thread.
Updated plugins 15500 and 20510. Added a thread limit to prevent autotune from selecting an excessively high thread count. The issue originated from the runtime returning an unrealistically high ideal thread count.
2025-07-08 13:21:10 +02:00
Royce Williams
ca1ebc23a4 changes.txt for min/max password length display 2025-07-07 15:41:23 -08:00
Royce Williams
17e29f298a clarify Kernel.Feature: password, min-max bytes 2025-07-07 10:44:59 -08:00
Royce Williams
e6ed375658
Merge branch 'hashcat:master' into master 2025-07-07 10:39:57 -08:00
Jens Steube
615f9f2d83
Merge pull request #4283 from roycewilliams/ii-mach-mod
skip non-machine preamble if --backend_info and --machine
2025-07-07 19:59:54 +02:00
Jens Steube
cb36d337eb
Merge pull request #4290 from redongh/master
minor Python-bridge documentation related updates
2025-07-07 19:59:26 +02:00
Jens Steube
a66e667c90
Merge pull request #3724 from matrix/hashInfo2int
User Options: assigned -H to --hash-info && Hash-Info: show more details using -HH
2025-07-07 19:55:48 +02:00
Jens Steube
13a9bb300f
Merge pull request #4275 from matrix/docs_status_code
Documents: Renamed status_code.txt to exit_status_code.txt and added device_status_code.txt
2025-07-06 21:50:04 +02:00
Jens Steube
92d83eab3a
Merge pull request #4293 from matrix/one_time_init_release_ctx_cmdqueue
Added workaround to get rid of internal runtimes memory leaks + Avoid deprecated HIP functions
2025-07-06 21:37:55 +02:00
Gabriele Gristina
f663abee44
Added workaround to get rid of internal runtimes memory leaks
From now on, especially in benchmark mode, hashcat will no longer create and destroy a context and command queue for each enabled device every time it switches from one hash-mode to the next.
Specifically when using OpenCL with an NVIDIA device, it was not possible to complete the benchmark because clCreateContext leaks memory that slowly consumes all available GPU memory, until hashcat can no longer activate a new context and disables the device.
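
A rough host-side illustration of the workaround, using standard OpenCL calls only to show the create-once/reuse pattern (the real hashcat code paths differ):

  #include <CL/cl.h>

  // Sketch: create the context and command queue once per device at startup
  // and reuse them across hash-modes, instead of recreating them on every
  // switch (which leaked memory in some NVIDIA OpenCL runtimes).
  static cl_context       ctx   = NULL;
  static cl_command_queue queue = NULL;

  static int device_open_once (cl_device_id dev)
  {
    cl_int err;

    ctx = clCreateContext (NULL, 1, &dev, NULL, NULL, &err);

    if (err != CL_SUCCESS) return -1;

    queue = clCreateCommandQueue (ctx, dev, 0, &err);

    if (err != CL_SUCCESS) return -1;

    return 0;
  }

  static void device_close_once (void)
  {
    // Released a single time at shutdown, not per hash-mode.
    if (queue) clReleaseCommandQueue (queue);
    if (ctx)   clReleaseContext (ctx);
  }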

Avoid deprecated HIP functions

All hipCtx* features have been declared deprecated, so we have replaced them with the new ones, also fixing a critical bug in handling multiple AMD devices in the same system.
2025-07-06 21:28:37 +02:00
Jens Steube
0576c41491 Updated autotune to set initial values for accel, threads, and loop based on theoretical assumptions, with the idea for more accurate early results from measured test runs.
Updated autotune to use the iteration count of the first user-defined hash instead of the self-test hash for slow hash tuning, assuming consistency across the hash list.
Updated autotune to prefer best-efficiency thread count only if it is at least 6% better than the max thread count, improving consistency in thread and accel values while allowing exceptions for special modes like 18600.
Changed the default theoretical free memory: the reduction applied to maximum memory was changed from 20% to 34%. This happens only when the runtime/OS cannot provide low-level free memory data.
Applied the same logic using the --backend-keep-free percentage to host memory during early setup, when hashcat auto-reduces thread and accel counts to stay within limits, on a per compute device basis.
Changed terminal output from "Host memory required for this attack: ..." to "Host memory allocated for this attack: ...", and added free host memory as reference.
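
As a worked example of the new fallback: a device reporting 24 GiB total and no low-level free-memory data was previously assumed to have 24 * 0.80 = 19.2 GiB free, and is now assumed to have 24 * 0.66 ≈ 15.8 GiB free. A minimal sketch, with the percentage and names purely illustrative:

  #include <stdint.h>
  #include <stdbool.h>

  // Sketch: fall back to a fixed reduction of total device memory whenever the
  // runtime/OS cannot report real free memory (34 mirrors the new default).
  static uint64_t theoretical_free_mem (const uint64_t mem_total, const bool have_real_free, const uint64_t mem_free_real)
  {
    if (have_real_free) return mem_free_real;

    return (mem_total * (100 - 34)) / 100;
  }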
2025-07-06 10:14:20 +02:00
Gabriele Gristina
aa10bcf80e
update remaining user_options->hash_info checks 2025-07-06 08:06:24 +02:00
Gabriele Gristina
9f3d771137
fix build error on src/user_options.c 2025-07-06 07:59:56 +02:00
Gabriele Gristina
fba89b6888
Merge branch 'master' into hashInfo2int 2025-07-06 07:54:05 +02:00
Jens Steube
db2214f755
Merge pull request #4227 from Chick3nman/totalcandidates
Add --total-candidates flag and functionality
2025-07-05 22:55:09 +02:00
Jens Steube
0df156e4c1
Merge branch 'master' into totalcandidates 2025-07-05 22:51:27 +02:00
Jens Steube
9457c62ef0 Removed redundant casts in inc_hash_blake2b.cl and inc_hash_blake2s.cl.
Fixed parameter types in inc_hash_blake2b.cl and inc_hash_blake2s.cl for FINAL value.
Added kernel code for -m 15400 to s04/s08/m04/m08, even if not needed, to help autotune find optimal workitem settings.
Fixed a rare autotune case (e.g. in mode 18600) where threads_min was not a multiple of kernel_preferred_wgs_multiple; changed it so that, as long as only threads_min is affected and not threads_max, we now ensure at least kernel_preferred_wgs_multiple.
Improved autotune logic for best thread count: double the thread count until reaching the device's preferred multiple, then increase in steps of that multiple while comparing efficiency vs. runtime, and select the configuration with the best efficiency, not the highest thread count (see the sketch below).
Always set funnelshift support to true for HIP devices, as it always reports false.
Set minimum loop count to 250 for all VeraCrypt modes with PIM brute-force support.
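
A simplified sketch of the best-thread-count search described above; measure_efficiency() stands in for the real timed test runs, and all constants are illustrative:

  #include <stdint.h>

  // Stand-in for a measured test run: higher value = better efficiency
  // (hashes per second relative to the runtime of the test).
  extern double measure_efficiency (uint32_t threads);

  // Sketch: double the thread count up to the device's preferred work-group
  // size multiple, then grow in steps of that multiple, keeping the most
  // efficient configuration rather than the largest one.
  static uint32_t autotune_threads (const uint32_t wgs_multiple, const uint32_t threads_max)
  {
    uint32_t best_threads = 1;
    double   best_eff     = measure_efficiency (1);

    uint32_t threads = 1;

    while (threads * 2 <= wgs_multiple)
    {
      threads *= 2;

      const double eff = measure_efficiency (threads);

      if (eff > best_eff) { best_eff = eff; best_threads = threads; }
    }

    for (threads = wgs_multiple; threads <= threads_max; threads += wgs_multiple)
    {
      const double eff = measure_efficiency (threads);

      if (eff > best_eff) { best_eff = eff; best_threads = threads; }
    }

    return best_threads;
  }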
2025-07-05 19:44:31 +02:00
red
381b2cac67
Update BUILD_WSL.md
Bump version of mingw-w64-x86_64-python-3.12.XX-X-any.pkg.tar.zst to latest.
2025-07-05 00:36:01 +02:00
red
d2656e376d
Update hashcat-python-plugin-development-guide.md
clarify location of custom Python scripts to be run without creating a dedicated module.
2025-07-05 00:30:39 +02:00
Jens Steube
d3983edaf2 Improved handling in get_opencl_kernel_wgs()
There are cases where we fix the thread count in a kernel using
FIXED_LOCAL_SIZE, but when the runtime loads the kernel binary, it
reports that it can only execute it with a different thread count.
According to the OpenCL specification, this can happen due to register
pressure.

However, we fix the thread count for a specific reason, and we choose to
accept potential register spilling to global memory. A warning is now
issued to inform the user about the runtime's suggested thread count,
allowing them to override it via the command line if they encounter
issues.
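
The check itself can be sketched with the plain OpenCL host API; fixed_local_size stands for the thread count the kernel was built with via FIXED_LOCAL_SIZE:

  #include <stdio.h>
  #include <CL/cl.h>

  // Sketch: after loading the kernel binary, ask the runtime which work-group
  // size it is willing to run, and warn (rather than silently overriding) when
  // it differs from the size fixed at build time, e.g. due to register pressure.
  static void check_fixed_local_size (cl_kernel kernel, cl_device_id device, const size_t fixed_local_size)
  {
    size_t runtime_wgs = 0;

    clGetKernelWorkGroupInfo (kernel, device, CL_KERNEL_WORK_GROUP_SIZE, sizeof (runtime_wgs), &runtime_wgs, NULL);

    if (runtime_wgs < fixed_local_size)
    {
      fprintf (stderr,
        "Kernel built for %zu threads, but the runtime suggests %zu.\n"
        "Keeping %zu (register spilling accepted); override via the command line if you hit issues.\n",
        fixed_local_size, runtime_wgs, fixed_local_size);
    }
  }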

Also fixed the thread count for -m 10700 on NVIDIA's OpenCL, where 4
bytes are always lost for an unknown reason (similar to the issue seen
in bcrypt).
2025-07-04 21:51:32 +02:00
Jens Steube
6150fd5d37
Merge branch 'master' into totp 2025-07-04 21:30:33 +02:00
Jens Steube
7ec73877fa
Merge pull request #4287 from matrix/improve_metal
Updates on Metal Backend, Makefile, Unit tests and Argon2
2025-07-04 10:26:46 +02:00
Gabriele Gristina
25b9e67470
make error messages on hashes_init_stage5() generic 2025-07-04 08:09:56 +02:00
Gabriele Gristina
80803e2ea5
fix -a9 by adding missing get_global_id() in m34000_loop 2025-07-04 08:04:44 +02:00
Jens Steube
dabf1aff3e
Merge pull request #4280 from Eomtaeyong820/fix/null-deref-read_restore
[FIX] Prevent NULL dereference in read_restore() via hcmalloc
2025-07-03 23:53:40 +02:00
Gabriele Gristina
bcc351068f
Metal Backend:
- added support to 2D/3D Compute
- improved compute workloads calculation
Makefile:
- updated MACOSX_DEPLOYMENT_TARGET to 15.0
Unit tests:
- updated install_modules.sh with Crypt::Argon2

Argon2 starts working with Apple Metal
2025-07-03 22:06:32 +02:00
Jens Steube
b9b20c3340
Merge pull request #4286 from matrix/metal_3d
support 2D/3D kernel invocation with Metal
2025-07-03 10:36:28 +02:00
Gabriele Gristina
4d39f881fd
support 2D/3D kernel invocation with Metal 2025-07-03 10:26:51 +02:00
Jens Steube
e8cf8bd146 Fix OpenCL spawning unnecessary work-item due to redundant multiplication in new OPTS_TYPE_THREAD_MULTI_DISABLE mode.
Prepare Metal section in run_kernel() for 2D kernel invocation related to new salt->salt_dimy variable.
Move reusable Argon2 module code into separate file argon2_common.c, similar to scrypt_common.c, and update headers.
Update existing hash mode 34000 to use argon2_common.c.
2025-07-03 08:10:30 +02:00
Jens Steube
2e185b5450
Merge pull request #4285 from matrix/argon2_metal
porting Argon2 to metal and fix OpenCL bug on hc__shfl
2025-07-02 22:44:50 +02:00
Gabriele Gristina
8a91fccefd
porting to metal and fix OpenCL bug on hc__shfl 2025-07-02 22:19:39 +02:00
Jens Steube
d9918d7e44 Add Argon2 support for OpenCL and HIP
=====================================

This patch modifies the existing Argon2 plugin, which was initially
designed to work only with CUDA. Supporting OpenCL and HIP required
broader architectural changes.

1. The tmps[] structure no longer holds the "large buffer". This
buffer stored the scratch areas for all password candidates in one
chunk. But we do not need to hold scratch areas for all candidates
simultaneously. All we need to do is hold chunks large enough
per password.

To simplify logic, the buffer is not divided by password count, but
divided by four, which fits within the "1/4 global memory" limit on
some OpenCL runtimes.

Hashcat already had logic to support this, but the buffer needed to be
moved to a different buffer type. It has now been relocated from the
"tmp buffer" to the "extra tmp buffer", following the same strategy
used in newer SCRYPT plugins.

This improves handling across several subcomponents:

  - Hashcat backend divides into four asymmetric buffers, hence the
    name "4-buffer strategy"
  - If the candidate count isn't divisible by 4, leftover candidates are
    assigned to the first (and possibly second and third) buffer
  - No code in the plugin is required, as this was designed for exactly
    such cases where future algorithms require a lot of memory
  - Plugin was rewritten to report the size needed in
    module_extra_tmp_size(), which triggers the "4-buffer" strategy
  - The split is not even, but each part is large enough to hold
    a multiple of a full scratch buffer for a password
  - The kernel code in m34000_init/loop/comp now uses a code block
    that finds its buffer by doing "group_id % 4" (sketched after this list)
  - Prevents the need to over-allocate memory to avoid OOB access
  - The original "tmps buffer" now holds a small dummy state buffer
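
A rough sketch of that buffer selection; host-side types are used as stand-ins
for the kernel-side macros, and v0..v3 / scratch_per_pw_u64 are illustrative
names, not the actual m34000 kernel code:

  #include <stdint.h>

  typedef uint64_t u64;  // stand-in for hashcat's kernel-side u64
  #define GLOBAL_AS      // stand-in for the __global address-space qualifier

  // Sketch: each work-group picks one of the four asymmetric extra buffers
  // with "group_id % 4", then indexes its own slot inside that buffer, so
  // no over-allocation is needed to stay in bounds.
  static GLOBAL_AS u64 *argon2_get_scratch (GLOBAL_AS u64 *v0, GLOBAL_AS u64 *v1,
                                            GLOBAL_AS u64 *v2, GLOBAL_AS u64 *v3,
                                            const u64 bid, const u64 scratch_per_pw_u64)
  {
    GLOBAL_AS u64 *base;

    switch (bid % 4)
    {
      case 0:  base = v0; break;
      case 1:  base = v1; break;
      case 2:  base = v2; break;
      default: base = v3; break;
    }

    // Every fourth work-group shares one buffer, so divide by 4 for the slot.
    return base + (bid / 4) * scratch_per_pw_u64;
  }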

2. Replaced warp shuffle instruction

The instruction __shfl_sync() is not available in runtimes
other than CUDA. Some have alternatives, some do not.

To prevent branching per backend runtime, the new general macro
hc__shfl_sync() replaces all calls to __shfl_sync(). This allows us to
implement runtime-specific solutions that take effect at compile time
and prevent regressions (a sketch follows the per-runtime list below).

- CUDA:
  We simply map to the original __shfl_sync()

- HIP:
  We map to shfl(), a built-in intrinsic. This instruction doesn't
  support masks like __shfl_sync() does, but masks are not needed
  in Argon2 anyway. It requires an additional parameter, the wavefront
  size. This is natively 64, but we hardcode this to 32 so it aligns
  with NVIDIA's warp size.

- OpenCL:
  - AMD: We have access to the instruction __builtin_amdgcn_ds_bpermute().
    This instruction only supports 32-bit integers, requiring us to
    pack and unpack the 64-bit values manually
  - NVIDIA: We use inline assembly with "shfl.sync.idx.b32". Same as
    with AMD, we need to pack and unpack 32-bit integers. The 64-bit
    support in CUDA is just overloaded and internally does the same thing.
  - Others: We use a shared memory pool and combine it with a barrier.
    This LOCAL_VK pool must be sized at compile time and transported to
    the Argon2 code in "inc_hash_argon2.cl". This required changing all
    function declarations that use shuffles slightly.
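
A condensed sketch of how such a macro can be wired up per runtime; only the
CUDA and HIP mappings are shown literally, the OpenCL variants additionally
pack/unpack 64-bit values into 32-bit halves as described above (the actual
definitions live in hashcat's shared kernel headers):

  // Sketch: one call site, one macro, per-runtime expansion at compile time.

  #if   defined IS_CUDA

  #define hc__shfl_sync(mask,var,srcLane) __shfl_sync ((mask), (var), (srcLane))

  #elif defined IS_HIP

  // HIP's shuffle intrinsic takes no mask; width is forced to 32 so lane
  // numbering matches CUDA's warp size.
  #define hc__shfl_sync(mask,var,srcLane) __shfl ((var), (srcLane), 32)

  #elif defined IS_OPENCL

  // AMD:    __builtin_amdgcn_ds_bpermute() moves 32-bit lanes (byte-addressed),
  //         so 64-bit values are split, permuted twice, and re-assembled.
  // NVIDIA: inline PTX "shfl.sync.idx.b32" with the same split/re-assemble step.
  // Others: shared-memory (LOCAL_VK) pool combined with a barrier.

  #endif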

Unlock full threading for init and comp kernels
===============================================

This is implemented using a new flag:
  OPTS_TYPE_THREAD_MULTI_DISABLE

Behavior is similar to:
  OPTS_TYPE_MP_MULTI_DISABLE

It simply disables the multiplier normally applied to password batch size.
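
In a plugin, the flag is just another bit OR'd into the opts-type mask; a
minimal sketch with illustrative surrounding flags:

  // Sketch (module side): with OPTS_TYPE_THREAD_MULTI_DISABLE set, the
  // password batch size is no longer multiplied by the thread count, and the
  // plugin controls the thread count explicitly (Argon2 pins it to 32).
  u64 module_opts_type (MAYBE_UNUSED const hashconfig_t *hashconfig,
                        MAYBE_UNUSED const user_options_t *user_options,
                        MAYBE_UNUSED const user_options_extra_t *user_options_extra)
  {
    const u64 opts_type = OPTS_TYPE_STOCK_MODULE
                        | OPTS_TYPE_PT_GENERATE_LE
                        | OPTS_TYPE_THREAD_MULTI_DISABLE;

    return opts_type;
  }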

Note, however, that this change completely decouples this effect from the
real threads spawned on the compute device. If the thread count is not
set to 1 in the plugin, hashcat will start autotuning it.

In the case of Argon2, we hard-code it to 32 instead, which also changes
how "warp size" was used in the original implementation, and which is not
compatible with HIP and/or OpenCL. However, we need to maintain this thread
size to utilize warp shuffle and its alternatives in other runtimes.

Benefits:

  - Enables full threading for init and comp kernels (1667 H/s to 1722 H/s)
  - Allows future algorithms to enable parallel processing of single
    password candidates, if supported

Plugin changes:

  - Removed the "hack" where thread count = 1 disabled the multiplier
  - Removed per-device warp count detection code and struct changes
  - Removed warp handling and "num_elements / thread_count" division in
    the run_kernel() function

Simplified autotune logic for Argon2
====================================

The goal is to calculate the maximum number of password candidates that
can run in parallel, constrained only by device memory.

  - Removed all code related to Argon2 from autotune
  - Implemented in "module_extra_tuningdb_block()" (like SCRYPT)
  - We create a tuningdb entry at runtime!
  - Still allows override via tuningdb or CLI
  - Considers register spilling (read at startup)
  - Prevents global-to-host memory swap performance issues
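
The core of that calculation reduces to a division of usable device memory by
the per-candidate cost; a hedged sketch with illustrative names:

  #include <stdint.h>

  // Sketch: upper-bound the number of password candidates processed in
  // parallel, constrained only by device memory. spill_per_thread stands for
  // the register-spill size read at startup; all names are illustrative.
  static uint32_t argon2_max_parallel_candidates (const uint64_t mem_usable,
                                                  const uint64_t scratch_per_candidate,
                                                  const uint64_t spill_per_thread,
                                                  const uint32_t threads_per_candidate)
  {
    const uint64_t cost = scratch_per_candidate + (spill_per_thread * threads_per_candidate);

    if (cost == 0) return 0;

    return (uint32_t) (mem_usable / cost);
  }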

Add Argon2i and Argon2d support
===============================

The kernel prepared by NFI already had support for the different Argon2
types. No change was needed.

To support the other Argon2 types, the tokenizer had to be improved to
support a variety of different signatures in the same hash-mode.

Bugfixes
========

- Fixed missing entries in "switch_buffer_by_offset_8x4_le_S()"
- Fixed benchmark hash misdetection for scrypt. This was due to
  outdated logic used in scrypt to detect whether the plugin was
  called from a benchmark session or a regular one
- Fixed a bug in "module_hash_encode()" where Base64 padding '=' was
  retained
- Fixed missing "GLOBAL_AS" / "PRIVATE_AS" casts for OpenCL
- Fixed compiler warnings (e.g., "index_u32x4()", "get_group_id()")
  by adding return values
- Fixed a bug in token.len_max[6], which allowed decoding of 256 bytes
  of data into a 16-byte buffer (digest)

Other improvements
==================

- Added unit test module for automated testing
- Added support to the tokenizer to allow multiple signatures.
  Leave out TOKEN_ATTR_FIXED_LENGTH to enable this in your plugins
  (see the sketch after this list)
- Updated "hc_umulhi()", also exists for HIP
- Renamed "gid" to "bid" when using "get_group_id()" for clarity
- Removed "#ifdef IS_CUDA" as all backends are now supported
- Removed deprecated "OPTS_TYPE_MAXIMUM_ACCEL" attribute
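
A minimal sketch of a module's tokenizer setup accepting several signatures in
one hash-mode; the second token and the separator handling are simplified
placeholders, only the multi-signature part is the point:

  // Sketch (inside module_hash_decode): no TOKEN_ATTR_FIXED_LENGTH on the
  // signature token, several accepted prefixes instead of one.
  hc_token_t token;

  memset (&token, 0, sizeof (hc_token_t));

  token.token_cnt = 2;

  token.signatures_cnt    = 3;
  token.signatures_buf[0] = "$argon2i$";
  token.signatures_buf[1] = "$argon2d$";
  token.signatures_buf[2] = "$argon2id$";

  token.sep[0]  = '$';
  token.attr[0] = TOKEN_ATTR_VERIFY_SIGNATURE;

  token.len_min[1] = 1;
  token.len_max[1] = 255;
  token.attr[1]    = TOKEN_ATTR_VERIFY_LENGTH;

  const int rc_tokenizer = input_tokenizer ((const u8 *) line_buf, line_len, &token);

  if (rc_tokenizer != PARSER_OK) return (rc_tokenizer);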

Performance note
================

For testing, I used the self-test hash configured according to the
RFC 9106 recommendation: m=65536, t=3, p=1.

In my benchmarks, the AMD RX 7900 XTX achieved 1401 H/s using the same
hash that was used to test NVIDIA's RTX 4090. The RTX 4090 reached
1722 H/s, making it faster in absolute terms. However, at the time of
writing, it is more than three times as expensive as the 7900 XTX.

It's also worth noting that an older NVIDIA GTX 1080 Ti still reached
565 H/s with the same test vector, and may be found at significantly
lower cost.

Across all tested Argon2 configurations, the performance gap between
the RX 7900 XTX and the RTX 4090 remained proportionally consistent,
indicating a clear linear scaling relationship between the two GPUs.
2025-07-02 11:02:57 +02:00
Jens Steube
96e3b6581d
Merge pull request #4284 from fse-a/argon2id-support
Support for Argon2id on NVIDIA CUDA GPUs
2025-07-02 10:55:16 +02:00
Pelle Kuiters
3c1649ccc8 GPU support for Argon2id for NVIDIA CUDA 2025-07-02 10:47:00 +02:00
Royce Williams
76869e5099 skip preamble if --backend_info and --machine 2025-07-01 18:31:20 -08:00
Royce Williams
7c9c1af30f fix type mismatch in msg 2025-07-01 18:21:02 -08:00
Jens Steube
5c1ae6a1f6
Merge pull request #4273 from matrix/backend_ctx_devices_init_splitted
Backend: Splitting backend_ctx_devices_init into smaller runtime-specific functions
2025-07-01 20:24:43 +02:00
Jens Steube
9aa7c94ede
Merge pull request #4278 from matrix/hwmon_ctx_init_split
Refactoring Hardware Monitor
2025-07-01 20:24:20 +02:00