hashcat/OpenCL/inc_platform.cl
Jens Steube d9918d7e44 Add Argon2 support for OpenCL and HIP
=====================================

This patch modifies the existing Argon2 plugin, which was initially
designed to work only with CUDA. Supporting OpenCL and HIP required
broader architectural changes.

1. The tmps[] structure no longer holds the "large buffer". This
buffer stored the scratch areas for all password candidates in one
chunk. But we do not need to keep the scratch areas for all candidates
alive simultaneously; we only need a chunk large enough for the
candidates currently being processed.

To keep the logic simple, the buffer is not divided by password count,
but into four parts, which fits within the "1/4 of global memory"
allocation limit on some OpenCL runtimes.

Hashcat already had logic to support this, but the buffer needed to be
moved to a different buffer type. It has now been relocated from the
"tmp buffer" to the "extra tmp buffer", following the same strategy
used in newer SCRYPT plugins.

This improves handling across several subcomponents:

  - Hashcat backend divides into four asymmetric buffers, hence the
    name "4-buffer strategy"
  - If the candidate count isn't divisible by 4, leftover candidates are
    assigned to the first (and possibly second and third) buffer
  - No code in the plugin is required, as this was designed for exactly
    such cases where future algorithms require a lot of memory
  - Plugin was rewritten to report the size needed in
    module_extra_tmp_size(), which triggers the "4-buffer" strategy
  - The split is not even, but each part is large enough to hold
    a multiple of a full scratch buffer for a password
  - The kernel code in m34000_init/loop/comp now uses a code block
    that finds its buffer by doing "group_id % 4" (sketched after this list)
  - Prevents the need to over-allocate memory to avoid OOB access
  - The original "tmps buffer" now holds a small dummy state buffer
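
A minimal sketch of the buffer-selection idea (names such as V1..V4 and
scratch_sz_u64 are illustrative placeholders, not the actual kernel
parameters):

  KERNEL_FQ void demo_find_scratch (GLOBAL_AS u64 *V1, GLOBAL_AS u64 *V2,
                                    GLOBAL_AS u64 *V3, GLOBAL_AS u64 *V4,
                                    const u64 scratch_sz_u64)
  {
    const u64 bid = get_group_id (0);

    // pick one of the four asymmetric buffers based on the work-group id
    GLOBAL_AS u64 *Vx;

    switch (bid % 4)
    {
      case  0: Vx = V1; break;
      case  1: Vx = V2; break;
      case  2: Vx = V3; break;
      default: Vx = V4; break;
    }

    // the remaining quotient selects this candidate's scratch region
    GLOBAL_AS u64 *scratch = Vx + ((bid / 4) * scratch_sz_u64);

    (void) scratch; // the real m34000 kernels run Argon2 fill/compress here
  }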

2. Replaced warp shuffle instruction

The instruction __shfl_sync() is not available in runtimes
other than CUDA. Some have alternatives, some do not.

To avoid branching on the backend runtime in the kernel code, the new
general macro hc__shfl_sync() replaces all calls to __shfl_sync().
This lets us implement runtime-specific solutions that are selected at
compile time, preventing regressions. The per-runtime mappings (sketched
after the list below) are:

- CUDA:
  We simply map to the original __shfl_sync()

- HIP:
  We map to shfl(), a built-in intrinsic. This instruction doesn't
  support masks like __shfl_sync() does, but masks are not needed in
  Argon2 anyway. It requires an additional parameter, the wavefront
  size, which is natively 64; we hardcode it to 32 so that lane
  numbering aligns with NVIDIA's warp size.

- OpenCL:
  - AMD: We have access to the instruction __builtin_amdgcn_ds_bpermute().
    This instruction only supports 32-bit integers, requiring us to
    pack and unpack the 64-bit values manually
  - NVIDIA: We use inline assembly with "shfl.sync.idx.b32". Same as
    with AMD, we need to pack and unpack 32-bit integers. The 64-bit
    support in CUDA is just overloaded and internally does the same thing.
  - Others: We fall back to a shared memory pool combined with a barrier.
    This LOCAL_VK pool must be sized at compile time and passed down to
    the Argon2 code in "inc_hash_argon2.cl", which required slightly
    changing all function declarations that use shuffles.
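
A rough sketch of the dispatch (hc__shfl_sync() is the macro introduced by
this patch, but the bodies below are simplified, the real parameter list may
differ, and the NVIDIA-OpenCL inline-assembly variant is omitted):

  #if defined IS_CUDA

  // CUDA provides the instruction natively
  #define hc__shfl_sync(shfbuf,mask,var,srcLane) __shfl_sync ((mask), (var), (srcLane))

  #elif defined IS_HIP

  // HIP: __shfl() takes no mask, but an explicit width; 32 keeps the lane
  // numbering aligned with the 32-thread layout used by the kernel
  #define hc__shfl_sync(shfbuf,mask,var,srcLane) __shfl ((var), (srcLane), 32)

  #else

  // generic OpenCL fallback: exchange the value through a LOCAL_VK pool,
  // sized at compile time and passed down by the caller
  DECLSPEC u64 hc__shfl_sync (LOCAL_AS u64 *shfbuf, const u32 mask, const u64 var, const int srcLane)
  {
    const u32 lid = get_local_id (0);

    shfbuf[lid] = var;

    SYNC_THREADS ();

    const u64 out = shfbuf[srcLane];

    SYNC_THREADS ();

    return out;
  }

  #endif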

Unlock full threading for init and comp kernels
===============================================

This is implemented using a new flag:
  OPTS_TYPE_THREAD_MULTI_DISABLE

Behavior is similar to:
  OPTS_TYPE_MP_MULTI_DISABLE

It simply disables the multiplier normally applied to password batch size.

Note, however, that this change completely decouples the effect from the
real threads spawned on the compute device. If the plugin does not fix the
thread count to 1, hashcat will start autotuning it.

For Argon2, we instead hard-code the thread count to 32. This replaces the
original implementation's reliance on the device "warp size", which is not
compatible with HIP and/or OpenCL. We must keep this fixed thread count to
make use of warp shuffles and their alternatives on the other runtimes.
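
A hypothetical plugin excerpt showing the intent (standard module hooks
assumed; the flag combination and values are illustrative):

  static const u64 OPTS_TYPE = OPTS_TYPE_STOCK_MODULE
                             | OPTS_TYPE_PT_GENERATE_LE
                             | OPTS_TYPE_THREAD_MULTI_DISABLE; // batch size no longer multiplied by thread count

  u32 module_kernel_threads_min (MAYBE_UNUSED const hashconfig_t *hashconfig, MAYBE_UNUSED const user_options_t *user_options, MAYBE_UNUSED const user_options_extra_t *user_options_extra)
  {
    return 32; // pinned so the warp-shuffle layout (and its alternatives) stays valid
  }

  u32 module_kernel_threads_max (MAYBE_UNUSED const hashconfig_t *hashconfig, MAYBE_UNUSED const user_options_t *user_options, MAYBE_UNUSED const user_options_extra_t *user_options_extra)
  {
    return 32;
  }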

Benefits:

  - Enables full threading for init and comp kernels (1667 H/s to 1722 H/s)
  - Allows future algorithms to enable parallel processing of single
    password candidates, if supported

Plugin changes:

  - Removed the "hack" where thread count = 1 disabled the multiplier
  - Removed per-device warp count detection code and struct changes
  - Removed warp handling and "num_elements / thread_count" division in
    the run_kernel() function

Simplified autotune logic for Argon2
====================================

The goal is to calculate the maximum number of password candidates that
can run in parallel, constrained only by the available device memory
(see the sketch after this list).

  - Removed all code related to Argon2 from autotune
  - Implemented in "module_extra_tuningdb_block()" (like SCRYPT)
  - We create a tuningdb entry at runtime!
  - Still allows override via tuningdb or CLI
  - Considers register spilling (read at startup)
  - Prevents global-to-host memory swap performance issues
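
The core of the calculation is simple; a hedged sketch of the idea (names
are illustrative, not hashcat's internal API):

  // cap the number of in-flight candidates by the memory one Argon2
  // scratch area needs, leaving headroom for all other buffers
  static u32 max_parallel_candidates (const u64 device_available_mem,
                                      const u64 scratch_per_password,
                                      const u64 reserved_mem)
  {
    const u64 usable = (device_available_mem > reserved_mem)
                     ? device_available_mem - reserved_mem
                     : 0;

    // for m=65536 (64 MiB per candidate) on a 24 GiB card with ~2 GiB
    // reserved, this yields roughly 350 candidates in flight
    return (u32) (usable / scratch_per_password);
  }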

Add Argon2i and Argon2d support
===============================

The kernel we received from NFI already had support for the different
Argon2 types, so no kernel change was needed.

To support the other Argon2 types, the tokenizer had to be improved to
accept several different signatures within the same hash-mode (sketched
below).
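
A hypothetical sketch of the decoder setup (the SIGNATURE_* names and the
token bounds are assumptions for illustration, not the exact -m 34000
layout):

  hc_token_t token;

  memset (&token, 0, sizeof (hc_token_t));

  token.signatures_cnt    = 3;
  token.signatures_buf[0] = SIGNATURE_ARGON2D;
  token.signatures_buf[1] = SIGNATURE_ARGON2I;
  token.signatures_buf[2] = SIGNATURE_ARGON2ID;

  // token 0 carries the signature; leaving out TOKEN_ATTR_FIXED_LENGTH
  // lets signatures of different lengths ("$argon2i$" vs "$argon2id$") match
  token.token_cnt  = 1; // remaining tokens of the real format omitted here
  token.len_min[0] = 9;
  token.len_max[0] = 10;
  token.sep[0]     = '$';
  token.attr[0]    = TOKEN_ATTR_VERIFY_SIGNATURE;

  const int rc_tokenizer = input_tokenizer ((const u8 *) line_buf, line_len, &token);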

Bugfixes
========

- Fixed missing entries in "switch_buffer_by_offset_8x4_le_S()"
- Fixed benchmark hash misdetection for scrypt. This was due to
  outdated logic used in scrypt to detect whether the plugin was
  called from a benchmark session or a regular one
- Fixed a bug in "module_hash_encode()" where Base64 padding '=' was
  retained
- Fixed missing "GLOBAL_AS" / "PRIVATE_AS" casts for OpenCL
- Fixed compiler warnings (e.g., "index_u32x4()", "get_group_id()")
  by adding return values
- Fixed a bug in token.len_max[6], which allowed decoding 256 bytes of
  data into a 16-byte buffer (the digest)

Other improvements
==================

- Added unit test module for automated testing
- Added tokenizer support for multiple signatures in one hash-mode.
  Leave out TOKEN_ATTR_FIXED_LENGTH to enable this in your plugins
- Updated "hc_umulhi()"; it now also exists for HIP
- Renamed "gid" to "bid" when using "get_group_id()" for clarity
- Removed "#ifdef IS_CUDA" as all backends are now supported
- Removed deprecated "OPTS_TYPE_MAXIMUM_ACCEL" attribute

Performance note
================

For testing, I used the self-test hash configured according to the
RFC 9106 recommendation: m=65536, t=3, p=1.
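
For reference, such a hash uses the standard PHC string encoding; the salt
and tag below are placeholders, not the actual self-test values:

  $argon2id$v=19$m=65536,t=3,p=1$<salt-base64>$<tag-base64>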

In my benchmarks, the AMD RX 7900 XTX achieved 1401 H/s using the same
hash that was used to test NVIDIA's RTX 4090. The RTX 4090 reached
1722 H/s, making it faster in absolute terms. However, at the time of
writing, it is more than three times as expensive as the 7900 XTX.

It's also worth noting that an older NVIDIA GTX 1080 Ti still reached
565 H/s with the same test vector, and may be found at significantly
lower cost.

Across all tested Argon2 configurations, the performance gap between the
RX 7900 XTX and the RTX 4090 remained proportionally consistent,
indicating a roughly constant, linear performance ratio between the two GPUs.
2025-07-02 11:02:57 +02:00


/**
* Author......: See docs/credits.txt
* License.....: MIT
*/
#include "inc_vendor.h"
#include "inc_types.h"
#include "inc_platform.h"
#ifdef IS_NATIVE
#define FIXED_THREAD_COUNT(n)
#define SYNC_THREADS()
#endif
#ifdef IS_AMD

DECLSPEC u64x rotl64 (const u64x a, const int n)
{
  return rotr64 (a, 64 - n);
}

DECLSPEC u64x rotr64 (const u64x a, const int n)
{
  #if VECT_SIZE == 1
  return rotr64_S (a, n);
  #else
  return ((a >> n) | ((a << (64 - n))));
  #endif
}

DECLSPEC u64 rotl64_S (const u64 a, const int n)
{
  return rotr64_S (a, 64 - n);
}

DECLSPEC u64 rotr64_S (const u64 a, const int n)
{
  vconv64_t in;

  in.v64 = a;

  const u32 a0 = in.v32.a;
  const u32 a1 = in.v32.b;

  vconv64_t out;

  if (n < 32)
  {
    out.v32.a = amd_bitalign (a1, a0, n);
    out.v32.b = amd_bitalign (a0, a1, n);
  }
  else
  {
    out.v32.a = amd_bitalign (a0, a1, n - 32);
    out.v32.b = amd_bitalign (a1, a0, n - 32);
  }

  return out.v64;
}

#endif // IS_AMD

#if defined IS_CUDA
#if ATTACK_EXEC == 11
CONSTANT_VK u32 generic_constant[8192]; // 32k
#if ATTACK_KERN == 0
#define bfs_buf g_bfs_buf
#define rules_buf ((const kernel_rule_t *) generic_constant)
#define words_buf_s g_words_buf_s
#define words_buf_r g_words_buf_r
#elif ATTACK_KERN == 1
#define bfs_buf g_bfs_buf
#define rules_buf g_rules_buf
#define words_buf_s g_words_buf_s
#define words_buf_r g_words_buf_r
#elif ATTACK_KERN == 3
#define rules_buf g_rules_buf
#define bfs_buf ((const bf_t *) generic_constant)
#define words_buf_s ((const bs_word_t *) generic_constant)
#define words_buf_r ((const u32x *) generic_constant)
#endif // ATTACK_KERN
#endif // ATTACK_EXEC
DECLSPEC u32 hc_atomic_dec (GLOBAL_AS u32 *p)
{
  volatile const u32 val = 1;

  return atomicSub (p, val);
}

DECLSPEC u32 hc_atomic_inc (GLOBAL_AS u32 *p)
{
  volatile const u32 val = 1;

  return atomicAdd (p, val);
}

DECLSPEC u32 hc_atomic_or (GLOBAL_AS u32 *p, volatile const u32 val)
{
  return atomicOr (p, val);
}

DECLSPEC size_t get_group_id (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return blockIdx.x;
    case 1:
      return blockIdx.y;
    case 2:
      return blockIdx.z;
  }

  return (size_t) -1;
}

DECLSPEC size_t get_global_id (const u32 dimindx __attribute__((unused)))
{
  return (blockIdx.x * blockDim.x) + threadIdx.x;
}

DECLSPEC size_t get_local_id (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return threadIdx.x;
    case 1:
      return threadIdx.y;
    case 2:
      return threadIdx.z;
  }

  return (size_t) -1;
}

DECLSPEC size_t get_local_size (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return blockDim.x;
    case 1:
      return blockDim.y;
    case 2:
      return blockDim.z;
  }

  return (size_t) -1;
}

DECLSPEC u32x rotl32 (const u32x a, const int n)
{
#if VECT_SIZE == 1
return rotl32_S (a, n);
#else
u32x t = 0;
#if VECT_SIZE >= 2
t.s0 = rotl32_S (a.s0, n);
t.s1 = rotl32_S (a.s1, n);
#endif
#if VECT_SIZE >= 4
t.s2 = rotl32_S (a.s2, n);
t.s3 = rotl32_S (a.s3, n);
#endif
#if VECT_SIZE >= 8
t.s4 = rotl32_S (a.s4, n);
t.s5 = rotl32_S (a.s5, n);
t.s6 = rotl32_S (a.s6, n);
t.s7 = rotl32_S (a.s7, n);
#endif
#if VECT_SIZE >= 16
t.s8 = rotl32_S (a.s8, n);
t.s9 = rotl32_S (a.s9, n);
t.sa = rotl32_S (a.sa, n);
t.sb = rotl32_S (a.sb, n);
t.sc = rotl32_S (a.sc, n);
t.sd = rotl32_S (a.sd, n);
t.se = rotl32_S (a.se, n);
t.sf = rotl32_S (a.sf, n);
#endif
return t;
#endif
}
DECLSPEC u32x rotr32 (const u32x a, const int n)
{
#if VECT_SIZE == 1
return rotr32_S (a, n);
#else
u32x t = 0;
#if VECT_SIZE >= 2
t.s0 = rotr32_S (a.s0, n);
t.s1 = rotr32_S (a.s1, n);
#endif
#if VECT_SIZE >= 4
t.s2 = rotr32_S (a.s2, n);
t.s3 = rotr32_S (a.s3, n);
#endif
#if VECT_SIZE >= 8
t.s4 = rotr32_S (a.s4, n);
t.s5 = rotr32_S (a.s5, n);
t.s6 = rotr32_S (a.s6, n);
t.s7 = rotr32_S (a.s7, n);
#endif
#if VECT_SIZE >= 16
t.s8 = rotr32_S (a.s8, n);
t.s9 = rotr32_S (a.s9, n);
t.sa = rotr32_S (a.sa, n);
t.sb = rotr32_S (a.sb, n);
t.sc = rotr32_S (a.sc, n);
t.sd = rotr32_S (a.sd, n);
t.se = rotr32_S (a.se, n);
t.sf = rotr32_S (a.sf, n);
#endif
return t;
#endif
}
DECLSPEC u32 rotl32_S (const u32 a, const int n)
{
#ifdef USE_FUNNELSHIFT
return __funnelshift_l (a, a, n);
#else
return ((a << n) | ((a >> (32 - n))));
#endif
}
DECLSPEC u32 rotr32_S (const u32 a, const int n)
{
#ifdef USE_FUNNELSHIFT
return __funnelshift_r (a, a, n);
#else
return ((a >> n) | ((a << (32 - n))));
#endif
}
DECLSPEC u64x rotl64 (const u64x a, const int n)
{
#if VECT_SIZE == 1
return rotl64_S (a, n);
#else
return ((a << n) | ((a >> (64 - n))));
#endif
}
DECLSPEC u64x rotr64 (const u64x a, const int n)
{
#if VECT_SIZE == 1
return rotr64_S (a, n);
#else
return ((a >> n) | ((a << (64 - n))));
#endif
}
DECLSPEC u64 rotl64_S (const u64 a, const int n)
{
return rotr64_S (a, 64 - n);
}
DECLSPEC u64 rotr64_S (const u64 a, const int n)
{
return ((a >> n) | ((a << (64 - n))));
}
#define FIXED_THREAD_COUNT(n) __launch_bounds__((n), 0)
#define SYNC_THREADS() __syncthreads ()
#endif // IS_CUDA
#if defined IS_HIP
#if ATTACK_EXEC == 11
CONSTANT_VK u32 generic_constant[8192] __attribute__((used)); // 32k
#if ATTACK_KERN == 0
#define bfs_buf g_bfs_buf
#define rules_buf ((const kernel_rule_t *) generic_constant)
#define words_buf_s g_words_buf_s
#define words_buf_r g_words_buf_r
#elif ATTACK_KERN == 1
#define bfs_buf g_bfs_buf
#define rules_buf g_rules_buf
#define words_buf_s g_words_buf_s
#define words_buf_r g_words_buf_r
#elif ATTACK_KERN == 3
#define rules_buf g_rules_buf
#define bfs_buf ((const bf_t *) generic_constant)
#define words_buf_s ((const bs_word_t *) generic_constant)
#define words_buf_r ((const u32x *) generic_constant)
#endif // ATTACK_KERN
#endif // ATTACK_EXEC
DECLSPEC u32 hc_atomic_dec (GLOBAL_AS u32 *p)
{
  volatile const u32 val = 1;

  return atomicSub (p, val);
}

DECLSPEC u32 hc_atomic_inc (GLOBAL_AS u32 *p)
{
  volatile const u32 val = 1;

  return atomicAdd (p, val);
}

DECLSPEC u32 hc_atomic_or (GLOBAL_AS u32 *p, volatile const u32 val)
{
  return atomicOr (p, val);
}

DECLSPEC size_t get_group_id (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return blockIdx.x;
    case 1:
      return blockIdx.y;
    case 2:
      return blockIdx.z;
  }

  return (size_t) -1;
}

DECLSPEC size_t get_global_id (const u32 dimindx __attribute__((unused)))
{
  return (blockIdx.x * blockDim.x) + threadIdx.x;
}

DECLSPEC size_t get_local_id (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return threadIdx.x;
    case 1:
      return threadIdx.y;
    case 2:
      return threadIdx.z;
  }

  return (size_t) -1;
}

DECLSPEC size_t get_local_size (const u32 dimindx)
{
  switch (dimindx)
  {
    case 0:
      return blockDim.x;
    case 1:
      return blockDim.y;
    case 2:
      return blockDim.z;
  }

  return (size_t) -1;
}

DECLSPEC u32x rotl32 (const u32x a, const int n)
{
#if VECT_SIZE == 1
return rotl32_S (a, n);
#else
u32x t = 0;
#if VECT_SIZE >= 2
t.s0 = rotl32_S (a.s0, n);
t.s1 = rotl32_S (a.s1, n);
#endif
#if VECT_SIZE >= 4
t.s2 = rotl32_S (a.s2, n);
t.s3 = rotl32_S (a.s3, n);
#endif
#if VECT_SIZE >= 8
t.s4 = rotl32_S (a.s4, n);
t.s5 = rotl32_S (a.s5, n);
t.s6 = rotl32_S (a.s6, n);
t.s7 = rotl32_S (a.s7, n);
#endif
#if VECT_SIZE >= 16
t.s8 = rotl32_S (a.s8, n);
t.s9 = rotl32_S (a.s9, n);
t.sa = rotl32_S (a.sa, n);
t.sb = rotl32_S (a.sb, n);
t.sc = rotl32_S (a.sc, n);
t.sd = rotl32_S (a.sd, n);
t.se = rotl32_S (a.se, n);
t.sf = rotl32_S (a.sf, n);
#endif
return t;
#endif
}
DECLSPEC u32x rotr32 (const u32x a, const int n)
{
#if VECT_SIZE == 1
return rotr32_S (a, n);
#else
u32x t = 0;
#if VECT_SIZE >= 2
t.s0 = rotr32_S (a.s0, n);
t.s1 = rotr32_S (a.s1, n);
#endif
#if VECT_SIZE >= 4
t.s2 = rotr32_S (a.s2, n);
t.s3 = rotr32_S (a.s3, n);
#endif
#if VECT_SIZE >= 8
t.s4 = rotr32_S (a.s4, n);
t.s5 = rotr32_S (a.s5, n);
t.s6 = rotr32_S (a.s6, n);
t.s7 = rotr32_S (a.s7, n);
#endif
#if VECT_SIZE >= 16
t.s8 = rotr32_S (a.s8, n);
t.s9 = rotr32_S (a.s9, n);
t.sa = rotr32_S (a.sa, n);
t.sb = rotr32_S (a.sb, n);
t.sc = rotr32_S (a.sc, n);
t.sd = rotr32_S (a.sd, n);
t.se = rotr32_S (a.se, n);
t.sf = rotr32_S (a.sf, n);
#endif
return t;
#endif
}
DECLSPEC u32 rotl32_S (const u32 a, const int n)
{
#ifdef USE_FUNNELSHIFT
return __funnelshift_l (a, a, n);
#else
return ((a << n) | ((a >> (32 - n))));
#endif
}
DECLSPEC u32 rotr32_S (const u32 a, const int n)
{
#ifdef USE_FUNNELSHIFT
return __funnelshift_r (a, a, n);
#else
return ((a >> n) | ((a << (32 - n))));
#endif
}
DECLSPEC u64x rotl64 (const u64x a, const int n)
{
#if VECT_SIZE == 1
return rotl64_S (a, n);
#else
return ((a << n) | ((a >> (64 - n))));
#endif
}
DECLSPEC u64x rotr64 (const u64x a, const int n)
{
#if VECT_SIZE == 1
return rotr64_S (a, n);
#else
return ((a >> n) | ((a << (64 - n))));
#endif
}
DECLSPEC u64 rotl64_S (const u64 a, const int n)
{
return rotr64_S (a, 64 - n);
}
DECLSPEC u64 rotr64_S (const u64 a, const int n)
{
  #ifdef USE_FUNNELSHIFT
  vconv64_t in;

  in.v64 = a;

  const u32 a0 = in.v32.a;
  const u32 a1 = in.v32.b;

  vconv64_t out;

  if (n < 32)
  {
    out.v32.a = __funnelshift_r (a0, a1, n);
    out.v32.b = __funnelshift_r (a1, a0, n);
  }
  else
  {
    out.v32.a = __funnelshift_r (a1, a0, n - 32);
    out.v32.b = __funnelshift_r (a0, a1, n - 32);
  }

  return out.v64;
  #else
  return ((a >> n) | ((a << (64 - n))));
  #endif
}

#define FIXED_THREAD_COUNT(n) __launch_bounds__((n), 0)
#define SYNC_THREADS() __syncthreads ()
#endif // IS_HIP
#ifdef IS_METAL
DECLSPEC u32 hc_atomic_dec (volatile GLOBAL_AS u32 *p)
{
volatile const u32 val = 1;
volatile GLOBAL_AS atomic_int *pd = (volatile GLOBAL_AS atomic_int *) p;
return atomic_fetch_sub_explicit (pd, val, memory_order_relaxed);
}
DECLSPEC u32 hc_atomic_inc (volatile GLOBAL_AS u32 *p)
{
volatile const u32 val = 1;
volatile GLOBAL_AS atomic_int *pd = (volatile GLOBAL_AS atomic_int *) p;
return atomic_fetch_add_explicit (pd, val, memory_order_relaxed);
}
DECLSPEC u32 hc_atomic_or (volatile GLOBAL_AS u32 *p, volatile const u32 val)
{
volatile GLOBAL_AS atomic_int *pd = (volatile GLOBAL_AS atomic_int *) p;
return atomic_fetch_or_explicit (pd, val, memory_order_relaxed);
}
#define FIXED_THREAD_COUNT(n)
#define SYNC_THREADS() threadgroup_barrier (mem_flags::mem_threadgroup)
#endif // IS_METAL
#ifdef IS_OPENCL
DECLSPEC u32 hc_atomic_dec (volatile GLOBAL_AS u32 *p)
{
volatile const u32 val = 1;
return atomic_sub (p, val);
}
DECLSPEC u32 hc_atomic_inc (volatile GLOBAL_AS u32 *p)
{
volatile const u32 val = 1;
return atomic_add (p, val);
}
DECLSPEC u32 hc_atomic_or (volatile GLOBAL_AS u32 *p, volatile const u32 val)
{
return atomic_or (p, val);
}
#define FIXED_THREAD_COUNT(n) __attribute__((reqd_work_group_size((n), 1, 1)))
#define SYNC_THREADS() barrier (CLK_LOCAL_MEM_FENCE)
#endif // IS_OPENCL