
This patch modifies the existing Argon2 plugin, which was initially
designed to work only with CUDA. Supporting OpenCL and HIP required
broader architectural changes.

1. The tmps[] structure no longer holds the "large buffer"

This buffer stored the scratch areas for all password candidates in one
chunk, but we do not need to hold the scratch areas of all candidates
simultaneously; we only need chunks large enough per password. To
simplify the logic, the buffer is not divided by password count, but by
four, which fits within the "1/4 of global memory" limit imposed by some
OpenCL runtimes. Hashcat already had logic to support this, but the
buffer needed to be moved to a different buffer type: it has been
relocated from the "tmp buffer" to the "extra tmp buffer", following the
same strategy used in the newer SCRYPT plugins. This improves handling
across several subcomponents:

- The hashcat backend divides the allocation into four asymmetric
  buffers, hence the name "4-buffer strategy"
- If the candidate count is not divisible by 4, leftover candidates are
  assigned to the first (and possibly second and third) buffer
- No code is required in the plugin, as this was designed for exactly
  such cases where future algorithms require a lot of memory
- The plugin was rewritten to report the required size in
  module_extra_tmp_size(), which triggers the "4-buffer" strategy
- The split is not even, but each part is large enough to hold a
  multiple of a full scratch buffer for a password
- The kernel code in m34000_init/loop/comp now finds its buffer via
  "group_id % 4" (a sketch of this lookup follows the performance note
  at the end of this message)
- This prevents the need to over-allocate memory to avoid OOB access
- The original "tmps buffer" now holds only a small dummy state buffer

2. Replaced the warp shuffle instruction

The instruction __shfl_sync() is not available in runtimes other than
CUDA. Some have alternatives, some do not. To avoid branching per
backend runtime, the new general macro hc__shfl_sync() replaces all
calls to __shfl_sync(). This allows us to implement runtime-specific
solutions that take effect at compile time, preventing regressions:

- CUDA: we simply map to the original __shfl_sync()
- HIP: we map to __shfl(), a built-in intrinsic. This instruction does
  not support masks the way __shfl_sync() does, but masks are not
  needed in Argon2 anyway. It requires an additional parameter, the
  wavefront size: natively 64, we hardcode it to 32 so it aligns with
  NVIDIA's warp size.
- OpenCL:
  - AMD: we have access to __builtin_amdgcn_ds_bpermute(). This
    instruction only supports 32-bit integers, requiring us to pack and
    unpack the 64-bit values manually
  - NVIDIA: we use inline assembly with "shfl.sync.idx.b32". As with
    AMD, we need to pack and unpack 32-bit integers; the 64-bit support
    in CUDA is just an overload that internally does the same thing
  - Others: we use a shared memory pool combined with a barrier. This
    LOCAL_VK pool must be sized at compile time and passed down to the
    Argon2 code in "inc_hash_argon2.cl", which required slightly
    changing all function declarations that use shuffles
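
To make this dispatch concrete, here is a minimal sketch of how such a
hc__shfl_sync() wrapper can be laid out per backend. It follows the
mapping described above, but the guards, helper names and PTX operands
are illustrative assumptions, not a verbatim copy of the hashcat source:

#if defined IS_CUDA

// CUDA: __shfl_sync() natively supports 64-bit values
#define hc__shfl_sync(pool,mask,var,src) __shfl_sync ((mask), (var), (src))

#elif defined IS_HIP

// HIP: __shfl() takes no mask; the width parameter is forced to 32 to
// match NVIDIA's warp size (the native wavefront size would be 64)
#define hc__shfl_sync(pool,mask,var,src) __shfl ((var), (src), 32)

#elif defined IS_OPENCL && defined IS_AMD

// AMD OpenCL: ds_bpermute moves 32-bit values and is byte-addressed,
// so the 64-bit value is split, shuffled in halves, and reassembled
DECLSPEC u64 hc__shfl_sync (LOCAL_AS u64 *pool, const u32 mask, const u64 var, const int src)
{
  const int addr = src << 2; // byte address of the source lane

  const u32 lo = __builtin_amdgcn_ds_bpermute (addr, l32_from_64_S (var));
  const u32 hi = __builtin_amdgcn_ds_bpermute (addr, h32_from_64_S (var));

  return hl32_to_64_S (hi, lo);
}

#elif defined IS_OPENCL && defined IS_NV

// NVIDIA OpenCL: inline PTX, again in two 32-bit halves
DECLSPEC u64 hc__shfl_sync (LOCAL_AS u64 *pool, const u32 mask, const u64 var, const int src)
{
  u32 lo;
  u32 hi;

  asm ("shfl.sync.idx.b32 %0, %1, %2, 0x1f, 0xffffffff;" : "=r"(lo) : "r"(l32_from_64_S (var)), "r"(src));
  asm ("shfl.sync.idx.b32 %0, %1, %2, 0x1f, 0xffffffff;" : "=r"(hi) : "r"(h32_from_64_S (var)), "r"(src));

  return hl32_to_64_S (hi, lo);
}

#else

// Everything else: exchange through a local memory pool with barriers;
// assumes one 32-thread group per pool (the mask goes unused here, too)
DECLSPEC u64 hc__shfl_sync (LOCAL_AS u64 *pool, const u32 mask, const u64 var, const int src)
{
  const u32 lane = get_local_id (0) & 31;

  pool[lane] = var;

  barrier (CLK_LOCAL_MEM_FENCE);

  const u64 out = pool[src & 31];

  barrier (CLK_LOCAL_MEM_FENCE);

  return out;
}

#endif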

Unlock full threading for init and comp kernels
===============================================

This is implemented using a new flag: OPTS_TYPE_THREAD_MULTI_DISABLE

Its behavior is similar to that of OPTS_TYPE_MP_MULTI_DISABLE: it simply
disables the multiplier normally applied to the password batch size.
Note, however, that this change completely unbinds that effect from the
real thread count spawned on the compute device. If the plugin does not
set the thread count to 1, hashcat will start autotuning it. In the case
of Argon2, we hardcode it to 32 instead. This replaces the original
implementation's reliance on the CUDA "warp size", which is not portable
to HIP and/or OpenCL; we must keep this fixed thread count to be able to
use warp shuffles and their alternatives in the other runtimes.

Benefits:
- Enables full threading for init and comp kernels (1667 H/s to
  1722 H/s)
- Allows future algorithms to enable parallel processing of single
  password candidates, if supported

Plugin changes:
- Removed the "hack" where a thread count of 1 disabled the multiplier
- Removed per-device warp count detection code and struct changes
- Removed warp handling and the "num_elements / thread_count" division
  in the run_kernel() function

Simplified autotune logic for Argon2
====================================

The goal is to calculate the maximum number of password candidates that
can run in parallel, constrained only by device memory:

- Removed all Argon2-related code from autotune
- Implemented in module_extra_tuningdb_block() (as in SCRYPT) -- we
  create a tuningdb entry at runtime!
- Still allows overrides via tuningdb or CLI
- Considers register spilling (read at startup)
- Prevents global-to-host memory swap performance issues

Add Argon2i and Argon2d support
===============================

The kernel prepared by NFI already supported the different Argon2 types,
so no kernel change was needed. To support the other Argon2 types, the
tokenizer had to be improved to accept a variety of signatures within
the same hash-mode.

Bugfixes
========

- Fixed missing entries in switch_buffer_by_offset_8x4_le_S()
- Fixed benchmark hash misdetection for SCRYPT, caused by outdated logic
  used to detect whether the plugin was called from a benchmark session
  or a regular one
- Fixed a bug in module_hash_encode() where the Base64 padding character
  '=' was retained
- Fixed missing GLOBAL_AS / PRIVATE_AS casts for OpenCL
- Fixed compiler warnings (e.g., in index_u32x4() and get_group_id()) by
  adding return values
- Fixed a bug in token.len_max[6] that allowed 256 bytes of data to be
  decoded into a 16-byte buffer (the digest)

Other improvements
==================

- Added a unit test module for automated testing
- Added tokenizer support for multiple signatures; leave out
  TOKEN_ATTR_FIXED_LENGTH to enable this in your plugins (see the last
  sketch after this message)
- Updated hc_umulhi(), which also exists for HIP
- Renamed "gid" to "bid" where get_group_id() is used, for clarity
- Removed "#ifdef IS_CUDA", as all backends are now supported
- Removed the deprecated OPTS_TYPE_MAXIMUM_ACCEL attribute

Performance note
================

For testing, I used the self-test hash configured according to the
RFC 9106 recommendation: m=65536, t=3, p=1. In my benchmarks, the AMD RX
7900 XTX achieved 1401 H/s on the same hash that was used to test
NVIDIA's RTX 4090. The RTX 4090 reached 1722 H/s, making it faster in
absolute terms; however, at the time of writing it costs more than three
times as much as the 7900 XTX. It is also worth noting that an older
NVIDIA GTX 1080 Ti still reached 565 H/s with the same test vector, and
can be found at a significantly lower price. Across all tested Argon2
configurations, the performance gap between the RX 7900 XTX and the RTX
4090 remained proportionally consistent, indicating clear linear scaling
between the two GPUs.
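
Returning to the "4-buffer strategy" from the first section: the
kernel-side lookup can be pictured as in the following sketch. The
d_extra0_buf..d_extra3_buf parameter names follow the extra-buffer
convention of the newer SCRYPT kernels; the offset arithmetic is an
illustrative assumption, with each buffer holding a whole number of
per-password scratch areas:

// Hedged sketch of the "group_id % 4" lookup in m34000_init/loop/comp

const u64 bid = get_group_id (0);

GLOBAL_AS argon2_block_t *argon2_buf;

switch (bid % 4)
{
  case  0: argon2_buf = (GLOBAL_AS argon2_block_t *) d_extra0_buf; break;
  case  1: argon2_buf = (GLOBAL_AS argon2_block_t *) d_extra1_buf; break;
  case  2: argon2_buf = (GLOBAL_AS argon2_block_t *) d_extra2_buf; break;
  default: argon2_buf = (GLOBAL_AS argon2_block_t *) d_extra3_buf; break;
}

// Candidates sharing the same (bid % 4) live in the same buffer, each
// at its own scratch area of memory_block_count blocks
GLOBAL_AS argon2_block_t *blocks = argon2_buf + ((bid / 4) * options.memory_block_count);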
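
Likewise, for the multi-signature tokenizer support mentioned under
"Other improvements", a plugin-side setup accepting all three Argon2
signatures in a single hash-mode could look like the sketch below. The
signature strings and length bounds are illustrative; the point is the
pattern of leaving TOKEN_ATTR_FIXED_LENGTH off the signature token:

hc_token_t token;

memset (&token, 0, sizeof (hc_token_t));

token.token_cnt = 2;

// several accepted signatures within the same hash-mode
token.signatures_cnt    = 3;
token.signatures_buf[0] = "$argon2d$";
token.signatures_buf[1] = "$argon2i$";
token.signatures_buf[2] = "$argon2id$";

// token 0 is the signature; TOKEN_ATTR_FIXED_LENGTH is deliberately
// left out so that signatures of different lengths can match
token.sep[0]  = '$';
token.attr[0] = TOKEN_ATTR_VERIFY_SIGNATURE;

// token 1: the remainder of the encoded hash (bounds are illustrative)
token.len_min[1] = 1;
token.len_max[1] = 255;
token.attr[1]    = TOKEN_ATTR_VERIFY_LENGTH;

const int rc_tokenizer = input_tokenizer ((const u8 *) line_buf, line_len, &token);

if (rc_tokenizer != PARSER_OK) return (rc_tokenizer);
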
/**
 * Author......: Netherlands Forensic Institute
 * License.....: MIT
 *
 * Warp code based on original work by Ondrej Mosnáček
 */

#include "inc_vendor.h"
#include "inc_types.h"
#include "inc_platform.h"
#include "inc_common.h"
#include "inc_hash_blake2b.h"
#include "inc_hash_argon2.h"

// Compute block B[lane][blocknum] with the Blake2b long-output function
// H'(1024, H0 || LE32(blocknum) || LE32(lane)): every round below
// re-hashes the previous 64-byte state and emits 32 bytes of the block
DECLSPEC void argon2_initial_block (const u32 *in, const u32 lane, const u32 blocknum, const u32 parallelism, GLOBAL_AS argon2_block_t *blocks)
{
  blake2b_ctx_t ctx;

  blake2b_init (&ctx);

  u64 blake_buf[16] = { 0 };

  blake_buf[0] = sizeof(argon2_block_t);

  blake2b_update (&ctx, (u32 *) blake_buf, 4);
  blake2b_update (&ctx, in, 64);

  blake_buf[0] = hl32_to_64 (lane, blocknum);

  blake2b_update (&ctx, (u32 *) blake_buf, 8);

  blake2b_final (&ctx);

  GLOBAL_AS u64 *out = blocks[(blocknum * parallelism) + lane].values;

  out[0] = ctx.h[0];
  out[1] = ctx.h[1];
  out[2] = ctx.h[2];
  out[3] = ctx.h[3];

  for (u32 off = 4; off < 124; off += 4)
  {
    for (u32 idx = 0; idx < 8; idx++) blake_buf[idx] = ctx.h[idx];

    blake2b_init (&ctx);
    blake2b_transform (ctx.h, blake_buf, 64, BLAKE2B_FINAL);

    out[off + 0] = ctx.h[0];
    out[off + 1] = ctx.h[1];
    out[off + 2] = ctx.h[2];
    out[off + 3] = ctx.h[3];
  }

  out[124] = ctx.h[4];
  out[125] = ctx.h[5];
  out[126] = ctx.h[6];
  out[127] = ctx.h[7];
}

// Compute the Argon2 pre-hash H0: Blake2b-512 over the parameters and
// the length-prefixed password, salt, secret (K) and associated data (X)
DECLSPEC void argon2_initial_hash (GLOBAL_AS const pw_t *pw, GLOBAL_AS const salt_t *salt, const argon2_options_t *options, u64 *blockhash)
{
  blake2b_ctx_t ctx;
  blake2b_init (&ctx);

  u32 option_input[32] = { 0 };

  option_input[0] = options->parallelism;
  option_input[1] = options->digest_len;
  option_input[2] = options->memory_usage_in_kib;
  option_input[3] = options->iterations;
  option_input[4] = options->version;
  option_input[5] = options->type;

  blake2b_update (&ctx, option_input, 24);

  u32 len_input[32] = { 0 };

  len_input[0] = pw->pw_len;

  blake2b_update (&ctx, len_input, 4);
  blake2b_update_global (&ctx, pw->i, pw->pw_len);

  len_input[0] = salt->salt_len;

  blake2b_update (&ctx, len_input, 4);
  blake2b_update_global (&ctx, salt->salt_buf, salt->salt_len);

  len_input[0] = 0;

  blake2b_update (&ctx, len_input, 4); // secret (K)
  blake2b_update (&ctx, len_input, 4); // associated data (X)

  blake2b_final (&ctx);

  for (u32 idx = 0; idx < 8; idx++) blockhash[idx] = ctx.h[idx];
}

DECLSPEC void argon2_init (GLOBAL_AS const pw_t *pw, GLOBAL_AS const salt_t *salt,
                           const argon2_options_t *options, GLOBAL_AS argon2_block_t *out)
{
  u64 blockhash[16] = { 0 };

  argon2_initial_hash (pw, salt, options, blockhash);

  // Generate the first two blocks of each lane
  for (u32 lane = 0; lane < options->parallelism; lane++)
  {
    argon2_initial_block ((u32 *) blockhash, lane, 0, options->parallelism, out);
    argon2_initial_block ((u32 *) blockhash, lane, 1, options->parallelism, out);
  }
}

// TODO: reconsider 'trunc_mul()'
// Full 64-bit product of the two low 32-bit halves, as used by Argon2's
// BlaMka variant of the Blake2b G function
DECLSPEC u64 trunc_mul (u64 x, u64 y)
{
  const u32 xlo = (u32) x;
  const u32 ylo = (u32) y;
  return hl32_to_64_S (hc_umulhi (xlo, ylo), (u32) (xlo * ylo));
}

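// Map the 64-bit pseudo-random value to a reference block index. Per the
// Argon2 spec, the reference lane comes from the high 32 bits, while the
// in-lane position is derived from J1 (the low 32 bits) through the
// non-uniform mapping x = J1^2 / 2^32, computed below with hc_umulhi().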
DECLSPEC inline u32 argon2_ref_address (const argon2_options_t *options, const argon2_pos_t *pos, u32 index, u64 pseudo_random)
{
  u32 ref_lane;
  u32 ref_area;
  u32 ref_index;

  if ((pos->pass == 0) && (pos->slice == 0))
  {
    ref_lane = pos->lane;
  }
  else
  {
    ref_lane = h32_from_64_S (pseudo_random) % options->parallelism;
  }

  ref_area  = (pos->pass == 0) ? pos->slice : (ARGON2_SYNC_POINTS - 1);
  ref_area *= options->segment_length;

  if ((ref_lane == pos->lane) || (index == 0))
  {
    ref_area += (index - 1);
  }

  const u32 j1 = l32_from_64_S (pseudo_random);
  ref_index = (ref_area - 1 - hc_umulhi (ref_area, hc_umulhi (j1, j1)));

  if (pos->pass > 0)
  {
    ref_index += (pos->slice + 1) * options->segment_length;

    if (ref_index >= options->lane_length)
    {
      ref_index -= options->lane_length;
    }
  }

  return (options->parallelism * ref_index) + ref_lane;
}

DECLSPEC void swap_u64 (u64 *x, u64 *y)
{
  u64 tmp = *x;
  *x = *y;
  *y = tmp;
}

DECLSPEC void transpose_permute_block (u64 R[4], int thread)
{
  if (thread & 0x08)
  {
    swap_u64 (&R[0], &R[2]);
    swap_u64 (&R[1], &R[3]);
  }
  if (thread & 0x04)
  {
    swap_u64 (&R[0], &R[1]);
    swap_u64 (&R[2], &R[3]);
  }
}

DECLSPEC int argon2_shift (int idx, int thread)
{
  const int delta = ((idx & 0x02) << 3) + (idx & 0x01);
  return (thread & 0x0e) | (((thread & 0x11) + delta + 0x0e) & 0x11);
}

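// Warp-parallel Argon2 block permutation: each of the 32 threads keeps
// four u64 words of the 1024-byte block in R[4]. The hc__shfl_sync()
// exchanges regroup words between threads so ARGON2_G can be applied to
// the block's rows and then its columns, following the warp shuffle
// scheme credited above to Ondrej Mosnáček.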
DECLSPEC void argon2_hash_block (u64 R[4], int thread, LOCAL_AS u64 *shuffle_buf)
{
  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], thread ^ (idx << 2));

  transpose_permute_block (R, thread);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], thread ^ (idx << 2));

  ARGON2_G(R[0], R[1], R[2], R[3]);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], (thread & 0x1c) | ((thread + idx) & 0x03));

  ARGON2_G(R[0], R[1], R[2], R[3]);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], ((thread & 0x1c) | ((thread - idx) & 0x03)) ^ (idx << 2));

  transpose_permute_block (R, thread);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], thread ^ (idx << 2));

  ARGON2_G(R[0], R[1], R[2], R[3]);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], argon2_shift (idx, thread));

  ARGON2_G(R[0], R[1], R[2], R[3]);

  for (u32 idx = 1; idx < 4; idx++) R[idx] = hc__shfl_sync (shuffle_buf, FULL_MASK, R[idx], argon2_shift ((4 - idx), thread));
}

// Argon2i data-independent addressing: build an input block from the
// current position, compress it twice with feed-forward, and derive
// pseudo-random reference addresses from the result
DECLSPEC void argon2_next_addresses (const argon2_options_t *options, const argon2_pos_t *pos, u32 *addresses, u32 start_index, u32 thread, LOCAL_AS u64 *shuffle_buf)
{
  u64 Z[4] = { 0 };
  u64 tmp[4];

  switch (thread)
  {
    case 0: Z[0] = pos->pass;                   break;
    case 1: Z[0] = pos->lane;                   break;
    case 2: Z[0] = pos->slice;                  break;
    case 3: Z[0] = options->memory_block_count; break;
    case 4: Z[0] = options->iterations;         break;
    case 5: Z[0] = options->type;               break;
    case 6: Z[0] = (start_index / 128) + 1;     break;
    default: Z[0] = 0;                          break;
  }

  tmp[0] = Z[0];

  argon2_hash_block (Z, thread, shuffle_buf);

  Z[0] ^= tmp[0];

  for (u32 idx = 0; idx < 4; idx++) tmp[idx] = Z[idx];

  argon2_hash_block (Z, thread, shuffle_buf);

  for (u32 idx = 0; idx < 4; idx++) Z[idx] ^= tmp[idx];

  for (u32 i = 0, index = (start_index + thread); i < 4; i++, index += THREADS_PER_LANE)
  {
    addresses[i] = argon2_ref_address (options, pos, index, Z[i]);
  }
}

DECLSPEC u32 index_u32x4 (const u32 array[4], u32 index)
{
  switch (index)
  {
    case 0:
      return array[0];
    case 1:
      return array[1];
    case 2:
      return array[2];
    case 3:
      return array[3];
  }

  return -1;
}

DECLSPEC GLOBAL_AS argon2_block_t *argon2_get_current_block (GLOBAL_AS argon2_block_t *blocks, const argon2_options_t *options, u32 lane, u32 index_in_lane, u64 R[4], u32 thread)
{
  // Apply wrap-around to previous block index if the current block is the first block in the lane
  const u32 prev_in_lane = (index_in_lane == 0) ? (options->lane_length - 1) : (index_in_lane - 1);

  GLOBAL_AS argon2_block_t *prev_block = &blocks[(prev_in_lane * options->parallelism) + lane];

  for (u32 idx = 0; idx < 4; idx++) R[idx] = prev_block->values[(idx * THREADS_PER_LANE) + thread];

  return &blocks[(index_in_lane * options->parallelism) + lane];
}

DECLSPEC void argon2_fill_subsegment (GLOBAL_AS argon2_block_t *blocks, const argon2_options_t *options, const argon2_pos_t *pos, bool indep_addr, const u32 addresses[4],
                                      u32 start_index, u32 end_index, GLOBAL_AS argon2_block_t *cur_block, u64 R[4], u32 thread, LOCAL_AS u64 *shuffle_buf)
{
  for (u32 index = start_index; index < end_index; index++, cur_block += options->parallelism)
  {
    u32 ref_address;

    if (indep_addr)
    {
      ref_address = index_u32x4 (addresses, (index / THREADS_PER_LANE) % ARGON2_SYNC_POINTS);
      ref_address = hc__shfl_sync (shuffle_buf, FULL_MASK, ref_address, index);
    }
    else
    {
      ref_address = argon2_ref_address (options, pos, index, R[0]);
      ref_address = hc__shfl_sync (shuffle_buf, FULL_MASK, ref_address, 0);
    }

    GLOBAL_AS const argon2_block_t *ref_block = &blocks[ref_address];

    u64 tmp[4] = { 0 };

    // First pass is overwrite, next passes are XOR with previous
    if ((pos->pass > 0) && (options->version != ARGON2_VERSION_10))
    {
      for (u32 idx = 0; idx < 4; idx++) tmp[idx] = cur_block->values[(idx * THREADS_PER_LANE) + thread];
    }

    for (u32 idx = 0; idx < 4; idx++) R[idx] ^= ref_block->values[(idx * THREADS_PER_LANE) + thread];

    for (u32 idx = 0; idx < 4; idx++) tmp[idx] ^= R[idx];

    argon2_hash_block (R, thread, shuffle_buf);

    for (u32 idx = 0; idx < 4; idx++) R[idx] ^= tmp[idx];

    for (u32 idx = 0; idx < 4; idx++) cur_block->values[(idx * THREADS_PER_LANE) + thread] = R[idx];
  }
}

DECLSPEC void argon2_fill_segment (GLOBAL_AS argon2_block_t *blocks, const argon2_options_t *options, const argon2_pos_t *pos, LOCAL_AS u64 *shuffle_buf)
{
  const u32 thread = get_local_id (0);

  // We have already generated the first two blocks of each lane (for the first pass)
  const u32 skip_blocks   = (pos->pass == 0) && (pos->slice == 0) ? 2 : 0;
  const u32 index_in_lane = (pos->slice * options->segment_length) + skip_blocks;

  u64 R[4];

  GLOBAL_AS argon2_block_t *cur_block = argon2_get_current_block (blocks, options, pos->lane, index_in_lane, R, thread);

  if ((options->type == TYPE_I) || ((options->type == TYPE_ID) && (pos->pass == 0) && (pos->slice <= 1)))
  {
    for (u32 block_index = 0; block_index < options->segment_length; block_index += 128)
    {
      const u32 start_index = (block_index == 0) ? skip_blocks : block_index;
      const u32 end_index   = MIN (((start_index | 127) + 1), options->segment_length);

      u32 addresses[4];

      argon2_next_addresses (options, pos, addresses, block_index, thread, shuffle_buf);
      argon2_fill_subsegment (blocks, options, pos, true, addresses, start_index, end_index, cur_block, R, thread, shuffle_buf);

      cur_block += (end_index - start_index) * options->parallelism;
    }
  }
  else
  {
    u32 addresses[4] = { 0 };

    argon2_fill_subsegment (blocks, options, pos, false, addresses, skip_blocks, options->segment_length, cur_block, R, thread, shuffle_buf);
  }
}

DECLSPEC void argon2_final (GLOBAL_AS argon2_block_t *blocks, const argon2_options_t *options, u32 *out)
{
  const u32 lane_length = options->lane_length;
  const u32 lanes       = options->parallelism;

  argon2_block_t final_block = { };

  for (u32 l = 0; l < lanes; l++)
  {
    for (u32 idx = 0; idx < 128; idx++) final_block.values[idx] ^= blocks[((lane_length - 1) * lanes) + l].values[idx];
  }

  u32 output_len[32] = { 0 };

  output_len[0] = options->digest_len;

  blake2b_ctx_t ctx;
  blake2b_init (&ctx);

  // Override default (0x40) value in BLAKE2b
  ctx.h[0] ^= 0x40 ^ options->digest_len;

  blake2b_update (&ctx, output_len, 4);
  blake2b_update (&ctx, (u32 *) final_block.values, sizeof (final_block));

  blake2b_final (&ctx);

  for (int i = 0, idx = 0; i < (options->digest_len / 4); i += 2, idx += 1)
  {
    out[i + 0] = l32_from_64_S (ctx.h[idx]);
    out[i + 1] = h32_from_64_S (ctx.h[idx]);
  }
}