This change affects three key areas, each improving autotuning:

- Autotune refactoring itself

The main autotune algorithm had become too complex to maintain and has now been rewritten from scratch. The engine is now closer to the old v6.0.0 version, using a much more straightforward approach. Additionally, the backend is now informed when the autotune engine runs its operations, and it performs an extra, invisible kernel invocation. This significantly improves runtime accuracy, because the same caching mechanisms that kick in during normal cracking sessions now also apply during autotuning. This leads to more consistent and reliable automatic workload tuning.

- Benchmarking and '--speed-only' accuracy bugs fixed

Benchmark runtimes had become too short, especially since the default benchmark mask changed from '?b?b?b?b?b?b?b' to '?a?a?a?a?a?a?a?a'. For very fast hashes like NTLM, benchmarks often stopped immediately when base words needed to be regenerated, producing highly inaccurate results. This issue also misled users tuning '-n' values, as manually oversubscribing kernels could mask the problem, creating the impression that increasing '-n' had a larger impact on performance than it truly does. While '-n' still has an effect, it is not as significant. With this fix, users achieve the same speed without needing to tune '-n' manually. The bug was fixed by enforcing a minimum benchmark runtime of 4 seconds, regardless of kernel runtime or kernel type. This ensures more stable and realistic benchmark results, but typically increases the benchmark duration by up to 4 seconds.

- Kernel-threads set to 32 and plugin configuration cleanup

Some plugin configurations existed solely to work around the old benchmarking bug and can now be removed. For example, 'OPTS_TYPE_MAXIMUM_THREADS' is no longer required and has been removed from all plugins, although the parameter itself remains to avoid breaking custom plugins. Because increasing threads beyond 32 no longer offers meaningful performance gains, the default is now capped at 32 (unless overridden with '-T'). This simplifies GPU memory management. Currently, work-item counts are indirectly limited by buffer sizes (e.g., 'pws_buf[]'), which must not exceed 4 GiB (a hard-coded limit). This buffer size depends on the product of 'kernel-accel', 'kernel-threads', and the device's compute units. By reducing the default thread count from 1024 to 32, there is now more space available for base words.
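To make the 4-second minimum benchmark runtime concrete, here is a minimal sketch of such a guard loop in C. The helper names ('monotonic_ms', 'run_kernel_once') and the overall structure are illustrative assumptions, not hashcat's actual backend API:

#include <stdint.h>

#define MIN_BENCH_RUNTIME_MS 4000 /* enforced lower bound: 4 seconds */

/* Hypothetical stand-ins for the backend -- not hashcat's real functions */
extern uint64_t monotonic_ms (void);            /* monotonic clock in ms   */
extern uint64_t run_kernel_once (void *device); /* returns hashes computed */

/* Re-run the kernel until at least 4 seconds have elapsed, regardless of
 * how quickly a single invocation finishes. This keeps very fast hashes
 * (e.g. NTLM) from ending the benchmark early with unstable results. */
static double benchmark_speed (void *device)
{
  const uint64_t start = monotonic_ms ();

  uint64_t hashes_total = 0;
  uint64_t elapsed      = 0;

  do
  {
    hashes_total += run_kernel_once (device);

    elapsed = monotonic_ms () - start;

  } while (elapsed < MIN_BENCH_RUNTIME_MS);

  return (double) hashes_total / ((double) elapsed / 1000.0); /* H/s */
}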
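The memory argument in the last bullet can be illustrated with simple arithmetic. In this hypothetical C sketch, the per-entry size of 'pws_buf[]' is an assumed example value; only the accel x threads x compute-units scaling and the 4 GiB cap come from the text above:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define MAX_PWS_BUF_SIZE (4ULL << 30) /* the hard-coded 4 GiB limit */

/* pws_buf[] scales with kernel-accel * kernel-threads * compute units */
static uint64_t pws_buf_size (uint64_t accel, uint64_t threads,
                              uint64_t compute_units, uint64_t entry_size)
{
  return accel * threads * compute_units * entry_size;
}

int main (void)
{
  const uint64_t cu    = 128;  /* example device: 128 compute units */
  const uint64_t accel = 1024; /* example kernel-accel value        */
  const uint64_t entry = 256;  /* assumed bytes per pws_buf[] entry */

  const uint64_t old_size = pws_buf_size (accel, 1024, cu, entry); /* 32 GiB */
  const uint64_t new_size = pws_buf_size (accel,   32, cu, entry); /*  1 GiB */

  printf ("threads=1024: %" PRIu64 " GiB (%s)\n", old_size >> 30,
          old_size > MAX_PWS_BUF_SIZE ? "exceeds the 4 GiB cap" : "fits");
  printf ("threads=32:   %" PRIu64 " GiB (%s)\n", new_size >> 30,
          new_size > MAX_PWS_BUF_SIZE ? "exceeds the 4 GiB cap" : "fits");

  return 0;
}

With the old default of 1024 threads, the buffer in this example would need 32 GiB, so the hard cap forces 'kernel-accel' down; at 32 threads the same configuration needs only 1 GiB, leaving room for more base words.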
hashcat
hashcat is the world's fastest and most advanced password recovery utility, supporting five unique modes of attack for over 300 highly-optimized hashing algorithms. hashcat currently supports CPUs, GPUs, and other hardware accelerators on Linux, Windows, and macOS, and has facilities to help enable distributed password cracking.
License
hashcat is licensed under the MIT license. Refer to docs/license.txt for more information.
Installation
Download the latest release and unpack it in the desired location. Please remember to use 7z x when unpacking the archive from the command line to ensure full file paths remain intact.
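For example, assuming the downloaded archive is named hashcat-7.0.0.7z (substitute the actual release file name):

7z x hashcat-7.0.0.7z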
Usage/Help
Please refer to the Hashcat Wiki and the output of --help for usage information and general help. A list of frequently asked questions may also be found here. The Hashcat Forum also contains a plethora of information. If you still think you need help from a real human, come to Discord.
Building
Refer to BUILD.md for instructions on how to build hashcat from source.
Tests: Travis | Coverity | GitHub Actions (build-status badges)
Contributing
Contributions are welcome and encouraged, provided your code is of sufficient quality. Before submitting a pull request, please ensure your code adheres to the following requirements:
- Licensed under MIT license, or dedicated to the public domain (BSD, GPL, etc. code is incompatible)
- Adheres to the gnu99 standard
- Compiles cleanly with no warnings when compiled with -W -Wall -std=gnu99
- Uses Allman-style code blocks & indentation
- Uses 2 spaces as the indentation, or a tab if it's required (for example: Makefiles)
- Uses lower-case function and variable names
- Avoids the use of ! and uses positive conditionals wherever possible (e.g., if (foo == 0) instead of if (!foo), and if (foo) instead of if (foo != 0))
- Uses code like array[index + 0] if you also need to do array[index + 1], to keep it aligned
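The following short, hypothetical function illustrates several of these rules at once: Allman-style braces, 2-space indentation, lower-case names, positive conditionals, and the aligned array[index + 0] / array[index + 1] indexing:

#include <stddef.h>

static int sum_pairs (const int *array, size_t count)
{
  int total = 0;

  if (count < 2) return total; /* positive-style check, not if (!count) */

  for (size_t index = 0; index + 1 < count; index += 2)
  {
    total += array[index + 0]; /* + 0 kept so the lines align */
    total += array[index + 1];
  }

  return total;
}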
You can use GNU Indent to help you meet the style requirements:
indent -st -bad -bap -sc -bl -bli0 -ncdw -nce -cli0 -cbi0 -pcs -cs -npsl -bs -nbc -bls -blf -lp -i2 -ts2 -nut -l1024 -nbbo -fca -lc1024 -fc1
Your pull request should fully describe the functionality you are adding/removing or the problem you are solving. Regardless of whether your patch modifies one line or one thousand lines, you must describe what has prompted and/or motivated the change.
Solve only one problem in each pull request. If you're fixing a bug and adding a new feature, you need to make two separate pull requests. If you're fixing three bugs, you need to make three separate pull requests. If you're adding four new features, you need to make four separate pull requests. So on, and so forth.
If your patch fixes a bug, please be sure there is an issue open for the bug before submitting a pull request. If your patch aims to improve performance or optimize an algorithm, be sure to quantify your optimizations and document the trade-offs, and back up your claims with benchmarks and metrics.
In order to maintain the quality and integrity of the hashcat source tree, all pull requests must be reviewed and signed off by at least two board members before being merged. The project lead has the ultimate authority in deciding whether to accept or reject a pull request. Do not be discouraged if your pull request is rejected!