# This file is used to override autotune settings
# This file is used to preset the Vector-Width, the Kernel-Accel and the Kernel-Loops value per Device, Attack-Mode and Hash-Type
#
# - A valid line consists of the following fields (in that order):
#   - Device-Name
#   - Attack-Mode
#   - Hash-Type
#   - Vector-Width
#   - Kernel-Accel
#   - Kernel-Loops
# - The first three columns define the filter, the other three are what is assigned when that filter matches
# - If no filter matches, autotune is used
# - Columns are separated with one or many spaces or tabs
# - A line cannot start with a space or a tab
# - Comment lines are allowed, use a # as the first character
# - Invalid lines are ignored
# - The Device-Name is the OpenCL Device-Name. It's shown on hashcat startup.
#   - If the device name contains spaces, replace all spaces with the _ character.
# - The Device-Name can be assigned an alias. This is useful if many devices share the same chip
#   - If you assign an alias, make sure not to use the device's name directly
# - There are also hard-wired Device-Names which match entire device types:
#   - DEVICE_TYPE_CPU
#   - DEVICE_TYPE_GPU
#   - DEVICE_TYPE_ACCELERATOR
# - The use of wildcards is allowed, some rules:
#   - Wildcards can only replace an entire Device-Name, not just parts of it, e.g. not GeForce_*
#   - The policy is local > global, meaning the more specifically you configure something, the more likely it is to be selected
#   - The policy testing order is from left to right
# - Attack modes can be:
#   - 0 = Dictionary-Attack
#   - 1 = Combinator-Attack, also used for attack-modes 6 and 7 since they share the same kernel
#   - 3 = Mask-Attack
# - The Kernel-Accel is a multiplier to OpenCL's concept of a workitem, not the workitem count
# - The Kernel-Loops has a functionality depending on the hash-type:
#   - Slow Hash: Number of iterations calculated per workitem
#   - Fast Hash: Number of mutations calculated per workitem
# - Neither of these should be confused with the OpenCL concept of a "thread"; that one is maintained automatically
# - The Vector-Width can only have the values 1, 2, 4, 8 or 'N', where 'N' stands for native, which is an OpenCL-queried value
# - The Kernel-Accel is limited to 1024
# - The Kernel-Loops is limited to 1024
# - The Kernel-Accel can have 'A', where 'A' stands for autotune
# - The Kernel-Accel can have 'M', where 'M' stands for maximum possible
# - The Kernel-Loops can have 'A', where 'A' stands for autotune
# - The Kernel-Loops can have 'M', where 'M' stands for maximum possible
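#
# - As an illustration only (a hypothetical entry, not a tuning recommendation), the following line would
#   force Vector-Width 4 and leave Kernel-Accel and Kernel-Loops to autotune for hash-type 0 (MD5)
#   in attack-mode 3 (Mask-Attack) on a device reported as "GeForce GTX 980":
#
#   GeForce_GTX_980 3 0 4 A A
#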
#############
## ALIASES ##
#############

#Device Alias
#Name Name

Tesla_C2050 ALIAS_nv_real_simd
Tesla_C2050/C2070 ALIAS_nv_real_simd
Tesla_C2070 ALIAS_nv_real_simd
Tesla_C2075 ALIAS_nv_real_simd
Tesla_K10 ALIAS_nv_real_simd
Tesla_K20 ALIAS_nv_real_simd
Tesla_K40 ALIAS_nv_real_simd
Tesla_K80 ALIAS_nv_real_simd
Tesla_M20xx ALIAS_nv_real_simd

Quadro_410 ALIAS_nv_real_simd
Quadro_K2000 ALIAS_nv_real_simd
Quadro_K2000D ALIAS_nv_real_simd
Quadro_K4000 ALIAS_nv_real_simd
Quadro_K4200 ALIAS_nv_real_simd
Quadro_K420 ALIAS_nv_real_simd
Quadro_K5000 ALIAS_nv_real_simd
Quadro_K5200 ALIAS_nv_real_simd
Quadro_K6000 ALIAS_nv_real_simd
Quadro_K600 ALIAS_nv_real_simd
Quadro_Plex_7000 ALIAS_nv_real_simd

NVS_310 ALIAS_nv_real_simd
NVS_315 ALIAS_nv_real_simd
NVS_4200M ALIAS_nv_real_simd
NVS_510 ALIAS_nv_real_simd
NVS_5200M ALIAS_nv_real_simd
NVS_5400M ALIAS_nv_real_simd

GeForce_410M ALIAS_nv_real_simd
GeForce_610M ALIAS_nv_real_simd
GeForce_705M ALIAS_nv_real_simd
GeForce_710M ALIAS_nv_real_simd
GeForce_800M ALIAS_nv_real_simd
GeForce_820M ALIAS_nv_real_simd
GeForce_920M ALIAS_nv_real_simd
GeForce_GT_410M ALIAS_nv_real_simd
GeForce_GT_415M ALIAS_nv_real_simd
GeForce_GT_420M ALIAS_nv_real_simd
GeForce_GT_430 ALIAS_nv_real_simd
GeForce_GT_435M ALIAS_nv_real_simd
GeForce_GT_440 ALIAS_nv_real_simd
GeForce_GT_445M ALIAS_nv_real_simd
GeForce_GT_520 ALIAS_nv_real_simd
GeForce_GT_520M ALIAS_nv_real_simd
GeForce_GT_520MX ALIAS_nv_real_simd
GeForce_GT_525M ALIAS_nv_real_simd
GeForce_GT_540M ALIAS_nv_real_simd
GeForce_GT_550M ALIAS_nv_real_simd
GeForce_GT_555M ALIAS_nv_real_simd
GeForce_GT_610 ALIAS_nv_real_simd
GeForce_GT_620 ALIAS_nv_real_simd
GeForce_GT_620M ALIAS_nv_real_simd
GeForce_GT_625M ALIAS_nv_real_simd
GeForce_GT_630 ALIAS_nv_real_simd
GeForce_GT_630M ALIAS_nv_real_simd
GeForce_GT_635M ALIAS_nv_real_simd
GeForce_GT_640 ALIAS_nv_real_simd
GeForce_GT_640M ALIAS_nv_real_simd
GeForce_GT_640M_LE ALIAS_nv_real_simd
GeForce_GT_645M ALIAS_nv_real_simd
GeForce_GT_650M ALIAS_nv_real_simd
GeForce_GT_705 ALIAS_nv_real_simd
GeForce_GT_720 ALIAS_nv_real_simd
GeForce_GT_720M ALIAS_nv_real_simd
GeForce_GT_730 ALIAS_nv_real_simd
GeForce_GT_730M ALIAS_nv_real_simd
GeForce_GT_735M ALIAS_nv_real_simd
GeForce_GT_740 ALIAS_nv_real_simd
GeForce_GT_740M ALIAS_nv_real_simd
GeForce_GT_745M ALIAS_nv_real_simd
GeForce_GT_750M ALIAS_nv_real_simd
GeForce_GTS_450 ALIAS_nv_real_simd
GeForce_GTX_460 ALIAS_nv_real_simd
GeForce_GTX_460M ALIAS_nv_real_simd
GeForce_GTX_465 ALIAS_nv_real_simd
GeForce_GTX_470 ALIAS_nv_real_simd
GeForce_GTX_470M ALIAS_nv_real_simd
GeForce_GTX_480 ALIAS_nv_real_simd
GeForce_GTX_480M ALIAS_nv_real_simd
GeForce_GTX_485M ALIAS_nv_real_simd
GeForce_GTX_550_Ti ALIAS_nv_real_simd
GeForce_GTX_560M ALIAS_nv_real_simd
GeForce_GTX_560_Ti ALIAS_nv_real_simd
GeForce_GTX_570 ALIAS_nv_real_simd
GeForce_GTX_570M ALIAS_nv_real_simd
GeForce_GTX_580 ALIAS_nv_real_simd
GeForce_GTX_580M ALIAS_nv_real_simd
GeForce_GTX_590 ALIAS_nv_real_simd
GeForce_GTX_610M ALIAS_nv_real_simd
GeForce_GTX_650 ALIAS_nv_real_simd
GeForce_GTX_650_Ti ALIAS_nv_real_simd
GeForce_GTX_650_Ti_BOOST ALIAS_nv_real_simd
GeForce_GTX_660 ALIAS_nv_real_simd
GeForce_GTX_660M ALIAS_nv_real_simd
GeForce_GTX_660_Ti ALIAS_nv_real_simd
GeForce_GTX_670 ALIAS_nv_real_simd
GeForce_GTX_670M ALIAS_nv_real_simd
GeForce_GTX_670MX ALIAS_nv_real_simd
GeForce_GTX_675M ALIAS_nv_real_simd
GeForce_GTX_675MX ALIAS_nv_real_simd
GeForce_GTX_680 ALIAS_nv_real_simd
GeForce_GTX_680M ALIAS_nv_real_simd
GeForce_GTX_680MX ALIAS_nv_real_simd
GeForce_GTX_690 ALIAS_nv_real_simd
GeForce_GTX_705M ALIAS_nv_real_simd
GeForce_GTX_710M ALIAS_nv_real_simd
GeForce_GTX_760 ALIAS_nv_real_simd
GeForce_GTX_760M ALIAS_nv_real_simd
GeForce_GTX_765M ALIAS_nv_real_simd
GeForce_GTX_770 ALIAS_nv_real_simd
GeForce_GTX_770M ALIAS_nv_real_simd
GeForce_GTX_780 ALIAS_nv_real_simd
GeForce_GTX_780M ALIAS_nv_real_simd
GeForce_GTX_780_Ti ALIAS_nv_real_simd
GeForce_GTX_800M ALIAS_nv_real_simd
GeForce_GTX_820M ALIAS_nv_real_simd
GeForce_GTX_860M ALIAS_nv_real_simd
GeForce_GTX_870M ALIAS_nv_real_simd
GeForce_GTX_880M ALIAS_nv_real_simd
GeForce_GTX_920M ALIAS_nv_real_simd
#GeForce_GTX_TITAN ALIAS_nv_real_simd
GeForce_GTX_TITAN_Black ALIAS_nv_real_simd
GeForce_GTX_TITAN_Z ALIAS_nv_real_simd
##
## Maxwell sm_50 cards or higher
##

Quadro_K1200 ALIAS_nv_sm50_or_higher
Quadro_K2200 ALIAS_nv_sm50_or_higher
Quadro_K2200M ALIAS_nv_sm50_or_higher
Quadro_K620 ALIAS_nv_sm50_or_higher
Quadro_K620M ALIAS_nv_sm50_or_higher
Quadro_M1000M ALIAS_nv_sm50_or_higher
Quadro_M2000M ALIAS_nv_sm50_or_higher
Quadro_M3000M ALIAS_nv_sm50_or_higher
Quadro_M4000M ALIAS_nv_sm50_or_higher
Quadro_M5000M ALIAS_nv_sm50_or_higher
Quadro_M500M ALIAS_nv_sm50_or_higher
Quadro_M5500M ALIAS_nv_sm50_or_higher
Quadro_M600M ALIAS_nv_sm50_or_higher

NVS_810 ALIAS_nv_sm50_or_higher

GeForce_GTX_750 ALIAS_nv_sm50_or_higher
GeForce_GTX_750_Ti ALIAS_nv_sm50_or_higher

GeForce_830M ALIAS_nv_sm50_or_higher
GeForce_840M ALIAS_nv_sm50_or_higher
GeForce_GTX_850M ALIAS_nv_sm50_or_higher

Tesla_M4 ALIAS_nv_sm50_or_higher
Tesla_M6 ALIAS_nv_sm50_or_higher
Tesla_M10 ALIAS_nv_sm50_or_higher
Tesla_M40 ALIAS_nv_sm50_or_higher
Tesla_M60 ALIAS_nv_sm50_or_higher
Tesla_P4 ALIAS_nv_sm50_or_higher
Tesla_P40 ALIAS_nv_sm50_or_higher
Tesla_P100 ALIAS_nv_sm50_or_higher
Tesla_V100 ALIAS_nv_sm50_or_higher

Quadro_M2000 ALIAS_nv_sm50_or_higher
Quadro_M4000 ALIAS_nv_sm50_or_higher
Quadro_M5000 ALIAS_nv_sm50_or_higher
Quadro_M6000 ALIAS_nv_sm50_or_higher

TITAN_X ALIAS_nv_sm50_or_higher
TITAN_Xp ALIAS_nv_sm50_or_higher
TITAN_V ALIAS_nv_sm50_or_higher
TITAN_RTX ALIAS_nv_sm50_or_higher

Tegra_X1 ALIAS_nv_sm50_or_higher

GeForce_910M ALIAS_nv_sm50_or_higher
GeForce_920M ALIAS_nv_sm50_or_higher
GeForce_920MX ALIAS_nv_sm50_or_higher
GeForce_930M ALIAS_nv_sm50_or_higher
GeForce_930MX ALIAS_nv_sm50_or_higher
GeForce_940M ALIAS_nv_sm50_or_higher
GeForce_940MX ALIAS_nv_sm50_or_higher
GeForce_945M ALIAS_nv_sm50_or_higher
GeForce_GT_945A ALIAS_nv_sm50_or_higher
GeForce_GTX_950 ALIAS_nv_sm50_or_higher
GeForce_GTX_950M ALIAS_nv_sm50_or_higher
GeForce_GTX_960 ALIAS_nv_sm50_or_higher
GeForce_GTX_960M ALIAS_nv_sm50_or_higher
GeForce_GTX_965M ALIAS_nv_sm50_or_higher
GeForce_GTX_970 ALIAS_nv_sm50_or_higher
GeForce_GTX_970M ALIAS_nv_sm50_or_higher
GeForce_GTX_980 ALIAS_nv_sm50_or_higher
GeForce_GTX_980M ALIAS_nv_sm50_or_higher
GeForce_GTX_980_Ti ALIAS_nv_sm50_or_higher

GeForce_GT_1030 ALIAS_nv_sm50_or_higher
GeForce_GTX_1050 ALIAS_nv_sm50_or_higher
GeForce_GTX_1050_Ti ALIAS_nv_sm50_or_higher
GeForce_GTX_1060 ALIAS_nv_sm50_or_higher
GeForce_GTX_1060_Ti ALIAS_nv_sm50_or_higher
GeForce_GTX_1070 ALIAS_nv_sm50_or_higher
GeForce_GTX_1070_Ti ALIAS_nv_sm50_or_higher
GeForce_GTX_1080 ALIAS_nv_sm50_or_higher
GeForce_GTX_1080_Ti ALIAS_nv_sm50_or_higher
GeForce_MX110 ALIAS_nv_sm50_or_higher
GeForce_MX130 ALIAS_nv_sm50_or_higher
GeForce_MX150 ALIAS_nv_sm50_or_higher

GeForce_RTX_2060 ALIAS_nv_sm50_or_higher
GeForce_RTX_2060_SUPER ALIAS_nv_sm50_or_higher
GeForce_RTX_2070 ALIAS_nv_sm50_or_higher
GeForce_RTX_2070_SUPER ALIAS_nv_sm50_or_higher
GeForce_RTX_2080 ALIAS_nv_sm50_or_higher
GeForce_RTX_2080_SUPER ALIAS_nv_sm50_or_higher
GeForce_RTX_2080_Ti ALIAS_nv_sm50_or_higher

GeForce_RTX_3060 ALIAS_nv_sm50_or_higher
GeForce_RTX_3060_Ti ALIAS_nv_sm50_or_higher
GeForce_RTX_3070 ALIAS_nv_sm50_or_higher
GeForce_RTX_3080 ALIAS_nv_sm50_or_higher
GeForce_RTX_3090 ALIAS_nv_sm50_or_higher
#############
## ENTRIES ##
#############

DEVICE_TYPE_CPU * 6100 1 A A
DEVICE_TYPE_CPU * 6231 1 A A
DEVICE_TYPE_CPU * 6232 1 A A
DEVICE_TYPE_CPU * 6233 1 A A
DEVICE_TYPE_CPU * 13731 1 A A
DEVICE_TYPE_CPU * 13732 1 A A
DEVICE_TYPE_CPU * 13733 1 A A

#Device Attack Hash Vector Kernel Kernel
#Name Mode Type Width Accel Loops

ALIAS_nv_real_simd 3 0 2 A A
ALIAS_nv_real_simd 3 10 2 A A
ALIAS_nv_real_simd 3 11 2 A A
ALIAS_nv_real_simd 3 12 2 A A
ALIAS_nv_real_simd 3 20 2 A A
ALIAS_nv_real_simd 3 21 2 A A
ALIAS_nv_real_simd 3 22 2 A A
ALIAS_nv_real_simd 3 23 2 A A
ALIAS_nv_real_simd 3 200 2 A A
ALIAS_nv_real_simd 3 400 2 A A
ALIAS_nv_real_simd 3 900 4 A A
ALIAS_nv_real_simd 3 1000 4 A A
ALIAS_nv_real_simd 3 1100 4 A A
ALIAS_nv_real_simd 3 2400 2 A A
ALIAS_nv_real_simd 3 2410 2 A A
ALIAS_nv_real_simd 3 2600 4 A A
ALIAS_nv_real_simd 3 2611 4 A A
ALIAS_nv_real_simd 3 2612 4 A A
ALIAS_nv_real_simd 3 2711 4 A A
ALIAS_nv_real_simd 3 2811 4 A A
ALIAS_nv_real_simd 3 3711 2 A A
ALIAS_nv_real_simd 3 5100 2 A A
ALIAS_nv_real_simd 3 5300 2 A A
ALIAS_nv_real_simd 3 5500 4 A A
ALIAS_nv_real_simd 3 5600 2 A A
ALIAS_nv_real_simd 3 8700 4 A A
ALIAS_nv_real_simd 3 9900 2 A A
ALIAS_nv_real_simd 3 11000 4 A A
ALIAS_nv_real_simd 3 11100 2 A A
ALIAS_nv_real_simd 3 11900 2 A A
ALIAS_nv_real_simd 3 13300 4 A A
ALIAS_nv_real_simd 3 18700 8 A A

ALIAS_nv_sm50_or_higher 3 0 8 A A
ALIAS_nv_sm50_or_higher 3 10 8 A A
ALIAS_nv_sm50_or_higher 3 11 8 A A
ALIAS_nv_sm50_or_higher 3 12 8 A A
ALIAS_nv_sm50_or_higher 3 20 4 A A
ALIAS_nv_sm50_or_higher 3 21 4 A A
ALIAS_nv_sm50_or_higher 3 22 4 A A
ALIAS_nv_sm50_or_higher 3 23 4 A A
ALIAS_nv_sm50_or_higher 3 30 4 A A
ALIAS_nv_sm50_or_higher 3 40 4 A A
ALIAS_nv_sm50_or_higher 3 200 8 A A
ALIAS_nv_sm50_or_higher 3 900 8 A A
ALIAS_nv_sm50_or_higher 3 1000 8 A A
ALIAS_nv_sm50_or_higher 3 1100 4 A A
ALIAS_nv_sm50_or_higher 3 2400 8 A A
ALIAS_nv_sm50_or_higher 3 2410 4 A A
ALIAS_nv_sm50_or_higher 3 3800 4 A A
ALIAS_nv_sm50_or_higher 3 4800 8 A A
ALIAS_nv_sm50_or_higher 3 5500 2 A A
ALIAS_nv_sm50_or_higher 3 9900 4 A A
ALIAS_nv_sm50_or_higher 3 16400 8 A A
ALIAS_nv_sm50_or_higher 3 18700 8 A A
##
## The following cards were manually tuned, as an example
##

GeForce_GTX_TITAN 3 0 4 A A
GeForce_GTX_TITAN 3 11 4 A A
GeForce_GTX_TITAN 3 12 4 A A
GeForce_GTX_TITAN 3 21 1 A A
GeForce_GTX_TITAN 3 22 1 A A
GeForce_GTX_TITAN 3 23 1 A A
GeForce_GTX_TITAN 3 30 4 A A
GeForce_GTX_TITAN 3 200 2 A A
GeForce_GTX_TITAN 3 900 4 A A
GeForce_GTX_TITAN 3 1000 4 A A
GeForce_GTX_TITAN 3 1100 4 A A
GeForce_GTX_TITAN 3 2400 4 A A
GeForce_GTX_TITAN 3 2410 2 A A
GeForce_GTX_TITAN 3 5500 1 A A
GeForce_GTX_TITAN 3 9900 2 A A

##
## BCRYPT
##

DEVICE_TYPE_CPU * 3200 1 N A

##
## SCRYPT
##

DEVICE_TYPE_CPU * 8900 1 N A
DEVICE_TYPE_CPU * 9300 1 N A
DEVICE_TYPE_CPU * 15700 1 N A
DEVICE_TYPE_CPU * 22700 1 N A

DEVICE_TYPE_GPU * 8900 1 N A
DEVICE_TYPE_GPU * 9300 1 N A
DEVICE_TYPE_GPU * 15700 1 1 A
DEVICE_TYPE_GPU * 22700 1 N A

##
## CryptoAPI
##

DEVICE_TYPE_CPU * 14500 1 A A
DEVICE_TYPE_GPU * 14500 1 A A
## Here's an example of how to manually tune SCRYPT algorithm kernels for your hardware.
## Manually tuning the GPU will yield increased performance. There is typically no noticeable change to CPU performance.
##
## First, you need to know the parameters of your SCRYPT hash: N, r and p.
##
## The reference SCRYPT parameter values are N=14, r=8 and p=1, but these will likely not match the parameters used by real-world applications.
## For reference, the N value represents an exponent (2^N, which we calculate by bit shifting 1 left by N bits).
## Hashcat expects this N value in decimal format: 1 << 14 = 16384
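## (As an illustration with different, hypothetical parameters: an application using N=16 would be entered as 1 << 16 = 65536.)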
##
## Now that you have the 3 configuration items in decimal format, multiply them by 128 (the underlying crypto primitive's block size).
## For example: 128 * 16384 * 8 * 1 = 16777216 = 16MB
## This is the amount of memory required for the GPU to compute the hash of one password candidate.
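##
## In general, the per-candidate memory requirement works out to (assuming nothing beyond the factors named above):
##
##   memory_per_candidate = 128 * N * r * p bytes   (with N already expanded to its decimal form, i.e. N = 2^exponent)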
##
## Hashcat computes multiple password candidates in parallel - this is what allows for full utilization of the device.
## The number of password candidates that Hashcat can run in parallel is VRAM-limited and depends on:
##
## 1. The compute device's native compute unit count
## 2. The compute device's native thread count
## 3. An artificial multiplier (--kernel-accel aka -n)
##
## In order to find these values:
##
## 1. On startup Hashcat will show: * Device #1: GeForce GTX 980, 3963/4043 MB, 16MCU. The 16 MCU is the number of compute units on that device.
## 2. Native thread counts are fixed values: CPU=1, GPU-Intel=8, GPU-AMD=64 (wavefronts), GPU-NVIDIA=32 (warps)
##
## Now multiply them together. For my GTX980: 16 * 32 * 16777216 = 8589934592 = 8GB
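##
## In other words (a rule of thumb, assuming only the three factors above matter):
##
##   memory_for_full_utilization = compute_units * native_thread_count * memory_per_candidate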
##
## If we want to actually make use of all computing resources, this GPU would require 8GB of GPU RAM.
## However, it doesn't have that:
##
## Device #1: GeForce GTX 980, 3963/4043 MB, 16MCU. We only have 4043 MB (4GB minus some overhead from the OS).
##
## How do we deal with this? This is where the SCRYPT TMTO (time-memory trade-off) kicks in. The SCRYPT algorithm is designed in such a way that we
## can pre-compute that 16MB buffer from a self-chosen offset. Details on how this actually works are not important for this process.
##
## What's relevant to us is that we can halve the buffer size, but we pay with twice the computation time.
## We can repeat this as often as we want. That's why it's a trade-off.
##
## This mechanic can be set manually using --scrypt-tmto on the command line, but this is not the best way.
##
## Back to our problem. We need 8GB of memory but have only ~4GB.
## It's not even a full 4GB. The OS needs some of it, and Hashcat needs some of it to store password candidates and other things.
## If you run a headless server, it should be safe to subtract a fixed value of 200MB from whatever your GPU has.
##
## So let's divide our required memory (8GB) by 2 until it fits into our VRAM minus 200MB (here: 4043 MB - 200 MB, roughly 3.8GB).
##
## (8GB >> 0) = 8GB < 3.8GB = No, does not fit
## (8GB >> 1) = 4GB < 3.8GB = No, does not fit
## (8GB >> 2) = 2GB < 3.8GB = Yes!
##
## This process is automated in Hashcat, but it is important to understand what's happening here.
## Because of the slight overhead from the OS and Hashcat, we pay a very high price.
## Even though it is just 200MB, it forces us to increase the TMTO by another step.
## In terms of speed, we now get only 1/4 of what we could achieve on that same GPU if it had just 8.2GB of RAM.
## On top of that, we end up wasting 1.8GB of RAM, which costs us ((1.8GB/16MB)>>1) candidates per second.
##
## This is where manual tuning can come into play.
## If we know that the resources we need are close to what we have (in this case 3.8GB <-> 4.0GB),
## we could decide to throw away some of our compute units so that we no longer need 4.0GB but only 3.8GB.
## Then we do not need to increase the TMTO by another step to fit into VRAM.
##
## If we cut down our 16 MCU to only 15 MCU or 14 MCU using --kernel-accel (-n), we end up with:
##
## 16 * 32 * 16777216 = 8589934592 / 2 = 4294967296 = 4.00GB < 3.80GB = Nope, next
## 15 * 32 * 16777216 = 8053063680 / 2 = 4026531840 = 3.84GB < 3.80GB = Nope, next
## 14 * 32 * 16777216 = 7516192768 / 2 = 3758096384 = 3.58GB < 3.80GB = Yes!
##
## So we throw away 2/16 of the compute units, but save half of the computation trade-off on the rest of the compute device.
## On my GTX980, this improves the performance from 163 H/s to 201 H/s.
## You don't need to control --scrypt-tmto manually: now that the multiplier (-n) is smaller than the native value,
## Hashcat will automatically realize it can decrease the TMTO by one step.
##
## At this point, you have found the optimal base value for your compute device. In this case: 14.
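##
## As a rule of thumb (a summary of the above, not an exact formula), the base value is the largest -n,
## up to your MCU count, for which:
##
##   -n * native_thread_count * (memory_per_candidate >> TMTO_steps)  <  VRAM - 200MB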
##
## Depending on your hardware, especially hardware with very slow memory access like a GPU,
## there's a good chance that it's cheaper (faster) to compute an extra step on the GPU registers.
## So if we increase the TMTO again by one, this gives an extra speed boost.
##
## On my GTX980, this improves the performance from 201 H/s to 255 H/s.
## Again, there's no need to control this with --scrypt-tmto. Hashcat will realize it has to increase the TMTO again.
##
## Altogether, you can control all of this by using the -n parameter on the command line.
## This is not ideal in a production environment because you must use the --force flag.
## The best way to set this is to store it in this hashcat.hctune file. This avoids the need to bypass any warnings.
##
## Find the ideal -n value, then store it here along with the proper compute device name.
## Formatting guidelines are available at the top of this document.
## 4GB
GeForce_GTX_980 * 8900 1 28 A
GeForce_GTX_980 * 9300 1 128 A
GeForce_GTX_980 * 15700 1 28 A
GeForce_GTX_980 * 22700 1 28 A

## 8GB
GeForce_GTX_1080 * 8900 1 14 A
GeForce_GTX_1080 * 9300 1 256 A
GeForce_GTX_1080 * 15700 1 14 A
GeForce_GTX_1080 * 22700 1 14 A

## 11GB
GeForce_RTX_2080_Ti * 8900 1 68 A
GeForce_RTX_2080_Ti * 9300 1 532 A
GeForce_RTX_2080_Ti * 15700 1 68 A
GeForce_RTX_2080_Ti * 22700 1 68 A

## 8GB
Vega_10_XL/XT_[Radeon_RX_Vega_56/64] * 8900 1 28 A
Vega_10_XL/XT_[Radeon_RX_Vega_56/64] * 9300 1 442 A
Vega_10_XL/XT_[Radeon_RX_Vega_56/64] * 15700 1 28 A
Vega_10_XL/XT_[Radeon_RX_Vega_56/64] * 22700 1 28 A

## 4GB
AMD_Radeon_(TM)_RX_480_Graphics * 8900 1 14 A
AMD_Radeon_(TM)_RX_480_Graphics * 9300 1 126 A
AMD_Radeon_(TM)_RX_480_Graphics * 15700 1 14 A
AMD_Radeon_(TM)_RX_480_Graphics * 22700 1 14 A