mirror of https://github.com/hashcat/hashcat.git synced 2024-11-29 19:38:18 +00:00
Commit Graph

18 Commits

Author SHA1 Message Date
jsteube
a3a16f676f OpenCL Kernels: Add a decompressing kernel and compressing host code in order to reduce PCIe transfer time
For details see https://hashcat.net/forum/thread-7267.html
2018-02-05 17:18:58 +01:00
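The idea behind this change: the host packs only the words each candidate actually uses before the PCIe copy, and a small kernel on the device expands them back into fixed-size slots. A minimal OpenCL sketch of that pattern; the struct layouts and names below are illustrative, not hashcat's actual ones (see the forum thread above for the real design).

    // Illustrative only: only pws_comp and pws_idx cross the PCIe bus; the
    // fixed-size pws_buf is filled on the device by this kernel.

    typedef uint  u32;
    typedef ulong u64;

    typedef struct { u32 off; u32 cnt; u32 len; } pw_idx_demo_t;  // offset, word count, byte length (assumed layout)
    typedef struct { u32 i[64]; u32 pw_len; }     pw_demo_t;      // fixed-size device-side slot (assumed layout)

    __kernel void gpu_decompress_demo (__global pw_demo_t *pws_buf,
                                       __global const u32 *pws_comp,
                                       __global const pw_idx_demo_t *pws_idx,
                                       const u64 gid_max)
    {
      const u64 gid = get_global_id (0);

      if (gid >= gid_max) return;

      const pw_idx_demo_t idx = pws_idx[gid];

      pw_demo_t pw;

      for (u32 i = 0; i < 64; i++) pw.i[i] = 0;                   // clear unused words

      for (u32 i = 0; i < idx.cnt; i++) pw.i[i] = pws_comp[idx.off + i];

      pw.pw_len = idx.len;

      pws_buf[gid] = pw;
    }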
jsteube
1d04de3a8e Limit kernel-loops in straight-mode to 256, thereby allowing rules to be stored in constant memory 2017-08-23 12:43:59 +02:00
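With at most 256 rules per kernel invocation, the active rule batch is small enough to live in __constant memory, which is read-only and cached. A sketch under that assumption; the rule struct is simplified and the kernel is not hashcat's (with 256 rules of 32 x 32-bit commands this is 32 KiB, under the common 64 KiB constant-memory minimum).

    typedef uint u32;

    typedef struct { u32 cmds[32]; } kernel_rule_demo_t;   // simplified; the real kernel_rule_t differs

    __kernel void rules_demo (__global u32 *pws,
                              __constant kernel_rule_demo_t *rules,   // rule batch placed in constant memory
                              const u32 il_cnt)                       // <= 256 rules per launch
    {
      const u32 gid = get_global_id (0);

      u32 w = pws[gid];

      for (u32 il = 0; il < il_cnt; il++)
      {
        w ^= rules[il].cmds[0];   // stand-in for the real rule processing
      }

      pws[gid] = w;
    }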
jsteube
319799bbbf Switch the datatypes of the variables responsible for work-item count and work-item size from u32 to u64 2017-08-19 16:39:22 +02:00
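Tracking work-item counts and sizes as u64 on the host avoids 32-bit overflow for large launches and maps cleanly onto the size_t parameters of the OpenCL API. A hypothetical host helper (not hashcat's actual code) showing the cast at the API boundary:

    #include <CL/cl.h>

    typedef unsigned long long u64;

    // Hypothetical helper: counts are kept as u64 internally and only cast to
    // size_t when handed to the OpenCL runtime.
    static cl_int run_kernel_1d (cl_command_queue queue, cl_kernel kernel,
                                 const u64 work_item_count, const u64 work_item_size)
    {
      const size_t gws = (size_t) work_item_count;
      const size_t lws = (size_t) work_item_size;

      return clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &gws, &lws, 0, NULL, NULL);
    }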
jsteube
dcaa91a88f Fix rule engine function call in amp_a0 2017-08-12 13:28:31 +02:00
jsteube
07b54c1257 Replace code to use pure kernel rule engine for slow hashes 2017-08-11 16:21:19 +02:00
jsteube
34d882a116 Rename inc_rp.X to inc_rp_optimized.X 2017-08-11 11:25:47 +02:00
jsteube
c918173fcf Get rid of comb_t, which can be safely replaced with pw_t now 2017-06-25 00:56:25 +02:00
jsteube
a673aee037 Very hot commit, continue reading here:
This is a test commit using buffers large enough to handle both passwords and salts up to length 256.
It requires changes to the kernel code, which are not included here.
It also requires some of the host code to be modified. Before we modify the kernel code to support the larger lengths, I want to be sure of:
1. Host code modification is ok (no overflows or underflows)
2. Passwords and Salts are printed correctly to status, outfile, show, left, etc.
3. Performance does not change (or only very minimal)
This is not yet a patch that supports actually cracking passwords and salts up to length 256, but it must not fail either.
If it does, there's no reason to continue adding support for passwords and salts up to length 256.
2017-06-17 17:57:30 +02:00
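Length-256 support essentially means widening the fixed-size password and salt containers to hold 256 bytes. Purely illustrative struct sketches; the real pw_t and salt_t carry additional fields.

    typedef unsigned int u32;

    typedef struct
    {
      u32 i[64];          // 64 x 32-bit words = up to 256 bytes of password material
      u32 pw_len;         // length in bytes, now allowed to reach 256
    } pw_demo_t;

    typedef struct
    {
      u32 salt_buf[64];   // up to 256 bytes of salt material
      u32 salt_len;
    } salt_demo_t;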
Jens Steube
7fe575e204 Add const qualifier to variable declaration of matching global memory objects 2016-11-22 20:20:34 +01:00
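Adding const to read-only __global kernel arguments documents intent and turns accidental writes into compile-time errors. A before/after sketch with made-up kernel names; the real signatures differ.

    // Before: nothing prevents an accidental write through bfs.
    __kernel void demo_old (__global uint *pws, __global uint *bfs)
    {
      pws[get_global_id (0)] ^= bfs[0];
    }

    // After: the buffer that is only read is declared const, so a stray write
    // through it is rejected by the compiler.
    __kernel void demo_new (__global uint *pws, __global const uint *bfs)
    {
      pws[get_global_id (0)] ^= bfs[0];
    }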
jsteube
f273d4771b Fix missing pwlen copy in amp_a0 2016-09-16 23:56:05 +02:00
jsteube
30371bef10 Allow words of length > 32 in wordlists for -a 0 for slow hashes if no rules are in use or a : rule is in the rulefile 2016-09-14 17:40:39 +02:00
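A hypothetical host-side check that mirrors the stated condition; hashcat's actual implementation is not shown here.

    #include <stdbool.h>
    #include <string.h>

    // Hypothetical: a word longer than 32 bytes is only usable when it can
    // still be tested unmodified, i.e. no rules are loaded at all or the
    // rulefile contains the no-op ':' rule.
    static bool accept_long_word (const char **rules, const int rules_cnt)
    {
      if (rules_cnt == 0) return true;                   // no rules in use

      for (int i = 0; i < rules_cnt; i++)
      {
        if (strcmp (rules[i], ":") == 0) return true;    // a ':' rule is present
      }

      return false;
    }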
jsteube
3daf0af480 Added docs/credits.txt
Added docs/team.txt
2016-09-11 22:20:15 +02:00
Jens Steube
2899f53a15 Move files from include/ to OpenCL/ if they are used within kernels
Rename includes in OpenCL so that it's easier to recognize them as such
2016-05-25 23:04:26 +02:00
Jens Steube
a62b7ed06e Upgrade kernel to support dynamic local work sizes 2016-01-19 16:06:03 +01:00
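With dynamic local work sizes, the host picks the work-group size at launch time instead of fixing it in the kernel source. One way to do that, sketched with standard OpenCL calls (not hashcat's actual scheduling code), is to query CL_KERNEL_WORK_GROUP_SIZE:

    #include <CL/cl.h>

    // Hypothetical helper: ask the runtime what this kernel supports on this
    // device and use that as the local work size.
    static cl_int launch_dynamic_lws (cl_command_queue queue, cl_kernel kernel,
                                      cl_device_id device, const size_t work_items)
    {
      size_t lws = 0;

      cl_int err = clGetKernelWorkGroupInfo (kernel, device, CL_KERNEL_WORK_GROUP_SIZE,
                                             sizeof (lws), &lws, NULL);

      if (err != CL_SUCCESS) return err;

      // pad the global size up to a multiple of the chosen local size
      const size_t gws = ((work_items + lws - 1) / lws) * lws;

      return clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &gws, &lws, 0, NULL, NULL);
    }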
jsteube
331188167c Replace the substring GPU to a more appropriate "device" or "kernel" substring depending on the context 2016-01-05 08:26:44 +01:00
jsteube
61744662c0 Fix path to includes 2016-01-03 01:56:41 +01:00
jsteube
5f7c47b461 Fix path to includes 2016-01-03 01:48:05 +01:00
jsteube
0bf4e3c34a - Dropped all vector code since new GPUs are all scalar, which makes the code much simpler
- Some performance on low-end GPUs may drop because of that, but only for a few hash-modes
- Dropped scalar code (aka warp) since we do not have any vector datatypes anymore
- Renamed C++ overloading functions memcat32_9 -> memcat_c32_w4x4_a3x4
- Still need to fix kernels to new function names, needs to be done manually
- Temperature Management needs to be partially rewritten because of conflicting datatype names
- Added code to create different codepaths for NV and AMD at runtime in host code (see data.vendor_id)
- Added code to create different codepaths for NV and AMD at runtime in kernels (see IS_NV and IS_AMD)
- First tests working for -m 0, for example
- Great performance increases in general for NV so far
- Tested amp_* and markov_* kernel
- Migrated special NV optimizations for rule processor
2015-12-15 12:04:22 +01:00
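The IS_NV / IS_AMD split mentioned above is typically realized as preprocessor switches that the host defines when building the kernels, e.g. by passing "-D IS_NV" or "-D IS_AMD" to clBuildProgram based on data.vendor_id. An illustrative sketch; the macro names come from the commit, the function body does not.

    // Vendor-specific code path selected at kernel build time.
    uint rotl32_demo (const uint a, const uint n)
    {
    #if defined IS_NV
      return rotate (a, n);              // NV path: inline PTX or NV-tuned code could go here
    #elif defined IS_AMD
      return rotate (a, n);              // AMD path: amd_bitalign()-based variants could go here
    #else
      return (a << n) | (a >> (32 - n)); // generic fallback
    #endif
    }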