Signing uses the bn_inverse function, which is prone to side-channel
attacks. We blind its argument by multiplying it with a random non-zero
number, and at the end we multiply the result by the same number again
to cancel the blinding out.
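A minimal sketch of the blinding idea, assuming bn_multiply(k, x, prime)
computes x <- x*k mod prime, bn_inverse(x, prime) computes x <- x^-1 mod
prime, and get_k_random fills in a random non-zero value below prime
(the helper name bn_inverse_blinded and the exact signatures are
assumptions, not the actual code):

    static void bn_inverse_blinded(bignum256 *x, const bignum256 *prime)
    {
        bignum256 r;
        get_k_random(&r, prime);    /* random non-zero blinding factor */
        bn_multiply(&r, x, prime);  /* x <- x*r */
        bn_inverse(x, prime);       /* x <- (x*r)^-1; input is randomized */
        bn_multiply(&r, x, prime);  /* x <- (x*r)^-1 * r = x^-1 */
    }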
Changed get_k_random to take the prime modulus as a second argument and
to return a non-zero number below it. This function was previously only
used for (non-rfc6979) signing and is now also used for side-channel
protection.
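The assumed new prototype (the exact types and parameter order may
differ):

    /* Fill k with a uniformly random value in the range [1, prime). */
    void get_k_random(bignum256 *k, const bignum256 *prime);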
This adds an is_canonic parameter to all sign functions. It is a
callback that determines whether a signature conforms to coin-specific
rules. It is used, e.g., by ethereum (where the recovery byte must be
0 or 1, and not 2 or 3) and by steem signatures (which require both r
and s to be between 2^248 and 2^255).
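For example, an ethereum-style canonicity callback could look like this
(the exact callback signature is an assumption; only the rule itself is
taken from the description above):

    /* Accept only signatures whose recovery byte is 0 or 1. */
    static int ethereum_is_canonic(uint8_t recovery_byte, uint8_t sig[64])
    {
        (void)sig;
        return recovery_byte == 0 || recovery_byte == 1;
    }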
This also separates the initialization of the random number generator
from its step function, making it easy to restart the signing process
with the next random number.
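A sketch of the intended init/step usage (the type and function names
are assumptions based on the description, not necessarily the actual
API):

    rfc6979_state rng;
    bignum256 k;
    init_rfc6979(priv_key, hash, &rng);  /* one-time initialization */
    generate_k_rfc6979(&k, &rng);        /* step: first candidate nonce */
    /* If signing with k fails (k out of range, is_canonic rejects the
       signature, ...), call generate_k_rfc6979 again to restart with
       the next nonce instead of re-deriving everything. */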
The new name of the function is `hdnode_get_ethereum_address`
and it takes an HDNode as input instead of a public key. This
also avoids first computing the compressed public key and then
uncompressing it.
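A sketch of the intended usage (the exact prototype is an assumption):

    uint8_t address[20];
    /* The address is the last 20 bytes of the Keccak-256 hash of the
       uncompressed public key derived from the node. */
    hdnode_get_ethereum_address(&node, address);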
Test cases were adapted to work with the new function. The test vectors
are the same as for bip32 and were independently checked with an ad hoc
python implementation.
Moved the code for recovering the public key when verifying a bitcoin
message from the Trezor firmware to here. Fixed signing and verification
for the unlikely case that the r value overflows, i.e. when the x
coordinate of the nonce point exceeds the group order and r wraps around.
* Split ecdsa_curve into curve_info and ecdsa_curve to support bip32 on
  curves that don't have an ecdsa_curve (see the sketch below).
* Don't fail in key derivation, but retry with a new hash.
* Adapted the test case accordingly.
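A minimal sketch of the assumed split (the field names are illustrative,
not the actual definitions); bip32 only needs the curve_info, and the
optional ecdsa_curve pointer stays NULL for curves without ECDSA
support:

    typedef struct {
        const char *bip32_name;     /* seed key used for master node derivation */
        const ecdsa_curve *params;  /* NULL for curves without an ecdsa_curve */
    } curve_info;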
Describe normalized, partly reduced and reduced numbers, and comment
which function expects which kind of input.
Remove the unused bn_bitlen.
Add a bn_add that does not reduce.
Bug fix in ecdsa_validate_pubkey: bn_mod before bn_is_equal.
Bug fix in hdnode_private_ckd: bn_mod after bn_addmod.
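The reason behind both fixes, as a sketch (assuming bn_addmod leaves its
result only partly reduced, i.e. possibly >= prime, and bn_is_equal
compares raw representations; exact signatures may differ):

    bn_addmod(&a, &b, &prime);  /* a may still be >= prime here */
    bn_mod(&a, &prime);         /* fully reduce before comparing or reusing */
    if (bn_is_equal(&a, &c)) {
        /* equal values now compare as equal */
    }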
New function `bn_mult_3_2` that multiplies by 3/2.
This function is used in point_double and point_jacobian_double.
Cleaned up point_add and point_double and added more comments.
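A short note on where the 3/2 factor comes from (this is just the
standard affine doubling slope, not necessarily the exact call sequence
used here):

    lambda = (3*x^2 + a) / (2*y) = ((3/2)*x^2 + a/2) / y

so a single multiplication by 3/2 replaces a multiplication by 3
followed by a halving; for secp256k1 (a = 0) this is simply
(3/2)*x^2 / y.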
Use a similar algorithm for `point_multiply` as for
`scalar_multiply`, but with less precomputation.
Added doubling for points in Jacobian coordinates.
Simplified `point_jacobian_add` a little.
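For reference, the textbook doubling formulas in Jacobian coordinates
for a curve with a = 0 such as secp256k1, where (X, Y, Z) represents the
affine point (X/Z^2, Y/Z^3) (the implementation may order the operations
differently):

    M  = 3*X^2
    S  = 4*X*Y^2
    X' = M^2 - 2*S
    Y' = M*(S - X') - 8*Y^4
    Z' = 2*Y*Z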
This version of scalar_multiply should be faster and much better
protected against side-channel attacks. All functions except bn_inverse
and bn_mod are constant time. bn_inverse is only used in the last step
and its input is randomized. bn_mod only takes extra time in roughly
2^32 out of 2^256 cases, so in practice this should never occur, and its
input also depends on the random value.
There is secret-dependent array access in scalar_multiply, so the cache
may still be an issue.
This should make side-channel attacks much more difficult. However:
1. The timing of bn_inverse, which is used in point_add, depends on its input.
2. The timing of reading secp256k1_cp may depend on the input due to the cache.
3. The conditions in point_add are not timing-attack safe.
That said, point_add is always a straight addition, never a doubling or
some other special case.
In the long run, I would like to use a specialized point_add in Jacobian
representation, plus a randomization when converting the first point to
Jacobian representation (see the sketch below). The Jacobian
representation would also make the procedure a bit faster.
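A sketch of that randomization step (this is the standard
projective-coordinate randomization; the planned code may differ):

    pick a random lambda != 0 (mod p)
    (x, y)  ->  (X, Y, Z) = (x*lambda^2, y*lambda^3, lambda)

Since X/Z^2 = x and Y/Z^3 = y this represents the same point, but every
intermediate value in the following Jacobian additions depends on the
random lambda.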
An invalid point may crash the implementation or, worse,
reveal information about the private key if used in an ECDH
context (e.g. cryptoMessageEn/Decrypt).
Therefore, check all user-supplied points even if
USE_PUBKEY_VALIDATE is not set.
To improve speed, we don't check whether the point lies in the
main group, since secp256k1 has cofactor 1 and therefore no
other subgroup.
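Written out, the remaining validation typically amounts to range and
curve-equation checks of the form below (p is the field prime; since the
cofactor is 1, any finite point on the curve is automatically in the
main group, so the usual n*Q = infinity check can be skipped):

    0 <= x < p,  0 <= y < p
    y^2 == x^3 + 7  (mod p)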
The standard says (step h):

    Set T to the empty sequence.
    while tlen < qlen:
        V = HMAC_K(V)
        T = T || V
    k = bits2int(T)

In this case (HMAC-SHA256, qlen = 256 bits) this simplifies to

    V = HMAC_K(V)
    T = V
    k = bits2int(T)

and T can be omitted.

The old (wrong) code did:

    T = HMAC_K(V)
    k = bits2int(T)
Note that V is only used again if the first k is out of range, so the
old code produced the correct result with a very high probability.
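For reference, a minimal sketch of the corrected step specialized to
HMAC-SHA256 and qlen = 256 (assuming a helper
hmac_sha256(key, keylen, msg, msglen, out) and bn_read_be for bits2int;
this illustrates the idea, it is not the exact code):

    uint8_t V[32], K[32], T[32];
    /* ... K and V set up as in the earlier steps of the standard ... */
    hmac_sha256(K, 32, V, 32, T);  /* V = HMAC_K(V) */
    memcpy(V, T, 32);              /* keep V updated for a possible retry */
    bn_read_be(T, &k);             /* k = bits2int(T); here T is just V */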