
# Running device tests

## 1. Running the full test suite

Note: You need Poetry, as mentioned in the core's documentation section.

In the trezor-firmware checkout, in the root of the monorepo, install the environment:

```sh
poetry install
```

And run the automated tests:

```sh
poetry run make -C core test_emu
```

## 2. Running tests manually

Install the poetry environment as outlined above. Then switch to a shell inside the environment:

```sh
poetry shell
```

If you want to test against the emulator, run it in a separate terminal:

```sh
./core/emu.py
```

Now you can run the test suite with pytest from the root directory:

```sh
pytest tests/device_tests
```

### Useful Tips

The tests are randomized using the pytest-random-order plugin. The random seed is printed in the header of the test output, in case you need to re-run the tests in the same order.
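To reproduce a particular ordering, pass the seed back via pytest-random-order's `--random-order-seed` option. A sketch (the seed value here is just an example; use the one printed in your run's header):

```sh
# Re-run the suite in the same order as a previous run
pytest --random-order-seed=123456 tests/device_tests
```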

If you only want to run a particular test, pick it with `-k <keyword>` or `-m <marker>`:

```sh
pytest -k nem               # only runs tests that have "nem" in the name
pytest -k "nem or stellar"  # only runs tests that have "nem" or "stellar" in the name
pytest -m stellar           # only runs tests marked with @pytest.mark.stellar
```

If you want to see debugging information and protocol dumps, run with `-v`.

Print statements from test files are not shown by default. To enable them, use the `-s` flag.
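Both flags combine with the other options as usual, for example:

```sh
# Verbose protocol dumps plus live print output
pytest -v -s tests/device_tests
```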

If you would like to interact with the device (i.e. press the buttons yourself), just prefix `pytest` with `INTERACT=1`:

```sh
INTERACT=1 pytest tests/device_tests
```

When testing transaction signing, there is an option to check transaction hashes on-chain using Blockbook. Enable it by setting the `CHECK_ON_CHAIN=1` environment variable before running the tests.

```sh
CHECK_ON_CHAIN=1 pytest tests/device_tests
```

To run the tests quicker, spawn the emulator with animations disabled using the `-a` flag.

```sh
./core/emu.py -a
```

To run the tests even quicker, use an emulator from a frozen build. (Note, however, that changes to Python code are then not reflected in the emulator; it must be rebuilt after each change.)

```sh
PYOPT=0 make build_unix_frozen
```

It is possible to specify the timeout for each test in seconds, using the `PYTEST_TIMEOUT` environment variable.

```sh
PYTEST_TIMEOUT=15 pytest tests/device_tests
```

When running tests via the Makefile target, you can pass options to pytest through the `TESTOPTS` environment variable, as if calling pytest directly.

TESTOPTS="-x -v -k test_msg_backup_device.py" make test_emu

When troubleshooting an unstable test that fails only occasionally, the following runs it repeatedly until it fails (so the failure stays visible on screen):

```sh
export TESTOPTS="-x -v -k test_msg_backup_device.py"
while make test_emu; do sleep 1; done
```

## 3. Using markers

When you're developing a new currency, you should mark all tests that belong to that currency. For example, if your currency is called NewCoin, your device tests should have the following marker:

```python
@pytest.mark.newcoin
```

This marker must be registered in the `REGISTERED_MARKERS` file in the `tests` folder.
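Put together, a marked device test might look like the following sketch (the test name, body, and `client` fixture usage are illustrative, not taken from the test suite):

```python
import pytest


@pytest.mark.newcoin  # all NewCoin tests carry this marker
def test_newcoin_get_address(client):
    # Hypothetical test body: talk to the device/emulator through the
    # client fixture and assert on the response.
    ...
```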

Tests can be run only for specific models by disallowing them for the other models. `@pytest.mark.skip_t1b1`, `@pytest.mark.skip_t2t1`, `@pytest.mark.skip_t2b1` and `@pytest.mark.skip_t3t1` are valid markers to skip the current test for the Model 1, Model T, Safe 3 and T3T1, respectively.
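For instance, a test that should not run on the Model 1 or Model T could be marked like this (a sketch; the test function itself is hypothetical):

```python
import pytest


@pytest.mark.skip_t1b1  # skip on Model 1
@pytest.mark.skip_t2t1  # skip on Model T
def test_new_feature(client):
    # Runs only on the models that are not skipped above.
    ...
```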

# Extended testing and debugging

## Building for debugging (Emulator only)

Build the debuggable unix binary so you can attach gdb or lldb. This removes optimizations and reduces address space randomization.

```sh
make build_unix_debug
```
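You can then start the emulator directly under a debugger. A minimal sketch, assuming the debug binary lands at `core/build/unix/trezor-emu-core` (check your build output for the exact path):

```sh
# Launch the debug emulator under gdb (binary path is an assumption;
# verify it in your build output)
gdb --args core/build/unix/trezor-emu-core
```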

The resulting executable is significantly slower due to the AddressSanitizer (ASan) integration. Use it if you want to catch memory errors.

```sh
time ASAN_OPTIONS=verbosity=1:detect_invalid_pointer_pairs=1:strict_init_order=true:strict_string_checks=true TREZOR_PROFILE="" poetry run make test_emu
```

## Coverage (Emulator only)

Get the Python code coverage report.

If you want to get HTML/console summary output, you need to install the coverage.py tool.

```sh
pip3 install coverage
```

Run the tests with coverage output.

```sh
make build_unix && make coverage
```
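Afterwards you can inspect the results with the standard coverage.py commands (a sketch; where the coverage data files end up depends on the Makefile setup):

```sh
# Print a console summary, then generate an HTML report in htmlcov/
coverage report
coverage html
```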