Running UI tests

1. Running the full test suite

Note: You need Pipenv, as mentioned in the core's documentation section.
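
If you do not have Pipenv yet, one common way to install it (assuming a working Python and pip on your machine) is:

pip install --user pipenv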

In the trezor-firmware checkout, in the root of the monorepo, install the environment:

pipenv sync

And run the tests:

pipenv run make -C core test_emu_ui

2. Running tests manually

Install the pipenv environment as outlined above. Then switch to a shell inside the environment:

pipenv shell

If you want to test against the emulator, run it in a separate terminal:

./core/emu.py
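
If the emulator has not been built yet, you will likely need to build it first; this assumes the usual build_unix target in the core Makefile:

make -C core build_unix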

Now you can run the test suite with pytest from the root directory:

pytest tests/device_tests --ui=test

If you wish to check that all test cases in fixtures.json were used, set the --ui-check-missing flag. Of course, this is meaningful only if you run the tests on the whole device_tests folder.

pytest tests/device_tests --ui=test --ui-check-missing

You can also skip tests marked as skip_ui.

pytest tests/device_tests --ui=test -m "not skip_ui"
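
Since this is plain pytest, you can also restrict the run to a subset of tests with the usual pytest selection mechanisms; the file name below is only an example:

pytest tests/device_tests/test_msg_ping.py --ui=test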

Updating Fixtures ("Recording")

Short version:

pipenv run make -C core test_emu_ui_record

Long version:

The --ui pytest argument has two options:

  • record: Create screenshots and calculate their hash for each test. The screenshots are gitignored, but the hash is committed to git (see the example entry below).
  • test: Create screenshots, calculate their hash, and compare the hash against the one stored in git.
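
For reference, fixtures.json is essentially a mapping from test identifiers to screenshot hashes. The key and value below are invented for illustration only; the real entries are generated by the test run:

{
  "TT_test_msg_ping.py::test_ping": "<hash of the recorded screenshots>"
}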

If you want to make a change in the UI, you simply run --ui=record. An easy way to proceed is to run --ui=test first, see which tests fail (see the Reports section below), decide whether those changes are the ones you expected, and only then run --ui=record and commit the new hashes.
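
Put together, a typical UI change could look like the following sketch (the fixtures.json path is an assumption about where the hashes are stored; adjust to your checkout):

# see which screens changed
pytest tests/device_tests --ui=test
# review the report, then re-record the hashes
pytest tests/device_tests --ui=record
# commit the updated hashes (the screenshots themselves stay gitignored)
git add tests/ui_tests/fixtures.json
git commit -m "Update UI fixtures"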

Here, too, you can check the fixtures.json file. Use the --ui-check-missing flag again to make sure there are no extra fixtures in the file:

pytest tests/device_tests --ui=record --ui-check-missing

Reports

Tests

Each --ui=test run creates a clear report of which tests passed and which failed. The index file is stored in tests/ui_tests/reporting/reports/test/index.html, and for ease of use a link to it is printed at the end of the pytest summary.
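
To view the report locally, simply open the index file in a browser, for example:

xdg-open tests/ui_tests/reporting/reports/test/index.html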

On CI this report is published as an artifact.

Master diff

In the ui tests folder you will also find a Python script, report_master_diff.py, which creates a report showing which tests were altered, added, or removed relative to master. This is useful for pull requests.

This report is available as an artifact on CI as well.
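
To generate this diff locally, run the script with Python from inside the pipenv environment. The exact path is an assumption based on the description above:

pipenv run python tests/ui_tests/report_master_diff.py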