# Running UI tests
## 1. Running the full test suite
_Note: You need Poetry, as mentioned in the core's [documentation](https://docs.trezor.io/trezor-firmware/core/) section._
In the `trezor-firmware` checkout, in the root of the monorepo, install the environment:
```sh
poetry install
```
And run the tests:
```sh
poetry run make -C core test_emu_ui
```
## 2. Running tests manually
Install the poetry environment as outlined above. Then switch to a shell inside the
environment:
```sh
poetry shell
```
If you want to test against the emulator, run it with animation disabled in a separate terminal:
```sh
./core/emu.py -a
```
Now you can run the test suite with `pytest` from the root directory:
```sh
pytest tests/device_tests --ui=test
```
If you wish to check that all test cases in `fixtures.json` were used, set the `--ui-check-missing` flag. Of course, this is meaningful only if you run the tests on the whole `device_tests` folder.
```sh
pytest tests/device_tests --ui=test --ui-check-missing
```
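Conceptually, the flag boils down to a set difference between the fixture keys and the tests that actually ran. A rough sketch of the idea (function names, test names, and the data layout here are illustrative, not the real test-suite implementation):

```python
def missing_fixtures(fixtures, executed_tests):
    """Fixture entries that no executed test exercised."""
    return set(fixtures) - set(executed_tests)

# Example with made-up test names and hashes:
fixtures = {"test_sign": "ab12", "test_verify": "cd34"}
leftover = missing_fixtures(fixtures, {"test_sign"})  # {"test_verify"}
```

Running only a subset of `device_tests` would flag every fixture belonging to the skipped tests, which is why the check is meaningful only over the whole folder.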
# Updating Fixtures ("Recording")
Short version:
```sh
poetry run make -C core test_emu_ui_record
```
Long version:
The `--ui` pytest argument has two options:
- **record**: Create screenshots and calculate their hash for each test.
  The screenshots are gitignored, but the hash is included in git.
- **test**: Create screenshots, calculate their hash, and test the hash against
  the one stored in git.

If you want to make a change in the UI, you simply run `--ui=record`. An easy way
to proceed is to run `--ui=test` first, see which tests fail (see the Reports section below),
decide whether those changes are the ones you expected, and then finally run `--ui=record`
and commit the new hashes.
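Conceptually, the two modes differ only in what happens to the freshly computed hash: `record` stores it, `test` compares it. A minimal sketch of the idea (the function names, the hashing scheme, and the fixtures layout are illustrative assumptions, not the actual test-suite code):

```python
import hashlib
from pathlib import Path

def screens_hash(screen_paths):
    """Combine the contents of all screenshots of one test into a single digest."""
    h = hashlib.sha256()
    for path in sorted(screen_paths):
        h.update(Path(path).read_bytes())
    return h.hexdigest()

def record(fixtures, test_name, screen_paths):
    """--ui=record: store the fresh hash for this test."""
    fixtures[test_name] = screens_hash(screen_paths)

def check(fixtures, test_name, screen_paths):
    """--ui=test: compare the fresh hash against the committed one."""
    return fixtures.get(test_name) == screens_hash(screen_paths)
```

Because only the digest is committed, any pixel change in any screen of a test flips its hash and fails the comparison, while the bulky screenshots themselves stay out of git.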
The `fixtures.json` file can be checked here as well: use the `--ui-check-missing` flag again to make sure there are no extra fixtures in the file:
```sh
pytest tests/device_tests --ui=record --ui-check-missing
```
## Reports
### Tests
Each `--ui=test` run creates a clear report of which tests passed and which failed.
The index file is stored in `tests/ui_tests/reports/test/index.html`.
The script `tests/show_results.py` starts a local HTTP server that serves this page --
this is necessary for access to browser local storage, which enables a simple reviewer
UI.
On CI this report is published as an artifact. You can see the latest `main` branch report [here](https://gitlab.com/satoshilabs/trezor/trezor-firmware/-/jobs/artifacts/main/file/test_ui_report/index.html?job=core%20device%20test). The reviewer features work directly here.
If needed, you can use `python3 -m tests.ui_tests` to regenerate the report from local
recorded screens.
### Master diff
In the UI tests folder you will also find a Python script, `report_master_diff.py`, which
creates a report showing which tests were altered, added, or removed relative to
master. This is useful for Pull Requests.
This report is available as an artifact on CI as well. You can find it by
visiting the "unix ui changes" job in your pipeline: browse the
artifacts and open `master_diff/index.html` .