From 50521bb16b5f98d2c0508a650248e868ed98754e Mon Sep 17 00:00:00 2001
From: Tomas Susanka
Date: Wed, 29 Jan 2020 12:24:26 +0000
Subject: [PATCH] tests/ui: add readme

---
 docs/tests/index.md    |  6 ++++
 docs/tests/ui-tests.md | 64 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)
 create mode 100644 docs/tests/ui-tests.md

diff --git a/docs/tests/index.md b/docs/tests/index.md
index 9bbb31c94..af59ca6a7 100644
--- a/docs/tests/index.md
+++ b/docs/tests/index.md
@@ -15,6 +15,12 @@ The original version of device tests. These tests can be run against both Model
 
 See [device-tests.md](device-tests.md) for instructions how to run it.
 
+### UI tests
+
+UI tests run the device tests, take a screenshot of every screen change, and compare the screenshots against stored fixtures. Currently they are available for Model T only.
+
+See [ui-tests.md](ui-tests.md) for more info.
+
 ### Click tests
 
 Click tests are a next-generation of the Device tests. The tests are quite similar, but they are capable of imitating user's interaction with the screen.
diff --git a/docs/tests/ui-tests.md b/docs/tests/ui-tests.md
new file mode 100644
index 000000000..ac8bca9c2
--- /dev/null
+++ b/docs/tests/ui-tests.md
@@ -0,0 +1,64 @@
+# Running UI tests
+
+## 1. Running the full test suite
+
+_Note: You need Pipenv, as mentioned in the core's [documentation](https://docs.trezor.io/trezor-firmware/core/) section._
+
+In the `trezor-firmware` checkout, in the root of the monorepo, install the
+environment:
+
+```sh
+pipenv sync
+```
+
+And run the tests:
+
+```sh
+pipenv run make -C core test_emu_ui
+```
+
+## 2. Running tests manually
+
+Install the pipenv environment as outlined above. Then switch to a shell
+inside the environment:
+
+```sh
+pipenv shell
+```
+
+If you want to test against the emulator, run it in a separate terminal:
+
+```sh
+./core/emu.py
+```
+
+Now you can run the test suite with `pytest` from the root directory:
+
+```sh
+pytest tests/device_tests --ui=test
+```
+
+You can also skip tests marked as `skip_ui`:
+
+```sh
+pytest tests/device_tests --ui=test -m "not skip_ui"
+```
+
+# Updating Fixtures ("Recording")
+
+The `--ui` pytest argument has two options:
+
+- **record**: Create screenshots and calculate their hash for each test.
+The screenshots are gitignored, but the hashes are committed to git.
+- **test**: Create screenshots, calculate their hash, and compare the hash
+against the one stored in git.
+
+If you want to make a change in the UI, simply run with `--ui=record`. An easy
+way to proceed is to run `--ui=test` first, see which tests fail (see the
+Reports section below), decide whether those changes are the ones you
+expected, and only then run `--ui=record` and commit the new hashes.
+
+## Reports
+
+Each `--ui=test` run creates a report showing which tests passed and which
+failed. The index file is stored in `tests/ui_tests/reports/index.html`; for
+ease of use, a link to it is also printed at the end of the pytest summary.
+
+On CI this report is published as an artifact.
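
Conceptually, the record/test modes described in the patch boil down to hashing each captured screenshot and comparing it against a recorded digest. A toy sketch of that idea (sha256, the helper names, and the dict-based fixture store are illustrative assumptions, not the actual test-suite code):

```python
import hashlib


def screen_hash(png_bytes: bytes) -> str:
    # Digest of one screenshot; sha256 is an illustrative choice,
    # not necessarily the scheme the real suite uses.
    return hashlib.sha256(png_bytes).hexdigest()


def changed_screens(recorded: dict, captured: dict) -> list:
    # Names of screens whose current hash differs from the recorded fixture.
    # "--ui=record" would store screen_hash() results; "--ui=test" would
    # run a comparison like this and fail on any non-empty result.
    return sorted(name for name, png in captured.items()
                  if recorded.get(name) != screen_hash(png))


recorded = {"home": screen_hash(b"home-screen-png")}
print(changed_screens(recorded, {"home": b"home-screen-png"}))  # -> []
print(changed_screens(recorded, {"home": b"home-screen-v2"}))   # -> ['home']
```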