Pytest offers various options that can increase your productivity when you test your code. Although Pytest provides sensible default settings and works out of the box, it is not a one-size-fits-all solution. As you continue writing tests, you will sooner or later start looking for ways to make your testing more efficient. This article explains some of the basic options that every proficient Python developer should know.
The Very Basics
First of all, let’s have a quick look at what Pytest output looks like when it runs without any options.
Just for explanation purposes, let’s use the following trivial tests.
test_basic.py
def test_that_always_passes():
    a = 1
    b = 1
    assert a == b


def test_that_always_fails():
    a = 1
    b = 2
    assert a == b
When you run this test using Pytest, the output will look like this:
$ pytest test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .F                                      [100%]

======================== FAILURES =========================
_________________ test_that_always_fails __________________

    def test_that_always_fails():
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2

test_basic.py:9: AssertionError
================= short test summary info =================
FAILED test_basic.py::test_that_always_fails - assert 1 ...
=============== 1 failed, 1 passed in 0.02s ===============
The first part is the header, which shows the versions of the Python interpreter and pytest, and where the root directory is.
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
Then, you see the test file name and its result. The dot means that the first function passed, and the following “F” means the second function failed.
test_basic.py .F [100%]
Then there is a summary section including the traceback, which I will explain more later in this article.
======================== FAILURES =========================
_________________ test_that_always_fails __________________

    def test_that_always_fails():
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2

test_basic.py:9: AssertionError
================= short test summary info =================
FAILED test_basic.py::test_that_always_fails - assert 1 ...
=============== 1 failed, 1 passed in 0.02s ===============
Run Tests Selectively in Pytest
Pytest offers various ways to specify which test files to run. By default, Pytest runs the tests in the Python files whose names start with test_ in the current directory and its subdirectories.
So, if you only have a test file called test_basic.py in the current directory, you can run the command pytest, which will run the tests in this file.
$ ls
test_basic.py venv
$ pytest
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                       [100%]

==================== 1 passed in 0.00s ====================
Specify Pytest Files or Directories
If you want to run tests in a specific file, you can specify the file name in the Pytest command, and Pytest will only run the specified file.
$ pytest test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                       [100%]

==================== 1 passed in 0.00s ====================
You can also specify a directory, and Pytest will run tests in the files that reside in the specified directory and its subdirectories.
$ pytest /Users/mikio/pytest2
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                       [100%]

==================== 1 passed in 0.00s ====================
Use the -k Option
You can also use the -k option to selectively run tests whose names match a given keyword, which includes part of the file names. The following example shows that I have test_basic.py and test_advanced.py in the current directory and another file test_subdir.py in the subdirectory called subdir. Pytest automatically runs all test files by default:
$ tree -I venv
.
├── subdir
│   └── test_subdir.py
├── test_advanced.py
└── test_basic.py

$ pytest
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 3 items

test_advanced.py .                                    [ 33%]
test_basic.py .                                       [ 66%]
subdir/test_subdir.py .                               [100%]

==================== 3 passed in 0.01s ====================
If you specify the option -k basic, Pytest will run only test_basic.py.
$ pytest -k basic
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 3 items / 2 deselected / 1 selected

test_basic.py .                                       [100%]

============= 1 passed, 2 deselected in 0.00s =============
If you specify the option -k subdir, Pytest will run only subdir/test_subdir.py.
$ pytest -k subdir
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 3 items / 2 deselected / 1 selected

subdir/test_subdir.py .                               [100%]

============= 1 passed, 2 deselected in 0.00s =============
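The value you pass to -k is actually a keyword expression, so you can also match function names and combine keywords with and, or, and not. Here is a rough sketch against the same three files (the selection counts you see will depend on your own tests):

$ pytest -k "basic or subdir"   # runs test_basic.py and subdir/test_subdir.py
$ pytest -k "not advanced"      # runs everything except test_advanced.py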
Use Command Options in Pytest
Pytest has various command-line options, which mainly control how Pytest runs and what information you see in the output. I will explain some of the most commonly used options in this section.
Change the Verbosity of the Pytest Output
You can make the Pytest output more verbose or less verbose, depending on your needs.
Adding the -v option to the Pytest command allows you to see more information in the output.
$ pytest -v test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py::test_that_always_passes PASSED         [ 50%]
test_basic.py::test_that_always_fails FAILED          [100%]

======================== FAILURES =========================
_________________ test_that_always_fails __________________

    def test_that_always_fails():
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2
E         +1
E         -2

test_basic.py:9: AssertionError
================= short test summary info =================
FAILED test_basic.py::test_that_always_fails - assert 1 ...
=============== 1 failed, 1 passed in 0.01s ===============
Now you can see the file name (test_basic.py), the function names (test_that_always_passes and test_that_always_fails), and the results (PASSED and FAILED).
On a side note, you might be used to command-line programs that show their version with the -v option. In Pytest, --version and -V are the options to display the version number, so be careful not to get confused.
$ pytest --version
pytest 6.2.5
$ pytest -V
pytest 6.2.5
If you want to see less information in the output, you can use the -q option, which only shows the test results and the traceback.
$ pytest -q test_basic.py
.F                                                    [100%]
======================== FAILURES =========================
_________________ test_that_always_fails __________________

    def test_that_always_fails():
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2

test_basic.py:9: AssertionError
================= short test summary info =================
FAILED test_basic.py::test_that_always_fails - assert 1 ...
1 failed, 1 passed in 0.01s
This option might be helpful if you run hundreds of tests regularly and want to see the summary of the test results. As shown above, if there were errors, you can still get information to find out what went wrong.
If you don’t even want to show the traceback, you can use the --no-summary option to suppress it.
$ pytest --no-summary -q test_basic.py
.F                                                    [100%]
1 failed, 1 passed in 0.02s
Show the Values in the Local Variables
The -l option shows the actual values of the local variables in the tracebacks, so you can see what values were used when tests failed.
$ pytest -l test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .F                                      [100%]

======================== FAILURES =========================
_________________ test_that_always_fails __________________

    def test_that_always_fails():
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2
        a          = 1
        b          = 2

test_basic.py:9: AssertionError
================= short test summary info =================
FAILED test_basic.py::test_that_always_fails - assert 1 ...
=============== 1 failed, 1 passed in 0.02s ===============
This example is too simple to show the benefits, but when you have more complex functions in your tests, this option can save a lot of time when analyzing the cause of test failures.
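For example, here is a small sketch of a test with a few intermediate variables (the helper, names, and values are made up purely for illustration). If the assertion fails, running Pytest with -l prints the values of unit_price, quantity, subtotal, and total directly in the traceback:

# tests/test_discount.py (hypothetical example)

def apply_discount(price, rate):
    # Simple helper used only for this illustration
    return round(price * (1 - rate), 2)


def test_discounted_total():
    unit_price = 19.99
    quantity = 3
    subtotal = unit_price * quantity          # 59.97
    total = apply_discount(subtotal, 0.15)    # 50.97
    # The expected value below is intentionally wrong so the test fails
    # and the local variables show up in the traceback with -l.
    assert total == 51.00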
Capture the Standard Output
You can add a print statement in the test function. Pytest captures the output and shows it in the summary info section, but it might not be evident at first glance.
Let’s add a print statement to the test_basic.py file like this:
test_basic.py
def test_that_always_passes():
    print("This test always passes.")
    a = 1
    b = 1
    assert a == b


def test_that_always_fails():
    print("This test always fails.")
    a = 1
    b = 2
    assert a == b
Run Pytest.
$ pytest test_basic.py
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .F                                        [100%]

========================== FAILURES ==========================
___________________ test_that_always_fails ___________________

    def test_that_always_fails():
        print("This test always fails.")
        a = 1
        b = 2
>       assert a == b
E       assert 1 == 2

test_basic.py:11: AssertionError
-------------------- Captured stdout call --------------------
This test always fails.
================== short test summary info ===================
FAILED test_basic.py::test_that_always_fails - assert 1 == 2
================ 1 failed, 1 passed in 0.02s =================
You can see that the section “Captured stdout call” has been added after the traceback in the “FAILURES” section, which includes the text in the print statement in the failed test.
But you don’t see the print output from the passed test. To make the output easier to read, let’s remove the failing test from test_basic.py.
test_basic.py
def test_that_always_passes():
    print("This test always passes.")
    print("This is another line.")
    a = 1
    b = 1
    assert a == b
When you run Pytest, you don’t see the print output by default.
$ pytest test_basic.py
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                         [100%]

===================== 1 passed in 0.00s ======================
To show the print output for the passed tests, you can use the -rP option (summary info for passed tests with output).
$ pytest -rP test_basic.py
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                         [100%]

=========================== PASSES ===========================
__________________ test_that_always_passes ___________________
-------------------- Captured stdout call --------------------
This test always passes.
This is another line.
===================== 1 passed in 0.00s ======================
Alternatively, you can use the -rA option (summary info for all tests).
$ pytest -rA test_basic.py
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py .                                         [100%]

=========================== PASSES ===========================
__________________ test_that_always_passes ___________________
-------------------- Captured stdout call --------------------
This test always passes.
This is another line.
================== short test summary info ===================
PASSED test_basic.py::test_that_always_passes
===================== 1 passed in 0.00s ======================
You can now see the print output in the “Captured stdout call” section in both cases.
You can also use the -s option, which tells Pytest not to capture the standard output.
$ pytest -rA -s test_basic.py
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 1 item

test_basic.py This test always passes.
This is another line.
.

=========================== PASSES ===========================
================== short test summary info ===================
PASSED test_basic.py::test_that_always_passes
===================== 1 passed in 0.00s ======================
You can see that the “Captured stdout call” section does not appear in the summary info. Instead, you can see the print output right after the test file name. Although the output in the summary info, as shown previously, looks nicer, this option might serve the purpose in some cases.
Use Markers to Select Test Functions
Pytest has built-in functionality to mark test functions. Markers are like tags, which you can use to categorize your test functions into different groups.
Pytest provides various built-in markers, but two of the most commonly used are skip and xfail, so I will explain them first. Then I will explain custom markers.
Skip Tests with @pytest.mark.skip
You can use the skip marker when you want to skip specific tests (as the name suggests). Pytest will exclude the marked functions but will show that they have been skipped just as a reminder. You might want to use this marker, for example, when some external dependencies are temporarily unavailable, and the tests cannot pass until they become available again.
To mark a function as skipped, you can use the @pytest.mark.skip decorator as shown below. Be sure to import pytest in the test file.
import pytest


@pytest.mark.skip
def test_that_should_be_skipped():
    a = 2
    b = 3
    assert a == b
When you run Pytest with the -v option, you can see that the function test_that_should_be_skipped is shown as “SKIPPED”.
$ pytest -v test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py::test_that_always_passes PASSED         [ 50%]
test_basic.py::test_that_should_be_skipped SKIPPED    [100%]

============== 1 passed, 1 skipped in 0.00s ===============
If you run the pytest command without the -v option, you see “s” instead.
$ pytest test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .s                                      [100%]

============== 1 passed, 1 skipped in 0.00s ===============
If you add the -rs option (extra summary info for skipped functions), you can see an additional section about the skipped function.
$ pytest -rs test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .s                                      [100%]

================= short test summary info =================
SKIPPED [1] test_basic.py:8: unconditional skip
============== 1 passed, 1 skipped in 0.00s ===============
You can optionally specify the reason for skipping the function. If you set it, you can see it in the summary info. The following is an example of adding a reason:
import pytest


@pytest.mark.skip(reason="Just skipping...")
def test_that_should_be_skipped():
    a = 2
    b = 3
    assert a == b
Then it is shown in the summary info:
$ pytest -rs test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .s                                      [100%]

================= short test summary info =================
SKIPPED [1] test_basic.py:8: Just skipping...
============== 1 passed, 1 skipped in 0.00s ===============
Skip Tests Conditionally with @pytest.mark.skipif
If you want to skip specific tests based on conditions that you can determine programmatically, you can use @pytest.mark.skipif. For example, there might be a situation where you don’t want to run some tests on a particular operating system. You can get the platform name from sys.platform in Python, so you can use it with @pytest.mark.skipif to automatically skip platform-specific tests.
Just for demonstration purposes, let’s say you don’t want to run the test on macOS, where sys.platform returns darwin.
>>> import sys
>>> sys.platform
'darwin'
So, you can specify skipif as shown below to exclude the test function test_that_should_be_skipped.
import pytest
import sys


@pytest.mark.skipif(sys.platform == 'darwin', reason=f'Skip on {sys.platform}')
def test_that_should_be_skipped():
    a = 2
    b = 3
    assert a == b
Then, Pytest will skip it if you run it on macOS:
$ pytest -rs test_basic.py
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .s                                      [100%]

================= short test summary info =================
SKIPPED [1] test_basic.py:9: Skip on darwin
============== 1 passed, 1 skipped in 0.00s ===============
You can find the other possible values of sys.platform in the Python documentation.
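For example, a minimal sketch that instead skips a test on Windows (where sys.platform returns 'win32') could look like this; the test body is only a placeholder:

import sys

import pytest


@pytest.mark.skipif(sys.platform == 'win32', reason='Does not run on Windows')
def test_unix_only_feature():
    # Placeholder for a test that relies on Unix-specific behavior
    assert True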
Skip Failing Tests with @pytest.mark.xfail
You can mark tests that you already know will fail but do not want to remove from the test suite. For example, you might have a test that fails because of a bug in the code. You can mark it as xfail to acknowledge that it will fail until the fix is implemented, while keeping it in the suite.
Let’s add the xfail marker to the test function test_that_fails as shown below.
import pytest


@pytest.mark.xfail
def test_that_fails():
    a = 2
    b = 3
    assert a == b
When you run Pytest, the result is shown as “XFAIL”.
$ pytest -v test_basic.py
=================== test session starts ====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py::test_that_always_passes PASSED         [ 50%]
test_basic.py::test_that_fails XFAIL                  [100%]

=============== 1 passed, 1 xfailed in 0.02s ===============
You can use the -rx option (extra summary info for expected failures) to see additional information.
$ pytest -rx test_basic.py
=================== test session starts ====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py .x                                      [100%]

================= short test summary info ==================
XFAIL test_basic.py::test_that_fails
=============== 1 passed, 1 xfailed in 0.01s ===============
To simulate the situation where the bug has been fixed, let’s update the test function so that the assertion passes.
import pytest


@pytest.mark.xfail
def test_that_fails():
    a = 2
    b = 2
    assert a == b
When you run Pytest, the result is now shown as “XPASS” (unexpectedly passed), because the test now passes but is still marked as xfail.
$ pytest -v test_basic.py
=================== test session starts ====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2
collected 2 items

test_basic.py::test_that_always_passes PASSED         [ 50%]
test_basic.py::test_that_fails XPASS                  [100%]

=============== 1 passed, 1 xpassed in 0.00s ===============
This way, you can find out that the fix has been successfully implemented, and you can now remove the xfail marker from the test function.
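As a side note, if you would rather have an unexpected pass reported as a failure instead of XPASS, the xfail marker accepts a strict argument. A minimal sketch:

import pytest


@pytest.mark.xfail(strict=True)
def test_that_fails():
    # With strict=True, an unexpected pass (XPASS) is reported as a failure,
    # which reminds you to remove the marker once the fix has landed.
    a = 2
    b = 2
    assert a == b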
Using Custom Markers
You can define your own custom markers and use them to categorize your test functions into different groups.
The following example specifies a custom marker called “basic”.
test_basic.py
import pytest


@pytest.mark.basic
def test_that_always_passes():
    a = 1
    b = 1
    assert a == b
You can run Pytest with the option -m basic to select the functions that have the marker “basic”. In this example, there are three test files, as shown below.
$ tree -I venv
.
├── subdir
│   └── test_subdir.py
├── test_advanced.py
└── test_basic.py
When you run the pytest command with the option -m basic, only test_basic.py is run.
$ pytest -m basic
=================== test session starts ===================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2
collected 3 items / 2 deselected / 1 selected

test_basic.py .                                       [100%]
...
======= 1 passed, 2 deselected, 1 warning in 0.01s ========
In the same output, you will also see the warning below. I will explain how to fix it later in this article.
==================== warnings summary =====================
test_basic.py:3
  /Users/mikio/pytest2/test_basic.py:3: PytestUnknownMarkWarning: Unknown pytest.mark.basic - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.basic

-- Docs: https://docs.pytest.org/en/stable/warnings.html
You can also specify -m "not basic" to exclude this function. In this example, test_basic.py is excluded, and the other two test files are run.
$ pytest -m "not basic" =================== test session starts =================== platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /Users/mikio/pytest2 collected 3 items / 1 deselected / 2 selected test_advanced.py . [ 50%] subdir/test_subdir.py . [100%] ... ======= 2 passed, 1 deselected, 1 warning in 0.01s ========
Use the Configuration File pytest.ini
As you saw in the previous sections, you can specify various options when running the Pytest command. But if you run the same command regularly, it is not very convenient to type all the options manually each time. Pytest supports a configuration file to keep all your settings, so let’s use it to save some typing.
Pytest supports several configuration file formats, but we will use pytest.ini in this article.
Identify Which Configuration File is Being Used
First of all, it’s helpful to understand how Pytest determines which configuration file to use.
Let’s create an empty file called pytest.ini in the current directory, as shown below.
$ touch pytest.ini
I am using /Users/mikio/pytest2 as the current directory, as shown below.
$ pwd
/Users/mikio/pytest2
Now run the Pytest command without any arguments.
$ pytest
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/mikio/pytest2, configfile: pytest.ini
collected 3 items

test_advanced.py .                                      [ 33%]
test_basic.py .                                         [ 66%]
subdir/test_subdir.py .                                 [100%]

===================== 3 passed in 0.01s ======================
In the header of the output, you now see the line rootdir: /Users/mikio/pytest2, configfile: pytest.ini. It confirms that Pytest is using the configuration file pytest.ini in the current directory (/Users/mikio/pytest2).
You can place pytest.ini in the current directory or any one of its parent directories. Pytest will find it and set the rootdir to the directory where pytest.ini exists. pytest.ini is often placed in the project root directory (or the repository’s root directory), but for demonstration purposes, let’s keep it in the current directory in this article.
Save Command-Line Options in addopts
Let’s add the addopts option to pytest.ini as shown below.
pytest.ini
[pytest]
addopts = -v
Then run the Pytest command without any arguments.
$ pytest
=================== test session starts ====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini
collected 3 items

test_advanced.py::test_very_advanced_feature PASSED   [ 33%]
test_basic.py::test_that_always_passes PASSED         [ 66%]
subdir/test_subdir.py::test_in_a_sub_directory PASSED [100%]

==================== 3 passed in 0.01s =====================
As you can see, the output is the same as when you specify the -v option. You can specify any other command-line options in addopts.
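For example, a small sketch of a pytest.ini that combines a few of the options covered earlier (the particular combination is just an illustration, not a recommendation):

[pytest]
# Verbose output, summary info for all tests, and local variables in tracebacks
addopts = -v -rA -l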
Specify Test File Search Paths in testpaths
So far, we have been using test files in the current directory and its subdirectories. But as your Python application grows, the project directory structure becomes more complex, and you will probably keep all your tests in a specific directory, such as tests. You can specify that directory in pytest.ini so Pytest doesn’t waste time searching for test files in other directories.
Let’s create a subdirectory called tests in the current directory and move the test files from the current directory into tests, as shown below.
$ mkdir tests
$ mv test*.py tests/
So, the current directory structure looks like this:
$ tree -I venv
.
├── pytest.ini
├── subdir
│   └── test_subdir.py
└── tests
    ├── test_advanced.py
    └── test_basic.py
Then, add the testpaths option to pytest.ini like this:
[pytest]
addopts = -v
testpaths = tests
Now, run the Pytest command without any arguments from the current directory.
$ pytest
=================== test session starts ====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 2 items

tests/test_advanced.py::test_very_advanced_feature PASSED [ 50%]
tests/test_basic.py::test_that_always_passes PASSED       [100%]

==================== 2 passed in 0.01s =====================
You can see the following:
- testpaths is set to tests in the header.
- “collected 2 items” in the header confirms the correct number of test files.
- The results are shown only for the test files in tests, ignoring subdir.
Specify Test File and Function Name Prefix in python_files and python_functions
So far, we have been using test files with the prefix test_, such as test_basic.py, but this prefix is configurable.
Let’s add the python_files option to pytest.ini as shown below:
pytest.ini
[pytest]
addopts = -v
testpaths = tests
python_files = a_*.py
Pytest will now search for files with the prefix a_ and treat them as test files.
Let’s rename one of the test files in the subdirectory tests.
$ mv tests/test_basic.py tests/a_basic.py
$ tree -I venv
.
├── pytest.ini
├── subdir
│   └── test_subdir.py
└── tests
    ├── a_basic.py
    └── test_advanced.py
Now run Pytest without any arguments.
$ pytest
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 1 item

tests/a_basic.py::test_that_always_passes PASSED        [100%]

===================== 1 passed in 0.00s ======================
You can see that Pytest only found and ran one test file, tests/a_basic.py.
Likewise, the function name prefix is configurable. Let’s add another test function called my_test_that_always_passes to the file tests/a_basic.py as shown below:
tests/a_basic.py
def test_that_always_passes():
    a = 1
    b = 1
    assert a == b


def my_test_that_always_passes():
    a = 2
    b = 2
    assert a == b
When you run Pytest, you can see that it does not pick up the newly added function.
$ pytest
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 1 item

tests/a_basic.py::test_that_always_passes PASSED        [100%]

===================== 1 passed in 0.00s ======================
Now, let’s add the python_functions option to pytest.ini.
pytest.ini
[pytest]
addopts = -v
testpaths = tests
python_files = a_*.py
python_functions = my_*
When you rerun Pytest, you can see that it now runs only the newly added function.
$ pytest
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 1 item

tests/a_basic.py::my_test_that_always_passes PASSED     [100%]

===================== 1 passed in 0.00s ======================
Register Custom Markers
When you added custom markers in the previous section, you saw “PytestUnknownMarkWarning” in the output. You can eliminate this warning by registering your custom markers in pytest.ini.
For demonstration purposes, remove the python_files and python_functions options from pytest.ini.
pytest.ini
[pytest]
addopts = -v
testpaths = tests
Then rename the test file prefix back to test_ so that the test file names look like this:
$ tree -I venv
.
├── pytest.ini
├── subdir
│   └── test_subdir.py
└── tests
    ├── test_advanced.py
    └── test_basic.py
Let’s add a custom marker called basic to the test file tests/test_basic.py like this:
tests/test_basic.py
import pytest


@pytest.mark.basic
def test_that_always_passes():
    a = 1
    b = 1
    assert a == b
As we saw in the previous section, running Pytest with -m basic will pick up only the functions marked as basic.
$ pytest -m basic
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 2 items / 1 deselected / 1 selected

tests/test_basic.py::test_that_always_passes PASSED     [100%]

====================== warnings summary ======================
tests/test_basic.py:3
  /Users/mikio/pytest2/tests/test_basic.py:3: PytestUnknownMarkWarning: Unknown pytest.mark.basic - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.basic

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========= 1 passed, 1 deselected, 1 warning in 0.00s =========
You can also see the warning “PytestUnknownMarkWarning”. It means that the marker “basic” is not registered, so Pytest is asking whether this is intentional or a typo. In this case, we are sure it is not a typo, so let’s register this marker in pytest.ini to remove the warning.
Add the markers option to pytest.ini:
pytest.ini
[pytest]
addopts = -v
testpaths = tests
markers =
    basic: marks basic tests
The text after the colon (:) is an optional description of the marker. It is generally good practice to add explanations so that other people (and often your future self) can understand what the marker is and why it is necessary.
Then, rerun Pytest.
$ pytest -m basic
==================== test session starts =====================
platform darwin -- Python 3.9.1, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 -- /Users/mikio/pytest2/venv/bin/python3
cachedir: .pytest_cache
rootdir: /Users/mikio/pytest2, configfile: pytest.ini, testpaths: tests
collected 2 items / 1 deselected / 1 selected

tests/test_basic.py::test_that_always_passes PASSED     [100%]

============== 1 passed, 1 deselected in 0.00s ===============
You will no longer see the warning because the marker is already registered.
When you run the pytest command with --markers, you can see all the markers, including the custom marker you just registered.
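The abbreviated output will look something like the following; the exact list and wording of the built-in markers depends on your pytest version:

$ pytest --markers
@pytest.mark.basic: marks basic tests

(... built-in markers such as skip, skipif, and xfail are listed after your custom ones ...)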
Summary
In this article, I explained some of the options that Pytest offers to make your test activities more efficient.
First, we looked at how you can selectively run tests. You can specify a file name or directory name as a command-line argument. You can also use the -k option to match part of the test file names.
Then, we looked at the pytest command options that change the output. You can use the -v option to make it more verbose and the -q option to make it less verbose. You can also use the -l option to see the values of the local variables in the traceback, and you can capture the standard output and display it in the results.
One of the powerful features we looked at next is markers. You can skip specific tests by adding @pytest.mark.skip or @pytest.mark.skipif. You can mark failing tests with @pytest.mark.xfail. You can also use custom markers to group tests and run them selectively.
Finally, we looked at the configuration file pytest.ini, where you can save various settings.
You can find more options in the Pytest documentation. I hope you find some of the options in this article helpful and continue exploring other options that suit your needs.