At BriteCore, we consider test code to be as important as the main application code. Tests need to be well written, well thought out, and easy to extend. Writing tests should not be a pain for developers.
There are two popular testing tools for Python projects:
- Unittest comes with the Python standard library. It might be a good option if you are used to xUnit-style frameworks, such as Java’s JUnit.
- Pytest is a third-party library available on PyPI. Pytest does everything unittest can and comes with a handful of extra features that will help you write better test code.
Let’s examine the advantages of pytest over unittest, and how you can migrate to it.
But First, Example Code
In our system, we often need to translate boolean, JSON, or date values represented as strings (e.g., “2012-01-02”, “true”) into Python objects. We call the class that translates these values a Translator, and it is defined as follows:
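A minimal sketch of such a Translator (only the class name and its role come from the description above; the exact parsing rules here are illustrative assumptions):

```python
import json
from datetime import datetime


class Translator:
    """Translates a string value into a Python object for a given value type.

    The parsing rules below are illustrative assumptions, not the actual
    production implementation.
    """

    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        if self.value_type == "boolean":
            return value.lower() == "true"
        if self.value_type == "json":
            return json.loads(value)
        if self.value_type == "date":
            return datetime.strptime(value, "%Y-%m-%d").date()
        raise ValueError("Unknown value type: %s" % self.value_type)
```

For example, `Translator("date").translate("2012-01-02")` returns `datetime.date(2012, 1, 2)`.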
We will use this example code to demonstrate pytest testing capabilities.
So, Why Pytest?
Tests need to be easy to extend and well written. Pytest allows us to write better test code because:
- It is more pythonic and requires less boilerplate.
- It produces more readable output.
- It allows us to represent the resources we need in a granular way and reuse them across test cases (fixtures).
- It allows us to run the same test code across different parameters.
- We can write our own extensions. (plugins!)
1. More Pythonic, Less Boilerplate
Unittest uses custom assert methods that are defined under unittest.TestCase. Since it is based on JUnit, all the assert methods are defined using camelCase notation: assertEqual, assertTrue, assertRaises, etc.
Pytest, on the other hand, takes advantage of the standard Python assert statement. You can assert any type of expression with a simple assert value == expected_value. That is much more familiar to the eyes of a Python developer: no camelCase formatting, no need to remember custom method names or check the documentation.
Check out this unit test, originally written using the unittest framework:
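A sketch of such a test, with a simplified Translator inlined so the snippet is self-contained:

```python
import unittest


class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        return value.lower() == "true"


class TranslatorTestCase(unittest.TestCase):
    def setUp(self):
        self.translator = Translator("boolean")

    def test_true_value(self):
        # camelCase assert methods inherited from unittest.TestCase
        self.assertTrue(self.translator.translate("true"))

    def test_false_value(self):
        self.assertFalse(self.translator.translate("false"))
```

You would run this with `python -m unittest`.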
Now look how this very same test could be written using pytest:
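A sketch of the pytest version of the same test (again with a simplified Translator inlined):

```python
class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        return value.lower() == "true"


# No test class, no special imports: plain functions with plain asserts.
def test_true_value():
    assert Translator("boolean").translate("true") is True


def test_false_value():
    assert Translator("boolean").translate("false") is False
```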
Notice how much boilerplate code we saved when using pytest. You don’t need to create a class to use pytest, and you also don’t need to import the pytest module.
2. More readable output
When a test fails, pytest makes it much easier for you to identify the error:
- It prints the actual code of the test function that failed on the console.
- It uses a customized output (depending on the value type) to show you why the expression you're asserting is failing. It shows exactly what differs between the objects being compared. Learn more about how this is done on the Pytest Documentation.
- It uses colors to highlight important parts.
This is an example output from a failed message when using the unittest framework:
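A failing assertTrue is reported along these lines (test names, file name, and line number are illustrative):

```
.F
======================================================================
FAIL: test_true_value (test_translator.TranslatorTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_translator.py", line 14, in test_true_value
    self.assertTrue(self.translator.translate("true"))
AssertionError: False is not true
----------------------------------------------------------------------
Ran 2 tests in 0.001s

FAILED (failures=1)
```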
Now look how pytest makes the output much clearer and easier to understand:
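The same failure under pytest is reported roughly like this (file name and line number are illustrative); note how the failing source line and the evaluated values are shown inline:

```
=================================== FAILURES ===================================
________________________________ test_true_value _______________________________

    def test_true_value():
>       assert Translator("boolean").translate("true") is True
E       assert False is True

test_translator.py:11: AssertionError
========================= 1 failed, 1 passed in 0.02s ==========================
```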
3. Fixtures
In the xUnit world, you usually have a setUp method that takes care of setting up all the resources you need for the test cases within that test class. It’s common to see multiple unrelated things (fakes, mocks, database records) being set up on this method, even though (most of the time) not all test cases need all of them.
Fixtures are a replacement for the usual setUp/tearDown feature of xUnit. A fixture is simply a function that can be reused across test cases. You can define one using the pytest.fixture decorator. To use a fixture, you just add it to the parameter list of your test case.
By default, fixtures will be torn down once the test case finishes running. If you need to preserve them, you can choose a different scope (class, module, or session), in which case they will only be torn down once all tests for that scope have finished running.
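For instance, here is a minimal sketch of a module-scoped fixture using an in-memory SQLite database (the database and the test are illustrative):

```python
import sqlite3

import pytest


@pytest.fixture(scope="module")
def db_connection():
    # Set up: runs once for the whole module because of scope="module".
    conn = sqlite3.connect(":memory:")
    yield conn
    # Teardown: everything after the yield runs once the scope ends.
    conn.close()


def test_can_query(db_connection):
    assert db_connection.execute("SELECT 1").fetchone() == (1,)
```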
Since fixtures are just functions, and we can use as many as we want on a test case, they have multiple advantages over the setup/teardown pattern:
- We can be very granular, having a fixture method for each of the different things (fakes, mocks, database records) that we need to set up. This makes our code easier to understand and extend.
- Since you choose where you want to use them, you won’t be setting up unnecessary resources.
- We can define fixtures at the project level in a conftest.py file, which makes them available to all test cases. This avoids lots of code repetition.
Check out this test written with unittest. Notice how everything in a test class needs to be set up in a single method (setUp). Even though we don’t need the boolean_translator within the date translator tests, it’s still accessible through self:
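A sketch of what that looks like (the inlined Translator is a simplified stand-in for the application code):

```python
import unittest
from datetime import date, datetime


class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        if self.value_type == "boolean":
            return value.lower() == "true"
        return datetime.strptime(value, "%Y-%m-%d").date()


class TranslatorTestCase(unittest.TestCase):
    def setUp(self):
        # Both translators are created for every test, even though each
        # test only needs one of them.
        self.boolean_translator = Translator("boolean")
        self.date_translator = Translator("date")

    def test_boolean(self):
        self.assertIs(self.boolean_translator.translate("true"), True)

    def test_date(self):
        self.assertEqual(
            self.date_translator.translate("2012-01-02"), date(2012, 1, 2)
        )
```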
Now notice what the same test looks like when written using pytest and fixtures:
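A sketch of the fixture-based version, where each test requests only the translator it needs:

```python
from datetime import date, datetime

import pytest


class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        if self.value_type == "boolean":
            return value.lower() == "true"
        return datetime.strptime(value, "%Y-%m-%d").date()


@pytest.fixture
def boolean_translator():
    return Translator("boolean")


@pytest.fixture
def date_translator():
    return Translator("date")


def test_boolean(boolean_translator):
    assert boolean_translator.translate("true") is True


def test_date(date_translator):
    assert date_translator.translate("2012-01-02") == date(2012, 1, 2)
```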
In the snippet above, we extracted the initialization of a Translator into the boolean_translator and date_translator fixtures. To use them, we added them to the parameter list of the test cases that needed them. Notice that the parameter name has to exactly match the fixture name.
With this implementation, we avoided having to initialize the Translator twice, and we have a single source of truth on how to initialize a Translator instance for a boolean type and a date type.
4. Parametrized Tests
Pytest allows you to parametrize test cases, which makes it really easy to cover multiple test scenarios using the same test case. To do this, decorate the test function with the @pytest.mark.parametrize decorator and then indicate the parameters the function is going to receive, followed by the different values for those parameters.
A great advantage of the parametrize feature is that pytest outputs each of the test scenarios on your parameters list as a different test case on the console—so you get to see exactly for which parameter a certain test case is failing.
Check out this test written with unittest. Notice how we had to use a for loop inside each of the test cases to be able to support multiple test scenarios:
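A sketch of that pattern (simplified Translator inlined):

```python
import unittest


class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        return value.lower() == "true"


class BooleanTranslatorTestCase(unittest.TestCase):
    def setUp(self):
        self.translator = Translator("boolean")

    def test_true_values(self):
        # A loop is needed to cover the different spellings; if one value
        # fails, unittest still reports it as a single failing test.
        for value in ("true", "True", "TRUE"):
            self.assertIs(self.translator.translate(value), True)

    def test_false_values(self):
        for value in ("false", "False", "FALSE"):
            self.assertIs(self.translator.translate(value), False)
```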
This is what the same test case would look like using pytest:
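A sketch of the parametrized version, covering all six scenarios with one test function:

```python
import pytest


class Translator:
    # Simplified stand-in for the application class under test.
    def __init__(self, value_type):
        self.value_type = value_type

    def translate(self, value):
        return value.lower() == "true"


@pytest.mark.parametrize("value, expected", [
    ("true", True),
    ("True", True),
    ("TRUE", True),
    ("false", False),
    ("False", False),
    ("FALSE", False),
])
def test_boolean_translator(value, expected):
    assert Translator("boolean").translate(value) is expected
```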
- We easily cover all test scenarios without repeating a single line of code.
- We do not need any logic on the test.
- Each of these test scenarios will be considered a different test case when logged on the console.
This is what the console output would look like for the test above:
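Run with `pytest -v`, each parameter combination appears as a separate test, along these lines (file name illustrative):

```
test_translator.py::test_boolean_translator[true-True] PASSED
test_translator.py::test_boolean_translator[True-True] PASSED
test_translator.py::test_boolean_translator[TRUE-True] PASSED
test_translator.py::test_boolean_translator[false-False] PASSED
test_translator.py::test_boolean_translator[False-False] PASSED
test_translator.py::test_boolean_translator[FALSE-False] PASSED
```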
5. Plugins
Pytest can be easily extended with plugins. There are lots of useful plugins already developed, and we can also write our own!
Here are some of my favorites:
- pytest-xdist allows you to:
- run tests in multiple CPUs,
- run tests for multiple Python interpreters/platforms in parallel, and
- run tests until one fails, wait for a file change, and then run them again (useful for refactoring).
- pytest-django is a set of useful tools for testing Django applications. (We will talk more about this in the following section.)
- pytest-sugar shows test results in a nicer way by adding a progress bar and colors.
- pytest-mock provides the mocker fixture, which is a wrapper around the mock package. This fixture automatically tears down all your mocks after a test has been executed.
How Do I Migrate?
There are two options when it comes to migrating from unittest to pytest:
- Migrate as you go, test by test. Since the pytest runner also runs unittest tests, they can coexist without a problem.
- Use unittest2pytest to convert all tests automatically. When migrating using an automated tool, remember:
- The tool won’t know how to take advantage of pytest’s great features, such as fixtures or parametrization, so you will have to keep improving your tests after the migration.
- pytest.raises and assertRaises context managers return different values, so you will have to manually fix the places where you use them.
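A sketch of that difference: assertRaises yields an object whose .exception attribute holds the raised error, while pytest.raises yields an ExceptionInfo whose .value attribute does:

```python
import unittest

import pytest


class UnittestStyle(unittest.TestCase):
    def test_raises(self):
        with self.assertRaises(ValueError) as cm:
            int("not a number")
        # The raised exception lives on cm.exception.
        self.assertIn("invalid literal", str(cm.exception))


def test_pytest_style():
    with pytest.raises(ValueError) as excinfo:
        int("not a number")
    # With pytest, the raised exception lives on excinfo.value.
    assert "invalid literal" in str(excinfo.value)
```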
What if I Use Django?
Django has its own TestCase class that builds on top of unittest.TestCase. Migrating to plain pytest would be troublesome in this case, since plain pytest won’t handle creating a test database for you.
To solve this, you can use the pytest-django plugin. This plugin handles database creation for you. All you need to do is add the @pytest.mark.django_db marker to the test cases that use the database. It is also possible to mark a full module or class with the django_db marker.
Besides handling database creation/deletion, the plugin also comes with some useful fixtures:
- settings allows you to modify Django settings in your test case; the changes are automatically reverted after the test case finishes.
- django_assert_max_num_queries is a context manager that captures the SQL queries made by Django ORM calls and fails if there are more queries than the specified amount.
- client is an instance of django.test.Client for making HTTP calls.
You can find plenty more in the pytest-django documentation.