How To Use unittest to Write a Test Case for a Function in Python


The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

Introduction

The Python standard library includes the unittest module to help you write and run tests for your Python code.

Tests written using the unittest module can help you find bugs in your programs, and prevent regressions from occurring as you change your code over time. Teams adhering to test-driven development may find unittest useful to ensure all authored code has a corresponding set of tests.

In this tutorial, you will use Python’s unittest module to write a test for a function.

Prerequisites

To get the most out of this tutorial, you’ll need:

  • An understanding of functions in Python. You can review the How To Define Functions in Python 3 tutorial, which is part of the How To Code in Python 3 series.

Defining a TestCase Subclass

One of the most important classes provided by the unittest module is named TestCase. TestCase provides the general scaffolding for testing our functions. Let’s consider an example:
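Create a file named test_add_fish_to_aquarium.py with the following contents (the exact wording of the ValueError message is illustrative):

test_add_fish_to_aquarium.py

import unittest


def add_fish_to_aquarium(fish_list):
    if len(fish_list) > 10:
        # The message text is illustrative; any descriptive message works.
        raise ValueError("A maximum of 10 fish can be added to the aquarium")
    return {"tank_a": fish_list}


class TestAddFishToAquarium(unittest.TestCase):
    def test_add_fish_to_aquarium_success(self):
        actual = add_fish_to_aquarium(fish_list=["shark", "tuna"])
        expected = {"tank_a": ["shark", "tuna"]}
        self.assertEqual(actual, expected)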

First we import unittest to make the module available to our code. We then define the function we want to test—here it is add_fish_to_aquarium.

In this case our add_fish_to_aquarium function accepts a list of fish named fish_list, and raises an error if fish_list has more than 10 elements. The function then returns a dictionary mapping the name of a fish tank 'tank_a' to the given fish_list.

A class named TestAddFishToAquarium is defined as a subclass of unittest.TestCase. A method named test_add_fish_to_aquarium_success is defined on TestAddFishToAquarium. test_add_fish_to_aquarium_success calls the add_fish_to_aquarium function with a specific input and verifies that the actual returned value matches the value we’d expect to be returned.

Now that we’ve defined a TestCase subclass with a test, let’s review how we can execute that test.

Executing a TestCase

In the previous section, we created a TestCase subclass named TestAddFishToAquarium. From the same directory as the test_add_fish_to_aquarium.py file, let’s run that test with the following command:
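python -m unittest test_add_fish_to_aquarium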

We invoked the Python library module named unittest with python -m unittest. Then, we provided the path to our file containing our TestAddFishToAquarium TestCase as an argument.

After we run this command, we receive output like the following:
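Output

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK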

The unittest module ran our test and told us that our test ran OK. The single . on the first line of the output represents our passed test.

Note: TestCase recognizes test methods as any method that begins with test. For example, def test_add_fish_to_aquarium_success(self) is recognized as a test and will be run as such. def example_test(self), conversely, would not be recognized as a test because it does not begin with test. Only methods beginning with test will be run and reported when you run python -m unittest ....

Now let’s try a test with a failure.

We modify the following line in our test method (marked with a comment in the listing below) to introduce a failure:

test_add_fish_to_aquarium.py
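import unittest


def add_fish_to_aquarium(fish_list):
    if len(fish_list) > 10:
        raise ValueError("A maximum of 10 fish can be added to the aquarium")
    return {"tank_a": fish_list}


class TestAddFishToAquarium(unittest.TestCase):
    def test_add_fish_to_aquarium_success(self):
        actual = add_fish_to_aquarium(fish_list=["shark", "tuna"])
        expected = {"tank_a": ["rabbit"]}  # modified line: a deliberately wrong expectation
        self.assertEqual(actual, expected)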

The modified test will fail because add_fish_to_aquarium won’t return 'rabbit' in its list of fish belonging to 'tank_a'. Let’s run the test.

Again, from the same directory as test_add_fish_to_aquarium.py we run:
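python -m unittest test_add_fish_to_aquarium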

When we run this command, we receive output like the following:
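Output

F
======================================================================
FAIL: test_add_fish_to_aquarium_success (test_add_fish_to_aquarium.TestAddFishToAquarium)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_add_fish_to_aquarium.py", line 13, in test_add_fish_to_aquarium_success
    self.assertEqual(actual, expected)
AssertionError: {'tank_a': ['shark', 'tuna']} != {'tank_a': ['rabbit']}

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

(Details such as the reported line number and timing will vary.)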

The failure output indicates that our test failed. The actual output of {'tank_a': ['shark', 'tuna']} did not match the (incorrect) expectation we added to test_add_fish_to_aquarium.py of {'tank_a': ['rabbit']}. Notice also that instead of a ., the first line of the output now has an F. Whereas . characters are printed when tests pass, F is printed when unittest runs a test that fails.

Now that we’ve written and run a test, let’s try writing another test for a different behavior of the add_fish_to_aquarium function.

Testing a Function that Raises an Exception

unittest can also help us verify that the add_fish_to_aquarium function raises a ValueError Exception if given too many fish as input. Let’s expand on our earlier example, and add a new test method named test_add_fish_to_aquarium_exception:
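The expanded file looks like the following (again, the exact exception message is illustrative):

test_add_fish_to_aquarium.py

import unittest


def add_fish_to_aquarium(fish_list):
    if len(fish_list) > 10:
        raise ValueError("A maximum of 10 fish can be added to the aquarium")
    return {"tank_a": fish_list}


class TestAddFishToAquarium(unittest.TestCase):
    def test_add_fish_to_aquarium_success(self):
        actual = add_fish_to_aquarium(fish_list=["shark", "tuna"])
        expected = {"tank_a": ["shark", "tuna"]}
        self.assertEqual(actual, expected)

    def test_add_fish_to_aquarium_exception(self):
        too_many_fish = ["shark"] * 25
        with self.assertRaises(ValueError) as exception_context:
            add_fish_to_aquarium(fish_list=too_many_fish)
        self.assertEqual(
            str(exception_context.exception),
            "A maximum of 10 fish can be added to the aquarium",
        )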

The new test method test_add_fish_to_aquarium_exception also invokes the add_fish_to_aquarium function, but it does so with a list containing the string 'shark' repeated 25 times.

test_add_fish_to_aquarium_exception uses the with self.assertRaises(...) context manager provided by TestCase to check that add_fish_to_aquarium rejects the inputted list as too long. The first argument to self.assertRaises is the Exception class that we expect to be raised (in this case, ValueError). The self.assertRaises context manager is bound to a variable named exception_context. The exception attribute on exception_context contains the underlying ValueError that add_fish_to_aquarium raised. Calling str() on that ValueError retrieves its message, which we then compare against the message we expect.

From the same directory as test_add_fish_to_aquarium.py, let’s run our test:
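python -m unittest test_add_fish_to_aquarium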

When we run this command, we receive output like the following:
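Output

..
----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK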

Notably, our test would have failed if add_fish_to_aquarium either didn’t raise an Exception, or raised a different Exception (for example TypeError instead of ValueError).

Note: unittest.TestCase exposes a number of other methods beyond assertEqual and assertRaises that you can use. The full list of assertion methods can be found in the documentation, but a selection is included here:

Method                    Assertion
assertEqual(a, b)         a == b
assertNotEqual(a, b)      a != b
assertTrue(a)             bool(a) is True
assertFalse(a)            bool(a) is False
assertIsNone(a)           a is None
assertIsNotNone(a)        a is not None
assertIn(a, b)            a in b
assertNotIn(a, b)         a not in b

Now that we’ve written some basic tests, let’s see how we can use other tools provided by TestCase to harness whatever code we are testing.

Using the setUp Method to Create Resources

TestCase also supports a setUp method to help you create resources on a per-test basis. setUp methods can be helpful when you have a common set of preparation code that you want to run before each and every one of your tests. setUp lets you put all this preparation code in a single place, instead of repeating it over and over for each individual test.

Let’s take a look at an example:

test_fish_tank.py
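import unittest


class FishTank:
    def __init__(self):
        self.has_water = False

    def fill_with_water(self):
        self.has_water = True


class TestFishTank(unittest.TestCase):
    def setUp(self):
        self.fish_tank = FishTank()

    def test_fish_tank_empty_by_default(self):
        self.assertFalse(self.fish_tank.has_water)

    def test_fish_tank_can_be_filled(self):
        self.fish_tank.fill_with_water()
        self.assertTrue(self.fish_tank.has_water)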

test_fish_tank.py defines a class named FishTank. FishTank.has_water is initially set to False, but can be set to True by calling FishTank.fill_with_water(). The TestCase subclass TestFishTank defines a method named setUp that instantiates a new FishTank instance and assigns that instance to self.fish_tank.

Since setUp is run before every individual test method, a new FishTank instance is instantiated for both test_fish_tank_empty_by_default and test_fish_tank_can_be_filled. test_fish_tank_empty_by_default verifies that has_water starts off as False. test_fish_tank_can_be_filled verifies that has_water is set to True after calling fill_with_water().

From the same directory as test_fish_tank.py, we can run:
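python -m unittest test_fish_tank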

If we run the previous command, we will receive the following output:
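Output

..
----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK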

The final output shows that the two tests both pass.

setUp allows us to write preparation code that is run for all of our tests in a TestCase subclass.

Note: If you have multiple test files with TestCase subclasses that you’d like to run, consider using python -m unittest discover to run more than one test file. Run python -m unittest discover --help for more information.

Using the tearDown Method to Clean Up Resources

TestCase supports a counterpart to the setUp method named tearDown. tearDown is useful if, for example, we need to clean up connections to a database, or modifications made to a filesystem after each test completes. We’ll review an example that uses tearDown with filesystems:
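Create a file named test_advanced_fish_tank.py along these lines (the fish_tank_file_name attribute name is illustrative):

test_advanced_fish_tank.py

import os
import unittest


class AdvancedFishTank:
    def __init__(self):
        self.fish_tank_file_name = "fish_tank.txt"  # attribute name is illustrative
        with open(self.fish_tank_file_name, "w") as f:
            f.write("shark, tuna")

    def empty_tank(self):
        os.remove(self.fish_tank_file_name)


class TestAdvancedFishTank(unittest.TestCase):
    def setUp(self):
        self.fish_tank = AdvancedFishTank()

    def tearDown(self):
        self.fish_tank.empty_tank()

    def test_fish_tank_writes_file(self):
        with open(self.fish_tank.fish_tank_file_name) as f:
            contents = f.read()
        self.assertEqual(contents, "shark, tuna")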

test_advanced_fish_tank.py defines a class named AdvancedFishTank. AdvancedFishTank creates a file named fish_tank.txt and writes the string 'shark, tuna' to it. AdvancedFishTank also exposes an empty_tank method that removes the fish_tank.txt file. The TestAdvancedFishTank TestCase subclass defines both a setUp and tearDown method.


The setUp method creates an AdvancedFishTank instance and assigns it to self.fish_tank. The tearDown method calls the empty_tank method on self.fish_tank: this ensures that the fish_tank.txt file is removed after each test method runs. This way, each test starts with a clean slate. The test_fish_tank_writes_file method verifies that the default contents of 'shark, tuna' are written to the fish_tank.txt file.

From the same directory as test_advanced_fish_tank.py let’s run:
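python -m unittest test_advanced_fish_tank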

We will receive the following output:
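Output

.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK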

tearDown allows you to write cleanup code that is run for all of your tests in a TestCase subclass.

Conclusion

In this tutorial, you have written TestCase classes with different assertions, used the setUp and tearDown methods, and run your tests from the command line.

The unittest module exposes additional classes and utilities that this tutorial did not cover. Now that you have a baseline, you can use the unittest module’s documentation to learn more about other available classes and utilities. You may also be interested in How To Add Unit Testing to Your Django Project.

Note

This document assumes you are working from an in-development checkout of Python. If you are not, then some things presented here may not work, as they may depend on new features not available in earlier versions of Python.

Running

The shortest, simplest way of running the test suite is the following command from the root directory of your checkout (after you have built Python):
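./python -m test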

You may need to change this command as follows throughout this section. On most Mac OS X systems, replace ./python with ./python.exe. On Windows, use python.bat. If using Python 2.7, replace test with test.regrtest.

If you don’t have easy access to a command line, you can run the test suite from a Python or IDLE shell:
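>>> from test import autotest  # the import itself triggers the test run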

This will run the majority of tests, but exclude a small portion of them; these excluded tests use special kinds of resources: for example, accessing the Internet, or trying to play a sound or to display a graphical interface on your desktop. They are disabled by default so that running the test suite is not too intrusive. To enable some of these additional tests (and for other flags which can help debug various issues such as reference leaks), read the help text:
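./python -m test -h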

If you want to run a single test file, simply specify the test file name (without the extension) as an argument. You also probably want to enable verbose mode (using -v), so that individual failures are detailed:
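For example, to run the test_abc file verbosely (test_abc stands in for whichever test file you care about):

./python -m test -v test_abc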

To run a single test case, use the unittest module, providing the import path to the test case:
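For example, assuming a test case class named TestABC inside Lib/test/test_abc.py:

./python -m unittest -v test.test_abc.TestABC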

If you have a multi-core or multi-CPU machine, you can enable parallel testing using several Python processes so as to speed things up:
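./python -m test -j0

(On recent versions, -j0 lets the runner pick a process count automatically.)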

If you are running a version of Python prior to 3.3, you must specify the number of processes to run simultaneously (e.g. -j2).

Finally, if you want to run tests under a more strenuous set of settings, you can run test as:
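./python -bb -E -Wd -m test -r -w -uall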

The various extra flags passed to Python cause it to be much stricter about various things (the -Wd flag should be -Werror at some point, but the test suite has not reached a point where all warnings have been dealt with and so we cannot guarantee that a bug-free Python will properly complete a test run with -Werror). The -r flag to the test runner causes it to run tests in a more random order, which helps to check that the various tests do not interfere with each other. The -w flag causes failing tests to be run again to see if the failures are transient or consistent. The -uall flag allows the use of all available resources so as to not skip tests requiring, e.g., Internet access.

To check for reference leaks (only needed if you modified C code), use the -R flag. For example, -R3:2 will first run the test 3 times to settle down the reference count, and then run it 2 more times to verify if there are any leaks.
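For example (test_abc again stands in for a real test file):

./python -m test -R3:2 test_abc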

You can also execute the Tools/scripts/run_tests.py script as found in a CPython checkout. The script tries to balance speed with thoroughness. But if you want the most thorough tests, you should use the strenuous approach shown above.

Unexpected Skips

Sometimes when running the test suite, you will see “unexpected skips” reported. These represent cases where an entire test module has been skipped, but the test suite normally expects the tests in that module to be executed on that platform.

Often, the cause is that an optional module hasn’t been built due to missing build dependencies. In these cases, the missing module reported when the test is skipped should match one of the modules reported as failing to build when you compile and build CPython.

In other cases, the skip message should provide enough detail to help figure out and resolve the cause of the problem (for example, the default security settings on some platforms will disallow some tests).

Writing

Writing tests for Python is much like writing tests for your own code. Tests need to be thorough, fast, isolated, consistently repeatable, and as simple as possible. We try to have tests both for normal behaviour and for error conditions. Tests live in the Lib/test directory, where every file that includes tests has a test_ prefix.

One difference with ordinary testing is that you are encouraged to rely on the test.support module. It contains various helpers that are tailored to Python’s test suite and help smooth out common problems such as platform differences, resource consumption and cleanup, or warnings management. That module is not suitable for use outside of the standard library.
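As a rough illustration (the exact helpers available differ between Python versions), a test can use test.support to declare that it needs an optional resource:

import unittest
from test import support  # helpers intended for CPython's own test suite


class NetworkTests(unittest.TestCase):
    def test_network_behaviour(self):
        # Raises ResourceDenied (reported as a skip) unless the 'network'
        # resource was enabled, e.g. via -unetwork or -uall.
        support.requires('network')
        ...  # network-dependent assertions would go here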


When you are adding tests to an existing test file, it is also recommended that you study the other tests in that file; it will teach you which precautions you have to take to make your tests robust and portable.

Benchmarks

Benchmarking is useful to test that a change does not degrade performance.


The Python Benchmark Suite has a collection of benchmarks for all Python implementations. Documentation about running the benchmarks is in the README.txt of the repo.