Testing

Cloud-init has both unit tests and integration tests. Unit tests can be found at tests/unittests. Integration tests can be found at tests/integration_tests. Documentation specifically for integration tests can be found on the Integration testing page, but the guidelines specified below apply to both types of tests.

Cloud-init uses pytest to write and run its tests.

Note

A subset of tests are written as unittest.TestCase subclasses for historical reasons. Their use is discouraged, and their removal is tracked in #6427.

Guidelines

The following guidelines should be followed.

Test layout

  • For consistency, unit test files should have a matching name and directory location under tests/unittests.

  • E.g., the expected test file for code in cloudinit/path/to/file.py is tests/unittests/path/to/test_file.py.

pytest guidelines

In-house fixtures

Before implementing your own fixture, search the */conftest.py files, as it may already exist. Another place to look for test helpers is tests/*/helpers.py.

Relevant fixtures:

  • disable_subp_usage automatically disables calls to subprocesses in tests. See its documentation for how to opt out.

  • fake_filesystem makes tests run on a temporary filesystem.

  • paths provides an instance of cloudinit.helpers.Paths pointing to a temporary filesystem.

Dependency versions

Cloud-init supports a range of versions for each of its test dependencies, as well as runtime dependencies. If you are unsure whether a specific feature is supported for a particular dependency, check the lowest-supported environment in tox.ini. This can be run using tox -e lowest-supported. This runs as a GitHub Actions job when a pull request is submitted or updated.

Mocking and assertions

  • Variables/parameter names for Mock or MagicMock instances should start with m_ to clearly distinguish them from non-mock variables. For example, m_readurl (which would be a mock for readurl).

  • The assert_* methods that are available on Mock and MagicMock objects should be avoided, as typos in these method names may not raise AttributeError (and so can cause tests to silently pass).

    • An important exception: if a Mock is autospecced then misspelled assertion methods will raise an AttributeError, so these assertion methods may be used on autospecced Mock objects.

  • For a non-autospecced Mock, these substitutions can be used (m is assumed to be a Mock):

    • m.assert_any_call(*args, **kwargs) => assert mock.call(*args, **kwargs) in m.call_args_list

    • m.assert_called() => assert 0 != m.call_count

    • m.assert_called_once() => assert 1 == m.call_count

    • m.assert_called_once_with(*args, **kwargs) => assert [mock.call(*args, **kwargs)] == m.call_args_list

    • m.assert_called_with(*args, **kwargs) => assert mock.call(*args, **kwargs) == m.call_args_list[-1]

    • m.assert_has_calls(call_list, any_order=True) => for call in call_list: assert call in m.call_args_list

      • m.assert_has_calls(...) and m.assert_has_calls(..., any_order=False) are not easily replicated in a single statement, so their use when appropriate is acceptable.

    • m.assert_not_called() => assert 0 == m.call_count

  • When there are multiple patch calls in a test file for the module it is testing, it may be desirable to capture the shared string prefix for these patch calls in a module-level variable. If used, such variables should be named M_PATH or, for datasource tests, DS_PATH.
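The substitutions and conventions above can be sketched with only the standard library. The M_PATH value here is purely illustrative and points at os.path; in cloud-init it would name the module under test.

```python
import os
from unittest import mock

m = mock.Mock()
m("a", key="b")

# m.assert_called_once_with("a", key="b") becomes:
assert [mock.call("a", key="b")] == m.call_args_list
# m.assert_any_call("a", key="b") becomes:
assert mock.call("a", key="b") in m.call_args_list
# m.assert_called() becomes:
assert 0 != m.call_count

# Why the assert_* methods are risky on a plain Mock: a misspelled
# method call silently creates a child mock instead of asserting.
m.called_once()  # typo for assert_called_once(); no error is raised

# On an autospecced mock the same mistake raises AttributeError, so
# the assert_* methods are safe to use there:
m_auto = mock.create_autospec(os.path.exists)
m_auto("/tmp")
m_auto.assert_called_once()
try:
    m_auto.called_once()
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")

# A module-level shared patch prefix, as described above (hypothetical
# target; in real tests M_PATH would name the module under test):
M_PATH = "os.path."
with mock.patch(M_PATH + "exists", return_value=True):
    assert os.path.exists("/definitely/not/a/real/path")
```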

Test argument ordering

  • Test arguments should be ordered as follows:

    • mock.patch arguments. When used as a decorator, mock.patch partially applies its generated Mock object as the first argument, so these arguments must go first.

    • pytest.mark.parametrize arguments, in the order specified to the parametrize decorator. These arguments are also provided by a decorator, so it’s natural that they sit next to the mock.patch arguments.

    • Fixture arguments, alphabetically. These are not provided by a decorator, so they are last, and their order has no defined meaning, so we default to alphabetical.

  • To retain the property that arguments left-to-right correspond to decorators bottom-to-top, it follows that test decorators should be ordered as follows:

    • pytest.mark.parametrize

    • mock.patch
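Putting the two rules together, an illustrative test (with hypothetical names; tmp_path stands in for an arbitrary pytest fixture) might look like this:

```python
from unittest import mock

import pytest


@pytest.mark.parametrize("value", [1, 2])          # parametrize decorator on top
@mock.patch("os.path.exists", return_value=True)   # mock.patch below it
def test_ordering(m_exists, value, tmp_path):
    # Arguments left-to-right match decorators bottom-to-top:
    # m_exists (from mock.patch) comes first, then the parametrized
    # "value", then fixtures such as tmp_path, alphabetically.
    assert m_exists("/some/path") is True
    assert value in (1, 2)
```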

Test pre-release cloud-init

After the cloud-init team creates an upstream release, cloud-init is published to the -proposed APT repository for a period of testing, as part of the SRU process that delivers updates to multiple Ubuntu releases. Users are encouraged to test their workloads on pending releases so bugs can be caught and fixed before the package becomes more broadly available via the -updates repository. This guide describes how to test the pre-release package on Ubuntu.

Add the -proposed repository pocket

The -proposed repository pocket will contain the cloud-init package to be tested prior to release in the -updates pocket.

echo "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc)-proposed main" >> /etc/apt/sources.list.d/proposed.list
apt update

Install the pre-release cloud-init package

apt install cloud-init

Test the package

Whatever workload you use cloud-init for in production is the best one to test. This ensures that you can discover and report any bugs that the cloud-init developers missed during testing before cloud-init gets released more broadly.

If issues are found during testing, please file a new cloud-init bug.

Remove the proposed repository

Do this to avoid unintentionally installing other unreleased packages.

rm /etc/apt/sources.list.d/proposed.list
apt update

Remove artifacts and reboot

This will cause cloud-init to rerun as if it were a first boot.

sudo cloud-init clean --logs --reboot