Tests
Note: The current testing framework is available in Terraform v1.6.0 and later.
Terraform tests allow module authors to validate the functionality of their modules during development and prior to release.
Types of Tests
The default use case for the testing framework is a kind of integration testing. This is because the default value for the `command` attribute in each `run` block (discussed below) is `apply`. This means that Terraform will attempt to execute a complete `apply` operation against the configuration under test, which will create real infrastructure that can then be validated by the testing framework.
It is possible to simulate more local and fine-grained unit testing behaviour with the use of the `command = plan` attribute and value in a given `run` block. This means that Terraform will only produce a potential plan that will not be applied, and the values in the plan can be validated. This functionality can be used to check logical operations within your configuration, and to validate custom conditions within resources, variables, and outputs without creating real infrastructure.
Syntax
Each Terraform test is contained within a test file. Test files are discovered by Terraform via their file extension: `.tftest.hcl` or `.tftest.json`.
Each test file contains the following root level attributes and blocks:

- One to many `run` blocks.
- Zero to one `variables` block.
- Zero to many `provider` blocks.
The `run` blocks are executed in order, simulating a series of Terraform commands being executed directly within the configuration directory. The order of the `variables` and `provider` blocks doesn't matter; all values within these blocks are processed once at the beginning of the test operation. A well laid out test file has the `variables` and `provider` blocks defined first, at the beginning of the file.
Example
The following example demonstrates a simple Terraform configuration that creates an AWS S3 bucket, using an input variable to modify the name, combined with a test that verifies the name of the S3 bucket is as expected.
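The inline example was dropped from this extraction; a minimal sketch of such a pair is below. The variable name, resource name, and file names are assumptions for illustration.

```hcl
# main.tf -- the configuration under test (illustrative).
variable "bucket_prefix" {
  type = string
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_prefix}_bucket"
}

output "bucket_name" {
  value = aws_s3_bucket.bucket.bucket
}
```

```hcl
# valid_string_concat.tftest.hcl -- the test file (illustrative).
run "valid_string_concat" {
  command = plan

  variables {
    bucket_prefix = "test"
  }

  assert {
    condition     = aws_s3_bucket.bucket.bucket == "test_bucket"
    error_message = "S3 bucket name did not match expected"
  }
}
```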
The above test file runs a single Terraform plan command, which plans the creation of the S3 bucket, and then validates that the logic for calculating the name is correct by checking the planned name matches the expected name.
Run blocks
Each `run` block has the following fields and blocks:
- Zero to one `command` attribute, which is either `apply` or `plan` and defaults to `apply`.
- Zero to one `plan_options` block, which contains:
  - Zero to one `mode` attribute, which is either `normal` or `refresh-only` and defaults to `normal`.
  - Zero to one boolean `refresh` attribute, which defaults to `true`.
  - Zero to one `replace` attribute, which contains a list of resource addresses referencing resources within the configuration under test.
  - Zero to one `target` attribute, which contains a list of resource addresses referencing resources within the configuration under test.
- Zero to one `variables` block.
- Zero to one `module` block.
- Zero to one `providers` attribute.
- Zero to many `assert` blocks.
- Zero to one `expect_failures` attribute.
The `command` attribute and `plan_options` block tell Terraform which command and options to execute for each `run` block. The default operation, if neither the `command` attribute nor the `plan_options` block is specified, is a normal Terraform apply operation.
The `command` attribute is simple, stating whether the operation should be a `plan` or an `apply` operation.
The `plan_options` block allows test authors to customize the planning mode and planning options that would normally be edited via command-line flags and options. Note that the `-var` and `-var-file` options are discussed in the Variables section.
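As a sketch of these fields together, the following run block mirrors a `terraform plan -refresh=false -target=aws_s3_bucket.bucket` invocation; the resource address and names are assumptions for illustration:

```hcl
run "targeted_plan" {
  command = plan

  plan_options {
    refresh = false
    target  = [aws_s3_bucket.bucket]
  }

  assert {
    condition     = aws_s3_bucket.bucket.bucket == "test_bucket"
    error_message = "S3 bucket name did not match expected"
  }
}
```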
Assertions
Terraform `run` block assertions are Custom Conditions, made up of a condition and an error message.
At the conclusion of a `terraform test` command execution, Terraform will present any failed assertions as part of each test's passed or failed status.
Assertion References
Assertions within tests can reference any existing named values that would be available to other custom conditions within the main Terraform configuration.
In addition, test assertions can directly reference outputs. From the previous example, this would be a valid condition: `condition = output.bucket_name == "test_bucket"`.
Variables
You can provide values for Input Variables within your configuration directly from your test files.
The test file syntax supports `variables` blocks at both the root level and within `run` blocks. Variable values provided directly within `run` blocks will override the values provided by a `variables` block at the root level.
Continuing our example from above:
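The inline example was dropped from this extraction; a sketch of root-level and run-level variables, reusing the assumed `bucket_prefix` variable from earlier:

```hcl
# variable_precedence.tftest.hcl (illustrative).
variables {
  bucket_prefix = "test"
}

run "uses_root_level_value" {
  command = plan

  assert {
    condition     = output.bucket_name == "test_bucket"
    error_message = "invalid bucket name"
  }
}

run "overrides_root_level_value" {
  command = plan

  # This run-level value overrides the root-level bucket_prefix above.
  variables {
    bucket_prefix = "other"
  }

  assert {
    condition     = output.bucket_name == "other_bucket"
    error_message = "invalid bucket name"
  }
}
```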
Variables on the Command Line and Definition Files
In addition to values provided via test files, the `terraform test` command also supports the alternate input mechanisms supported by other commands.
You can specify values for variables across all tests via the Command Line and via Variable Definition Files.
This is particularly useful when supplying sensitive values that would otherwise be exposed directly within the testing files, and for configuring providers.
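For example, using flag syntax shared with other Terraform commands (the variable name is the assumed one from earlier):

```shell
# Supply a value for all tests from the command line.
terraform test -var="bucket_prefix=test"

# Or load values from a variable definition file.
terraform test -var-file="testing.tfvars"
```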
Variable Definition Precedence
The Variable Definition Precedence remains the same within tests, except for values provided by `variables` blocks within the test files. These new input methods take the highest precedence, so they will override environment variables, variable files, and command-line input.
Providers
You can set or override the required providers within the main configuration from your testing files by using `provider` and `providers` blocks and attributes.
At the root level of a Terraform testing file, `provider` blocks can be defined as if they were being created within the main configuration. These provider blocks will then be passed into the configuration as each `run` block executes.
By default, all defined providers will be made directly available within each `run` block. It is also possible to customize which providers are made available within a given `run` block using a `providers` attribute. The behaviour and syntax for this attribute match those of the `providers` meta-argument.
If no provider configuration is provided within a testing file, Terraform will attempt to initialize any providers within the configuration using their default settings. For example, any environment variables aimed at configuring providers will still be available and will be used by Terraform to create default providers.
Extending the previous example to allow our tests, instead of the configuration, to specify the region:
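A sketch of a root-level provider block in the test file; the region value and names are assumptions:

```hcl
# main.tftest.hcl (illustrative).
provider "aws" {
  region = "eu-central-1"
}

run "valid_string_concat" {
  command = plan

  variables {
    bucket_prefix = "test"
  }

  assert {
    condition     = output.bucket_name == "test_bucket"
    error_message = "invalid bucket name"
  }
}
```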
We can also create a more complex example that makes use of multiple providers and aliases:
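For instance, a test file might define an aliased second provider at its root; the regions and alias name here are assumptions:

```hcl
# Illustrative: two AWS provider definitions, one aliased.
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "eu-central-1"
}
```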
It is also possible to define specific providers you want to use in specific `run` blocks:
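A sketch, assuming an aliased provider `aws.secondary` has been defined at the root of the test file:

```hcl
run "uses_secondary_region" {
  command = plan

  # Only this run block swaps in the aliased provider definition.
  providers = {
    aws = aws.secondary
  }

  variables {
    bucket_prefix = "test"
  }

  assert {
    condition     = output.bucket_name == "test_bucket"
    error_message = "invalid bucket name"
  }
}
```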
Note: When running tests in `apply` mode, switching providers between `run` blocks can result in failed operations and tests, as resources created by one provider definition will be unusable when modified by a second.
Modules
You can also modify the module that a given `run` block will execute.
By default, Terraform will execute the given command against the configuration under test for each `run` block. Each `run` block also allows the user to override the configuration under test using the `module` block.
Compared with the traditional `module` block, the `module` block within test files only supports the `source` attribute and the `version` attribute. The remaining attributes that would normally be supplied via the traditional `module` block are provided elsewhere within the `run` block.
Note: Terraform test files only support local and registry modules within the `source` attribute.
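A minimal sketch of a `run` block that executes an alternate module; the local path and variable name are assumptions:

```hcl
run "setup" {
  # Execute a helper module instead of the configuration under test.
  module {
    source = "./testing/setup"
  }

  variables {
    bucket_name = "test_bucket"
  }
}
```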
All other blocks and attributes within the `run` block are supported when executing an alternate module, with `assert` blocks executing against values from the alternate module. This is discussed more in Modules State.
Here are two example use cases for the `module` block within a testing file:
- A setup module to create necessary infrastructure required by the main configuration under test.
- A loading module to load and validate secondary infrastructure (such as data sources) not created directly by the main configuration under test.
The following example demonstrates both of these use cases:
- We have a module that will create and load several files into an already created S3 bucket.
- This is the configuration we want to test.
- We have a setup module that will create the S3 bucket, so it is available to the configuration under test.
- We have a loading module that will load the files in the S3 bucket.
- This is a fairly contrived example, as it is definitely possible to validate the files directly when they are created in the module under test. It is, however, good for demonstrating the use case.
- Finally, we have the test file itself which configures everything and calls out to the various helper modules we have created.
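The inline examples were dropped from this extraction; the test file described by the last point might be sketched as follows, with all module paths, variable names, and the data source address being assumptions:

```hcl
# file_load.tftest.hcl (illustrative sketch).
variables {
  bucket_name = "test_bucket"
}

run "setup" {
  # Create the S3 bucket required by the configuration under test.
  module {
    source = "./testing/setup"
  }
}

run "execute" {
  # Run the configuration under test, loading files into the bucket
  # created by the setup module.
  variables {
    bucket_name = run.setup.bucket_name
  }
}

run "verify" {
  # Load the created objects back via a data source and validate them.
  module {
    source = "./testing/loader"
  }

  assert {
    condition     = length(data.aws_s3_objects.objects.keys) == 2
    error_message = "created the wrong number of objects"
  }
}
```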
Modules State
During a `terraform test` execution, Terraform will maintain at least one, and possibly many, state files in memory for each test file it executes.
There is always at least one state file that maintains the state of the main configuration under test. This is shared by all `run` blocks that execute without a `module` block specifying an alternate module to load.
In addition, there is one state file per alternate module loaded. An alternate module state file is shared by all `run` blocks that execute the given module.
The Terraform team is interested in any use cases that would require manual state management, or the ability to execute different configurations against the same state, within the `test` command. If you have a use case for this, please file an issue and share it with us.
The following example uses comments to highlight where the state files for each `run` block are coming from. During the example a total of three state files will be created and managed: one for the main configuration under test, one for the setup module, and one for the loader module.
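The inline example was dropped from this extraction; a sketch of its shape, with assumed module paths:

```hcl
run "setup" {
  # State: dedicated state file for the ./testing/setup module.
  module {
    source = "./testing/setup"
  }
}

run "execute" {
  # State: the main state file for the configuration under test.
  variables {
    bucket_name = run.setup.bucket_name
  }
}

run "verify" {
  # State: dedicated state file for the ./testing/loader module.
  module {
    source = "./testing/loader"
  }
}
```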
Modules Cleanup
Terraform will attempt to clean up every resource created during the execution of a test file. When alternate modules are loaded, the order in which objects are destroyed is important. For example, in our first Modules example earlier we cannot destroy the resources created in the "setup" `run` block before the objects created in the "execute" `run` block.
Terraform will destroy resources in the following order, and this order is important as it may affect the structure of your testing files:

- Destroy the resources held in the main state file first, so you should not create resources in alternate modules that depend on resources from your main configuration.
  - Note that data sources can refer to objects in your main configuration, as Terraform doesn't have to destroy data sources.
- Destroy the resources created by alternate modules in reverse `run` block order.
  - From our example, any resources created in the "verify" `run` block would be destroyed before resources created in the "setup" `run` block. Note that in our example this doesn't particularly matter, as our "verify" `run` block only loads a data source and creates no resources.
If you only use a single setup module as an alternate module, and it executes first, or you use no alternate modules, then the order of destruction will not affect you. Anything more complex may require careful consideration to make sure automated destruction of created resources completes automatically.
Expecting Failures
By default, if any Custom Conditions, including `check` block assertions, fail during the execution of a Terraform test file then the overall command will report the test as a failure. It is a common testing paradigm, however, to want to test failure cases. Terraform supports the `expect_failures` attribute for this use case.
In each `run` block the `expect_failures` attribute can provide a list of checkable objects (resources, data sources, check blocks, input variables, and outputs) that should fail. The test will then pass overall if these checkable objects report an issue, while the test will fail overall if they do not report an issue.
You can still write assertions alongside an `expect_failures` attribute, but you should be mindful that all custom conditions, except check block assertions, halt the execution of Terraform. This still applies during test execution, so your assertions should only consider values that you are sure will be computed before the checkable object is due to fail. This can be managed via references or the `depends_on` meta-argument.
This also means that, with the exception of `check` blocks, only a single checkable object can be reliably included. We support a list of checkable objects within the `expect_failures` attribute purely for `check` blocks.
A quick example here demonstrates testing the `validation` block on an input variable.
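The inline example was dropped from this extraction; a sketch, with the variable name and length constraint being assumptions:

```hcl
# main.tf (illustrative): a variable with a validation block.
variable "bucket_prefix" {
  type = string

  validation {
    condition     = length(var.bucket_prefix) <= 10
    error_message = "The bucket prefix must be no longer than 10 characters."
  }
}
```

```hcl
# input_validation.tftest.hcl (illustrative).
run "invalid_prefix" {
  command = plan

  variables {
    bucket_prefix = "this_prefix_is_far_too_long"
  }

  # The run passes only if the validation on bucket_prefix fails.
  expect_failures = [
    var.bucket_prefix,
  ]
}
```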
Note: Failures are only expected in the operation specified by the `command` attribute of the `run` block.
This means you should be careful when using `expect_failures` in `run` blocks with `command = apply`. A `run` block with `command = apply` that expects a custom condition failure will fail overall if that custom condition fails during the plan.
This is logically consistent, as the `run` block is expecting to be able to run an apply operation but cannot, as the plan failed. It is also potentially confusing, as you will see the failure reported in the diagnostics as the reason the test failed, even though that failure was marked as being expected.
There are instances when a custom condition will not be executed during the planning stage because it relies on computed attributes only available after a referenced resource has been created. In these cases an `expect_failures` attribute alongside a `command = apply` attribute and value would be expected and acceptable. However, in most cases you should ensure that `expect_failures` is used only alongside `command = plan` operations.