Introduction
Like many software projects, frameworks and plugins, this one started because the author could not find a solution that really fitted their needs, and submitting pull requests to existing projects would not have steered those projects towards those needs. It is offered alongside a number of alternative solutions that are available as Gradle plugins.
The aim of this group of plugins is to make the integration of Terraform into the build automation pipeline as smooth as possible. The combination of Gradle and Terraform makes for an extremely powerful orchestration platform in a DevOps environment.
The plugins bring with them a number of subgoals:
- Simplicity to use defaults - convention over configuration.
- Maximum flexibility if you need it.
- No need to install terraform or relevant tools - let Gradle take care of it for you.
This is an incubating project. Until 1.0 is eventually released, interfaces and the DSL may change between 0.x releases.
NOTE: These documentation pages assume that you have at least a basic working knowledge of Terraform.
Terraform + Gradle - A mindset change
Integrating these two products delivers a powerful combination, but it also presents a number of operational changes that challenge people who are used to using terraform standalone or in scripts.
Using Terraform with Gradle means that the former no longer needs to be installed. Just specify the version and Gradle will bootstrap the correct version. If the version is already in cache (~/gradle/native-binaries/), Gradle will just reuse it.
Bootstrapping
These plugins are available from the plugin portal. Add the appropriate plugin identifiers to your build.gradle file depending on the type of functionality you require.
plugins {
id 'org.ysb33r.terraform.base' version '0.11.3' (1)
id 'org.ysb33r.terraform' version '0.11.3' (2)
id 'org.ysb33r.terraform.rc' version '0.11.3' (3)
id 'org.ysb33r.terraform.wrapper' version '0.11.3' (4)
id 'org.ysb33r.terraform.remotestate.s3' version '0.11.3' (5)
id 'org.ysb33r.terraform.check' version '0.11.3' (6)
id 'org.ysb33r.terraform.aws' version '0.11.3' (7)
id 'org.ysb33r.terraform.gitlab' version '0.11.3' (8)
}
1 | Base plugin provides terraform extension. |
2 | Conventions for terraform including source sets and tasks created by convention. |
3 | This plugin is normally not applied directly, but deals specifically with Terraform configuration |
4 | This plugin is usually only applied at the root project and deals with the creation of terraformw wrappers. |
5 | Simplifies storing of remote state in S3. See Remote State in S3. |
6 | Add tfFmtCheck and similar tasks for other Terraform source sets to the check lifecycle task. |
7 | Adds an aws extension to a source set which can be used to simplify authentication including assumed roles. See AWS support. |
8 | Adds a gitlab extension to a source set which can be used to simplify authentication for the Terraform Gitlab provider. See Gitlab support. |
NOTE: You need at least Gradle 4.9 to use these plugins.
Base plugin
The base plugin provides:
- A project extension named terraform.
- Various Terraform task types which tend to map the terraform command closely. For instance, for the init command, the appropriate task type is called tfInit.
- The ability to download and use terraform executables.
When the base plugin is applied to the root project, it will automatically apply the terraformrc plugin.
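For example, a minimal sketch of using only the base plugin (the Terraform version shown is purely illustrative):
plugins {
    id 'org.ysb33r.terraform.base' version '0.11.3'
}

terraform {
    // Pin the Terraform version that Gradle should download and cache.
    executable version: '1.2.3'
}
With only the base plugin applied, no source sets or conventional tasks such as tfInit are created; those come from the conventions plugin described next.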
Terraform plugin
The Terraform conventions plugin provides:
- terraformSourceSets as a project extension.
- The default Terraform source set called main with default directory src/tf/main.
- A number of tasks including tfInit, tfPlan and tfApply which act upon the default source set.
Applying the Terraform plugin will apply the base plugin.
Terraform RC plugin
The terraformrc plugin provides an extension called terraformrc which deals with the creation of a configuration file specifically for use by Terraform within the project.
Terraform Wrapper plugin
Provides the terraformWrapper and cacheTerraformBinaries tasks.
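Assuming the wrapper plugin has been applied at the root project, these tasks are invoked like any other Gradle tasks:
$ ./gradlew terraformWrapper
$ ./gradlew cacheTerraformBinaries
The former generates the terraformw wrappers mentioned above, while the latter, as its name suggests, pre-populates the cache of Terraform binaries.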
Quick start
The minimalist project is
plugins {
id 'org.ysb33r.terraform' version '0.11.3'
}
Now add an init.tf file to src/tf/main and add some Terraform content to it.
$ ./gradlew tfInit (1)
$ ./gradlew tfApply (2)
$ git add src/tf/main/terraform.tfstate (3)
1 | Initialise your Terraform project environment. |
2 | Apply your changes |
3 | Add your newly created .tfstate file to source control. If your Terraform context uses remote state storage, then this last step is not required. |
Platform installation support
These plugins can automatically download, cache and use Terraform for the following platforms:
- Linux 32 & 64-bit.
- Mac 64-bit.
- Windows 32 & 64-bit.
- FreeBSD 32 & 64-bit.
- Solaris 64-bit.
Should you need to run Gradle on a platform not listed above, but which Terraform supports and on which Gradle can run, you will need to configure the Terraform executable via the path or search methods. You can also raise an issue to ask for support, or submit a PR with the solution.
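For example, on such a platform you could point the build at a locally installed binary using the path method (described further under Setting Terraform version; the location below is purely illustrative):
terraform {
    // Use a pre-installed Terraform binary rather than letting Gradle bootstrap one.
    executable path: '/usr/local/bin/terraform'
}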
Source Sets
When the org.ysb33r.terraform plugin is applied, it adds the terraformSourceSets extension which contains the main source set. Associated with this source set are the default folder src/tf/main and a number of tasks including:
- tfApply
- tfCleanupWorkspaces
- tfDestroy
- tfDestroyPlan
- tfFmtCheck
- tfFmtApply
- tfImport
- tfInit
- tfOutput
- tfPlan
- tfShowState
- tfStateMv (terraform state mv)
- tfStatePush (terraform state push)
- tfStateRm (terraform state rm)
- tfValidate
- tfUntaint
- tfUpgrade
If additional source sets are needed, they can be added by convention, e.g.:
terraformSourceSets {
development (1)
staging {
srcDir = 'staging' (2)
}
}
1 | Creates a Terraform source set named 'development' with default directory src/tf/development |
2 | Creates a Terraform source set named 'staging' and sets the source directory to staging . |
Tasks for additional source sets follow the tf<SourceSetName><TerraformCommand> format. For instance, in the above example the initialisation task for the development source set will be called tfDevelopmentInit.
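For example, once the development source set above has been declared, its tasks can be run like any other (tfDevelopmentApply is assumed to follow the same naming pattern):
$ ./gradlew tfDevelopmentInit
$ ./gradlew tfDevelopmentApply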
By convention, all tasks that map Terraform commands start with tf. Other non-command tasks might start with terraform or contain Terraform within the task name.
Configuring source sets
terraformSourceSets {
main {
srcDir = 'src/tf/main' (1)
variables { (2)
var 'aws_region', 'us-east-1'
}
secondarySources 'src/tf/modules' (3)
remote.s3 { (4)
}
workspaces 'alpha', 'beta' (5)
}
}
1 | Source directory, which will also be the working directory for terraform. |
2 | Configure any variables that are specific to the source set. See Variables block for more details. |
3 | Additional sources that should affect re-running of tasks, but which are not directly part of the existing source set. |
4 | If you applied the org.ysb33r.terraform.remotestate.s3 plugin you can also configure remote state on a per-source set basis.
See configuring remote S3 state for more details. |
5 | Adds additional workspaces to the source set. |
Workspaces
As from 0.10, Terraform workspaces are supported.
This plugin suite adds naming conventions to easily deal with workspaces.
If you add a workspace called alpha, then the apply task for this workspace will be tfApplyAlpha.
If you add a workspace called beta to a source set called release, then the apply task will be tfReleaseApplyBeta.
These conventions only apply to tasks which are workspace/state-aware in Terraform. For instance, there will be no task named tfInitAlpha or tfFmtCheckAlpha.
There is also no need to switch workspaces, as the plugin will do that under the hood automatically. If you run ./gradlew tfApplyAlpha tfApplyBeta tfApplyGamma tfOutput, the plugin will automatically perform a terraform workspace select before executing terraform apply or terraform output.
If you decide to remove workspaces, simply clean up the state by running the appropriate tfDestroy task(s), then remove the workspaces from the terraformSourceSets DSL. Finally, run tfCleanupWorkspaces.
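A sketch of that sequence for the alpha workspace used earlier (assuming tfDestroy is workspace-aware, which would give a tfDestroyAlpha task):
$ ./gradlew tfDestroyAlpha        # destroy what the workspace manages
# remove 'alpha' from the workspaces list in the terraformSourceSets DSL, then:
$ ./gradlew tfCleanupWorkspaces   # remove the now-dangling workspace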
Terraform extension
The terraform project extension configures a method of obtaining a terraform executable. On supported platforms it offers the option of configuring a Terraform version, which Gradle will then download, cache and use. The project extension also allows for the definition of variables at a global level.
Each Terraform task also has a terraform extension which can be used to override any settings from the project extension on a per-task basis.
Setting Terraform version
It is possible to select a version of Terraform other than the default used by the plugin.
terraform {
executable version : '1.2.3' (1)
executable path : '/path/to/terraform' (2)
}
1 | Set new Terraform version |
2 | Do not bootstrap Terraform, but rather use the binary at the specified location. |
Setting Terraform version for source set
Sometimes you might need one of your source sets to run with an older version of Terraform. For instance, you might want to upgrade, but one of the source sets will take more work and leaving it at an older version for a period might be a good solution.
Assuming that you have a source set called monkey, this can be achieved by configuring the relevant tasks:
tasks.all { Task t -> (1)
if(t.name.startsWith('tfMonkey')) {
t.terraform.executable version: '1.2.3'
}
}
import org.ysb33r.gradle.terraform.tasks.AbstractTerraformTask
tasks.withType(AbstractTerraformTask) { AbstractTerraformTask t -> (2)
if(t.name.startsWith('tfMonkey')) {
t.terraform.executable version: '1.2.3'
}
}
1 | Using tasks.all is a simple solution, especially for the Groovy DSL. |
2 | Using withType is an alternative. The typing will help Kotlin DSL users. |
Terraform variables
A number of Terraform task types support variables and files containing variables. Any variables block supports the following functionality:
variables {
var 'foo', 'bar' (1)
map 'fooMap', foo: 'bar' (2)
list 'fooList', 'foo', 'bar' (3)
list 'fooList', [ 'foo', 'bar'] (4)
file 'filename.tf' (5)
}
1 | Adds one variable called foo with value bar . The provided value can be anything that can be lazy-evaluated to a string. |
2 | Adds a map called fooMap . The map keys have to be strings, but the map values can be anything that will lazy-evaluate to a string. |
3 | Adds a list called fooList with values foo and bar . The list entries can be anything that will lazy-evaluate to a string. |
4 | Alternative list format |
5 | Adds a file containing a list of terraform variables. The name can be anything that will lazy evaluate to a string and may be a relative path. The file will be resolved relative to the source set directory. |
There are a number of ways to define these, depending on the context.
- On project extension: When variables are defined on the terraform project extension, they are visible to all Terraform tasks that want to utilise them.
- On source set: When variables are defined on a specific Terraform source set, they are visible to all Terraform tasks that are associated with that source set.
- On task extension: When variables are defined on the terraform task extension, they are only visible to the specific task. A task extension can also be set up to ignore any variables from the project extension or the associated source set.
terraform {
variables {
var 'foo', 'bar' (1)
}
}
terraformSourceSets {
main {
variables {
var 'foo', 'bar' (2)
}
}
}
tfPlan {
terraform {
variables {
var 'foo', 'bar' (3)
global.ignore = true (4)
sourceSet.ignore = true (5)
}
}
}
1 | Adds a global terraform variable. |
2 | Adds a source set-specific terraform variable. |
3 | Adds a task-specific terraform variable. |
4 | Ignore any global terraform variables. |
5 | Ignore any source set-specific terraform variables. |
Environmental Variables
The Gradle execution environment is not passed down to Terraform by default, with the exception of specific platform-dependent variables, as defined by the following logic:
if (OS.windows) {
[
TEMP : System.getenv('TEMP'),
TMP : System.getenv('TMP'),
HOMEDRIVE : System.getenv('HOMEDRIVE'),
HOMEPATH : System.getenv('HOMEPATH'),
USERPROFILE : System.getenv('USERPROFILE'),
(OS.pathVar): System.getenv(OS.pathVar)
] as Map<String, Object>
} else {
[
HOME : System.getProperty('user.home'),
(OS.pathVar): System.getenv(OS.pathVar)
] as Map<String, Object>
}
Environmental variables can be defined at two levels:
- At project level, via the terraform project extension.
- On the task itself.
terraform {
environment = [:] (1)
setEnvironment([:]) (2)
environment AWS_SHARED_CREDENTIALS_FILE : '/path/shared.creds' (3)
}
tfInit {
environment = [ AWS_SECRET_ACCESS_KEY : '12345' ] (4)
setEnvironment([]) (5)
environment AWS_SHARED_CREDENTIALS_FILE : '/path/shared.creds' (6)
}
1 | Clear all project environment settings and replace with empty set. |
2 | Alternative setter. |
3 | Add a project-level environmental variable. |
4 | Clear all task-level environmental variables (in this example the tfInit task) and replace with a new set. Clearing the environmental variable set will not remove the default environmental variables. If you need to have them changed, set them explicitly. |
5 | Alternative task-level setter. |
6 | Add environmental variable to the task only. |
NOTE: It is possible to call the environment methods on the task’s terraform extension, but those methods will simply forward to the methods on the task itself.
AWS environmental variables
It is possible to add all AWS-related environmental variables to the Terraform runtime environment via short-cuts:
terraform {
useAwsEnvironment() (1)
}
tfInit {
useAwsEnvironment() (2)
}
1 | Set at project level |
2 | Set at task-specific level |
Command-line options for tasks
A number of task types support command-line options.
Task | Type | Purpose |
---|---|---|
tfApply | list | Select group of resources to apply. |
tfApply | boolean | Select group of resources to replace. |
tfCleanupWorkspaces | boolean | Force removal of dangling workspaces even if state still exists. |
tfDestroy | list | Select group of resources to apply. |
tfDestroy | boolean | Auto-approve destruction of all mentioned resources. |
tfDestroyPlan | list | Select group of resources to apply. |
tfDestroyPlan | boolean | Write textual plan in JSON format. |
tfImport | string | Resource to import. |
tfImport | string | Actual identifier to import. Check the Terraform documentation for the specific resource in order to know how to identify this. |
tfInit | boolean | Force upgrade of modules. |
tfInit | boolean | Do not configure backends. |
tfInit | boolean | Automatically answer yes to any backend migration questions. |
tfInit | boolean | Disregard any existing configuration and prevent migration of any existing state. |
tfOutput | boolean | Write output in JSON format. |
tfPlan | list | Select group of resources to apply. |
tfPlan | boolean | Write textual plan in JSON format. |
tfPlan | boolean | Select group of resources to replace. |
tfShowState | boolean | Write output in JSON format. |
tfStateMv | string | Source item to move. |
tfStateMv | string | Destination item. |
tfStatePush | string | Local state file path (relative to Terraform source directory) to push to remote state. |
tfStateRm | string | Resource to remove. |
tfUntaint | string | Resource to untaint. |
tfUntaint | boolean | Allow task to succeed even if the resource is missing. |
tfUntaint | boolean | Continue if remote and local Terraform versions differ from Terraform Cloud. |
tfValidate | boolean | Write validation output in JSON format. |
NOTE: Just in case this documentation is out-of-date, always run ./gradlew help --task <taskName> to get a description of supported command-line options.
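For example, to list the options that the tfPlan task accepts, or to request JSON output from tfOutput (the --json flag is referenced again under Accessing Output Variables):
$ ./gradlew help --task tfPlan
$ ./gradlew tfOutput --json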
Remote state in S3
Storing remote state in S3 seems to be a popular approach with Terraform users. Therefore the org.ysb33r.terraform.remotestate.s3 plugin adds some conventions to make life easier. If the default conventions do not quite suit your environment, there should hopefully be enough customisations available.
The plugin adds a createTfS3BackendConfiguration task and, if you have additional Terraform source sets, it will add tasks of the form createTf<SourceSet>S3BackendConfiguration. It also adds terraform.remote and terraform.remote.s3 extensions. The plugin will also add a map variable called remote_state which contains the elements name, aws_region and aws_bucket. If you apply this plugin you will need to add the map variable somewhere in your terraform files.
Approach
Normally, in order for remote state to be used, a set of backend configuration values or a backend configuration file has to be passed to terraform init. In the case of tfInit this is done via the backendConfigValues and backendConfigFile methods. The plugin simplifies this process by generating a configuration file from pre-configured values and a default template. It then automatically configures tfInit to use this configuration file.
The default template for creating the configuration is:
# Default template (1)
bucket = "@@bucket_name@@"
key = "@@remote_state_name@@.tfstate"
region = "@@aws_region@@"
1 | If this is not sufficient for your needs see how to customise the S3 template. |
The name of the file in S3 is determined from the project name by default. For each additional source set, the name of the source set is appended. This makes up the remote state name. This prefix can be modified via the terraform.remote.prefix property.
The AWS specifics of bucket name and region can be configured via properties in the terraform.remote.s3 extension, and further customised via the remote.s3 extension on each source set.
Usage
Add the S3 backend to your Terraform files.
terraform {
backend "s3" { (1)
}
}
variable "remote_state" { (2)
type = map(string)
}
1 | Indicate that you plan to store remote state in S3. You can configure anything as per usual for the S3 backend. |
2 | Allow for Gradle to pass the map of remote state variables. |
Now tell Gradle about your S3 setup.
terraform {
remote {
prefix = 'foo' (1)
s3 {
bucket = 'terraform-managed-remote-state-files' (2)
region = 'us-west-1' (3)
dynamoDbTable = "TerraformLocks" (4)
}
}
}
1 | If you are not happy with the project name being the default prefix for remote state names, then set this here. |
2 | Set the S3 bucket that will contain your state files. |
3 | Region that you are operating in. |
4 | The DynamoDB table used for Terraform locks. |
You can also customise on a per-source set basis. Such settings will override any values from the global settings. You can configure nearly any item that is part of the S3 backend via the remote.s3 extension. See RemoteStateS3Spec for details.
terraformSourceSets {
main {
remote {
s3 {
bucket = 'terraform-managed-remote-state-files' (1)
}
}
}
}
1 | Use a different bucket than what was configured globally |
Sometimes you might want to customise two source sets in the same way. Instead of duplicating the code, you can simply let one source set mirror any changes made to another source set using follow.
terraformSourceSets {
main {
remote {
s3 {
bucket = 'terraform-managed-remote-state-files'
}
}
}
staging {
remote {
s3.follow(terraformSourceSets.main.remote.s3) (1)
}
}
}
1 | Any changes made to main will be reflected in staging . |
Customising the configuration template
If the default template is not sufficient for your situation, you can supply your own.
createTfS3BackendConfiguration {
templateFile = 'src/tf/mytemplate.tf' (1)
textTemplate = """ (2)
key = "##remote_state_name$$.tfstate"
region = "##region$$"
"""
beginToken = '##' (3)
endToken = '$$' (4)
tokens = [ (5)
myBucket : 'abc'
]
tokens foo : 'bar' (6)
}
1 | Provide an alternative template file. Anything convertible to a file can be used. |
2 | Instead of a file, supply an alternative text template. Anything convertible to a string can be used. |
3 | Start delimiter. |
4 | End delimiter. |
5 | Replace all default tokens. |
6 | Add to existing tokens. |
Substitutions will be performed by Apache Ant’s ReplaceTokens.
Remote state from other source sets
If you are interested in accessing the remote state from another configuration, you will probably need something like this as well:
data "terraform_remote_state" "aws_tf_remote_state" {
backend = "s3"
config = {
bucket = var.remote_state["bucket"] (1)
key = "${var.remote_state["remote_state_name"]}.tfstate" (2)
region = var.remote_state["aws_region"] (3)
encrypt = true
dynamodb_table = var.remote_state["dynamodb_table"] (4)
}
}
1 | The name of the bucket where remote state will be stored. You can use the remote_state map to simplify things. The bucket should pre-exist. |
2 | Name of the file in S3 where the state will be stored. Convention is to add .tfstate to the remote state name. |
3 | Region which is used to store the state. |
4 | Your DynamoDB table for storing lock status. The table should pre-exist. |
See terraform_remote_state
for more details.
Accessing Output Variables
As from v0.9 it is possible to obtain access to output variables. This is an experimental feature and should be treated with care. Always have the correct task dependencies in place to ensure that you get up-to-date values for the variables.
This feature is targeted at two use cases:
- You have more than one Terraform source set, and you want to use outputs from one source set in another source set. Please ensure that you don’t create circular dependencies between the two source sets in this case.
- You have other build tasks that require values from your infrastructure.
NOTE: Before using this feature, consider using terraform-remote-state as a possible alternative to obtain output variables from another source set. If you do not need to reflect output variables back into Gradle itself, then terraform-remote-state might be a better solution.
In order to obtain these variables, terraform output
will be executed behind the scenes, and the JSON output parsed.
The output variables are effectively a map of the parsed JSON and can be accessed via the following methods on a source set.
- getRawOutputVariables. This returns a map of all of the variables. You can optionally supply a workspace name.
- getRawOutputVariable(name). This returns the value of a specific variable. If the value is not a Terraform primitive, you are responsible for traversing the values in the object. If you are unsure of the structure, run tfOutput or the equivalent task for your source set using --json on the command-line and then inspect the output file that was generated.
Both of these methods return a Gradle Provider. The first extraction via the get() accessor will result in the values being cached. For the rest of the build, the values are only read from the cache. Please delay accessing the required variables until as late as possible.
terraformSourceSets {
s3Buckets { (1)
}
main {
var 'use_s3_bucket', terraformSourceSets.getByName('s3Buckets').getRawOutputVariable('deploymentBucketName') (2)
}
}
1 | Assume that you have a source set that creates AWS S3 buckets and one of the outputs - deploymentBucketName - is the name of a deployment bucket. |
2 | Link the bucket name as an input variable to the main source set. This will cause the tfS3BucketsOutput task to be executed before any of the tasks related to the main source set are executed. |
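A sketch of the second use case mentioned earlier, feeding an output variable into an ordinary build task; it assumes the same s3Buckets source set and deploymentBucketName output as the example above:
tasks.register('printDeploymentBucket') {
    dependsOn 'tfS3BucketsOutput' // refresh the outputs before reading them
    def bucketName = terraformSourceSets
        .getByName('s3Buckets')
        .getRawOutputVariable('deploymentBucketName') // returns a Gradle Provider
    doLast {
        // The first get() caches the value for the rest of the build.
        println "Deployment bucket: ${bucketName.get()}"
    }
}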
AWS Credentials
terraformSourceSets {
main {
aws {
useAwsCredentialsFromEnvironment() (1)
useAwsCredentialsFromEnvironment 'production' (2)
usePropertiesForAws 'my.access.key', 'my.access.secret' (3)
usePropertiesForAws 'production',
'my.access.key', 'my.access.secret' (4)
useAwsCredentialsFromEnvironmentForAssumeRole { (5)
roleArn = 'arn:.....'
}
useAwsCredentialsFromEnvironmentForAssumeRole 'production',
{ } (6)
usePropertiesForAssumeRole 'my.access.key',
'my.access.secret', { } (7)
usePropertiesForAssumeRole 'production',
'my.access.key', 'my.access.secret', { } (8)
clearAllCredentials() (9)
}
}
}
1 | Use credentials from environment i.e. the AWS_ variables.
This is set to be the default for all workspaces, which are not specifically configured. |
2 | Use credentials from the environment for a specific workspace. |
3 | Use the values provided by the specific properties as the default for all workspaces.
The properties are resolved by first looking for a Gradle property by the given name, failing that a System property and finally in the environment.
For the latter the characters are uppercased and the dots replaced with underscores.
For example my.access.key will become MY_ACCESS_KEY .
The properties are lazily resolved i.e. only at time of running the specific Terraform command.
You can use a variant of this method by passing two providers instead of the property names. The provider should provide the two specific values. |
4 | Use the values provided by the properties for a specific workspace. As with the previous case there is a variant which takes two providers instead of the property names. |
5 | Use the credentials from the environment to set up an assumed role by default for all workspaces.
This overrides useAwsCredentialsFromEnvironment .
See configuring the assumed role for details. |
6 | Use the credentials from the environment to set up an assumed role for a specific workspace.
This overrides useAwsCredentialsFromEnvironment for a specific workspace. |
7 | Use the values provided by the specific properties as the default for all workspaces to set up assumed role authentication.
There is also a variation that takes providers instead of property names.
This overrides usePropertiesForAws . |
8 | Use the values provided by the specific properties for a specific workspace to set up assumed role authentication.
There is also a variation that takes providers instead of property names.
This overrides usePropertiesForAws for the specific workspace. |
9 | Reset any existing credential configurations. |
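For example, with the usePropertiesForAws configuration above, the credentials could be supplied on the command line as Gradle properties, or via the corresponding environment variables (all values below are placeholders):
$ ./gradlew tfApply -Pmy.access.key=AKIA... -Pmy.access.secret=...
$ MY_ACCESS_KEY=AKIA... MY_ACCESS_SECRET=... ./gradlew tfApply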
NOTE: A workspace can only use either straightforward credentials or assumed role credentials.
terraformSourceSets {
main {
aws {
useAwsCredentialsFromEnvironmentForAssumeRole {
roleArn = 'arn:.......' (1)
region = 'us-east-1' (2)
sessionName = '' (3)
durationSeconds = 240 (4)
}
}
}
}
1 | ARN of the role to be assumed. Required. |
2 | Region where the session token is being obtained from. Required. |
3 | A session name to identify the current session. If not supplied, an opinionated value will be generated. |
4 | Expiry period for the session token. Defaults to 15min. |
Gitlab Credentials
terraformSourceSets {
main {
gitlab {
useGitlabTokenFromEnvironment() (1)
useGitlabTokenFromEnvironment 'production' (2)
useProperty 'my.gitlab.token' (3)
useProperty 'production', 'my.gitlab.token' (4)
clearAllCredentials() (5)
}
}
}
1 | Use credentials from environment i.e. the GITLAB_TOKEN variable.
This is set to be the default for all workspaces, which are not specifically configured. |
2 | Use credentials from the environment for a specific workspace. |
3 | Use the value provided by the specific property as the default for all workspaces.
The property is resolved by first looking for a Gradle property by the given name, failing that a System property and finally in the environment.
For the latter the characters are uppercased and the dots replaced with underscores.
For example my.gitlab.token will become MY_GITLAB_TOKEN .
The property is lazily resolved i.e. only at time of running the specific Terraform command.
You can use a variant of this method by passing a provider instead of the property name. The provider should provide the specific value. |
4 | Use the value provided by the property for a specific workspace. As with the previous case, there is a variant which takes a provider instead of the property name. |
5 | Reset any existing credential configurations. |
Alternative Plugins
This documentation would not be complete without a description of the alternatives.
- org.curioswitch.gradle-terraform-plugin: This plugin from Curiostack apparently has support for YAML configuration. Not much documentation is available, so it is difficult to comment on how useful it is.
- tanvd.kosogor.terraform: Vladislav Tankov’s plugin focuses on a better Kotlin DSL. It apparently also has support for publishing Terraform modules.