This module packages various scripts for infrastructure deployments (see What container is used for the deploy
task? for more info) into an ECS task that streams its outputs to
CloudWatch, with an AWS Lambda function that can invoke that task. CI servers can then be configured to directly invoke
the lambda function to trigger the deployment and stream the output from CloudWatch.
The sequence of events is as follows:
By insulating the deploy script from the CI server, we are able to avoid granting IAM permissions to the CI servers that
are required for deploying against the target accounts. Instead, the CI servers only need enough permissions to trigger
the deployment. Refer to the Threat Model for more information.
Threat model of the deploy runner
To implement a CI/CD pipeline for infrastructure code, it is required that the ultimate entity or system running the
infrastructure code has the permissions to deploy the infrastructure defined by code. Unfortunately, to support
arbitrary CI/CD workflows, it is necessary to grant wide ranging permissions to the target environment. As such, it is
important to consider ways to mitigate potential attacks against the various systems involved in the pipeline to avoid
attackers gaining access to deploy targets, which could be catastrophic in the case of a breach of the production
environment.
Here we define our threat model to explicitly cover what attacks are taken into consideration in the design, as well as
what attacks are not considered. The goal of the threat model is to be realistic about the threats that are
addressable with the tools available. By explicitly focusing attention on more likely and realistic threats, we can
avoid overengineering and compromising the usability of the solution against threats that are unlikely to exist (e.g.,
a five-person startup with 100 end users is unlikely to be the subject of a targeted attack by a government agency).
In this design, the following threat assumptions are made:
Attackers can originate from both external and internal sources (in relation to the organization).
External attacks are limited to those that can get full access to a CI environment, but not the underlying source
code. Note that any CI/CD solution can likely be compromised if an attacker has access to your source code.
Internal attackers are limited to those with restricted access to the environments. This means that the threat model
does not consider attackers with enough privileges to already have access to the deploy target accounts (e.g., an
internal ops admin with full access to the prod environment). However, an internal attacker with permissions in the
dev environment trying to elevate their access to the prod environment is considered.
Similarly, internal attackers are limited to those with restricted access in the CI environment and git repository. A
threat where the internal attackers can bypass admin approval in a CI pipeline or can force push deployment branches
is not considered.
Internal attackers can have access to the CI environment and the underlying code of the infrastructure (e.g., the git
repository).
Given the threat assumptions, the following mitigations are baked into the design:
Minimal access to target environments: Attackers that gain access to the underlying AWS secrets used by the CI
environments will at most have the ability to run deployments against a predefined set of code. This means that
external attackers who do not have access to the source code will at most be able to: (a) deploy code that has already
been deployed before, (b) see the plan of the infrastructure between two points of time. They will not be able to
write arbitrary infrastructure code to read DB secrets, for example. It is important to note that the IAM
policies are set up such that the IAM user for CI only has access to trigger predefined events. They do not have
access to invoke the ECS task arbitrarily, as that could allow arbitrary deployments through modification of the
command property (e.g., using command to echo attacker-controlled infrastructure code and then run terraform).
Note that there is still risk of rolling back the existing infrastructure by attempting to deploy a previous
version. See below for potential ways to mitigate this type of attack.
Similarly, this alone does not mitigate threats from internal attackers who have access to the source code, as a
potential attacker with access to the source code can write arbitrary code to destroy or lookup arbitrary
infrastructure in the target environment. See below for potential ways to mitigate this type of attack.
Minimal options for deployment: The Lambda function exposes a minimal interface for triggering deployments.
Attackers will only be able to trigger a deployment against a known repo and known git refs (branches, tags, etc). To
further limit the scope, the lambda function can be restricted to only allow references to repositories that match a
predefined regular expression. This prevents attackers from creating an open source repo with malicious code that they
subsequently deploy by pointing the deploy runner to it.
Restricted Refs for apply: Since many CI systems depend on the pipeline being managed as code in the same
repository, internal attackers can easily circumvent approval flows by modifying the CI configuration on a test
branch. This means that potential attackers can run an apply to destroy the environment or open backdoors by running
infrastructure code from test branches without having the code approved. To mitigate this, the Lambda function allows
specifying a list of git refs (branches, tags, etc) as the allowed sources of apply and apply-all. If you limit the
source of apply to only protected branches (see below), attackers cannot run apply on code that has not been
reviewed.
CI server does not need access to the source code: Since the deployments are being done remotely in an ECS task,
the actual CI server does not need to clone the underlying repository to deploy the infrastructure. This means that
you can design your CI pipeline to only have access to the webhook events and possibly the change list of files (to
know which module to deploy), but not the source code itself. This can further decrease the effect of a potential
breach of the CI server, as the attacker will not have the ability to read or modify the infrastructure code to use
the pipeline to their advantage.
These mitigations alone will not prevent all attacks defined in the threat model. For example, an internal attacker with
access to the source code can still do damage to the target environments by merging in code that removes all the
infrastructure resources, thereby destroying all infrastructure when the apply command is run. Or, they could expose
secrets by writing infrastructure code that will leak the secrets in the logs via a local-exec provisioner. Note
that any CI/CD solution can likely be compromised if an attacker has full access to your source code.
For these types of threats, your best bet is to implement various policies and controls on the source control repository
and build configurations:
Only deploy from protected branches: In most git hosting platforms, there is a concept of protected branches (see
GitHub docs for example).
Protected branches allow you to implement policies for controlling what code can be merged in. For most platforms, you
can protect a branch such that: (a) it can never be force pushed, (b) it can never be merged to or committed to from
the CLI, (c) merges require status checks to pass, (d) merges require approval from N reviewers. By only building CI
pipelines from protected branches, you can add checks and balances to ensure review of potentially harmful
infrastructure actions.
Require approval in CI build steps: If protected branches are not an option, you can implement an approval workflow
in the CI server, so that attackers need enough privileges on the CI server to approve builds before they can
actually modify infrastructure. This mitigates attacks where the attacker has enough access to the CI server to
trigger arbitrary builds manually (e.g., re-running a previous job that deploys an older version in order to roll
back the infrastructure), but not enough access to approve the job. Note that this will not mitigate
potential threats from internal attackers who have enough permissions to approve builds.
Avoid logging secrets: Our threat model assumes that attackers can get access to the CI servers, which means they
will have access to the deployment logs. This will include detailed outputs from a terraform plan or apply. While
it is impossible to prevent terraform from leaking secrets into the state, it is possible to keep terraform from
logging sensitive information. Make use of PGP encryption functions or encrypted environment variables / config files
(in the case of service deployments) to ensure sensitive data does not show up in the plan output. Additionally, tag
sensitive outputs with the sensitive keyword so that terraform will mask the outputs.
Consider a forking based workflow for pull requests: For greater control, you can consider implementing a forking
based workflow. In this model, you only allow your trusted admins to have access to the main infrastructure repo, but
anyone on the team can read and fork the code. When non-admins want to implement changes, instead of branching from
the infra repo they will fork the repo, implement changes on their fork, and then open a PR from the fork. The
advantage of this approach is that many CI platforms do not automatically run builds from a fork for security reasons.
Instead, admins manually trigger a build by pushing the forked branch to an internal branch. While this is an
inconvenience to devs as you won't automatically see the plan, it prevents unwanted access to secrets by modifying
the CI pipeline to log internal environment variables or show infrastructure secrets using external data sources.
Operations
Which launch type should I use?
The ECS deploy runner supports both Fargate and EC2 launch types. When running in Fargate mode, each ECS task is spun up
on demand for each invocation. This means that you will only pay for the container runtime for the duration of the task.
Additionally, concurrency of the jobs is only limited by the maximum number of Fargate tasks AWS allows you to run at a
given point in time (default is 100). This means that you don't need to worry about scaling your capacity on demand,
allowing you to minimize your costs. This works best when you have the need to run many deployments in parallel across
multiple containers, or if you have a sparse work schedule where your builds run for a limited time each day.
The EC2 launch type will deploy a cluster of EC2 instances to run the tasks on. This launch type reserves VMs to host
the tasks which cuts down the container image download time and VM boot up time of the ECS task. However, the start up
time is traded off with the cost of keeping the resources up longer than the task run times, as well as the inability to
scale up and down on demand. This works best when you have short deployment times where the start up time of Fargate
containers is relatively expensive.
The following is a table summarizing the differences:
| Feature              | Fargate | EC2                                            |
|----------------------|---------|------------------------------------------------|
| Pay only for runtime | ✅      | ❌                                             |
| Serverless           | ✅      | ❌                                             |
| Autoscaling          | ✅      | ⚠️ (requires optimization for each environment) |
| Cached images        | ❌      | ✅                                             |
| Time to boot         | Minutes | 10s of seconds                                 |
What container is used for the deploy task?
Any container specified in container_images can be used for the deploy task. You can also specify multiple containers
for a single ECS Deploy Runner stack. This is useful when using specialized third party containers for deployment tasks
that are not directly supported by the Gruntwork deploy runner container (e.g.,
kaniko for building Docker images).
For convenience, we provide Dockerfiles (defined in the subfolder docker) to build containers that have a
set of tools that are most commonly used in infrastructure projects that depend on Gruntwork modules. There are two
Dockerfiles in the folder:
Note that you will only be allowed to invoke the scripts in the trigger directory (/opt/ecs-deploy-runner/scripts) if
you use the standard configuration (see What configuration is recommended for
container_images? for more details).
If your infrastructure code requires additional tools, you can customize the runtime environment by building a new
container and providing the image reference to this module using the container_images input variable.
To build the docker container, follow these steps:
Set the GITHUB_OAUTH_TOKEN environment variable to a personal access token for a read-only machine user with access to Gruntwork.
Change working directory to the docker folder of this module (modules/ecs-deploy-runner/docker from the root of
the repo).
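Put together, the build might look like the following sketch. The image tag and the build-arg wiring are our assumptions for illustration; check the Dockerfile in the docker folder for the exact ARG it expects:

```shell
# Token for a read-only machine user with access to the Gruntwork repos.
export GITHUB_OAUTH_TOKEN="(read-only machine user token)"

# From the root of the repo, change into the docker folder of this module.
cd modules/ecs-deploy-runner/docker

# Build the deploy-runner image, passing the token through as a build arg
# (the tag and build-arg name here are illustrative assumptions).
docker build \
  --build-arg GITHUB_OAUTH_TOKEN="$GITHUB_OAUTH_TOKEN" \
  -t ecs-deploy-runner:latest \
  .
```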
The ECS Deploy Runner uses ECS Fargate to run the infrastructure code. However, ECS
Fargate does not support bind mounting the docker sock to use Docker in Docker for building images. As such, it is
currently not possible to build docker images directly in ECS Fargate. Instead, we use an indirect method with a tool
called kaniko. Kaniko is a binary that was originally built for
building docker images in Kubernetes, but it supports any platform where docker in docker is not supported.
We need a specialized kaniko container for the ECS deploy runner that is set up to push the built docker images to ECR.
In addition to the kaniko command, our version contains:
A configuration file to set up the Amazon ECR Credential
Helper so that kaniko can authenticate to AWS for pushing
images to ECR.
A trigger command to wrap the kaniko command to simplify the args for AWS based CI/CD use cases.
An entrypoint script that is compatible with the ecs-deploy-runner for enforcing security restrictions around what
commands can be invoked in the container.
What configuration is recommended for container_images?
The ECS Deploy Runner stack supports a wide range of configuration options for each container to maximize the security
benefits of the stack. For example, we provide configuration options for controlling which options and arguments to
allow for each script in a container. This flexibility allows the stack to adapt to almost all CI/CD use cases, but at
the expense of requiring time and effort to figure out the best options to minimize the security risk of the stack.
For convenience, we provide container configurations that are distilled to a set of user friendly options (e.g.,
infrastructure_live_repositories as opposed to hardcoded_options) that you can use to configure a canonical ECS
Deploy Runner stack that can be used with most infrastructure and application CI/CD workflows. You can use the
ecs-deploy-runner-standard-configuration module for this purpose.
The standard configuration will set up:
A docker-image-builder ECS task using the kaniko container with recommended script configurations for restricting
what repos can be used to build containers.
An ami-builder ECS task using the deploy-runner container that is restricted to only running
build-packer-artifact. The task has recommended script configurations for restricting what repos can be used to
build AMIs.
A terraform-planner ECS task using the deploy-runner container that has recommended script configurations to
restrict the container to only allow running plan actions with the infrastructure-deploy-script.
A terraform-applier ECS task using the deploy-runner container that has recommended script configurations to
restrict the container to only allow running apply actions with the infrastructure-deploy-script. Additionally,
this container can be used to run terraform-update-variable if variables need to be updated for a deployment.
Secrets Manager entries that are passed into the containers as environment variables.
How do I use the ECS Deploy Runner with a private VCS system such as GitHub Enterprise?
If you try using the ECS Deploy Runner docker container with a private VCS system such as GitHub Enterprise, you might
get an error message indicating that the SSH host was not verified. This is expected because we enable SSH host
verification when accessing Git repos via SSH in the container. This means that the host keys must be validated
beforehand at container creation time.
This is done by copying a precompiled list of host keys for each of the major VCS systems into the
docker/known_hosts file. Each entry was added using the ssh-keyscan CLI utility that comes
with openssh. To add the host key for your private VCS server, run the following command to add it to the
known_hosts file:
# Run at root of repo
ssh-keyscan -t rsa DOMAIN_OF_VCS_SERVER >> ./modules/ecs-deploy-runner/docker/known_hosts
What scripts can be invoked as part of the pipeline?
The pipeline assumes every docker container is equipped with the deploy-runner entrypoint command (see the entrypoint
directory for the source code). This is a small Go binary that enforces the configured trigger directory
of the Docker container by making sure that the requested script actually resides in the trigger directory.
This enforcement ensures that the ECS tasks with powerful IAM permissions can only be used for running specific,
pre-defined scripts.
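As a rough illustration of the check (our sketch in shell, not the actual Go source; the trigger directory path and the function name are ours), the enforcement boils down to resolving the requested script name inside the trigger directory and rejecting anything that resolves elsewhere:

```shell
#!/usr/bin/env bash
# Sketch of the trigger-directory check (our illustration, not the real entrypoint).
# TRIGGER_DIR defaults to the path used by the standard configuration.
TRIGGER_DIR="${TRIGGER_DIR:-/opt/ecs-deploy-runner/scripts}"

is_allowed_script() {
  local requested="$1"
  # Drop any path components the caller supplied, then resolve symlinks so a
  # symlink planted in the trigger directory cannot escape it.
  local candidate
  candidate="$(realpath -m "$TRIGGER_DIR/$(basename "$requested")")" || return 1
  # The resolved script must still live directly inside the trigger directory.
  [ "$(dirname "$candidate")" = "$(realpath -m "$TRIGGER_DIR")" ] && [ -f "$candidate" ]
}
```

An entrypoint built around such a check would then exec the resolved script, passing through the remaining args unchanged.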
This entrypoint should be configured on the Docker container in the Dockerfile using the ENTRYPOINT
directive so that the ECS task automatically passes
through the command args without the option to override it.
You can install the entrypoint command and configure the trigger directory using the
gruntwork-installer. Note that the install script assumes you
have a working go compiler in the PATH. See the Dockerfile for the
deploy-runner and the kaniko containers for an example
of how to do this in your custom Dockerfile.
How do I restrict what args can be passed into the scripts?
This module exposes a detailed configuration object for each container passed into container_images that can be used
to configure restrictions on the args that can be passed to the script. This is done through the script_config
attribute in each entry of the container_images map. Refer to the variables.tf documentation for the
script_config map to see the type signature and what attributes you can set on the configuration.
Each entry in the script_config map corresponds to a script in the trigger directory, with the key referencing the
script name. These options can be used to implement complex restrictions for each script to avoid allowing a user to
invoke arbitrary code with the assigned IAM credentials of the container. Note that, by default, if a script is not
included in the configuration map, no args are allowed to be passed to it.
For example, the following is a simplified version of the script configuration setup for the
infrastructure-deploy-script in the terraform-applier task:
The configuration hardcodes the repo arg and disallows the user from setting that value. This ensures that a user
cannot change the source of the code by passing in an arbitrary repository with --repo.
The configuration also hardcodes the allowed-apply-refs-json arg to ensure that the user cannot run apply from
any git ref that isn't approved. This ensures that the user can't modify the CI script in the infrastructure repo to
trigger an apply on unreviewed code.
The configuration also disables positional args. This isn't strictly necessary as the infrastructure-deploy-script
does not support positional args, but is good practice to avoid potential vulnerabilities.
The configuration allows setting the deploy-path, binary, command, and command-args options, which allow for
flexibility in the workflow (e.g., running terragrunt plan on a specific path with the -no-color option).
The configuration restricts the command option to only allow apply. This ensures that the user can't use this
container for the plan action, which can run on any branch and thus allows arbitrary code execution with powerful
IAM credentials intended for deploying infrastructure.
Here is another example from the standard configuration (build-packer-artifact):
Allows build-name, var, and packer-template-path to be set by the user.
packer-template-path is restricted to only build from a git repo, and only those repos that were passed in. However,
any subpath and ref in those repos are allowed.
What are the IAM permissions necessary to trigger a deployment?
You can use the ecs-deploy-runner-invoke-iam-policy module to create
an IAM policy that grants the minimal permissions necessary to trigger a deployment, check the status of the deployment,
and stream the logs from that deployment.
How do I stream logs from the deployment task?
The ECS task is configured to stream the stdout and stderr logs from the underlying container running the deploy
script to CloudWatch Logs under a deterministic name. You can use the predetermined name to find and stream the log
outputs from the CloudWatch Log Group and Stream.
Note that this will be done automatically for you when you invoke a deployment using the infrastructure-deployer
CLI.
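If you want to follow the output manually as well, the AWS CLI v2 can tail the log group directly. The log group name and region below are placeholders; substitute the ones configured for your stack:

```shell
# Follow the deploy task's output manually; replace the log group name and
# region with the ones configured for your ECS deploy runner stack.
aws logs tail "/ecs/ecs-deploy-runner" --follow --region us-east-1
```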
How do I trigger a deployment?
This module configures an ECS task definition to run infrastructure deployments using the deploy script provided in the
infrastructure-deploy-script module. Additionally, this module will configure
an AWS Lambda function to be able to trigger the ECS task. You can read more about the architecture in the
Overview and Threat Model sections of this doc, including the
reasoning behind introducing Lambda instead of directly invoking the ECS task.
Given that, to trigger a deployment, you need to invoke the deployment Lambda function. This can be done by using the
deployment CLI in the infrastructure-deployer module. For example, to invoke a plan
action for the module dev/us-east-1/services/my-service with version v0.0.1 of the code using the standard
configuration:
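The invocation might look like the following sketch. The container name, script name, and flags shown are assumptions based on the standard configuration and the script options described elsewhere in this doc; check the infrastructure-deployer CLI help for the exact interface of your version:

```shell
# Illustrative invocation (flag names are assumptions; verify against the
# infrastructure-deployer CLI help before use).
infrastructure-deployer --aws-region us-east-1 -- \
  terraform-planner infrastructure-deploy-script \
  --ref "v0.0.1" \
  --binary "terraform" \
  --command "plan" \
  --deploy-path "dev/us-east-1/services/my-service"
```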
AWS Lambda currently does not have direct integrations with version control tools. Therefore, there is no easy way
to configure automated git flows to directly invoke the Lambda function. Instead, you should configure a CI build system
(e.g., Jenkins, CircleCI, GitLab) to invoke the deployment task using the infrastructure-deployer
CLI to perform the deployment actions. Refer to the How do I trigger a
deployment? section of the docs for more information.
To summarize:
Use existing CI servers (e.g., Jenkins, CircleCI, GitLab) to integrate your workflow with version control
Use this module to set up an ECS task to run your deployments via a trigger Lambda function
Use the infrastructure-deployer CLI in your CI builds to invoke the ECS task (via Lambda) and stream the logs
When you integrate all the components together, users can trigger deployments when they merge infrastructure code.
For example, here is a workflow that can be configured (where USER denotes user actions, BUILD denotes
CI build server actions, and ECS denotes actions by the ECS task):
USER: writes some Terraform code and commits it to a git branch
BUILD: git commit triggers a build job in CI
USER: logs into the CI server to see the build job
USER: clicks the build job to see the build output
BUILD: calls out to the infrastructure-deployer to trigger a deployment task
BUILD: the infrastructure-deployer invokes the trigger Lambda function, which in turn creates the ECS task
ECS: runs the desired action (terraform/terragrunt plan/apply), streaming output to CloudWatch Logs
BUILD: the infrastructure-deployer finds the CloudWatch Logs and streams the logs to stdout of the build server
USER: sees the logs streamed from the infrastructure-deployer in the CI server UI
ECS: the task exits
BUILD: the infrastructure-deployer detects the task has finished and exits as well, exiting with the same exit code
as the task
USER: sees if deployment succeeded or failed
How do I provide access to private git repositories?
Since we are not running the deployment from the CI server directly, you can't use the SSH key management mechanisms
provided by each CI server. Instead, you must store the private SSH key in AWS Secrets Manager so that it can be shared
with the ECS task at runtime. This secret is automatically injected by the ECS container agent as an environment
variable when the task is first started.
In the standard configuration, we will set up the expected environment variables for each container based on the entries
provided to the secrets_manager_env_vars input variable of the corresponding task configuration. We recommend the
following settings for each container:
docker_image_builder = {
secrets_manager_env_vars = {
GIT_USERNAME = "ARN of secrets manager entry containing github personal access token for private repos containing Dockerfiles."
GITHUB_OAUTH_TOKEN = "ARN of secrets manager entry containing github personal access token for use with gruntwork-install during docker build."
}
}
ami_builder = {
secrets_manager_env_vars = {
GITHUB_OAUTH_TOKEN = "ARN of secrets manager entry containing github personal access token for use with gruntwork-install during docker build."
}
}
terraform_planner = {
secrets_manager_env_vars = {
DEPLOY_SCRIPT_SSH_PRIVATE_KEY = "ARN of secrets manager entry containing raw contents of a SSH private key for accessing private repos containing infrastructure live configuration."
}
}
terraform_applier = {
secrets_manager_env_vars = {
DEPLOY_SCRIPT_SSH_PRIVATE_KEY = "ARN of secrets manager entry containing raw contents of a SSH private key for accessing private repos containing infrastructure live configuration. This is also used when updating the config files with terraform-update-variable."
}
}
For entries corresponding to SSH keys, you will need to make sure to store the contents of the SSH private key in AWS
Secrets Manager in order for the ECS task to properly read and use the key. Note that currently the ECS deploy runner
does not support PEM keys that require a password.
You will also want to make sure to use a dedicated machine user with read only privileges for accessing the source code.
As mentioned in the threat model, write access to the source code will defeat
almost any security measures employed for CI/CD of infrastructure code, so you will want to make sure that damage can be
limited even if this secret were to leak. The exception is if you are implementing automated deployment workflows, in
which case you will want to configure argument boundaries
to ensure that you can't modify the input variables of arbitrary infrastructure configurations using
terraform-update-variable.
To create a machine user and associate its SSH key:
Create the machine user on your version control platform.
Create a new SSH key pair on the command line using ssh-keygen:
ssh-keygen -t rsa -b 4096 -C "MACHINE_USER_EMAIL"
Make sure to set a different path to store the key (to avoid overwriting any existing key). Also avoid setting a
passphrase on the key.
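For example, the following generates the pair non-interactively at a dedicated path (the path and email are placeholders):

```shell
# -f stores the key at a dedicated path so no existing key is overwritten;
# -N "" sets an empty passphrase, since the ECS deploy runner does not
# support passphrase-protected keys.
ssh-keygen -t rsa -b 4096 -C "MACHINE_USER_EMAIL" -f ~/.ssh/machine_user -N ""
```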
Upload the SSH key pair to the machine user. See the following docs for the major VCS platforms:
BitBucket:
(Note: you will need to expand one of the instructions to see the full instructions for adding an SSH key to the
machine user account)
Create an AWS Secrets Manager entry with the contents of the private key. In the following example, we use the aws
CLI to create the entry in us-west-2, sourcing the contents from the SSH private key file ~/.ssh/machine_user
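A minimal sketch of that command (the secret name and description are placeholders of ours; choose your own):

```shell
# Create the Secrets Manager entry in us-west-2, sourcing the secret value
# from the SSH private key file via the AWS CLI's file:// parameter loading.
aws secretsmanager create-secret \
  --region us-west-2 \
  --name "MachineUserSSHPrivateKey" \
  --description "SSH private key for the infrastructure machine user" \
  --secret-string "file://$HOME/.ssh/machine_user"
```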
Record the ARN from the output and set the relevant secrets_manager_env_vars or
repo_access_ssh_key_secrets_manager_arn input variables in the standard configuration.
Contributing
Developing the Invoker Lambda function
The source code for the invoker lambda function exists in the invoker-lambda folder. In the folder,
you will find the following folder structure:
invoker: A python package containing the lambda function handler.
dev_requirements.txt: Additional requirements for an enhanced developer experience, e.g., mypy and type stubs for
static analysis.
Note that the invoker code requires Python 3.8 to run. This is primarily to take advantage of the enhanced static types
that were added in Python 3.8. Since we can target a known environment (AWS Lambda), we trade off portability of the
scripts for a better developer experience.
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"f074545a5bdfd157e96c7074ed9ea3bc91c0396c"}]},{"name":".gitignore","path":".gitignore","sha":"1cd2e7ca72e5102b6b85a0c1de82d28c99d4287e"},{"name":".pre-commit-config.yaml","path":".pre-commit-config.yaml","sha":"47826a6fca6c68c6e0f0e9ac88988c2830ed4ef1"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"a834dbe979d6fb551a71156abfb7e22771115e48"},{"name":"LICENSE.txt","path":"LICENSE.txt","sha":"f4e3d9bd4717a044ed31ad847a300eee74371a78"},{"name":"README-CircleCI.adoc","path":"README-CircleCI.adoc","sha":"046b030ed6e15023530d4a29a3dcf60ac7982d27"},{"name":"README-Jenkins.adoc","path":"README-Jenkins.adoc","sha":"2587e3b59001ed39eaac88468202afdcbc2332af"},{"name":"README-Terraform-Terragrunt-Pipeline.adoc","path":"README-Terraform-Terragrunt-Pipeline.adoc","sha":"1ad99530c6cf984aa638e7c2fcd0bf4d4987ce12"},{"name":"README-TravisCI.adoc","path":"README-TravisCI.adoc","sha":"45e0d32aae5d971ee7e7670dc0da94d364c14457"},{"name":"README.adoc","path":"README.adoc","sha":"47cb10253af3b2e514678434a61ad681c02a88f2"},{"name":"_ci","children":[{"name":"output-debug-values.sh","path":"_ci/output-debug-values.sh","sha":"fa613638c76c2031b1427c2daa1d66f71fedad68"}]},{"name":"_docs","children":[{"name":"circleci-cicd-architecture.png","path":"_docs/circleci-cicd-architecture.png","sha":"06f8a55b7c123b6e589333a1ff1c3d90c43222d6"},{"name":"circleci-icon.png","path":"_docs/circleci-icon.png","sha":"d4e8df17858e6f230ff9e8d90ea388b3ff340b79"},{"name":"jenkins-architecture.png","path":"_docs/jenkins-architecture.png","sha":"a35a534eb7f13547e232635262d6b1c1506e9230"},{"name":"jenkins-icon.png","path":"_docs/jenkins-icon.png","sha":"cfb474486acb167b655c22a400fe5cc999959164"},{"name":"terraform-icon.png","path":"_docs/terraform-icon.png","sha":"85602f11c76fd989788112ba40c08d979ddb1164"},{"name":"tftg-pipeline-architecture.png","path":"_docs/tftg-pipeline-arc
":"d2cdf692eec0d4344ea052028d9a80155b1f9bcd"},{"name":"kubernetes_circleci_helpers_test.go","path":"test/kubernetes_circleci_helpers_test.go","sha":"76a87d2854c2c7cf0a57c56582796b9cdb533c1b"},{"name":"publish_ami_test.go","path":"test/publish_ami_test.go","sha":"04f7667c39445fe5d6a336641c110e2c6c1744a3"},{"name":"terraform_update_variable_unit_test.go","path":"test/terraform_update_variable_unit_test.go","sha":"5b405d87a457c0983d1bc804d1b78bc86c6e82f3"},{"name":"terragrunt_update_variable_unit_test.go","path":"test/terragrunt_update_variable_unit_test.go","sha":"69aee2044f92db0a42acde9d9f8ee5217a840796"},{"name":"test-git-add-commit-push.sh","path":"test/test-git-add-commit-push.sh","sha":"95fd142ed3d26e85c2873ee55c0e2718f0927ffd"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"d4eb5b752450f221e5d615ffd1e470edbf7adfa0"}]},{"name":"testdep","children":[{"name":"Gopkg.lock","path":"testdep/Gopkg.lock","sha":"f12dfa4652085a0043d69d1b3bff7cc16b64551f"},{"name":"Gopkg.toml","path":"testdep/Gopkg.toml","sha":"092de38583d1bb2aff2b194753b7cc18aecddd87"},{"name":"dep_test.go","path":"testdep/dep_test.go","sha":"b87facc135093c5258a5f2da43e5f9177bc008b7"},{"name":"fixtures","children":[{"name":"hello-world-godep-app","children":[{"name":"Gopkg.lock","path":"testdep/fixtures/hello-world-godep-app/Gopkg.lock","sha":"623c785ee006b1ff3c524d18935ebdeb45395d55"},{"name":"Gopkg.toml","path":"testdep/fixtures/hello-world-godep-app/Gopkg.toml","sha":"26f5a8f783bb942cf6fba93c10b6a09017329526"},{"name":"main.go","path":"testdep/fixtures/hello-world-godep-app/main.go","sha":"0a09ad54ada955edd8e8ae731e0c113cc708766c"}]}]}]}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"core-concepts\">Core Concepts</h1><div class=\"preview__body--border\"></div><h2 class=\"preview__body--subtitle\" id=\"overview\">Overview</h2>\n<p>This module packages various scripts for infrastructure deployments (see <a href=\"#what-container-is-used-for-the-deploy-task\" 
class=\"preview__body--description--blue\">What container is used for the deploy\ntask?</a> for more info) into an ECS task that streams its outputs to\nCloudWatch, with an AWS Lambda function that can invoke that task. CI servers can then be configured to directly invoke\nthe lambda function to trigger the deployment and stream the output from CloudWatch.</p>\n<p>The sequence of events is as follows:</p>\n<p><img src=\"/repos/images/v0.29.4/module-ci/modules/ecs-deploy-runner/_docs/images/sequence-diagram.png\" alt=\"ECS Deploy Task sequence diagram\" class=\"preview__body--diagram\"></p>\n<p>By insulating the deploy script from the CI server, we are able to avoid granting IAM permissions to the CI servers that\nare required for deploying against the target accounts. Instead, the CI servers only need enough permissions to trigger\nthe deployment. Refer to the <a href=\"#threat-model-of-the-deploy-runner\" class=\"preview__body--description--blue\">Threat Model</a> for more information.</p>\n<h2 class=\"preview__body--subtitle\" id=\"threat-model-of-the-deploy-runner\">Threat model of the deploy runner</h2>\n<p>To implement a CI/CD pipeline for infrastructure code, it is required that the ultimate entity or system running the\ninfrastructure code has the permissions to deploy the infrastructure defined by code. Unfortunately, to support\narbitrary CI/CD workflows, it is necessary to grant wide ranging permissions to the target environment. As such, it is\nimportant to consider ways to mitigate potential attacks against the various systems involved in the pipeline to avoid\nattackers gaining access to deploy targets, which could be catastrophic in the case of a breach of the production\nenvironment.</p>\n<p>Here we define our threat model to explicitly cover what attacks are taken into consideration in the design, as well as\nwhat attacks are <strong>not</strong> considered. 
The goal of the threat model is to be realistic about the threats that are addressable with the tools available. By explicitly focusing attention on more likely and realistic threats, we can avoid overengineering and compromising the usability of the solution to defend against threats that are unlikely to exist (e.g., a 5-person startup with 100 end users is unlikely to be the subject of a targeted attack by a government agency).

In this design, the following threat assumptions are made:

- Attackers can originate from both external and internal sources (in relation to the organization).
- External attacks are limited to those that can get full access to a CI environment, but not the underlying source code. Note that **any** CI/CD solution can likely be compromised if an attacker has access to your source code.
- Internal attackers are limited to those with restricted access to the environments. This means that the threat model does not consider attackers with enough privileges to already have access to the deploy target accounts (e.g., an internal ops admin with full access to the prod environment). However, an internal attacker with permissions in the dev environment trying to elevate their access to the prod environment is considered.
- Similarly, internal attackers are limited to those with restricted access in the CI environment and git repository. A threat where the internal attackers can bypass admin approval in a CI pipeline or can force push deployment branches is not considered.
- Internal attackers can have access to the CI environment and the underlying code of the infrastructure (e.g., the git repository).

Given the threat assumptions, the following mitigations are baked into the design:

- **Minimal access to target environments**: Attackers who gain access to the underlying AWS secrets used by the CI environments will at most have the ability to run deployments against a predefined set of code. This means that external attackers who do not have access to the source code will at most be able to: (a) deploy code that has already been deployed before, (b) see the plan of the infrastructure between two points in time. They will not be able to write arbitrary infrastructure code to read DB secrets, for example. It is important to note that the IAM policies are set up such that the IAM user for CI only has access to trigger predefined events. They do not have access to arbitrarily invoke the ECS task, as that could potentially expose arbitrary deployments by modifying the command property (e.g., using the command to `echo` some infrastructure code and run `terraform`).
  - Note that there is still a risk of rolling back the existing infrastructure by attempting to deploy a previous version. See below for potential ways to mitigate this type of attack.
  - Similarly, this alone does not mitigate threats from internal attackers who have access to the source code, as a potential attacker with access to the source code can write arbitrary code to destroy or look up arbitrary infrastructure in the target environment. See below for potential ways to mitigate this type of attack.
- **Minimal options for deployment**: The Lambda function exposes a minimal interface for triggering deployments. Attackers will only be able to trigger a deployment against a known repo and known git refs (branches, tags, etc.). To further limit the scope, the Lambda function can be restricted to only allow references to repositories that match a predefined regular expression. This prevents attackers from creating an open source repo with malicious code that they subsequently deploy by pointing the deploy runner to it.
- **Restricted refs for `apply`**: Since many CI systems depend on the pipeline being managed as code in the same repository, internal attackers can easily circumvent approval flows by modifying the CI configuration on a test branch. This means that potential attackers can run an `apply` to destroy the environment or open backdoors by running infrastructure code from test branches without having the code approved. To mitigate this, the Lambda function allows specifying a list of git refs (branches, tags, etc.) as the allowed sources of `apply` and `apply-all`. If you limit the source of `apply` to only protected branches (see below), attackers cannot run `apply` on code unless it has been reviewed.
- **CI server does not need access to the source code**: Since the deployments are done remotely in an ECS task, the actual CI server does not need to clone the underlying repository to deploy the infrastructure. This means that you can design your CI pipeline to only have access to the webhook events and possibly the change list of files (to know which module to deploy), but not the source code itself. This can further decrease the impact of a potential breach of the CI server, as the attacker will not have the ability to read or modify the infrastructure code to use the pipeline to their advantage.
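The repository restriction described above is, at its core, an allowlist check against a regular expression. Here is a minimal sketch in Python; the regular expression and repository names are hypothetical examples, not the module's actual configuration:

```python
import re

# Hypothetical allowlist: only repos under the example org may be deployed.
ALLOWED_REPO_REGEX = re.compile(r"^git@github\.com:example-org/.+\.git$")

def is_repo_allowed(repo_url: str) -> bool:
    """Reject any deployment request whose repo does not match the allowlist."""
    return ALLOWED_REPO_REGEX.match(repo_url) is not None

# An in-house repo passes; an attacker-controlled repo is rejected.
print(is_repo_allowed("git@github.com:example-org/infrastructure-live.git"))  # True
print(is_repo_allowed("git@github.com:attacker/malicious-infra.git"))         # False
```

Because the check is anchored to the organization, an attacker cannot point the deploy runner at a repo they control, even if they can invoke the Lambda function.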
These mitigations alone will not prevent all attacks defined in the threat model. For example, an internal attacker with access to the source code can still do damage to the target environments by merging in code that removes all the infrastructure resources, thereby destroying all infrastructure when the `apply` command is run. Or, they could expose secrets by writing infrastructure code that leaks the secrets into the logs via a `local-exec` provisioner. Note that **any** CI/CD solution can likely be compromised if an attacker has full access to your source code.

For these types of threats, your best bet is to implement various policies and controls on the source control repository and build configurations:

- **Only deploy from protected branches**: In most git hosting platforms, there is a concept of protected branches (see the [GitHub docs](https://help.github.com/en/github/administering-a-repository/about-protected-branches) for example). Protected branches allow you to implement policies for controlling what code can be merged in. For most platforms, you can protect a branch such that: (a) it can never be force pushed, (b) it can never be merged to or committed to from the CLI, (c) merges require status checks to pass, (d) merges require approval from N reviewers. By only building CI pipelines from protected branches, you can add checks and balances to ensure review of potentially harmful infrastructure actions.
- **Require approval in CI build steps**: If protected branches are not an option, you can implement an approval workflow in the CI server, so that attackers would need enough privileges on the CI server to approve builds in order to actually modify infrastructure. This can mitigate potential attacks where the attacker has access to the CI server to trigger arbitrary builds manually (e.g., rerunning a previous job that deploys an older version to roll back the infrastructure), but not enough access to approve the job. Note that this will not mitigate potential threats from internal attackers who have enough permissions to approve builds.
- **Avoid logging secrets**: Our threat model assumes that attackers can get access to the CI servers, which means they will have access to the deployment logs. This will include detailed outputs from a `terraform plan` or `apply`. While it is impossible to prevent Terraform from leaking secrets into the state, it is possible to keep Terraform from logging sensitive information. Make use of PGP encryption functions or encrypted environment variables / config files (in the case of service deployments) to ensure sensitive data does not show up in the plan output. Additionally, tag sensitive outputs with the `sensitive` keyword so that Terraform will mask the outputs.
- **Consider a forking based workflow for pull requests**: For greater control, you can consider implementing a forking based workflow. In this model, you only allow your trusted admins to have access to the main infrastructure repo, but anyone on the team can read and fork the code. When non-admins want to implement changes, instead of branching from the infra repo they fork the repo, implement changes on their fork, and then open a PR from the fork. The advantage of this approach is that many CI platforms do not automatically run builds from a fork for security reasons. Instead, admins manually trigger a build by pushing the forked branch to an internal branch. While this is an inconvenience to devs, as you won't automatically see the `plan`, it prevents unwanted access to secrets by modifying the CI pipeline to log internal environment variables or show infrastructure secrets using external data sources.

## Operations

### Which launch type should I use?

The ECS deploy runner supports both Fargate and EC2 launch types. When running in Fargate mode, each ECS task is spun up on demand for each invocation. This means that you only pay for the container runtime for the duration of the task. Additionally, concurrency of the jobs is only limited by the maximum number of Fargate tasks AWS allows you to run at a given point in time (default is 100). This means that you don't need to worry about scaling your capacity on demand, allowing you to minimize your costs. This works best when you need to run many deployments in parallel across multiple containers, or if you have a sparse work schedule where your builds run for a limited time each day.

The EC2 launch type will deploy a cluster of EC2 instances to run the tasks on. This launch type reserves VMs to host the tasks, which cuts down the container image download time and VM boot time of the ECS task. However, the faster start up time is traded off against the cost of keeping the resources up longer than the task run times, as well as the inability to scale up and down on demand. This works best when you have short deployment times where the start up time of Fargate containers is relatively expensive.
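The trade-off above comes down to simple arithmetic: Fargate charges per task-hour, while an always-on EC2 instance charges per wall-clock hour. The sketch below makes that concrete; all prices and task shapes are illustrative assumptions, not actual AWS pricing:

```python
# Rough break-even sketch for Fargate vs. EC2 launch types. The prices below
# are ILLUSTRATIVE ASSUMPTIONS, not actual AWS pricing.
FARGATE_PER_TASK_HOUR = 0.05   # assumed cost of one deploy-runner task-hour
EC2_INSTANCE_PER_HOUR = 0.10   # assumed cost of one always-on instance
HOURS_PER_MONTH = 730

def monthly_cost(task_hours_per_month: float) -> tuple:
    """Return (fargate_cost, ec2_cost) for a given monthly deployment workload."""
    fargate = task_hours_per_month * FARGATE_PER_TASK_HOUR
    ec2 = HOURS_PER_MONTH * EC2_INSTANCE_PER_HOUR  # instance runs 24/7
    return fargate, ec2

# Sparse schedule (~1 task-hour/day): Fargate is far cheaper (~$1.50 vs ~$73).
print(monthly_cost(30))
# Heavy schedule (2000 task-hours/month): the reserved instance wins.
print(monthly_cost(2000))
```

Under these assumed prices, the break-even point is around 1,460 task-hours per month; below it, Fargate's pay-per-run model is cheaper, above it a reserved EC2 cluster is, before accounting for Fargate's slower task start up.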
The following table summarizes the differences:

| Feature | Fargate | EC2 |
|---------|---------|-----|
| Pay only for runtime | ✅ | ❌ |
| Serverless | ✅ | ❌ |
| Autoscaling | ✅ | ⚠️ (requires optimization for each environment) |
| Cached images | ❌ | ✅ |
| Time to boot | Minutes | 10s of seconds |

### What container is used for the deploy task?

Any container specified in `container_images` can be used for the deploy task. You can also specify multiple containers for a single ECS Deploy Runner stack. This is useful when using specialized third party containers for deployment tasks that are not directly supported by the Gruntwork deploy runner container (e.g., [kaniko](https://github.com/GoogleContainerTools/kaniko) for building Docker images).

For convenience, we provide `Dockerfiles` (defined in [the docker subfolder](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/docker)) to build containers with a set of tools that are most commonly used in infrastructure projects that depend on Gruntwork modules. There are two `Dockerfiles` in the folder:

- [deploy-runner](#deploy-runner)
- [kaniko](#kaniko)

#### deploy-runner

This container is an Ubuntu 18.04 image that contains the following trigger scripts:

- `build-packer-artifact` (from the [build-helpers module](/repos/v0.29.4/module-ci/modules/build-helpers))
- `terraform-update-variable` (from the [terraform-helpers module](/repos/v0.29.4/module-ci/modules/terraform-helpers))
- `infrastructure-deploy-script` (from the [infrastructure-deploy-script module](/repos/v0.29.4/module-ci/modules/infrastructure-deploy-script))

and the following tools:

- `git`
- `terraform`
- `terragrunt`
- `kubergrunt`
- `packer`
- `git-add-commit-push` (from the [git-helpers module](/repos/v0.29.4/module-ci/modules/git-helpers))

Note that you will only be allowed to invoke the scripts in the trigger directory (`/opt/ecs-deploy-runner/scripts`) if you use the standard configuration (see [What configuration is recommended for container_images?](#what-configuration-is-recommended-for-container-images) for more details).

If your infrastructure code requires additional tools, you can customize the runtime environment by building a new container and providing the image reference to this module using the `container_images` input variable.
To build the Docker container, follow these steps:

1. Set the `GITHUB_OAUTH_TOKEN` environment variable to a read-only machine user token with access to Gruntwork.
2. Change the working directory to the `docker` folder of this module (`modules/ecs-deploy-runner/docker` from the root of the repo).
3. Run: `docker build --build-arg GITHUB_OAUTH_TOKEN --tag gruntwork/ecs-deploy-runner .`

#### kaniko

The ECS Deploy Runner uses [ECS Fargate](https://aws.amazon.com/fargate/) to run the infrastructure code. However, ECS Fargate does not support bind mounting the Docker socket to use Docker-in-Docker for building images. As such, it is currently not possible to build Docker images directly in ECS Fargate. Instead, we use an indirect method with a tool called [kaniko](https://github.com/GoogleContainerTools/kaniko). Kaniko is a binary that was originally built for building Docker images in Kubernetes, but it supports any platform where Docker-in-Docker is not supported.

We need a specialized `kaniko` container for the ECS deploy runner that is set up to push the built Docker images to ECR. In addition to the `kaniko` command, our version contains:

- A configuration file to set up the [Amazon ECR Credential Helper](https://github.com/awslabs/amazon-ecr-credential-helper) so that `kaniko` can authenticate to AWS for pushing images to ECR.
- A trigger command that wraps the `kaniko` command to simplify the args for AWS based CI/CD use cases.
- An entrypoint script that is compatible with the `ecs-deploy-runner` for enforcing security restrictions around what commands can be invoked in the container.

### What configuration is recommended for container_images?

The ECS Deploy Runner stack supports a wide range of configuration options for each container to maximize the security benefits of the stack. For example, we provide configuration options for controlling which options and arguments to allow for each script in a container. This flexibility allows the stack to adapt to almost all CI/CD use cases, but at the expense of requiring time and effort to figure out the best options to minimize the security risk of the stack.

For convenience, we provide container configurations that are distilled to a set of user friendly options (e.g., `infrastructure_live_repositories` as opposed to `hardcoded_options`) that you can use to configure a canonical ECS Deploy Runner stack that works with most infrastructure and application CI/CD workflows. You can use the [ecs-deploy-runner-standard-configuration module](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner-standard-configuration) for this purpose.
The standard configuration will set up:

- A `docker-image-builder` ECS task using the `kaniko` container, with recommended script configurations for restricting which repos can be used to build containers.
- An `ami-builder` ECS task using the `deploy-runner` container that is restricted to only running `build-packer-artifact`. The task has recommended script configurations for restricting which repos can be used to build AMIs.
- A `terraform-planner` ECS task using the `deploy-runner` container that has recommended script configurations to restrict the container to only allow running `plan` actions with the `infrastructure-deploy-script`.
- A `terraform-applier` ECS task using the `deploy-runner` container that has recommended script configurations to restrict the container to only allow running `apply` actions with the `infrastructure-deploy-script`. Additionally, this container can be used to run `terraform-update-variable` if variables need to be updated for a deployment.
- Secrets Manager entries that are passed into the containers as environment variables.

### How do I use the ECS Deploy Runner with a private VCS system such as GitHub Enterprise?

If you try using the ECS Deploy Runner Docker container with a private VCS system such as GitHub Enterprise, you might get an error message indicating that the SSH host was not verified. This is expected, because we enable SSH host verification when accessing git repos via SSH in the container. This means that the host keys must be validated beforehand, at container creation time.

This is done by copying a precompiled list of host keys for each of the major VCS systems into the [docker/known_hosts](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/docker/known_hosts) file. Each entry was added using the `ssh-keyscan` CLI utility that comes with `openssh`. To add the host key for your private VCS server, run the following command to append it to the `known_hosts` file:

```
# Run at the root of the repo
ssh-keyscan -t rsa DOMAIN_OF_VCS_SERVER >> ./modules/ecs-deploy-runner/docker/known_hosts
```

Then, build the container using the steps outlined in [What container is used for the deploy task?](#what-container-is-used-for-the-deploy-task)

### What scripts can be invoked as part of the pipeline?

The pipeline assumes every Docker container is equipped with the `deploy-runner` entrypoint command (see [the entrypoint directory for the source code](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/entrypoint)). This is a small Go binary that enforces the configured trigger directory of the Docker container by making sure that the script requested to invoke actually resides in the trigger directory. This enforcement ensures that ECS tasks with powerful IAM permissions can only be used for running specific, pre-defined scripts.
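The actual entrypoint is a Go binary, but the essence of the check can be sketched in a few lines of Python. This is a simplified illustration of the idea (resolve the requested name inside the trigger directory and reject anything that escapes it), not the entrypoint's actual implementation:

```python
import os

TRIGGER_DIR = "/opt/ecs-deploy-runner/scripts"  # configured trigger directory

def resolve_script(requested: str) -> str:
    """Allow only scripts that actually live in the trigger directory.

    Resolving the full path and comparing the parent directory rejects
    traversal tricks like '../../bin/bash' that would otherwise escape
    the trigger directory.
    """
    candidate = os.path.realpath(os.path.join(TRIGGER_DIR, requested))
    if os.path.dirname(candidate) != os.path.realpath(TRIGGER_DIR):
        raise PermissionError(f"{requested} is outside the trigger directory")
    return candidate

print(resolve_script("infrastructure-deploy-script"))  # resolves inside the dir
# resolve_script("../../bin/bash")  -> raises PermissionError
```

The real binary is installed as the image's `ENTRYPOINT` (see below), so the container cannot be asked to run anything outside the trigger directory even with arbitrary command args.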
This entrypoint should be configured on the Docker container in the `Dockerfile` using the [ENTRYPOINT directive](https://docs.docker.com/engine/reference/builder/#entrypoint) so that the ECS task automatically passes through the command args without the option to override it.

You can install the entrypoint command and configure the trigger directory using the [gruntwork-installer](/repos/gruntwork-installer). Note that the install script assumes you have a working Go compiler in the `PATH`. See the `Dockerfile` for [the deploy-runner](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/docker/deploy-runner/Dockerfile) and [the kaniko](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/docker/kaniko/Dockerfile) containers for an example of how to do this in your custom `Dockerfile`.

Once deployed, you can use the [infrastructure-deployer CLI](/repos/v0.29.4/module-ci/modules/infrastructure-deployer) to look up the supported scripts in a given container. Refer to [How do I invoke the ECS deploy runner?](/repos/v0.29.4/module-ci/modules/infrastructure-deployer/core-concepts.md#how-do-i-invoke-the-ecs-deploy-runner) for more information.

### How do I restrict what args can be passed into the scripts?

This module exposes a detailed configuration object for each container passed into `container_images` that can be used to configure restrictions on the args that can be passed to each script. This is done through the `script_config` attribute in each entry of the `container_images` map. Refer to the [variables.tf documentation](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/variables.tf) for the `script_config` map to see the type signature and the attributes you can set on the configuration.

Each entry in the `script_config` map corresponds to a script in the trigger directory, with the key referencing the script name. These options can be used to implement complex restrictions for each script, to avoid allowing a user to invoke arbitrary code with the assigned IAM credentials of the container. Note that, by default, if a script is not included in the configuration map, no args may be passed to it.
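To make the semantics of `allowed_options` and `restricted_options_regex` concrete, here is a minimal Python sketch of the checks they imply. The attribute names mirror the `terraform-applier` configuration shown in this section; the validation code itself is illustrative, since the real enforcement is done by the entrypoint binary:

```python
import re

# Illustrative restrictions mirroring the terraform-applier example:
allowed_options = ["--log-level", "--ref", "--deploy-path", "--binary",
                   "--command", "--command-args"]
restricted_options_regex = {"command": r"apply(-all)?"}

def validate(args: dict) -> None:
    """Raise ValueError if any option or option value violates the restrictions."""
    for opt, value in args.items():
        if opt not in allowed_options:
            raise ValueError(f"option {opt} is not allowed")
        pattern = restricted_options_regex.get(opt.lstrip("-"))
        if pattern and not re.fullmatch(pattern, value):
            raise ValueError(f"value {value!r} is not permitted for {opt}")

validate({"--command": "apply", "--ref": "v0.0.1"})   # accepted
# validate({"--command": "plan"})  -> ValueError: 'plan' not permitted
# validate({"--repo": "git@..."})  -> ValueError: --repo is not allowed
```

Note how the two rejected calls correspond to the two attack paths discussed above: running an unrestricted `plan` on an arbitrary branch, and redirecting the deploy to an arbitrary repo.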
For example, the following is a simplified version of the script configuration set up for the `infrastructure-deploy-script` in the `terraform-applier` task:

```hcl
infrastructure-deploy-script = {
  hardcoded_options = {
    repo                    = var.terraform_applier.infrastructure_live_repositories
    allowed-apply-refs-json = [jsonencode(var.terraform_applier.allowed_apply_git_refs)]
  }
  hardcoded_args        = []
  allow_positional_args = false
  allowed_options = [
    "--log-level",
    "--ref",
    "--deploy-path",
    "--binary",
    "--command",
    "--command-args",
  ]
  restricted_options = []
  restricted_options_regex = {
    command = "apply(-all)?"
  }
}
```

Note the following:

- The configuration hardcodes the `repo` arg and disallows the user from setting that value. This ensures that a user cannot change the source of the code by passing in an arbitrary repository with `--repo`.
- The configuration also hardcodes the `allowed-apply-refs-json` arg to ensure that the user cannot run `apply` from any git ref that isn't approved. This ensures that the user can't modify the CI script in the infrastructure repo to trigger an apply on unreviewed code.
- The configuration also disables positional args. This isn't strictly necessary, as the `infrastructure-deploy-script` does not support positional args, but it is good practice to avoid potential vulnerabilities.
- The configuration allows setting the `deploy-path`, `binary`, `command`, and `command-args` options, which allow for flexibility in the workflow (e.g., running `terragrunt plan` on a specific path with the `-no-color` option).
- The configuration restricts the `command` option to only allow `apply`. This ensures that the user can't use this container for the `plan` action, which can run on any branch and would thus allow arbitrary code execution with the powerful IAM credentials intended for deploying infrastructure.

Here is another example from the standard configuration (`build-packer-artifact`):

```hcl
build-packer-artifact = {
  hardcoded_options     = {}
  hardcoded_args        = []
  allow_positional_args = false
  allowed_options = [
    "--packer-template-path",
    "--build-name",
    "--var",
  ]
  restricted_options = []
  restricted_options_regex = {
    packer-template-path = "^git::(${local.ami_repositories_as_regex})//.+"
  }
}
```

This config:

- Allows `build-name`, `var`, and `packer-template-path` to be set by the user.
- Restricts `packer-template-path` so that builds can only come from a git repo, and only from the repos that were passed in. However, any subpath and ref in those repos are allowed.
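To see what that `packer-template-path` regex accepts and rejects, here is a small Python sketch; the repository regex is a hypothetical stand-in for `local.ami_repositories_as_regex`:

```python
import re

# Hypothetical stand-in for local.ami_repositories_as_regex.
ami_repositories_as_regex = r"git@github\.com:example-org/infrastructure-modules\.git"
pattern = re.compile(rf"^git::({ami_repositories_as_regex})//.+")

# A template path inside an allowed repo matches (any subpath and ref)...
ok = "git::git@github.com:example-org/infrastructure-modules.git//amis/app.json?ref=v0.1.0"
# ...but an arbitrary attacker-controlled repo does not.
bad = "git::git@github.com:attacker/evil.git//amis/app.json"
print(bool(pattern.match(ok)), bool(pattern.match(bad)))  # True False
```

The leading `^git::` anchor also rejects bare local paths, so the builder can only ever read templates from the approved git repositories.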
### What are the IAM permissions necessary to trigger a deployment?

You can use the [ecs-deploy-runner-invoke-iam-policy module](/repos/v0.29.4/module-ci/modules/ecs-deploy-runner-invoke-iam-policy) to create an IAM policy that grants the minimal permissions necessary to trigger a deployment, check the status of the deployment, and stream the logs from that deployment.

### How do I stream logs from the deployment task?

The ECS task is configured to stream the `stdout` and `stderr` logs from the underlying container running the deploy script to CloudWatch Logs under a deterministic name. You can use the predetermined name to find and stream the log outputs from the CloudWatch Log Group and Stream.

Note that this is done automatically for you when you invoke a deployment using the [infrastructure-deployer CLI](/repos/v0.29.4/module-ci/modules/infrastructure-deployer).

### How do I trigger a deployment?

This module configures an ECS task definition to run infrastructure deployments using the deploy script provided in the [infrastructure-deploy-script](/repos/v0.29.4/module-ci/modules/infrastructure-deploy-script) module.
Additionally, this module will configure\nan AWS Lambda function to be able to trigger the ECS task. You can read more about the architecture in the\n<a href=\"#overview\" class=\"preview__body--description--blue\">Overview</a> and <a href=\"#threat-model-of-the-deploy-runner\" class=\"preview__body--description--blue\">Threat Model</a> sections of this doc, including the\nreasoning behind introducing Lambda instead of directly invoking the ECS task.</p>\n<p>Given that, to trigger a deployment, you need to invoke the deployment Lambda function. This can be done by using the\ndeployment CLI in the <a href=\"/repos/v0.29.4/module-ci/modules/infrastructure-deployer\" class=\"preview__body--description--blue\">infrastructure-deployer module</a>. For example, to invoke a plan\naction for the module <code>dev/us-east-1/services/my-service</code> with version <code>v0.0.1</code> of the code using the standard\nconfiguration:</p>\n<pre>infrastructure-deployer --aws-region us-east-<span class=\"hljs-number\">2</span> -- \\\n <span class=\"hljs-keyword\">terraform</span>-planner \\\n infrastructure-deploy-script \\\n --ref v0.<span class=\"hljs-number\">0.1</span> \\\n --deploy-path dev/us-east-<span class=\"hljs-number\">1</span>/services/my-service \\\n --command plan \\\n --binary <span class=\"hljs-keyword\">terraform</span>\n</pre>\n<p>This will:</p>\n<ul>\n<li>Invoke the deployment lambda function</li>\n<li>Wait for the ECS task to start</li>\n<li>Stream the logs from the ECS task to <code>stdout</code> and <code>stderr</code></li>\n<li>Wait until the task finishes</li>\n<li>Exit with the exit code provided by the task</li>\n</ul>\n<p>Refer to the <a href=\"/repos/v0.29.4/module-ci/modules/infrastructure-deployer\" class=\"preview__body--description--blue\">infrastructure-deployer module doc</a> for more information.</p>\n<h3 class=\"preview__body--subtitle\" id=\"how-do-i-trigger-a-deployment-from-ci\">How do I trigger a deployment from CI?</h3>\n<p>AWS Lambda 
currently does not have direct integrations with version control tools. Therefore, there is no easy way\nto configure automated git flows to directly invoke the Lambda function. Instead, you should configure a CI build system\n(e.g., Jenkins, CircleCI, GitLab) to invoke the deployment task using the <a href=\"/repos/v0.29.4/module-ci/modules/infrastructure-deployer\" class=\"preview__body--description--blue\">infrastructure-deployer\nCLI</a> to perform the deployment actions. Refer to the <a href=\"#how-do-i-trigger-a-deployment\" class=\"preview__body--description--blue\">How do I trigger a\ndeployment?</a> section of the docs for more information.</p>\n<p>You can read more about the architecture in the <a href=\"#overview\" class=\"preview__body--description--blue\">Overview</a> and <a href=\"#threat-model-of-the-deploy-runner\" class=\"preview__body--description--blue\">Threat\nModel</a> sections of this doc, including the reasoning behind introducing AWS\nLambda instead of directly invoking the ECS task.</p>\n<p>To summarize:</p>\n<ul>\n<li>Use existing CI servers (e.g., Jenkins, CircleCI, GitLab) to integrate your workflow with version control</li>\n<li>Use this module to set up an ECS task to run your deployments via a trigger Lambda function</li>\n<li>Use the <code>infrastructure-deployer</code> CLI in your CI builds to invoke the ECS task (via Lambda) and stream the logs</li>\n</ul>\n<p>When you integrate all the components together, users can trigger deployments when they merge infrastructure code.\nFor example, here is a workflow that can be configured (where <code>USER</code> denotes user actions, <code>BUILD</code> denotes\nCI build server actions, and <code>ECS</code> denotes actions by the ECS task):</p>\n<ul>\n<li>USER: writes some Terraform code and commits it to a git branch</li>\n<li>BUILD: the git commit triggers a build job in CI</li>\n<li>USER: logs into the CI server to see the build job</li>\n<li>USER: clicks the build job to see the 
build output</li>\n<li>BUILD: calls out to the <code>infrastructure-deployer</code> to trigger a deployment task</li>\n<li>BUILD: the <code>infrastructure-deployer</code> invokes the trigger Lambda function, which in turn creates the ECS task</li>\n<li>ECS: runs the desired action (<code>terraform</code>/<code>terragrunt</code> <code>plan</code>/<code>apply</code>), streaming output to CloudWatch Logs</li>\n<li>BUILD: the <code>infrastructure-deployer</code> finds the CloudWatch Logs and streams the logs to <code>stdout</code> of the build server</li>\n<li>USER: sees the logs streamed from the <code>infrastructure-deployer</code> in the CI server UI</li>\n<li>ECS: the task exits</li>\n<li>BUILD: the <code>infrastructure-deployer</code> detects that the task has finished and exits as well, with the same exit code\nas the task</li>\n<li>USER: sees whether the deployment succeeded or failed</li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"how-do-i-provide-access-to-private-git-repositories\">How do I provide access to private git repositories?</h3>\n<p>Since we are not running the deployment from the CI server directly, you can't use the SSH key management mechanisms\nprovided by each CI server. Instead, you must store the private SSH key in AWS Secrets Manager so that it can be shared\nwith the ECS task at runtime. This secret is automatically injected by the ECS container agent as an environment\nvariable when the task is first started.</p>\n<p>You can learn more about how the secret is added to ECS in <a href=\"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html\" class=\"preview__body--description--blue\" target=\"_blank\">the official documentation from\nAWS</a>.</p>\n<p>In the standard configuration, we will set up the expected environment variables for each container based on the entries\nprovided to the <code>secrets_manager_env_vars</code> input variables of the corresponding task configuration. 
We recommend the\nfollowing settings for each container:</p>\n<pre><span class=\"hljs-attr\">docker_image_builder</span> = {\n <span class=\"hljs-attr\">secrets_manager_env_vars</span> = {\n <span class=\"hljs-attr\">GIT_USERNAME</span> = <span class=\"hljs-string\">\"ARN of secrets manager entry containing github personal access token for private repos containing Dockerfiles.\"</span>\n <span class=\"hljs-attr\">GITHUB_OAUTH_TOKEN</span> = <span class=\"hljs-string\">\"ARN of secrets manager entry containing github personal access token for use with gruntwork-install during docker build.\"</span>\n }\n}\n\n<span class=\"hljs-attr\">ami_builder</span> = {\n <span class=\"hljs-attr\">secrets_manager_env_vars</span> = {\n <span class=\"hljs-attr\">GITHUB_OAUTH_TOKEN</span> = <span class=\"hljs-string\">\"ARN of secrets manager entry containing github personal access token for use with gruntwork-install during docker build.\"</span>\n }\n}\n\n<span class=\"hljs-attr\">terraform_planner</span> = {\n <span class=\"hljs-attr\">secrets_manager_env_vars</span> = {\n <span class=\"hljs-attr\">DEPLOY_SCRIPT_SSH_PRIVATE_KEY</span> = <span class=\"hljs-string\">\"ARN of secrets manager entry containing raw contents of a SSH private key for accessing private repos containing infrastructure live configuration.\"</span>\n }\n}\n\n<span class=\"hljs-attr\">terraform_applier</span> = {\n <span class=\"hljs-attr\">secrets_manager_env_vars</span> = {\n <span class=\"hljs-attr\">DEPLOY_SCRIPT_SSH_PRIVATE_KEY</span> = <span class=\"hljs-string\">\"ARN of secrets manager entry containing raw contents of a SSH private key for accessing private repos containing infrastructure live configuration. 
This is also used when updating the config files with terraform-update-variable.\"</span>\n }\n}\n</pre>\n<p>For entries corresponding to SSH keys, you will need to store the contents of the SSH private key in AWS\nSecrets Manager in order for the ECS task to properly read and use the key. Note that the ECS deploy runner currently\ndoes not support PEM keys that require a passphrase.</p>\n<p>You will also want to use a dedicated machine user with read-only privileges for accessing the source code.\nAs mentioned <a href=\"#threat-model-of-the-deploy-runner\" class=\"preview__body--description--blue\">in the threat model</a>, write access to the source code will defeat\nalmost any security measures employed for CI/CD of infrastructure code, so you will want to make sure that damage can be\nlimited even if this secret were to leak. The exception is if you are implementing automated deployment workflows, in\nwhich case you will want to configure <a href=\"#how-do-i-restrict-what-args-can-be-passed-into-the-scripts\" class=\"preview__body--description--blue\">argument boundaries</a>\nto ensure that you can't modify the input variables of arbitrary infrastructure configurations using\n<code>terraform-update-variable</code>.</p>\n<p>To create a machine user and associate its SSH key:</p>\n<ol>\n<li>\n<p>Create the machine user on your version control platform.</p>\n</li>\n<li>\n<p>Create a new SSH key pair on the command line using <code>ssh-keygen</code>:</p>\n<pre><code>ssh-keygen -t rsa -b 4096 -C "MACHINE_USER_EMAIL"\n</code></pre>\n<p>Make sure to set a different path to store the key (to avoid overwriting any existing key). Also avoid setting a\npassphrase on the key.</p>\n</li>\n<li>\n<p>Upload the SSH public key to the machine user's account. 
See the following docs for the major VCS platforms:</p>\n<ul>\n<li><a href=\"https://help.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account\" class=\"preview__body--description--blue\" target=\"_blank\">GitHub</a></li>\n<li><a href=\"https://docs.gitlab.com/ee/ssh/README.html#adding-an-ssh-key-to-your-gitlab-account\" class=\"preview__body--description--blue\" target=\"_blank\">GitLab</a></li>\n<li><a href=\"https://confluence.atlassian.com/bitbucket/set-up-an-ssh-key-728138079.html#SetupanSSHkey-#installpublickeyStep3.AddthepublickeytoyourBitbucketsettings\" class=\"preview__body--description--blue\" target=\"_blank\">Bitbucket</a>\n(note: you will need to expand one of the collapsed sections to see the full instructions for adding an SSH key to the\nmachine user account)</li>\n</ul>\n</li>\n<li>\n<p>Create an AWS Secrets Manager entry with the contents of the private key. In the following example, we use the <code>aws</code>\nCLI to create the entry in <code>us-west-2</code>, sourcing the contents from the SSH private key file <code>~/.ssh/machine_user</code>:</p>\n<pre><code>cat ~/.ssh/machine_user \\\n | xargs -0 aws secretsmanager create-secret --region us-west-2 --name "SSHPrivateKeyForECSDeployRunner" --secret-string\n</code></pre>\n<p>When you run this command, you should see JSON output with metadata about the created secret:</p>\n<pre>{\n <span class=\"hljs-attr\">\"ARN\"</span>: <span class=\"hljs-string\">\"arn:aws:secretsmanager:us-west-2:000000000000:secret:SSHPrivateKeyForECSDeployRunner-SOME_RANDOM_STRING\"</span>,\n <span class=\"hljs-attr\">\"Name\"</span>: <span class=\"hljs-string\">\"SSHPrivateKeyForECSDeployRunner\"</span>,\n <span class=\"hljs-attr\">\"VersionId\"</span>: <span class=\"hljs-string\">\"21cda90e-84e0-4976-8914-7954cb6151bd\"</span>\n}\n</pre>\n</li>\n<li>\n<p>Record the ARN from the output and set the relevant <code>secrets_manager_env_vars</code> 
or\n<code>repo_access_ssh_key_secrets_manager_arn</code> input variables in the standard configuration.</p>\n</li>\n</ol>\n<h2 class=\"preview__body--subtitle\" id=\"contributing\">Contributing</h2>\n<h3 class=\"preview__body--subtitle\" id=\"developing-the-invoker-lambda-function\">Developing the Invoker Lambda function</h3>\n<p>The source code for the invoker Lambda function lives in <a href=\"/repos/v0.29.4/module-ci/modules/ecs-deploy-runner/invoker-lambda\" class=\"preview__body--description--blue\">the invoker-lambda</a> folder. In that folder,\nyou will find the following structure:</p>\n<ul>\n<li><code>invoker</code>: A Python package containing the Lambda function handler.</li>\n<li><code>dev_requirements.txt</code>: Additional requirements for an improved developer experience, e.g., mypy and type stubs for static\nanalysis.</li>\n</ul>\n<p>Note that the invoker code requires Python 3.8 to run. This is primarily to take advantage of the enhanced static typing\nfeatures that were added in Python 3.8. 
Since we can target a known environment (AWS Lambda), we trade off portability of the\nscripts for a better developer experience.</p>\n<p>See the relevant docs on <a href=\"/repos/v0.29.4/module-ci/modules/infrastructure-deploy-script/core-concepts.md#local-development\" class=\"preview__body--description--blue\">Python local development for the\ninfrastructure-deploy-script</a> for information\non how to set up your local environment for running the type checker.</p>
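<p>To make the Python 3.8 requirement concrete, the sketch below uses two of the typing features added to the standard <code>typing</code> module in 3.8 (<code>TypedDict</code> and <code>Literal</code>). All names here are hypothetical illustrations, not the actual invoker schema:</p>

```python
# Illustrative only: demonstrates the Python 3.8 typing features (TypedDict,
# Literal) that motivate the 3.8 requirement. The names below are hypothetical
# and do NOT reflect the real invoker package's request schema.
from typing import Literal, TypedDict


class DeployRequest(TypedDict):
    # Hypothetical shape of a deployment request payload.
    ref: str
    command: Literal["plan", "apply"]


def summarize(request: DeployRequest) -> str:
    # Render a one-line summary of the request, e.g., for log output.
    # mypy can statically reject commands other than "plan"/"apply" here.
    return f"{request['command']} at ref {request['ref']}"


print(summarize({"ref": "v0.0.1", "command": "plan"}))
```

With these annotations, a static checker such as mypy flags misspelled fields or unsupported commands before the code ever runs, which is the kind of developer-experience benefit the section above trades portability for.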