To run Docker containers with ECS, you first define an ECS
Task Definition, which is a JSON file that
describes what container(s) to run, the resources (memory, CPU) those containers need, the volumes to mount, the
environment variables to set, and so on. To actually run an ECS Task, you define an ECS Service, which can:
Deploy the requested number of Tasks across an ECS cluster based on the desired_number_of_tasks input variable.
Restart tasks if they fail.
Route traffic across the tasks with an optional Elastic Load Balancer (ELB).
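Putting that together, a call to this module might look roughly like the following sketch. The module source URL, ref tag, and input names such as ecs_task_container_definitions are illustrative; check the module's variables.tf for the exact inputs it expects.

```hcl
module "ecs_service" {
  # The source path and version ref are illustrative placeholders
  source = "git::git@github.com:gruntwork-io/terraform-aws-ecs.git//modules/ecs-service?ref=vX.Y.Z"

  service_name    = "my-service"
  ecs_cluster_arn = module.ecs_cluster.ecs_cluster_arn

  # The JSON container definition(s) that make up the ECS Task
  ecs_task_container_definitions = local.container_definition

  # How many copies of the ECS Task the Service should keep running
  desired_number_of_tasks = 3
}
```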
What is ECS Service Discovery?
Many services are not guaranteed to keep the same IP address throughout their lifespan. For example, they can be
dynamically assigned to run on different hosts, be redeployed after a failure recovery, or scale in and out. This makes
it complex for services to send traffic to each other.
Service discovery is the action of detecting and addressing these services, allowing them to be found. Some of the ways
of doing service discovery are, for example, hardcoding IP addresses, using a Load Balancer or using specialized tools.
ECS Service Discovery is an AWS feature that allows you to reach your ECS services through a hostname managed by Route53.
This hostname will consist of a service discovery name and a namespace (private or public), in the shape of
discovery-name.namespace:port. For example, on our namespace sandbox.gruntwork.io, we can have a service with the
discovery name my-test-webapp running on port 3000. This means that we can dig or curl this service at
my-test-webapp.sandbox.gruntwork.io:3000. For more information see the related concepts section.
There are many advantages to using ECS Service Discovery instead of reaching your services through a Load Balancer, for example:
Direct communication with the container run by your service
Lower latency, if using AWS internal network and private namespace
You can do service-to-service authentication
Not having a Load Balancer also means fewer resources to manage
You can configure a Health Check and associate it with all records within a namespace
You can make a logical group of services under one namespace
Under the hood, the ECS Service Discovery system uses Amazon Route 53 Auto Naming Service. This service automates the
process of:
Creating a public or private namespace within a new or existing hosted zone
Providing a service with the DNS Records configuration and optional health checks
The latter will be used in the Service Registry of your ECS Service Discovery, and it is the only type of service currently supported for this.
Important considerations:
Public namespaces are accessible on the internet and need the domain to be registered already
Private namespaces are accessible only within your VPC and can be queried immediately
For cleaning up, deregistering the instances from the auto naming service will trigger an automatic deletion of resources in AWS. However, the namespaces themselves are not deleted. Namespaces must be deleted manually and that is only allowed once all services in that namespace no longer exist.
To use ECS, you first deploy one or more EC2 Instances into a "cluster". See the ecs-cluster module
for how to create a cluster.
How do ECS Services deploy new versions of containers?
When you update an ECS Task (e.g. change the version number of a Docker container to deploy), ECS will roll out the change
automatically across your cluster according to two input variables:
deployment_maximum_percent: This variable controls the maximum number of copies of your ECS Task, as a percentage of
desired_number_of_tasks, that can be deployed during an update. For example, if you have 4 Tasks running at version
1, deployment_maximum_percent is set to 200, and you kick off a deployment of version 2 of your Task, ECS will
first deploy 4 Tasks at version 2, wait for them to come up, and then it'll undeploy the 4 Tasks at version 1. Note
that this only works if your ECS cluster has capacity, that is, EC2 Instances with the available memory, CPU, ports,
etc. requested by your Tasks, which might mean maintaining several empty EC2 Instances just for deployment.
deployment_minimum_healthy_percent: This variable controls the minimum number of copies of your ECS Task, as a
percentage of desired_number_of_tasks, that must stay running during an update. For example, if you have 4 Tasks running
at version 1, you set deployment_minimum_healthy_percent to 50, and you kick off a deployment of version 2 of your
Task, ECS will first undeploy 2 Tasks at version 1, then deploy 2 Tasks at version 2 in their place, and then repeat
the process again with the remaining 2 tasks. This allows you to roll out new versions without having to keep spare
EC2 instances, but it also means the availability of your service is somewhat reduced during rollouts.
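For example, the first scenario above (deploy all new Tasks before undeploying the old ones) could be configured like this sketch, showing only the deployment-related inputs:

```hcl
module "ecs_service" {
  # ... other inputs omitted ...

  desired_number_of_tasks            = 4
  deployment_maximum_percent         = 200 # up to 8 Tasks may run during a deployment
  deployment_minimum_healthy_percent = 100 # never drop below 4 healthy Tasks
}
```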
How do I do a canary deployment?
A canary deployment is a way to test new versions of your Docker
containers in a way that limits the damage any bugs could do. The idea is to deploy the new version onto just a single
server (meanwhile, the old versions are running elsewhere) and to test that new version and compare it to the old
versions. If everything is working well, you roll out the new version everywhere. If there are any problems, they only
affect a small percentage of users, and you can quickly fix them by rolling back the new version.
To do a canary deployment with this module, you need to specify two parameters:
ecs_task_definition_canary: The JSON text of the ECS Task Definition to be run for the canary. This defines the
Docker container(s) to be run along with all their properties.
desired_number_of_canary_tasks_to_run: The number of ECS Tasks to run for the canary. You should typically set
this to 1.
Here's an example that has 10 versions of the original ECS Task running and adds 1 Task to try out a canary:
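The sketch below shows such a configuration. The locals local.container_definition and local.container_definition_canary are hypothetical names for the Task Definition JSON of the current and canary versions:

```hcl
module "ecs_service" {
  # ... other inputs omitted ...

  # 10 Tasks running the original version
  ecs_task_container_definitions = local.container_definition
  desired_number_of_tasks        = 10

  # 1 Task running the new (canary) version
  ecs_task_definition_canary            = local.container_definition_canary
  desired_number_of_canary_tasks_to_run = 1
}
```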
If this canary has any issues, set desired_number_of_canary_tasks_to_run to 0. If the canary works well and you
want to deploy the new version across the whole cluster, update local.container_definition with the new version of
the Docker container and set desired_number_of_canary_tasks_to_run back to 0.
How does canary deployment work?
The way we do canary deployments with this module is to create a second ECS Service just for the canary that runs
desired_number_of_canary_tasks_to_run instances of your canary ECS Task. This ECS Service registers with the same ELB
or service registry (if you're using one), so some percentage of user requests will randomly hit the canary, and the
rest will go to the original ECS Tasks. For example, if you had 9 ECS Tasks and you deployed 1 canary ECS Task, then
each request would have a 90% chance of hitting the original version of your Docker container and a 10% chance of
hitting the canary version.
Therefore, there are two caveats with using canary deployments:
Do not do canary deployments with user-visible changes. For example, if your Docker container is a frontend service
and the new Docker image version changes the UI, then a user may see a different version of the UI every time they
refresh the page, which could be a jarring experience. You can still use canary deployments with frontend Docker
containers so long as you wrap UI changes in feature toggles and don't enable those toggles until the new version is
rolled out across the entire cluster (a technique known as a dark launch).
Ensure the new version of your Docker container is backwards compatible with the old version. For example, if the
Docker container runs schema migrations when it boots, make sure the new schema works correctly with the old version
of the Docker container, since both will be running simultaneously. Backwards compatibility is always a good idea
with deployments, but it becomes a hard requirement with canary deployments.
How do you add additional IAM policies to the ECS Service?
This module creates an IAM Role for the ECS
Tasks run by the ECS Service. Any
custom IAM Policies needed by this ECS Service should be attached to that IAM Role.
To do this in Terraform, you can use the
aws_iam_role_policy or
aws_iam_policy_attachment resources, and set
the role property to the Terraform output of this module called ecs_task_iam_role_name. For example, here is how you
can allow the ECS Service in this cluster to access an S3 bucket:
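The following sketch grants read access to a hypothetical S3 bucket; the bucket name and policy name are illustrative:

```hcl
resource "aws_iam_role_policy" "access_s3" {
  name = "access-s3-bucket"
  role = module.ecs_service.ecs_task_iam_role_name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*",
      ]
    }]
  })
}
```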
How do I use Fargate?
A Fargate ECS service automatically manages and scales your cluster as needed, without you needing to manage the
underlying EC2 Instances or clusters. Fargate lets you focus on designing and building your applications instead of
managing the infrastructure that runs them: all you have to do is package your application in containers,
specify the CPU and memory requirements, define networking and IAM policies, and launch the application.
To deploy your ECS service using Fargate, you need to set the following inputs:
launch_type should be set to FARGATE.
Fargate currently only works with the awsvpc network mode. This means that you need to set
ecs_task_definition_network_mode to "awsvpc" and configure the service network using
ecs_service_network_configuration.
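A sketch of the Fargate-related inputs. The exact shape of ecs_service_network_configuration may differ from this, and the subnet and security group references are illustrative:

```hcl
module "fargate_service" {
  # ... other inputs omitted ...

  launch_type                      = "FARGATE"
  ecs_task_definition_network_mode = "awsvpc"

  ecs_service_network_configuration = {
    subnets          = module.vpc.private_subnet_ids
    security_groups  = [aws_security_group.ecs_service.id]
    assign_public_ip = false
  }
}
```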
How do you scale an ECS Service?
To scale an ECS service in response to higher load, you have two options:
Scale the number of ECS Tasks: To do this, you first create one or
more aws_appautoscaling_policy
resources that define how to scale the number of ECS Tasks up or down. These should be associated with the
aws_appautoscaling_target that is created
by this module (output service_app_autoscaling_target_arn). Finally, you create one or more
aws_cloudwatch_metric_alarm resources
that trigger your aws_appautoscaling_policy resources when certain metrics cross specific thresholds (e.g. when
CPU usage is over 90%).
Scale the number of ECS Instances and Tasks: If your ECS Cluster doesn't have enough spare capacity, then not
only will you have to scale the number of ECS Tasks as described above, but you'll also have to increase the
size of the cluster by scaling the number of ECS Instances. To do that, you create one or more
aws_autoscaling_policy resources with the
autoscaling_group_name parameter set to the ecs_cluster_asg_name output of the ecs-cluster module. Next, you
create one or more
aws_cloudwatch_metric_alarm resources
that trigger your aws_autoscaling_policy resources when certain metrics cross specific thresholds (e.g. when
CPU usage is over 90%).
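The first option (scaling the number of ECS Tasks) can be sketched as follows. The resource names, CPU threshold, and the way the resource_id is assembled are illustrative; the resource_id and scalable_dimension follow the standard Application Auto Scaling format for ECS services:

```hcl
# Scale out by one Task when triggered
resource "aws_appautoscaling_policy" "scale_out" {
  name               = "scale-out"
  service_namespace  = "ecs"
  resource_id        = "service/${var.ecs_cluster_name}/${var.service_name}"
  scalable_dimension = "ecs:service:DesiredCount"

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Average"

    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = 1
    }
  }
}

# Trigger the scaling policy when average CPU usage is over 90%
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "${var.service_name}-high-cpu"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 1
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Average"
  threshold           = 90
  alarm_actions       = [aws_appautoscaling_policy.scale_out.arn]

  dimensions = {
    ClusterName = var.ecs_cluster_name
    ServiceName = var.service_name
  }
}
```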
How do I associate the ECS Service with a CLB?
To associate the ECS service with an existing Classic Load Balancer (CLB), first ensure the CLB exists. Then pass in
the following inputs to the module:
clb_name should be set to the name of the CLB. This ensures the ECS service will register against the correct CLB.
clb_container_name and clb_container_port should be set to the name of the container (as defined in the task
container definition json) and port of the container. This ensures the CLB routes to the correct container if an ECS
task has multiple containers.
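Putting those inputs together (the CLB reference, container name, and port are illustrative):

```hcl
module "ecs_service" {
  # ... other inputs omitted ...

  clb_name           = aws_elb.example.name # an existing CLB, defined elsewhere
  clb_container_name = "my-container"       # must match the container definition JSON
  clb_container_port = 8080
}
```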
How do I associate the ECS Service with an ALB or NLB?
In AWS, to create an ECS Service with an ALB or NLB, we need the following resources:
ALB or NLB
ALB/NLB itself: This is the load balancer that receives
inbound requests and routes them to our ECS Service.
Load Balancer Listener: An ALB/NLB will only
listen for incoming traffic on ports for which there is a Load Balancer Listener defined. For example, if you want
the ALB/NLB to accept traffic on port 80, you must define a Listener for port 80.
ALB Listener Rule (only for ALB): Once an ALB
Listener receives traffic, which Target
Group (Docker
containers) should it route the requests to? We must define ALB Listener Rules that route inbound requests
based on either their hostname (e.g. gruntwork.io vs amazon.com), their path (e.g. /foo vs. /bar), or both.
Note that for NLBs, there is only one target so this should be set directly on the listener.
Target Group: The ALB Listener Rule (or LB
Listener for NLB) routes requests by determining a "Target Group". It then picks one of the
Targets
in the Target Group (typically, a Docker container or EC2 Instance) as the final destination for the request.
ECS Cluster
ECS Cluster itself: The ECS Cluster is where all
our Docker containers are run.
ECS Service
ECS Task Definition: To define which Docker
image we want to run, how much memory/CPU to allocate it, which docker run command to use, environment variables,
and every other aspect of the Docker container configuration,
we create an "ECS Task Definition". The idea behind the name is that an ECS Cluster could, in theory, run many types
of tasks, and Docker is just one such type. Therefore, rather than calling tasks "Docker containers", Amazon uses
the name "ECS Task".
ECS Service itself: When we want to run multiple
ECS Tasks as part of a single service (i.e. run multiple Docker containers as part of a single service), enable
auto-restart if a container fails, and enable the ELB to automatically discover newly launched ECS Tasks, we create
an "ECS Service".
To clarify the relationship between these entities:
When creating your ALB/NLB, ECS Cluster, and ECS Service for the first time:
First create your ALB/NLB (see module
alb for ALBs and the aws_lb
resource)
For ALBs, register listener rules to set up routing rules for your service. For NLBs, create the listener so that it
routes to the target group of the service using aws_lb_listener
resource.
When creating a new ECS Service that uses existing ALBs or NLBs and an existing ECS Cluster, you will need to set the
following inputs:
If creating the LB and ECS service in the same module, dependencies should include the ALB ARN so that the module
waits for the LB to be created.
elb_target_groups should be set to a map of keys to objects, with one mapping per desired target group. The keys in the map can be any arbitrary name and are used to link the outputs with the inputs. The values of the map are objects with the following attributes:
If you use alb as the key, then you'll reference the ARN of the resulting target group as module.ecs_service.target_group_arns["alb"].
name should be set to a string so that it is not null. This ensures the module creates a target
group for the ECS service.
container_name and container_port should be set to the name of the container (as defined in the task container
definition json) and the port of the container. This ensures the LB routes to the correct container if an ECS task
has multiple containers.
protocol should be set to match the protocol of the LB (ex: "HTTPS" or "HTTP" for an ALB) so that it is not null.
health_check_protocol should be set to match the protocol of the ECS service (ex: "HTTPS" or "HTTP" for a typical web-based service) so that it is not null.
load_balancing_algorithm_type should be set to either "round_robin" or "least_outstanding_requests". It is "round_robin" by default.
elb_target_group_vpc_id should be set to the VPC where the ALB lives.
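Putting those inputs together (the key, container name, port, and VPC reference are illustrative):

```hcl
module "ecs_service" {
  # ... other inputs omitted ...

  elb_target_groups = {
    alb = {
      name                          = "my-service"
      container_name                = "my-container" # must match the container definition JSON
      container_port                = 8080
      protocol                      = "HTTP"
      health_check_protocol         = "HTTP"
      load_balancing_algorithm_type = "round_robin"
    }
  }

  elb_target_group_vpc_id = module.vpc.vpc_id # the VPC where the ALB lives
}
```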
Note that:
An ECS Cluster may have one or more ECS Services
An ECS Service may be associated with zero or one ALBs/NLBs
An ALB/NLB may be shared among multiple ECS Services
An ALB has zero or more ALB Listeners
Each ALB Listener has zero or more ALB Listener Rules
Each NLB Listener has zero Listener Rules
A Target Group may receive traffic from zero or more ALBs/NLBs
In the first version of this module, we attempted to hide the creation of ALB Listener Rules from users. Our thought
process was that the module's API should simplify as much as possible what was actually happening. But in practice we
found that there was more variation than we expected in the different routing rules that customers required, that
supporting any new ALB Listener Rule type (e.g. host-based routing) was cumbersome, and that by wrapping so much
complexity, we ultimately created more confusion, not less.
For this reason, the intent of this module is now about creating an ECS Service that is ready to be routed to. But to
complete the configuration, the Terraform code that calls this module should directly create its own set of
aws_lb_listener_rule resources to meet the specific
needs of your ECS Cluster.
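For example, a sketch of a path-based ALB Listener Rule that forwards to the Target Group this module creates; the listener reference, priority, and path pattern are illustrative:

```hcl
resource "aws_lb_listener_rule" "service" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = module.ecs_service.target_group_arns["alb"]
  }

  condition {
    path_pattern {
      values = ["/my-service/*"]
    }
  }
}
```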
How do I use ECS Service Discovery?
Once the namespace is created, you need to pass in the following inputs to the module:
Service Discovery currently only works with the awsvpc network mode. This means that you need to set
ecs_task_definition_network_mode to "awsvpc" and configure the service network using
ecs_service_network_configuration.
use_service_discovery should be set to true. This ensures the module will connect the ECS service with the
provided registry information.
discovery_namespace_id should be set to the ID of the DNS namespace.
discovery_name should be set to the string you wish to use as the DNS subdomain.
Additionally, for public DNS namespaces, you will also need to provide the ID of the Route 53 Hosted Zone that is
associated with the registrar for the domain. When you create a public DNS namespace, it creates a new Hosted Zone that
is not associated with the registrar. This means that DNS calls outside of the VPC will not actually resolve to the ECS
service. To allow the ECS service DNS queries to resolve, we need to create an alias record on the Hosted Zone that is
associated with the registrar to route to the namespace DNS record. This module will create this record for you if you
provide the following inputs:
discovery_use_public_dns should be set to true.
discovery_original_public_route53_zone_id should be set to the ID of the Route 53 Hosted Zone that is associated
with the registrar.
discovery_public_dns_namespace_route53_zone_id should be set to the ID of the Hosted Zone that is associated with
the DNS namespace.
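A sketch combining the private- and public-namespace inputs; the namespace resource and Hosted Zone references are illustrative:

```hcl
module "ecs_service" {
  # ... other inputs omitted ...

  ecs_task_definition_network_mode = "awsvpc"
  use_service_discovery            = true

  discovery_namespace_id = aws_service_discovery_private_dns_namespace.example.id
  discovery_name         = "my-test-webapp"

  # Additional inputs needed only for a public DNS namespace:
  # discovery_use_public_dns                       = true
  # discovery_original_public_route53_zone_id      = data.aws_route53_zone.registrar.zone_id
  # discovery_public_dns_namespace_route53_zone_id = aws_service_discovery_public_dns_namespace.example.hosted_zone
}
```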
How do I set up App Mesh?
To set up App Mesh using this module, you must first create a mesh, a virtual service, and a virtual node or virtual router. Creation of these resources
is documented in AWS App Mesh Getting Started documentation. Terraform modules are available for all App Mesh resources.
With those resources set up, the Envoy container can be added to container_definitions.
See the AWS documentation for container and proxy configurations. See the "Task definition json" section for more information.
Known Issues
Switching the value of var.use_auto_scaling
If you switch var.use_auto_scaling from true to false or vice versa, Terraform will attempt to destroy and
re-create the aws_ecs_service resource, which will lead to downtime for your ECS Service. This is because we
conditionally create Terraform resources depending on
the value of var.use_auto_scaling, and Terraform can't fully incorporate this concept into its dependency graph.
Fortunately, there's a workaround using manual state manipulation. We'll tell Terraform that the old resource is now
the new one as follows.
# If you are changing var.use_auto_scaling from TRUE to FALSE:
terraform state mv module.ecs_service.aws_ecs_service.service_with_auto_scaling module.ecs_service.aws_ecs_service.service_without_auto_scaling

# If you are changing var.use_auto_scaling from FALSE to TRUE:
terraform state mv module.ecs_service.aws_ecs_service.service_without_auto_scaling module.ecs_service.aws_ecs_service.service_with_auto_scaling
Now run terragrunt plan to confirm that Terraform will only make modifications.
Gotchas with Service Discovery
The ECS Service Discovery feature is not yet available in all regions.
For a list of regions where this feature is enabled, please see the AWS ECS Service Discovery documentation.
The discovery name is not necessarily the same as the name of your service. You can have a different name by which you want to discover your service.
You can enable ECS Service Discovery only during the creation of your ECS service, not when updating it.
The network mode of the task definition affects the behavior and configuration of ECS Service Discovery DNS Records.
Service discovery with SRV DNS records is not yet supported by this module. This means that tasks defined with the host or bridge network modes, which can only be used with this type of record, are also not supported.
For enabling service discovery, this module uses the awsvpc network mode. AWS will attach an Elastic Network Interface to your task, so you have to be aware that EC2 instance types have a limit of how many ENIs can be attached to them.
For service discovery with public DNS: the hostname is public (e.g. your-company.com), but it still points to a private IP address. Querying a public hostname that resolves to a private IP address may sometimes yield empty results, and you may have to force reading from a specific nameserver (such as an Amazon nameserver like ns-67.awsdns-08.com or Google's public nameserver), for example: dig +short @8.8.8.8 my-service.my-company.com
In the aws_lb_target_group, the port = 80 field is merely a placeholder. The actual port is determined dynamically when a container launches, but the resource requires a value. The port = 80 argument can be safely ignored.
ocker-service-with-autoscaling/outputs.tf","sha":"c30e145beded6bc152c3e290c6230b31cb89ff71"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-with-autoscaling/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-service-with-autoscaling/variables.tf","sha":"1be3144d0d4b2539239ec5d6d4882eb0f3c07c3f"}]},{"name":"docker-service-with-canary-deployment","children":[{"name":"README.md","path":"examples/docker-service-with-canary-deployment/README.md","sha":"a834379556fe449d71adf853cd446668ed580ee9"},{"name":"containers","children":[{"name":"container-definition.json","path":"examples/docker-service-with-canary-deployment/containers/container-definition.json","sha":"b946781dd2aab6ec41f080ecb797f4f28aa0a0d7"}]},{"name":"main.tf","path":"examples/docker-service-with-canary-deployment/main.tf","sha":"82e49f8d9ed21e793bfda3b5aadfc40dc7e2c822"},{"name":"outputs.tf","path":"examples/docker-service-with-canary-deployment/outputs.tf","sha":"c30e145beded6bc152c3e290c6230b31cb89ff71"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-with-canary-deployment/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-service-with-canary-deployment/variables.tf","sha":"813e14d8e563ad9e5e91ff64f22a144514c7d1a7"}]},{"name":"docker-service-with-elb","children":[{"name":"README.md","path":"examples/docker-service-with-elb/README.md","sha":"9fe41265d3fdba73113eac416901f6be7ab0a1b3"},{"name":"containers","children":[{"name":"container-definition.json","path":"examples/docker-service-with-elb/containers/container-definition.json","sha":"24cd7978210344f80257d578f5b3f08671762395"}]},{"name":"main.tf","path":"examples/docker-service-with-elb/main.tf","sha":"72883b000e2452b147ddf8286358a1eb6319a2eb"},{"name":"outputs.tf","path":"examples/docker-service-with-elb/outputs.tf","sha":"9e06fbf3bd18
efdea1c96669b89cf69ddbc69f39"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-with-elb/user-data/user-data.sh","sha":"e265eb38080d4cced1a9c75adffbade208fe4882"}]},{"name":"variables.tf","path":"examples/docker-service-with-elb/variables.tf","sha":"89d55cedb7ba91daed4d64e98fb129fb97ccd33b"}]},{"name":"docker-service-with-private-discovery","children":[{"name":"README.md","path":"examples/docker-service-with-private-discovery/README.md","sha":"0b4cca356ffaefda757ed87383292915dee15b45"},{"name":"containers","children":[{"name":"container-definition.json","path":"examples/docker-service-with-private-discovery/containers/container-definition.json","sha":"d83f91ccac477598cc51c3ce80a3a404e388dbf0"}]},{"name":"main.tf","path":"examples/docker-service-with-private-discovery/main.tf","sha":"9da040190c702d6dc7493bcd66106e7b3c6fbad1"},{"name":"outputs.tf","path":"examples/docker-service-with-private-discovery/outputs.tf","sha":"ac153ce17150d268fb0567f0ba66cabce6daf63f"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-with-private-discovery/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-service-with-private-discovery/variables.tf","sha":"dc7faeb68bb8f6b9d002cc0ee1d75459fe4cd324"}]},{"name":"docker-service-with-public-discovery","children":[{"name":"README.md","path":"examples/docker-service-with-public-discovery/README.md","sha":"8240ca76c71446eaeff6549299107537b3c77961"},{"name":"containers","children":[{"name":"container-definition.json","path":"examples/docker-service-with-public-discovery/containers/container-definition.json","sha":"d83f91ccac477598cc51c3ce80a3a404e388dbf0"}]},{"name":"main.tf","path":"examples/docker-service-with-public-discovery/main.tf","sha":"99d9885a53b19fbf495766d1b1237ea2a7dba042"},{"name":"outputs.tf","path":"examples/docker-service-with-public-discovery/outputs.tf","sha":"ac153ce17150d268fb0567f0b
a66cabce6daf63f"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-with-public-discovery/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-service-with-public-discovery/variables.tf","sha":"59e7cc6a095de1bf8fefa19c7eb918964fca1610"}]},{"name":"docker-service-without-elb","children":[{"name":"README.md","path":"examples/docker-service-without-elb/README.md","sha":"0db4d357c7967144b176d0ac462d57313e3f061d"},{"name":"containers","children":[{"name":"container-definition.json","path":"examples/docker-service-without-elb/containers/container-definition.json","sha":"24cd7978210344f80257d578f5b3f08671762395"}]},{"name":"main.tf","path":"examples/docker-service-without-elb/main.tf","sha":"e5e8d7e410a79a499af27ca12b230293d7bbb382"},{"name":"outputs.tf","path":"examples/docker-service-without-elb/outputs.tf","sha":"ca3a10bee0379ce40a82d45806fa72350f5ae641"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-service-without-elb/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-service-without-elb/variables.tf","sha":"0663828c7959ea063a544f60ce1996e6b6203728"}]},{"name":"docker-vpc-service-with-alb","children":[{"name":"README.md","path":"examples/docker-vpc-service-with-alb/README.md","sha":"654268efbb32c0c98dd1b020e6fedda4adbedb36"},{"name":"main.tf","path":"examples/docker-vpc-service-with-alb/main.tf","sha":"8152c9c26e81bd043c415aa8477738327ea4a2a1"},{"name":"outputs.tf","path":"examples/docker-vpc-service-with-alb/outputs.tf","sha":"c5b02eee222895f5c56fcac0e0d42a04fa3cf08f"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/docker-vpc-service-with-alb/user-data/user-data.sh","sha":"a534ef17a47772e610f864b9f764c209657c9d97"}]},{"name":"variables.tf","path":"examples/docker-vpc-service-with-alb/variables.tf","sha":"7db799cde779942474d1612279f99b
ad4ca85a2f"}]},{"name":"example-docker-image","children":[{"name":"Dockerfile","path":"examples/example-docker-image/Dockerfile","sha":"e507f58e13693a2cd1b57f63cfc952f58469fb3e"},{"name":"README.md","path":"examples/example-docker-image/README.md","sha":"272b6c12cad7ba326582bfca11fce195912021c4"},{"name":"server.js","path":"examples/example-docker-image/server.js","sha":"6a0cd2caa4cd7ee9bc3a81249a0686cddda2b2f3"}]},{"name":"example-ecs-instance-ami","children":[{"name":"README.md","path":"examples/example-ecs-instance-ami/README.md","sha":"0a239a9c1d5aa7e1a889d40650fbed1cb14f8e8a"},{"name":"build.json","path":"examples/example-ecs-instance-ami/build.json","sha":"bb8f946ac8c4aff7dabd8ee89cfc975b7a00e012"}]},{"name":"example-vpc","children":[{"name":"README.md","path":"examples/example-vpc/README.md","sha":"d84ff0ae78abd7732973f26005c76c6aa0f73442"},{"name":"main.tf","path":"examples/example-vpc/main.tf","sha":"e5cc305ae6760c41b650d7f6e843924c22ff14fb"},{"name":"outputs.tf","path":"examples/example-vpc/outputs.tf","sha":"29fe3a59a33e3648c3cdf0afbcc6b7224e1b81ea"},{"name":"variables.tf","path":"examples/example-vpc/variables.tf","sha":"668e867d5bc0938a092cc35a52093d05ede78cfe"}]}]},{"name":"modules","children":[{"name":"ecs-cluster","children":[{"name":"README.md","path":"modules/ecs-cluster/README.md","sha":"6b944e332545ed63550145e59a3356156ac2576a"},{"name":"main.tf","path":"modules/ecs-cluster/main.tf","sha":"ca5e062525d9fa4f3c5b737aea3173b33c4231fa"},{"name":"outputs.tf","path":"modules/ecs-cluster/outputs.tf","sha":"055e5a2894d9bc994625d6004051404615932c7e"},{"name":"roll-out-ecs-cluster-update.py","path":"modules/ecs-cluster/roll-out-ecs-cluster-update.py","sha":"391b4f6d21b5d08e85159513cb5a3c5cefb0e8c2"},{"name":"variables.tf","path":"modules/ecs-cluster/variables.tf","sha":"03dc6e5cc890c1335172e2da0bc94cf79629f51c"}]},{"name":"ecs-daemon-service","children":[{"name":"README.md","path":"modules/ecs-daemon-service/README.md","sha":"3335d6f5fc8c250bed5682bd02910b3
09a87292a"},{"name":"main.tf","path":"modules/ecs-daemon-service/main.tf","sha":"48c8a5e7c21d574c6c37d5e5df11fe63c84f5061"},{"name":"outputs.tf","path":"modules/ecs-daemon-service/outputs.tf","sha":"b14be6c2f9498c05be9d3843437940b933b3b669"},{"name":"variables.tf","path":"modules/ecs-daemon-service/variables.tf","sha":"939c556009e98468ec9af8d32f821f2253e04f39"}]},{"name":"ecs-deploy-check-binaries","children":[{"name":"README.md","path":"modules/ecs-deploy-check-binaries/README.md","sha":"476b73eebe55f642ba46684e7d9765c0562ab8b9"},{"name":"bin","children":[{"name":"check-ecs-service-deployment","path":"modules/ecs-deploy-check-binaries/bin/check-ecs-service-deployment","sha":"6e556e9d064bf9d03c77bbd31f550cf9ed131981"},{"name":"check_ecs_service_deployment_env.pex","path":"modules/ecs-deploy-check-binaries/bin/check_ecs_service_deployment_env.pex","sha":"089742e3c1d2b6d38c953adb7f66606858fd4ec1"},{"name":"entrypoint.py","path":"modules/ecs-deploy-check-binaries/bin/entrypoint.py","sha":"09af8e557b93844ce66a028e594b885498eef99c"}]},{"name":"build.sh","path":"modules/ecs-deploy-check-binaries/build.sh","sha":"08167bef81f28b383bd68fce0096d12d873f4843"},{"name":"check_ecs_service_deployment","children":[{"name":"__init__.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/__init__.py","sha":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"},{"name":"checker","children":[{"name":"__init__.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/checker/__init__.py","sha":"b3604eaedc6d77c18dd31a282af88377b642073d"},{"name":"active_tasks_checker.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/checker/active_tasks_checker.py","sha":"5aa07c2ef3265ecdd3396be3eca962dfed893975"},{"name":"base.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/checker/base.py","sha":"b43e7a9f8989b2b6ec9c428c02b7129146c03c9e"},{"name":"daemon_service_checker.py","path":"modules/ecs-deploy-check-binaries/check_ecs_
service_deployment/checker/daemon_service_checker.py","sha":"2843fa595ebfb6137601051d3c5183727605c6ad"},{"name":"loadbalancer_checker.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/checker/loadbalancer_checker.py","sha":"54070f4b438a48ded5be35753fbc7469caff3cc0"}]},{"name":"exceptions.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/exceptions.py","sha":"12ef9651649f2c99ac6cba7a54314a8da197a2b3"},{"name":"utils.py","path":"modules/ecs-deploy-check-binaries/check_ecs_service_deployment/utils.py","sha":"d37d04f0265fe0a6a1faa0bad67358d4c21dc2ac"}]},{"name":"dev_requirements.txt","path":"modules/ecs-deploy-check-binaries/dev_requirements.txt","sha":"923be60db2d99bbeefbedf796478539f78712828"},{"name":"requirements.txt","path":"modules/ecs-deploy-check-binaries/requirements.txt","sha":"e15d0efff63f204df8891b1d02e2236134d2d7ef"}]},{"name":"ecs-deploy","children":[{"name":"README.md","path":"modules/ecs-deploy/README.md","sha":"4bf327f86dab318ad23558afe0cb3d07be205083"},{"name":"bin","children":[{"name":"run-ecs-task","path":"modules/ecs-deploy/bin/run-ecs-task","sha":"a7cd4f0a8cd2240876f43489b3006da7e3425cf1"}]},{"name":"install.sh","path":"modules/ecs-deploy/install.sh","sha":"c322bebba62fd5a63e7bcb73010f9a52da1137f1"}]},{"name":"ecs-fargate","children":[{"name":"README.md","path":"modules/ecs-fargate/README.md","sha":"3eae9218a5c3fb18d3a4dd45f136df25052a93d9"}]},{"name":"ecs-scripts","children":[{"name":"README.md","path":"modules/ecs-scripts/README.md","sha":"4c357a9df12f3f2a56e447ae3a82b4e3e5bdc43a"},{"name":"bin","children":[{"name":"configure-ecs-instance","path":"modules/ecs-scripts/bin/configure-ecs-instance","sha":"fd24f00ac8a0f4cd37c42a589839f1230c45804a"}]},{"name":"install.sh","path":"modules/ecs-scripts/install.sh","sha":"927760f5584ad2019b0ff31424ba8853a27aeffc"}]},{"name":"ecs-service-with-alb","children":[{"name":"README.md","path":"modules/ecs-service-with-alb/README.md","sha":"38c07b1b20f9dbf2247965176
3535225e716a28c"}]},{"name":"ecs-service-with-discovery","children":[{"name":"README.md","path":"modules/ecs-service-with-discovery/README.md","sha":"fe9dc7350327371959dacddf0471067f8ddbc42b"}]},{"name":"ecs-service","children":[{"name":"README-ECS-Fargate.adoc","path":"modules/ecs-service/README-ECS-Fargate.adoc","sha":"0763095a74fe4a0123fdc74ff8611b6954351f84"},{"name":"README.adoc","path":"modules/ecs-service/README.adoc","sha":"418ebd6775f96616e0139638fdf356fe601d34e0"},{"name":"auto_scaling.tf","path":"modules/ecs-service/auto_scaling.tf","sha":"7131efebd7c0ecfa3a911576af589d56201345c5"},{"name":"core-concepts.md","path":"modules/ecs-service/core-concepts.md","sha":"2dd341838d6c35375e4e4ae5f287ab84511984a9","toggled":true},{"name":"deployment_check.tf","path":"modules/ecs-service/deployment_check.tf","sha":"fbfef8291c0b904b5cd832b70ea839c5b6e65eef"},{"name":"elb.tf","path":"modules/ecs-service/elb.tf","sha":"e00440050a622d4a66df91c2315fcfc74693188d"},{"name":"main.tf","path":"modules/ecs-service/main.tf","sha":"fdfa495e25036936f9d0265c75bf33cd8080a91f"},{"name":"outputs.tf","path":"modules/ecs-service/outputs.tf","sha":"0323af2fbe25f6f72fbf409bf92f1d36fd317e67"},{"name":"service_discovery.tf","path":"modules/ecs-service/service_discovery.tf","sha":"27096ac9b2593fdd4b78dc548ddfb05d0f26c10c"},{"name":"task_definition.tf","path":"modules/ecs-service/task_definition.tf","sha":"73dd13d1c9470e820e024db0e1445bd4857166c9"},{"name":"variables.tf","path":"modules/ecs-service/variables.tf","sha":"2f360c0174e8c75af6e0c6720826fc8b844ddf54"}],"toggled":true},{"name":"ecs-task-scheduler","children":[{"name":"README.md","path":"modules/ecs-task-scheduler/README.md","sha":"4bd4b259b979fd7b20b600758f44abd3d4ed41a8"},{"name":"bin","children":[{"name":"check-ecs-tasks","path":"modules/ecs-task-scheduler/bin/check-ecs-tasks","sha":"a739a5da8710dcb348f33f47d919a68c61394c58"}]},{"name":"main.tf","path":"modules/ecs-task-scheduler/main.tf","sha":"57648ee9a28334185ab2ed9d50cb1652f1d856
01"},{"name":"outputs.tf","path":"modules/ecs-task-scheduler/outputs.tf","sha":"b6a8d2137696a4a5a524d21be548c4702ce648e4"},{"name":"variables.tf","path":"modules/ecs-task-scheduler/variables.tf","sha":"ce70be84674a4645c87158090d713c4483f24aae"}]}],"toggled":true},{"name":"setup.cfg","path":"setup.cfg","sha":"6deafc261704e20369c0983af88042e502ae4880"},{"name":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","path":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","sha":"ae586c0fe830819580e1009d41a9074f16e65bed"},{"name":"test","children":[{"name":"README.md","path":"test/README.md","sha":"2a539a451e7fc594839829f5c25fe27dd799f52e"},{"name":"common","children":[{"name":"docker_service_failure_testing_utils.go","path":"test/common/docker_service_failure_testing_utils.go","sha":"e16b3371f04a6c0b6e05b3927cd102e1fd1cfccf"},{"name":"docker_service_utils.go","path":"test/common/docker_service_utils.go","sha":"b9bd1081dd81b99262db46368e606abdcec6e6d8"},{"name":"terratest_options.go","path":"test/common/terratest_options.go","sha":"ff2d7189bf154eb1250a1172fef743c27e2e44e8"},{"name":"test_helpers.go","path":"test/common/test_helpers.go","sha":"721bd358f2562398093c6e21229cf53371726cbd"}]},{"name":"ec2","children":[{"name":"deploy_ecs_scheduled_task_test.go","path":"test/ec2/deploy_ecs_scheduled_task_test.go","sha":"e76a1324f95bb6be3436d7f4a1bf1238658d957e"},{"name":"deploy_ecs_task_test.go","path":"test/ec2/deploy_ecs_task_test.go","sha":"41eb2e13f0620e3c4b65006ba3b739bc5200ae1d"},{"name":"docker_daemon_service_test.go","path":"test/ec2/docker_daemon_service_test.go","sha":"17f9071ef1d398ea34630f1150a5cc3a649b8582"},{"name":"docker_ec2_service_test.go","path":"test/ec2/docker_ec2_service_test.go","sha":"65d4ea3a0d5e20baeaf8a5b5e842c355dd91c1ee"},{"name":"docker_service_with_alb_and_nlb_test.go","path":"test/ec2/docker_service_with_alb_and_nlb_test.go","sha":"bd1b81611ecf54fb6d01a8bbb3104623951186a7"},{"name":"docker_service_with_alb_deploymen
t_check_fail_test.go","path":"test/ec2/docker_service_with_alb_deployment_check_fail_test.go","sha":"641e0fd4475ac157ecae0863351a5a37bbd32109"},{"name":"docker_service_with_alb_test.go","path":"test/ec2/docker_service_with_alb_test.go","sha":"4870f37814afbffd84b9a9d089e2667d54ce7e1c"},{"name":"docker_service_with_autoscaling_test.go","path":"test/ec2/docker_service_with_autoscaling_test.go","sha":"95558be9bd7434d02f67fd63acd724603c0256ac"},{"name":"docker_service_with_canary_deployment_check_fail_test.go","path":"test/ec2/docker_service_with_canary_deployment_check_fail_test.go","sha":"bb5b13a28b0be05ecf9ab58361074eac574c7dd2"},{"name":"docker_service_with_canary_deployment_test.go","path":"test/ec2/docker_service_with_canary_deployment_test.go","sha":"9563b599a9c71a357428d9424e46b2ab8fab599d"},{"name":"docker_service_with_discovery_check_fail_test.go","path":"test/ec2/docker_service_with_discovery_check_fail_test.go","sha":"86cef5081c22df6bbaae74f32b41d2a625693ee8"},{"name":"docker_service_with_discovery_test.go","path":"test/ec2/docker_service_with_discovery_test.go","sha":"9b4d7262690ffef830331b0bbb769a077c3b6ea8"},{"name":"docker_service_with_elb_deployment_check_fail_test.go","path":"test/ec2/docker_service_with_elb_deployment_check_fail_test.go","sha":"95084370cb419e73911d9ae1a26a83f09d96262b"},{"name":"docker_service_with_elb_test.go","path":"test/ec2/docker_service_with_elb_test.go","sha":"031c553b2d9905770eb533dc5a7e244fa4a97a47"},{"name":"docker_service_without_elb_deployment_check_fail_test.go","path":"test/ec2/docker_service_without_elb_deployment_check_fail_test.go","sha":"50146027679868fb203061e27531716e05acf646"},{"name":"docker_service_without_elb_test.go","path":"test/ec2/docker_service_without_elb_test.go","sha":"443494f87c6583c9b5d689765f612cb36432f3ef"},{"name":"docker_vpc_service_with_alb_test.go","path":"test/ec2/docker_vpc_service_with_alb_test.go","sha":"a5206fdc66b80ce49dcd01fa5223c068d63133bf"},{"name":"ec2_amazon_linux2_test.go","path":"te
st/ec2/ec2_amazon_linux2_test.go","sha":"66360c7b21a64a44611b47a8deb908bef2d94407"},{"name":"terratest_options.go","path":"test/ec2/terratest_options.go","sha":"3f1135fb93058b32f911ec51114ea17d38d288f8"}]},{"name":"fargate","children":[{"name":"docker_fargate_service_alb_deployment_check_fail_by_container_test.go","path":"test/fargate/docker_fargate_service_alb_deployment_check_fail_by_container_test.go","sha":"cae40eba384db05ec8e994dee7f75916ef7c7e52"},{"name":"docker_fargate_service_nlb_deployment_check_fail_by_container_test.go","path":"test/fargate/docker_fargate_service_nlb_deployment_check_fail_by_container_test.go","sha":"bd43e7f186695a13a7c9fc4c7e04f5e5f481cc42"},{"name":"docker_fargate_service_with_alb_test.go","path":"test/fargate/docker_fargate_service_with_alb_test.go","sha":"f032257ac1c0afab9f29bdb85d831110f0cd255b"},{"name":"docker_fargate_service_with_efs_volume_test.go","path":"test/fargate/docker_fargate_service_with_efs_volume_test.go","sha":"6398577f85e2c043d22360d85177fb1cbf222efd"},{"name":"docker_fargate_service_with_nlb_test.go","path":"test/fargate/docker_fargate_service_with_nlb_test.go","sha":"2314a84cb16e7c417be7a3850ef044bcfdcd2826"},{"name":"docker_fargate_service_without_lb_deployment_check_fail_by_container_test.go","path":"test/fargate/docker_fargate_service_without_lb_deployment_check_fail_by_container_test.go","sha":"ef6dd4240594abec1a8a9f8d1d5141b4af1aceaf"},{"name":"docker_fargate_service_without_lb_test.go","path":"test/fargate/docker_fargate_service_without_lb_test.go","sha":"358a5c22c5b72f6013fb370eed8aabf5e53b9b23"},{"name":"docker_fargate_spot_service_with_alb_test.go","path":"test/fargate/docker_fargate_spot_service_with_alb_test.go","sha":"0db6cba4695864a8a8b47a374c48d9231958148c"},{"name":"terratest_options.go","path":"test/fargate/terratest_options.go","sha":"1be5ddaa4ca5f6908d0479c41f277ddf2bdbe240"}]},{"name":"go.mod","path":"test/go.mod","sha":"70b44e4690e58d4a4e44cb75ac3547caa61071ee"},{"name":"go.sum","path":"test/go
Some of the ways\nof doing service discovery are, for example, hardcoding IP addresses, using a Load Balancer or using specialized tools.</p>\n<p>ECS Service Discovery is an AWS feature that allows you to reach your ECS services through a hostname managed by Route53.\nThis hostname will consist of a service discovery name and a namespace (private or public), in the shape of\n<code>discovery-name.namespace:port</code>. For example, on our namespace <code>sandbox.gruntwork.io</code>, we can have a service with the\ndiscovery name <code>my-test-webapp</code> running on port <code>3000</code>. This means that we can <code>dig</code> or <code>curl</code> this service at\n<code>my-test-webapp.sandbox.gruntwork.io:3000</code>. For more information see the <a href=\"#related-concepts\" class=\"preview__body--description--blue\">related concepts</a> section.</p>\n<p>There are many advantages of using ECS Service Discovery instead of reaching it through a Load Balancer, for example:</p>\n<ul>\n<li>Direct communication with the container run by your service</li>\n<li>Lower latency, if using AWS internal network and private namespace</li>\n<li>You can do service-to-service authentication</li>\n<li>Not having a Load Balancer also means fewer resources to manage</li>\n<li>You can configure a Health Check and associate it with all records within a namespace</li>\n<li>You can make a logical group of services under one namespace</li>\n</ul>\n<p>Under the hood, the ECS Service Discovery system uses Amazon Route 53 Auto Naming Service. 
This service automates the process of:

Creating a public or private namespace within a new or existing hosted zone
Providing a service with the DNS Records configuration and optional health checks

The latter will be used in the Service Registry of your ECS Service Discovery, and it is the only type of service currently supported for this.

Important considerations:

Public namespaces are accessible on the internet and require the domain to be registered already
Private namespaces are accessible only within your VPC and can be queried immediately
For cleaning up, deregistering the instances from the auto naming service will trigger an automatic deletion of resources in AWS. However, the namespaces themselves are not deleted: namespaces must be deleted manually, and that is only allowed once all services in that namespace no longer exist.

For more information on the Route 53 Auto Naming Service, see the AWS documentation on Using Auto Naming for Service Discovery (https://docs.aws.amazon.com/Route53/latest/APIReference/overview-service-discovery.html).

Operations

How do you create an ECS cluster?

To use ECS, you first deploy one or more EC2 Instances into a "cluster". See the ecs-cluster module for how to create a cluster.
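As a minimal sketch of what using the ecs-cluster module looks like, the snippet below deploys a small cluster. The source ref, AMI ID, and input variable names are illustrative assumptions; check the module's variables.tf for the exact inputs it accepts.

```hcl
# Sketch only: input names and the source ref below are assumptions for
# illustration; consult the ecs-cluster module's documentation.
module "ecs_cluster" {
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-cluster?ref=v0.35.0"

  cluster_name = "example-cluster"

  # EC2 capacity for the cluster's Auto Scaling Group
  cluster_min_size = 2
  cluster_max_size = 4

  # An ECS-optimized AMI (placeholder ID) and instance type
  cluster_instance_ami  = "ami-12345678"
  cluster_instance_type = "t3.medium"

  vpc_id         = module.vpc.vpc_id
  vpc_subnet_ids = module.vpc.private_subnet_ids
}
```

ECS Services deployed later point at this cluster, so the cluster's instance count and size set an upper bound on how many Tasks you can run.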
How do ECS Services deploy new versions of containers?

When you update an ECS Task (e.g. change the version number of the Docker container to deploy), ECS will roll out the change automatically across your cluster according to two input variables:

deployment_maximum_percent: This variable controls the maximum number of copies of your ECS Task, as a percentage of desired_number_of_tasks, that can be deployed during an update. For example, if you have 4 Tasks running at version 1, deployment_maximum_percent is set to 200, and you kick off a deployment of version 2 of your Task, ECS will first deploy 4 Tasks at version 2, wait for them to come up, and then undeploy the 4 Tasks at version 1. Note that this only works if your ECS cluster has capacity--that is, EC2 instances with the available memory, CPU, ports, etc. requested by your Tasks--which might mean maintaining several empty EC2 instances just for deployment.

deployment_minimum_healthy_percent: This variable controls the minimum number of copies of your ECS Task, as a percentage of desired_number_of_tasks, that must stay running during an update. For example, if you have 4 Tasks running at version 1, you set deployment_minimum_healthy_percent to 50, and you kick off a deployment of version 2 of your Task, ECS will first undeploy 2 Tasks at version 1, then deploy 2 Tasks at version 2 in their place, and then repeat the process with the remaining 2 Tasks. This allows you to roll out new versions without having to keep spare EC2 instances, but it also means the availability of your service is somewhat reduced during rollouts.
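To make the rollout math above concrete, here is a sketch of how these two variables might be set on the service (the module source ref is an assumption; the values shown assume desired_number_of_tasks = 4):

```hcl
# Sketch: rollout settings for an ECS service. With 4 desired tasks,
# deployment_maximum_percent = 200 allows up to 8 running copies
# (4 old + 4 new) during a deploy, while
# deployment_minimum_healthy_percent = 50 allows as few as 2 healthy copies.
module "ecs_service" {
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-service?ref=v0.35.0"

  desired_number_of_tasks = 4

  # Upper bound during a deployment: 200% of 4 = 8 Tasks
  deployment_maximum_percent = 200

  # Lower bound during a deployment: 50% of 4 = 2 Tasks
  deployment_minimum_healthy_percent = 50

  # (... all other params omitted ...)
}
```

The first setting trades cluster capacity for availability; the second trades availability for capacity. Pick based on whether you can afford spare EC2 instances during deploys.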
This allows you to roll out new versions without having to keep spare\nEC2 instances, but it also means the availability of your service is somewhat reduced during rollouts.</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-do-a-canary-deployment\">How do I do a canary deployment?</h2>\n<p>A <a href=\"http://martinfowler.com/bliki/CanaryRelease.html\" class=\"preview__body--description--blue\" target=\"_blank\">canary deployment</a> is a way to test new versions of your Docker\ncontainers in a way that limits the damage any bugs could do. The idea is to deploy the new version onto just a single\nserver (meanwhile, the old versions are running elsewhere) and to test that new version and compare it to the old\nversions. If everything is working well, you roll out the new version everywhere. If there are any problems, they only\naffect a small percentage of users, and you can quickly fix them by rolling back the new version.</p>\n<p>To do a canary deployment with this module, you need to specify two parameters:</p>\n<ul>\n<li><code>ecs_task_definition_canary</code>: The JSON text of the ECS Task Definition to be run for the canary. This defines the\nDocker container(s) to be run along with all their properties.</li>\n<li><code>desired_number_of_canary_tasks_to_run</code>: The number of ECS Tasks to run for the canary. You should typically set\nthis to 1.</li>\n</ul>\n<p>Here's an example that has 10 copies of the original ECS Task running and adds 1 Task to try out a canary:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"ecs_service\"</span> {\n ecs_task_container_definitions = local.container_definition\n desired_number_of_tasks = <span class=\"hljs-number\">10</span>\n\n ecs_task_definition_canary = local.canary_container_definition\n desired_number_of_canary_tasks_to_run = <span class=\"hljs-number\">1</span>\n\n <span class=\"hljs-comment\"># (... 
all other params omitted ...)</span>\n}\n</pre>\n<p>If this canary has any issues, set <code>desired_number_of_canary_tasks_to_run</code> to 0. If the canary works well and you\nwant to deploy the new version across the whole cluster, update <code>local.container_definition</code> with the new version of\nthe Docker container and set <code>desired_number_of_canary_tasks_to_run</code> back to 0.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-does-canary-deployment-work\">How does canary deployment work?</h2>\n<p>The way we do canary deployments with this module is to create a second ECS Service just for the canary that runs\n<code>desired_number_of_canary_tasks_to_run</code> instances of your canary ECS Task. This ECS Service registers with the same ELB\nor service registry (if you're using one), so some percentage of user requests will randomly hit the canary, and the\nrest will go to the original ECS Tasks. For example, if you had 9 ECS Tasks and you deployed 1 canary ECS Task, then\neach request would have a 90% chance of hitting the original version of your Docker container and a 10% chance of\nhitting the canary version.</p>\n<p>Therefore, there are two caveats with using canary deployments:</p>\n<ol>\n<li>Do not do canary deployments with user-visible changes. For example, if your Docker container is a frontend service\nand the new Docker image version changes the UI, then a user may see a different version of the UI every time they\nrefresh the page, which could be a jarring experience. You can still use canary deployments with frontend Docker\ncontainers so long as you wrap UI changes in feature toggles and don't enable those toggles until the new version is\nrolled out across the entire cluster (i.e. 
this is known as a <a href=\"http://tech.co/the-dark-launch-how-googlefacebook-release-new-features-2016-04\" class=\"preview__body--description--blue\" target=\"_blank\">dark\nlaunch</a>).</li>\n<li>Ensure the new version of your Docker container is backwards compatible with the old version. For example, if the\nDocker container runs schema migrations when it boots, make sure the new schema works correctly with the old version\nof the Docker container, since both will be running simultaneously. Backwards compatibility is always a good idea\nwith deployments, but it becomes a hard requirement with canary deployments.</li>\n</ol>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-add-additional-iam-policies-to-the-ecs-service\">How do you add additional IAM policies to the ECS Service?</h2>\n<p>This module creates an <a href=\"http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html\" class=\"preview__body--description--blue\" target=\"_blank\">IAM Role for the ECS\nTasks</a> run by the ECS Service. Any\ncustom IAM Policies needed by this ECS Service should be attached to that IAM Role.</p>\n<p>To do this in Terraform, you can use the\n<a href=\"https://www.terraform.io/docs/providers/aws/r/iam_role_policy.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_iam_role_policy</a> or\n<a href=\"https://www.terraform.io/docs/providers/aws/r/iam_policy_attachment.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_iam_policy_attachment</a> resources, and set\nthe <code>role</code> property to the Terraform output of this module called <code>ecs_task_iam_role_name</code>. 
For example, here is how you\ncan allow the ECS Service in this cluster to access an S3 bucket:</p>\n<pre>module <span class=\"hljs-string\">\"ecs_service\"</span> {\n # (arguments omitted)\n}\n<span class=\"hljs-built_in\">\nresource </span><span class=\"hljs-string\">\"aws_iam_role_policy\"</span> <span class=\"hljs-string\">\"access_s3_bucket\"</span> {\n name = <span class=\"hljs-string\">\"access_s3_bucket\"</span>\n role = module.ecs_service.ecs_task_iam_role_name\n <span class=\"hljs-built_in\"> policy </span>= data.aws_iam_policy_document.access_s3_bucket.json\n}\n\ndata <span class=\"hljs-string\">\"aws_iam_policy_document\"</span> <span class=\"hljs-string\">\"access_s3_bucket\"</span> {\n statement {\n effect = <span class=\"hljs-string\">\"Allow\"</span>\n actions = [<span class=\"hljs-string\">\"s3:GetObject\"</span>]\n resources = [<span class=\"hljs-string\">\"arn:aws:s3:::examplebucket/*\"</span>]\n }\n}\n</pre>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-use-fargate\">How do I use Fargate?</h2>\n<p>A Fargate ECS service automatically manages and scales your cluster as needed without you needing to manage the\nunderlying EC2 instances or clusters. Fargate lets you focus on designing and building your applications instead of\nmanaging the infrastructure that runs them. With Fargate, all you have to do is package your application in containers,\nspecify the CPU and memory requirements, define networking and IAM policies, and launch the application.</p>\n<p>To deploy your ECS service using Fargate, you need to set the following inputs:</p>\n<ul>\n<li><code>launch_type</code> should be set to <code>FARGATE</code>.</li>\n<li>Fargate currently only works with the <code>awsvpc</code> network mode. 
This means that you need to set\n<code>ecs_task_definition_network_mode</code> to <code>"awsvpc"</code> and configure the service network using\n<code>ecs_service_network_configuration</code>.</li>\n<li>You must specify <code>task_cpu</code> and <code>task_memory</code>. See <a href=\"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html#fargate-tasks-size\" class=\"preview__body--description--blue\" target=\"_blank\">the official documentation for information on how to configure\nthis</a>.</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-scale-an-ecs-service\">How do you scale an ECS Service?</h2>\n<p>To scale an ECS service in response to higher load, you have two options:</p>\n<ol>\n<li><strong>Scale the number of ECS Tasks</strong>: To do this, you first create one or\nmore <a href=\"https://www.terraform.io/docs/providers/aws/r/appautoscaling_policy.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_appautoscaling_policy</code></a>\nresources that define how to scale the number of ECS Tasks up or down. These should be associated with the\n<a href=\"https://www.terraform.io/docs/providers/aws/r/appautoscaling_target.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_appautoscaling_target</code></a> that is created\nby this module (output <code>service_app_autoscaling_target_arn</code>). Finally, you create one or more\n<a href=\"https://www.terraform.io/docs/providers/aws/r/cloudwatch_metric_alarm.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_cloudwatch_metric_alarm</code></a> resources\nthat trigger your <code>aws_appautoscaling_policy</code> resources when certain metrics cross specific thresholds (e.g. 
when\nCPU usage is over 90%).</li>\n<li><strong>Scale the number of ECS Instances and Tasks</strong>: If your ECS Cluster doesn't have enough spare capacity, then not\nonly will you have to scale the number of ECS Tasks as described above, but you'll also have to increase the\nsize of the cluster by scaling the number of ECS Instances. To do that, you create one or more\n<a href=\"https://www.terraform.io/docs/providers/aws/r/autoscaling_policy.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_autoscaling_policy</code></a> resources with the\n<code>autoscaling_group_name</code> parameter set to the <code>ecs_cluster_asg_name</code> output of the <code>ecs-cluster</code> module. Next, you\ncreate one or more\n<a href=\"https://www.terraform.io/docs/providers/aws/r/cloudwatch_metric_alarm.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_cloudwatch_metric_alarm</code></a> resources\nthat trigger your <code>aws_autoscaling_policy</code> resources when certain metrics cross specific thresholds (e.g. when\nCPU usage is over 90%).</li>\n</ol>\n<p>See the <a href=\"/repos/v0.35.0/module-ecs/examples/docker-service-with-autoscaling\" class=\"preview__body--description--blue\">docker-service-with-autoscaling example</a> for sample code.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-associate-the-ecs-service-with-a-clb\">How do I associate the ECS Service with a CLB?</h2>\n<p>To associate the ECS service with an existing CLB, you need to first ensure the CLB exists. Then, you need to pass in\nthe following inputs to the module:</p>\n<ul>\n<li><code>clb_name</code> should be set to the name of the CLB. This ensures the ECS service will register against the correct CLB.</li>\n<li><code>clb_container_name</code> and <code>clb_container_port</code> should be set to the name of the container (as defined in the task\ncontainer definition json) and port of the container. 
This ensures the CLB routes to the correct container if an ECS\ntask has multiple containers.</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-associate-the-ecs-service-with-an-alb-or-nlb\">How do I associate the ECS Service with an ALB or NLB?</h2>\n<p>In AWS, to create an ECS Service with an ALB or NLB, we need the following resources:</p>\n<ul>\n<li>\n<p>ALB or NLB</p>\n<ul>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/lb.html\" class=\"preview__body--description--blue\" target=\"_blank\">ALB/NLB itself</a>: This is the load balancer that receives\ninbound requests and routes them to our ECS Service.</li>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/lb_listener.html\" class=\"preview__body--description--blue\" target=\"_blank\">Load Balancer Listener</a>: An ALB/NLB will only\nlisten for incoming traffic on ports for which there is a Load Balancer Listener defined. For example, if you want\nthe ALB/NLB to accept traffic on port 80, you must define a Listener for port 80.</li>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/lb_listener_rule.html\" class=\"preview__body--description--blue\" target=\"_blank\">ALB Listener Rule (only for ALB)</a>: Once an ALB\nListener receives traffic, which <a href=\"http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html\" class=\"preview__body--description--blue\" target=\"_blank\">Target\nGroup</a> (Docker\ncontainers) should it route the requests to? We must define ALB Listener Rules that route inbound requests\nbased on either their hostname (e.g. <code>gruntwork.io</code> vs <code>amazon.com</code>), their path (e.g. <code>/foo</code> vs. 
<code>/bar</code>), or both.\nNote that for NLBs, there is only one target so this should be set directly on the listener.</li>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/lb_target_group.html\" class=\"preview__body--description--blue\" target=\"_blank\">Target Group</a>: The ALB Listener Rule (or LB\nListener for NLB) routes requests by determining a "Target Group". It then picks one of the\n<a href=\"http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#registered-targets\" class=\"preview__body--description--blue\" target=\"_blank\">Targets</a>\nin the Target Group (typically, a Docker container or EC2 Instance) as the final destination for the request.</li>\n</ul>\n</li>\n<li>\n<p>ECS Cluster</p>\n<ul>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/ecs_cluster.html\" class=\"preview__body--description--blue\" target=\"_blank\">ECS Cluster itself</a>: The ECS Cluster is where all\nour Docker containers are run.</li>\n</ul>\n</li>\n<li>\n<p>ECS Service</p>\n<ul>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/ecs_task_definition.html\" class=\"preview__body--description--blue\" target=\"_blank\">ECS Task Definition</a>: To define which Docker\nimage we want to run, how much memory/CPU to allocate it, which <code>docker run</code> command to use, environment variables,\nand <a href=\"http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html\" class=\"preview__body--description--blue\" target=\"_blank\">every other aspect of the Docker container configuration</a>,\nwe create an "ECS Task Definition". The idea behind the name is that an ECS Cluster could, in theory, run many types\nof tasks, and Docker is just one such type. 
Therefore, rather than calling tasks "Docker containers", Amazon uses\nthe name "ECS Task".</li>\n<li><a href=\"https://www.terraform.io/docs/providers/aws/r/ecs_service.html\" class=\"preview__body--description--blue\" target=\"_blank\">ECS Service itself</a>: When we want to run multiple\nECS Tasks as part of a single service (i.e. run multiple Docker containers as part of a single service), enable\nauto-restart if a container fails, and enable the ELB to automatically discover newly launched ECS Tasks, we create\nan "ECS Service".</li>\n</ul>\n</li>\n</ul>\n<p>To clarify the relationship between these entities:</p>\n<p>When creating your ALB/NLB, ECS Cluster, and ECS Service for the first time:</p>\n<ul>\n<li>First create your ALB/NLB (see module\n<a href=\"/repos/terraform-aws-load-balancer/modules/alb\" class=\"preview__body--description--blue\">alb</a> for ALBs and the <a href=\"https://www.terraform.io/docs/providers/aws/r/lb.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_lb\nresource</a>)</li>\n<li>Then create your ECS Cluster (see module <a href=\"/repos/v0.35.0/module-ecs/modules/ecs-cluster\" class=\"preview__body--description--blue\">ecs-cluster</a> for EC2 based clusters and <a href=\"https://www.terraform.io/docs/providers/aws/d/ecs_cluster.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_ecs_cluster resource</a> for Fargate)</li>\n<li>Finally, create your ECS Service (this module!)</li>\n<li>For ALBs, register listener rules to setup routing rules for your service. 
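For example (an illustrative sketch only: the listener reference, priority, and path pattern below are hypothetical, and <code>"alb"</code> is assumed to be the key you used in <code>elb_target_groups</code>), a listener rule that forwards requests to the service's target group might look like:\n<pre>resource \"aws_lb_listener_rule\" \"ecs_service\" {\n  listener_arn = aws_lb_listener.http.arn  <span class=\"hljs-comment\"># hypothetical listener</span>\n  priority     = <span class=\"hljs-number\">100</span>\n\n  action {\n    type             = <span class=\"hljs-string\">\"forward\"</span>\n    target_group_arn = module.ecs_service.target_group_arns[<span class=\"hljs-string\">\"alb\"</span>]\n  }\n\n  condition {\n    path_pattern {\n      values = [<span class=\"hljs-string\">\"/foo/*\"</span>]\n    }\n  }\n}\n</pre>\n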
For NLBs, create the listener so that it\nroutes to the target group of the service using <a href=\"https://www.terraform.io/docs/providers/aws/r/lb_listener.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_lb_listener\nresource</a>.</li>\n</ul>\n<p>When creating a new ECS Service that uses existing ALBs or NLBs and an existing ECS Cluster, you will need to set the\nfollowing inputs:</p>\n<ul>\n<li>If creating the LB and ECS service in the same module, <code>dependencies</code> should include the ALB ARN so that the module\nwaits for the LB to be created.</li>\n<li><code>elb_target_groups</code> should be set to a map of keys to objects with one mapping per desired target group. The keys in the map can be any arbitrary name and are used to link the outputs with the inputs. The values of the map are an object containing these attributes:\n<ul>\n<li>If you use <code>alb</code> as the key then you'll reference the ARN of the resulting target group like this: <code>module.ecs_service.target_group_arns["alb"]</code></li>\n<li><code>name</code> should be set to a string so that it is not null. This ensures the module creates a target\ngroup for the ECS service.</li>\n<li><code>container_name</code> and <code>container_port</code> should be set to the name of the container (as defined in the task container\ndefinition json) and port of the container. This ensures the load balancer routes to the correct container if an ECS task\nhas multiple containers.</li>\n<li><code>protocol</code> should be set to match the protocol of the LB (ex: "HTTPS" or "HTTP" for an ALB) so that it is not null.</li>\n<li><code>health_check_protocol</code> should be set to match the protocol of the ECS service (ex: "HTTPS" or "HTTP" for a typical web-based service) so that it is not null.</li>\n<li><code>load_balancing_algorithm_type</code> should be set to either "round_robin" or "least_outstanding_requests". 
It is "round_robin" by default.</li>\n</ul>\n</li>\n<li><code>elb_target_group_vpc_id</code> should be set to the VPC where the ALB lives.</li>\n</ul>\n<p>Note that:</p>\n<ul>\n<li>An ECS Cluster may have one or more ECS Services</li>\n<li>An ECS Service may be associated with zero or one ALBs/NLBs</li>\n<li>An ALB/NLB may be shared among multiple ECS Services</li>\n<li>An ALB has zero or more ALB Listeners</li>\n<li>Each ALB Listener has zero or more ALB Listener Rules</li>\n<li>Each NLB Listener has zero Listener Rules</li>\n<li>A Target Group may receive traffic from zero or more ALBs/NLBs</li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"why-doesnt-this-module-create-alb-listener-rules-http-docs-aws-amazon-com-elasticloadbalancing-latest-application-load-balancer-listeners-html-listener-rules-directly\">Why doesn't this module create <a href=\"http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#listener-rules\" class=\"preview__body--description--blue\" target=\"_blank\">ALB Listener Rules</a> directly?</h3>\n<p>In the first version of this module, we attempted to hide the creation of ALB Listener Rules from users. Our thought\nprocess was that the module's API should simplify as much as possible what was actually happening. But in practice we\nfound that there was more variation than we expected in the different routing rules that customers required, that\nsupporting any new ALB Listener Rule type (e.g. host-based routing) was cumbersome, and that by wrapping so much\ncomplexity, we ultimately created more confusion, not less.</p>\n<p>For this reason, the intent of this module is now about creating an ECS Service that is <em>ready</em> to be routed to. 
But to\ncomplete the configuration, the Terraform code that calls this module should directly create its own set of Terraform\n<a href=\"https://www.terraform.io/docs/providers/aws/r/lb_listener_rule.html\" class=\"preview__body--description--blue\" target=\"_blank\">lb_listener_rule</a> resources to meet the specific\nneeds of your ECS Cluster.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-setup-service-discovery\">How do I set up Service Discovery?</h2>\n<p>To set up ECS Service Discovery using this module, you first need to create a Service Discovery DNS Namespace (Private or\nPublic) that the Service Discovery feature can use to manage DNS records for the ECS Service. You can use the\n<a href=\"https://www.terraform.io/docs/providers/aws/r/service_discovery_private_dns_namespace.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_service_discovery_private_dns_namespace\nresource</a>\n(for private DNS namespaces) and the\n<a href=\"https://www.terraform.io/docs/providers/aws/r/service_discovery_public_dns_namespace.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_service_discovery_public_dns_namespace resource</a>\n(for public DNS namespaces).</p>\n<p>Once the namespace is created, you need to pass in the following inputs to the module:</p>\n<ul>\n<li>Service Discovery currently only works with the <code>awsvpc</code> network mode. This means that you need to set\n<code>ecs_task_definition_network_mode</code> to <code>"awsvpc"</code> and configure the service network using\n<code>ecs_service_network_configuration</code>.</li>\n<li><code>use_service_discovery</code> should be set to <code>true</code>. 
This ensures the module will connect the ECS service with the\nprovided registry information.</li>\n<li><code>discovery_namespace_id</code> should be set to the ID of the DNS namespace.</li>\n<li><code>discovery_name</code> should be set to the string you wish to use as the DNS subdomain.</li>\n</ul>\n<p>Additionally, for public DNS namespaces, you will also need to provide the ID of the Route 53 Hosted Zone that is\nassociated with the registrar for the domain. When you create a public DNS namespace, it creates a new Hosted Zone that\nis not associated with the registrar. This means that DNS calls outside of the VPC will not actually resolve to the ECS\nservice. To allow DNS queries for the ECS service to resolve, we need to create an alias record on the Hosted Zone that is\nassociated with the registrar to route to the namespace DNS record. This module will create this record for you if you\nprovide the following inputs:</p>\n<ul>\n<li><code>discovery_use_public_dns</code> should be set to <code>true</code>.</li>\n<li><code>discovery_original_public_route53_zone_id</code> should be set to the ID of the Route 53 Hosted Zone that is associated\nwith the registrar.</li>\n<li><code>discovery_public_dns_namespace_route53_zone_id</code> should be set to the ID of the Hosted Zone that is associated with\nthe DNS namespace.</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-i-set-up-app-mesh\">How do I set up App Mesh?</h2>\n<p>To set up App Mesh using this module, you must first create a mesh, a virtual service, and a virtual node or virtual router. Creation of these resources\nis documented in AWS App Mesh <a href=\"https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-ecs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Getting Started documentation</a>. 
<a href=\"https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appmesh_mesh\" class=\"preview__body--description--blue\" target=\"_blank\">Terraform modules</a> are available for all App Mesh resources.</p>\n<p>With those resources set up, the Envoy container can be added to <code>container_definitions</code>.</p>\n<pre><span class=\"hljs-attr\">container_definitions</span> = [\n {\n <span class=\"hljs-attr\">name</span> = <span class=\"hljs-string\">\"envoy_proxy\"</span>,\n <span class=\"hljs-attr\">image</span> = <span class=\"hljs-string\">\"840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.19.0.0-prod\"</span>,\n <span class=\"hljs-attr\">essential</span> = <span class=\"hljs-literal\">true</span>,\n <span class=\"hljs-attr\">environment</span> = [{\n <span class=\"hljs-attr\">name</span> = <span class=\"hljs-string\">\"APPMESH_RESOURCE_ARN\"</span>,\n <span class=\"hljs-attr\">value</span> = <span class=\"hljs-string\">\"arn:aws:appmesh:us-west-2:111122223333:mesh/apps/virtualNode/serviceB\"</span>\n }],\n <span class=\"hljs-attr\">healthCheck</span> = {\n <span class=\"hljs-attr\">command</span> = [\n <span class=\"hljs-string\">\"CMD-SHELL\"</span>,\n <span class=\"hljs-string\">\"curl -s http://localhost:9901/server_info | grep state | grep -q LIVE\"</span>\n ],\n <span class=\"hljs-attr\">startPeriod</span> = <span class=\"hljs-number\">10</span>,\n <span class=\"hljs-attr\">interval</span> = <span class=\"hljs-number\">5</span>,\n <span class=\"hljs-attr\">timeout</span> = <span class=\"hljs-number\">2</span>,\n <span class=\"hljs-attr\">retries</span> = <span class=\"hljs-number\">3</span>\n },\n <span class=\"hljs-attr\">user</span> = <span class=\"hljs-string\">\"1337\"</span>\n },\n {\n <span class=\"hljs-comment\"># App container</span>\n <span class=\"hljs-comment\"># ...</span>\n <span class=\"hljs-attr\">dependsOn</span> = [{\n <span class=\"hljs-attr\">containerName</span> = <span 
class=\"hljs-string\">\"envoy_proxy\"</span>\n <span class=\"hljs-attr\">condition</span> = <span class=\"hljs-string\">\"HEALTHY\"</span>\n }]\n }\n]\n\n</pre>\n<p>The <code>proxy_configuration</code> variable needs to be configured as follows:</p>\n<pre>proxy_configuration = {\n <span class=\"hljs-built_in\"> type </span> = <span class=\"hljs-string\">\"APPMESH\"</span>\n container_name = <span class=\"hljs-string\">\"envoy_proxy\"</span>\n properties = {\n AppPorts = <span class=\"hljs-string\">\"8080\"</span>\n EgressIgnoredIPs = <span class=\"hljs-string\">\"169.254.170.2,169.254.169.254\"</span>\n IgnoredUID = <span class=\"hljs-string\">\"1337\"</span>\n ProxyEgressPort = 15001\n ProxyIngressPort = 15000\n }\n}\n</pre>\n<p>See the <a href=\"https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-ecs.html#update-services\" class=\"preview__body--description--blue\" target=\"_blank\">AWS documentation</a> for container and proxy configurations. See the "Task definition json" section for more information.</p>\n<h2 class=\"preview__body--subtitle\" id=\"known-issues\">Known Issues</h2>\n<h3 class=\"preview__body--subtitle\" id=\"switching-the-value-of-var-use-auto-scaling\">Switching the value of <code>var.use_auto_scaling</code></h3>\n<p>If you switch <code>var.use_auto_scaling</code> from true to false or vice versa, Terraform will attempt to destroy and\nre-create the <code>aws_ecs_service</code> which has a chain of dependencies that eventually lead to destroying and re-creating\nthe ECS Service, which will lead to downtime. This is because we conditionally create Terraform resources depending on\nthe value of<code>var.use_auto_scaling</code>, and Terraform can't fully incorporate this concept into its dependency graph.</p>\n<p>Fortunately, there's a workaround using manual state manipulation. 
We'll tell Terraform that the old resource is now\nthe new one as follows.</p>\n<pre><span class=\"hljs-comment\"># If you are changing var.use_auto_scaling from TRUE to FALSE:</span>\n<span class=\"hljs-keyword\">terraform</span> state mv <span class=\"hljs-keyword\">module</span>.ecs_service.aws_ecs_service.service_with_auto_scaling <span class=\"hljs-keyword\">module</span>.ecs_service.aws_ecs_service.service_without_auto_scaling\n\n<span class=\"hljs-comment\"># If you are changing var.use_auto_scaling from FALSE to TRUE:</span>\n<span class=\"hljs-keyword\">terraform</span> state mv <span class=\"hljs-keyword\">module</span>.ecs_service.aws_ecs_service.service_without_auto_scaling <span class=\"hljs-keyword\">module</span>.ecs_service.aws_ecs_service.service_with_auto_scaling\n</pre>\n<p>Now run <code>terragrunt plan</code> to confirm that Terraform will only make modifications.</p>\n<h3 class=\"preview__body--subtitle\" id=\"gotchas-with-service-discovery\">Gotchas with Service Discovery</h3>\n<ul>\n<li>The ECS Service Discovery feature is not yet available in all regions.\nFor a list of regions where this feature is enabled, please see the <a href=\"https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html\" class=\"preview__body--description--blue\" target=\"_blank\">AWS ECS Service Discovery documentation</a>.</li>\n<li>The discovery name is not necessarily the same as the name of your service. You can have a different name by which you want to discover your service.</li>\n<li>You can enable ECS Service Discovery only during the creation of your ECS service, not when updating it.</li>\n<li>The network mode of the task definition affects the behavior and configuration of ECS Service Discovery DNS Records.\n<ul>\n<li>Service discovery with <code>SRV</code> DNS records are not yet supported by this module. 
This means that tasks defined with <code>host</code> or <code>bridge</code> network modes that can only be used with this type of record are also not supported.</li>\n<li>For enabling service discovery, this module uses the <code>awsvpc</code> network mode. AWS will attach an Elastic Network Interface to your task, so you have to be aware that EC2 instance types have a <a href=\"https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI\" class=\"preview__body--description--blue\" target=\"_blank\">limit of how many ENIs can be attached to them</a>.</li>\n</ul>\n</li>\n<li>For service discovery with public DNS: The hostname is public (e.g. your-company.com), but it still points to a private IP address. Querying a public hostname that points to a private IP address might sometimes yield empty results and you might be required to force reading from a specific nameserver (such as an Amazon nameserver like ns-67.awsdns-08.com or Google's public nameserver), for example: <code>dig +short @8.8.8.8 my-service.my-company.com</code></li>\n<li>In the <code>aws_lb_target_group</code>, the <code>port = 80</code> field is merely a placeholder. The actual port is determined dynamically when a container launches, but the resource requires a value. 
The <code>port = 80</code> argument can be safely ignored.</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"related-concepts\">Related Concepts</h2>\n<h3 class=\"preview__body--subtitle\" id=\"ecs-clusters\">ECS clusters</h3>\n<p>See the <a href=\"/repos/v0.35.0/module-ecs/modules/ecs-cluster\" class=\"preview__body--description--blue\">ecs-cluster module</a>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"ecs-services-and-tasks\">ECS services and tasks</h3>\n<p>See the <a href=\"/repos/v0.35.0/module-ecs/modules/ecs-service\" class=\"preview__body--description--blue\">ecs-service module</a>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"route-53-auto-naming-service\">Route 53 Auto Naming Service</h3>\n<p>Amazon Route 53 auto naming service automates the process of:</p>\n<ul>\n<li>Creating a public or private namespace within a new or existing hosted zone</li>\n<li>Providing a service with the DNS Records configuration and optional health checks</li>\n</ul>\n<p>The latter will be used in the Service Registry of your ECS Service Discovery, and it is the only type of service currently supported for this.</p>\n<p>Important considerations:</p>\n<ul>\n<li>Public namespaces are accessible on the internet and need the domain to be registered already</li>\n<li>Private namespaces are accessible only within your VPC and can be queried immediately</li>\n<li>For cleaning up, deregistering the instances from the auto naming service will trigger an automatic deletion of resources in AWS. However, the namespaces themselves are not deleted. 
Namespaces must be deleted manually and that is only allowed once all services in that namespace no longer exist.</li>\n</ul>\n<p>For more information on Route 53 Auto Naming Service, please see the AWS documentation on <a href=\"https://docs.aws.amazon.com/Route53/latest/APIReference/overview-service-discovery.html\" class=\"preview__body--description--blue\" target=\"_blank\">Using Auto Naming for Service Discovery</a>.</p>