Server Group Module
This module allows you to run a fixed-size cluster of servers that can:
Attach EBS Volumes to each server.
Attach Elastic Network Interfaces (ENIs) to each server.
Do a zero-downtime, rolling deployment, where each server is shut down, the EBS Volume and/or ENI detached, and a
new server is brought up that reattaches the EBS Volume and/or ENI.
Integrate with an Application Load Balancer (ALB) or Elastic Load Balancer (ELB) for routing and health checks.
Automatically replace failed servers.
The main use case for this module is to run data stores such as MongoDB and ZooKeeper. See the Background
section to understand how this module works and in what use cases you should use it instead of an Auto
Scaling Group (ASG).
Quick start
Check out the server-group examples for sample code that demonstrates how to use this module.
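As with all Terraform modules, you include this one in your code using the module keyword, pointing the source
URL at this repo (the values below are illustrative; update the ref to the version you are using):

```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  name          = "my-server-group"
  size          = 3
  instance_type = "t3.micro"
  ami_id        = "ami-abcd1234"

  aws_region = "us-east-1"
  vpc_id     = "vpc-abcd12345"
  subnet_ids = ["subnet-abcd1111", "subnet-abcd2222", "subnet-abcd3333"]
}
```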
The code above will spin up 3 t3.micro servers in the specified VPC and subnets. Any server that fails EC2 status
checks will be automatically replaced. Any time you update any of the parameters and run terraform apply, it will
kick off a zero-downtime rolling deployment (see How does rolling deployment work?
for details).
Optionally integrate a load balancer for health checks
By default, the server-group module uses EC2 status checks to determine server health. This is used both during a
rolling deployment (i.e., only replace the next server when the previous server is healthy) and for auto-recovery
(i.e., replace any server that has failed). While EC2 status checks are good enough to detect when the EC2 Instance
has completely died or is malfunctioning, they do NOT determine if the code running on that EC2 Instance is actually
working (e.g., is your database or application actually running and capable of serving traffic).
Therefore, we strongly recommend associating a load balancer with your server-group. The load balancer can perform
health checks on your application code by actually making HTTP or TCP requests to the application, which is a far more
robust way to tell if the server is healthy.
Here is how to associate an ELB with your server-group and use it for health checks:
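For example (parameter values are illustrative):

```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  # (other params omitted)

  health_check_type = "ELB"
  elb_names         = ["${aws_elb.my_elb.name}"]
}
```

And here is how to associate an ALB with your server-group module and use it for health checks:

```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  # (other params omitted)

  health_check_type     = "ELB"
  alb_target_group_arns = ["${aws_alb_target_group.my_target_group.arn}"]
}
```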
Note: The health_check_type value above is not a typo, it should be ELB in both cases.
Once you've associated a load balancer with your server-group, new servers will automatically register with the load
balancer while deploying, deregister while undeploying, and use the load balancer's health checks to determine when a
server is healthy or needs replacing.
Optionally create ENIs and EBS Volumes for each server
By default, the server-group module does not create any ENIs or EBS Volumes. If you would like to create ENIs, set the
num_enis parameter to the number of ENIs you want per server:
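```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  # (other params omitted)

  num_enis = 1
}
```

If you would like to create EBS Volumes, set the ebs_volumes parameter to a list of volumes to create for each
server:

```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  # (other params omitted)

  ebs_volumes = [
    {
      type      = "gp2"
      size      = 100
      encrypted = false
    },
    {
      type      = "standard"
      size      = 500
      encrypted = true
    },
    {
      type      = "io1"
      size      = 500
      iops      = 2000
      encrypted = true
    },
  ]
}
```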
Note: When using an io1 disk type, the iops parameter must be specified.
Each ENI and server pair will get a matching eni-xxx tag (e.g., eni-0, eni-1, etc). Each EBS Volume and server
pair will get a matching ebs-volume-xxx tag (e.g., ebs-volume-0, ebs-volume-1, etc). You will need to attach
these ENIs and Volumes while your server is booting, as described in the next section.
If you created ENIs, optionally create DNS records
You may wish to have a DNS record associated with each ENI. This has the special advantage that even if a server is
replaced, the new server will attach the existing ENI and retain the same IP address! This means that the DNS record
will be permanently valid as long as the Server Group size does not shrink.
If you would like to create DNS records, set the route53_hosted_zone_id parameter to the Route53 Hosted Zone where DNS records
should be created and the dns_name_common_portion parameter to the common portion of the DNS name to be shared by each server in the
Server Group. For example:
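```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1"

  # (other params omitted)

  size                    = 3
  route53_hosted_zone_id  = "<obtain-this-from-another-terraform-module>"
  dns_name_common_portion = "kafka.internal"
}
```

This will create the following DNS records, each pointing at one server's ENI:

```
0.kafka.internal
1.kafka.internal
2.kafka.internal
```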
Attach an ENI and EBS Volume during boot
While the server-group module can create ENIs and EBS Volumes for you, you have to attach them to your servers
yourself. The easiest way to do that is to use the attach-eni and persistent-ebs-volume modules from
terraform-aws-server. Here's how it works:
Install the attach-eni and/or persistent-ebs-volume modules in the AMI that gets deployed in your server-group.
The easiest way to do this is to use the Gruntwork Installer
in a Packer template:
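```shell
gruntwork-install --module-name 'persistent-ebs-volume' --repo 'https://github.com/gruntwork-io/terraform-aws-server' --tag 'v0.8.0'
gruntwork-install --module-name 'attach-eni' --repo 'https://github.com/gruntwork-io/terraform-aws-server' --tag 'v0.8.0'
```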
Run the attach-eni and/or mount-ebs-volume scripts while each server is booting, typically as part of
User Data. You can use the
--eni-with-same-tag and --volume-with-same-tag parameters of the scripts, respectively, to automatically mount
the ENIs and/or EBS Volumes with the same eni-xxx and ebs-volume-xxx tags as the server.
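Put together, a sketch of a User Data script that does this (flags beyond --eni-with-same-tag and
--volume-with-same-tag, such as the device name and mount point, are assumptions for illustration; check each
script's --help for its exact interface):

```bash
#!/usr/bin/env bash
# Hypothetical User Data sketch: attach the ENI and mount the EBS Volume
# whose eni-xxx / ebs-volume-xxx tags match this server's tags.
set -e

attach-eni --eni-with-same-tag

# Flags other than --volume-with-same-tag are illustrative assumptions.
mount-ebs-volume \
  --volume-with-same-tag \
  --device-name /dev/xvdf \
  --mount-point /data
```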
Optionally Order the Deployment of Other Terraform Resources
There are times when you may wish to block a Terraform resource from being created until the resources deployed by this
module are finished. For example, when you deploy a Kafka cluster, you also need to deploy
a Zookeeper cluster and the Kafka cluster cannot boot until the Zookeeper cluster is
fully booted. To avoid messy log entries of Kafka failing while Zookeeper boots, you could just start the creation of the
Kafka cluster after the Zookeeper cluster has finished booting.
Unfortunately, as of June 7, 2018, Terraform does not support module dependencies, so we have to hack in this support
by making clever use of module outputs and inputs (variables).
Here's how to use the ordering feature of the server-group module:
Suppose you have two Terraform modules: Module A and Module B, both of which are instances of this server-group
module. You want Module B to be created after Module A.
The following code will achieve the desired ordering:
module"a" {
# Be sure to update to the latest version of this module
source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8"
...
}
module"b" {
source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8"
wait_for = "${module.a.rolling_deployment_done}"
...
}
Make sure that you specifically use the rolling_deployment_done output value of Module A, not just any arbitrary output
value.
With this pattern, Module A will now fully deploy, and only then will Module B create its Launch Configuration and Auto Scaling Group and
begin the rolling deploy.
Why not an Auto Scaling Group?
The first question you may ask is, how is this different than an Auto Scaling Group
(ASG)? While an ASG does allow you to
run a cluster of servers, automatically replace failed servers, and do zero-downtime deployment (see the
asg-rolling-deploy module), attaching ENIs and EBS Volumes to servers in an ASG is very
tricky:
Using ENIs and EBS Volumes with ASGs is not natively supported by Terraform. The
aws_network_interface_attachment
and aws_volume_attachment resources only
work with individual EC2 Instances and not ASGs. Therefore, you typically create a pool of ENIs and EBS Volumes in
Terraform, and your servers, while booting, use the AWS CLI to attach those ENIs and EBS Volumes.
Attaching ENIs and EBS Volumes from a pool requires that each server has a way to uniquely pick which ENI or EBS
Volume belongs to it. Picking at random and retrying can be slow and error prone.
With EBS Volumes, attaching them from an ASG is particularly problematic, as you can only attach an EBS Volume in
the same Availability Zone (AZ) as the server. If you have, for example, three AZs and five servers, it's entirely
possible that the ASG will launch a server in an AZ that does not have any EBS Volumes available.
The goal of this module is to give you a way to run a cluster of servers where attaching ENIs and EBS Volumes is easy.
How does this module work?
The solution used in this module is to:
Create one ASG for each server. So if you create a cluster with five servers, you'll end up with five ASGs. Using
ASGs gives us the ability to automatically integrate with the ALB and ELB and to replace failed servers.
Each ASG is assigned to exactly one subnet, and therefore, one AZ.
Create ENIs and EBS Volumes for each server, in the same AZ as that server's ASG. This ensures a server will
never launch in an AZ that doesn't have an EBS Volume.
Each server and ENI pair and each server and EBS Volume pair get matching tags, so each server can always uniquely
identify the ENIs and EBS Volumes that belong to it.
How does rolling deployment work?
The server-group module will perform a zero-downtime, rolling deployment every time you make a change to the code and
run terraform apply. This deployment process is implemented in a Python script called
rolling_deployment.py, which runs in a local-exec
provisioner. The script works as follows:
1. Wait for the server-group to be healthy before starting the deployment. That means the server-group has the
expected number of servers up and running and passing health checks.
2. Pick one of the ASGs in the server-group and set its size to 0. This will terminate the Instance in that ASG,
respecting any connection draining settings you may have set up. It will also detach any ENI or EBS Volume.
3. Once the instance has terminated, set the ASG size back to 1. This will launch a new Instance with your new code
and reattach its ENI or EBS Volume.
4. Wait for the new Instance to pass health checks.
5. Once the Instance is healthy, repeat the process with the next ASG, until all ASGs have been redeployed.
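The per-ASG loop above can be sketched in Python (a simplified model with hypothetical callback parameters; the real
logic lives in rolling_deployment.py and drives the AWS APIs via boto3):

```python
def rolling_deploy(asgs, set_asg_size, is_healthy, batch_size=1):
    """Redeploy one batch of single-instance ASGs at a time.

    asgs: list of ASG names; set_asg_size(asg, n): callback that resizes an ASG;
    is_healthy(asg): callback that reports whether the ASG's instance passes
    health checks. Returns the batches in the order they were redeployed.
    """
    deployed = []
    for i in range(0, len(asgs), batch_size):
        batch = asgs[i:i + batch_size]
        # 1. Terminate the old instance; size 0 also detaches any ENI/EBS Volume.
        for asg in batch:
            set_asg_size(asg, 0)
        # 2. Launch the replacement, which reattaches the ENI/EBS Volume.
        for asg in batch:
            set_asg_size(asg, 1)
        # 3. Wait for the new instance to pass health checks before moving on.
        for asg in batch:
            while not is_healthy(asg):
                pass
        deployed.append(list(batch))
    return deployed
```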
Deployment configuration options
You can customize the way the rolling deployment works by specifying the following parameters to the server-group
module in your Terraform code:
script_log_level: Specify the logging level the
script should use. Default is INFO. To debug issues, you may want to turn it up to DEBUG. To quiet the script
down, you may want to turn it down to ERROR.
deployment_batch_size: How many servers to redeploy at a time. The default is 1. If you have a lot of servers to
redeploy, you may want to increase this number to do the deployment in larger batches. However, make sure that taking
down a batch of this size does not cause an unintended outage for your service!
skip_health_check: If set to true, the rolling deployment process will not wait for the server-group to be
healthy before starting the deployment. This is useful if your server-group is already experiencing some sort of
downtime or problem and you want to force a deployment as a way to fix it.
skip_rolling_deploy: If set to true, skip the rolling deployment process entirely. That means your Terraform
changes will be applied to the launch configuration underneath the ASGs, but no new code will be deployed until
something triggers the ASG to launch new instances. This is primarily useful if the rolling deployment script turns
out to have some sort of bug in it.
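In Terraform, these options might be set like this (values are illustrative):

```hcl
module "servers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8"

  # (other params omitted)

  script_log_level      = "DEBUG"
  deployment_batch_size = 2
  skip_health_check     = false
  skip_rolling_deploy   = false
}
```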
Questions? Ask away.
We're here to talk about our services, answer any questions, give advice, or just to chat.
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"b94d0a76cfd53c1c265cecdfb7fc5709783c6cb5"}]},{"name":".gitignore","path":".gitignore","sha":"fd04ff401a18d9c8595968dbbd3a9996d37b6a8b"},{"name":".pre-commit-config.yaml","path":".pre-commit-config.yaml","sha":"addd5d0b1e36748c1c8c751c3fa7755f5dd2522d"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"555c0c6e23a7502acbef94fb0b77bfa759ba11e8"},{"name":"CONTRIBUTING.md","path":"CONTRIBUTING.md","sha":"2fb126e11410f30d644f9219847f0a24a52ef4dc"},{"name":"LICENSE.txt","path":"LICENSE.txt","sha":"f4e3d9bd4717a044ed31ad847a300eee74371a78"},{"name":"README.md","path":"README.md","sha":"f8cc0a2b41af68c7781d726a6adb5c68f97ba274"},{"name":"examples","children":[{"name":"asg-rolling-deploy","children":[{"name":"README.md","path":"examples/asg-rolling-deploy/README.md","sha":"f5f2a8e2db00bb7c95975437ac913ef2f9769f7c"},{"name":"with-elb","children":[{"name":"main.tf","path":"examples/asg-rolling-deploy/with-elb/main.tf","sha":"3b6af13eb434d64ebace8b5b95752c6e9d45387d"},{"name":"outputs.tf","path":"examples/asg-rolling-deploy/with-elb/outputs.tf","sha":"330a02ec9378c2c9c4a1423b075384f4ae3ed241"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/asg-rolling-deploy/with-elb/user-data/user-data.sh","sha":"7b5fbe6f33805eb5356e9c49db9bd5b141b0816a"}]},{"name":"vars.tf","path":"examples/asg-rolling-deploy/with-elb/vars.tf","sha":"dab89e417460b5fa2676796e945acf27803c6553"}]},{"name":"without-elb","children":[{"name":"main.tf","path":"examples/asg-rolling-deploy/without-elb/main.tf","sha":"04a3e849d0484816b5c7144c0a52276fadf8af6e"},{"name":"outputs.tf","path":"examples/asg-rolling-deploy/without-elb/outputs.tf","sha":"c8db3c807aab3d75888c2dd039b9e81b1312a137"},{"name":"vars.tf","path":"examples/asg-rolling-deploy/without-elb/vars.tf","sha":"1d6d2ee9904841723c7e7c29d9fe5e12e7f6fd6a"}]}]},{"name":"server-group","children":[{"nam
e":"README.md","path":"examples/server-group/README.md","sha":"63c6917a20e10cb2b569e5314bbf3d4ec38ff491"},{"name":"ami","children":[{"name":"server.json","path":"examples/server-group/ami/server.json","sha":"bb08fa43aa2c146a2280af4df25fabc1a39cf65f"}]},{"name":"with-alb","children":[{"name":"main.tf","path":"examples/server-group/with-alb/main.tf","sha":"fac9c2eb58f114d0504d2a95a7eecf5c583314c2"},{"name":"outputs.tf","path":"examples/server-group/with-alb/outputs.tf","sha":"3565caece2f11d61dc0b0a6ed23cc58ce7e6e61c"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/server-group/with-alb/user-data/user-data.sh","sha":"064d042a5d0ba6956f4eb8d7ece309b4d6eb4b33"}]},{"name":"vars.tf","path":"examples/server-group/with-alb/vars.tf","sha":"dc708850a0e1169ba4101135f6a707e79bd82850"}]},{"name":"with-elb","children":[{"name":"main.tf","path":"examples/server-group/with-elb/main.tf","sha":"02072d1d82f8f4b87485dab724e562a1513a49f5"},{"name":"outputs.tf","path":"examples/server-group/with-elb/outputs.tf","sha":"fd38915d96770f9588e2fcc79369e64449492286"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/server-group/with-elb/user-data/user-data.sh","sha":"f0b175520e85da002b8c26a4a92347fb1eaa1d13"}]},{"name":"vars.tf","path":"examples/server-group/with-elb/vars.tf","sha":"4eb70bfff00b837c8404d044fee311b16e0e0c73"}]},{"name":"without-load-balancer","children":[{"name":"main.tf","path":"examples/server-group/without-load-balancer/main.tf","sha":"36c01e4999e6198d81987ce01383361fcc19d9e8"},{"name":"outputs.tf","path":"examples/server-group/without-load-balancer/outputs.tf","sha":"27911554f10688f23ba9f8e31eadd4409c635f97"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/server-group/without-load-balancer/user-data/user-data.sh","sha":"064d042a5d0ba6956f4eb8d7ece309b4d6eb4b33"}]},{"name":"vars.tf","path":"examples/server-group/without-load-balancer/vars.tf","sha":"0724aa38905b313165b621e7866575e0e0abdd4b"}]}]}]},{"name"
:"modules","children":[{"name":"asg-rolling-deploy","children":[{"name":"README.md","path":"modules/asg-rolling-deploy/README.md","sha":"fbbc6657f0cef2493e9fdb7b42549d69e5ff4080"},{"name":"describe-autoscaling-group","children":[{"name":"README.md","path":"modules/asg-rolling-deploy/describe-autoscaling-group/README.md","sha":"062e4ebc0b65610874998a354f441f56114b4e7e"},{"name":"boto3-1.7.10.zip","path":"modules/asg-rolling-deploy/describe-autoscaling-group/boto3-1.7.10.zip","sha":"4b76be11cfa98ddb4314e11a0b28700a11cd2fcc"},{"name":"get-desired-capacity.py","path":"modules/asg-rolling-deploy/describe-autoscaling-group/get-desired-capacity.py","sha":"a8e429f631655ba95eebc11a76fa4100b78eb4a6"}]},{"name":"main.tf","path":"modules/asg-rolling-deploy/main.tf","sha":"5a50b13f6a3eb8f096f4533f0c56fca775dd46de"},{"name":"outputs.tf","path":"modules/asg-rolling-deploy/outputs.tf","sha":"5225c1f98cfc9f91411d91eae7bd692168ea8f4c"},{"name":"vars.tf","path":"modules/asg-rolling-deploy/vars.tf","sha":"5de5dd0407b049772956cdd0c742d63e2eb4bd8d"}]},{"name":"server-group","children":[{"name":"README.md","path":"modules/server-group/README.md","sha":"13d133fb833f3298f4e4755ad9fe0d767a0682ad","toggled":true},{"name":"main.tf","path":"modules/server-group/main.tf","sha":"73cf9740aee0dd7a3bdd14f834c68e810d006ecd"},{"name":"outputs.tf","path":"modules/server-group/outputs.tf","sha":"42217027f4a9807a5eae6e786b7a1cd0f6976137"},{"name":"rolling-deploy","children":[{"name":"boto3-1.7.10.zip","path":"modules/server-group/rolling-deploy/boto3-1.7.10.zip","sha":"852dcda88e4e760ce8bdb5c56823f08659959a50"},{"name":"helpers.py","path":"modules/server-group/rolling-deploy/helpers.py","sha":"35dc0d9d154895e3ede805fcb72a7fd6ac8c7c1f"},{"name":"rolling_deployment.py","path":"modules/server-group/rolling-deploy/rolling_deployment.py","sha":"e63f3e9a2072daf9d261b739ea9b6eddb2a95f0f"}]},{"name":"vars.tf","path":"modules/server-group/vars.tf","sha":"0e1897c77a9f88117e5c804acfd7cf5e1d52d72c"}],"toggled":true}
],"toggled":true},{"name":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","path":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","sha":"ae586c0fe830819580e1009d41a9074f16e65bed"},{"name":"test","children":[{"name":"README.md","path":"test/README.md","sha":"cfa55a38cc6fbd09a311291216eb758159973629"},{"name":"asg_rolling_deploy_test.go","path":"test/asg_rolling_deploy_test.go","sha":"ca7bc5e7ac8b57f3eaee07f4f7417d1ab1e67cd3"},{"name":"go.mod","path":"test/go.mod","sha":"bcd74270514848b3d05920090296c46c3c992d6a"},{"name":"go.sum","path":"test/go.sum","sha":"ffcbd50ec065ec260e1e7c32a867d8acc8b25c58"},{"name":"server_group_test.go","path":"test/server_group_test.go","sha":"7248bb6bbb54a515d8ec9363febf4b4fed90390b"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"6541cfcd06db09ede8a03dab111f93baea44e51a"}]}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"server-group-module\">Server Group Module</h1><div class=\"preview__body--border\"></div><p>This module allows you to run a fixed-size cluster of servers that can:</p>\n<ol>\n<li>Attach <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html\" class=\"preview__body--description--blue\" target=\"_blank\">EBS Volumes</a> to each server.</li>\n<li>Attach <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html\" class=\"preview__body--description--blue\" target=\"_blank\">Elastic Network Interfaces (ENIs)</a> to\neach server.</li>\n<li>Do a zero-downtime, rolling deployment, where each server is shut down, the EBS Volume and/or ENI detached, and new\nserver is brought up that reattaches the EBS Volume and/or ENI.</li>\n<li>Integrate with an <a href=\"https://aws.amazon.com/elasticloadbalancing/applicationloadbalancer/\" class=\"preview__body--description--blue\" target=\"_blank\">Application Load Balancer\n(ALB)</a> or <a href=\"https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/\" 
class=\"preview__body--description--blue\" target=\"_blank\">Elastic Load Balancer\n(ELB)</a> for routing and health checks.</li>\n<li>Automatically replace failed servers.</li>\n</ol>\n<p>The main use case for this module is to run data stores such as MongoDB and ZooKeeper. See the <a href=\"#background\" class=\"preview__body--description--blue\">Background\nsection</a> to understand how this module works and in what use cases you should use it instead of an Auto\nScaling Group (ASG).</p>\n<h2 class=\"preview__body--subtitle\" id=\"quick-start\">Quick start</h2>\n<p>Check out the <a href=\"/repos/v0.14.3/module-asg/examples/server-group\" class=\"preview__body--description--blue\">server-group examples</a> for sample code that demonstrates how to use this module.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-use-this-module\">How do you use this module?</h2>\n<p>To use this module, you need to do the following:</p>\n<ol>\n<li><a href=\"#add-the-module-to-your-terraform-code\" class=\"preview__body--description--blue\">Add the module to your Terraform code</a></li>\n<li><a href=\"#optionally-create-enis-and-ebs-volumes-for-each-server\" class=\"preview__body--description--blue\">Optionally create ENIs and EBS Volumes for each server</a></li>\n<li><a href=\"#if-you-created-enis-optionally-create-dns-records\" class=\"preview__body--description--blue\">If you created ENIs, optionally create DNS records</a></li>\n<li><a href=\"#optionally-integrate-a-load-balancer-for-health-checks\" class=\"preview__body--description--blue\">Optionally integrate a load balancer for health checks</a></li>\n<li><a href=\"#attach-an-eni-and-ebs-volume-during-boot\" class=\"preview__body--description--blue\">Attach an ENI and EBS Volume during boot</a></li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"add-the-module-to-your-terraform-code\">Add the module to your Terraform code</h3>\n<p>As with all Terraform modules, you include this one in your code using the 
<code>module</code> keyword and pointing the <code>source</code>\nURL at this repo:</p>\n<pre>module <span class=\"hljs-string\">\"servers\"</span> {\n <span class=\"hljs-attr\">source</span> = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"</span>\n\n <span class=\"hljs-attr\">name</span> = <span class=\"hljs-string\">\"my-server-group\"</span>\n <span class=\"hljs-attr\">size</span> = <span class=\"hljs-number\">3</span>\n <span class=\"hljs-attr\">instance_type</span> = <span class=\"hljs-string\">\"t3.micro\"</span>\n <span class=\"hljs-attr\">ami_id</span> = <span class=\"hljs-string\">\"ami-abcd1234\"</span>\n\n <span class=\"hljs-attr\">aws_region</span> = <span class=\"hljs-string\">\"us-east-1\"</span>\n <span class=\"hljs-attr\">vpc_id</span> = <span class=\"hljs-string\">\"vpc-abcd12345\"</span>\n <span class=\"hljs-attr\">subnet_ids</span> = [<span class=\"hljs-string\">\"subnet-abcd1111\"</span>, <span class=\"hljs-string\">\"subnet-abcd2222\"</span>, <span class=\"hljs-string\">\"subnet-abcd3333\"</span>]\n}\n</pre>\n<p>The code above will spin up 3 <code>t3.micro</code> servers in the specified VPC and subnets. Any server that fails EC2 status\nchecks will be automatically replaced. Any time you update any of the parameters and run <code>terraform apply</code>, it will\nkick off a zero-downtime rolling deployment (see <a href=\"#how-does-rolling-deployment-work\" class=\"preview__body--description--blue\">How does rolling deployment work?</a>\nfor details).</p>\n<h3 class=\"preview__body--subtitle\" id=\"optionally-integrate-a-load-balancer-for-health-checks\">Optionally integrate a load balancer for health checks</h3>\n<p>By default, the server-group module uses <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html\" class=\"preview__body--description--blue\" target=\"_blank\">EC2 status\nchecks</a> to determine\nserver health. 
This is used both during a rolling deployment (i.e., only replace the next server when the previous\nserver is healthy) and for auto-recovery (i.e., replace any server that has failed). While EC2 status checks are good\nenough to detect when the EC2 Instance has completely died or is malfunctioning, they do NOT determine if the code\nrunning on that EC2 Instance is actually working (e.g., is your database or application actually running and capable\nof serving traffic).</p>\n<p>Therefore, we strongly recommend associating a load balancer with your server-group. The load balancer can perform\nhealth checks on your application code by actually making HTTP or TCP requests to the application, which is a far more\nrobust way to tell if the server is healthy.</p>\n<p>Here is how to associate an ELB with your server-group and use it for health checks:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"servers\"</span> {\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"</span>\n \n <span class=\"hljs-comment\"># (other params omitted)</span>\n \n health_check_type = <span class=\"hljs-string\">\"ELB\"</span>\n elb_names = [<span class=\"hljs-string\">\"<span class=\"hljs-variable\">${aws_elb.my_elb.name}</span>\"</span>]\n}\n</pre>\n<p>And here is how to associate an ALB with your server-group module and use it for health checks:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"servers\"</span> {\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"</span>\n \n <span class=\"hljs-comment\"># (other params omitted)</span>\n \n health_check_type = <span class=\"hljs-string\">\"ELB\"</span>\n alb_target_group_arns = [<span class=\"hljs-string\">\"<span 
class=\"hljs-variable\">${aws_alb_target_group.my_target_group.arn}</span>\"</span>]\n}\n</pre>\n<p><em>Note: The <code>health_check_type</code> value above is not a typo, it should be <code>ELB</code> in both cases.</em></p>\n<p>Once you've associated a load balancer with your server-group, new servers will automatically register with the load\nbalancer while deploying, deregister while undeploying, and use the load balancer's health checks to determine when a\nserver is healthy or needs replacing.</p>\n<h3 class=\"preview__body--subtitle\" id=\"optionally-create-en-is-and-ebs-volumes-for-each-server\">Optionally create ENIs and EBS Volumes for each server</h3>\n<p>By default, the server-group module does not create any ENIs or EBS Volumes. If you would like to create ENIs, set the\n<code>num_enis</code> parameter to the number of ENIs you want per server:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"servers\"</span> {\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"</span>\n \n <span class=\"hljs-comment\"># (other params omitted)</span>\n \n num_enis = <span class=\"hljs-number\">1</span>\n}\n</pre>\n<p>If you would like to create EBS Volumes, set the <code>ebs_volumes</code> parameter to a list of volumes to create for each\nserver:</p>\n<pre>module \"servers\" {\n source = \"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"\n \n # (other params omitted)\n \n ebs_volumes = [{\n <span class=\"hljs-keyword\">type</span> = \"gp2\"\n size = <span class=\"hljs-number\">100</span>\n <span class=\"hljs-keyword\">encrypted</span> = <span class=\"hljs-keyword\">false</span>\n },{\n <span class=\"hljs-keyword\">type</span> = \"standard\"\n size = <span class=\"hljs-number\">500</span>\n <span class=\"hljs-keyword\">encrypted</span> = <span class=\"hljs-keyword\">true</span>\n },{\n <span 
class=\"hljs-keyword\">type</span> = \"io1\"\n size = <span class=\"hljs-number\">500</span>\n iops = <span class=\"hljs-number\">2000</span>\n <span class=\"hljs-keyword\">encrypted</span> = <span class=\"hljs-keyword\">true</span> \n }]\n}\n</pre>\n<p><strong>Note:</strong> When using an <code>io1</code> disk type, the <code>iops</code> parameter must be specified.</p>\n<p>Each ENI and server pair will get a matching <code>eni-xxx</code> tag (e.g., <code>eni-0</code>, <code>eni-1</code>, etc). Each EBS Volume and server\npair will get a matching <code>ebs-volume-xxx</code> tag (e.g., <code>ebs-volume-0</code>, <code>ebs-volume-1</code>, etc). You will need to attach\nthese ENIs and Volumes while your server is booting, as described in the next section.</p>\n<h3 class=\"preview__body--subtitle\" id=\"if-you-created-en-is-optionally-create-dns-records\">If you created ENIs, optionally create DNS records</h3>\n<p>You may wish to have a DNS record associated with each ENI. This has the special advantage that even if a server is\nreplaced, the new server will attach the existing ENI and retain the same IP address! This means that the DNS record\nwill be permanently valid as long the Server Group size does not shrink.</p>\n<p>If you would like to create DNS records, set the <code>route53_hosted_zone_id</code> parameter to the Route53 Hosted Zone where DNS records\nshould be created and the <code>dns_name_common_portion</code> parameter to the common portion of the DNS name to be shared by each server in the\nServer Group. 
For example:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"servers\"</span> {\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v0.3.1\"</span>\n \n <span class=\"hljs-comment\"># (other params omitted)</span>\n \n size = <span class=\"hljs-number\">3</span>\n route53_hosted_zone_id = <span class=\"hljs-string\">\"<obtain-this-from-another-terraform-module>\"</span>\n dns_name = <span class=\"hljs-string\">\"kafka.internal\"</span>\n}\n</pre>\n<p>will create the following DNS records that point to each ENI:</p>\n<pre><span class=\"hljs-number\">0.</span>kafka.<span class=\"hljs-built_in\">int</span>ernal\n<span class=\"hljs-number\">1.</span>kafka.<span class=\"hljs-built_in\">int</span>ernal\n<span class=\"hljs-number\">2.</span>kafka.<span class=\"hljs-built_in\">int</span>ernal\n</pre>\n<h3 class=\"preview__body--subtitle\" id=\"attach-an-eni-and-ebs-volume-during-boot\">Attach an ENI and EBS Volume during boot</h3>\n<p>While the server-group module can create ENIs and EBS Volumes for you, you have to attach them to your servers\nyourself. 
The easiest way to do that is to use the following modules from\n<a href=\"/repos/terraform-aws-server\" class=\"preview__body--description--blue\">terraform-aws-server</a>:</p>\n<ul>\n<li><a href=\"/repos/terraform-aws-server/modules/attach-eni\" class=\"preview__body--description--blue\">attach-eni</a></li>\n<li><a href=\"/repos/terraform-aws-server/modules/persistent-ebs-volume\" class=\"preview__body--description--blue\">persistent-ebs-volume</a></li>\n</ul>\n<p>Here's how it works:</p>\n<ol>\n<li>\n<p>Install the <code>attach-eni</code> and/or <code>persistent-ebs-volume</code> modules in the AMI that gets deployed in your server-group.\nThe easiest way to do this is to use the <a href=\"/repos/gruntwork-installer\" class=\"preview__body--description--blue\">Gruntwork Installer</a>\nin a <a href=\"https://www.packer.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Packer</a> template:</p>\n<pre>gruntwork-install --<span class=\"hljs-keyword\">module</span>-name 'persistent-ebs-volume' --repo 'https://github.com/gruntwork-io/<span class=\"hljs-keyword\">terraform</span>-aws-server' --tag 'v0.<span class=\"hljs-number\">8.0</span>'\ngruntwork-install --<span class=\"hljs-keyword\">module</span>-name 'attach-eni' --repo 'https://github.com/gruntwork-io/<span class=\"hljs-keyword\">terraform</span>-aws-server' --tag 'v0.<span class=\"hljs-number\">8.0</span>'\n</pre>\n</li>\n<li>\n<p>Run the <code>attach-eni</code> and/or <code>mount-ebs-volume</code> scripts while each server is booting, typically as part of\n<a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-api-cli\" class=\"preview__body--description--blue\" target=\"_blank\">User Data</a>. 
You can use the\n<code>--eni-with-same-tag</code> and <code>--volume-with-same-tag</code> parameters of the scripts, respectively, to automatically mount\nthe ENIs and/or EBS Volumes with the same <code>eni-xxx</code> and <code>ebs-volume-xxx</code> tags as the server.</p>\n</li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"optionally-order-the-deployment-of-other-terraform-resources\">Optionally Order the Deployment of Other Terraform Resources</h3>\n<p>There are times when you may wish to block a Terraform resource from being created until the resources deployed by this\nmodule are finished. For example, when you deploy a <a href=\"https://kafka.apache.org/\" class=\"preview__body--description--blue\" target=\"_blank\">Kafka</a> cluster, you also need to deploy\na <a href=\"https://zookeeper.apache.org/\" class=\"preview__body--description--blue\" target=\"_blank\">Zookeeper</a> cluster and the Kafka cluster cannot boot until the Zookeeper cluster is\nfully booted. To avoid messy log entries of Kafka failing while Zookeeper boots, you could just start the creation of the\nKafka cluster <em>after</em> the Zookeeper cluster has finished booting.</p>\n<p>Unfortunately, as of June 7, 2018, Terraform <a href=\"https://github.com/hashicorp/terraform/issues/10462\" class=\"preview__body--description--blue\" target=\"_blank\">does not support module dependencies</a>, so we have to hack this support by making clever use of modules\noutputs and inputs (variables).</p>\n<p>Here's how to use the ordering feature of the server-group module:</p>\n<ol>\n<li>\n<p>Suppose you have two Terraform modules: Module A and Module B, both of which are instances of this server-group\nmodule. 
You want Module B to be created <em>after</em> Module A.</p>\n</li>\n<li>\n<p>The following code will achieve the desired ordering:</p>\n</li>\n</ol>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"a\"</span> {\n <span class=\"hljs-comment\"># Be sure to update to the latest version of this module</span>\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8\"</span>\n\n ...\n}\n\n<span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"b\"</span> {\n source = <span class=\"hljs-string\">\"git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8\"</span>\n\n wait_for = <span class=\"hljs-string\">\"<span class=\"hljs-variable\">${module.a.rolling_deployment_done}</span>\"</span>\n ...\n}\n</pre>\n<p>Make sure that you specifically use the <code>rolling_deployment_done</code> output value of Module A, not just any arbitrary output\nvalue.</p>\n<p>With this pattern, Module A will now fully deploy, and only then will Module B create its Launch Configuration and Auto Scaling Group and\nbegin the rolling deploy.</p>\n<h2 class=\"preview__body--subtitle\" id=\"background\">Background</h2>\n<ol>\n<li><a href=\"#why-not-an-auto-scaling-group\" class=\"preview__body--description--blue\">Why not an Auto Scaling Group?</a></li>\n<li><a href=\"#how-does-this-module-work\" class=\"preview__body--description--blue\">How does this module work?</a></li>\n<li><a href=\"#how-does-rolling-deployment-work\" class=\"preview__body--description--blue\">How does rolling deployment work?</a></li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"why-not-an-auto-scaling-group\">Why not an Auto Scaling Group?</h3>\n<p>The first question you may ask is, how is this different than an <a href=\"http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html\" class=\"preview__body--description--blue\" target=\"_blank\">Auto 
Scaling Group\n(ASG)</a>? While an ASG does allow you to\nrun a cluster of servers, automatically replace failed servers, and do zero-downtime deployment (see the\n<a href=\"/repos/v0.14.3/module-asg/modules/asg-rolling-deploy\" class=\"preview__body--description--blue\">asg-rolling-deploy module</a>), attaching ENIs and EBS Volumes to servers in an ASG is very\ntricky:</p>\n<ol>\n<li>\n<p>Using ENIs and EBS Volumes with ASGs is not natively supported by Terraform. The\n<a href=\"https://www.terraform.io/docs/providers/aws/r/network_interface_attachment.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_network_interface_attachment</a>\nand <a href=\"https://www.terraform.io/docs/providers/aws/r/volume_attachment.html\" class=\"preview__body--description--blue\" target=\"_blank\">aws_volume_attachment</a> resources only\nwork with individual EC2 Instances and not ASGs. Therefore, you typically create a pool of ENIs and EBS Volumes in\nTerraform, and your servers, while booting, use the AWS CLI to attach those ENIs and EBS Volumes.</p>\n</li>\n<li>\n<p>Attaching ENIs and EBS Volumes from a pool requires that each server has a way to uniquely pick which ENI or EBS\nVolume belongs to it. Picking at random and retrying can be slow and error-prone.</p>\n</li>\n<li>\n<p>With EBS Volumes, attaching them from an ASG is particularly problematic, as you can only attach an EBS Volume in\nthe same Availability Zone (AZ) as the server. If you have, for example, three AZs and five servers, it's entirely\npossible that the ASG will launch a server in an AZ that does not have any EBS Volumes available.</p>\n</li>\n</ol>\n<p>The goal of this module is to give you a way to run a cluster of servers where attaching ENIs and EBS Volumes is easy.</p>\n<h3 class=\"preview__body--subtitle\" id=\"how-does-this-module-work\">How does this module work?</h3>\n<p>The solution used in this module is to:</p>\n<ol>\n<li>Create one ASG for each server. 
So if you create a cluster with five servers, you'll end up with five ASGs. Using\nASGs gives us the ability to automatically integrate with the ALB and ELB and to replace failed servers.</li>\n<li>Each ASG is assigned to exactly one subnet, and therefore, one AZ.</li>\n<li>Create ENIs and EBS Volumes for each server, in the same AZ as that server's ASG. This ensures a server will\nnever launch in an AZ that doesn't have an EBS Volume.</li>\n<li>Each server/ENI pair and each server/EBS Volume pair gets matching tags, so each server can always uniquely\nidentify the ENIs and EBS Volumes that belong to it.</li>\n<li>Zero-downtime deployment is done using a Python script in a <a href=\"https://www.terraform.io/docs/provisioners/local-exec.html\" class=\"preview__body--description--blue\" target=\"_blank\">local-exec\nprovisioner</a>. See <a href=\"#how-does-rolling-deployment-work\" class=\"preview__body--description--blue\">How does rolling deployment\nwork?</a> for more details.</li>\n</ol>\n<h2 class=\"preview__body--subtitle\" id=\"how-does-rolling-deployment-work\">How does rolling deployment work?</h2>\n<p>The server-group module will perform a zero-downtime, rolling deployment every time you make a change to the code and\nrun <code>terraform apply</code>. 
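</p>
<p>For example, bumping the AMI in your module block and re-applying is enough to trigger a roll. (The <code>ami</code> parameter name below is illustrative; check the module's variables for the exact name.)</p>
<pre>module "example" {
  # Be sure to update to the latest version of this module
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8"

  # Changing this value and running 'terraform apply' kicks off a
  # zero-downtime rolling deployment
  ami = "ami-0123456789abcdef0"

  ...
}
</pre>
<p>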
This deployment process is implemented in a Python script called\n<a href=\"/repos/v0.14.3/module-asg/modules/server-group/rolling-deploy/rolling_deployment.py\" class=\"preview__body--description--blue\">rolling_deployment.py</a> which runs in a <a href=\"https://www.terraform.io/docs/provisioners/local-exec.html\" class=\"preview__body--description--blue\" target=\"_blank\">local-exec\nprovisioner</a>.</p>\n<p>Here is how it works:</p>\n<ol>\n<li><a href=\"#the-rolling-deployment-process\" class=\"preview__body--description--blue\">The rolling deployment process</a></li>\n<li><a href=\"#deployment-configuration-options\" class=\"preview__body--description--blue\">Deployment configuration options</a></li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"the-rolling-deployment-process\">The rolling deployment process</h3>\n<p>The rolling deployment process works as follows:</p>\n<ol>\n<li>\n<p>Wait for the server-group to be healthy before starting the deployment. That means the server-group has the expected\nnumber of servers up and running and passing health checks.</p>\n</li>\n<li>\n<p>Pick one of the ASGs in the server-group and set its size to 0. This will terminate the Instance in that ASG,\nrespecting any connection draining settings you may have set up. It will also detach any ENI or EBS Volume.</p>\n</li>\n<li>\n<p>Once the instance has terminated, set the ASG size back to 1. 
This will launch a new Instance with your new code\nand reattach its ENI or EBS Volume.</p>\n</li>\n<li>\n<p>Wait for the new Instance to pass health checks.</p>\n</li>\n<li>\n<p>Once the Instance is healthy, repeat the process with the next ASG, until all ASGs have been redeployed.</p>\n</li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"deployment-configuration-options\">Deployment configuration options</h3>\n<p>You can customize the way the rolling deployment works by specifying the following parameters to the server-group\nmodule in your Terraform code:</p>\n<ul>\n<li>\n<p><code>script_log_level</code>: Specify the <a href=\"https://docs.python.org/2/library/logging.html#logging-levels\" class=\"preview__body--description--blue\" target=\"_blank\">logging level</a> the\nscript should use. Default is <code>INFO</code>. To debug issues, you may want to turn it up to <code>DEBUG</code>. To quiet the script\ndown, you may want to turn it down to <code>ERROR</code>.</p>\n</li>\n<li>\n<p><code>deployment_batch_size</code>: How many servers to redeploy at a time. The default is 1. If you have a lot of servers to\nredeploy, you may want to increase this number to do the deployment in larger batches. However, make sure that taking\ndown a batch of this size does not cause an unintended outage for your service!</p>\n</li>\n<li>\n<p><code>skip_health_check</code>: If set to <code>true</code>, the rolling deployment process will not wait for the server-group to be\nhealthy before starting the deployment. This is useful if your server-group is already experiencing some sort of\ndowntime or problem and you want to force a deployment as a way to fix it.</p>\n</li>\n<li>\n<p><code>skip_rolling_deploy</code>: If set to <code>true</code>, skip the rolling deployment process entirely. That means your Terraform\nchanges will be applied to the launch configuration underneath the ASGs, but no new code will be deployed until\nsomething triggers the ASG to launch new instances. 
This is primarily useful if the rolling deployment script turns\nout to have some sort of bug in it.</p>\n</li>\n</ul>\n
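<p>Putting these options together, here is a sketch of a server-group module block that tunes the rolling deployment. All values shown are examples, and the module label is hypothetical:</p>
<pre>module "mongodb" {
  # Be sure to update to the latest version of this module
  source = "git::git@github.com:gruntwork-io/terraform-aws-asg.git//modules/server-group?ref=v1.0.8"

  # Rolling deployment settings
  script_log_level      = "DEBUG"  # verbose logging while debugging a deploy
  deployment_batch_size = 2        # redeploy two servers at a time
  skip_health_check     = false    # wait for the group to be healthy before deploying
  skip_rolling_deploy   = false    # set to true to bypass the deploy script entirely

  ...
}
</pre>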