# Nomad Cluster

This folder contains a [Terraform](https://www.terraform.io/) module that can be used to deploy a
[Nomad](https://www.nomadproject.io/) cluster in [AWS](https://aws.amazon.com/) on top of an Auto Scaling Group. This
module is designed to deploy an [Amazon Machine Image (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
that has had Nomad installed via the [install-nomad](https://github.com/hashicorp/terraform-aws-nomad/tree/master/modules/install-nomad) module in this Module.
## How do you use this module?

This folder defines a [Terraform module](https://www.terraform.io/docs/modules/usage.html), which you can use in your
code by adding a `module` configuration and setting its `source` parameter to the URL of this folder:
module"nomad_cluster" {
# TODO: update this to the final URL# Use version v0.0.1 of the nomad-cluster module
source = "github.com/hashicorp/terraform-aws-nomad//modules/nomad-cluster?ref=v0.0.1"# Specify the ID of the Nomad AMI. You should build this using the scripts in the install-nomad module.
ami_id = "ami-abcd1234"# Configure and start Nomad during boot. It will automatically connect to the Consul cluster specified in its# configuration and form a cluster with other Nomad nodes connected to that Consul cluster.
user_data = <<-EOF
#!/bin/bash
/opt/nomad/bin/run-nomad --server --num-servers 3
EOF
# ... See variables.tf for the other parameters you must define for the nomad-cluster module
}
Note the following parameters:
* `source`: Use this parameter to specify the URL of the nomad-cluster module. The double slash (`//`) is intentional
  and required. Terraform uses it to specify subfolders within a Git repo (see [module
  sources](https://www.terraform.io/docs/modules/sources.html)). The `ref` parameter specifies a specific Git tag in
  this repo. That way, instead of using the latest version of this module from the `master` branch, which
  will change every time you run Terraform, you're using a fixed version of the repo.

* `ami_id`: Use this parameter to specify the ID of a Nomad [Amazon Machine Image
  (AMI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) to deploy on each server in the cluster. You
  should install Nomad in this AMI using the scripts in the
  [install-nomad](https://github.com/hashicorp/terraform-aws-nomad/tree/master/modules/install-nomad) module.

* `user_data`: Use this parameter to specify a [User
  Data](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts) script that each
  server will run during boot. This is where you can use the
  [run-nomad script](https://github.com/hashicorp/terraform-aws-nomad/tree/master/modules/run-nomad) to configure and
  run Nomad. The `run-nomad` script is one of the scripts installed by the install-nomad
  module.
You can find the other parameters in [variables.tf](https://github.com/hashicorp/terraform-aws-nomad/blob/master/modules/nomad-cluster/variables.tf).
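To make the wiring concrete, here is a hedged sketch of a fuller configuration, using only parameters mentioned
elsewhere in this README (`vpc_id`, `subnet_ids`, `ssh_key_name`). The IDs and key name are illustrative placeholders,
and variables.tf remains the authoritative reference for names, types, and defaults:

```hcl
module "nomad_cluster" {
  source = "github.com/hashicorp/terraform-aws-nomad//modules/nomad-cluster?ref=v0.0.1"

  ami_id    = "ami-abcd1234"
  user_data = "..." # e.g. the run-nomad script shown above

  # This module does not create any networking (see "What's NOT included
  # in this module?" below), so you must describe your own topology.
  # These IDs are illustrative placeholders.
  vpc_id     = "vpc-1234abcd"
  subnet_ids = ["subnet-1234abcd", "subnet-5678efgh"]

  # Optional: associate an existing EC2 Key Pair for SSH access
  ssh_key_name = "my-key-pair"
}
```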
Check out the [nomad-consul-separate-cluster example](https://github.com/hashicorp/terraform-aws-nomad/tree/master/examples/nomad-consul-separate-cluster) for working
sample code. Note that if you want to run Nomad and Consul on the same cluster, see the [nomad-consul-colocated-cluster
example](https://github.com/hashicorp/terraform-aws-nomad/tree/master/MAIN.md) instead.
## How do you connect to the Nomad cluster?

### Using the Node agent from your own computer
If you want to connect to the cluster from your own computer, [install
Nomad](https://www.nomadproject.io/docs/install/index.html) and execute commands with the `-address` parameter set to
the IP address of one of the servers in your Nomad cluster. Note that this only works if the Nomad cluster is running
in public subnets and/or your default VPC (as in both examples), which is OK for testing and
experimentation, but NOT recommended for production usage.
To use the HTTP API, you first need to get the public IP address of one of the Nomad Instances. If you deployed the
nomad-consul-colocated-cluster or nomad-consul-separate-cluster example, the
[nomad-examples-helper.sh script](https://github.com/hashicorp/terraform-aws-nomad/tree/master/examples/nomad-examples-helper/nomad-examples-helper.sh)
will do the tag lookup for you automatically (note, you must have the [AWS CLI](https://aws.amazon.com/cli/),
[jq](https://stedolan.github.io/jq/), and the Nomad agent installed locally):

```
> ../nomad-examples-helper/nomad-examples-helper.sh

Your Nomad servers are running at the following IP addresses:

34.204.85.139
52.23.167.204
54.236.16.38
```
Copy and paste one of these IPs and use it with the `-address` argument for any [Nomad
command](https://www.nomadproject.io/docs/commands/index.html). For example, to see the status of all the Nomad
servers:
```
> nomad server-members -address=http://<INSTANCE_IP_ADDR>:4646

ip-172-31-23-140.global  172.31.23.140  4648  alive  true  2  0.5.4  dc1  global
ip-172-31-23-141.global  172.31.23.141  4648  alive  true  2  0.5.4  dc1  global
ip-172-31-23-142.global  172.31.23.142  4648  alive  true  2  0.5.4  dc1  global
```
To see the status of all the Nomad agents:
```
> nomad node-status -address=http://<INSTANCE_IP_ADDR>:4646

ID        DC          Name                 Class   Drain  Status
ec2796cd  us-east-1e  i-0059e5cafb8103834  <none>  false  ready
ec2f799e  us-east-1d  i-0a5552c3c375e9ea0  <none>  false  ready
ec226624  us-east-1b  i-0d647981f5407ae32  <none>  false  ready
ec2d4635  us-east-1a  i-0c43dcc509e3d8bdf  <none>  false  ready
ec232ea5  us-east-1d  i-0eff2e6e5989f51c1  <none>  false  ready
ec2d4bd6  us-east-1c  i-01523bf946d98003e  <none>  false  ready
```
And to submit a job called `example.nomad`:
```
> nomad run -address=http://<INSTANCE_IP_ADDR>:4646 example.nomad

==> Monitoring evaluation "0d159869"
    Evaluation triggered by job "example"
    Allocation "5cbf23a1" created: node "1e1aa1e0", group "example"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "0d159869" finished with status "complete"
```
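The repo ships its own `example.nomad` under examples/nomad-examples-helper. For illustration only, a minimal Nomad
job spec (which is itself HCL) might look like the following; the image and resource values are assumptions, not the
shipped job:

```hcl
# Hypothetical minimal job spec; the real example.nomad in this repo may differ.
job "example" {
  datacenters = ["dc1"]

  group "example" {
    count = 1

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo"
        args  = ["-text", "hello world"]
      }

      resources {
        cpu    = 100 # MHz
        memory = 64  # MB
      }
    }
  }
}
```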
### Using the Nomad agent on another EC2 Instance
For production usage, your EC2 Instances should be running the [Nomad
agent](https://www.nomadproject.io/docs/agent/index.html). The agent nodes should discover the Nomad server nodes
automatically using Consul. Check out the [Service Discovery
documentation](https://www.nomadproject.io/docs/service-discovery/index.html) for details.
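As a rough sketch of what that looks like on the agent side (Nomad agent configuration is HCL; the `consul` stanza
below assumes a local Consul agent listening on its default port):

```hcl
# client.hcl, a hypothetical config; run with: nomad agent -config=client.hcl
data_dir = "/opt/nomad/data"

# With a local Consul agent running, Nomad registers itself in Consul and
# discovers the Nomad servers automatically, so no server addresses are
# hard-coded here.
consul {
  address = "127.0.0.1:8500"
}

client {
  enabled = true
}
```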
## What's included in this module?

This module creates the following architecture:

![Nomad architecture](../../_docs/architecture.png)

This architecture consists of the following resources:

* [Auto Scaling Group](#auto-scaling-group)
* [Security Group](#security-group)
* [IAM Role and Permissions](#iam-role-and-permissions)

### Auto Scaling Group
This module runs Nomad on top of an [Auto Scaling Group (ASG)](https://aws.amazon.com/autoscaling/). Typically, you
should run the ASG with 3 or 5 EC2 Instances spread across multiple [Availability
Zones](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). Each of the EC2
Instances should be running an AMI that has had Nomad installed via the install-nomad
module. You pass in the ID of the AMI to run using the `ami_id` input parameter.
### Security Group

Each EC2 Instance in the ASG has a Security Group that allows:

* All outbound requests
* All the inbound ports specified in the [Nomad documentation](https://www.nomadproject.io/docs/agent/configuration/index.html#ports)

The Security Group ID is exported as an output variable if you need to add additional rules.

Check out the [Security section](#security) for more details.

### IAM Role and Permissions
Each EC2 Instance in the ASG has an IAM Role attached.
We give this IAM role a small set of IAM permissions that each EC2 Instance can use to automatically discover the other
Instances in its ASG and form a cluster with them.
The IAM Role ARN is exported as an output variable if you need to add additional permissions.
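For example, you could attach an additional policy to the role to grant extra permissions. A sketch, assuming the
module also exports the role ID as an output named `iam_role_id` (check outputs.tf for the actual output names):

```hcl
# Hypothetical extra permissions for the cluster's Instances; the output
# name module.nomad_cluster.iam_role_id is an assumption (see outputs.tf).
resource "aws_iam_role_policy" "read_s3_config" {
  name = "read-s3-config"
  role = module.nomad_cluster.iam_role_id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:GetObject"]
      Resource = ["arn:aws:s3:::my-config-bucket/*"] # illustrative bucket
    }]
  })
}
```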
## How do you roll out updates?

If you want to deploy a new version of Nomad across the cluster, the best way to do that is to:

1. Build a new AMI.
1. Set the `ami_id` parameter to the ID of the new AMI.
1. Run `terraform apply`.
This updates the Launch Configuration of the ASG, so any new Instances in the ASG will have your new AMI, but it does
NOT actually deploy those new instances. To make that happen, you should do the following:

1. Issue an API call to one of the old Instances in the ASG to have it leave gracefully, e.g.
   `nomad server-force-leave -address=<OLD_INSTANCE_IP>:4646`.
1. Once the instance has left the cluster, terminate it:
   `aws ec2 terminate-instances --instance-ids <OLD_INSTANCE_ID>`.
1. After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.
1. Wait for the new Instance to boot and join the cluster.
1. Repeat these steps for each of the other old Instances in the ASG.
We will add a script in the future to automate this process (PRs are welcome!).
## What happens if a node crashes?

There are two ways a Nomad node may go down:

1. The Nomad process may crash. In that case, `supervisor` should restart it automatically.
1. The EC2 Instance running Nomad dies. In that case, the Auto Scaling Group should launch a replacement automatically.
   Note that in this case, since the Nomad agent did not exit gracefully, and the replacement will have a different ID,
   you may have to manually clean out the old nodes using the [server-force-leave
   command](https://www.nomadproject.io/docs/commands/server-force-leave.html). We may add a script to do this
   automatically in the future. For more info, see the [Nomad Outage
   documentation](https://www.nomadproject.io/guides/outage.html).
## How do you connect load balancers to the Auto Scaling Group (ASG)?

You can use the [`aws_autoscaling_attachment`](https://www.terraform.io/docs/providers/aws/r/autoscaling_attachment.html) resource.
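For example, if you are using the new application or network load balancers:

```hcl
resource "aws_lb_target_group" "test" {
  // ...
}

# Create a new Nomad Cluster
module "nomad" {
  source = "..."
  // ...
}

# Create a new load balancer attachment
resource "aws_autoscaling_attachment" "asg_attachment_bar" {
  autoscaling_group_name = module.nomad.asg_name
  alb_target_group_arn   = aws_lb_target_group.test.arn
}
```

If you are using a "classic" load balancer:

```hcl
# Create a new load balancer
resource "aws_elb" "bar" {
  // ...
}

# Create a new Nomad Cluster
module "nomad" {
  source = "..."
  // ...
}

# Create a new load balancer attachment
resource "aws_autoscaling_attachment" "asg_attachment_bar" {
  autoscaling_group_name = module.nomad.asg_name
  elb                    = aws_elb.bar.id
}
```

## Security

Here are some of the main security considerations to keep in mind when using this module:

1. [Encryption in transit](#encryption-in-transit)
1. [Encryption at rest](#encryption-at-rest)
1. [Dedicated instances](#dedicated-instances)
1. [Security groups](#security-groups)
1. [SSH access](#ssh-access)

### Encryption in transit

Nomad can encrypt all of its network traffic. For instructions on enabling network encryption, have a look at the
[How do you handle encryption documentation](https://github.com/hashicorp/terraform-aws-nomad/tree/master/modules/run-nomad#how-do-you-handle-encryption).

### Encryption at rest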
The EC2 Instances in the cluster store all their data on the root EBS Volume. To enable encryption for the data at
rest, you must enable encryption in your Nomad AMI. If you're creating the AMI using Packer (e.g. as shown in
the [nomad-consul-ami example](https://github.com/hashicorp/terraform-aws-nomad/tree/master/examples/nomad-consul-ami)), you need to set the [encrypt_boot
parameter](https://www.packer.io/docs/builders/amazon-ebs.html#encrypt_boot) to `true`.
### Dedicated instances

If you wish to use dedicated instances, you can set the `tenancy` parameter to `"dedicated"` in this module.
### Security groups

This module attaches a security group to each EC2 Instance that allows inbound requests as follows:

* **Nomad**: For all the [ports used by Nomad](https://www.nomadproject.io/docs/agent/configuration/index.html#ports),
  you can use the `allowed_inbound_cidr_blocks` parameter to control the list of
  [CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that will be allowed access.

* **SSH**: For the SSH port (default: 22), you can use the `allowed_ssh_cidr_blocks` parameter to control the list of
  CIDR blocks that will be allowed access.
Note that all the ports mentioned above are configurable via the `xxx_port` variables (e.g. `http_port`). See
[variables.tf](https://github.com/hashicorp/terraform-aws-nomad/blob/master/modules/nomad-cluster/variables.tf) for the full list.
### SSH access

You can associate an [EC2 Key Pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) with each
of the EC2 Instances in this cluster by specifying the Key Pair's name in the `ssh_key_name` variable. If you don't
want to associate a Key Pair with these servers, set `ssh_key_name` to an empty string.
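Putting the last two subsections together, here is a minimal sketch of locking down access, using the parameter names
described above with illustrative values:

```hcl
module "nomad_cluster" {
  source = "github.com/hashicorp/terraform-aws-nomad//modules/nomad-cluster?ref=v0.0.1"

  # ... other parameters ...

  # Only allow the Nomad ports to be reached from inside the VPC (illustrative CIDR)
  allowed_inbound_cidr_blocks = ["10.0.0.0/16"]

  # Only allow SSH from a bastion subnet, using an existing EC2 Key Pair
  allowed_ssh_cidr_blocks = ["10.0.1.0/24"]
  ssh_key_name            = "my-key-pair"
}
```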
## What's NOT included in this module?

This module does NOT handle the following items, which you may want to provide on your own:

* [Consul](#consul)
* [Monitoring, alerting, log aggregation](#monitoring-alerting-log-aggregation)
* [VPCs, subnets, route tables](#vpcs-subnets-route-tables)
* [DNS entries](#dns-entries)

### Consul
This module assumes you already have Consul deployed in a separate cluster. If you want to run Nomad and Consul on the
same cluster, instead of using this module, see the [Deploy Nomad and Consul in the same cluster
documentation](https://github.com/hashicorp/terraform-aws-nomad/blob/master/README.md#deploy-nomad-and-consul-in-the-same-cluster).
### Monitoring, alerting, log aggregation
This module does not include anything for monitoring, alerting, or log aggregation. All ASGs and EC2 Instances come
with limited [CloudWatch](https://aws.amazon.com/cloudwatch/) metrics built-in, but beyond that, you will have to
provide your own solutions.
### VPCs, subnets, route tables
This module assumes you've already created your network topology (VPC, subnets, route tables, etc). You will need to
pass in the relevant info about your network topology (e.g. `vpc_id`, `subnet_ids`) as input variables to this
module.
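If the network already exists, one option is to look the IDs up with standard AWS provider data sources rather than
hard-coding them. A sketch, where the VPC tag value is an illustrative assumption:

```hcl
# Look up an existing VPC and its subnets to pass into this module.
data "aws_vpc" "selected" {
  tags = {
    Name = "my-vpc" # illustrative tag value
  }
}

data "aws_subnet_ids" "selected" {
  vpc_id = data.aws_vpc.selected.id
}

module "nomad_cluster" {
  source = "github.com/hashicorp/terraform-aws-nomad//modules/nomad-cluster?ref=v0.0.1"

  # ... other parameters ...

  vpc_id     = data.aws_vpc.selected.id
  subnet_ids = data.aws_subnet_ids.selected.ids
}
```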
### DNS entries
This module does not create any DNS entries for Nomad (e.g. in Route 53).
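If you need one, a hedged sketch of creating a record yourself, here pointing a CNAME at a classic load balancer
attached as in the load balancer section above (the hosted zone ID and names are placeholders):

```hcl
# Hypothetical DNS entry for the cluster; assumes you created aws_elb.bar
# as in the load balancer example above.
resource "aws_route53_record" "nomad" {
  zone_id = "Z1234567890ABC" # placeholder hosted zone ID
  name    = "nomad.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_elb.bar.dns_name]
}
```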
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"20968ed882b09a03245770f84b523b73cd64df78"}]},{"name":".gitignore","path":".gitignore","sha":"6c4ebe4426586b7febbaba178294ef59b8272c05"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"4be01a6334d39aa5bf6abe6baae701f5e2a8c5ac"},{"name":"CONTRIBUTING.md","path":"CONTRIBUTING.md","sha":"4c6520097a38c2b63e7a91e20a4c06f8005e6fe4"},{"name":"LICENSE","path":"LICENSE","sha":"7a4a3ea2424c09fbe48d455aed1eaa94d9124835"},{"name":"NOTICE","path":"NOTICE","sha":"4653ef2dace926e046f74ab82c82647558c7e94f"},{"name":"README.md","path":"README.md","sha":"48583f19e9257d330a8d9e6f041b101932490d56"},{"name":"_ci","children":[{"name":"publish-amis-in-new-account.md","path":"_ci/publish-amis-in-new-account.md","sha":"3182a0a90775f7bb9622c037196ac2a1f15e455d"},{"name":"publish-amis.sh","path":"_ci/publish-amis.sh","sha":"6902cb1e3d7624ecc91096bcd48c5c91e248c653"}]},{"name":"_docs","children":[{"name":"amazon-linux-ami-list.md","path":"_docs/amazon-linux-ami-list.md","sha":"6607a70497e27d57222552281825053783ad7bb2"},{"name":"architecture-nomad-consul-colocated.png","path":"_docs/architecture-nomad-consul-colocated.png","sha":"438a8b71d1afdc7f91065b910e9de2d6d7d9517c"},{"name":"architecture-nomad-consul-separate.png","path":"_docs/architecture-nomad-consul-separate.png","sha":"df28d183fb8090fabc457ee56f3fb43ded6a5b13"},{"name":"architecture.png","path":"_docs/architecture.png","sha":"e539a77e88af6849d0893be35a8c5b5270edb195"},{"name":"nomad-icon.png","path":"_docs/nomad-icon.png","sha":"193298e1f719d6fb51513d0d9631dafaa17cfaf3"},{"name":"ubuntu16-ami-list.md","path":"_docs/ubuntu16-ami-list.md","sha":"4c1af1d6d863e0e6120520b65d823ee2ae2e2079"}]},{"name":"core-concepts.md","path":"core-concepts.md","sha":"314e6e4769a0316adbce0ff39196f1821544c9ce"},{"name":"examples","children":[{"name":"nomad-consul-ami","children":[{"name":"README.md","path":"examples/nomad-consul-ami/README.md","sha":"3e1ffc13902ee7ff46ef750a1e03d316ff4afedf"},{"name":"nomad-consul-docker.json","path":"examples/nomad-consul-ami/nomad-consul-docker.json","sha":"89df67eabeeb9e07f039dbe085e1932f592992d3"},{"name":"nomad-consul.json","path":"examples/nomad-consul-ami/nomad-consul.json","sha":"58a462d3443bf5f067f24f438919aab6c696b715"},{"name":"setup_amazon-linux.sh","path":"examples/nomad-consul-ami/setup_amazon-linux.sh","sha":"afa4e86365348ab91616f08d3fae0a28c64e158f"},{"name":"setup_nomad_consul.sh","path":"examples/nomad-consul-ami/setup_nomad_consul.sh","sha":"c4e68a5affa34caab8e4197028dfc1970c33da6e"},{"name":"setup_ubuntu.sh","path":"examples/nomad-consul-ami/setup_ubuntu.sh","sha":"81268f86b1ccf5a567ec913dd1dbbe3b3868ad93"}]},{"name":"nomad-consul-separate-cluster","children":[{"name":"README.md","path":"examples/nomad-consul-separate-cluster/README.md","sha":"4f8ee73c20f575cb86ed28fb05840d093e6f2f15"},{"name":"main.tf","path":"examples/nomad-consul-separate-cluster/main.tf","sha":"c29c23f9c54e171826309ed3f00a13552b3c6147"},{"name":"outputs.tf","path":"examples/nomad-consul-separate-cluster/outputs.tf","sha":"fab958b55d52594d98df8391c4282e5c4c1f008a"},{"name":"user-data-consul-server.sh","path":"examples/nomad-consul-separate-cluster/user-data-consul-server.sh","sha":"659e77d66aa4140f776cfbeb9e71f1a874b00682"},{"name":"user-data-nomad-client.sh","path":"examples/nomad-consul-separate-cluster/user-data-nomad-client.sh","sha":"c52069299ee4fe73fbd9cd5d4f48be8ef6a35b3d"},{"name":"user-data-nomad-server.sh","
path":"examples/nomad-consul-separate-cluster/user-data-nomad-server.sh","sha":"1b99ff7d6b56999da42c04d5405e81c738214af1"},{"name":"variables.tf","path":"examples/nomad-consul-separate-cluster/variables.tf","sha":"78d0882b72b80b9b8eea67d4950e15b826647e1d"}]},{"name":"nomad-examples-helper","children":[{"name":"README.md","path":"examples/nomad-examples-helper/README.md","sha":"4b42111e7abf289798df0c62847edc722bcd6256"},{"name":"example.nomad","path":"examples/nomad-examples-helper/example.nomad","sha":"63958e3d491757da48a72255ec3b8882302ba33e"},{"name":"nomad-examples-helper.sh","path":"examples/nomad-examples-helper/nomad-examples-helper.sh","sha":"7f5f10afb2331d88268d53fe7e7d8deb0585a8db"}]},{"name":"root-example","children":[{"name":"README.md","path":"examples/root-example/README.md","sha":"9c2e4ffd4e0ffcf6e4d4a6d31583205251a6c67d"},{"name":"user-data-client.sh","path":"examples/root-example/user-data-client.sh","sha":"d6bac10fb2bb654d3255052d863ab19c9cdd41bc"},{"name":"user-data-server.sh","path":"examples/root-example/user-data-server.sh","sha":"109bdeb1f8df56b35d6bf1a6e5346cae4aca61f5"}]}]},{"name":"main.tf","path":"main.tf","sha":"5b21e3855375c578f5946961181efd25ad8426bc"},{"name":"modules","children":[{"name":"install-nomad","children":[{"name":"README.md","path":"modules/install-nomad/README.md","sha":"f793a3975dc6d8a06873a7f6a8ef1892eba01d84"},{"name":"install-nomad","path":"modules/install-nomad/install-nomad","sha":"5a74c98520010121d95d3432f08dffced7c2f761"},{"name":"supervisor-initd-script.sh","path":"modules/install-nomad/supervisor-initd-script.sh","sha":"171b91613e98ab2bd10282025caff1707918c95a"},{"name":"supervisord.conf","path":"modules/install-nomad/supervisord.conf","sha":"d96beb0ca9a16279ed1bdf74cbb6516275d85085"}]},{"name":"nomad-cluster","children":[{"name":"README.md","path":"modules/nomad-cluster/README.md","sha":"e47de8d5174ca6020601898c386a8de2e06f021d","toggled":true},{"name":"main.tf","path":"modules/nomad-cluster/main.tf","sha":"e46194c1170b25016dd4ed1dfb068af1583d47a6"},{"name":"outputs.tf","path":"modules/nomad-cluster/outputs.tf","sha":"341778300126873e11e2cf9d964bccd927c2644e"},{"name":"variables.tf","path":"modules/nomad-cluster/variables.tf","sha":"b0293a8dc30f988ca63af42e06e7a7cefaabde91"}],"toggled":true},{"name":"nomad-security-group-rules","children":[{"name":"README.md","path":"modules/nomad-security-group-rules/README.md","sha":"c35eab862bdd870569408a9ad55e8abb6894e4fe"},{"name":"main.tf","path":"modules/nomad-security-group-rules/main.tf","sha":"50e1045b13b51852e1a410ea5d9cd3150eff1a48"},{"name":"variables.tf","path":"modules/nomad-security-group-rules/variables.tf","sha":"a3d9d4b0b2abcce058d41b61b099fa115a7babd3"}]},{"name":"run-nomad","children":[{"name":"README.md","path":"modules/run-nomad/README.md","sha":"c66430e400076f3afbbfa9ff55abcc2aec7564f2"},{"name":"run-nomad","path":"modules/run-nomad/run-nomad","sha":"0216b1713f86310fa21fba9d4feacbffc45c9067"}]}],"toggled":true},{"name":"outputs.tf","path":"outputs.tf","sha":"f3efe59a6784255e79e0a7a77cf4b4ce2461278f"},{"name":"test","children":[{"name":"Gopkg.lock","path":"test/Gopkg.lock","sha":"fc3214f34d7c2f6d5d1c1ab6f9ecf1a85bfb06f8"},{"name":"Gopkg.toml","path":"test/Gopkg.toml","sha":"a84c6ed7e5bfce6f72e9e08666f1665af01a3f84"},{"name":"README.md","path":"test/README.md","sha":"874818e6da7a9c0c9338edde0c27fa3f8a3b3d05"},{"name":"aws_helpers.go","path":"test/aws_helpers.go","sha":"c7b6601bf58485e5deddbb1e17f433b0c12c9dae"},{"name":"nomad_consul_cluster_colocated_test.go","path":"test/nomad_consu
l_cluster_colocated_test.go","sha":"365b422c08e52295c1ff0bb211df4d5c973766a9"},{"name":"nomad_consul_cluster_separate_test.go","path":"test/nomad_consul_cluster_separate_test.go","sha":"0750f6f9bc45a650355e0d3c2557abae45ac7e73"},{"name":"nomad_helpers.go","path":"test/nomad_helpers.go","sha":"1750e3d729d429c9062fde804f4a66a0d514d22d"},{"name":"terratest_helpers.go","path":"test/terratest_helpers.go","sha":"f6e176e37bc9ce4c5322834a0325bc9ff1b836b4"}]},{"name":"variables.tf","path":"variables.tf","sha":"0941f7c8577a1db5d7fdafbd274deb87813c0ca9"}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"nomad-cluster\">Nomad Cluster</h1><div class=\"preview__body--border\"></div><p>This folder contains a <a href=\"https://www.terraform.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Terraform</a> module that can be used to deploy a\n<a href=\"https://www.nomadproject.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad</a> cluster in <a href=\"https://aws.amazon.com/\" class=\"preview__body--description--blue\" target=\"_blank\">AWS</a> on top of an Auto Scaling Group. This\nmodule is designed to deploy an <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Amazon Machine Image (AMI)</a>\nthat had Nomad installed via the <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/install-nomad\" class=\"preview__body--description--blue\">install-nomad</a> module in this Module.</p>\n<p>Note that this module assumes you have a separate <a href=\"https://www.consul.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Consul</a> cluster already running. If you want\nto run Consul and Nomad in the same cluster, instead of using this module, see the <a href=\"/repos/v0.5.1/terraform-aws-nomad/README.md#deploy-nomad-and-consul-in-the-same-cluster\" class=\"preview__body--description--blue\">Deploy Nomad and Consul in the same\ncluster documentation</a>.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-use-this-module\">How do you use this module?</h2>\n<p>This folder defines a <a href=\"https://www.terraform.io/docs/modules/usage.html\" class=\"preview__body--description--blue\" target=\"_blank\">Terraform module</a>, which you can use in your\ncode by adding a <code>module</code> configuration and setting its <code>source</code> parameter to URL of this folder:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"nomad_cluster\"</span> {\n <span class=\"hljs-comment\"># <span class=\"hljs-doctag\">TODO:</span> update this to the final URL</span>\n <span class=\"hljs-comment\"># Use version v0.0.1 of the nomad-cluster module</span>\n source = <span class=\"hljs-string\">\"github.com/hashicorp/terraform-aws-nomad//modules/nomad-cluster?ref=v0.0.1\"</span>\n\n <span class=\"hljs-comment\"># Specify the ID of the Nomad AMI. You should build this using the scripts in the install-nomad module.</span>\n ami_id = <span class=\"hljs-string\">\"ami-abcd1234\"</span>\n\n <span class=\"hljs-comment\"># Configure and start Nomad during boot. It will automatically connect to the Consul cluster specified in its</span>\n <span class=\"hljs-comment\"># configuration and form a cluster with other Nomad nodes connected to that Consul cluster.</span>\n user_data = <<-EOF\n <span class=\"hljs-comment\">#!/bin/bash</span>\n /opt/nomad/bin/run-nomad --server --num-servers <span class=\"hljs-number\">3</span>\n EOF\n\n <span class=\"hljs-comment\"># ... 
See variables.tf for the other parameters you must define for the nomad-cluster module</span>\n}\n</pre>\n<p>Note the following parameters:</p>\n<ul>\n<li>\n<p><code>source</code>: Use this parameter to specify the URL of the nomad-cluster module. The double slash (<code>//</code>) is intentional\nand required. Terraform uses it to specify subfolders within a Git repo (see <a href=\"https://www.terraform.io/docs/modules/sources.html\" class=\"preview__body--description--blue\" target=\"_blank\">module\nsources</a>). The <code>ref</code> parameter specifies a specific Git tag in\nthis repo. That way, instead of using the latest version of this module from the <code>master</code> branch, which\nwill change every time you run Terraform, you're using a fixed version of the repo.</p>\n</li>\n<li>\n<p><code>ami_id</code>: Use this parameter to specify the ID of a Nomad <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Amazon Machine Image\n(AMI)</a> to deploy on each server in the cluster. You\nshould install Nomad in this AMI using the scripts in the <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/install-nomad\" class=\"preview__body--description--blue\">install-nomad</a> module.</p>\n</li>\n<li>\n<p><code>user_data</code>: Use this parameter to specify a <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts\" class=\"preview__body--description--blue\" target=\"_blank\">User\nData</a> script that each\nserver will run during boot. This is where you can use the <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/run-nomad\" class=\"preview__body--description--blue\">run-nomad script</a> to configure and\nrun Nomad. The <code>run-nomad</code> script is one of the scripts installed by the <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/install-nomad\" class=\"preview__body--description--blue\">install-nomad</a>\nmodule.</p>\n</li>\n</ul>\n<p>You can find the other parameters in <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/nomad-cluster/variables.tf\" class=\"preview__body--description--blue\">variables.tf</a>.</p>\n<p>Check out the <a href=\"/repos/v0.5.1/terraform-aws-nomad/examples/nomad-consul-separate-cluster\" class=\"preview__body--description--blue\">nomad-consul-separate-cluster example</a> example for working\nsample code. Note that if you want to run Nomad and Consul on the same cluster, see the [nomad-consul-colocated-cluster\nexample](https://github.com/hashicorp/terraform-aws-nomad/tree/master/MAIN.md example) instead.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-connect-to-the-nomad-cluster\">How do you connect to the Nomad cluster?</h2>\n<h3 class=\"preview__body--subtitle\" id=\"using-the-node-agent-from-your-own-computer\">Using the Node agent from your own computer</h3>\n<p>If you want to connect to the cluster from your own computer, <a href=\"https://www.nomadproject.io/docs/install/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">install\nNomad</a> and execute commands with the <code>-address</code> parameter set to\nthe IP address of one of the servers in your Nomad cluster. 
Note that this only works if the Nomad cluster is running\nin public subnets and/or your default VPC (as in both <a href=\"/repos/v0.5.1/terraform-aws-nomad/examples\" class=\"preview__body--description--blue\">examples</a>), which is OK for testing and\nexperimentation, but NOT recommended for production usage.</p>\n<p>To use the HTTP API, you first need to get the public IP address of one of the Nomad Instances. If you deployed the\n<a href=\"/repos/v0.5.1/terraform-aws-nomad/MAIN.md\" class=\"preview__body--description--blue\">nomad-consul-colocated-cluster</a> or\n<a href=\"/repos/v0.5.1/terraform-aws-nomad/examples/nomad-consul-separate-cluster\" class=\"preview__body--description--blue\">nomad-consul-separate-cluster</a> example, the\n<a href=\"/repos/v0.5.1/terraform-aws-nomad/examples/nomad-examples-helper/nomad-examples-helper.sh\" class=\"preview__body--description--blue\">nomad-examples-helper.sh script</a> will do the tag lookup for\nyou automatically (note, you must have the <a href=\"https://aws.amazon.com/cli/\" class=\"preview__body--description--blue\" target=\"_blank\">AWS CLI</a>,\n<a href=\"https://stedolan.github.io/jq/\" class=\"preview__body--description--blue\" target=\"_blank\">jq</a>, and the <a href=\"https://www.nomadproject.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad agent</a> installed locally):</p>\n<pre>> ../nomad-examples-helper/nomad-examples-helper.sh\n\nYour Nomad servers are running at the following IP addresses:\n\n<span class=\"hljs-number\">34.204.85.139</span>\n<span class=\"hljs-number\">52.23.167.204</span>\n<span class=\"hljs-number\">54.236.16.38</span>\n</pre>\n<p>Copy and paste one of these IPs and use it with the <code>-address</code> argument for any <a href=\"https://www.nomadproject.io/docs/commands/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad\ncommand</a>. 
For example, to see the status of all the Nomad\nservers:</p>\n<pre>> nomad server-members -address=http:<span class=\"hljs-comment\">//<INSTANCE_IP_ADDR>:4646</span>\n\nip<span class=\"hljs-number\">-172</span><span class=\"hljs-number\">-31</span><span class=\"hljs-number\">-23</span><span class=\"hljs-number\">-140.</span>global <span class=\"hljs-number\">172.31</span><span class=\"hljs-number\">.23</span><span class=\"hljs-number\">.140</span> <span class=\"hljs-number\">4648</span> alive <span class=\"hljs-literal\">true</span> <span class=\"hljs-number\">2</span> <span class=\"hljs-number\">0.5</span><span class=\"hljs-number\">.4</span> dc1 global\nip<span class=\"hljs-number\">-172</span><span class=\"hljs-number\">-31</span><span class=\"hljs-number\">-23</span><span class=\"hljs-number\">-141.</span>global <span class=\"hljs-number\">172.31</span><span class=\"hljs-number\">.23</span><span class=\"hljs-number\">.141</span> <span class=\"hljs-number\">4648</span> alive <span class=\"hljs-literal\">true</span> <span class=\"hljs-number\">2</span> <span class=\"hljs-number\">0.5</span><span class=\"hljs-number\">.4</span> dc1 global\nip<span class=\"hljs-number\">-172</span><span class=\"hljs-number\">-31</span><span class=\"hljs-number\">-23</span><span class=\"hljs-number\">-142.</span>global <span class=\"hljs-number\">172.31</span><span class=\"hljs-number\">.23</span><span class=\"hljs-number\">.142</span> <span class=\"hljs-number\">4648</span> alive <span class=\"hljs-literal\">true</span> <span class=\"hljs-number\">2</span> <span class=\"hljs-number\">0.5</span><span class=\"hljs-number\">.4</span> dc1 global\n</pre>\n<p>To see the status of all the Nomad agents:</p>\n<pre>> nomad node-status -address=http:<span class=\"hljs-comment\">//<INSTANCE_IP_ADDR>:4646</span>\n\nID DC Name Class Drain Status\nec2796cd us-east<span class=\"hljs-number\">-1</span>e i<span class=\"hljs-number\">-0059e5</span>cafb8103834 <none> <span class=\"hljs-literal\">false</span> ready\nec2f799e us-east<span class=\"hljs-number\">-1</span>d i<span class=\"hljs-number\">-0</span>a5552c3c375e9ea0 <none> <span class=\"hljs-literal\">false</span> ready\nec226624 us-east<span class=\"hljs-number\">-1</span>b i<span class=\"hljs-number\">-0</span>d647981f5407ae32 <none> <span class=\"hljs-literal\">false</span> ready\nec2d4635 us-east<span class=\"hljs-number\">-1</span>a i<span class=\"hljs-number\">-0</span>c43dcc509e3d8bdf <none> <span class=\"hljs-literal\">false</span> ready\nec232ea5 us-east<span class=\"hljs-number\">-1</span>d i<span class=\"hljs-number\">-0</span>eff2e6e5989f51c1 <none> <span class=\"hljs-literal\">false</span> ready\nec2d4bd6 us-east<span class=\"hljs-number\">-1</span>c i<span class=\"hljs-number\">-01523</span>bf946d98003e <none> <span class=\"hljs-literal\">false</span> ready\n</pre>\n<p>And to submit a job called <code>example.nomad</code>:</p>\n<pre>> nomad <span class=\"hljs-builtin-name\">run</span> <span class=\"hljs-attribute\">-address</span>=http://<INSTANCE_IP_ADDR>:4646 example.nomad\n\n==> Monitoring evaluation <span class=\"hljs-string\">\"0d159869\"</span>\n Evaluation triggered by job <span class=\"hljs-string\">\"example\"</span>\n Allocation <span class=\"hljs-string\">\"5cbf23a1\"</span> created: node <span class=\"hljs-string\">\"1e1aa1e0\"</span>,<span class=\"hljs-built_in\"> group </span><span class=\"hljs-string\">\"example\"</span>\n Evaluation status changed: <span class=\"hljs-string\">\"pending\"</span> -> <span 
class=\"hljs-string\">\"complete\"</span>\n==> Evaluation <span class=\"hljs-string\">\"0d159869\"</span> finished with status <span class=\"hljs-string\">\"complete\"</span>\n</pre>\n<h3 class=\"preview__body--subtitle\" id=\"using-the-nomad-agent-on-another-ec-2-instance\">Using the Nomad agent on another EC2 Instance</h3>\n<p>For production usage, your EC2 Instances should be running the <a href=\"https://www.nomadproject.io/docs/agent/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad\nagent</a>. The agent nodes should discover the Nomad server nodes\nautomatically using Consul. Check out the <a href=\"https://www.nomadproject.io/docs/service-discovery/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">Service Discovery\ndocumentation</a> for details.</p>\n<h2 class=\"preview__body--subtitle\" id=\"whats-included-in-this-module\">What's included in this module?</h2>\n<p>This module creates the following architecture:</p>\n<p><img src=\"/repos/images/v0.5.1/terraform-aws-nomad/_docs/architecture.png\" alt=\"Nomad architecture\" class=\"preview__body--diagram\"></p>\n<p>This architecture consists of the following resources:</p>\n<ul>\n<li><a href=\"#auto-scaling-group\" class=\"preview__body--description--blue\">Auto Scaling Group</a></li>\n<li><a href=\"#security-group\" class=\"preview__body--description--blue\">Security Group</a></li>\n<li><a href=\"#iam-role-and-permissions\" class=\"preview__body--description--blue\">IAM Role and Permissions</a></li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"auto-scaling-group\">Auto Scaling Group</h3>\n<p>This module runs Nomad on top of an <a href=\"https://aws.amazon.com/autoscaling/\" class=\"preview__body--description--blue\" target=\"_blank\">Auto Scaling Group (ASG)</a>. Typically, you\nshould run the ASG with 3 or 5 EC2 Instances spread across multiple <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html\" class=\"preview__body--description--blue\" target=\"_blank\">Availability\nZones</a>. Each of the EC2\nInstances should be running an AMI that has had Nomad installed via the <a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/install-nomad\" class=\"preview__body--description--blue\">install-nomad</a>\nmodule. 
You pass in the ID of the AMI to run using the <code>ami_id</code> input parameter.</p>\n<h3 class=\"preview__body--subtitle\" id=\"security-group\">Security Group</h3>\n<p>Each EC2 Instance in the ASG has a Security Group that allows:</p>\n<ul>\n<li>All outbound requests</li>\n<li>All the inbound ports specified in the <a href=\"https://www.nomadproject.io/docs/agent/configuration/index.html#ports\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad\ndocumentation</a></li>\n</ul>\n<p>The Security Group ID is exported as an output variable if you need to add additional rules.</p>\n<p>Check out the <a href=\"#security\" class=\"preview__body--description--blue\">Security section</a> for more details.</p>\n<h3 class=\"preview__body--subtitle\" id=\"iam-role-and-permissions\">IAM Role and Permissions</h3>\n<p>Each EC2 Instance in the ASG has an <a href=\"http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html\" class=\"preview__body--description--blue\" target=\"_blank\">IAM Role</a> attached.\nWe give this IAM role a small set of IAM permissions that each EC2 Instance can use to automatically discover the other\nInstances in its ASG and form a cluster with them.</p>\n<p>The IAM Role ARN is exported as an output variable if you need to add additional permissions.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-roll-out-updates\">How do you roll out updates?</h2>\n<p>If you want to deploy a new version of Nomad across the cluster, the best way to do that is to:</p>\n<ol>\n<li>Build a new AMI.</li>\n<li>Set the <code>ami_id</code> parameter to the ID of the new AMI.</li>\n<li>Run <code>terraform apply</code>.</li>\n</ol>\n<p>This updates the Launch Configuration of the ASG, so any new Instances in the ASG will have your new AMI, but it does\nNOT actually deploy those new instances. To make that happen, you should do the following:</p>\n<ol>\n<li>\n<p>Issue an API call to one of the old Instances in the ASG to have it leave gracefully. E.g.:</p>\n<pre>nomad server-<span class=\"hljs-literal\">force</span>-<span class=\"hljs-literal\">leave</span> -address=<OLD_INSTANCE_IP>:<span class=\"hljs-number\">4646</span>\n</pre>\n</li>\n<li>\n<p>Once the instance has left the cluster, terminate it:</p>\n<pre>aws ec2 <span class=\"hljs-keyword\">terminate</span>-instances <span class=\"hljs-comment\">--instance-ids <OLD_INSTANCE_ID></span>\n</pre>\n</li>\n<li>\n<p>After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.</p>\n</li>\n<li>\n<p>Wait for the new Instance to boot and join the cluster.</p>\n</li>\n<li>\n<p>Repeat these steps for each of the other old Instances in the ASG.</p>\n</li>\n</ol>\n<p>We will add a script in the future to automate this process (PRs are welcome!).</p>\n<h2 class=\"preview__body--subtitle\" id=\"what-happens-if-a-node-crashes\">What happens if a node crashes?</h2>\n<p>There are two ways a Nomad node may go down:</p>\n<ol>\n<li>The Nomad process may crash. In that case, <code>supervisor</code> should restart it automatically.</li>\n<li>The EC2 Instance running Nomad dies. 
In that case, the Auto Scaling Group should launch a replacement automatically.\nNote that in this case, since the Nomad agent did not exit gracefully, and the replacement will have a different ID,\nyou may have to manually clean out the old nodes using the <a href=\"https://www.nomadproject.io/docs/commands/server-force-leave.html\" class=\"preview__body--description--blue\" target=\"_blank\">server-force-leave\ncommand</a>. We may add a script to do this\nautomatically in the future. For more info, see the <a href=\"https://www.nomadproject.io/guides/outage.html\" class=\"preview__body--description--blue\" target=\"_blank\">Nomad Outage\ndocumentation</a>.</li>\n</ol>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-connect-load-balancers-to-the-auto-scaling-group-asg\">How do you connect load balancers to the Auto Scaling Group (ASG)?</h2>\n<p>You can use the <a href=\"https://www.terraform.io/docs/providers/aws/r/autoscaling_attachment.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>aws_autoscaling_attachment</code></a> resource.</p>\n<p>For example, if you are using the new application or network load balancers:</p>\n<pre><span class=\"hljs-keyword\">resource</span> <span class=\"hljs-string\">\"aws_lb_target_group\"</span> <span class=\"hljs-string\">\"test\"</span> {\n // ...\n}\n\n<span class=\"hljs-comment\"># Create a new Nomad Cluster</span>\n<span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"nomad\"</span> {\n source =<span class=\"hljs-string\">\"...\"</span>\n // ...\n}\n\n<span class=\"hljs-comment\"># Create a new load balancer attachment</span>\n<span class=\"hljs-keyword\">resource</span> <span class=\"hljs-string\">\"aws_autoscaling_attachment\"</span> <span class=\"hljs-string\">\"asg_attachment_bar\"</span> {\n autoscaling_group_name = <span class=\"hljs-keyword\">module</span>.nomad.asg_name\n alb_target_group_arn = aws_alb_target_group.test.arn\n}\n</pre>\n<p>If you are using a "classic" load balancer:</p>\n<pre><span class=\"hljs-comment\"># Create a new load balancer</span>\n<span class=\"hljs-keyword\">resource</span> <span class=\"hljs-string\">\"aws_elb\"</span> <span class=\"hljs-string\">\"bar\"</span> {\n // ...\n}\n\n<span class=\"hljs-comment\"># Create a new Nomad Cluster</span>\n<span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"nomad\"</span> {\n source =<span class=\"hljs-string\">\"...\"</span>\n // ...\n}\n\n<span class=\"hljs-comment\"># Create a new load balancer attachment</span>\n<span class=\"hljs-keyword\">resource</span> <span class=\"hljs-string\">\"aws_autoscaling_attachment\"</span> <span class=\"hljs-string\">\"asg_attachment_bar\"</span> {\n autoscaling_group_name = <span class=\"hljs-keyword\">module</span>.nomad.asg_name\n elb = aws_elb.bar.id\n}\n</pre>\n<h2 class=\"preview__body--subtitle\" id=\"security\">Security</h2>\n<p>Here are some of the main security considerations to keep in mind when using this module:</p>\n<ol>\n<li><a href=\"#encryption-in-transit\" class=\"preview__body--description--blue\">Encryption in transit</a></li>\n<li><a href=\"#encryption-at-rest\" class=\"preview__body--description--blue\">Encryption at rest</a></li>\n<li><a href=\"#dedicated-instances\" class=\"preview__body--description--blue\">Dedicated instances</a></li>\n<li><a href=\"#security-groups\" class=\"preview__body--description--blue\">Security groups</a></li>\n<li><a href=\"#ssh-access\" class=\"preview__body--description--blue\">SSH access</a></li>\n</ol>\n<h3 
class=\"preview__body--subtitle\" id=\"encryption-in-transit\">Encryption in transit</h3>\n<p>Nomad can encrypt all of its network traffic. For instructions on enabling network encryption, have a look at the\n<a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/run-nomad#how-do-you-handle-encryption\" class=\"preview__body--description--blue\">How do you handle encryption documentation</a>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"encryption-at-rest\">Encryption at rest</h3>\n<p>The EC2 Instances in the cluster store all their data on the root EBS Volume. To enable encryption for the data at\nrest, you must enable encryption in your Nomad AMI. If you're creating the AMI using Packer (e.g. as shown in\nthe <a href=\"/repos/v0.5.1/terraform-aws-nomad/examples/nomad-consul-ami\" class=\"preview__body--description--blue\">nomad-consul-ami example</a>), you need to set the <a href=\"https://www.packer.io/docs/builders/amazon-ebs.html#encrypt_boot\" class=\"preview__body--description--blue\" target=\"_blank\">encrypt_boot\nparameter</a> to <code>true</code>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"dedicated-instances\">Dedicated instances</h3>\n<p>If you wish to use dedicated instances, you can set the <code>tenancy</code> parameter to <code>"dedicated"</code> in this module.</p>\n<h3 class=\"preview__body--subtitle\" id=\"security-groups\">Security groups</h3>\n<p>This module attaches a security group to each EC2 Instance that allows inbound requests as follows:</p>\n<ul>\n<li>\n<p><strong>Nomad</strong>: For all the <a href=\"https://www.nomadproject.io/docs/agent/configuration/index.html#ports\" class=\"preview__body--description--blue\" target=\"_blank\">ports used by Nomad</a>,\nyou can use the <code>allowed_inbound_cidr_blocks</code> parameter to control the list of\n<a href=\"https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing\" class=\"preview__body--description--blue\" target=\"_blank\">CIDR blocks</a> that will be allowed access.</p>\n</li>\n<li>\n<p><strong>SSH</strong>: For the SSH port (default: 22), you can use the <code>allowed_ssh_cidr_blocks</code> parameter to control the list of\n<a href=\"https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing\" class=\"preview__body--description--blue\" target=\"_blank\">CIDR blocks</a> that will be allowed access.</p>\n</li>\n</ul>\n<p>Note that all the ports mentioned above are configurable via the <code>xxx_port</code> variables (e.g. <code>http_port</code>). See\n<a href=\"/repos/v0.5.1/terraform-aws-nomad/modules/nomad-cluster/variables.tf\" class=\"preview__body--description--blue\">variables.tf</a> for the full list.</p>\n<h3 class=\"preview__body--subtitle\" id=\"ssh-access\">SSH access</h3>\n<p>You can associate an <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html\" class=\"preview__body--description--blue\" target=\"_blank\">EC2 Key Pair</a> with each\nof the EC2 Instances in this cluster by specifying the Key Pair's name in the <code>ssh_key_name</code> variable. 
If you don't\nwant to associate a Key Pair with these servers, set <code>ssh_key_name</code> to an empty string.</p>\n<h2 class=\"preview__body--subtitle\" id=\"whats-not-included-in-this-module\">What's NOT included in this module?</h2>\n<p>This module does NOT handle the following items, which you may want to provide on your own:</p>\n<ul>\n<li><a href=\"#consul\" class=\"preview__body--description--blue\">Consul</a></li>\n<li><a href=\"#monitoring-alerting-log-aggregation\" class=\"preview__body--description--blue\">Monitoring, alerting, log aggregation</a></li>\n<li><a href=\"#vpcs-subnets-route-tables\" class=\"preview__body--description--blue\">VPCs, subnets, route tables</a></li>\n<li><a href=\"#dns-entries\" class=\"preview__body--description--blue\">DNS entries</a></li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"consul\">Consul</h3>\n<p>This module assumes you already have Consul deployed in a separate cluster. If you want to run Nomad and Consul on the\nsame cluster, instead of using this module, see the <a href=\"/repos/v0.5.1/terraform-aws-nomad/README.md#deploy-nomad-and-consul-in-the-same-cluster\" class=\"preview__body--description--blue\">Deploy Nomad and Consul in the same cluster\ndocumentation</a>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"monitoring-alerting-log-aggregation\">Monitoring, alerting, log aggregation</h3>\n<p>This module does not include anything for monitoring, alerting, or log aggregation. All ASGs and EC2 Instances come\nwith limited <a href=\"https://aws.amazon.com/cloudwatch/\" class=\"preview__body--description--blue\" target=\"_blank\">CloudWatch</a> metrics built-in, but beyond that, you will have to\nprovide your own solutions.</p>\n<h3 class=\"preview__body--subtitle\" id=\"vp-cs-subnets-route-tables\">VPCs, subnets, route tables</h3>\n<p>This module assumes you've already created your network topology (VPC, subnets, route tables, etc). You will need to\npass in the the relevant info about your network topology (e.g. <code>vpc_id</code>, <code>subnet_ids</code>) as input variables to this\nmodule.</p>\n<h3 class=\"preview__body--subtitle\" id=\"dns-entries\">DNS entries</h3>\n<p>This module does not create any DNS entries for Nomad (e.g. in Route 53).</p>\n","repoName":"terraform-aws-nomad","repoRef":"v0.5.2","serviceDescriptor":{"serviceName":"HashiCorp Nomad","serviceRepoName":"terraform-aws-nomad","serviceRepoOrg":"hashicorp","cloudProviders":["aws"],"description":"Deploy a Nomad cluster. Supports automatic bootstrapping, discovery of Consul servers, automatic recovery of failed servers.","imageUrl":"nomad.png","licenseType":"open-source","technologies":["Terraform","Bash"],"compliance":[],"tags":[""]},"serviceCategoryName":"Docker orchestration","fileName":"README.md","filePath":"/modules/nomad-cluster/README.md","title":"Repo Browser: HashiCorp Nomad","description":"Browse the repos in the Gruntwork Infrastructure as Code Library."}