Vault Cluster
This folder contains a Terraform module that can be used to deploy a
Vault cluster in AWS on top of an Auto Scaling Group. This
module is designed to deploy an Amazon Machine Image (AMI)
that has had Vault installed via the install-vault module in this Module.
How do you use this module?
This folder defines a Terraform module, which you can use in your
code by adding a module configuration and setting its source parameter to the URL of this folder:
module"vault_cluster" {
# Use version v0.0.1 of the vault-cluster module
source = "github.com/hashicorp/terraform-aws-vault//modules/vault-cluster?ref=v0.0.1"# Specify the ID of the Vault AMI. You should build this using the scripts in the install-vault module.
ami_id = "ami-abcd1234"# Configure and start Vault during boot.
user_data = <<-EOF
#!/bin/bash
/opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem
EOF
# Add tag to each node in the cluster with value set to var.cluster_name
cluster_tag_key = "Name"# Optionally add extra tags to each node in the cluster
cluster_extra_tags = [
{
key = "Environment"
value = "Dev"
propagate_at_launch = true
},
{
key = "Department"
value = "Ops"
propagate_at_launch = true
}
]
# ... See variables.tf for the other parameters you must define for the vault-cluster module
}
Note the following parameters:
source: Use this parameter to specify the URL of the vault-cluster module. The double slash (//) is intentional
and required. Terraform uses it to specify subfolders within a Git repo (see module
sources). The ref parameter specifies a specific Git tag in
this repo. That way, instead of using the latest version of this module from the master branch, which
will change every time you run Terraform, you're using a fixed version of the repo.
ami_id: Use this parameter to specify the ID of a Vault Amazon Machine Image
(AMI) to deploy on each server in the cluster. You
should install Vault in this AMI using the scripts in the install-vault module.
user_data: Use this parameter to specify a User
Data script that each
server will run during boot. This is where you can use the run-vault script to configure and
run Vault. The run-vault script is one of the scripts installed by the install-vault
module.
You can find the other parameters in variables.tf.
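For orientation, here is a hedged sketch of what a fuller module block might look like; parameter names such as cluster_name, cluster_size, and instance_type are taken from variables.tf, but verify them there, as they can change between versions:

```
module "vault_cluster" {
  source = "github.com/hashicorp/terraform-aws-vault//modules/vault-cluster?ref=v0.0.1"

  ami_id    = "ami-abcd1234"
  user_data = var.user_data

  # Illustrative values; see variables.tf for the authoritative names and defaults
  cluster_name  = "example-vault-cluster"
  cluster_size  = 3
  instance_type = "t2.micro"

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids
}
```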
Check out the root example and vault-cluster-private examples for working sample code.
How do you use the Vault cluster?
To use the Vault cluster, you will typically need to SSH to each of the Vault servers. If you deployed the
vault-cluster-private or root examples, the vault-examples-helper.sh script will do the tag lookup for you
automatically (note, you must have the AWS CLI and jq installed locally):
> ../vault-examples-helper/vault-examples-helper.sh
Your Vault servers are running at the following IP addresses:
11.22.33.44
11.22.33.55
11.22.33.66
Initializing the Vault cluster
The very first time you deploy a new Vault cluster, you need to initialize the
Vault. The easiest way to do
this is to SSH to one of the servers that has Vault installed and run:
vault operator init

Key 1: 427cd2c310be3b84fe69372e683a790e01
Key 2: 0e2b8f3555b42a232f7ace6fe0e68eaf02
Key 3: 37837e5559b322d0585a6e411614695403
Key 4: 8dd72fd7d1af254de5f82d1270fd87ab04
Key 5: b47fdeb7dda82dbe92d88d3c860f605005
Initial Root Token: eaf5cc32-b48f-7785-5c94-90b5ce300e9b

Vault initialized with 5 keys and a key threshold of 3!
Vault will print out the unseal keys and a root
token. This is the only time ever that all of
this data is known by Vault, so you MUST save it in a secure place immediately! Also, this is the only time that
the unseal keys should ever be so close together. You should distribute each one to a different, trusted administrator
for safe keeping in completely separate secret stores and NEVER store them all in the same place.
In fact, a better option is to initialize Vault with PGP, GPG, or
Keybase so that each unseal key is encrypted with a
different user's public key. That way, no one, not even the operator running the init command, can see all the keys
in one place:

vault operator init -pgp-keys="keybase:jefferai,keybase:vishalnayak,keybase:sethvargo"

Key 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc...
Key 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD...
Key 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd...
...

See Using PGP, GPG, and Keybase for more info.
Unsealing the Vault cluster
Now that you have the unseal keys, you can unseal Vault by
having 3 out of the 5 administrators (or whatever your key shard threshold is) do the following:
SSH to a Vault server.
Run vault operator unseal.
Enter the unseal key when prompted.
Repeat for each of the other Vault servers.
Once this process is complete, all the Vault servers will be unsealed and you will be able to start reading and writing
secrets.
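If you prefer not to do this by hand on every server, the process can be scripted. A minimal sketch, assuming you have SSH access to each server and each admin supplies their own key at the prompt:

```
# Illustrative helper, not part of this module: run from a machine with
# SSH access to the Vault servers. The ubuntu user assumes an Ubuntu AMI;
# use the IPs printed by vault-examples-helper.sh.
SERVER_IPS="11.22.33.44 11.22.33.55 11.22.33.66"

for ip in $SERVER_IPS; do
  # -t allocates a terminal so the unseal key prompt works; each admin
  # repeats this with their own key until the threshold (e.g. 3 of 5) is met
  ssh -t "ubuntu@$ip" "vault operator unseal"
done
```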
Setting up a secrets engine
In previous versions of Vault (< 1.1.0), a key-value secrets engine was automatically mounted at the path secret/.
The examples in this module use Vault >= 1.1.0 and thus mount a key-value secrets engine at secret/ explicitly:
vault secrets enable -version=1 -path=secret kv
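With the engine mounted, you can write a test secret, which the read examples below retrieve:

```
vault write secret/foo value=bar
```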
Connecting to the Vault cluster to read and write secrets
Access Vault from a Vault server
When you SSH to a Vault server, the Vault client is already configured to talk to the Vault server on localhost, so
you can directly run Vault commands:
vault read secret/foo

Key                 Value
---                 -----
refresh_interval    768h0m0s
value               bar
Access Vault from other servers in the same AWS account
To access Vault from a different server in the same account, you need to specify the URL of the Vault cluster. You
could manually look up the Vault cluster's IP address, but since this module uses Consul not only as a storage
backend but also as a way to register DNS
entries, you can access Vault
using a nice domain name instead, such as vault.service.consul.
To set this up, use the install-dnsmasq module on each server that needs to access Vault (or the
setup-systemd-resolved module if using Ubuntu 18.04). This allows you to access Vault from your EC2 Instances as follows:
vault -address=https://vault.service.consul:8200 read secret/foo

Key                 Value
---                 -----
refresh_interval    768h0m0s
value               bar
You can configure the Vault address as an environment variable:

export VAULT_ADDR=https://vault.service.consul:8200

That way, you don't have to remember to pass the Vault address every time:
vault read secret/foo

Key                 Value
---                 -----
refresh_interval    768h0m0s
value               bar
Note that if you're using a self-signed TLS cert (e.g. generated from the private-tls-cert
module), you'll need to have the public key of the CA that signed that cert or you'll get
an "x509: certificate signed by unknown authority" error. You could pass the certificate manually:
vault read -ca-cert=/opt/vault/tls/ca.crt.pem secret/foo

Key                 Value
---                 -----
refresh_interval    768h0m0s
value               bar
However, to avoid having to add the -ca-cert argument to every single call, you can use the update-certificate-store
module to configure the server to trust the CA.
Check out the vault-cluster-private example for working sample code.
Access Vault from the public Internet
We strongly recommend only running Vault in private subnets. That means it is not directly accessible from the
public Internet, which reduces your surface area to attackers. If you need users to be able to access Vault from
outside of AWS, we recommend using VPN to connect to AWS.
If VPN is not an option, and Vault must be accessible from the public Internet, you can use the vault-elb
module to deploy an Elastic Load Balancer
(ELB) in your public subnets, and have all your users
access Vault via this ELB:

vault -address=https://<ELB_DNS_NAME> read secret/foo

Where ELB_DNS_NAME is the DNS name for your ELB, such as vault.example.com. You can configure the Vault address as
an environment variable:

export VAULT_ADDR=https://vault.example.com

That way, you don't have to remember to pass the Vault address every time:

vault read secret/foo

What's included in this module?
Auto Scaling Group
This module runs Vault on top of an Auto Scaling Group (ASG). Typically, you
should run the ASG with 3 or 5 EC2 Instances spread across multiple Availability
Zones. Each of the EC2
Instances should be running an AMI that has had Vault installed via the install-vault
module. You pass in the ID of the AMI to run using the ami_id input parameter.
Security Group
Each EC2 Instance in the ASG has a Security Group that allows:
All outbound requests
Inbound requests on Vault's API port (default: port 8200)
Inbound requests on Vault's cluster port for server-to-server communication (default: port 8201)
Inbound SSH requests (default: port 22)
The Security Group ID is exported as an output variable if you need to add additional rules.
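As an example, here is a sketch of adding a custom rule with that output; the security_group_id output name should be verified against this module's outputs.tf:

```
# Illustrative: open the Vault API port to one extra CIDR block
resource "aws_security_group_rule" "extra_api_access" {
  type              = "ingress"
  from_port         = 8200
  to_port           = 8200
  protocol          = "tcp"
  cidr_blocks       = ["10.10.0.0/16"]
  security_group_id = module.vault_cluster.security_group_id
}
```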
IAM Role and Permissions
Each EC2 Instance in the ASG has an IAM Role attached.
The IAM Role ARN is exported as an output variable so you can add custom permissions.
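For example, a sketch of granting extra permissions via the exported role; the iam_role_id output name is an assumption to check against outputs.tf, and the KMS policy is illustrative:

```
# Illustrative: allow the Vault instances to read a KMS key
resource "aws_iam_role_policy" "vault_kms" {
  name = "vault-kms-access"
  role = module.vault_cluster.iam_role_id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kms:Decrypt", "kms:DescribeKey"]
      Resource = "*"
    }]
  })
}
```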
S3 bucket (Optional)
If configure_s3_backend is set to true, this module will create an S3 bucket that Vault
can use as a storage backend. S3 is a good choice for storage because it provides outstanding durability (99.999999999%)
and availability (99.99%). Unfortunately, S3 cannot be used for Vault High Availability coordination, so this module expects
a separate Consul server cluster to be deployed as a high availability backend.
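A sketch of what enabling the S3 backend looks like in the module block; the s3_bucket_name parameter name should be confirmed in variables.tf, and the bucket name is illustrative:

```
module "vault_cluster" {
  # ... other parameters ...

  # Create an S3 bucket for Vault to use as a storage backend
  configure_s3_backend = true
  s3_bucket_name       = "my-vault-storage-example"
}
```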
How do you roll out updates?
Please note that Vault does not support true zero-downtime upgrades, but with a proper upgrade procedure the downtime
should be very short (a few hundred milliseconds to a second, depending on the speed of access to the storage
backend). See the Vault upgrade guide for
details.
If you want to deploy a new version of Vault across a cluster deployed with this module, the best way to do that is to:
Build a new AMI.
Set the ami_id parameter to the ID of the new AMI.
Run terraform apply.
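For example, assuming the new build produced the hypothetical AMI ID ami-efgh5678, step 2 is just:

```
module "vault_cluster" {
  # ... other parameters unchanged ...

  # Point at the newly built AMI (hypothetical ID)
  ami_id = "ami-efgh5678"
}
```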
This updates the Launch Configuration of the ASG, so any new Instances in the ASG will have your new AMI, but it does
NOT actually deploy those new instances. To make that happen, you need to:
Replace the standby nodes
For each of the standby nodes:
SSH to the EC2 Instance where the Vault standby is running.
Execute sudo systemctl stop vault to have Vault shut down gracefully.
Terminate the EC2 Instance.
After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.
Have each Vault admin SSH to the new EC2 Instance and unseal it.
Replace the primary node
The procedure for the primary node is the same, but should be done LAST, after all the standbys have already been
upgraded:
SSH to the EC2 Instance where the Vault primary is running. This should be the last server that has the old version
of your AMI.
Execute sudo systemctl stop vault to have Vault shut down gracefully.
Terminate the EC2 Instance.
After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.
Have each Vault admin SSH to the new EC2 Instance and unseal it.
What happens if a node crashes?
There are two ways a Vault node may go down:
The Vault process may crash. In that case, systemd should restart it automatically. At this point, you will
need to have each Vault admin SSH to the Instance to unseal it again.
The EC2 Instance running Vault dies. In that case, the Auto Scaling Group should launch a replacement automatically.
Once again, the Vault admins will have to SSH to the replacement Instance and unseal it.
Given the need for manual intervention, you will want to have alarms set up that go off any time a Vault node gets
restarted.
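CloudWatch has no built-in view of Vault's seal status, but as one hedged example you could alarm on ASG instance churn. This sketch assumes you enable the GroupInServiceInstances group metric on the ASG, that the module exports the ASG name as asg_name (check outputs.tf), and that an SNS topic for alerts already exists:

```
resource "aws_cloudwatch_metric_alarm" "vault_instances_in_service" {
  alarm_name          = "vault-instances-in-service"
  namespace           = "AWS/AutoScaling"
  metric_name         = "GroupInServiceInstances"
  statistic           = "Minimum"
  period              = 60
  evaluation_periods  = 1
  comparison_operator = "LessThanThreshold"
  threshold           = 3 # expected cluster size

  dimensions = {
    AutoScalingGroupName = module.vault_cluster.asg_name
  }

  alarm_actions = [aws_sns_topic.vault_alerts.arn] # hypothetical topic
}
```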
Security
Here are some of the main security considerations to keep in mind when using this module:
Encryption in transit
Vault uses TLS to encrypt its network traffic. For instructions on configuring TLS, have a look at the How do you
handle encryption documentation in the run-vault module.
Encryption at rest
Vault servers keep everything in memory and do not write any data to the local hard disk. To persist data, Vault
encrypts it and sends it off to its storage backend, so no matter how the backend stores that data, it is already
encrypted. By default, this Module uses Consul as a storage backend, so if you want an additional layer of
protection, check out the official Consul encryption docs and the Consul AWS Module's How do you handle encryption
docs for more info.
Note that if you want to enable encryption for the root EBS Volume for your Vault Instances (despite the fact that
Vault itself doesn't write anything to this volume), you need to enable that in your AMI. If you're creating the AMI
using Packer (e.g. as shown in the vault-consul-ami example), you need to set the encrypt_boot
parameter to true.
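In a Packer template such as vault-consul.json, that looks roughly like this (builder fields trimmed to the relevant bit; the ami_name value is illustrative):

```
{
  "builders": [{
    "type": "amazon-ebs",
    "ami_name": "vault-consul-example-{{timestamp}}",
    "encrypt_boot": true
  }]
}
```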
Dedicated instances
If you wish to use dedicated instances, you can set the tenancy parameter to "dedicated" in this module.
Security groups
This module attaches a security group to each EC2 Instance that allows inbound requests as follows:
Vault: For the Vault API port (default: 8200), you can use the allowed_inbound_cidr_blocks parameter to control
the list of CIDR blocks that will be allowed access
and the allowed_inbound_security_group_ids parameter to control the security groups that will be allowed access.
SSH: For the SSH port (default: 22), you can use the allowed_ssh_cidr_blocks parameter to control the list of CIDR blocks that will be allowed access. You can use the allowed_ssh_security_group_ids parameter to control the list of source Security Groups that will be allowed access.
Note that all the ports mentioned above are configurable via the xxx_port variables (e.g. api_port). See
variables.tf for the full list.
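A sketch of these parameters in the module block (CIDR values illustrative):

```
module "vault_cluster" {
  # ... other parameters ...

  # Who may reach the Vault API port (default: 8200)
  allowed_inbound_cidr_blocks        = ["10.0.0.0/16"]
  allowed_inbound_security_group_ids = []

  # Who may SSH to the instances (default: port 22)
  allowed_ssh_cidr_blocks        = ["10.0.100.0/24"]
  allowed_ssh_security_group_ids = []
}
```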
SSH access
You can associate an EC2 Key Pair with each
of the EC2 Instances in this cluster by specifying the Key Pair's name in the ssh_key_name variable. If you don't
want to associate a Key Pair with these servers, set ssh_key_name to an empty string.
What's NOT included in this module?
This module does NOT handle the following items, which you may want to provide on your own:
Consul
This module configures Vault to use Consul as a high availability storage backend. This module assumes you already
have Consul servers deployed in a separate cluster. We do not recommend co-locating Vault and Consul servers in the
same cluster because:
Vault is a tool built specifically for security, and running any other software on the same server increases its
surface area to attackers.
This Vault Module uses Consul as a high availability storage backend and both Vault and Consul keep their working
set in memory. That means for every 1 byte of data in Vault, you'd also have 1 byte of data in Consul, doubling
your memory consumption on each server.
Check out the Consul AWS Module for how to deploy a Consul
server cluster in AWS. See the root example and
vault-cluster-private examples for sample code that shows how to run both a
Vault server cluster and Consul server cluster.
Monitoring, alerting, log aggregation
This module does not include anything for monitoring, alerting, or log aggregation. All ASGs and EC2 Instances come
with limited CloudWatch metrics built-in, but beyond that, you will have to
provide your own solutions. We especially recommend looking into Vault's Audit
backends for how you can capture detailed logging and audit
information.
Given that any time Vault crashes, reboots, or restarts, you have to have the Vault admins manually unseal it (see
What happens if a node crashes?), we strongly recommend configuring alerts that
notify these admins whenever they need to take action!
VPCs, subnets, route tables
This module assumes you've already created your network topology (VPC, subnets, route tables, etc). You will need to
pass in the relevant info about your network topology (e.g. vpc_id, subnet_ids) as input variables to this
module.
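For example (IDs illustrative):

```
module "vault_cluster" {
  # ... other parameters ...

  vpc_id     = "vpc-0123456789abcdef0"
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]
}
```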
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"be1841a927697869a942fb91e86672c646cc32bb"}]},{"name":".gitignore","path":".gitignore","sha":"6c4ebe4426586b7febbaba178294ef59b8272c05"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"5949dbc0fa6d4dd6610575e3c878c353d92da44a"},{"name":"CONTRIBUTING.md","path":"CONTRIBUTING.md","sha":"ea1ca5c8d6ff2d0d62880ee0ea80ef86e0b87dad"},{"name":"LICENSE","path":"LICENSE","sha":"7a4a3ea2424c09fbe48d455aed1eaa94d9124835"},{"name":"NOTICE","path":"NOTICE","sha":"2288082e33ae18a610f6a7747180f7e05e47a001"},{"name":"README.md","path":"README.md","sha":"1a2de50f26400eda43c1067fccf4aa49b3db8dfe"},{"name":"_ci","children":[{"name":"publish-amis-in-new-account.md","path":"_ci/publish-amis-in-new-account.md","sha":"3182a0a90775f7bb9622c037196ac2a1f15e455d"},{"name":"publish-amis.sh","path":"_ci/publish-amis.sh","sha":"3d4a46a02f26d45a5fc27cce07cd3db7bc140399"}]},{"name":"_docs","children":[{"name":"amazon-linux-ami-list.md","path":"_docs/amazon-linux-ami-list.md","sha":"be9f50c689839b099d0222711ec13a86108660f0"},{"name":"architecture-elb.png","path":"_docs/architecture-elb.png","sha":"9e02e4f53afdd2929ec4fc4246ae5e47bd49f295"},{"name":"architecture-with-s3.png","path":"_docs/architecture-with-s3.png","sha":"8a91ef2d06665e40fe82a8ccf7ae4281f338fd50"},{"name":"architecture.png","path":"_docs/architecture.png","sha":"a9f6098b37b1aaafe8c744b154208efc3e642881"},{"name":"ubuntu16-ami-list.md","path":"_docs/ubuntu16-ami-list.md","sha":"60caafe1f2b90046e819f373ed22c0df47043f03"}]},{"name":"examples","children":[{"name":"root-example","children":[{"name":"README.md","path":"examples/root-example/README.md","sha":"4d73916c181c9c4157905162d4ed66d2d7427342"},{"name":"user-data-consul.sh","path":"examples/root-example/user-data-consul.sh","sha":"5043e6904cab4564ed0c7f8337599a884f96a194"},{"name":"user-data-vault.sh","path":"examples/root-example/user-data-vault.sh","sha":"26fad57bb49a78e4e2a4b7ce52427efb27e87ced"}]},{"name":"vault-agent","children":[{"name":"README.md","path":"examples/vault-agent/README.md","sha":"0a80c92a455171b6af0e1774a1e67adee32579d6"},{"name":"main.tf","path":"examples/vault-agent/main.tf","sha":"1411aff0b44e6554a96d0481d0ffa31a1b4a27ea"},{"name":"outputs.tf","path":"examples/vault-agent/outputs.tf","sha":"16bb9676e7fa2ec2bb5148c5ca5763d7c01db837"},{"name":"user-data-auth-client.sh","path":"examples/vault-agent/user-data-auth-client.sh","sha":"9ff5ebc6c45f791f9357a71a7f3415f1e333b61e"},{"name":"user-data-consul.sh","path":"examples/vault-agent/user-data-consul.sh","sha":"0c96497e38b05e5b5a54277c95ae129827a3daa2"},{"name":"user-data-vault.sh","path":"examples/vault-agent/user-data-vault.sh","sha":"49983b4b543bd7d28c2adde81629d4a3867ffe13"},{"name":"variables.tf","path":"examples/vault-agent/variables.tf","sha":"9abf58af8a0dc24bd445a1b779f07fcf48a05a0e"}]},{"name":"vault-auto-unseal","children":[{"name":"README.md","path":"examples/vault-auto-unseal/README.md","sha":"770b559d99f84ce103f01fddcdc10c1fef58d482"},{"name":"main.tf","path":"examples/vault-auto-unseal/main.tf","sha":"56169fcd17ecacb9dd028c7f9e8a1e880a9badd6"},{"name":"outputs.tf","path":"examples/vault-auto-unseal/outputs.tf","sha":"9e7ebd3be30c61662e8647cfecfec210de53e6d2"},{"name":"user-data-consul.sh","path":"examples/vault-auto-unseal/user-data-consul.sh","sha":"0c96497e38b05e5b5a54277c95ae129827a3daa2"},{"name":"user-data-vault.sh","path":"examples/vault-auto-unseal/user-data-vault.sh","sha":"1d953
3ea3ba6f9b89242ce503e8b7ea1e59579ba"},{"name":"variables.tf","path":"examples/vault-auto-unseal/variables.tf","sha":"03847da844d2c5a5c24a27872324da11249d11de"}]},{"name":"vault-cluster-private","children":[{"name":"README.md","path":"examples/vault-cluster-private/README.md","sha":"ca0abbac27030e0041b221b8c96b68868615d46c"},{"name":"main.tf","path":"examples/vault-cluster-private/main.tf","sha":"8d799c376e723c81a781fee11a5ca279fc6aeac4"},{"name":"outputs.tf","path":"examples/vault-cluster-private/outputs.tf","sha":"9e7ebd3be30c61662e8647cfecfec210de53e6d2"},{"name":"user-data-consul.sh","path":"examples/vault-cluster-private/user-data-consul.sh","sha":"5043e6904cab4564ed0c7f8337599a884f96a194"},{"name":"user-data-vault.sh","path":"examples/vault-cluster-private/user-data-vault.sh","sha":"ef32d804ab9f1807730bae1551fc3fd3fff6da95"},{"name":"variables.tf","path":"examples/vault-cluster-private/variables.tf","sha":"3e919aff20454c6ef004986d3f28b7f65c5d9379"}]},{"name":"vault-consul-ami","children":[{"name":"README.md","path":"examples/vault-consul-ami/README.md","sha":"97b6eeaf3f45cb12b227eb47059042630ec342a4"},{"name":"auth","children":[{"name":"sign-request.py","path":"examples/vault-consul-ami/auth/sign-request.py","sha":"cba97708676a0d3aa8068ee1b5ecb3bf8d14067f"}]},{"name":"tls","children":[{"name":"README.md","path":"examples/vault-consul-ami/tls/README.md","sha":"92f88219562304b995bd78889a24047bdde336af"},{"name":"ca.crt.pem","path":"examples/vault-consul-ami/tls/ca.crt.pem","sha":"9bf1a62b0649d1ab5c0b16710166c146a1fd1fa3"},{"name":"vault.crt.pem","path":"examples/vault-consul-ami/tls/vault.crt.pem","sha":"e642f0b108bfdebe56331111ce9ce75f8ff42f52"},{"name":"vault.key.pem","path":"examples/vault-consul-ami/tls/vault.key.pem","sha":"0103aa55a5a68ffc002c7c9c14a292adbd97fd2d"}]},{"name":"vault-consul.json","path":"examples/vault-consul-ami/vault-consul.json","sha":"34fc05d0337fd83fdb42faa143e6b216a8f6585b"}]},{"name":"vault-ec2-auth","children":[{"name":"README.md","path":"examples/vault-ec2-auth/README.md","sha":"29af1121fa99b3903b09447c79e127daecb30bfb"},{"name":"images","children":[{"name":"ec2-auth.png","path":"examples/vault-ec2-auth/images/ec2-auth.png","sha":"a98fb916ed6a32204efbc525cac59c0d570d619d"}]},{"name":"main.tf","path":"examples/vault-ec2-auth/main.tf","sha":"5417c9d851c4b9ad99033205e615aff8c9b59cf1"},{"name":"outputs.tf","path":"examples/vault-ec2-auth/outputs.tf","sha":"8694fbce70e13690b8bca4bab50d2570dcd7bdd9"},{"name":"user-data-auth-client.sh","path":"examples/vault-ec2-auth/user-data-auth-client.sh","sha":"e049ec6dca2d35d6fde5badec4e48ecafe8bfc38"},{"name":"user-data-consul.sh","path":"examples/vault-ec2-auth/user-data-consul.sh","sha":"0c96497e38b05e5b5a54277c95ae129827a3daa2"},{"name":"user-data-vault.sh","path":"examples/vault-ec2-auth/user-data-vault.sh","sha":"dd8a73e43e9a4c42e4687ad4cc3c84a543ce548a"},{"name":"variables.tf","path":"examples/vault-ec2-auth/variables.tf","sha":"f04b84eac1668fa2ca3b92d50b27ca6139fde834"}]},{"name":"vault-examples-helper","children":[{"name":"README.md","path":"examples/vault-examples-helper/README.md","sha":"a28a95258bee372025e4282daf60a20d1bf96bdb"},{"name":"vault-examples-helper.sh","path":"examples/vault-examples-helper/vault-examples-helper.sh","sha":"ebe3d8b9bb599384add9a7c635b397529b10fde5"}]},{"name":"vault-iam-auth","children":[{"name":"README.md","path":"examples/vault-iam-auth/README.md","sha":"7557e5abb41341b82464a36eebd0e759d857625d"},{"name":"images","children":[{"name":"iam-auth.png","path":"examples/vault-iam-auth/images
/iam-auth.png","sha":"095dcd0060f6cd1f5dad3be9d5ec83dcbba8316f"}]},{"name":"main.tf","path":"examples/vault-iam-auth/main.tf","sha":"6e1034d29495a9b8895e79f5cf716689782a51cc"},{"name":"outputs.tf","path":"examples/vault-iam-auth/outputs.tf","sha":"16bb9676e7fa2ec2bb5148c5ca5763d7c01db837"},{"name":"user-data-auth-client.sh","path":"examples/vault-iam-auth/user-data-auth-client.sh","sha":"4122511229818b6ddf8fe03fd2c314f8a1521ee2"},{"name":"user-data-consul.sh","path":"examples/vault-iam-auth/user-data-consul.sh","sha":"0c96497e38b05e5b5a54277c95ae129827a3daa2"},{"name":"user-data-vault.sh","path":"examples/vault-iam-auth/user-data-vault.sh","sha":"1f32c36dc968467fc59b44f624638e1437703fb9"},{"name":"variables.tf","path":"examples/vault-iam-auth/variables.tf","sha":"9abf58af8a0dc24bd445a1b779f07fcf48a05a0e"}]},{"name":"vault-s3-backend","children":[{"name":"README.md","path":"examples/vault-s3-backend/README.md","sha":"e37fbaec6982c87a87a16d3499db3c17f85dbbfd"},{"name":"main.tf","path":"examples/vault-s3-backend/main.tf","sha":"64617b4235bca44d381e7007a29d39a02e0edd03"},{"name":"outputs.tf","path":"examples/vault-s3-backend/outputs.tf","sha":"e1af7046390871d4e63797089c39aebab5d9ac26"},{"name":"user-data-consul.sh","path":"examples/vault-s3-backend/user-data-consul.sh","sha":"5043e6904cab4564ed0c7f8337599a884f96a194"},{"name":"user-data-vault.sh","path":"examples/vault-s3-backend/user-data-vault.sh","sha":"cfc21ee0525b0cee2753e1823b8656bf504a910a"},{"name":"variables.tf","path":"examples/vault-s3-backend/variables.tf","sha":"f526eaaa0c65aa5f8be3d4dbde0dd453781d4461"}]}]},{"name":"main.tf","path":"main.tf","sha":"3e2db19f150bfb9ae8b8d1b33ce9e20d3b076dde"},{"name":"modules","children":[{"name":"install-vault","children":[{"name":"README.md","path":"modules/install-vault/README.md","sha":"6bb7538adb7dd8f8527690d96fc06d701cd79462"},{"name":"install-vault","path":"modules/install-vault/install-vault","sha":"e1564049029f50af3507fb2e57dc188c607cb1aa"}]},{"name":"private-tls-cert","children":[{"name":"README.md","path":"modules/private-tls-cert/README.md","sha":"42f2d131477fae97cdfaeef893b3c916f2f7f209"},{"name":"main.tf","path":"modules/private-tls-cert/main.tf","sha":"f906b61efe2b5356bcf759dc60c47a89cf853894"},{"name":"outputs.tf","path":"modules/private-tls-cert/outputs.tf","sha":"078afd869917866e91d2beab7f91fa0d14af524e"},{"name":"variables.tf","path":"modules/private-tls-cert/variables.tf","sha":"57720d8462ddd0a472082d76f1605ea32c443612"}]},{"name":"run-vault","children":[{"name":"README.md","path":"modules/run-vault/README.md","sha":"b2f1e1e074ffd65b4c715675bd59657c6eac6992"},{"name":"run-vault","path":"modules/run-vault/run-vault","sha":"192feb7aa74fde7c93df0e091352780adfeb46c4"}]},{"name":"update-certificate-store","children":[{"name":"README.md","path":"modules/update-certificate-store/README.md","sha":"1348a7aba71475b5a17d31f3f8d66663f656e672"},{"name":"update-certificate-store","path":"modules/update-certificate-store/update-certificate-store","sha":"e07d9a1d997843d62033ee019121895c91e29447"}]},{"name":"vault-cluster","children":[{"name":"README.md","path":"modules/vault-cluster/README.md","sha":"7b4c4ee5f59dc3a216154c4402acd70b96d6585f","toggled":true},{"name":"main.tf","path":"modules/vault-cluster/main.tf","sha":"d8e6b486f28dc2fc35591d7389e6b2ad4d4bf4df"},{"name":"outputs.tf","path":"modules/vault-cluster/outputs.tf","sha":"ab03f0accf81c6722c79656844acd1fd39b41e87"},{"name":"variables.tf","path":"modules/vault-cluster/variables.tf","sha":"4067580ffe82b3c9aaf558887c413ba2992e9394"}],"toggl
ed":true},{"name":"vault-elb","children":[{"name":"README.md","path":"modules/vault-elb/README.md","sha":"9dc6564baaaaa8176f650e3c548b8c8066631b6f"},{"name":"main.tf","path":"modules/vault-elb/main.tf","sha":"0f85aea4f41332461dadcda41e767f983d53ad66"},{"name":"outputs.tf","path":"modules/vault-elb/outputs.tf","sha":"024b1c73b457ed1c9256b39fc3ee283b39ed6544"},{"name":"variables.tf","path":"modules/vault-elb/variables.tf","sha":"f6ec2cedeb90b046d4caf020482f0169f872f17d"}]},{"name":"vault-security-group-rules","children":[{"name":"README.md","path":"modules/vault-security-group-rules/README.md","sha":"48df12587b14b7a0d93333b6c12c19dc7082d8b0"},{"name":"main.tf","path":"modules/vault-security-group-rules/main.tf","sha":"c42c6e6d296dd17c021b134bb2f4c5774cf0079c"},{"name":"variables.tf","path":"modules/vault-security-group-rules/variables.tf","sha":"2e18f3fef1b2ff2b3a32f62a49085480ed61763e"}]}],"toggled":true},{"name":"outputs.tf","path":"outputs.tf","sha":"9d46ba8bb2ee80bf8bb1ba3ac5b7660280be3e1c"},{"name":"test","children":[{"name":"Gopkg.lock","path":"test/Gopkg.lock","sha":"568bc5956806e4aed616ba1416be9f34c6297153"},{"name":"Gopkg.toml","path":"test/Gopkg.toml","sha":"0b963bee63cabb891409e7bc306361206047d368"},{"name":"README.md","path":"test/README.md","sha":"dd3f97e937dd02cdd9142d0c25006bd6367e7fef"},{"name":"aws_helpers.go","path":"test/aws_helpers.go","sha":"f686b13f45c0deafbec5215d251c8936e30de421"},{"name":"terratest_helpers.go","path":"test/terratest_helpers.go","sha":"61cb21eeaa80d5c93a2eb1d61964991b6710a770"},{"name":"tls_helpers.go","path":"test/tls_helpers.go","sha":"9b95b015104a0c7a684f6f3af999407218121619"},{"name":"vault_cluster_auth_test.go","path":"test/vault_cluster_auth_test.go","sha":"6dc38ca9feb145131336742a05305a63716a663d"},{"name":"vault_cluster_autounseal_test.go","path":"test/vault_cluster_autounseal_test.go","sha":"6378645baf5b1882e25cc1a9a6ea33c2a499670a"},{"name":"vault_cluster_enterprise_test.go","path":"test/vault_cluster_enterprise_test.go","sha":"4b2ca281392b651c889ea0a6f9b4c4afb703ddee"},{"name":"vault_cluster_private_test.go","path":"test/vault_cluster_private_test.go","sha":"9b4c9c7e3c58a9b87df4ab34952b9f908f890f1b"},{"name":"vault_cluster_public_test.go","path":"test/vault_cluster_public_test.go","sha":"adeceaf1a85f323c920117c27992048335bd38a8"},{"name":"vault_cluster_s3_backend_test.go","path":"test/vault_cluster_s3_backend_test.go","sha":"cb028cf873c350aeb24bf5b01e9574790cf2fddb"},{"name":"vault_helpers.go","path":"test/vault_helpers.go","sha":"68cf62618b5510e55577780c65b48528c39a2c44"},{"name":"vault_main_test.go","path":"test/vault_main_test.go","sha":"905a37d2df09a4053104f163ddbd8d0d8bbab28d"}]},{"name":"variables.tf","path":"variables.tf","sha":"c1e78c623452213f943f69d3a1fac13b3bc3d3d9"}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"vault-cluster\">Vault Cluster</h1><div class=\"preview__body--border\"></div><p>This folder contains a <a href=\"https://www.terraform.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Terraform</a> module that can be used to deploy a\n<a href=\"https://www.vaultproject.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Vault</a> cluster in <a href=\"https://aws.amazon.com/\" class=\"preview__body--description--blue\" target=\"_blank\">AWS</a> on top of an Auto Scaling Group. 
This\nmodule is designed to deploy an <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Amazon Machine Image (AMI)</a>\nthat had Vault installed via the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/install-vault\" class=\"preview__body--description--blue\">install-vault</a> module in this Module.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-use-this-module\">How do you use this module?</h2>\n<p>This folder defines a <a href=\"https://www.terraform.io/docs/modules/usage.html\" class=\"preview__body--description--blue\" target=\"_blank\">Terraform module</a>, which you can use in your\ncode by adding a <code>module</code> configuration and setting its <code>source</code> parameter to URL of this folder:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"vault_cluster\"</span> {\n <span class=\"hljs-comment\"># Use version v0.0.1 of the vault-cluster module</span>\n source = <span class=\"hljs-string\">\"github.com/hashicorp/terraform-aws-vault//modules/vault-cluster?ref=v0.0.1\"</span>\n\n <span class=\"hljs-comment\"># Specify the ID of the Vault AMI. You should build this using the scripts in the install-vault module.</span>\n ami_id = <span class=\"hljs-string\">\"ami-abcd1234\"</span>\n\n <span class=\"hljs-comment\"># Configure and start Vault during boot.</span>\n user_data = <<-EOF\n <span class=\"hljs-comment\">#!/bin/bash</span>\n /opt/vault/bin/run-vault --tls-cert-file /opt/vault/tls/vault.crt.pem --tls-key-file /opt/vault/tls/vault.key.pem\n EOF\n\n <span class=\"hljs-comment\"># Add tag to each node in the cluster with value set to var.cluster_name</span>\n cluster_tag_key = <span class=\"hljs-string\">\"Name\"</span>\n\n <span class=\"hljs-comment\"># Optionally add extra tags to each node in the cluster</span>\n cluster_extra_tags = [\n {\n key = <span class=\"hljs-string\">\"Environment\"</span>\n value = <span class=\"hljs-string\">\"Dev\"</span>\n propagate_at_launch = true\n },\n {\n key = <span class=\"hljs-string\">\"Department\"</span>\n value = <span class=\"hljs-string\">\"Ops\"</span>\n propagate_at_launch = true\n }\n ]\n\n <span class=\"hljs-comment\"># ... See variables.tf for the other parameters you must define for the vault-cluster module</span>\n}\n</pre>\n<p>Note the following parameters:</p>\n<ul>\n<li>\n<p><code>source</code>: Use this parameter to specify the URL of the vault-cluster module. The double slash (<code>//</code>) is intentional\nand required. Terraform uses it to specify subfolders within a Git repo (see <a href=\"https://www.terraform.io/docs/modules/sources.html\" class=\"preview__body--description--blue\" target=\"_blank\">module\nsources</a>). The <code>ref</code> parameter specifies a specific Git tag in\nthis repo. That way, instead of using the latest version of this module from the <code>master</code> branch, which\nwill change every time you run Terraform, you're using a fixed version of the repo.</p>\n</li>\n<li>\n<p><code>ami_id</code>: Use this parameter to specify the ID of a Vault <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Amazon Machine Image\n(AMI)</a> to deploy on each server in the cluster. 
You\nshould install Vault in this AMI using the scripts in the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/install-vault\" class=\"preview__body--description--blue\">install-vault</a> module.</p>\n</li>\n<li>\n<p><code>user_data</code>: Use this parameter to specify a <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts\" class=\"preview__body--description--blue\" target=\"_blank\">User\nData</a> script that each\nserver will run during boot. This is where you can use the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/run-vault\" class=\"preview__body--description--blue\">run-vault script</a> to configure and\nrun Vault. The <code>run-vault</code> script is one of the scripts installed by the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/install-vault\" class=\"preview__body--description--blue\">install-vault</a>\nmodule.</p>\n</li>\n</ul>\n<p>You can find the other parameters in <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/vault-cluster/variables.tf\" class=\"preview__body--description--blue\">variables.tf</a>.</p>\n<p>Check out the <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/root-example\" class=\"preview__body--description--blue\">root example</a> and\n<a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-cluster-private\" class=\"preview__body--description--blue\">vault-cluster-private</a> examples for working sample code.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-use-the-vault-cluster\">How do you use the Vault cluster?</h2>\n<p>To use the Vault cluster, you will typically need to SSH to each of the Vault servers. If you deployed the\n<a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-cluster-private\" class=\"preview__body--description--blue\">vault-cluster-private</a> or <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/root-example\" class=\"preview__body--description--blue\">the root example</a>\nexamples, the <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-examples-helper/vault-examples-helper.sh\" class=\"preview__body--description--blue\">vault-examples-helper.sh script</a> will do the\ntag lookup for you automatically (note, you must have the <a href=\"https://aws.amazon.com/cli/\" class=\"preview__body--description--blue\" target=\"_blank\">AWS CLI</a> and\n<a href=\"https://stedolan.github.io/jq/\" class=\"preview__body--description--blue\" target=\"_blank\">jq</a> installed locally):</p>\n<pre>> ../vault-examples-helper/vault-examples-helper.sh\n\nYour Vault servers are running at the following IP addresses:\n\n<span class=\"hljs-number\">11.22.33.44</span>\n<span class=\"hljs-number\">11.22.33.55</span>\n<span class=\"hljs-number\">11.22.33.66</span>\n</pre>\n<h3 class=\"preview__body--subtitle\" id=\"initializing-the-vault-cluster\">Initializing the Vault cluster</h3>\n<p>The very first time you deploy a new Vault cluster, you need to <a href=\"https://www.vaultproject.io/intro/getting-started/deploy.html#initializing-the-vault\" class=\"preview__body--description--blue\" target=\"_blank\">initialize the\nVault</a>. 
The easiest way to do\nthis is to SSH to one of the servers that has Vault installed and run:</p>\n<pre>vault operator init\n\nKey <span class=\"hljs-number\">1</span>: <span class=\"hljs-number\">427</span>cd2c310be3b84fe69372e683a790e01\nKey <span class=\"hljs-number\">2</span>: <span class=\"hljs-number\">0e2</span>b8f3555b42a232f7ace6fe0e68eaf02\nKey <span class=\"hljs-number\">3</span>: <span class=\"hljs-number\">37837e5559</span>b322d0585a6e411614695403\nKey <span class=\"hljs-number\">4</span>: <span class=\"hljs-number\">8</span>dd72fd7d1af254de5f82d1270fd87ab04\nKey <span class=\"hljs-number\">5</span>: b47fdeb7dda82dbe92d88d3c860f605005\nInitial Root Token: eaf5cc32-b48f<span class=\"hljs-number\">-7785</span><span class=\"hljs-number\">-5</span>c94<span class=\"hljs-number\">-90</span>b5ce300e9b\n\nVault initialized with <span class=\"hljs-number\">5</span> keys <span class=\"hljs-keyword\">and</span> a key threshold of <span class=\"hljs-number\">3</span>!\n</pre>\n<p>Vault will print out the <a href=\"https://www.vaultproject.io/docs/concepts/seal.html\" class=\"preview__body--description--blue\" target=\"_blank\">unseal keys</a> and a <a href=\"https://www.vaultproject.io/docs/concepts/tokens.html#root-tokens\" class=\"preview__body--description--blue\" target=\"_blank\">root\ntoken</a>. This is the <strong>only time ever</strong> that all of\nthis data is known by Vault, so you <strong>MUST</strong> save it in a secure place immediately! Also, this is the only time that\nthe unseal keys should ever be so close together. You should distribute each one to a different, trusted administrator\nfor safe keeping in completely separate secret stores and NEVER store them all in the same place.</p>\n<p>In fact, a better option is to initialize Vault with <a href=\"https://www.vaultproject.io/docs/concepts/pgp-gpg-keybase.html\" class=\"preview__body--description--blue\" target=\"_blank\">PGP, GPG, or\nKeybase</a> so that each unseal key is encrypted with a\ndifferent user's public key. 
That way, no one, not even the operator running the <code>init</code> command can see all the keys\nin one place:</p>\n<pre>vault operator init <span class=\"hljs-attribute\">-pgp-keys</span>=<span class=\"hljs-string\">\"keybase:jefferai,keybase:vishalnayak,keybase:sethvargo\"</span>\n\nKey 1: wcBMA37rwGt6FS1VAQgAk1q8XQh6yc<span class=\"hljs-built_in\">..</span>.\nKey 2: wcBMA0wwnMXgRzYYAQgAavqbTCxZGD<span class=\"hljs-built_in\">..</span>.\nKey 3: wcFMA2DjqDb4YhTAARAAeTFyYxPmUd<span class=\"hljs-built_in\">..</span>.\n<span class=\"hljs-built_in\">..</span>.\n</pre>\n<p>See <a href=\"https://www.vaultproject.io/docs/concepts/pgp-gpg-keybase.html\" class=\"preview__body--description--blue\" target=\"_blank\">Using PGP, GPG, and Keybase</a> for more info.</p>\n<h3 class=\"preview__body--subtitle\" id=\"unsealing-the-vault-cluster\">Unsealing the Vault cluster</h3>\n<p>Now that you have the unseal keys, you can <a href=\"https://www.vaultproject.io/docs/concepts/seal.html\" class=\"preview__body--description--blue\" target=\"_blank\">unseal Vault</a> by\nhaving 3 out of the 5 administrators (or whatever your key shard threshold is) do the following:</p>\n<ol>\n<li>SSH to a Vault server.</li>\n<li>Run <code>vault operator unseal</code>.</li>\n<li>Enter the unseal key when prompted.</li>\n<li>Repeat for each of the other Vault servers.</li>\n</ol>\n<p>Once this process is complete, all the Vault servers will be unsealed and you will be able to start reading and writing\nsecrets.</p>\n<h3 class=\"preview__body--subtitle\" id=\"setting-up-a-secrets-engine\">Setting up a secrets engine</h3>\n<p>In previous versions of Vault (< 1.1.0), a key-value secrets engine was automatically mounted at the path <code>secret/</code>. This\nmodule. The examples in this module use versions >= 1.1.0 and thus mount a key-value secrets engine at <code>secret/</code> explicitly.</p>\n<pre>vault secrets <span class=\"hljs-builtin-name\">enable</span> <span class=\"hljs-attribute\">-version</span>=1 <span class=\"hljs-attribute\">-path</span>=secret kv\n</pre>\n<h3 class=\"preview__body--subtitle\" id=\"connecting-to-the-vault-cluster-to-read-and-write-secrets\">Connecting to the Vault cluster to read and write secrets</h3>\n<p>There are three ways to connect to Vault:</p>\n<ol>\n<li><a href=\"#access-vault-from-a-vault-server\" class=\"preview__body--description--blue\">Access Vault from a Vault server</a></li>\n<li><a href=\"#access-vault-from-other-servers-in-the-same-aws-account\" class=\"preview__body--description--blue\">Access Vault from other servers in the same AWS account</a></li>\n<li><a href=\"#access-vault-from-the-public-internet\" class=\"preview__body--description--blue\">Access Vault from the public Internet</a></li>\n</ol>\n<h4 id=\"access-vault-from-a-vault-server\">Access Vault from a Vault server</h4>\n<p>When you SSH to a Vault server, the Vault client is already configured to talk to the Vault server on localhost, so\nyou can directly run Vault commands:</p>\n<pre>vault <span class=\"hljs-keyword\">read</span> secret/foo\n\nKey <span class=\"hljs-keyword\">Value</span>\n<span class=\"hljs-comment\">--- -----</span>\nrefresh_interval <span class=\"hljs-number\">768</span>h0m0s\n<span class=\"hljs-keyword\">value</span> bar\n</pre>\n<h4 id=\"access-vault-from-other-servers-in-the-same-aws-account\">Access Vault from other servers in the same AWS account</h4>\n<p>To access Vault from a different server in the same account, you need to specify the URL of the Vault cluster. 
You\ncould manually look up the Vault cluster's IP address, but since this module uses Consul not only as a <a href=\"https://www.vaultproject.io/docs/configuration/storage/consul.html\" class=\"preview__body--description--blue\" target=\"_blank\">storage\nbackend</a> but also as a way to register <a href=\"https://www.consul.io/docs/guides/forwarding.html\" class=\"preview__body--description--blue\" target=\"_blank\">DNS\nentries</a>, you can access Vault\nusing a nice domain name instead, such as <code>vault.service.consul</code>.</p>\n<p>To set this up, use the <a href=\"/repos/terraform-aws-consul/modules/install-dnsmasq\" class=\"preview__body--description--blue\">install-dnsmasq\nmodule</a> on each server that\nneeds to access Vault or <a href=\"/repos/terraform-aws-consul/modules/setup-systemd-resolved\" class=\"preview__body--description--blue\">setup-systemd-resolved</a> if using Ubuntu 18.04. This allows you to access Vault from your EC2 Instances as follows:</p>\n<pre>vault -address=<span class=\"hljs-keyword\">https</span>://vault.service.consul:<span class=\"hljs-number\">8200</span> <span class=\"hljs-built_in\">read</span> secret/foo\n\nKey Value\n<span class=\"hljs-comment\">--- -----</span>\nrefresh_interval <span class=\"hljs-number\">768</span>h0m0s\n<span class=\"hljs-built_in\">value</span> bar\n</pre>\n<p>You can configure the Vault address as an environment variable:</p>\n<pre><span class=\"hljs-builtin-name\">export</span> <span class=\"hljs-attribute\">VAULT_ADDR</span>=https://vault.service.consul:8200\n</pre>\n<p>That way, you don't have to remember to pass the Vault address every time:</p>\n<pre>vault <span class=\"hljs-keyword\">read</span> secret/foo\n\nKey <span class=\"hljs-keyword\">Value</span>\n<span class=\"hljs-comment\">--- -----</span>\nrefresh_interval <span class=\"hljs-number\">768</span>h0m0s\n<span class=\"hljs-keyword\">value</span> bar\n</pre>\n<p>Note that if you're using a self-signed TLS cert (e.g. generated from the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/private-tls-cert\" class=\"preview__body--description--blue\">private-tls-cert\nmodule</a>), you'll need to have the public key of the CA that signed that cert or you'll get\nan "x509: certificate signed by unknown authority" error. You could pass the certificate manually:</p>\n<pre>vault <span class=\"hljs-keyword\">read</span> -ca-cert=/opt/vault/tls/ca.crt.pem secret/foo\n\nKey <span class=\"hljs-keyword\">Value</span>\n<span class=\"hljs-comment\">--- -----</span>\nrefresh_interval <span class=\"hljs-number\">768</span>h0m0s\n<span class=\"hljs-keyword\">value</span> bar\n</pre>\n<p>However, to avoid having to add the <code>-ca-cert</code> argument to every single call, you can use the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/update-certificate-store\" class=\"preview__body--description--blue\">update-certificate-store\nmodule</a> to configure the server to trust the CA.</p>\n<p>Check out the <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-cluster-private\" class=\"preview__body--description--blue\">vault-cluster-private example</a> for working sample code.</p>\n<h4 id=\"access-vault-from-the-public-internet\">Access Vault from the public Internet</h4>\n<p>We <strong>strongly</strong> recommend only running Vault in private subnets. That means it is not directly accessible from the\npublic Internet, which reduces your surface area to attackers. 
If you need users to be able to access Vault from\noutside of AWS, we recommend using VPN to connect to AWS.</p>\n<p>If VPN is not an option, and Vault must be accessible from the public Internet, you can use the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/vault-elb\" class=\"preview__body--description--blue\">vault-elb\nmodule</a> to deploy an <a href=\"https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/\" class=\"preview__body--description--blue\" target=\"_blank\">Elastic Load Balancer\n(ELB)</a> in your public subnets, and have all your users\naccess Vault via this ELB:</p>\n<pre>vault -address=http<span class=\"hljs-variable\">s:</span>//<span class=\"hljs-symbol\"><ELB_DNS_NAME></span> <span class=\"hljs-keyword\">read</span> secret/foo\n</pre>\n<p>Where <code>ELB_DNS_NAME</code> is the DNS name for your ELB, such as <code>vault.example.com</code>. You can configure the Vault address as\nan environment variable:</p>\n<pre><span class=\"hljs-builtin-name\">export</span> <span class=\"hljs-attribute\">VAULT_ADDR</span>=https://vault.example.com\n</pre>\n<p>That way, you don't have to remember to pass the Vault address every time:</p>\n<pre>vault <span class=\"hljs-built_in\">read</span> secret/foo\n</pre>\n<h2 class=\"preview__body--subtitle\" id=\"whats-included-in-this-module\">What's included in this module?</h2>\n<p>This module creates the following architecture:</p>\n<p><img src=\"/repos/images/v0.15.1/terraform-aws-vault/_docs/architecture.png?raw=true\" alt=\"Vault architecture\" class=\"preview__body--diagram\"></p>\n<p>This architecture consists of the following resources:</p>\n<ul>\n<li><a href=\"#auto-scaling-group\" class=\"preview__body--description--blue\">Auto Scaling Group</a></li>\n<li><a href=\"#security-group\" class=\"preview__body--description--blue\">Security Group</a></li>\n<li><a href=\"#iam-role-and-permissions\" class=\"preview__body--description--blue\">IAM Role and Permissions</a></li>\n<li><a href=\"#s3-bucket\" class=\"preview__body--description--blue\">S3 bucket</a> (Optional)</li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"auto-scaling-group\">Auto Scaling Group</h3>\n<p>This module runs Vault on top of an <a href=\"https://aws.amazon.com/autoscaling/\" class=\"preview__body--description--blue\" target=\"_blank\">Auto Scaling Group (ASG)</a>. Typically, you\nshould run the ASG with 3 or 5 EC2 Instances spread across multiple <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html\" class=\"preview__body--description--blue\" target=\"_blank\">Availability\nZones</a>. Each of the EC2\nInstances should be running an AMI that has had Vault installed via the <a href=\"/repos/v0.15.1/terraform-aws-vault/modules/install-vault\" class=\"preview__body--description--blue\">install-vault</a>\nmodule. 
You pass in the ID of the AMI to run using the <code>ami_id</code> input parameter.</p>\n<h3 class=\"preview__body--subtitle\" id=\"security-group\">Security Group</h3>\n<p>Each EC2 Instance in the ASG has a Security Group that allows:</p>\n<ul>\n<li>All outbound requests</li>\n<li>Inbound requests on Vault's API port (default: port 8200)</li>\n<li>Inbound requests on Vault's cluster port for server-to-server communication (default: port 8201)</li>\n<li>Inbound SSH requests (default: port 22)</li>\n</ul>\n<p>The Security Group ID is exported as an output variable if you need to add additional rules.</p>\n<p>Check out the <a href=\"#security\" class=\"preview__body--description--blue\">Security section</a> for more details.</p>\n<h3 class=\"preview__body--subtitle\" id=\"iam-role-and-permissions\">IAM Role and Permissions</h3>\n<p>Each EC2 Instance in the ASG has an <a href=\"http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html\" class=\"preview__body--description--blue\" target=\"_blank\">IAM Role</a> attached.\nThe IAM Role ARN is exported as an output variable so you can add custom permissions.</p>\n<h3 class=\"preview__body--subtitle\" id=\"s-3-bucket-optional\">S3 bucket (Optional)</h3>\n<p>If <code>configure_s3_backend</code> is set to <code>true</code>, this module will create an <a href=\"https://aws.amazon.com/s3/\" class=\"preview__body--description--blue\" target=\"_blank\">S3 bucket</a> that Vault\ncan use as a storage backend. S3 is a good choice for storage because it provides outstanding durability (99.999999999%)\nand availability (99.99%). Unfortunately, S3 cannot be used for Vault High Availability coordination, so this module expects\na separate Consul server cluster to be deployed as a high availability backend.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-roll-out-updates\">How do you roll out updates?</h2>\n<p>Please note that Vault does not support true zero-downtime upgrades, but with proper upgrade procedure the downtime\nshould be very short (a few hundred milliseconds to a second depending on how the speed of access to the storage\nbackend). See the <a href=\"https://www.vaultproject.io/docs/guides/upgrading/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">Vault upgrade guide instructions</a> for\ndetails.</p>\n<p>If you want to deploy a new version of Vault across a cluster deployed with this module, the best way to do that is to:</p>\n<ol>\n<li>Build a new AMI.</li>\n<li>Set the <code>ami_id</code> parameter to the ID of the new AMI.</li>\n<li>Run <code>terraform apply</code>.</li>\n</ol>\n<p>This updates the Launch Configuration of the ASG, so any new Instances in the ASG will have your new AMI, but it does\nNOT actually deploy those new instances. 
To make that happen, you need to:</p>\n<ol>\n<li><a href=\"#replace-the-standby-nodes\" class=\"preview__body--description--blue\">Replace the standby nodes</a></li>\n<li><a href=\"#replace-the-primary-node\" class=\"preview__body--description--blue\">Replace the primary node</a></li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"replace-the-standby-nodes\">Replace the standby nodes</h3>\n<p>For each of the standby nodes:</p>\n<ol>\n<li>SSH to the EC2 Instance where the Vault standby is running.</li>\n<li>Execute <code>sudo systemctl stop vault</code> to have Vault shut down gracefully.</li>\n<li>Terminate the EC2 Instance.</li>\n<li>After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.</li>\n<li>Have each Vault admin SSH to the new EC2 Instance and unseal it.</li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"replace-the-primary-node\">Replace the primary node</h3>\n<p>The procedure for the primary node is the same, but should be done LAST, after all the standbys have already been\nupgraded:</p>\n<ol>\n<li>SSH to the EC2 Instance where the Vault primary is running. This should be the last server that has the old version\nof your AMI.</li>\n<li>Execute <code>sudo systemctl stop vault</code> to have Vault shut down gracefully.</li>\n<li>Terminate the EC2 Instance.</li>\n<li>After a minute or two, the ASG should automatically launch a new Instance, with the new AMI, to replace the old one.</li>\n<li>Have each Vault admin SSH to the new EC2 Instance and unseal it.</li>\n</ol>\n<h2 class=\"preview__body--subtitle\" id=\"what-happens-if-a-node-crashes\">What happens if a node crashes?</h2>\n<p>There are two ways a Vault node may go down:</p>\n<ol>\n<li>The Vault process may crash. In that case, <code>systemd</code> should restart it automatically. At this point, you will\nneed to have each Vault admin SSH to the Instance to unseal it again.</li>\n<li>The EC2 Instance running Vault dies. In that case, the Auto Scaling Group should launch a replacement automatically.\nOnce again, the Vault admins will have to SSH to the replacement Instance and unseal it.</li>\n</ol>\n<p>Given the need for manual intervention, you will want to have alarms set up that go off any time a Vault node gets\nrestarted.</p>\n<h2 class=\"preview__body--subtitle\" id=\"security\">Security</h2>\n<p>Here are some of the main security considerations to keep in mind when using this module:</p>\n<ol>\n<li><a href=\"#encryption-in-transit\" class=\"preview__body--description--blue\">Encryption in transit</a></li>\n<li><a href=\"#encryption-at-rest\" class=\"preview__body--description--blue\">Encryption at rest</a></li>\n<li><a href=\"#dedicated-instances\" class=\"preview__body--description--blue\">Dedicated instances</a></li>\n<li><a href=\"#security-groups\" class=\"preview__body--description--blue\">Security groups</a></li>\n<li><a href=\"#ssh-access\" class=\"preview__body--description--blue\">SSH access</a></li>\n</ol>\n<h3 class=\"preview__body--subtitle\" id=\"encryption-in-transit\">Encryption in transit</h3>\n<p>Vault uses TLS to encrypt its network traffic. 
For instructions on configuring TLS, have a look at the\n<a href=\"/repos/v0.15.1/terraform-aws-vault/modules/run-vault#how-do-you-handle-encryption\" class=\"preview__body--description--blue\">How do you handle encryption documentation</a>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"encryption-at-rest\">Encryption at rest</h3>\n<p>Vault servers keep everything in memory and does not write any data to the local hard disk. To persist data, Vault\nencrypts it, and sends it off to its storage backends, so no matter how the backend stores that data, it is already\nencrypted. By default, this Module uses Consul as a storage backend, so if you want an additional layer of\nprotection, you can check out the <a href=\"https://www.consul.io/docs/agent/encryption.html\" class=\"preview__body--description--blue\" target=\"_blank\">official Consul encryption docs</a>\nand the Consul AWS Module <a href=\"/repos/terraform-aws-consul/modules/run-consul#how-do-you-handle-encryption\" class=\"preview__body--description--blue\">How do you handle encryption\ndocs</a>\nfor more info.</p>\n<p>Note that if you want to enable encryption for the root EBS Volume for your Vault Instances (despite the fact that\nVault itself doesn't write anything to this volume), you need to enable that in your AMI. If you're creating the AMI\nusing Packer (e.g. as shown in the <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-consul-ami\" class=\"preview__body--description--blue\">vault-consul-ami example</a>), you need to set the <a href=\"https://www.packer.io/docs/builders/amazon-ebs.html#encrypt_boot\" class=\"preview__body--description--blue\" target=\"_blank\">encrypt_boot\nparameter</a> to <code>true</code>.</p>\n<h3 class=\"preview__body--subtitle\" id=\"dedicated-instances\">Dedicated instances</h3>\n<p>If you wish to use dedicated instances, you can set the <code>tenancy</code> parameter to <code>"dedicated"</code> in this module.</p>\n<h3 class=\"preview__body--subtitle\" id=\"security-groups\">Security groups</h3>\n<p>This module attaches a security group to each EC2 Instance that allows inbound requests as follows:</p>\n<ul>\n<li>\n<p><strong>Vault</strong>: For the Vault API port (default: 8200), you can use the <code>allowed_inbound_cidr_blocks</code> parameter to control\nthe list of <a href=\"https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing\" class=\"preview__body--description--blue\" target=\"_blank\">CIDR blocks</a> that will be allowed access\nand the <code>allowed_inbound_security_group_ids</code> parameter to control the security groups that will be allowed access.</p>\n</li>\n<li>\n<p><strong>SSH</strong>: For the SSH port (default: 22), you can use the <code>allowed_ssh_cidr_blocks</code> parameter to control the list of<br>\n<a href=\"https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing\" class=\"preview__body--description--blue\" target=\"_blank\">CIDR blocks</a> that will be allowed access. You can use the <code>allowed_ssh_security_group_ids</code> parameter to control the list of source Security Groups that will be allowed access.</p>\n</li>\n</ul>\n<p>Note that all the ports mentioned above are configurable via the <code>xxx_port</code> variables (e.g. <code>api_port</code>). 
See\n<a href=\"/repos/v0.15.1/terraform-aws-vault/modules/vault-cluster/variables.tf\" class=\"preview__body--description--blue\">variables.tf</a> for the full list.</p>\n<h3 class=\"preview__body--subtitle\" id=\"ssh-access\">SSH access</h3>\n<p>You can associate an <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html\" class=\"preview__body--description--blue\" target=\"_blank\">EC2 Key Pair</a> with each\nof the EC2 Instances in this cluster by specifying the Key Pair's name in the <code>ssh_key_name</code> variable. If you don't\nwant to associate a Key Pair with these servers, set <code>ssh_key_name</code> to an empty string.</p>\n<h2 class=\"preview__body--subtitle\" id=\"whats-not-included-in-this-module\">What's NOT included in this module?</h2>\n<p>This module does NOT handle the following items, which you may want to provide on your own:</p>\n<ul>\n<li><a href=\"#consul\" class=\"preview__body--description--blue\">Consul</a></li>\n<li><a href=\"#monitoring-alerting-log-aggregation\" class=\"preview__body--description--blue\">Monitoring, alerting, log aggregation</a></li>\n<li><a href=\"#vpcs-subnets-route-tables\" class=\"preview__body--description--blue\">VPCs, subnets, route tables</a></li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"consul\">Consul</h3>\n<p>This module configures Vault to use Consul as a high availability storage backend. This module assumes you already\nhave Consul servers deployed in a separate cluster. We do not recommend co-locating Vault and Consul servers in the\nsame cluster because:</p>\n<ol>\n<li>Vault is a tool built specifically for security, and running any other software on the same server increases its\nsurface area to attackers.</li>\n<li>This Vault Module uses Consul as a high availability storage backend and both Vault and Consul keep their working\nset in memory. That means for every 1 byte of data in Vault, you'd also have 1 byte of data in Consul, doubling\nyour memory consumption on each server.</li>\n</ol>\n<p>Check out the <a href=\"/repos/terraform-aws-consul\" class=\"preview__body--description--blue\">Consul AWS Module</a> for how to deploy a Consul\nserver cluster in AWS. See the <a href=\"/repos/v0.15.1/terraform-aws-vault/examples/root-example\" class=\"preview__body--description--blue\">root example</a> and\n<a href=\"/repos/v0.15.1/terraform-aws-vault/examples/vault-cluster-private\" class=\"preview__body--description--blue\">vault-cluster-private</a> examples for sample code that shows how to run both a\nVault server cluster and Consul server cluster.</p>\n<h3 class=\"preview__body--subtitle\" id=\"monitoring-alerting-log-aggregation\">Monitoring, alerting, log aggregation</h3>\n<p>This module does not include anything for monitoring, alerting, or log aggregation. All ASGs and EC2 Instances come\nwith limited <a href=\"https://aws.amazon.com/cloudwatch/\" class=\"preview__body--description--blue\" target=\"_blank\">CloudWatch</a> metrics built-in, but beyond that, you will have to\nprovide your own solutions. 
We especially recommend looking into Vault's <a href=\"https://www.vaultproject.io/docs/audit/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">Audit\nbackends</a> for how you can capture detailed logging and audit\ninformation.</p>\n<p>Given that any time Vault crashes, reboots, or restarts, you have to have the Vault admins manually unseal it (see\n<a href=\"#what-happens-if-a_node-crashes\" class=\"preview__body--description--blue\">What happens if a node crashes?</a>), we <strong>strongly</strong> recommend configuring alerts that\nnotify these admins whenever they need to take action!</p>\n<h3 class=\"preview__body--subtitle\" id=\"vp-cs-subnets-route-tables\">VPCs, subnets, route tables</h3>\n<p>This module assumes you've already created your network topology (VPC, subnets, route tables, etc). You will need to\npass in the the relevant info about your network topology (e.g. <code>vpc_id</code>, <code>subnet_ids</code>) as input variables to this\nmodule.</p>\n","repoName":"terraform-aws-vault","repoRef":"v0.13.4","serviceDescriptor":{"serviceName":"HashiCorp Vault","serviceRepoName":"terraform-aws-vault","serviceRepoOrg":"hashicorp","cloudProviders":["aws"],"description":"Deploy a Vault cluster. Supports automatic bootstrapping, Consul and S3 backends, self-signed TLS certificates, and auto healing.","imageUrl":"vault.png","licenseType":"open-source","technologies":["Terraform","Bash"],"compliance":[],"tags":[""]},"serviceCategoryName":"Secrets management","fileName":"README.md","filePath":"/modules/vault-cluster","title":"Repo Browser: HashiCorp Vault","description":"Browse the repos in the Gruntwork Infrastructure as Code Library."}