This example:

- Has two dedicated worker node groups, managed by separate ASGs, for different functions:
  - Core nodes: worker nodes intended to run core services, such as kiam. These services require additional permissions to function, and therefore require a more locked-down EC2 instance configuration.
  - Application nodes: worker nodes intended to run application services.
- Tags the nodes with labels inherited from the EC2 instance tags, using the map-ec2-tags-to-node-labels script in the eks-scripts module.
- Deploys Tiller (the Helm server) with TLS enabled in the kube-system namespace. This Tiller deployment is intended to be used to deploy the core admin services, which typically need to run in the kube-system namespace.
Prerequisites
This example depends on Terraform, Packer, kubergrunt, and helm. You can also optionally install kubectl if
you would like to explore the newly provisioned cluster. You can find instructions on how to install each tool in the respective project's documentation.
Finally, before you begin, be sure to set up your AWS credentials as environment variables so that all the commands
below can authenticate to the AWS account where you wish to deploy this example. You can refer to our blog post series
on AWS authentication (A Comprehensive Guide to Authenticating to AWS on the Command
Line) for
more information.
Once the cluster is deployed, take a look at Where to go from here for ideas on what to do
next.
Create a New AMI with the Helper Scripts Installed
This example depends on the map-ec2-tags-to-node-labels script to assist with mapping the EC2 instance
tags into Node Labels. We will use Packer to build a customized AMI, based on the EKS-optimized AMI, that includes the script.
To build the AMI, you need to provide Packer with the build template and the required variables. Since we will be installing a
Gruntwork module, we will need to set up GitHub access. This can be done by defining the GITHUB_OAUTH_TOKEN environment
variable with a personal access token. See https://github.com/gruntwork-io/gruntwork-installer#authentication for more
information on how to set this up.
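For example, assuming you have already created a personal access token, you might export it like so (the token value below is a placeholder):

```shell
# Substitute your own personal access token for the placeholder
export GITHUB_OAUTH_TOKEN="<your-personal-access-token>"
```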
Once the environment variable is set, you can run packer build to build the AMI:
```
packer build packer/build.json
```
This will spin up an EC2 instance, run the shell scripts to provision the machine, burn a new AMI, spin down the
instance, and then output the newly built AMI.
Note: By default, the provided Packer template builds the AMI in the us-east-1 region. If you would like to
build in a different region, pass in -var "region=us-east-2" (for example) to override the default.
Apply Terraform Templates
Once the AMI is built, we are ready to use it to deploy our EKS cluster. Unlike the other examples in this repo, this
example breaks up the code into multiple submodules. Refer to Why are there multiple Terraform submodules in this
example? for more information on why the example is
structured this way.
To deploy our cluster, we will apply the templates in the following order:
The code for deploying an EKS cluster with its worker groups is defined in the eks-cluster submodule.
This Terraform example, when applied, will deploy a VPC, launch an EKS control plane in there, and then provision two
worker groups to run workloads. The two groups provided by the example are:
- A core worker group dedicated to running supporting services, like kiam, that may require more lockdown.
  The nodes in this group are tainted with NoSchedule so that Pods are not scheduled there by default.
- An application worker group dedicated to running application services.
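As a sketch of how a core service would opt into the tainted core group, a Pod spec needs a matching toleration. The taint key and value shown here are hypothetical; check the eks-cluster submodule for the actual taint applied to the core nodes:

```yaml
# Hypothetical Pod spec fragment tolerating the core group's NoSchedule taint
tolerations:
  - key: "dedicated"      # assumed taint key; verify against the worker group config
    operator: "Equal"
    value: "core"         # assumed taint value
    effect: "NoSchedule"
```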
To deploy the example, we need to first define the required variables. To do so, create a new file in
the example directory called terraform.tfvars:
```
touch ./eks-cluster/terraform.tfvars
```
Then, create a new entry for each required variable (and any optional variables you would like to override). See the
variables.tf file for a list of available variables.
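As an illustration, a terraform.tfvars file might look like the following; the variable names below are assumptions, so confirm them against variables.tf in the eks-cluster submodule:

```hcl
# Illustrative only -- consult variables.tf for the real variable names
aws_region       = "us-east-2"
eks_cluster_name = "eks-supporting-services-example"
```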
NOTE: If you attempt to deploy into the us-east-1 region, note that the availability zone us-east-1e does not
support EKS. To work around this, use the availability_zone_whitelist variable to control which zones are used to deploy EKS by
adding the following to the tfvars file:

availability_zone_whitelist = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1f"]
Once the variables are filled out, we are ready to apply the templates to provision our cluster. To do this, we need to
run terraform init followed by terraform apply:
```
cd eks-cluster
terraform init
terraform apply
cd .. # go back to the eks-cluster-with-supporting-services example folder
```
At the end of this, you will have an EKS cluster with two ASG-managed worker node pools. We will use kubectl to verify this.
In order to use kubectl, we first need to set it up so that it can authenticate with our new EKS cluster. You can
learn more about how authentication works with EKS in our guide How do I authenticate kubectl to the EKS
cluster?. For now, you can use the kubergrunt eks configure command.
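A typical invocation looks like the following. The --eks-cluster-arn flag takes the ARN of the cluster you just created; the Terraform output name used here is an assumption, so check outputs.tf for the real name:

```shell
# The output name eks_cluster_arn is an assumption; verify it in outputs.tf
kubergrunt eks configure \
  --eks-cluster-arn "$(cd eks-cluster && terraform output eks_cluster_arn)"
```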
At the end of this command, your default kubeconfig file (located at ~/.kube/config) will have a new context that
authenticates with EKS. This context will be set as the default so that subsequent kubectl calls target your
deployed EKS cluster.
You can now use kubectl to verify the two worker groups. Run kubectl get nodes and kubectl describe nodes to see
the labels associated with the nodes and verify that the two groups carry distinct labels.
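For example, to dump every node with its full label set in a single listing:

```shell
# Prints all worker nodes along with their labels
kubectl get nodes --show-labels
```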
Next, run terraform output to see information about the deployed cluster. Record the entries for vpc_id,
eks_app_worker_iam_role_name, and eks_core_worker_iam_role_name, as we will be using those in the next step.
Deploy Core Services
Once our EKS cluster is deployed, we can deploy core services onto it. The code for core services is defined in the
core-services submodule. This Terraform example, when applied, will deploy Tiller (the Helm server)
onto the cluster to manage services deployed into the kube-system namespace. It will then use the deployed Tiller to
launch additional services:
fluentd-cloudwatch: Used to ship Kubernetes logs (including container logs) to CloudWatch.
aws-alb-ingress-controller: Used to map Ingress resources into AWS ALBs.
Like the previous step, create a terraform.tfvars file and fill it in.
The eks_vpc_id, eks_application_worker_iam_role_name, and eks_core_worker_iam_role_name input variables should be
the outputs that you recorded in the previous step.
NOTE: The deployed Tiller version must exactly match the version of the helm client you have installed in order for
the client to work. You can use the tiller_version variable to control which version of Tiller is installed. Run
helm version -c to see your client version, and set the tiller_version variable to that SemVer value (note: the v in
the version string is required and should be lowercase).
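For example, if helm version -c reported v2.14.0 (a hypothetical version used here for illustration), you would pin Tiller to the same version in terraform.tfvars:

```hcl
# Must match the helm client version exactly, lowercase "v" included
tiller_version = "v2.14.0"
```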
Once the tfvars file is created, you can init and apply the templates:
```
touch ./core-services/terraform.tfvars
# Fill in the tfvars file with the variable inputs
cd core-services
terraform init
terraform apply
cd .. # go back to the eks-cluster-with-supporting-services example folder
```
At the end of this, the cluster will be deployed with a working Tiller install. To verify the Tiller deployment, first
set up helm to authenticate to the deployed Tiller instance using the installed environment file:
```
HELM_HOME="<FILL THIS IN>"
source "$HELM_HOME/env"
```
NOTE: If you did not pass in a custom helm home directory to Terraform (the variable helm_home), HELM_HOME will
be ~/.helm.
Once the environment variables are set up, you can verify your connection using helm version and helm ls.
Additionally, the cluster should be shipping logs to CloudWatch. You can load the CloudWatch logs in the UI by
navigating to CloudWatch in the AWS console and looking for the log group with the same name as the EKS cluster.
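Alternatively, you can check for the log group from the command line, assuming you have the AWS CLI installed and that the log group is named after the cluster:

```shell
# Look up the log group whose name matches the EKS cluster name
aws logs describe-log-groups --log-group-name-prefix "<your-eks-cluster-name>"
```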
(Optional) Deploy Nginx Service
The nginx-service submodule shows an example of how to use Helm to deploy an application onto your
EKS cluster.
Once the tfvars file is created, you can init and apply the templates:
```
touch ./nginx-service/terraform.tfvars
# Fill in the tfvars file with the variable inputs
cd nginx-service
terraform init
terraform apply
cd .. # go back to the eks-cluster-with-supporting-services example folder
```
Once Terraform finishes, the nginx service should be available. To get the endpoint, you can query Kubernetes for the
Ingress information using kubectl:
```
# Prerequisite: set up environment variables to authenticate to AWS
# Use kubectl to get the Ingress endpoint
kubectl \
  get ingresses \
  --namespace kube-system \
  --selector app.kubernetes.io/instance=nginx-test,app.kubernetes.io/name=nginx \
  -o jsonpath \
  --template '{.items[0].status.loadBalancer.ingress[0].hostname}'
```
This will output the ALB endpoint to the console. When you hit the endpoint, you should see the welcome page
for nginx. If the service isn't available or you don't get the endpoint, wait a few minutes to give Kubernetes a chance
to provision the ALB.
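To confirm from the command line, you can capture the endpoint and request the nginx welcome page:

```shell
# Fetch the ALB hostname from the Ingress and request the nginx landing page
ENDPOINT="$(kubectl get ingresses --namespace kube-system \
  --selector app.kubernetes.io/instance=nginx-test,app.kubernetes.io/name=nginx \
  -o jsonpath --template '{.items[0].status.loadBalancer.ingress[0].hostname}')"
curl "http://$ENDPOINT"
```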
Where to go from here
Now that you have your cluster, you can do a few things to explore the cluster:
If you set up an SSH key with the eks_worker_keypair_name variable, try SSH-ing into the nodes to see the running
processes.
Why are there multiple Terraform submodules in this example?
Breaking up your code not only improves readability, but also helps with maintainability by keeping the surface area
of each module small. Typically you want to structure your code so that resources that are deployed frequently are
isolated from critical resources whose modification might bring down the cluster. For example, you might upgrade Helm
regularly, but you might not touch your VPC once it is set up. Therefore, you do not want to manage your Helm resources
together with the VPC, as every time you updated Helm you would be putting the cluster at risk.
Additionally, breaking up the code into modules helps introduce dependency ordering. Terraform is notorious for having
bugs and subtle issues that make it difficult to reliably introduce a dependency chain, especially when you have
modules. For example, you can't easily define dependencies such that the resources in a module depend on other resources
or modules external to the module (see https://github.com/hashicorp/terraform/issues/1178).
The dependency logic between launching the EKS cluster, setting up Helm, and deploying services using Helm is tricky to
encode in Terraform such that it works reliably. While it is fairly easy to get the resources to deploy in an order
that makes them available, it is tricky to destroy them in the right order. For example, it is fine to deploy Helm in
parallel with the worker nodes, such that the worker nodes come up while the helm deployment script is executing. The
same cannot be done for destruction, because the script needs to communicate with the Helm server while destroying it.
In summary, this example breaks up the Terraform code as an example of how one might modularize their EKS cluster code,
in addition to making the dependency management more explicit.
Troubleshooting
When destroying core-services, I get the following error:

```
* helm_release.fluentd-cloudwatch: rpc error: code = Unknown desc = secrets is forbidden: User "system:serviceaccount:kube-system:cluster-admin-tiller-account" cannot list secrets in the namespace "kube-system"
```
There is an issue in the destroy order where many of the resources that the helm release depends on are deleted in
parallel with the release, which causes helm to fail on delete. To fix the issue, first run terraform apply again so
that all the permissions are restored, and then run a targeted destroy of the helm releases (e.g.,
terraform destroy -target=helm_release.fluentd-cloudwatch) so they are removed first. Once that completes, it is safe
to run terraform destroy again.
When destroying eks-cluster, I get an error with destroying VPC related resources.
EKS relies on the amazon-vpc-cni-k8s plugin to allocate IP addresses to
the pods in the Kubernetes cluster. This plugin works by allocating secondary ENI devices to the underlying worker
instances. Depending on timing, this plugin can interfere with destroying the cluster in this example. Specifically,
Terraform may shut down the instances before the VPC CNI pod has had a chance to remove the ENI devices. These devices
are managed outside of Terraform, so if they linger, they can interfere with destroying the VPC.

To work around this limitation, go into the AWS console and delete the ENIs associated with the VPC. Then, retry the
destroy call.
Questions? Ask away.
We're here to talk about our services, answer any questions, give advice, or just to chat.
ables.tf","sha":"8f6ef907c965091277e215b5d003d3a365f952ed"}]},{"name":"eks-k8s-role-mapping","children":[{"name":"README.md","path":"modules/eks-k8s-role-mapping/README.md","sha":"b90014a5cb1917eef8cb1cd0c234d1b7240185d1"},{"name":"aws_auth_configmap_generator","children":[{"name":"aws_auth_configmap_generator","children":[{"name":"__init__.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/__init__.py","sha":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"},{"name":"generator.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/generator.py","sha":"4057d70cebc26cb56e95d861618eda4629e41b19"},{"name":"global_vars.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/global_vars.py","sha":"31c2b91932d79d37e284bdf708e506faf0a59649"},{"name":"main.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/main.py","sha":"e69d8517efe23c680e9e67dc48dbd0478723b88f"},{"name":"utils.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/utils.py","sha":"0874f15d63301e4f32cb0517817a515fb18f113e"}]},{"name":"bin","children":[{"name":"aws_auth_configmap_generator_py27_env.pex","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/bin/aws_auth_configmap_generator_py27_env.pex","sha":"d00c0aff5ef5ea8b7ad9a0ce9318e7e5e7a6da9f"},{"name":"aws_auth_configmap_generator_py3_env.pex","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/bin/aws_auth_configmap_generator_py3_env.pex","sha":"c4500959687a373596395a4c275bab61029ea2a9"}]},{"name":"build_scripts","children":[{"name":"build.sh","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/build_scripts/build.sh","sha":"34f496ada6fdc2d33028c6b8df7d3ba172a3dbdd"}]},{"name":"dev_requirements.txt","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/dev_requirements.txt","sha":"
40f29298c05348c2f1227a53da3f88c89632feb3"},{"name":"requirements.txt","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/requirements.txt","sha":"97397a79f826def4e1023a6bc9b4cb346bdcafbe"}]},{"name":"main.tf","path":"modules/eks-k8s-role-mapping/main.tf","sha":"27557e43793f1ad7d021b8da3413c006075a0660"},{"name":"outputs.tf","path":"modules/eks-k8s-role-mapping/outputs.tf","sha":"95d4d4ec652bb541b91a2844e00f68064b423e60"},{"name":"variables.tf","path":"modules/eks-k8s-role-mapping/variables.tf","sha":"19ce18b4f61497d7366db872a40ce973f9db8549"}]},{"name":"eks-scripts","children":[{"name":"README.md","path":"modules/eks-scripts/README.md","sha":"96baaf535647b9f4c364d6a19057bcccb42df2be"},{"name":"bin","children":[{"name":"map-ec2-tags-to-node-labels","path":"modules/eks-scripts/bin/map-ec2-tags-to-node-labels","sha":"8087c82d4d47f25439f118c2a51e59d22689ada7"},{"name":"map_ec2_tags_to_node_labels.py","path":"modules/eks-scripts/bin/map_ec2_tags_to_node_labels.py","sha":"f75ad19587e95b2bd8924125ea2a1a697154909f"}]},{"name":"dev_requirements.txt","path":"modules/eks-scripts/dev_requirements.txt","sha":"f56f9d1629a85734fe16ed70f00f36b830cd97c9"},{"name":"install.sh","path":"modules/eks-scripts/install.sh","sha":"7f192fca97b098482a8a398019d4d53f45dba478"}]},{"name":"eks-vpc-tags","children":[{"name":"README.md","path":"modules/eks-vpc-tags/README.md","sha":"b53e923baaa79718b55a272158ff9b710871a6ce"},{"name":"outputs.tf","path":"modules/eks-vpc-tags/outputs.tf","sha":"0ef2787cfd02ea8668c687302b1929618079a0b2"},{"name":"variables.tf","path":"modules/eks-vpc-tags/variables.tf","sha":"a6e332e9da4e473e1e42b1ca6c7b0ba139a77cfb"},{"name":"versions.tf","path":"modules/eks-vpc-tags/versions.tf","sha":"e5d003c3e7a7296ca0f610fc77f94f2139fc59d2"}]}]},{"name":"rfc","children":[{"name":"locking-down-kiam.adoc","path":"rfc/locking-down-kiam.adoc","sha":"3e92efcc57dda26c406ed66c5f95fe76049b3d2c"},{"name":"shipping-logs-to-cloudwatch.md","path":"rfc/shipping-logs-to-cloudwatc
h.md","sha":"6199b55bfe1faea80833bbf0c411adc90b88b84b"}]},{"name":"setup.cfg","path":"setup.cfg","sha":"981bc2bfd0b35029438d56c6d862a7f1519b8fe6"},{"name":"test","children":[{"name":"Gopkg.lock","path":"test/Gopkg.lock","sha":"7dd58506d83164b594e3d650cae5c540987858e9"},{"name":"Gopkg.toml","path":"test/Gopkg.toml","sha":"a0159c5ca6bab4a7e77117edb9ab4b752517d4eb"},{"name":"README.md","path":"test/README.md","sha":"9bf8180d731bdc892279fcdbcbb03d245f31f83a"},{"name":"eks_cluster_integration_test.go","path":"test/eks_cluster_integration_test.go","sha":"e898491b14abb78d8c7c0bf6191547d3c7fa3fa1"},{"name":"eks_cluster_managed_workers_test.go","path":"test/eks_cluster_managed_workers_test.go","sha":"5c52034ff6ddf39d59169f1bc248d91867f0cdb7"},{"name":"eks_cluster_test_helpers.go","path":"test/eks_cluster_test_helpers.go","sha":"0ac527d18778dd162198297adb57e93927e5eb57"},{"name":"eks_cluster_upgrade_test.go","path":"test/eks_cluster_upgrade_test.go","sha":"73bb2f8bfe1a3cb2547e026840dc9bc6a88a7cc8"},{"name":"eks_cluster_with_iam_role_test.go","path":"test/eks_cluster_with_iam_role_test.go","sha":"ca0b2f65ebffee9c417c59c49884b4034c6ca895"},{"name":"eks_cluster_with_supporting_services_test.go","path":"test/eks_cluster_with_supporting_services_test.go","sha":"e90389ff9fd393a53e813000f3b22552913d0304"},{"name":"eks_fargate_cluster_disable_public_endpoint_test.go","path":"test/eks_fargate_cluster_disable_public_endpoint_test.go","sha":"25ba0984ef5979ca146d16b63654559939d822db"},{"name":"eks_fargate_cluster_irsa_test.go","path":"test/eks_fargate_cluster_irsa_test.go","sha":"ee867e5ad391a426146af448986959542b829490"},{"name":"eks_fargate_cluster_test.go","path":"test/eks_fargate_cluster_test.go","sha":"49809cf53d4defb19e4672520d42c55d4d32d3f4"},{"name":"eks_fargate_cluster_with_supporting_services_test.go","path":"test/eks_fargate_cluster_with_supporting_services_test.go","sha":"196cb7393ea7159f75e189c3e2d235f0665043ad"},{"name":"errors.go","path":"test/errors.go","sha":"be062fe0205
ff82db8183d0fde639aa1883013ad"},{"name":"kubefixtures","children":[{"name":"autoscaler-test-pods-deployment.yml","path":"test/kubefixtures/autoscaler-test-pods-deployment.yml","sha":"b2d94c4bfa729b639290ee21629c19ca6ea694ee"},{"name":"eks-irsa-test.yml","path":"test/kubefixtures/eks-irsa-test.yml","sha":"db5439cf6d38873dbae71daa4197d6947990a94a"},{"name":"eks-k8s-role-mapping-test-role.yml","path":"test/kubefixtures/eks-k8s-role-mapping-test-role.yml","sha":"ede7587308d2a4ecf55042b05800099c43f3af7d"},{"name":"kube-system-sa-admin-binding.yml","path":"test/kubefixtures/kube-system-sa-admin-binding.yml","sha":"282d406512102cbe54e952575f26e7e0fbb2aa9a"},{"name":"nginx-deployment.yml","path":"test/kubefixtures/nginx-deployment.yml","sha":"a58866e59c113635af24982cfb0b530f0c416af0"},{"name":"robust-nginx-deployment.yml","path":"test/kubefixtures/robust-nginx-deployment.yml","sha":"a71c2bb24c75b2ebcf54563df799281938a49ca5"}]},{"name":"script_tests","children":[{"name":"executor.sh","path":"test/script_tests/executor.sh","sha":"f2a571ab875195d450a942d684ce41f86f824e70"},{"name":"requirements.txt","path":"test/script_tests/requirements.txt","sha":"e78b3b8c7b4bdecf8d1f235c1f55dcf227ee19c6"},{"name":"test_aws_auth_configmap_generator.py","path":"test/script_tests/test_aws_auth_configmap_generator.py","sha":"8da981d07d31745a1db59e9693995e60cea14abc"},{"name":"test_map_ec2_tags_to_node_labels.py","path":"test/script_tests/test_map_ec2_tags_to_node_labels.py","sha":"1bb3a5eae3727c0e6caf29c2cf4b7d596bb9a161"},{"name":"tox.ini","path":"test/script_tests/tox.ini","sha":"088400028aa4cf08b188b449875cf243222f2250"}]},{"name":"terratest_options.go","path":"test/terratest_options.go","sha":"d7ed6a80b1de9893846a4c751e73188cc2850248"},{"name":"test_debug_helpers.go","path":"test/test_debug_helpers.go","sha":"c71a7a9d5b68f0f59d2518496d9f5893206b5e22"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"c0aa8112f2958c98fce5e1bf6193e04824b19aa7"}]}]},"detailsContent":"<h1 
information on how to set this up.

Once the environment variable is set, you can run `packer build` to build the AMI:

```
packer build packer/build.json
```

This will spin up an EC2 instance, run the shell scripts to provision the machine, burn a new AMI, spin down the instance, and then output the newly built AMI.

**NOTE**: By default, the provided `packer` template builds the AMI in the `us-east-1` region. To build in a different region, pass in `-var "region=us-east-2"` to override the default.

## Apply Terraform Templates

Once the AMI is built, we are ready to use it to deploy our EKS cluster. Unlike the other examples in this repo, this example breaks up the code into multiple submodules.
Refer to [Why are there multiple Terraform submodules in this example?](#why-are-there-multiple-terraform-submodules-in-this-example) for more information on why the example is structured this way.

To deploy our cluster, we will apply the templates in the following order:

1. [eks-cluster: Deploy the EKS cluster with workers](#deploy-eks-cluster)
2. [core-services: Deploy Core Services (e.g. Helm Server) on to the EKS cluster](#deploy-core-services)
3. [(Optional) nginx-service: Deploy nginx on to the EKS cluster](#optional-deploy-nginx-service)

### Deploy EKS cluster

The code for deploying an EKS cluster with its worker groups is defined in the `eks-cluster` submodule. When applied, this Terraform example will deploy a VPC, launch an EKS control plane in it, and then provision two worker groups to run workloads. The two groups provided by the example are:

1. A `core` worker group dedicated to running supporting services like `kiam` that may require more lockdown. The nodes in this group are tainted with `NoSchedule` so that `Pods` are not scheduled there by default.
2. An `application` worker group dedicated to running application services.

To deploy the example, we first need to define the required variables.
To define the variables, create a new file in the example directory called `terraform.tfvars`:

```
touch ./eks-cluster/terraform.tfvars
```

Then, create a new entry for each required variable (and any optional variables you would like to override). See the `variables.tf` file for a list of available variables. Below is a sample `terraform.tfvars` file:

```
aws_region       = "us-west-2"
eks_cluster_name = "test-eks-cluster-with-supporting-services"
vpc_name         = "test-eks-cluster-with-supporting-services-vpc"
eks_worker_ami   = "ami-00000000000000000"
```

**NOTE**: If you attempt to deploy into the `us-east-1` region, note that the availability zone `us-east-1e` does not support EKS. To work around this, use the `availability_zone_whitelist` variable to control which zones are used to deploy EKS by adding the following to the tfvars file: `availability_zone_whitelist = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1f"]`.

Once the variables are filled out, we are ready to apply the templates to provision our cluster. To do this, run `terraform init` followed by `terraform apply`:

```
cd eks-cluster
terraform init
terraform apply
cd ..  # go back to the eks-cluster-with-supporting-services example folder
```

At the end of this, you will have an EKS cluster with 2 ASG worker node pools.
We will use `kubectl` to verify this.

In order to use `kubectl`, we first need to set it up so that it can authenticate with our new EKS cluster. You can learn more about how authentication works with EKS in our guide [How do I authenticate kubectl to the EKS cluster?](/core-concepts.md#how-do-i-authenticate-kubectl-to-the-eks-cluster). For now, you can run the `kubergrunt eks configure` command:

```
EKS_CLUSTER_ARN=$(cd eks-cluster && terraform output eks_cluster_arn)
kubergrunt eks configure --eks-cluster-arn $EKS_CLUSTER_ARN
```

At the end of this command, your default kubeconfig file (located at `~/.kube/config`) will have a new context that authenticates with EKS. This context will be set as the default so that subsequent `kubectl` calls will target your deployed EKS cluster.

You can now use `kubectl` to verify the two worker groups. Run `kubectl get nodes` and `kubectl describe nodes` to see the associated labels of the nodes and verify there are two distinct groups of labels.

You can view information about the deployed cluster with `terraform output`. Record the entries for `vpc_id`, `eks_app_worker_iam_role_name`, and `eks_core_worker_iam_role_name`, as we will be using those in the next step.

### Deploy Core Services

Once our EKS cluster is deployed, we can deploy core services on to it. The code for core services is defined in the `core-services` submodule.
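The three values recorded in the previous step are Terraform outputs of the `eks-cluster` module, so they can also be captured programmatically from `terraform output -json`, which wraps each output in a `value` field. A minimal sketch (the sample values are placeholders, and `read_tf_outputs` is a hypothetical helper, not part of this repo):

```python
import json

def read_tf_outputs(raw_json, names):
    """Extract selected output values from `terraform output -json` text.

    Terraform renders each output as {"<name>": {..., "value": <value>}}.
    """
    outputs = json.loads(raw_json)
    return {name: outputs[name]["value"] for name in names}

# Shape of `terraform output -json` for this module (placeholder values):
sample = """
{
  "vpc_id": {"sensitive": false, "type": "string", "value": "vpc-0123456789abcdef0"},
  "eks_app_worker_iam_role_name": {"sensitive": false, "type": "string", "value": "app-worker-iam-role"},
  "eks_core_worker_iam_role_name": {"sensitive": false, "type": "string", "value": "core-worker-iam-role"}
}
"""

recorded = read_tf_outputs(
    sample,
    ["vpc_id", "eks_app_worker_iam_role_name", "eks_core_worker_iam_role_name"],
)
```

Feeding the real `terraform output -json` text (run from the `eks-cluster` directory) into the same function yields the values needed for the next step's tfvars file.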
When applied, this Terraform example will deploy Tiller (the Helm Server) on to the cluster to manage services deployed into the `kube-system` namespace. It will then use the deployed Tiller to launch additional services:

- `fluentd-cloudwatch`: Used to ship Kubernetes logs (including container logs) to CloudWatch.
- `aws-alb-ingress-controller`: Used to map `Ingress` resources into AWS ALBs.

Like the previous step, create a `terraform.tfvars` file and fill it in. Below is a sample entry:

```
aws_region                           = "us-west-2"
eks_cluster_name                     = "test-eks-cluster-with-supporting-services"
eks_application_worker_iam_role_name = "application-worker-iam-role"
eks_core_worker_iam_role_name        = "core-worker-iam-role"
eks_vpc_id                           = "VPC ID"
```

The `eks_vpc_id`, `eks_application_worker_iam_role_name`, and `eks_core_worker_iam_role_name` input variables should be the outputs that you recorded in the previous step.

**NOTE**: The deployed Tiller version must exactly match the version of the helm client you have installed in order for the client to work. You can use the `tiller_version` variable to control which version of Tiller is installed.
Run `helm version -c` to see your client version, and set the `tiller_version` variable to the SemVer value (note: the `v` prefix in the version string is required and must be lower case).

Once the tfvars file is created, you can init and apply the templates:

```
touch ./core-services/terraform.tfvars
# Fill in the tfvars file with the variable inputs
cd core-services
terraform init
terraform apply
cd ..  # go back to the eks-cluster-with-supporting-services example folder
```

At the end of this, the cluster will be deployed with a working Tiller install. To verify the Tiller deployment, first set up helm to authenticate to the deployed Tiller instance using the installed environment file:

```
HELM_HOME="<FILL THIS IN>"
source $HELM_HOME/env
```

**NOTE**: If you did not pass a custom helm home directory to Terraform (the `helm_home` variable), `HELM_HOME` will be `~/.helm`.

Once the environment variables are set up, you can verify your connection using `helm version` and `helm ls`.

Additionally, the cluster should be shipping logs to CloudWatch.
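As an aside on the version-matching requirement above: the SemVer tag can be pulled out of `helm version` output with a short script. This is a sketch that assumes helm 2.x's output format (`Client: &version.Version{SemVer:"v2.11.0", ...}`); `extract_semver` is a hypothetical helper:

```python
import re

def extract_semver(helm_version_line):
    """Pull the vX.Y.Z tag out of a helm 2.x `helm version` line, e.g.
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"...", GitTreeState:"clean"}

    Returns the tag with its lowercase `v` prefix, as `tiller_version` expects.
    """
    match = re.search(r'SemVer:"(v\d+\.\d+\.\d+)"', helm_version_line)
    if match is None:
        raise ValueError("could not find a SemVer tag in: " + helm_version_line)
    return match.group(1)

line = 'Client: &version.Version{SemVer:"v2.11.0", GitCommit:"abc123", GitTreeState:"clean"}'
tiller_version = extract_semver(line)
```

Running the same extraction over the `Client:` and `Server:` lines of `helm version` lets you confirm the two tags match before touching `tiller_version`.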
You can load the CloudWatch logs in the UI by navigating to CloudWatch in the AWS console and looking for the log group with the same name as the EKS cluster.

### (Optional) Deploy Nginx Service

The `nginx-service` submodule shows an example of how to use Helm to deploy an application on to your EKS cluster. This example will:

- Set up the Gruntwork Helm Chart Repository.
- Install Nginx using the `k8s-service` helm chart.
- As part of the install, provision an ALB that routes to the nginx `Pods`.

Like the previous step, create a `terraform.tfvars` file and fill it in. Below is a sample entry:

```
aws_region       = "us-west-2"
eks_cluster_name = "test-eks-cluster-with-supporting-services"
```

Once the tfvars file is created, you can init and apply the templates:

```
touch ./nginx-service/terraform.tfvars
# Fill in the tfvars file with the variable inputs
cd nginx-service
terraform init
terraform apply
cd ..  # go back to the eks-cluster-with-supporting-services example folder
```

Once Terraform finishes, the nginx service should be available.
To get the endpoint, you can query Kubernetes using `kubectl` for the `Ingress` information:

```
# Prerequisite: Set up environment variables to auth to AWS
# Use kubectl to get the Ingress endpoint
kubectl \
  get ingresses \
  --namespace kube-system \
  --selector app.kubernetes.io/instance=nginx-test,app.kubernetes.io/name=nginx \
  -o jsonpath \
  --template '{.items[0].status.loadBalancer.ingress[0].hostname}'
```

This will output the ALB endpoint to the console. When you hit the endpoint, you should see the nginx welcome page. If the service isn't available or you don't get an endpoint, wait a few minutes to give Kubernetes a chance to provision the ALB.

## Where to go from here

Now that you have your cluster, here are a few things you can do to explore it:

- If you set up an SSH key with the `eks_worker_keypair_name` variable, try SSH-ing to the nodes to see the running processes.
- Try granting access to Tiller to other RBAC entities using `kubergrunt helm grant`.

## Why are there multiple Terraform submodules in this example?

Breaking up your code not only improves readability, but also helps with maintainability by keeping the surface area small. Typically you want to structure your code so that resources that are deployed frequently are isolated from critical resources that might bring down the cluster. For example, you might upgrade Helm regularly, but you might not touch your VPC once it is set up.
Therefore, you do not want to manage your Helm resources with the VPC, as every time you update Helm you would be putting the cluster at risk.

Additionally, breaking up the code into modules helps introduce dependency ordering. Terraform is notorious for bugs and subtle issues that make it difficult to reliably introduce a dependency chain, especially when you have modules. For example, you can't easily define dependencies such that the resources in a module depend on resources or modules external to it (see https://github.com/hashicorp/terraform/issues/1178).

The dependency logic between launching the EKS cluster, setting up Helm, and deploying services with Helm is tricky to encode in Terraform such that it works reliably. While it is fairly easy to get the resources to deploy in order such that they are available, it is tricky to destroy them in the right order. For example, it is fine to deploy Helm in parallel with the worker nodes, so that the worker nodes come up while the helm deployment script is executing.
The same cannot be done for destruction, because the script needs to communicate with the Helm Server while destroying it.

In summary, this example breaks up the Terraform code to show how one might modularize their EKS cluster code, in addition to making the dependency management more explicit.

## Troubleshooting

**When destroying `core-services`, I get the following error:**

```
* helm_release.fluentd-cloudwatch: rpc error: code = Unknown desc = secrets is forbidden: User "system:serviceaccount:kube-system:cluster-admin-tiller-account" cannot list secrets in the namespace "kube-system"
```

- There is an issue in the destroy order where many resources that the helm release depends on are deleted in parallel with the release, which causes helm to fail on delete. To fix the issue, first run `apply` again so that all the permissions are restored, and then run

  ```
  terraform destroy \
    -target module.fluentd_cloudwatch.helm_release.fluentd-cloudwatch \
    -target module.alb_ingress_controller.helm_release.aws_alb_ingress_controller \
    -target module.k8s_external_dns.helm_release.k8s_external_dns
  ```

  to destroy the helm releases first.
  Once that completes, it is safe to run `terraform destroy` again.

**When destroying `eks-cluster`, I get an error destroying VPC related resources.**

- EKS relies on the [`amazon-vpc-cni-k8s`](https://github.com/aws/amazon-vpc-cni-k8s) plugin to allocate IP addresses to the pods in the Kubernetes cluster. This plugin works by allocating secondary ENI devices to the underlying worker instances. Depending on timing, this plugin could interfere with destroying the cluster in this example. Specifically, Terraform could shut down the instances before the VPC CNI pod has had a chance to cull the ENI devices. These devices are managed outside of Terraform, so if they linger, they can interfere with destroying the VPC.
  - To work around this limitation, you have to go into the console and delete the ENIs associated with the VPC, then retry the destroy call.
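Finding the lingering ENIs can also be scripted. This is a sketch of the filtering logic only; it assumes you feed it the `NetworkInterfaces` list returned by boto3's `ec2.describe_network_interfaces(Filters=[{"Name": "vpc-id", "Values": [vpc_id]}])`, and `lingering_enis` is a hypothetical helper name:

```python
def lingering_enis(network_interfaces):
    """Return IDs of ENIs that are unattached (Status == "available") and are
    therefore candidates for deletion before retrying `terraform destroy`.

    Input: the NetworkInterfaces list from EC2's describe_network_interfaces.
    """
    return [
        eni["NetworkInterfaceId"]
        for eni in network_interfaces
        if eni.get("Status") == "available"
    ]

# Sample of the shape the EC2 API returns (trimmed to the fields used here):
sample = [
    {"NetworkInterfaceId": "eni-0aaa", "Status": "in-use"},
    {"NetworkInterfaceId": "eni-0bbb", "Status": "available"},
]
```

Only `available` (unattached) ENIs are listed; anything still `in-use` is attached to an instance and will normally be cleaned up when Terraform terminates the workers.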