Elastic Kubernetes Service (EKS) Cluster

Deploy a Kubernetes cluster on top of Amazon Elastic Kubernetes Service (EKS).

EKS Cluster Workers Module

This module provisions self-managed ASGs, in contrast to EKS Managed Node Groups. If you wish to deploy Managed Node Groups instead, see the eks-cluster-managed-workers module.

This Terraform module launches worker nodes for an Elastic Kubernetes Service (EKS) cluster that you can use to run Kubernetes Pods and Deployments.

This module is responsible for the EKS Worker Nodes in the EKS cluster topology. You must launch a control plane in order for the worker nodes to function. See the eks-cluster-control-plane module for managing an EKS control plane.

How do you use this module?

  • See the root README for instructions on using Terraform modules.
  • See the examples folder for example usage.
  • See variables.tf for all the variables you can set on this module.
  • See outputs.tf for all the variables that are output by this module.

Differences with managed node groups

See the [Differences with self managed workers] section in the documentation for eks-cluster-managed-workers module for a detailed overview of differences with EKS Managed Node Groups.

What should be included in the user-data script?

In order for the EKS worker nodes to function, they must register themselves with the Kubernetes API run by the EKS control plane. This is handled by the bootstrap script provided in the EKS optimized AMI. The user-data script should call the bootstrap script at some point during its execution. You can get the cluster information needed for this call from the eks-cluster-control-plane module.
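
As a minimal sketch, the user data only needs to invoke the bootstrap script. The argument name and cluster name below are hypothetical, not the module's confirmed interface; see variables.tf and the eks-cluster example for the real wiring.

module "eks_workers" {
  # (other arguments omitted)

  # Hypothetical argument name for illustration; check variables.tf for the
  # module's actual user data input. The key point is that the rendered
  # script calls /etc/eks/bootstrap.sh, which ships with the EKS optimized
  # AMI and registers the node with the control plane of the named cluster.
  # "my-eks-cluster" stands in for your cluster's name.
  cluster_instance_user_data = <<-EOF
    #!/bin/bash
    set -e
    /etc/eks/bootstrap.sh my-eks-cluster
  EOF
}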

For an example of a user data script, see the eks-cluster example's user-data.sh script.

You can read more about the bootstrap script in the official documentation for EKS.

Which security group should I use?

EKS clusters using Kubernetes version 1.14 and above automatically create a managed security group known as the cluster security group. The cluster security group is designed to allow all traffic from the control plane and worker nodes to flow freely between each other. This security group has the following rules:

  • Allow Kubernetes API traffic between the security group and the control plane security group.
  • Allow all traffic between instances of the security group ("ingress all from self").
  • Allow all outbound traffic.

EKS will automatically use this security group for the underlying worker instances used with managed node groups or Fargate. This allows traffic to flow freely between Fargate Pods and worker instances managed with managed node groups.

You can read more about the cluster security group in the AWS docs.

By default, this module will attach two security groups to the worker nodes it manages:

  • The cluster security group.
  • A custom security group that can be extended with additional rules.

You can attach additional security groups to the nodes using the var.additional_security_group_ids input variable.

If you would like to avoid the cluster security group (useful if you wish to isolate the workers managed by this module at the network level from other workers in your cluster, such as Fargate, Managed Node Groups, or other self-managed ASGs), set the use_cluster_security_group input variable to false. With this setting, the module will apply recommended security group rules to the custom group so that the nodes can function as EKS workers. The rules used for the new security group are based on the recommendations provided by AWS for configuring an EKS cluster.
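
For illustration, here is a hedged sketch of how these two inputs might be wired together; the extra security group resource name is hypothetical.

module "eks_workers" {
  # (other arguments omitted)

  # Attach an extra, separately managed security group to the worker nodes.
  # The referenced resource is a placeholder for your own security group.
  additional_security_group_ids = [aws_security_group.extra_worker_rules.id]

  # Opt out of the shared cluster security group and rely on the module's
  # custom security group plus the AWS-recommended EKS worker rules.
  use_cluster_security_group = false
}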

<a name="how-to-extend-security-group"></a>How do you add additional security group rules?

To add additional security group rules to the EKS cluster worker nodes, you can use the aws_security_group_rule resource and set its security_group_id argument to this module's eks_worker_security_group_id output. For example, here is how you can allow incoming HTTP requests on port 8080 to the EC2 Instances in this cluster:

module "eks_workers" {
  # (arguments omitted)
}

resource "aws_security_group_rule" "allow_inbound_http_from_anywhere" {
  type = "ingress"
  from_port = 8080
  to_port = 8080
  protocol = "tcp"
  cidr_blocks = ["0.0.0.0/0"]

  security_group_id = "${module.eks_workers.eks_worker_security_group_id}"
}

Note: The security group rules you add will apply to ALL Pods running on these EC2 Instances. There is currently no way in EKS to manage security group rules on a per-Pod basis. Instead, rely on Kubernetes Network Policies to restrict network access within a Kubernetes cluster.

What IAM policies are attached to the EKS Cluster?

This module will create IAM roles for the EKS cluster worker nodes with the minimum set of policies necessary for the cluster to function as a Kubernetes cluster. The policies attached to the roles are the same as those documented in the AWS getting started guide for EKS.
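
For reference only, the worker node policies described in that guide are AWS managed policies. A rough sketch of equivalent attachments is shown below; the module already handles this for you, so you would not normally write it yourself, and the exact set the module attaches may differ.

resource "aws_iam_role_policy_attachment" "worker_node_policy" {
  role       = module.eks_workers.eks_worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "cni_policy" {
  role       = module.eks_workers.eks_worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = module.eks_workers.eks_worker_iam_role_name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}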

How do you add additional IAM policies?

To add additional IAM policies to the EKS cluster worker nodes, you can use the aws_iam_role_policy or aws_iam_policy_attachment resources and set the IAM role to this module's eks_worker_iam_role_name output. For example, here is how you can allow the worker nodes in this cluster to access an S3 bucket:

module "eks_workers" {
  # (arguments omitted)
}

resource "aws_iam_role_policy" "access_s3_bucket" {
    name = "access_s3_bucket"
    role = "${module.eks_workers.eks_worker_iam_role_name}"
    policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect":"Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
EOF
}

Note: The IAM policies you add will apply to ALL Pods running on these EC2 Instances. See the How do I associate IAM roles to the Pods? section of the eks-cluster-control-plane module README for more fine-grained allocation of IAM credentials to Pods.

How do I SSH into the nodes?

This module provides options that allow you to SSH into the worker nodes it manages. To do so, you must first use an AMI that is configured to allow SSH access. Then, you must set up the Auto Scaling Group to launch instances with a keypair you have access to by using the cluster_instance_keypair_name option of the module. Finally, you need to allow inbound SSH in the worker nodes' security group by extending it as described in the guide above. This will allow SSH access to the instances using the specified keypair, provided the server AMI is configured to run the SSH daemon.
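
A hedged sketch tying these steps together; the keypair name and CIDR block below are placeholders, not values from this module.

module "eks_workers" {
  # (other arguments omitted)

  # Launch worker instances with an EC2 keypair you have access to.
  cluster_instance_keypair_name = "my-team-keypair"
}

# Open the SSH port on the workers' custom security group. The CIDR below is
# an illustrative internal network range; restrict it to suit your setup.
resource "aws_security_group_rule" "allow_inbound_ssh_from_vpc" {
  type        = "ingress"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["10.0.0.0/16"]

  security_group_id = module.eks_workers.eks_worker_security_group_id
}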

Note: Using a single key pair shared with your whole team for all of your SSH access is not secure. For a more secure option that allows each developer to use their own SSH key, and to manage server access via IAM or your Identity Provider (e.g. Google, ADFS, Okta, etc), see ssh-grunt.

How do I roll out an update to the instances?

Terraform and AWS do not provide a way to automatically roll out a change to the Instances in an EKS Cluster. Due to Terraform limitations (see here for a discussion), there is currently no way to implement this purely in Terraform code. Therefore, we've embedded this functionality into kubergrunt, which can do a zero-downtime roll out for you.

Refer to the deploy subcommand documentation for more details on how this works.

How do I enable cluster auto-scaling?

By default, this module will not automatically scale in response to resource usage; the autoscaling_group_configurations.*.max_size option is only used to give room for new instances during rolling updates. To enable auto-scaling in response to resource utilization, you must set the include_autoscaler_discovery_tags input variable to true and also deploy the Kubernetes Cluster Autoscaler module.

Note that the cluster autoscaler only supports ASGs that manage nodes in a single availability zone. This means that you need to carefully provision the ASGs such that you have one group per AZ if you wish to use the cluster autoscaler. To accomplish this, ensure that the subnet_ids in each autoscaling_group_configurations input map entry come from the same AZ, as in the sketch below.
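
A hedged sketch of one possible layout; the exact keys supported by autoscaling_group_configurations may differ, so consult variables.tf, and the subnet variables here are illustrative.

module "eks_workers" {
  # (other arguments omitted)

  # One ASG per availability zone so the cluster autoscaler can scale each
  # group independently. Keys and subnet IDs are illustrative placeholders.
  autoscaling_group_configurations = {
    asg_us_east_1a = {
      min_size   = 1
      max_size   = 4
      subnet_ids = [var.private_subnet_id_us_east_1a]
    }
    asg_us_east_1b = {
      min_size   = 1
      max_size   = 4
      subnet_ids = [var.private_subnet_id_us_east_1b]
    }
  }

  # Tag the ASGs so the Kubernetes Cluster Autoscaler can discover them.
  include_autoscaler_discovery_tags = true
}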

Refer to the Kubernetes Autoscaler documentation for more details.
