This Module can be used to manage the mapping of AWS IAM roles and users to Kubernetes RBAC groups for finer grained
access control of your EKS Cluster.
This Module only manages the mapping between IAM roles and Kubernetes RBAC groups. This Module does not create, modify,
or configure either roles. We recommend managing them in a separate Terraform template in the context of your needs,
that are then provided as inputs to this module.
How do you use this module?
See the root README for instructions on using Terraform modules.
See variables.tf for all the variables you can set on this module.
See outputs.tf for all the variables that are outputed by this module.
This module depends on a packaged python binary, which requires a working python install. See the aws-auth ConfigMap
Generartor Binary section of the docs for more information
What is Kubernetes Role Based Access Control (RBAC)?
Role Based Access Control (RBAC) is a method to regulate
access to resources based on the role that individual users assume in an organization. Kubernetes allows you to define
roles in the system that individual users inherit, and explicitly grant permissions to resources within the system to
those roles. The Control Plane will then honor those permissions when accessing the resources on Kubernetes through
clients such as kubectl. When combined with namespaces, you can implement sophisticated control schemes that limit the
access of resources across the roles in your organization.
The RBAC system is managed using ClusterRole and ClusterRoleBinding resources (or Role and RoleBinding resources
if restricting to a single namespace). The ClusterRole (or Role) object defines a role in the Kubernetes system that
has explicit permissions on what it can and cannot do. These roles are then bound to users and groups using the
ClusterRoleBinding (or RoleBinding) resource. An important thing to note here is that you do not explicitly create
users and groups using RBAC, and instead rely on the authentication system to implicitly create these entities.
AWS IAM role is AWS's implementation of RBAC. Users
and clients authenticate to AWS to assume an IAM role, that then have a set of permissions that grant or deny access to
various resources within AWS. Unlike users, IAM roles do not have long standing credentials associated with them.
Instead, a user uses the AWS API to assume a role, which will issue temporary credentials that can be used to access the
AWS resources as the assumed role. Like the roles in Kubernetes RBAC implementation, you can configure the roles to have
as much or as little permissions as necessary when accessing resources in the AWS system.
This Module provides code for you to manage the mapping between AWS IAM roles and Kubernetes RBAC roles so that you can
maintain a consistent set of mappings between the two systems. This works hand in hand with the EKS authentication
system, providing the information to Kubernetes to resolve the
user to the right RBAC group based on the provided IAM role credentials.
Examples
Restricting specific actions
Suppose that you are setting up your EKS cluster for your organization that has an ops team and a dev team. Suppose
further that your organization would like to restrict access to your dev team so that they can only list and update
existing Pods, but can not create new ones, while the ops team is able to manage all resources in your Kubernetes
cluster.
To support this, we need to first define the roles in Kubernetes that map to the explicit permissions granted to each
team. For the ops team in Kubernetes, since we want to grant them admin level privileges on the cluster, we can use the
default system:admin group that will already obtain those permissions. For the dev group however, there is no
default group and role that fits our needs, so we need to define a new ClusterRole and bind it to the dev group. To do
this, we will first define the ClusterRole resource using the RBAC API:
This creates a new role dev that allows the role to get, list, and update Pods in any namespace in the cluster. We can
apply this on the cluster using kubectl:
kubectl apply -f dev-role.yml
We then need to bind this to the dev group using a ClusterRoleMapping resource:
This config binds the ClusterRole named dev to the Group named dev. Like the ClusterRole config, we can apply
this on the cluster using kubectl:
kubectl apply -f dev-role-binding.yml
Now that we have the two roles and bindings in the system, we need some way for users in the ops and dev teams to inherit
the roles. This is done implicitly by mapping their authentication credentials to their respective groups. In EKS,
authentication is handled by IAM, which means that we need to tell Kubernetes to map their IAM credentials to their
respective groups. We will use this Module to do exactly that.
This Module takes as input a mapping between IAM roles and RBAC groups as part of the iam_role_to_rbac_group_mapping
input variable. In this example, we will assume that members of the ops team access the cluster by assuming the ops
IAM role and members of the dev team access the cluster by assuming the dev IAM role, so we will map these to their
respective groups in Kubernetes:
When you terraform apply the above code, the Module will configure Kubernetes to resolve the provided AWS IAM roles to
the specified RBAC groups when fulfilling client requests. In this case, any kubectl authentications using the dev
IAM role will resolve to the dev Kubernetes RBAC group, while any authentications using the ops IAM role will resolve
to the system:admin Kubernetes RBAC group. The dev team will then implicitly inherit the devClusterRole based
on the ClusterRoleBinding that binds that role to the dev group.
Important: Note that we did not need to define the dev group explicitly in Kubernetes. This is automatically
handled by the authentication system. In Kubernetes, the group is implicitly defined as part of defining a user entity
that can map to it. As such, it is important to take care to avoid typos here to ensure that the string you use for the
group here matches any groups referenced in the role bindings.
Restricting by namespace
In this example, suppose that you are setting up a dev EKS cluster for your dev team that is organized into multiple
subteams working on different products. In this scenario, you want to give members of the dev team full access to deploy
and manage their applications, including deleting resources. However, you may want to implement controls so that teams
can only manage their own resources, and not others' resources.
To support this, you would use Kubernetes
namespaces to partition your Kubernetes
cluster. Namespaces allow you to divide your resources into logical groups on the cluster. By utilizing namespaces, you
can grant teams full access to resources launched in their own namespace, but restrict access to resources in other
namespaces.
To implement this on your EKS cluster, you would first need to create namespaces for each team. For this example, we
will assume there are two dev teams in the organization: api and backend. So we will create a namespace for each
team:
This will create two namespaces: one named apiteam and one named backendteam. We can apply this on the cluster using
kubectl:
kubectl apply -f namespaces.yml
Next, we need to create RBAC roles in Kubernetes that grant access to each of the namespaces, but not others. To do this
we will rely on the Role resource, instead of the ClusterRole resource because we want to scope the permissions to a
particular namespace:
This will create two roles in the Kubernetes cluster: apiteam-full-access and backendteam-full-access, each giving
full access to all resources in the respective namespaces. Like the YAML file for the namespaces, you can apply this on
the cluster using kubectl:
kubectl apply -f roles.yml
To allow authenticating entities to be able to inherit these roles, we need to map these to a group. We can do that by
defining RoleBinding resources:
These two resources bind the apiteam to the apiteam-full-access role and the backendteam to the
backendteam-full-access role so that any client that maps to those groups will inherit the right permissions. We can
apply this to the cluster using kubectl:
kubectl apply -f role-bindings.yml
Now that we have the namespaces, the roles, and the bindings in the system, we need to create the AWS IAM roles that map
to each team and tell Kubernetes to map the AWS IAM role to the proper RBAC role when authenticating the client. We will
assume that the IAM roles already exist (named ApiDeveloper and BackendDeveloper). To map the IAM roles to the RBAC
groups, we will use this Module. This Module takes as input a mapping between IAM roles and RBAC roles as part of the
iam_role_to_rbac_group_mapping input variable:
When you terraform apply the above code, the Module will configure Kubernetes to resolve the provided AWS IAM roles to
the specified RBAC groups when fulfilling client requests. In this case, any kubectl authentications using the
ApiDeveloper IAM role will resolve to the apiteam Kubernetes RBAC group, while any authentications using the
BackendDeveloper IAM role will resolve to the backendteam Kubernetes RBAC group. In this way, the
developers who authenticate as ApiDeveloper will only be able to access the apiteam namespace in the Kubernetes
cluster, while the developers who authenticate as BackendDeveloper will only be able to access the backendteam
namespace.
Important: Note that we did not need to define the apiteam and backendteam group explicitly in Kubernetes. This
is automatically handled by the authentication system. In Kubernetes, the group is implicitly defined as part of
defining a user entity that can map to it. As such, it is important to take care to avoid typos here to ensure that the
string you use for the group here matches any groups referenced in the role bindings.
Why not use a Helm Chart?
This Module cannot be implemented as a helm chart due to the functionality of the ConfigMap being generated here. In
EKS, the worker nodes also use an IAM role to authenticate against the EKS Control Plane. As such, the worker nodes rely
on the mapping from the aws-auth ConfigMap generated by this module to be able to successfully register to the EKS
cluster as a worker node.
To use Helm, the Kubernetes cluster must be running the Tiller (Helm Server) Pods on the cluster. However, to run the
Tiller Pods, the cluster must have worker nodes online and available. As such, we have a chicken and egg situation,
where to use Helm we need to have worker nodes, which need the aws-auth ConfigMap, which needs Helm.
To avoid this cyclic dependency, we implement this module using the kubernetes provider which will use kubectl under
the hood. The cluster requirement for a working kubectl is the EKS control plane, which will be available without the
ConfigMap and as such does not have the cyclic dependency problem of Helm.
aws-auth ConfigMap Generator Binary
The aws-auth ConfigMap requires two string entries: mapRoles, which defines the mapping of IAM roles to Kubernetes
RBAC roles, and mapUsers, which defines the mapping of IAM users to Kubernetes RBAC roles. Both entries in the
ConfigMap need to be defined as a YAML string for EKS to parse correctly.
There are inherent challenges in the Terraform syntax that make it difficult to generate this YAML. Specifically, the
need to have a nested loop to generate the groups entry for each IAM role/user mapping. Therefore, to have better
flexibility in generating the YAML entries, we resort to using a python binary to handle the YAML generation based on
the Terraform inputs to this module.
NOTE: When Terraform 0.12 lands, there will be richer syntax including for loops and complex map types that will
support the generation of the YAML using pure Terraform. This binary will be replaced with a pure Terraform version when
0.12 is released.
The operator machine must have a valid python interpreter available in the PATH under the name python. The binary
supports python versions 2.7, 3.5, 3.6, 3.7, and 3.8, on Mac OSX or Linux.
Usage
The binary is intended to be used as part of a Terraform external data
source. As such, the script will read JSON data from
stdin and output JSON data to stdout.
The binary is a python executable that includes the necessary third party requirements. This special version of python
embeds cross platform versions of the requirements that are unpacked at runtime into a virtualenv. This executable is
then used to call out to the entrypoint script, which will import the library function.
As such, the binary only needs to be built when the requirements change. You do not need to rebuild the binary for any
changes to the source files in the aws_auth_configmap_generator library.
This approach is taken so that consumers of the module do not need to install additional third party libraries on top of
python to utilize the script. To make this work, the pex binaries need to be checked into the repository so that they
are distributed with the module.
The binary is generated using the pex utility. Pex will package
the python script with all its requirements into a single binary, that can be made to be compatible with multiple
versions of python and multiple OS platforms.
To build the binary, you will need the following:
A working python environment with all compatible versions of python setup (so that you can build binaries for all
versions)
tox and pex installed (use pip install -r dev_requirements.txt)
You can then build the binary using the helper script build.sh which will build the binary and copy it to the bin
directory for distribution. After that, you just need to check in the updated binaries.
It is recommended to use pyenv to help setup an environment with multiple python
interpreters. The latest binaries are built with the following python environment:
pyenv shell 2.7.153.5.23.6.63.7.03.8.1
Questions? Ask away.
We're here to talk about our services, answer any questions, give advice, or just to chat.
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"e634c582e53ab8c0571118cebf8033c4f558cee3"}]},{"name":".gitignore","path":".gitignore","sha":"7f6cf4bc746bbfd6da4c7a21dbcf1a2296aa0c10"},{"name":".pre-commit-config.yaml","path":".pre-commit-config.yaml","sha":"b008949ef10a7bad93ab93e8821da77577a30c5c"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"db8a871849f2583384d581e2a4c35eb5d2c50625"},{"name":"CONTRIBUTING.md","path":"CONTRIBUTING.md","sha":"a7cc7bd94443c252390564fa988755dbbe80d87d"},{"name":"GRUNTWORK_PHILOSOPHY.md","path":"GRUNTWORK_PHILOSOPHY.md","sha":"02d9873a74c99fe6d9b6b26bd9f8eb4a7a699c32"},{"name":"LICENSE.md","path":"LICENSE.md","sha":"a2cf01ecdd725fddd718ab91c80c115882c94f3c"},{"name":"README.adoc","path":"README.adoc","sha":"5ed02a037b64e33fe15e8a022c97f39d7e6242c2"},{"name":"_docs","children":[{"name":"eks-architecture.png","path":"_docs/eks-architecture.png","sha":"b4c9c46f88ed465c5575e915af54ad9920b56941"},{"name":"eks-icon.png","path":"_docs/eks-icon.png","sha":"83a29dc46e7bc6234ba5bb825e8ae283c56229a0"}]},{"name":"core-concepts.md","path":"core-concepts.md","sha":"3c504a547fc55ecff5536141534a32ed8a4a4ae7"},{"name":"examples","children":[{"name":"README.md","path":"examples/README.md","sha":"a70f3adc0c888e07b0b03cb32fbd156547c354da"},{"name":"eks-cluster-managed-workers","children":[{"name":"README.md","path":"examples/eks-cluster-managed-workers/README.md","sha":"21acaeb73c1d8a1819480bc7a8d1c35b8fa69081"},{"name":"dependencies.tf","path":"examples/eks-cluster-managed-workers/dependencies.tf","sha":"cf1b48a0d58571356ce788dda915332a48bb45c2"},{"name":"main.tf","path":"examples/eks-cluster-managed-workers/main.tf","sha":"cd7f3fbb03ddd663b552eb852a5f2befc379add0"},{"name":"outputs.tf","path":"examples/eks-cluster-managed-workers/outputs.tf","sha":"431bebd71e3f9d5c299c1740ba16b2eef717cbf0"},{"name":"variables.tf","path":"examples/eks-cluster-managed-workers/variables.tf","sha":"e4a9d5b2da436ca317a0380b03d8a85bce549472"}]},{"name":"eks-cluster-with-iam-role-mappings","children":[{"name":"README.md","path":"examples/eks-cluster-with-iam-role-mappings/README.md","sha":"6479e81678f2e08df477d467f2124f5dc53e9e53"},{"name":"dependencies.tf","path":"examples/eks-cluster-with-iam-role-mappings/dependencies.tf","sha":"df21d64c3435ff2859c4099ce0b854e98483f624"},{"name":"main.tf","path":"examples/eks-cluster-with-iam-role-mappings/main.tf","sha":"e445741db6fefdf7ddd0850f6820644c2a98348d"},{"name":"outputs.tf","path":"examples/eks-cluster-with-iam-role-mappings/outputs.tf","sha":"3876c30890ffef1726d533a869c23e66fa244e6c"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/eks-cluster-with-iam-role-mappings/user-data/user-data.sh","sha":"b10c34bfe4c9d10101472b47edbc3b7dff42a88e"}]},{"name":"variables.tf","path":"examples/eks-cluster-with-iam-role-mappings/variables.tf","sha":"7b3d2c4949848e51a7676269d419b85dc7ccfa4b"}]},{"name":"eks-cluster-with-supporting-services","children":[{"name":"README.md","path":"examples/eks-cluster-with-supporting-services/README.md","sha":"1af610f60977f2f05bb6917c8a3040449028ddd5"},{"name":"core-services","children":[{"name":"README.md","path":"examples/eks-cluster-with-supporting-services/core-services/README.md","sha":"e0bac13c7fd97d206766cbe3db0e7f269f7f0126"},{"name":"dependencies.tf","path":"examples/eks-cluster-with-supporting-services/core-services/dependencies.tf","sha":"8ef506ceacdd4b57bcdea3ad91c84c3c2544ba03"},{"name":"main.tf","path":"examples/eks-cluster-with-supporting-services/core-services/main.tf","sha":"35ecbae3c848a60de5e7e2d07a517bad76fdca3e"},{"name":"outputs.tf","path":"examples/eks-cluster-with-supporting-services/core-services/outputs.tf","sha":"35eb7dffb12786d50f580e64fa4a6ef496c160e8"},{"name":"variables.tf","path":"examples/eks-cluster-with-supporting-services/core-services/variables.tf","sha":"e43a616334814e86479287150cfc822187226708"}]},{"name":"eks-cluster","children":[{"name":"README.md","path":"examples/eks-cluster-with-supporting-services/eks-cluster/README.md","sha":"8a60a01004a93bbbf2091b730f0207f6dd2cc07e"},{"name":"dependencies.tf","path":"examples/eks-cluster-with-supporting-services/eks-cluster/dependencies.tf","sha":"fdc70a25511df461747927bc6874cff7bc787def"},{"name":"main.tf","path":"examples/eks-cluster-with-supporting-services/eks-cluster/main.tf","sha":"e662e9cc615234b62f7c8a2b4489124d52db0c37"},{"name":"outputs.tf","path":"examples/eks-cluster-with-supporting-services/eks-cluster/outputs.tf","sha":"534f5957ab5d9225aebf863e7849baec0da96dbb"},{"name":"user-data","children":[{"name":"app_worker_user_data.sh","path":"examples/eks-cluster-with-supporting-services/eks-cluster/user-data/app_worker_user_data.sh","sha":"c5fdd13d5bb04f765f1c90e9f12d23c48e94a252"},{"name":"core_worker_user_data.sh","path":"examples/eks-cluster-with-supporting-services/eks-cluster/user-data/core_worker_user_data.sh","sha":"0fa26153108b3d030ceeaae777aeb0a7e115404e"}]},{"name":"variables.tf","path":"examples/eks-cluster-with-supporting-services/eks-cluster/variables.tf","sha":"225ad92a427b38a0cf3fd4cd02e7c0ada2c0eccb"}]},{"name":"nginx-service","children":[{"name":"README.md","path":"examples/eks-cluster-with-supporting-services/nginx-service/README.md","sha":"58b899364432605520b890c407d1bcd0fafc8b27"},{"name":"dependencies.tf","path":"examples/eks-cluster-with-supporting-services/nginx-service/dependencies.tf","sha":"a2819acb9c726887612d04e224c9473cb7e293fd"},{"name":"main.tf","path":"examples/eks-cluster-with-supporting-services/nginx-service/main.tf","sha":"24d8cbbff07b1aa3e5a4ef7eae83851a0e895a3f"},{"name":"templates","children":[{"name":"values.yaml","path":"examples/eks-cluster-with-supporting-services/nginx-service/templates/values.yaml","sha":"298435e01df9fa495b15d512073c62662d292cd3"}]},{"name":"variables.tf","path":"examples/eks-cluster-with-supporting-services/nginx-service/variables.tf","sha":"36ea6f8a36b19e34dbeeb25ae7e5fcf30c956b0f"}]},{"name":"packer","children":[{"name":"README.md","path":"examples/eks-cluster-with-supporting-services/packer/README.md","sha":"6a974a7fd5da7ac13309d9e0c4aaba7bd8cb46c7"},{"name":"build.json","path":"examples/eks-cluster-with-supporting-services/packer/build.json","sha":"34760ce3ea4fe41078097d7a34092e2c6bf3ee43"}]}]},{"name":"eks-fargate-cluster-with-irsa","children":[{"name":"README.md","path":"examples/eks-fargate-cluster-with-irsa/README.md","sha":"7dfcee13140ca3df3baf9f61e666a45dde71a98a"},{"name":"dependencies.tf","path":"examples/eks-fargate-cluster-with-irsa/dependencies.tf","sha":"b422b2aa58d724243115464cebd86dfc9d22de19"},{"name":"main.tf","path":"examples/eks-fargate-cluster-with-irsa/main.tf","sha":"7f2e1bc01b84948b28c554f9d8d08776168490c4"},{"name":"outputs.tf","path":"examples/eks-fargate-cluster-with-irsa/outputs.tf","sha":"f059d7b74ffbfb06a0868d6d0a5d1831c8f45f10"},{"name":"variables.tf","path":"examples/eks-fargate-cluster-with-irsa/variables.tf","sha":"431b95593cc36fafc2a0072391d5e039a3d53c19"}]},{"name":"eks-fargate-cluster-with-supporting-services","children":[{"name":"README.md","path":"examples/eks-fargate-cluster-with-supporting-services/README.md","sha":"e597364fdf056051daa5b24e43afb02b22d8ec5c"},{"name":"core-services","children":[{"name":"README.md","path":"examples/eks-fargate-cluster-with-supporting-services/core-services/README.md","sha":"e0bac13c7fd97d206766cbe3db0e7f269f7f0126"},{"name":"dependencies.tf","path":"examples/eks-fargate-cluster-with-supporting-services/core-services/dependencies.tf","sha":"c8a0975403bb81f5c9e8c2cddea1666df0adb8b0"},{"name":"main.tf","path":"examples/eks-fargate-cluster-with-supporting-services/core-services/main.tf","sha":"6476d1e073caffbe5999320b9609d1dbba2aa7a0"},{"name":"outputs.tf","path":"examples/eks-fargate-cluster-with-supporting-services/core-services/outputs.tf","sha":"35eb7dffb12786d50f580e64fa4a6ef496c160e8"},{"name":"variables.tf","path":"examples/eks-fargate-cluster-with-supporting-services/core-services/variables.tf","sha":"da657291959044d32535597ed3d384ddaa6f83bd"}]},{"name":"eks-cluster","children":[{"name":"dependencies.tf","path":"examples/eks-fargate-cluster-with-supporting-services/eks-cluster/dependencies.tf","sha":"c1fa9e2c0d794ed6a8bf8afe6773d9645ea161d8"},{"name":"main.tf","path":"examples/eks-fargate-cluster-with-supporting-services/eks-cluster/main.tf","sha":"8fa3a3d6b84684f20307962ff9831ccc94bccb01"},{"name":"outputs.tf","path":"examples/eks-fargate-cluster-with-supporting-services/eks-cluster/outputs.tf","sha":"db0e767fd7ed3a0bcad5628a0c13b6208a442f13"},{"name":"variables.tf","path":"examples/eks-fargate-cluster-with-supporting-services/eks-cluster/variables.tf","sha":"cad45b14637d265dd23de69acc03ff6152ea1814"}]},{"name":"nginx-service","children":[{"name":"dependencies.tf","path":"examples/eks-fargate-cluster-with-supporting-services/nginx-service/dependencies.tf","sha":"3165e5c71fb1642d39a60f544be708d547825e7f"},{"name":"main.tf","path":"examples/eks-fargate-cluster-with-supporting-services/nginx-service/main.tf","sha":"d66648b4557bfb3ed32b094248fc137f41e98975"},{"name":"templates","children":[{"name":"values.yaml","path":"examples/eks-fargate-cluster-with-supporting-services/nginx-service/templates/values.yaml","sha":"655914f91177135cb7c5f15b62166cfc82a62a91"}]},{"name":"variables.tf","path":"examples/eks-fargate-cluster-with-supporting-services/nginx-service/variables.tf","sha":"d3c166441cdc556b0839930fbc281b7e8a1bd57f"}]}]},{"name":"eks-fargate-cluster","children":[{"name":"README.md","path":"examples/eks-fargate-cluster/README.md","sha":"df681cdbe945d0592ca57bd3a8eb9ae5d88c2f4a"},{"name":"dependencies.tf","path":"examples/eks-fargate-cluster/dependencies.tf","sha":"b422b2aa58d724243115464cebd86dfc9d22de19"},{"name":"main.tf","path":"examples/eks-fargate-cluster/main.tf","sha":"0094f34dbeb874c57ce20bcd9e3582f930d63cf2"},{"name":"outputs.tf","path":"examples/eks-fargate-cluster/outputs.tf","sha":"5115288e4192921035aba980990103fe4c4b7150"},{"name":"terraform.tfvars.back","path":"examples/eks-fargate-cluster/terraform.tfvars.back","sha":"6cb73f75cc7828c6b3efdc2a9b1787f75ed276d1"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/eks-fargate-cluster/user-data/user-data.sh","sha":"b10c34bfe4c9d10101472b47edbc3b7dff42a88e"}]},{"name":"variables.tf","path":"examples/eks-fargate-cluster/variables.tf","sha":"eea40e3144f3037c3b3451d61e1eeab2b871cce5"}]}]},{"name":"modules","children":[{"name":"eks-alb-ingress-controller-iam-policy","children":[{"name":"README.md","path":"modules/eks-alb-ingress-controller-iam-policy/README.md","sha":"d85eecf670ea161dcfe4b69c09926f31eef55c73"},{"name":"iampolicy.json","path":"modules/eks-alb-ingress-controller-iam-policy/iampolicy.json","sha":"5cba0c1500ee2520d72e8d47b86e318958e4dbc7"},{"name":"main.tf","path":"modules/eks-alb-ingress-controller-iam-policy/main.tf","sha":"a79f5a2e6a0ba72562c5a87182db516d8824ed21"},{"name":"outputs.tf","path":"modules/eks-alb-ingress-controller-iam-policy/outputs.tf","sha":"b551b0bcc6eb1b43bfff1606696566658564cfb4"},{"name":"variables.tf","path":"modules/eks-alb-ingress-controller-iam-policy/variables.tf","sha":"250152e6bfeb02a16bed4151ffc7156636db1bd9"}]},{"name":"eks-alb-ingress-controller","children":[{"name":"README.md","path":"modules/eks-alb-ingress-controller/README.md","sha":"f85f8d19d71b230c56f71d085d300c3135284a1e"},{"name":"main.tf","path":"modules/eks-alb-ingress-controller/main.tf","sha":"a9afcdabc54036bc7626ce8604523d802de21a3b"},{"name":"templates","children":[{"name":"node_affinity.yaml","path":"modules/eks-alb-ingress-controller/templates/node_affinity.yaml","sha":"c6eaf8e94fa7c893857cc009df954443239a8fe0"},{"name":"values.yaml","path":"modules/eks-alb-ingress-controller/templates/values.yaml","sha":"e2a11271abc9ec1937a082db6bef91a5e0d69a6c"}]},{"name":"variables.tf","path":"modules/eks-alb-ingress-controller/variables.tf","sha":"35941c1c6bdac42f50c810e61edee43829247d52"}]},{"name":"eks-cloudwatch-container-logs","children":[{"name":"README.md","path":"modules/eks-cloudwatch-container-logs/README.md","sha":"047fb9b3b97437261911c3fa4acec0cb419b1f1b"},{"name":"main.tf","path":"modules/eks-cloudwatch-container-logs/main.tf","sha":"f26b582dc8dad236cdf723d68fcd475285a29b8d"},{"name":"outputs.tf","path":"modules/eks-cloudwatch-container-logs/outputs.tf","sha":"7061ed458fec528c8b8b587291f0eccb4324fb72"},{"name":"templates","children":[{"name":"node_affinity.yaml","path":"modules/eks-cloudwatch-container-logs/templates/node_affinity.yaml","sha":"cf47b63d7c2b9699e0ab1e36e9a8dadad3a7f4c0"},{"name":"values.yaml","path":"modules/eks-cloudwatch-container-logs/templates/values.yaml","sha":"56bb63870ca40f0b60a3e1eb68dee108b59dae16"}]},{"name":"variables.tf","path":"modules/eks-cloudwatch-container-logs/variables.tf","sha":"e1b89a574ff63017bd992278048e690e1db6faf9"}]},{"name":"eks-cluster-control-plane","children":[{"name":"README.md","path":"modules/eks-cluster-control-plane/README.md","sha":"ad4be099d1da290902dc2290fd5c44439c1ca0ef"},{"name":"control_plane_scripts","children":[{"name":"bin","children":[{"name":"control_plane_scripts_py27_env.pex","path":"modules/eks-cluster-control-plane/control_plane_scripts/bin/control_plane_scripts_py27_env.pex","sha":"3b75ea0e3f39c5a2be32f1d17c370826fe062fcf"},{"name":"control_plane_scripts_py3_env.pex","path":"modules/eks-cluster-control-plane/control_plane_scripts/bin/control_plane_scripts_py3_env.pex","sha":"f5602767c99f0addee9cdf1ea1f1bfb7a26bfbc9"}]},{"name":"build.sh","path":"modules/eks-cluster-control-plane/control_plane_scripts/build.sh","sha":"33b5e9231babdb0c2c0997b04a964c27b98a4e13"},{"name":"cleanup_cluster_resources","children":[{"name":"__init__.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/cleanup_cluster_resources/__init__.py","sha":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"},{"name":"global_vars.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/cleanup_cluster_resources/global_vars.py","sha":"47920d25645a8c168f196beb76eb37da60055dd3"},{"name":"main.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/cleanup_cluster_resources/main.py","sha":"21dfb38d1bf8f4d15a03da5e09ae3ba575eb4501"},{"name":"vpc.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/cleanup_cluster_resources/vpc.py","sha":"76d1c2084906d1ce04c2e2e527859f47eddc6530"}]},{"name":"control_plane_scripts_utils","children":[{"name":"__init__.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/control_plane_scripts_utils/__init__.py","sha":"37d050d1afd8ebb0c9d6916cff61fa674e6ac8a3"},{"name":"project_logging.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/control_plane_scripts_utils/project_logging.py","sha":"c29bfb0dfe0a3d4e04aeaabff0b2e58387ccf12b"}]},{"name":"dev_requirements.txt","path":"modules/eks-cluster-control-plane/control_plane_scripts/dev_requirements.txt","sha":"430b91474dc8220624012e70d8c2e43582f17161"},{"name":"requirements.txt","path":"modules/eks-cluster-control-plane/control_plane_scripts/requirements.txt","sha":"0ae8cdb74f4c793658c5dfdd13ce1ec723f7b2a1"},{"name":"upgrade_cluster","children":[{"name":"__init__.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/__init__.py","sha":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"},{"name":"eks.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/eks.py","sha":"d0aca412ffa983300df0d8926bee8829e148f85e"},{"name":"exceptions.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/exceptions.py","sha":"c35893a0f70e2c0d86dd64b7bce8d092e84355b3"},{"name":"global_vars.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/global_vars.py","sha":"e223eefafed2576c8988a708395d92f6908b3f49"},{"name":"k8s.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/k8s.py","sha":"83b3a0d7419d4a21872d9416f7b76d589650895d"},{"name":"k8s_version_map.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/k8s_version_map.py","sha":"ed3b86c032b7829ba2983c1363efe936d85e4328"},{"name":"main.py","path":"modules/eks-cluster-control-plane/control_plane_scripts/upgrade_cluster/main.py","sha":"af8d29a692f2530b74b9581464aca7bd06c255cd"}]}]},{"name":"dependencies.tf","path":"modules/eks-cluster-control-plane/dependencies.tf","sha":"ff5c5efe0c1f84b9b17b995462f08d609ec454e6"},{"name":"main.tf","path":"modules/eks-cluster-control-plane/main.tf","sha":"3a7f2c315a2591e0f384063096950cd2c10cbe48"},{"name":"outputs.tf","path":"modules/eks-cluster-control-plane/outputs.tf","sha":"a68f4000d7524e2f2db24d3c12d2a3bac273a42a"},{"name":"templates","children":[{"name":"kubectl_config.tpl","path":"modules/eks-cluster-control-plane/templates/kubectl_config.tpl","sha":"083a5e914505363541190db3ee412d8d9e15b4ec"}]},{"name":"variables.tf","path":"modules/eks-cluster-control-plane/variables.tf","sha":"1c1d3d4b8827311c96c885464ecf3acda6959d3c"}]},{"name":"eks-cluster-managed-workers","children":[{"name":"README.md","path":"modules/eks-cluster-managed-workers/README.md","sha":"7c02b6cb8463d50ab1f7f0d64ede5617be7b8b71"},{"name":"main.tf","path":"modules/eks-cluster-managed-workers/main.tf","sha":"13454d6ece32b306cc703c23fa7dad39d99107b3"},{"name":"outputs.tf","path":"modules/eks-cluster-managed-workers/outputs.tf","sha":"ff528cd4101033d79defb8e8a6a9616a8b427849"},{"name":"variables.tf","path":"modules/eks-cluster-managed-workers/variables.tf","sha":"d8f332eaa8b195a7a7923f79d8ec05ccb2bc6539"}]},{"name":"eks-cluster-workers-cross-access","children":[{"name":"README.md","path":"modules/eks-cluster-workers-cross-access/README.md","sha":"6c4e50bda62acc6c06d836488ef54f7119f27aee"},{"name":"main.tf","path":"modules/eks-cluster-workers-cross-access/main.tf","sha":"30885a053867992d0c3ee3804ba6833ae463c116"},{"name":"outputs.tf","path":"modules/eks-cluster-workers-cross-access/outputs.tf","sha":"c6c7f7a89007c55be5470ffd639c05c3fb052ad7"},{"name":"variables.tf","path":"modules/eks-cluster-workers-cross-access/variables.tf","sha":"d64aab893b6e909416189e985f072dd8809dfa2f"}]},{"name":"eks-cluster-workers","children":[{"name":"README.md","path":"modules/eks-cluster-workers/README.md","sha":"9ea880ffa5b67ca8e135157476135054d8f152ea"},{"name":"main.tf","path":"modules/eks-cluster-workers/main.tf","sha":"8c4bc978bf1cd62b7c6255218a6d5bdcb38955a9"},{"name":"outputs.tf","path":"modules/eks-cluster-workers/outputs.tf","sha":"a9c37412a97c287000f2000c9c092b87e2487c11"},{"name":"variables.tf","path":"modules/eks-cluster-workers/variables.tf","sha":"d4b78bd1444cc595bce91006e7f02d6921a7ed96"}]},{"name":"eks-iam-role-assume-role-policy-for-service-account","children":[{"name":"README.md","path":"modules/eks-iam-role-assume-role-policy-for-service-account/README.md","sha":"efbbbd70fea3661c662750768facb7950239ffa3"},{"name":"main.tf","path":"modules/eks-iam-role-assume-role-policy-for-service-account/main.tf","sha":"be2fefe5e1a29a2582d1dcdc0b700b74f198cfc9"},{"name":"outputs.tf","path":"modules/eks-iam-role-assume-role-policy-for-service-account/outputs.tf","sha":"c2910cec89910bb06a157311ac8c4bf72835dfe5"},{"name":"variables.tf","path":"modules/eks-iam-role-assume-role-policy-for-service-account/variables.tf","sha":"dc660ddf84158851145289f6036a0fc19fbf7ce4"}]},{"name":"eks-k8s-cluster-autoscaler-iam-policy","children":[{"name":"README.md","path":"modules/eks-k8s-cluster-autoscaler-iam-policy/README.md","sha":"cfd86f6261a849f9204b0b7c80e96f9b03efd79d"},{"name":"main.tf","path":"modules/eks-k8s-cluster-autoscaler-iam-policy/main.tf","sha":"c743f0e3523119155e2f2a6434e6f634d659aaee"},{"name":"outputs.tf","path":"modules/eks-k8s-cluster-autoscaler-iam-policy/outputs.tf","sha":"a053ab9f76af3a83301a0a67eeedac9683ee5bc4"},{"name":"variables.tf","path":"modules/eks-k8s-cluster-autoscaler-iam-policy/variables.tf","sha":"be3db9023160b3754187f2f21ce77772b43ced53"}]},{"name":"eks-k8s-cluster-autoscaler","children":[{"name":"README.md","path":"modules/eks-k8s-cluster-autoscaler/README.md","sha":"6f2a76b27d33ffbd760ae7c8a40ab9e56853479d"},{"name":"main.tf","path":"modules/eks-k8s-cluster-autoscaler/main.tf","sha":"f877c9a88c0c82656675f40556dcb8c2774e265f"},{"name":"templates","children":[{"name":"node_affinity.yaml","path":"modules/eks-k8s-cluster-autoscaler/templates/node_affinity.yaml","sha":"c6eaf8e94fa7c893857cc009df954443239a8fe0"},{"name":"values.yaml","path":"modules/eks-k8s-cluster-autoscaler/templates/values.yaml","sha":"51e4cf44a9d8f054c1eced5d7b422255c5c9a481"}]},{"name":"variables.tf","path":"modules/eks-k8s-cluster-autoscaler/variables.tf","sha":"5b21aece34f5fd6f68ce9a88535de6b0b790b07d"}]},{"name":"eks-k8s-external-dns-iam-policy","children":[{"name":"README.md","path":"modules/eks-k8s-external-dns-iam-policy/README.md","sha":"aa9431f2e6f81e507d73482adb339d543b9d1051"},{"name":"main.tf","path":"modules/eks-k8s-external-dns-iam-policy/main.tf","sha":"b346bd0324c30907dd62ac89f93fe9cc7799fd4d"},{"name":"outputs.tf","path":"modules/eks-k8s-external-dns-iam-policy/outputs.tf","sha":"21604a63b741b94ea9ebffd20b18772131020fcf"},{"name":"variables.tf","path":"modules/eks-k8s-external-dns-iam-policy/variables.tf","sha":"250152e6bfeb02a16bed4151ffc7156636db1bd9"}]},{"name":"eks-k8s-external-dns","children":[{"name":"README.md","path":"modules/eks-k8s-external-dns/README.md","sha":"851e8d68beb5998b33d20f1e8cb56ee2f93c6bc2"},{"name":"main.tf","path":"modules/eks-k8s-external-dns/main.tf","sha":"39070bbbd47829cf3c82af84dd3c3092cee76c6c"},{"name":"templates","children":[{"name":"node_affinity.yaml","path":"modules/eks-k8s-external-dns/templates/node_affinity.yaml","sha":"c6eaf8e94fa7c893857cc009df954443239a8fe0"},{"name":"values.yaml","path":"modules/eks-k8s-external-dns/templates/values.yaml","sha":"233c10fd4723c4e515fed2870c778c4d8bf2e29f"}]},{"name":"variables.tf","path":"modules/eks-k8s-external-dns/variables.tf","sha":"8f6ef907c965091277e215b5d003d3a365f952ed"}]},{"name":"eks-k8s-role-mapping","children":[{"name":"README.md","path":"modules/eks-k8s-role-mapping/README.md","sha":"2359880e60bf9051ff9178cc13bbb9507a1aa456","toggled":true},{"name":"aws_auth_configmap_generator","children":[{"name":"aws_auth_configmap_generator","children":[{"name":"__init__.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/__init__.py","sha":"e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"},{"name":"generator.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/generator.py","sha":"4057d70cebc26cb56e95d861618eda4629e41b19"},{"name":"global_vars.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/global_vars.py","sha":"31c2b91932d79d37e284bdf708e506faf0a59649"},{"name":"main.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/main.py","sha":"e69d8517efe23c680e9e67dc48dbd0478723b88f"},{"name":"utils.py","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/aws_auth_configmap_generator/utils.py","sha":"0874f15d63301e4f32cb0517817a515fb18f113e"}]},{"name":"bin","children":[{"name":"aws_auth_configmap_generator_py27_env.pex","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/bin/aws_auth_configmap_generator_py27_env.pex","sha":"d00c0aff5ef5ea8b7ad9a0ce9318e7e5e7a6da9f"},{"name":"aws_auth_configmap_generator_py3_env.pex","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/bin/aws_auth_configmap_generator_py3_env.pex","sha":"c4500959687a373596395a4c275bab61029ea2a9"}]},{"name":"build_scripts","children":[{"name":"build.sh","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/build_scripts/build.sh","sha":"34f496ada6fdc2d33028c6b8df7d3ba172a3dbdd"}]},{"name":"dev_requirements.txt","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/dev_requirements.txt","sha":"40f29298c05348c2f1227a53da3f88c89632feb3"},{"name":"requirements.txt","path":"modules/eks-k8s-role-mapping/aws_auth_configmap_generator/requirements.txt","sha":"97397a79f826def4e1023a6bc9b4cb346bdcafbe"}]},{"name":"main.tf","path":"modules/eks-k8s-role-mapping/main.tf","sha":"27557e43793f1ad7d021b8da3413c006075a0660"},{"name":"outputs.tf","path":"modules/eks-k8s-role-mapping/outputs.tf","sha":"95d4d4ec652bb541b91a2844e00f68064b423e60"},{"name":"variables.tf","path":"modules/eks-k8s-role-mapping/variables.tf","sha":"19ce18b4f61497d7366db872a40ce973f9db8549"}],"toggled":true},{"name":"eks-scripts","children":[{"name":"README.md","path":"modules/eks-scripts/README.md","sha":"96baaf535647b9f4c364d6a19057bcccb42df2be"},{"name":"bin","children":[{"name":"map-ec2-tags-to-node-labels","path":"modules/eks-scripts/bin/map-ec2-tags-to-node-labels","sha":"8087c82d4d47f25439f118c2a51e59d22689ada7"},{"name":"map_ec2_tags_to_node_labels.py","path":"modules/eks-scripts/bin/map_ec2_tags_to_node_labels.py","sha":"f75ad19587e95b2bd8924125ea2a1a697154909f"}]},{"name":"dev_requirements.txt","path":"modules/eks-scripts/dev_requirements.txt","sha":"f56f9d1629a85734fe16ed70f00f36b830cd97c9"},{"name":"install.sh","path":"modules/eks-scripts/install.sh","sha":"7f192fca97b098482a8a398019d4d53f45dba478"}]},{"name":"eks-vpc-tags","children":[{"name":"README.md","path":"modules/eks-vpc-tags/README.md","sha":"b53e923baaa79718b55a272158ff9b710871a6ce"},{"name":"outputs.tf","path":"modules/eks-vpc-tags/outputs.tf","sha":"0ef2787cfd02ea8668c687302b1929618079a0b2"},{"name":"variables.tf","path":"modules/eks-vpc-tags/variables.tf","sha":"a6e332e9da4e473e1e42b1ca6c7b0ba139a77cfb"},{"name":"versions.tf","path":"modules/eks-vpc-tags/versions.tf","sha":"e5d003c3e7a7296ca0f610fc77f94f2139fc59d2"}]}],"toggled":true},{"name":"rfc","children":[{"name":"locking-down-kiam.adoc","path":"rfc/locking-down-kiam.adoc","sha":"3e92efcc57dda26c406ed66c5f95fe76049b3d2c"},{"name":"shipping-logs-to-cloudwatch.md","path":"rfc/shipping-logs-to-cloudwatch.md","sha":"6199b55bfe1faea80833bbf0c411adc90b88b84b"}]},{"name":"setup.cfg","path":"setup.cfg","sha":"981bc2bfd0b35029438d56c6d862a7f1519b8fe6"},{"name":"test","children":[{"name":"Gopkg.lock","path":"test/Gopkg.lock","sha":"7dd58506d83164b594e3d650cae5c540987858e9"},{"name":"Gopkg.toml","path":"test/Gopkg.toml","sha":"a0159c5ca6bab4a7e77117edb9ab4b752517d4eb"},{"name":"README.md","path":"test/README.md","sha":"9bf8180d731bdc892279fcdbcbb03d245f31f83a"},{"name":"eks_cluster_integration_test.go","path":"test/eks_cluster_integration_test.go","sha":"e898491b14abb78d8c7c0bf6191547d3c7fa3fa1"},{"name":"eks_cluster_managed_workers_test.go","path":"test/eks_cluster_managed_workers_test.go","sha":"5c52034ff6ddf39d59169f1bc248d91867f0cdb7"},{"name":"eks_cluster_test_helpers.go","path":"test/eks_cluster_test_helpers.go","sha":"0ac527d18778dd162198297adb57e93927e5eb57"},{"name":"eks_cluster_upgrade_test.go","path":"test/eks_cluster_upgrade_test.go","sha":"73bb2f8bfe1a3cb2547e026840dc9bc6a88a7cc8"},{"name":"eks_cluster_with_iam_role_test.go","path":"test/eks_cluster_with_iam_role_test.go","sha":"ca0b2f65ebffee9c417c59c49884b4034c6ca895"},{"name":"eks_cluster_with_supporting_services_test.go","path":"test/eks_cluster_with_supporting_services_test.go","sha":"e90389ff9fd393a53e813000f3b22552913d0304"},{"name":"eks_fargate_cluster_disable_public_endpoint_test.go","path":"test/eks_fargate_cluster_disable_public_endpoint_test.go","sha":"25ba0984ef5979ca146d16b63654559939d822db"},{"name":"eks_fargate_cluster_irsa_test.go","path":"test/eks_fargate_cluster_irsa_test.go","sha":"ee867e5ad391a426146af448986959542b829490"},{"name":"eks_fargate_cluster_public_access_cidr_test.go","path":"test/eks_fargate_cluster_public_access_cidr_test.go","sha":"da8fa4c2a05ee1ba11ed1ab5310b4b209ad015f4"},{"name":"eks_fargate_cluster_test.go","path":"test/eks_fargate_cluster_test.go","sha":"49809cf53d4defb19e4672520d42c55d4d32d3f4"},{"name":"eks_fargate_cluster_with_supporting_services_test.go","path":"test/eks_fargate_cluster_with_supporting_services_test.go","sha":"196cb7393ea7159f75e189c3e2d235f0665043ad"},{"name":"errors.go","path":"test/errors.go","sha":"be062fe0205ff82db8183d0fde639aa1883013ad"},{"name":"kubefixtures","children":[{"name":"autoscaler-test-pods-deployment.yml","path":"test/kubefixtures/autoscaler-test-pods-deployment.yml","sha":"b2d94c4bfa729b639290ee21629c19ca6ea694ee"},{"name":"eks-irsa-test.yml","path":"test/kubefixtures/eks-irsa-test.yml","sha":"db5439cf6d38873dbae71daa4197d6947990a94a"},{"name":"eks-k8s-role-mapping-test-role.yml","path":"test/kubefixtures/eks-k8s-role-mapping-test-role.yml","sha":"ede7587308d2a4ecf55042b05800099c43f3af7d"},{"name":"kube-system-sa-admin-binding.yml","path":"test/kubefixtures/kube-system-sa-admin-binding.yml","sha":"282d406512102cbe54e952575f26e7e0fbb2aa9a"},{"name":"nginx-deployment.yml","path":"test/kubefixtures/nginx-deployment.yml","sha":"a58866e59c113635af24982cfb0b530f0c416af0"},{"name":"robust-nginx-deployment.yml","path":"test/kubefixtures/robust-nginx-deployment.yml","sha":"a71c2bb24c75b2ebcf54563df799281938a49ca5"}]},{"name":"script_tests","children":[{"name":"executor.sh","path":"test/script_tests/executor.sh","sha":"f2a571ab875195d450a942d684ce41f86f824e70"},{"name":"requirements.txt","path":"test/script_tests/requirements.txt","sha":"e78b3b8c7b4bdecf8d1f235c1f55dcf227ee19c6"},{"name":"test_aws_auth_configmap_generator.py","path":"test/script_tests/test_aws_auth_configmap_generator.py","sha":"8da981d07d31745a1db59e9693995e60cea14abc"},{"name":"test_map_ec2_tags_to_node_labels.py","path":"test/script_tests/test_map_ec2_tags_to_node_labels.py","sha":"1bb3a5eae3727c0e6caf29c2cf4b7d596bb9a161"},{"name":"tox.ini","path":"test/script_tests/tox.ini","sha":"088400028aa4cf08b188b449875cf243222f2250"}]},{"name":"terratest_options.go","path":"test/terratest_options.go","sha":"b396ba967a5d84e38dc5e94d89fba41f93f7e17a"},{"name":"test_debug_helpers.go","path":"test/test_debug_helpers.go","sha":"c71a7a9d5b68f0f59d2518496d9f5893206b5e22"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"c0aa8112f2958c98fce5e1bf6193e04824b19aa7"}]}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"eks-k-8-s-role-mapping-module\">EKS K8S Role Mapping Module</h1><div class=\"preview__body--border\"></div><p>This Module can be used to manage the mapping of AWS IAM roles and users to Kubernetes RBAC groups for finer grained\naccess control of your EKS Cluster.</p>\n<p>This Module only manages the mapping between IAM roles and Kubernetes RBAC groups. This Module does not create, modify,\nor configure either roles. We recommend managing them in a separate Terraform template in the context of your needs,\nthat are then provided as inputs to this module.</p>\n<h2 class=\"preview__body--subtitle\" id=\"how-do-you-use-this-module\">How do you use this module?</h2>\n<ul>\n<li>See the <a href=\"/repos/v0.19.6/terraform-aws-eks/README.adoc\" class=\"preview__body--description--blue\">root README</a> for instructions on using Terraform modules.</li>\n<li>This module uses <a href=\"https://www.terraform.io/docs/providers/kubernetes/index.html\" class=\"preview__body--description--blue\" target=\"_blank\">the <code>kubernetes</code> provider</a>.</li>\n<li>See the <a href=\"/repos/v0.19.6/terraform-aws-eks/examples\" class=\"preview__body--description--blue\">examples</a> folder for example usage.</li>\n<li>See <a href=\"/repos/v0.19.6/terraform-aws-eks/modules/eks-k8s-role-mapping/variables.tf\" class=\"preview__body--description--blue\">variables.tf</a> for all the variables you can set on this module.</li>\n<li>See <a href=\"/repos/v0.19.6/terraform-aws-eks/modules/eks-k8s-role-mapping/outputs.tf\" class=\"preview__body--description--blue\">outputs.tf</a> for all the variables that are outputed by this module.</li>\n<li>This module depends on a packaged python binary, which requires a working python install. See the <a href=\"#aws-auth-configmap-generator-binary\" class=\"preview__body--description--blue\">aws-auth ConfigMap\nGenerartor Binary</a> section of the docs for more information</li>\n</ul>\n<h2 class=\"preview__body--subtitle\" id=\"what-is-kubernetes-role-based-access-control-rbac\">What is Kubernetes Role Based Access Control (RBAC)?</h2>\n<p><a href=\"https://kubernetes.io/docs/reference/access-authn-authz/rbac/\" class=\"preview__body--description--blue\" target=\"_blank\">Role Based Access Control (RBAC)</a> is a method to regulate\naccess to resources based on the role that individual users assume in an organization. Kubernetes allows you to define\nroles in the system that individual users inherit, and explicitly grant permissions to resources within the system to\nthose roles. The Control Plane will then honor those permissions when accessing the resources on Kubernetes through\nclients such as <code>kubectl</code>. When combined with namespaces, you can implement sophisticated control schemes that limit the\naccess of resources across the roles in your organization.</p>\n<p>The RBAC system is managed using <code>ClusterRole</code> and <code>ClusterRoleBinding</code> resources (or <code>Role</code> and <code>RoleBinding</code> resources\nif restricting to a single namespace). The <code>ClusterRole</code> (or <code>Role</code>) object defines a role in the Kubernetes system that\nhas explicit permissions on what it can and cannot do. These roles are then bound to users and groups using the\n<code>ClusterRoleBinding</code> (or <code>RoleBinding</code>) resource. An important thing to note here is that you do not explicitly create\nusers and groups using RBAC, and instead rely on the authentication system to implicitly create these entities.</p>\n<p>You can refer to <a href=\"#examples\" class=\"preview__body--description--blue\">the example scenarios</a> below for an example of this in action.</p>\n<p>Refer to <a href=\"https://kubernetes.io/docs/reference/access-authn-authz/rbac/\" class=\"preview__body--description--blue\" target=\"_blank\">the official documentation</a> for more\ninformation.</p>\n<h2 class=\"preview__body--subtitle\" id=\"what-is-an-aws-iam-role\">What is an AWS IAM role?</h2>\n<p><a href=\"https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html\" class=\"preview__body--description--blue\" target=\"_blank\">AWS IAM role</a> is AWS's implementation of RBAC. Users\nand clients authenticate to AWS to assume an IAM role, that then have a set of permissions that grant or deny access to\nvarious resources within AWS. Unlike users, IAM roles do not have long standing credentials associated with them.\nInstead, a user uses the AWS API to assume a role, which will issue temporary credentials that can be used to access the\nAWS resources as the assumed role. Like the roles in Kubernetes RBAC implementation, you can configure the roles to have\nas much or as little permissions as necessary when accessing resources in the AWS system.</p>\n<p>This Module provides code for you to manage the mapping between AWS IAM roles and Kubernetes RBAC roles so that you can\nmaintain a consistent set of mappings between the two systems. This works hand in hand with the <a href=\"/repos/v0.19.6/terraform-aws-eks/core-concepts.md#how-do-i-authenticate-kubectl-to-the-eks-cluster\" class=\"preview__body--description--blue\">EKS authentication\nsystem</a>, providing the information to Kubernetes to resolve the\nuser to the right RBAC group based on the provided IAM role credentials.</p>\n<h2 class=\"preview__body--subtitle\" id=\"examples\">Examples</h2>\n<h3 class=\"preview__body--subtitle\" id=\"restricting-specific-actions\">Restricting specific actions</h3>\n<p>Suppose that you are setting up your EKS cluster for your organization that has an ops team and a dev team. Suppose\nfurther that your organization would like to restrict access to your dev team so that they can only list and update\nexisting Pods, but can not create new ones, while the ops team is able to manage all resources in your Kubernetes\ncluster.</p>\n<p>To support this, we need to first define the roles in Kubernetes that map to the explicit permissions granted to each\nteam. For the ops team in Kubernetes, since we want to grant them admin level privileges on the cluster, we can use the\ndefault <code>system:admin</code> group that will already obtain those permissions. For the <code>dev</code> group however, there is no\ndefault group and role that fits our needs, so we need to define a new <code>ClusterRole</code> and bind it to the <code>dev</code> group. To do\nthis, we will first define the <code>ClusterRole</code> resource using the RBAC API:</p>\n<pre><span class=\"hljs-comment\"># dev-role.yml</span>\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n name: dev\nrules:\n- apiGroups: [<span class=\"hljs-string\">\"\"</span>]\n resources: [<span class=\"hljs-string\">\"pods\"</span>]\n verbs: [<span class=\"hljs-string\">\"get\"</span>, <span class=\"hljs-string\">\"list\"</span>, <span class=\"hljs-string\">\"update\"</span>]\n</pre>\n<p>This creates a new role <code>dev</code> that allows the role to get, list, and update Pods in any namespace in the cluster. We can\napply this on the cluster using <code>kubectl</code>:</p>\n<pre>kubectl <span class=\"hljs-built_in\">apply</span> -f dev-role.yml\n</pre>\n<p>We then need to bind this to the <code>dev</code> group using a <code>ClusterRoleMapping</code> resource:</p>\n<pre><span class=\"hljs-comment\"># dev-role-binding.yml</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io/v1</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">ClusterRoleBinding</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">bind-dev</span>\n<span class=\"hljs-attr\">roleRef:</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">ClusterRole</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">dev</span>\n<span class=\"hljs-attr\">subjects:</span>\n<span class=\"hljs-bullet\">-</span> <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Group</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">dev</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n</pre>\n<p>This config binds the <code>ClusterRole</code> named <code>dev</code> to the <code>Group</code> named <code>dev</code>. Like the <code>ClusterRole</code> config, we can apply\nthis on the cluster using <code>kubectl</code>:</p>\n<pre><span class=\"hljs-symbol\">kubectl</span> apply -f dev-role-<span class=\"hljs-keyword\">binding.yml\n</span></pre>\n<p>Now that we have the two roles and bindings in the system, we need some way for users in the ops and dev teams to inherit\nthe roles. This is done implicitly by mapping their authentication credentials to their respective groups. In EKS,\nauthentication is handled by IAM, which means that we need to tell Kubernetes to map their IAM credentials to their\nrespective groups. We will use this Module to do exactly that.</p>\n<p>This Module takes as input a mapping between IAM roles and RBAC groups as part of the <code>iam_role_to_rbac_group_mapping</code>\ninput variable. In this example, we will assume that members of the ops team access the cluster by assuming the <code>ops</code>\nIAM role and members of the dev team access the cluster by assuming the <code>dev</code> IAM role, so we will map these to their\nrespective groups in Kubernetes:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"eks_k8s_role_mapping\"</span> {\n eks_worker_iam_role_arn = <span class=\"hljs-string\">\"arn.aws.iam::5555555555:role/eks-worker\"</span>\n\n iam_role_to_rbac_group_mappings = <span class=\"hljs-string\">\"${\n map(\n \"</span><span class=\"hljs-symbol\">arn:</span><span class=\"hljs-symbol\">aws:</span>iam::<span class=\"hljs-number\">555555555555</span><span class=\"hljs-symbol\">:role/dev<span class=\"hljs-string\">\", list(\"</span>dev<span class=\"hljs-string\">\"),\n \"</span>arn</span><span class=\"hljs-symbol\">:aws</span><span class=\"hljs-symbol\">:iam</span>::<span class=\"hljs-number\">555555555555</span><span class=\"hljs-symbol\">:role/ops<span class=\"hljs-string\">\", list(\"</span>system</span><span class=\"hljs-symbol\">:admin<span class=\"hljs-string\">\"),\n )\n }\"</span></span>\n}\n</pre>\n<p>When you <code>terraform apply</code> the above code, the Module will configure Kubernetes to resolve the provided AWS IAM roles to\nthe specified RBAC groups when fulfilling client requests. In this case, any <code>kubectl</code> authentications using the <code>dev</code>\nIAM role will resolve to the <code>dev</code> Kubernetes RBAC group, while any authentications using the <code>ops</code> IAM role will resolve\nto the <code>system:admin</code> Kubernetes RBAC group. The <code>dev</code> team will then implicitly inherit the <code>dev</code> <code>ClusterRole</code> based\non the <code>ClusterRoleBinding</code> that binds that role to the <code>dev</code> group.</p>\n<p><strong>Important</strong>: Note that we did not need to define the <code>dev</code> group explicitly in Kubernetes. This is automatically\nhandled by the authentication system. In Kubernetes, the group is implicitly defined as part of defining a user entity\nthat can map to it. As such, it is important to take care to avoid typos here to ensure that the string you use for the\ngroup here matches any groups referenced in the role bindings.</p>\n<h3 class=\"preview__body--subtitle\" id=\"restricting-by-namespace\">Restricting by namespace</h3>\n<p>In this example, suppose that you are setting up a dev EKS cluster for your dev team that is organized into multiple\nsubteams working on different products. In this scenario, you want to give members of the dev team full access to deploy\nand manage their applications, including deleting resources. However, you may want to implement controls so that teams\ncan only manage their own resources, and not others' resources.</p>\n<p>To support this, you would use <a href=\"https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\" class=\"preview__body--description--blue\" target=\"_blank\">Kubernetes\nnamespaces</a> to partition your Kubernetes\ncluster. Namespaces allow you to divide your resources into logical groups on the cluster. By utilizing namespaces, you\ncan grant teams full access to resources launched in their own namespace, but restrict access to resources in other\nnamespaces.</p>\n<p>To implement this on your EKS cluster, you would first need to create namespaces for each team. For this example, we\nwill assume there are two dev teams in the organization: <code>api</code> and <code>backend</code>. So we will create a namespace for each\nteam:</p>\n<pre><span class=\"hljs-comment\"># namespaces.yml</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Namespace</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">v1</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">apiteam</span>\n <span class=\"hljs-attr\">labels:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">apiteam</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Namespace</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">v1</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">backendteam</span>\n <span class=\"hljs-attr\">labels:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">backendteam</span>\n</pre>\n<p>This will create two namespaces: one named <code>apiteam</code> and one named <code>backendteam</code>. We can apply this on the cluster using\n<code>kubectl</code>:</p>\n<pre>kubectl <span class=\"hljs-built_in\">apply</span> -f namespaces.yml\n</pre>\n<p>Next, we need to create RBAC roles in Kubernetes that grant access to each of the namespaces, but not others. To do this\nwe will rely on the <code>Role</code> resource, instead of the <code>ClusterRole</code> resource because we want to scope the permissions to a\nparticular namespace:</p>\n<pre><span class=\"hljs-comment\"># roles.yml</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Role</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io/v1beta1</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">apiteam-full-access</span>\n <span class=\"hljs-attr\">namespace:</span> <span class=\"hljs-string\">apiteam</span>\n<span class=\"hljs-attr\">rules:</span>\n<span class=\"hljs-bullet\">-</span> <span class=\"hljs-attr\">apiGroups:</span> <span class=\"hljs-string\">[\"\",</span> <span class=\"hljs-string\">\"extensions\"</span><span class=\"hljs-string\">,</span> <span class=\"hljs-string\">\"apps\"</span><span class=\"hljs-string\">]</span>\n <span class=\"hljs-attr\">resources:</span> <span class=\"hljs-string\">[\"*\"]</span>\n <span class=\"hljs-attr\">verbs:</span> <span class=\"hljs-string\">[\"*\"]</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Role</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io/v1beta1</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">backendteam-full-access</span>\n <span class=\"hljs-attr\">namespace:</span> <span class=\"hljs-string\">backendteam</span>\n<span class=\"hljs-attr\">rules:</span>\n<span class=\"hljs-bullet\">-</span> <span class=\"hljs-attr\">apiGroups:</span> <span class=\"hljs-string\">[\"\",</span> <span class=\"hljs-string\">\"extensions\"</span><span class=\"hljs-string\">,</span> <span class=\"hljs-string\">\"apps\"</span><span class=\"hljs-string\">]</span>\n <span class=\"hljs-attr\">resources:</span> <span class=\"hljs-string\">[\"*\"]</span>\n <span class=\"hljs-attr\">verbs:</span> <span class=\"hljs-string\">[\"*\"]</span>\n</pre>\n<p>This will create two roles in the Kubernetes cluster: <code>apiteam-full-access</code> and <code>backendteam-full-access</code>, each giving\nfull access to all resources in the respective namespaces. Like the YAML file for the namespaces, you can apply this on\nthe cluster using <code>kubectl</code>:</p>\n<pre>kubectl <span class=\"hljs-built_in\">apply</span> -f roles.yml\n</pre>\n<p>To allow authenticating entities to be able to inherit these roles, we need to map these to a group. We can do that by\ndefining <code>RoleBinding</code> resources:</p>\n<pre><span class=\"hljs-comment\"># role-bindings.yml</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io/v1</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">RoleBinding</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">bind-apiteam</span>\n <span class=\"hljs-attr\">namespace:</span> <span class=\"hljs-string\">apiteam</span>\n<span class=\"hljs-attr\">roleRef:</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Role</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">apiteam-full-access</span>\n<span class=\"hljs-attr\">subjects:</span>\n<span class=\"hljs-bullet\">-</span> <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Group</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">apiteam</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n<span class=\"hljs-meta\">---</span>\n<span class=\"hljs-attr\">apiVersion:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io/v1</span>\n<span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">RoleBinding</span>\n<span class=\"hljs-attr\">metadata:</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">bind-backendteam</span>\n <span class=\"hljs-attr\">namespace:</span> <span class=\"hljs-string\">backendteam</span>\n<span class=\"hljs-attr\">roleRef:</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Role</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">backendteam-full-access</span>\n<span class=\"hljs-attr\">subjects:</span>\n<span class=\"hljs-bullet\">-</span> <span class=\"hljs-attr\">kind:</span> <span class=\"hljs-string\">Group</span>\n <span class=\"hljs-attr\">name:</span> <span class=\"hljs-string\">backendteam</span>\n <span class=\"hljs-attr\">apiGroup:</span> <span class=\"hljs-string\">rbac.authorization.k8s.io</span>\n</pre>\n<p>These two resources bind the <code>apiteam</code> to the <code>apiteam-full-access</code> role and the <code>backendteam</code> to the\n<code>backendteam-full-access</code> role so that any client that maps to those groups will inherit the right permissions. We can\napply this to the cluster using <code>kubectl</code>:</p>\n<pre><span class=\"hljs-symbol\">kubectl</span> apply -f role-<span class=\"hljs-keyword\">bindings.yml\n</span></pre>\n<p>Now that we have the namespaces, the roles, and the bindings in the system, we need to create the AWS IAM roles that map\nto each team and tell Kubernetes to map the AWS IAM role to the proper RBAC role when authenticating the client. We will\nassume that the IAM roles already exist (named <code>ApiDeveloper</code> and <code>BackendDeveloper</code>). To map the IAM roles to the RBAC\ngroups, we will use this Module. This Module takes as input a mapping between IAM roles and RBAC roles as part of the\n<code>iam_role_to_rbac_group_mapping</code> input variable:</p>\n<pre><span class=\"hljs-keyword\">module</span> <span class=\"hljs-string\">\"eks_k8s_role_mapping\"</span> {\n eks_worker_iam_role_arn = <span class=\"hljs-string\">\"arn.aws.iam::5555555555:role/eks-worker\"</span>\n\n iam_role_to_rbac_group_mappings = <span class=\"hljs-string\">\"${\n map(\n \"</span><span class=\"hljs-symbol\">arn:</span><span class=\"hljs-symbol\">aws:</span>iam::<span class=\"hljs-number\">555555555555</span><span class=\"hljs-symbol\">:role/ApiDeveloper<span class=\"hljs-string\">\", list(\"</span>apiteam<span class=\"hljs-string\">\"),\n \"</span>arn</span><span class=\"hljs-symbol\">:aws</span><span class=\"hljs-symbol\">:iam</span>::<span class=\"hljs-number\">555555555555</span><span class=\"hljs-symbol\">:role/BackendDeveloper<span class=\"hljs-string\">\", list(\"</span>backendteam<span class=\"hljs-string\">\"),\n )\n }\"</span></span>\n}\n</pre>\n<p>When you <code>terraform apply</code> the above code, the Module will configure Kubernetes to resolve the provided AWS IAM roles to\nthe specified RBAC groups when fulfilling client requests. In this case, any <code>kubectl</code> authentications using the\n<code>ApiDeveloper</code> IAM role will resolve to the <code>apiteam</code> Kubernetes RBAC group, while any authentications using the\n<code>BackendDeveloper</code> IAM role will resolve to the <code>backendteam</code> Kubernetes RBAC group. In this way, the\ndevelopers who authenticate as <code>ApiDeveloper</code> will only be able to access the <code>apiteam</code> namespace in the Kubernetes\ncluster, while the developers who authenticate as <code>BackendDeveloper</code> will only be able to access the <code>backendteam</code>\nnamespace.</p>\n<p><strong>Important</strong>: Note that we did not need to define the <code>apiteam</code> and <code>backendteam</code> group explicitly in Kubernetes. This\nis automatically handled by the authentication system. In Kubernetes, the group is implicitly defined as part of\ndefining a user entity that can map to it. As such, it is important to take care to avoid typos here to ensure that the\nstring you use for the group here matches any groups referenced in the role bindings.</p>\n<h2 class=\"preview__body--subtitle\" id=\"why-not-use-a-helm-chart\">Why not use a Helm Chart?</h2>\n<p>This Module cannot be implemented as a helm chart due to the functionality of the ConfigMap being generated here. In\nEKS, the worker nodes also use an IAM role to authenticate against the EKS Control Plane. As such, the worker nodes rely\non the mapping from the <code>aws-auth</code> ConfigMap generated by this module to be able to successfully register to the EKS\ncluster as a worker node.</p>\n<p>To use Helm, the Kubernetes cluster must be running the Tiller (Helm Server) Pods on the cluster. However, to run the\nTiller Pods, the cluster must have worker nodes online and available. As such, we have a chicken and egg situation,\nwhere to use Helm we need to have worker nodes, which need the <code>aws-auth</code> ConfigMap, which needs Helm.</p>\n<p>To avoid this cyclic dependency, we implement this module using the <code>kubernetes</code> provider which will use <code>kubectl</code> under\nthe hood. The cluster requirement for a working <code>kubectl</code> is the EKS control plane, which will be available without the\nConfigMap and as such does not have the cyclic dependency problem of Helm.</p>\n<h2 class=\"preview__body--subtitle\" id=\"aws-auth-config-map-generator-binary\">aws-auth ConfigMap Generator Binary</h2>\n<p>The <code>aws-auth</code> ConfigMap requires two string entries: <code>mapRoles</code>, which defines the mapping of IAM roles to Kubernetes\nRBAC roles, and <code>mapUsers</code>, which defines the mapping of IAM users to Kubernetes RBAC roles. Both entries in the\nConfigMap need to be defined as a YAML string for EKS to parse correctly.</p>\n<p>There are inherent challenges in the Terraform syntax that make it difficult to generate this YAML. Specifically, the\nneed to have a nested loop to generate the <code>groups</code> entry for each IAM role/user mapping. Therefore, to have better\nflexibility in generating the YAML entries, we resort to using a python binary to handle the YAML generation based on\nthe Terraform inputs to this module.</p>\n<p><strong>NOTE</strong>: When Terraform 0.12 lands, there will be richer syntax including for loops and complex map types that will\nsupport the generation of the YAML using pure Terraform. This binary will be replaced with a pure Terraform version when\n0.12 is released.</p>\n<p>The operator machine must have a valid python interpreter available in the <code>PATH</code> under the name <code>python</code>. The binary\nsupports python versions 2.7, 3.5, 3.6, 3.7, and 3.8, on Mac OSX or Linux.</p>\n<h3 class=\"preview__body--subtitle\" id=\"usage\">Usage</h3>\n<p>The binary is intended to be used as part of <a href=\"https://www.terraform.io/docs/providers/external/data_source.html\" class=\"preview__body--description--blue\" target=\"_blank\">a Terraform external data\nsource</a>. As such, the script will read JSON data from\nstdin and output JSON data to stdout.</p>\n<p>See this module's <a href=\"/repos/v0.19.6/terraform-aws-eks/modules/eks-k8s-role-mapping/main.tf\" class=\"preview__body--description--blue\">main.tf file</a> for example usage.</p>\n<h3 class=\"preview__body--subtitle\" id=\"building-the-binary\">Building the binary</h3>\n<p>The binary is a python executable that includes the necessary third party requirements. This special version of python\nembeds cross platform versions of the requirements that are unpacked at runtime into a virtualenv. This executable is\nthen used to call out to the entrypoint script, which will import the library function.</p>\n<p>As such, the binary only needs to be built when the requirements change. You do not need to rebuild the binary for any\nchanges to the source files in the <code>aws_auth_configmap_generator</code> library.</p>\n<p>This approach is taken so that consumers of the module do not need to install additional third party libraries on top of\npython to utilize the script. To make this work, the <code>pex</code> binaries need to be checked into the repository so that they\nare distributed with the module.</p>\n<p>The binary is generated using the <a href=\"https://pex.readthedocs.io/en/stable/whatispex.html\" class=\"preview__body--description--blue\" target=\"_blank\"><code>pex</code></a> utility. Pex will package\nthe python script with all its requirements into a single binary, that can be made to be compatible with multiple\nversions of python and multiple OS platforms.</p>\n<p>To build the binary, you will need the following:</p>\n<ul>\n<li>A working python environment with <strong>all compatible versions of python</strong> setup (so that you can build binaries for all\nversions)</li>\n<li>tox and pex installed (use <code>pip install -r dev_requirements.txt</code>)</li>\n</ul>\n<p>You can then build the binary using the helper script <code>build.sh</code> which will build the binary and copy it to the <code>bin</code>\ndirectory for distribution. After that, you just need to check in the updated binaries.</p>\n<p>It is recommended to use <a href=\"https://github.com/pyenv/pyenv\" class=\"preview__body--description--blue\" target=\"_blank\"><code>pyenv</code></a> to help setup an environment with multiple python\ninterpreters. The latest binaries are built with the following python environment:</p>\n<pre>pyenv shell <span class=\"hljs-number\">2.7</span><span class=\"hljs-number\">.15</span> <span class=\"hljs-number\">3.5</span><span class=\"hljs-number\">.2</span> <span class=\"hljs-number\">3.6</span><span class=\"hljs-number\">.6</span> <span class=\"hljs-number\">3.7</span><span class=\"hljs-number\">.0</span> <span class=\"hljs-number\">3.8</span><span class=\"hljs-number\">.1</span>\n</pre>\n","repoName":"terraform-aws-eks","repoRef":"v0.14.0","serviceDescriptor":{"serviceName":"EC2 Kubernetes Service (EKS) Cluster","serviceRepoName":"terraform-aws-eks","serviceRepoOrg":"gruntwork-io","cloudProviders":["aws"],"description":"Deploy a Kubernetes cluster on top of Amazon EC2 Kubernetes Service (EKS).","imageUrl":"eks.png","licenseType":"subscriber","technologies":["Terraform","Python","Bash"],"compliance":[],"tags":[""]},"serviceCategoryName":"Docker orchestration","fileName":"README.md","filePath":"/modules/eks-k8s-role-mapping/README.md","title":"Repo Browser: EC2 Kubernetes Service (EKS) Cluster","description":"Browse the repos in the Gruntwork Infrastructure as Code Library."}