Multi-account Reference Architecture

End-to-end tech stack designed to deploy into multiple AWS accounts. Includes VPCs, EKS, ALBs, CI/CD, monitoring, alerting, VPN, DNS, and more.


Accounts and Auth

In the last section, you learned about connecting to your servers using SSH and VPN. In this section, you'll learn about connecting to your AWS accounts.

Auth basics

For an overview of AWS authentication, including how to authenticate on the command-line, we strongly recommend reading A Comprehensive Guide to Authenticating to AWS on the Command Line.

Account setup

Each of your environments (e.g., stage, prod) is in a separate AWS account. This gives you more fine-grained control over who can access what and improves isolation and security, as a mistake or breach in one account is unlikely to affect the others. The accounts are:

  • dev: 087285199408
  • master: 087285199408
  • prod: 087285199408
  • security: 087285199408
  • shared-services: 087285199408
  • stage: 087285199408

Note that all IAM users are deployed in a single account called "Security." The idea is that you log into the Security account and, if you need to do something in one of the other accounts, you "switch" to it by assuming an IAM Role in that account (if you've been granted the necessary permissions).
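
To confirm which account your current credentials are operating in (for example, after assuming a role), you can use the standard AWS CLI:

aws sts get-caller-identity

The output includes the account ID and the ARN of the IAM user or assumed role you are currently authenticated as.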

Switching accounts prerequisites

If you are logged in as an IAM user in account A and you want to switch to account B, you need the following:

  1. Account B must have an IAM role that explicitly allows your IAM user in account A (or all IAM users in account A) to assume that IAM role. We have already set this up in all accounts using the cross-account-iam-roles module.

  2. Your IAM user in account A must have the proper IAM permissions to assume roles in account B. We have created IAM groups with these permissions using the iam-groups module. Typically, these IAM groups use the naming convention _account.xxx, where xxx is the name of an account you can switch to (e.g. _account.stage, _account.prod). There is also an _account.all group that allows you to switch to all other accounts. Make sure your IAM user is in the appropriate group.
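
To check which of these groups your IAM user belongs to, you can query IAM directly (the user name alice is a placeholder):

aws iam list-groups-for-user --user-name alice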

Once you take care of the two prerequisites above, you will need two pieces of information to switch to another account:

  1. The ID of the account you wish to switch to. You should get this from whoever administers your AWS accounts.

  2. The name of the IAM role in that account you want to assume. Typically, this will be one of the roles from the cross-account-iam-roles module, such as allow-read-only-access-from-other-accounts or allow-full-access-from-other-accounts.

With these two pieces of data, you should be able to switch accounts in the AWS console or with AWS CLI tools as explained in the following two sections.

Switching accounts in the AWS console

Check out the AWS Switching to a Role (AWS Console) documentation for instructions on how to switch between accounts in the AWS console with a single click.

Switching with CLI tools (including Terraform)

The official way to assume an IAM role with AWS CLI tools is documented here: AWS Switching to a Role (AWS Command Line Interface) documentation. This process requires quite a few steps, so here are easier ways to do it:

  1. Terragrunt has the ability to assume an IAM role before running Terraform. That means you can authenticate to any account as follows:

    1. Authenticate to your Security account (the one where the IAM users are defined) using the normal process, such as setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables for that account.

    2. Call Terragrunt with the --terragrunt-iam-role argument or set the TERRAGRUNT_IAM_ROLE environment variable. For example, to assume the allow-full-access-from-other-accounts role in account 111111111111: export TERRAGRUNT_IAM_ROLE=arn:aws:iam::111111111111:role/allow-full-access-from-other-accounts.

    3. Now you can use all your normal Terragrunt commands: e.g., terragrunt plan.

  2. If you want to assume an IAM role in another account for some other AWS CLI tool, the easiest way to do it is with the aws-auth script, which can reduce the authentication process to a one-liner. This tool is also useful for authenticating in the CLI when MFA is enabled.
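
For reference, here is roughly the manual process that these shortcuts automate: a sketch using aws sts assume-role, with a placeholder account ID and role name, and jq to parse the response:

creds=$(aws sts assume-role \
    --role-arn arn:aws:iam::111111111111:role/allow-full-access-from-other-accounts \
    --role-session-name my-session \
    --output json)

export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')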

Authenticating

Some best practices around authenticating to your AWS account:

Note that most of this section comes from the Gruntwork Security Best Practices document, so make sure to read through that for more info.

Enable MFA

Always enable multi-factor authentication (MFA) for your AWS account. That is, in addition to a password, you must provide a second factor to prove your identity. The best option for AWS is to install Google Authenticator on your phone and use it to generate a one-time token as your second factor.
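
If you prefer to set this up from the command line, a sketch of the flow looks like the following (the user name, account ID, and one-time codes are placeholders):

aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name alice \
    --outfile qr.png \
    --bootstrap-method QRCodePNG

# Scan qr.png with Google Authenticator, then confirm with two consecutive one-time codes
aws iam enable-mfa-device \
    --user-name alice \
    --serial-number arn:aws:iam::111111111111:mfa/alice \
    --authentication-code1 123456 \
    --authentication-code2 654321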

Use a password manager

Never store secrets in plain text. Store your secrets using a secure password manager, such as pass, OS X Keychain, or KeePass. You can also use cloud-based password managers, such as 1Password or LastPass, but be aware that since they have everyone's passwords, they are inherently much more tempting targets for attackers. That said, any reasonable password manager is better than none at all!

Don't use the root user

AWS uses the Identity and Access Management (IAM) service to manage users and their permissions. When you first sign up for an AWS account, you are logged in as the root user. This user has permissions to do everything in the account, so if you compromise these credentials, you’re in deep trouble.

Therefore, right after signing up, you should:

  1. Enable MFA on your root account. Note: we strongly recommend making a copy of the MFA secret key. This way, if you lose your MFA device (e.g. your iPhone), you don’t lose access to your AWS account. To make the backup, when activating MFA, AWS will show you a QR code. Click the "show secret key for manual configuration" link and save that key to a secure password manager.

  2. Make sure you use a very long and secure password. Never share that password with anyone. If you need to store it (as opposed to memorizing it), only store it in a secure password manager.

  3. Use the root account to create a separate IAM user for yourself and your team members with more limited IAM permissions. You should manage permissions using IAM groups. See the iam-groups module for details, and the CLI sketch after this list.

  4. Use IAM roles when you need to give limited permissions to tools (e.g., CI servers or EC2 instances).

  5. Require all IAM users in your account to use MFA.

  6. Never use the root user again.
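
As an illustration of step 3 above, you can create an IAM user and add them to one of the IAM groups from the iam-groups module using the standard AWS CLI (the user name alice is a placeholder):

aws iam create-user --user-name alice
aws iam add-user-to-group --user-name alice --group-name _account.all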

Kubernetes RBAC Roles and Helm Authentication

Up to this point, we've focused on accounts and authentication in AWS. However, with EKS, Kubernetes adds another layer of accounts and authentication that is tied to, but not exactly the same as, AWS IAM.

In this section, you'll learn about Kubernetes RBAC roles and Helm authentication.

RBAC basics

Role Based Access Control (RBAC) is a method to regulate access to resources based on the role that individual users assume in an organization. Kubernetes allows you to define roles in the system that individual users inherit, and explicitly grant permissions to resources within the system to those roles. The Control Plane will then honor those permissions when accessing the resources on Kubernetes through clients such as kubectl. When combined with namespaces, you can implement sophisticated control schemes that limit the access of resources across the roles in your organization.

The RBAC system is managed using ClusterRole and ClusterRoleBinding resources (or Role and RoleBinding resources if restricting to a single namespace). The ClusterRole (or Role) object defines a role in the Kubernetes system that has explicit permissions on what it can and cannot do. These roles are then bound to users and groups using the ClusterRoleBinding (or RoleBinding) resource. An important thing to note here is that you do not explicitly create users and groups using RBAC, and instead rely on the authentication system to implicitly create these entities.
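
As a concrete sketch (the role, binding, and user names are hypothetical), the following creates a Role that can only read Pods in the applications Namespace and binds it to a user named alice:

kubectl create role pod-reader \
    --verb=get,list,watch \
    --resource=pods \
    --namespace=applications

kubectl create rolebinding alice-pod-reader \
    --role=pod-reader \
    --user=alice \
    --namespace=applications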

You can refer to Gruntwork's RBAC example scenarios for use cases.

Relation to IAM Roles

EKS manages authentication to Kubernetes based on AWS IAM roles and users. This is done by embedding AWS IAM credentials (the access key and secret key) into the authentication token used to authenticate to the Kubernetes API. The API server forwards the token to AWS for validation, then maps the IAM role / user to an RBAC user and group, which are used to evaluate the authorization rules for the API.

By default, all IAM roles and users (except the role / user that deployed the cluster) have no RBAC user or group associated with them. Such a role / user is treated as an anonymous user on the cluster, which by default has no permissions. To allow access to the cluster, you need to explicitly bind the IAM role / user to an RBAC entity, and then bind Roles or ClusterRoles that explicitly grant permissions to perform actions on the cluster. This mapping is handled by the eks-k8s-role-mapping module, used under the hood in the eks-cluster infrastructure module.
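
Under the hood, EKS stores these mappings in the aws-auth ConfigMap in the kube-system Namespace. If you already have access to the cluster, you can inspect the current mappings with:

kubectl -n kube-system get configmap aws-auth -o yaml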

You can read more about the relationship between IAM roles and RBAC roles in EKS in the official documentation.

Namespaces and RBAC

Namespaces are Kubernetes resources that create virtual partition boundaries in your cluster. The resources in each Namespace are isolated from other Namespaces and can only interact with them through Service endpoints, unless explicit permissions are granted. This allows you to divide the cluster among multiple users in a way that prevents them from seeing each other's resources, letting you share clusters while protecting sensitive information.

RBAC is critical to achieving isolation between Namespaces: RBAC permissions can be restricted by Namespace, which allows you to bind permissions to entities such that they can only perform certain actions on resources within a particular Namespace.

Refer to the eks-k8s-role-mapping module docs for an example on using RBAC to restrict actions to a particular Namespace.

Every EKS cluster comes with two default Namespaces:

  • kube-system: This Namespace holds admin and cluster level resources. Only cluster administrators ("superusers") should have access to this Namespace.
  • default: This is the default Namespace that is used for API calls that don't specify a particular Namespace. This should primarily be used for development and experimentation purposes.

Additionally, in the Reference Architecture, we create another Namespace: applications. This Namespace houses the deployed sample applications and their associated resources.

Most Kubernetes tools will let you set the Namespace as CLI args. For example, kubectl supports a -n parameter for specifying which Namespace you intend to run the command against. kubectl additionally supports overriding the default Namespace for your commands by binding a Namespace to your authentication context.
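
For example, to run a one-off command against the applications Namespace, or to make it the default Namespace for your current context:

kubectl -n applications get pods
kubectl config set-context --current --namespace=applications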

Accessing the cluster

As mentioned in Relation to IAM Roles, EKS proxies Kubernetes authentication through AWS IAM credentials. This means that you need to be authenticated to AWS first in order to authenticate to Kubernetes. Refer to the previous section on AWS authentication for information on how to authenticate to AWS.

There are three main ways to interact with Kubernetes in the Reference Architecture:

Terragrunt / Terraform

When deploying Kubernetes resources using Terragrunt / Terraform, all the authentication is handled inside of Terraform using a combination of EKS data sources and provider logic. What this means is that you don't have to worry about explicitly authenticating to Kubernetes when going through Terraform, as long as you are authenticating to an IAM role that has a valid mapping to an RBAC entity in the cluster.

The one exception to this is modules that depend on helm, which require additional configuration. See the section on helm for more info.

Kubectl

Most manual operations in Kubernetes are handled through the kubectl command line utility. kubectl requires an explicit authentication configuration to access the cluster.

You can use kubergrunt to configure your local kubectl client to authenticate against a deployed EKS cluster. After authenticating to AWS, run:

kubergrunt eks configure --eks-cluster-arn $EKS_CLUSTER_ARN

This will add a new entry to your kubectl config file (defaults to $HOME/.kube/config) with the logic for authenticating to EKS, registering it under the context name $EKS_CLUSTER_ARN. You can modify the name of the context using the --kubectl-context-name CLI arg.

You can verify the setup by running:

kubectl cluster-info

This will report information about the Kubernetes endpoints for the cluster only if you are authorized to access the cluster. Note that you will need to be authenticated to AWS for kubectl to successfully authenticate to the cluster.

If you have multiple clusters, you can switch the kubectl context using the config use-context command. For example, to switch the current context from the prod EKS cluster to the dev cluster and back:

kubectl config use-context arn:aws:eks:us-east-1:$DEV_ACCOUNT_ID:cluster/eks-dev
kubectl cluster-info  # Should target the dev EKS cluster
kubectl config use-context arn:aws:eks:us-east-1:$PROD_ACCOUNT_ID:cluster/eks-prod
kubectl cluster-info  # Should target the prod EKS cluster

Helm

Helm relies on TLS-based authentication and authorization to access Tiller (the Helm server). This is separate from the RBAC-based authorization native to Kubernetes. Intuitively, RBAC manages whether or not someone can look up the Pod endpoint address, while the TLS authentication and authorization scheme manages whether or not you can establish a connection to the Tiller server. All deployments of Tiller in the Reference Architecture use kubergrunt to manage the TLS certificates.

We highly recommend reading Gruntwork's guide to helm to understand the security model surrounding Helm and Tiller.

kubergrunt manages the TLS certificates using Kubernetes Secrets, guarded by RBAC roles. A cluster administrator can grant any RBAC entity access to any Tiller deployment using the kubergrunt helm grant command. For example, to grant access to a Tiller server deployed in the applications-tiller Namespace to the RBAC user allow-full-access-from-other-accounts:

kubergrunt helm grant \
    --tls-common-name allow-full-access-from-other-accounts \
    --tls-org 'Acme Multi Account' \
    --tiller-namespace applications-tiller \
    --rbac-user allow-full-access-from-other-accounts

Note on RBAC users: The RBAC user username (--rbac-user) corresponds to the IAM Role or User name of the authenticating AWS credentials.

This generates new TLS certificate key pairs that grant access to the Tiller deployed in the applications-tiller Namespace. In addition, this creates and binds RBAC roles that allow the granted RBAC entity (here, the user allow-full-access-from-other-accounts) to read the Kubernetes Secrets containing the generated TLS certificate key pairs.

Now anyone who maps to that RBAC entity can use the kubergrunt helm configure command to set up their helm client to access the deployed Tiller:

kubergrunt helm configure \
    --tiller-namespace applications-tiller \
    --resource-namespace applications \
    --rbac-user allow-full-access-from-other-accounts

This will:

  • Download the client TLS certificate key pair generated with the grant command.
  • Install the TLS certificate key pair in the helm home directory (defaults to $HOME/.helm).
  • Install an environment file that sets up environment variables to target the specific helm server (defaults to $HELM_HOME/env). This environment file needs to be loaded before issuing any commands, as it sets the necessary environment variables to signal to the helm client which helm server to use. The environment variables it sets are:
    • HELM_HOME: The helm client home directory where the TLS certs are located.
    • TILLER_NAMESPACE: The namespace where the helm server is installed.
    • HELM_TLS_VERIFY: This will be set to true to enable TLS verification.
    • HELM_TLS_ENABLE: This will be set to true to enable TLS authentication.

Once this is set up, Terraform modules that need to access helm will be able to use the downloaded credentials to authenticate to Tiller. Additionally, once you source the environment file, you will be able to use the helm client to work with Tiller directly.
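
For example, assuming the default env file location, working with Tiller directly looks like this:

source "$HOME/.helm/env"  # Loads HELM_HOME, TILLER_NAMESPACE, and the TLS settings
helm ls                   # Lists releases managed by the targeted Tiller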

If you have the helm client installed, you can verify your configuration setup using the helm version command:

helm version

If your helm client is configured correctly, the version command will output information about the deployed Tiller instance that it connected to.

Next steps

Now that you know how to authenticate, you may want to take a look through this list of Gruntwork Tools.
