This Terraform Module can be used to declaratively deploy and manage multiple Tiller (the server component of Helm)
deployments in a single Kubernetes cluster.
Unlike the defaults installed by the helm client, the deployed Tiller instances:
Use Kubernetes Secrets instead of ConfigMaps for storing release information.
Enable TLS verification and authentication.
Only listen on localhost within the container.
Note: Please be advised that there are plans by the Helm community to deprecate and remove Tiller starting Helm v3. This
repository will be updated with migration instructions to help smooth out the upgrade when Helm v3 lands.
How do you use this module?
See the root README for
instructions on using Terraform modules.
See variables.tf
for all the variables you can set on this module.
See outputs.tf
for all the variables that are output by this module.
What is Tiller?
Tiller is a component of Helm that runs inside the Kubernetes cluster. Tiller is what provides the functionality to
apply the Kubernetes resource descriptions to the Kubernetes cluster. When you install a release, the helm client
essentially packages up the values and charts as a release, which is submitted to Tiller. Tiller will then generate
Kubernetes YAML files from the packaged release, and apply the generated resources to the cluster.
You can read more about Helm, Tiller, and their security model in our Helm
guide.
This module ensures all the security features provided by Helm are employed by:
Forcing a named ServiceAccount and avoiding defaults.
Enabling TLS verification features.
What ServiceAccount should I use for Tiller?
This module requires a ServiceAccount to use for Tiller, specified by the tiller_service_account_name and
tiller_service_account_token_secret_name input variables. Tiller relies on ServiceAccounts and the associated RBAC
roles to properly restrict what Helm Charts can do. The RBAC system in Kubernetes allows the operator to define
fine-grained permissions on what an individual or system can do in the cluster. By using RBAC, you can restrict Tiller
installs to only manage resources in particular namespaces, or even restrict what resources Tiller can manage.
The specific roles to use for Tiller depend on your infrastructure needs. At a minimum, Tiller needs enough permissions
to manage its own metadata, and permissions to deploy resources in the target Namespace. We provide minimal permission
sets that you can use in the k8s-namespace-roles
module. You can
associate the rbac_tiller_metadata_access_role and rbac_tiller_resource_access_role roles created by the module with
the Tiller ServiceAccount to grant those permissions. For example, the following Terraform code will create these
roles in the kube-system Namespace and attach them to a new ServiceAccount that you can then use in this module:
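```hcl
module "namespace_roles" {
  source = "git::https://github.com/gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace-roles?ref=v0.3.0"

  namespace = "kube-system"
}

module "tiller_service_account" {
  source = "git::https://github.com/gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-service-account?ref=v0.3.0"

  name           = "tiller"
  namespace      = "kube-system"
  num_rbac_roles = 2

  rbac_roles = [
    {
      name      = "${module.namespace_roles.rbac_tiller_metadata_access_role}"
      namespace = "kube-system"
    },
    {
      name      = "${module.namespace_roles.rbac_tiller_resource_access_role}"
      namespace = "kube-system"
    },
  ]
}
```

Note that this example pins the modules at ref=v0.3.0; adjust the ref to the release you are actually using.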
This will create the default roles in the kube-system Namespace. Then, it will create a new ServiceAccount named
tiller in the kube-system Namespace, bound to the metadata access role and resource access role of the
kube-system Namespace. This allows the tiller ServiceAccount to manage its state in Kubernetes Secrets in the
kube-system Namespace, and to deploy resources there.
TLS authentication and verification
This module installs Tiller with TLS verification turned on. If you are unfamiliar with TLS/SSL, we recommend reading
this background
document describing how it works before continuing.
With this feature, Tiller will validate client-side TLS certificates provided as part of the API call to ensure the
client has access. Likewise, the client will also validate the TLS certificates provided by Tiller. In this way, both
the client and the server can trust each other as authorized entities.
To achieve this, we will need to generate a Certificate Authority (CA) that can be used to issue and validate
certificates. This CA will be shared between the server and the client to validate each other's certificates.
Then, using the generated CA, we will issue at least two sets of signed certificates:
A certificate for Tiller that identifies it.
A certificate for the Helm client that identifies it.
We recommend that you issue a certificate for each unique helm client (and therefore each user of helm). This makes it
easier to manage access for team changes (e.g. when someone leaves the team), as well as compliance requirements (e.g.
access logs that uniquely identify individuals).
Finally, both Tiller and the Helm client need to be set up to utilize the issued certificates.
To summarize, assuming a single client, in this model we have three sets of TLS key pairs in play:
Key pair for the CA to issue new certificate key pairs.
Key pair to identify Tiller.
Key pair to identify the client.
This module supports three ways to set up the CA and server-side TLS certificates for Tiller:
Directly passing it in
Generating with tls provider
Generating with kubergrunt
Directly passing in TLS certs
This method of configuring the TLS certs requires that the TLS certs have already been generated. To use this method,
set tiller_tls_gen_method to "none".
Tiller expects to mount the TLS keys from a Secret resource. To pass them in directly, you must first upload the
TLS certificate key pair with the CA public certificate into a Secret resource in the Namespace where you intend to
deploy Tiller. Then, you can pass in the name of the Secret as the tiller_tls_secret_name variable to this module
to deploy Tiller with that Secret mounted. You can configure what keys to read the certificate key pairs from using
the tiller_tls_key_file_name, tiller_tls_cert_file_name, and tiller_tls_cacert_file_name variables for the private
key, public certificate, and CA public certificate files respectively.
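As a sketch, the wiring might look like the following. The Secret name, certificate file paths, in-Secret key names, and the k8s-tiller source ref are illustrative; the key names must match whatever you set for the tiller_tls_*_file_name variables.

```hcl
resource "kubernetes_secret" "tiller_certs" {
  metadata {
    name      = "tiller-certs"
    namespace = "kube-system"
  }

  # The map keys below must line up with the tiller_tls_*_file_name variables
  # passed to the module further down.
  data {
    "tls.pem" = "${file("certs/tiller.key.pem")}"
    "tls.crt" = "${file("certs/tiller.crt.pem")}"
    "ca.crt"  = "${file("certs/ca.crt.pem")}"
  }
}

module "tiller" {
  source = "git::https://github.com/gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.6.1"

  tiller_tls_gen_method       = "none"
  tiller_tls_secret_name      = "${kubernetes_secret.tiller_certs.metadata.0.name}"
  tiller_tls_key_file_name    = "tls.pem"
  tiller_tls_cert_file_name   = "tls.crt"
  tiller_tls_cacert_file_name = "ca.crt"

  # ... plus the ServiceAccount and other required inputs for this module.
}
```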
Generating with tls provider
WARNING: The private keys generated using this method will be stored unencrypted in your Terraform state file. If you
are sensitive to storing secrets in your Terraform state file, consider using kubergrunt to generate and manage your
TLS certificate. See Generating with kubergrunt for more details.
This method of configuring the TLS certs utilizes the k8s-tiller-tls-certs
module to generate
the TLS CA, and a signed certificate key pair for Tiller using that CA. To use this method, set tiller_tls_gen_method
to "provider".
When this method is set, the module will call out to k8s-tiller-tls-certs to generate TLS certificate key pairs that
are then stored as Kubernetes Secrets. Under the hood the
k8s-tiller-tls-certs module uses the tls
provider to generate the TLS certificates, and the kubernetes
provider to manage the Secrets.
The main advantage of this approach is that everything will be managed in Terraform. This means that you have access to
the full lifecycle of Terraform, including plan to see drift and destroy to undo your changes.
This method requires specifying the TLS subject info as the tiller_tls_subject input map, which is used to generate
the identifying information of the certificate. See
https://www.terraform.io/docs/providers/tls/r/cert_request.html#common_name for a list of expected keys for this map.
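For example, a minimal sketch of this method (the module source ref and subject values are illustrative; common_name and organization are two of the keys accepted by the tls provider's subject block):

```hcl
module "tiller" {
  source = "git::https://github.com/gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.6.1"

  tiller_tls_gen_method = "provider"

  # Identifying info baked into the generated Tiller certificate.
  tiller_tls_subject = {
    common_name  = "tiller"
    organization = "my-org"
  }

  # ... plus the ServiceAccount and other required inputs for this module.
}
```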
Generating with kubergrunt
WARNING: This method requires the kubergrunt and kubectl binaries to be installed and available. See
https://github.com/gruntwork-io/kubergrunt for installation instructions for kubergrunt, and
https://kubernetes.io/docs/tasks/tools/install-kubectl/ for installation instructions for kubectl.
NOTE: You must have kubergrunt version >=0.5.8.
This method of configuring the TLS certs utilizes kubergrunt to generate
the TLS CA, and a signed certificate key pair for Tiller using that CA. To use this method, set tiller_tls_gen_method
to "kubergrunt".
When this method is set, the module will call out to kubergrunt to generate the TLS certificate key pairs and store
them as Kubernetes Secrets. kubergrunt handles both steps in a single callout, which keeps the TLS certificates from
leaking into the Terraform state file. The only thing that is stored in the state is the Kubernetes Secret references,
not the contents. However, because this uses null_resources and an external binary, not all features of Terraform are
available. For example, you cannot rely on plan to see drift if anything changes about the Kubernetes Secret
storing the TLS certs.
This method requires specifying the TLS subject info as the tiller_tls_subject input map, which is used to generate
the identifying information of the certificate. See
https://www.terraform.io/docs/providers/tls/r/cert_request.html#common_name for a list of expected keys for this map.
This method also requires configuring authentication to the Kubernetes cluster. Currently kubergrunt only supports
either using config contexts, or directly passing in tokens and server info. Note that you cannot mix the two methods
(e.g. you cannot pull the server info from the context and use a passed-in token).
Using config contexts is the default authentication method. When no authentication parameters are set, kubergrunt will
load the default context from the default config location (typically $HOME/.kube/config). You can control which
context to use with the input variable kubectl_config_context_name. You can also specify your config file location
with the input variable kubectl_config_path.
If you wish to avoid using the config, you can pass in the server and token info directly. This method is automatically
chosen if the kubectl_server_endpoint is provided. Note that kubectl_ca_b64_data and kubectl_token must also be
provided for this method.
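The two authentication styles might look like the following sketch (the module source ref, config path, context name, and endpoint values are all illustrative):

```hcl
# Option 1: authenticate via a kubectl config context (the default).
module "tiller" {
  source = "git::https://github.com/gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-tiller?ref=v0.6.1"

  tiller_tls_gen_method       = "kubergrunt"
  kubectl_config_path         = "/home/yourname/.kube/config"
  kubectl_config_context_name = "minikube"

  # ... plus the ServiceAccount and other required inputs for this module.
}

# Option 2: pass the server and token info directly. Setting
# kubectl_server_endpoint switches kubergrunt to this mode, and
# kubectl_ca_b64_data and kubectl_token must then also be set:
#
#   kubectl_server_endpoint = "https://k8s.example.com"
#   kubectl_ca_b64_data     = "${base64encode(file("ca.crt"))}"
#   kubectl_token           = "${var.kubernetes_token}"
```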
How do I grant access to other users?
In order to access Tiller, you will typically need to generate additional signed certificates using the generated TLS CA
certs. If you used the direct method, you will have to rely on your certificate provider to sign additional client
certificates. For the other two methods, you can take a look at How do you use the generated TLS certs to sign
additional certificates
for information on how to sign additional certificates using the generated TLS CA.