# Kubernetes Tiller Deployment On Minikube

The root folder of this repo shows an example of how to use the Terraform modules in this repository to deploy
Tiller (the server component of Helm) onto a Kubernetes cluster. Here we will walk through a detailed guide on how you
can set up `minikube` and use this module to deploy Tiller onto it.
**WARNING: The private keys generated in this example will be stored unencrypted in your Terraform state file. If you are
sensitive to storing secrets in your Terraform state file, consider using `kubergrunt` to generate and manage your TLS
certificates. See the k8s-tiller-kubergrunt-minikube example for how to use
`kubergrunt` for TLS management.**
## Background
We strongly recommend reading our guide on Helm
before continuing with this guide for a background on Helm, Tiller, and the security model backing it.
## Overview
In this guide we will walk through the steps necessary to get up and running with deploying Tiller using this module,
using `minikube` to deploy our target Kubernetes cluster. Here are the steps:

1. Install and set up `minikube`
1. Install the necessary tools
1. Apply the Terraform code
1. Verify the deployment
1. Grant access to additional users
1. Upgrade the deployed Tiller instance

## Setting up your Kubernetes cluster: Minikube
In this guide, we will use `minikube` as our Kubernetes cluster to deploy Tiller to.
Minikube is an official tool maintained by the Kubernetes community for provisioning and running Kubernetes locally on
your machine. With a local environment you can have fast iteration cycles while you develop and play with Kubernetes
before deploying to production.
To set up `minikube`:

1. Install `kubectl`
1. Install the `minikube` utility
1. Run `minikube start` to provision a new `minikube` instance on your local machine.
1. Verify the setup with `kubectl`: `kubectl cluster-info`
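Concretely, the last two steps are run from your terminal:

```
# Provision a local Kubernetes cluster and confirm kubectl can reach it
minikube start
kubectl cluster-info
```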
## Installing necessary tools
Additionally, this example depends on `terraform` and `helm`. Optionally, you can install `kubergrunt`, which automates a
few of the steps. Here are the installation guides for each:

1. [`terraform`](https://learn.hashicorp.com/terraform/getting-started/install.html)
1. [`helm` client](https://docs.helm.sh/using_helm/#installing-helm)
1. `kubergrunt`, minimum version: v0.3.6
Make sure the binaries are discoverable in your `PATH` variable. See [this Stack Overflow
post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) for instructions on
setting up your `PATH` on Unix, and [this
post](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) for instructions on
Windows.
## Apply the Terraform Code
Now that we have a working Kubernetes cluster and all the prerequisite tools are installed, we are ready to deploy
Tiller! To deploy Tiller, we will use the example Terraform code at the root of this repo:

1. If you haven't already, clone this repo: `git clone https://github.com/gruntwork-io/terraform-kubernetes-helm.git`
1. Make sure you are at the root of this repo: `cd terraform-kubernetes-helm`
1. Initialize Terraform: `terraform init`
1. Apply the Terraform code: `terraform apply`
The Terraform code creates a few resources before deploying Tiller:

- A Kubernetes `Namespace` (the `tiller-namespace`) to house the Tiller instance. This namespace is where all the
  Kubernetes resources that Tiller needs to function will live. In production, you will want to lock down access to this
  namespace, as being able to access these resources can compromise all the protections built into Helm.
- A Kubernetes `Namespace` (the `resource-namespace`) to house the resources deployed by Tiller. This namespace is where
  all the Helm chart resources will be deployed. This is the namespace that your devs and users will have access to.
- A Kubernetes `ServiceAccount` (`tiller-service-account`) that Tiller will use to apply the resources in Helm charts.
  Our Terraform code grants the `ServiceAccount` full access to both the `tiller-namespace` and the
  `resource-namespace`, so that it can:
    - Manage its own resources in the `tiller-namespace`, where the Tiller metadata (e.g. release tracking information) will live.
    - Manage the resources deployed by Helm charts in the `resource-namespace`.
- Generate a TLS CA certificate key pair and a set of signed certificate key pairs for the server and the client. These
  will then be uploaded as `Secrets` on the Kubernetes cluster.
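As an illustration of the certificate generation step, the underlying logic is similar to the following Terraform sketch using the `tls` provider. The resource names and parameter values here are illustrative only, not the module's actual internals:

```hcl
# Illustrative sketch: the real logic lives in the k8s-tiller-tls-certs module.
resource "tls_private_key" "ca" {
  algorithm   = "ECDSA"
  ecdsa_curve = "P256"
}

resource "tls_self_signed_cert" "ca" {
  key_algorithm         = "ECDSA"
  private_key_pem       = "${tls_private_key.ca.private_key_pem}"
  is_ca_certificate     = true
  validity_period_hours = 8760
  allowed_uses          = ["cert_signing", "key_encipherment", "digital_signature"]

  subject {
    common_name = "tiller"
  }
}
```

The server and client certificate key pairs are then signed by this CA so that both sides can verify each other.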
These resources are then passed into the `k8s-tiller` module, where the Tiller `Deployment` resources will be created.
Once the resources are applied to the cluster, the module will wait for the Tiller `Deployment` to roll out the `Pods` using
`kubergrunt helm wait-for-tiller`.

At the end of the `apply`, you should have a working Tiller deployment. So let's verify that in the next step!
## Verify Tiller Deployment
To start using `helm`, we must first configure our client with the generated TLS certificates. This is done by
downloading the client-side certificates into the Helm home folder. The client-side TLS certificates are available as
outputs of the Terraform code. We can store them in the home directory using the `terraform output` command:
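Using the output variable names defined by this example:

```
mkdir -p "$HOME/.helm"
terraform output helm_client_tls_private_key_pem > "$HOME/.helm/client.pem"
terraform output helm_client_tls_public_cert_pem > "$HOME/.helm/client.crt"
terraform output helm_client_tls_ca_cert_pem > "$HOME/.helm/ca.crt"
```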
Once the certificate key pairs are stored, we need to set up the default repositories where the Helm charts are stored.
This can be done using the `helm init` command:

```
helm init --client-only
```
If you have `kubergrunt` installed, the above steps can be automated in a single step using the `helm configure` command of
`kubergrunt`:
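```
kubergrunt helm configure \
  --tiller-namespace $(terraform output tiller_namespace) \
  --resource-namespace $(terraform output resource_namespace) \
  --rbac-user minikube
```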
Once the certificates are installed and the client is configured, you are ready to use `helm`. However, by default the
`helm` client does not assume a TLS setup. In order for the `helm` client to properly communicate with the deployed
Tiller instance, it needs to be told to use TLS verification. This is specified through command-line arguments. If
everything is configured correctly, you should be able to access the deployed Tiller with the following args:
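```
helm version --tls --tls-verify --tiller-namespace NAMESPACE_OF_TILLER
```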
If you have access to Tiller, this should return both the client version and the server version of Helm. Note that
you need to pass the above CLI arguments every time you want to use `helm`.
If you used kubergrunt to configure your helm client, it will install an environment file into your helm home
directory that you can dot source to set environment variables that guide helm to use those options:
```
. ~/.helm/env
helm version
```
This can be a convenient way to avoid specifying the TLS parameters for each and every helm command you run.
## Granting Access to Additional Users
Now that you have deployed Tiller and set up access for your local machine, you are ready to start using `helm`! However,
you might be wondering how to share that access with your team.
In order to allow other users access to the deployed Tiller instance, you need to explicitly grant their RBAC entities
permission to access it. This involves:

- Granting enough permissions to access the Tiller pod
- Generating and sharing TLS certificate key pairs to identify the client

You have two options to do this:

- Using the `k8s-helm-client-tls-certs` module
- Using `kubergrunt`

#### Using the k8s-helm-client-tls-certs module
`k8s-helm-client-tls-certs` is designed to take a CA TLS cert generated using `k8s-tiller-tls-certs` and generate new
signed TLS certs that can be used as verified clients. To use the module for this purpose, you can either call out to
the module in your Terraform code (like we do here to generate one for the operator), or use it directly as a temporary
module.
Follow these steps to use it as a temporary module:
1. Copy this module to your computer.
1. Open `variables.tf` and fill in the variables that do not have a default.
1. DO NOT configure Terraform remote state storage for this code. You do NOT want to store the state files, as they will
   contain the private keys for the certificates.
1. DO NOT set `store_in_kubernetes_secret` to `true`. You do NOT want to store the certificates in Kubernetes
   without the state file.
1. Run `terraform apply`.
1. Extract the generated certificates from the output and store them to files, e.g.:
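   ```
   terraform output tls_certificate_key_pair_private_key_pem > client.pem
   terraform output tls_certificate_key_pair_certificate_pem > client.crt
   terraform output ca_tls_certificate_key_pair_certificate_pem > ca.crt
   ```

1. Share the extracted files with the user.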
1. Delete your local Terraform state: `rm -rf terraform.tfstate*`. The Terraform state will contain the private keys for
   the certificates, so it's important to clean it up!
The user can then install the certs and set up the client in a similar manner to the process described in [Verify Tiller
Deployment](#verify-tiller-deployment).
#### Using kubergrunt
`kubergrunt` automates this process in the `grant` and `configure` commands. For example, suppose you wanted to grant
access to the deployed Tiller to a group of users under the RBAC group `dev`. You can grant them access using
the following command:
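```
kubergrunt helm grant --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev --tls-common-name dev --tls-org YOUR_ORG
```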
This will generate a new certificate key pair for the client and upload it as a `Secret`. Then, it will bind new RBAC
roles to the `dev` RBAC group that grant it permission to access the Tiller pod and the uploaded `Secret`.
This in turn allows your users to configure their local client using kubergrunt:
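```
kubergrunt helm configure --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev
```

At the end of this, your users should have the same `helm` client setup as above.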