# Kubernetes Tiller Deployment With Kubergrunt On Minikube
This folder shows an example of how to use Terraform to call out to our `kubergrunt` utility for TLS management when
deploying Tiller (the server component of Helm) onto a Kubernetes cluster. Here we will walk through a detailed guide on
how you can set up `minikube` and use the modules in this repo to deploy Tiller onto it.
## Background

We strongly recommend reading [our guide on Helm](/repos/kubergrunt/HELM_GUIDE.md)
before continuing with this guide for a background on Helm, Tiller, and the security model backing it.
## Overview

In this guide we will walk through the steps necessary to get up and running with deploying Tiller using this module,
using `minikube` to deploy our target Kubernetes cluster. Here are the steps:

1. [Install and setup `minikube`](#setting-up-your-kubernetes-cluster-minikube)
1. [Install the necessary tools](#installing-necessary-tools)
1. [Apply the Terraform code](#apply-the-terraform-code)
1. [Verify the deployment](#verify-tiller-deployment)
1. [Grant access to additional users](#granting-access-to-additional-users)

## Setting up your Kubernetes cluster: Minikube

In this guide, we will use `minikube` as our Kubernetes cluster to deploy Tiller to.
[Minikube](https://kubernetes.io/docs/setup/minikube/) is an official tool maintained by the Kubernetes community to be
able to provision and run Kubernetes locally on your machine. By having a local environment you can have fast iteration
cycles while you develop and play with Kubernetes before deploying to production.

To setup `minikube`:

1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
1. [Install the minikube utility](https://kubernetes.io/docs/tasks/tools/install-minikube/)
1. Run `minikube start` to provision a new `minikube` instance on your local machine.
1. Verify setup with `kubectl`: `kubectl cluster-info`
## Installing necessary tools

In addition to `terraform`, this guide uses `kubergrunt` to manage TLS certificates for the deployment of Tiller. You
can read more about the decision behind this approach in [the Appendix](#appendix-a-why-kubergrunt) of this guide.

This means that your system needs to be configured to be able to find the `terraform`, `kubergrunt`, and `helm` client
utilities on the system `PATH`. Here are the installation guides for each:

1. [`terraform`](https://learn.hashicorp.com/terraform/getting-started/install.html)
1. [`helm` client](https://docs.helm.sh/using_helm/#installing-helm)
1. [`kubergrunt`](/repos/kubergrunt#installation), minimum version: v0.3.6
Make sure the binaries are discoverable in your `PATH` variable. See [this stackoverflow
post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) for instructions on
setting up your `PATH` on Unix, and [this
post](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) for instructions on
Windows.
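To confirm everything is wired up, a quick shell check along these lines (a sketch, not part of this repo) reports which of the three binaries are discoverable; `command -v` prints the resolved path when a binary is found on `PATH`:

```shell
# Sketch: report whether each required binary is discoverable on PATH.
check_on_path() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "MISSING: $tool (install it or add its directory to PATH)"
    fi
  done
}

check_on_path terraform kubergrunt helm
```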
## Apply the Terraform Code

Now that we have a working Kubernetes cluster, and all the prerequisite tools are installed, we are ready to deploy
Tiller! To deploy Tiller, we will use the example Terraform code in this folder:

1. If you haven't already, clone this repo:
    - `git clone https://github.com/gruntwork-io/terraform-kubernetes-helm.git`
1. Make sure you are in the example folder:
    - `cd terraform-kubernetes-helm/examples/k8s-tiller-kubergrunt-minikube`
1. Initialize terraform:
    - `terraform init`
1. Apply the terraform code:
    - `terraform apply`
The Terraform code creates a few resources before deploying Tiller:
- A Kubernetes `Namespace` (the `tiller-namespace`) to house the Tiller instance. This namespace is where all the
  Kubernetes resources that Tiller needs to function will live. In production, you will want to lock down access to this
  namespace, as being able to access these resources can compromise all the protections built into Helm.
- A Kubernetes `Namespace` (the `resource-namespace`) to house the resources deployed by Tiller. This namespace is where
  all the Helm chart resources will be deployed into. This is the namespace that your devs and users will have access
  to.
- A Kubernetes `ServiceAccount` (`tiller-service-account`) that Tiller will use to apply the resources in Helm charts.
  Our Terraform code grants the `ServiceAccount` enough permissions to have full access to both the
  `tiller-namespace` and the `resource-namespace`, so that it can:
    - Manage its own resources in the `tiller-namespace`, where the Tiller metadata (e.g., release tracking information) will live.
    - Manage the resources deployed by Helm charts in the `resource-namespace`.
- TLS certificates, generated using `kubergrunt`: a CA certificate key pair and a set of signed certificate key pairs for the server
  and the client. These will then be uploaded as `Secrets` on the Kubernetes cluster.
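To make the TLS step concrete, here is a rough `openssl` sketch of the kind of material `kubergrunt` generates: a CA, plus server and client certificates signed by that CA. The file names and subjects below are made up for illustration, and `kubergrunt` additionally uploads the results as Kubernetes `Secrets` rather than leaving them on disk:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"

# 1. Self-signed CA certificate key pair.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=tiller-ca"

# 2. Server key pair: generate a key + CSR, then sign the CSR with the CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout tiller.key -out tiller.csr -subj "/CN=tiller-server"
openssl x509 -req -days 1 -in tiller.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out tiller.crt

# 3. Client key pair, signed by the same CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout helm.key -out helm.csr -subj "/CN=helm-client"
openssl x509 -req -days 1 -in helm.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial -out helm.crt

# Both leaf certificates should chain back to the CA.
openssl verify -CAfile ca.crt tiller.crt helm.crt
```

Because both sides present certificates signed by the same CA, Tiller and the `helm` client can mutually authenticate.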
These resources are then passed into the `k8s-tiller` module, where the Tiller `Deployment` resources will be created.
Once the resources are applied to the cluster, the module waits for the Tiller `Deployment` to roll out its `Pods` using
`kubergrunt helm wait-for-tiller`.
Finally, to allow you to use `helm` right away, this code also sets up the local `helm` client. This involves:

- Using the CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the client.
- Upload the certificate key pair to the `tiller-namespace`.
- Grant the RBAC entity access to:
    - Get the client certificate `Secret` (`kubergrunt helm configure` uses this to install the client certificate
      key pair locally)
    - Get and List pods in `tiller-namespace` (the `helm` client uses this to find the Tiller pod)
    - Create a port forward to the Tiller pod (the `helm` client uses this to make requests to the Tiller pod)
- Install the client certificate key pair to the helm home directory so the client can use it.
At the end of the `apply`, you should now have a working Tiller deployment with your `helm` client configured to access
it. So let's verify that in the next step!
## Verify Tiller Deployment

To start using `helm` with the configured credentials, you need to specify the following things:

- enable TLS verification
- use TLS credentials to authenticate
- the namespace where Tiller is deployed

These are specified through command line arguments. If everything is configured correctly, you should be able to access
the Tiller that was deployed with the following args:

```
helm version --tls --tls-verify --tiller-namespace NAMESPACE_OF_TILLER
```
If you have access to Tiller, this should return both the client version and the server version of Helm.
Note that you need to pass the above CLI arguments every time you want to use `helm`. This can be cumbersome, so
`kubergrunt` installs an environment file into your helm home directory that you can dot source to set environment
variables that guide `helm` to use those options:

```
. ~/.helm/env
helm version
```
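The exact contents of that env file come from `kubergrunt`; as a rough sketch (values here are hypothetical), Helm v2 honors environment variables equivalent to the CLI flags, so the file sets variables along these lines:

```shell
# Hypothetical contents of ~/.helm/env; these helm v2 environment
# variables stand in for --tiller-namespace, --tls, and --tls-verify.
export TILLER_NAMESPACE="tiller-namespace"
export HELM_TLS_ENABLE="true"
export HELM_TLS_VERIFY="true"
```

Once sourced into your shell, a bare `helm version` behaves like the fully-flagged invocation above.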
## Granting Access to Additional Users

Now that you have deployed Tiller and set up access for your local machine, you are ready to start using `helm`! However,
you might be wondering how to share that access with your team. To do so, you can rely on `kubergrunt helm grant`.
In order to allow other users access to the deployed Tiller instance, you need to explicitly grant their RBAC entities
permission to access it. This involves:

- Granting enough permissions to access the Tiller pod
- Generating and sharing TLS certificate key pairs to identify the client
`kubergrunt` automates this process in the `grant` and `configure` commands. For example, suppose you wanted to grant
access to the deployed Tiller to a group of users grouped under the RBAC group `dev`. You can grant them access using
the following command:

```
kubergrunt helm grant --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev --tls-common-name dev --tls-org YOUR_ORG
```
This will generate a new certificate key pair for the client and upload it as a `Secret`. Then, it will bind new RBAC
roles to the `dev` RBAC group that grant it permission to access the Tiller pod and the uploaded `Secret`.

This in turn allows your users to configure their local client using `kubergrunt`:

```
kubergrunt helm configure --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev
```

At the end of this, your users should have the same helm client setup as above.
## Appendix A: Why kubergrunt?

This Terraform example is not idiomatic Terraform code in that it relies on an external binary, `kubergrunt`, as opposed
to implementing the functionality using pure Terraform providers. This approach has some noticeable drawbacks:

- You have to install extra tools, so it is not a minimal `terraform init && terraform apply`.
- There are portability concerns, as there is no guarantee the tools work cross-platform. We make every effort to test
  across the major operating systems (Linux, Mac OSX, and Windows), but we can't possibly test every combination, so
  there are bound to be portability issues.
- You don't have the declarative Terraform features that you have come to love, such as `plan`, updates through `apply`, and
  `destroy`.
That said, we decided to use this approach because of limitations in the existing providers that prevent implementing
this functionality in pure Terraform code.
`kubergrunt` fulfills the role of generating and managing TLS certificate key pairs, using Kubernetes `Secrets` as a
database. This allows us to deploy Tiller with TLS verification enabled. We could instead use the `tls` and `kubernetes`
providers in Terraform, but this has a few drawbacks:

- The [TLS provider](https://www.terraform.io/docs/providers/tls/index.html) stores the certificate key pairs in plain
  text in the Terraform state.
- The Kubernetes Secret resource in the provider [also stores the value in plain text in the Terraform
  state](https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
- The `grant` and `configure` workflows are better suited as CLI tools than as Terraform code.
`kubergrunt` works around this by generating the TLS certs and storing them in Kubernetes `Secrets` directly. In this
way, the generated TLS certs never leak into the Terraform state, as they are referenced by name when deploying Tiller as
opposed to by value.
Note that we intend to implement a pure Terraform version of this functionality, but we plan to continue to maintain the
kubergrunt approach for folks who are wary of leaking secrets into Terraform state.