This folder contains a Terraform module for running a cluster of Apache Kafka brokers.
Under the hood, the cluster is powered by the server-group
module, so it supports attaching ENIs and
EBS Volumes, zero-downtime rolling deployment, and auto-recovery of failed nodes.
## Quick start
See the root README for instructions on using Terraform modules.
You specify the AMI to run in the cluster using the `ami_id` input variable. We recommend creating a
Packer template to define the AMI with this repo's install modules baked in.
When your servers are booting, you need to tell them to start Kafka. The easiest way to do that is to specify a User
Data script via the user_data
input variable that runs the run-kafka script. See
kafka-user-data.sh for an example.
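As an illustrative sketch (the resource names and file path here are assumptions; see kafka-user-data.sh for the real script and the flags it passes), you can render a User Data script and wire it into the cluster via the `user_data` input variable:

```hcl
# Render the boot script and pass it to the cluster module.
data "template_file" "user_data" {
  template = file("${path.module}/user-data/kafka-user-data.sh")
}

module "kafka_brokers" {
  # Use this module's actual source URL and version here.
  source = "<KAFKA_CLUSTER_MODULE_SOURCE>"

  ami_id    = var.ami_id
  user_data = data.template_file.user_data.rendered

  # ... other required parameters ...
}
```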
The number and type of servers you need for Kafka depend on your use case and the amount of data you expect to
process. Here are a few basic rules of thumb:

- Every write to Kafka gets persisted to Kafka's log on disk, so hard drive performance is important. Check out
  Logs and EBS Volumes for more info.
- Most writes to Kafka are initially buffered in memory by the OS, so you need sufficient memory to buffer
  active readers and writers. You can do a back-of-the-envelope estimate: e.g., if you want to be able to buffer for
  30 seconds, then you need at least `write_throughput * 30` MB of memory, where `write_throughput` is how many MB/s
  you expect to be written to your Kafka cluster (e.g., at 100 MB/s, about 3 GB). Using 32GB+ machines for Kafka
  brokers is common.
- Kafka is not particularly CPU intensive, so machines with more cores are typically a better value than
  machines with higher clock speeds. Note that enabling SSL for Kafka brokers significantly increases CPU usage.

In general, r3.xlarge or m4.2xlarge instances are a good choice for Kafka brokers.
## Logs and EBS Volumes

Every write to a Kafka broker is persisted to disk in Kafka's log. We recommend using a separate EBS
Volume to store these logs. This ensures the hard drive used for transaction logs does
not have to contend with any other disk operations, which can improve Kafka performance. Moreover, if a Kafka broker
is replaced (e.g., during a deployment or after a crash), it can reattach the same EBS Volume and catch up on whatever
data it missed much faster than if it had to start from scratch (see Design and Deployment Considerations for
Deploying Apache Kafka on AWS).
This module creates an EBS Volume for each Kafka server and gives each (server, EBS Volume) pair a matching
`ebs-volume-0` tag. You can use the persistent-ebs-volume module in the User Data of each server to find an
EBS Volume with a matching `ebs-volume-0` tag and attach it to the server during boot. That way, if a server goes down
and is replaced, its replacement reattaches the same EBS Volume.
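A minimal sketch of that boot-time lookup, using the raw AWS CLI (the persistent-ebs-volume module does this for you, with mounting and retries; the device name here is an assumption):

```shell
#!/usr/bin/env bash
# Runs in User Data on an EC2 instance (uses the instance metadata service).
set -e

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
REGION="${AZ%?}"  # e.g., us-east-1a -> us-east-1

# Find an available volume in this AZ carrying the ebs-volume-0 tag.
VOLUME_ID=$(aws ec2 describe-volumes \
  --region "$REGION" \
  --filters "Name=tag-key,Values=ebs-volume-0" \
            "Name=availability-zone,Values=$AZ" \
            "Name=status,Values=available" \
  --query 'Volumes[0].VolumeId' --output text)

# Attach it so the replacement server picks up the old server's Kafka log.
aws ec2 attach-volume --region "$REGION" \
  --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" --device /dev/xvdf
```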
We strongly recommend associating an Elastic Load Balancer
(ELB) with your Kafka cluster and configuring it
to perform TCP health checks on the Kafka broker port (9092 by default). The kafka-cluster module allows you
to associate an ELB with Kafka, using the ELB's health checks to perform zero-downtime
deployments (i.e., ensuring the previous node is passing health checks before deploying the next
one) and to detect when a server is down and needs to be automatically replaced.
Note that we do NOT recommend connecting to Kafka via the ELB. That's because Kafka clients need to connect to specific
brokers, depending on which topics and partitions they are using, whereas an ELB will randomly round-robin requests
across all brokers.
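The kafka-cluster module wires the ELB in for you; as a standalone sketch (resource and variable names are illustrative), the TCP health check on the broker port looks like this in Terraform:

```hcl
resource "aws_elb" "kafka_health_check" {
  name     = "kafka-health-check"  # illustrative name
  subnets  = var.subnet_ids        # assumed variable
  internal = true

  listener {
    instance_port     = 9092
    instance_protocol = "tcp"
    lb_port           = 9092
    lb_protocol       = "tcp"
  }

  health_check {
    target              = "TCP:9092"  # TCP check on the Kafka broker port
    healthy_threshold   = 2
    unhealthy_threshold = 2
    interval            = 15
    timeout             = 5
  }
}
```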
Kafka's primary mechanism for backing up data is replication within the cluster. Typically, the only backup you
would do beyond that is to create a Kafka consumer that dumps all data into a permanent, reliable store such as S3. This
functionality is NOT included with this module.
## Connecting to Kafka brokers
Once you've used this module to deploy the Kafka brokers, you'll want to connect to them from Kafka clients (e.g.,
Kafka consumers and producers in your apps) to read and write data. To do this, you typically need to configure the
bootstrap.servers property for your Kafka client with the IP addresses of a few of your Kafka brokers (you don't
need all the IPs, as the rest will be discovered automatically via ZooKeeper):
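For example (the IP addresses are placeholders), a client's configuration might contain:

```properties
bootstrap.servers=10.0.0.4:9092,10.0.0.5:9092,10.0.0.6:9092
```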
Each Kafka broker deployed using this module will have a tag called ServerGroupName with the value set to the
var.name parameter you pass in. You can automatically discover all the servers with this tag and get their IP
addresses using either the AWS CLI or AWS SDK.
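With the AWS CLI, the lookup is along these lines (the exact filters this repo's examples use may differ slightly):

```shell
aws ec2 describe-instances \
  --region <REGION> \
  --filters "Name=tag:ServerGroupName,Values=<KAFKA_CLUSTER_NAME>" \
            "Name=instance-state-name,Values=running"
```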
In the command above, you'll need to replace <REGION> with your AWS region (e.g., us-east-1) and
<KAFKA_CLUSTER_NAME> with the name of your Kafka cluster (i.e., the var.name parameter you passed to this module).
The returned data will contain the information about all the Kafka brokers, including their private IP addresses.
Extract these IPs, add the Kafka port to each one (default 9092), and put them into a comma-separated list:
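As a sketch of that last step (the IPs are placeholders; in practice they come from the describe-instances output):

```shell
# Append the Kafka port to each private IP and join them with commas.
ips="10.0.0.4
10.0.0.5
10.0.0.6"
bootstrap_servers=$(echo "$ips" | sed 's/$/:9092/' | paste -sd, -)
echo "$bootstrap_servers"  # 10.0.0.4:9092,10.0.0.5:9092,10.0.0.6:9092
```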
An alternative option is to attach an Elastic Network Interface
(ENI) to each Kafka broker so that it has a static
IP address. You can enable ENIs using the attach_eni parameter:
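A minimal sketch of enabling ENIs (the module source placeholder below must be replaced with this module's actual source URL and version):

```hcl
module "kafka_brokers" {
  source = "<KAFKA_CLUSTER_MODULE_SOURCE>"

  # Give each broker a static IP via an attached ENI.
  attach_eni = true

  # ... other required parameters (ami_id, user_data, etc.) ...
}
```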
With ENIs enabled, this module will output the list of private IPs for your brokers in the private_ips output
variable. Attach the port number (default 9092) to each of these IPs and pass them on to your Kafka clients:
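For example, assuming the module instance is named `kafka_brokers` and the default port, you can build the bootstrap list directly in Terraform:

```hcl
# Turn ["10.0.0.4", "10.0.0.5"] into "10.0.0.4:9092,10.0.0.5:9092"
locals {
  bootstrap_servers = join(",", formatlist("%s:9092", module.kafka_brokers.private_ips))
}
```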
The main downside of using ENIs is that if you change the size of your Kafka cluster, and therefore the number of
ENIs, Kafka clients that have the old list of ENIs won't be updated until you redeploy them with a
`terraform apply`. If you increased the size of your cluster, those older clients may not know about all the
available ENIs, which is typically not a problem, since the addresses are only used for bootstrapping, and you only
need a few anyway. However, if you decreased the size of your cluster, those older clients may try to connect to ENIs
that are no longer valid.
## Questions? Ask away.
We're here to talk about our services, answer any questions, give advice, or just to chat.
kafka-rest-4.0.x.properties","path":"modules/run-kafka-rest/config/kafka-rest/kafka-rest-4.0.x.properties","sha":"29c9ca3bd784637597c683b1585bfd52fd0035db"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-kafka-rest/config/log4j/log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"}]}]},{"name":"install.sh","path":"modules/run-kafka-rest/install.sh","sha":"97c543dd2a175ea3866f92e835494723645c6edc"}]},{"name":"run-kafka","children":[{"name":"README.md","path":"modules/run-kafka/README.md","sha":"b1fcb424860e462141f9a17423b0c3ecc3f2de08"},{"name":"bin","children":[{"name":"run-kafka","path":"modules/run-kafka/bin/run-kafka","sha":"2301427f95fbfb42ac5be080993d5cdd8699e172"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-kafka/config/README.md","sha":"e08702423b137b254ad0a0a070f7300f094ba046"},{"name":"kafka","children":[{"name":"server-3.3.x.properties","path":"modules/run-kafka/config/kafka/server-3.3.x.properties","sha":"b5aa3757d41f0e6fef81799b58c222774a0da63d"},{"name":"server-4.0.x.properties","path":"modules/run-kafka/config/kafka/server-4.0.x.properties","sha":"5ea1bae91e95a50333444d0ffffc94924cfd0483"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-kafka/config/log4j/log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"}]}]},{"name":"install.sh","path":"modules/run-kafka/install.sh","sha":"d538e667d2a66004ceacc149b1d8c2f54e639ec7"}]},{"name":"run-schema-registry","children":[{"name":"README.md","path":"modules/run-schema-registry/README.md","sha":"343ddd7b1e9b24054c05722b3cb6ea4e63d88419"},{"name":"bin","children":[{"name":"run-schema-registry","path":"modules/run-schema-registry/bin/run-schema-registry","sha":"07a457889d54a9385de0bcc4d4d5dc51b3636b19"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-schema-registry/config/README.md","sha":"93a30f8adf3f778682463bbc8715a38f70908866"},{"name":"log4j","children":[{"name":"log4j.proper
ties","path":"modules/run-schema-registry/config/log4j/log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"}]},{"name":"schema-registry","children":[{"name":"schema-registry.properties","path":"modules/run-schema-registry/config/schema-registry/schema-registry.properties","sha":"e6541005171b9f0de27e7f177f915b08399f9404"}]}]},{"name":"install.sh","path":"modules/run-schema-registry/install.sh","sha":"81b4a8b7c8b26d6b4a60e524bcda30e95a2777b7"}]}],"toggled":true},{"name":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","path":"terraform-cloud-enterprise-private-module-registry-placeholder.tf","sha":"ae586c0fe830819580e1009d41a9074f16e65bed"},{"name":"test","children":[{"name":"README.md","path":"test/README.md","sha":"cf72a9f58a3b36aee8053b37dfc6e1a617f93f7b"},{"name":"generate_key_stores_test.go","path":"test/generate_key_stores_test.go","sha":"fceb03a8da4583eff59b8c4da1c070b9ef0c980a"},{"name":"go.mod","path":"test/go.mod","sha":"596401d0e2ebb8fffec2a25c671263ab68411c97"},{"name":"go.sum","path":"test/go.sum","sha":"a818ef7b94cea11f56a70960c448eafb12257b41"},{"name":"kafka_zookeeper_confluent_colocated_cluster_test.go","path":"test/kafka_zookeeper_confluent_colocated_cluster_test.go","sha":"d5ac28bbeef65dfaf784f7339e516510c9c22ba4"},{"name":"kafka_zookeeper_confluent_standalone_clusters_test.go","path":"test/kafka_zookeeper_confluent_standalone_clusters_test.go","sha":"55445cf91cd5534aa0fdb148b96bd5d1013b4915"},{"name":"kafka_zookeeper_standalone_clusters_test.go","path":"test/kafka_zookeeper_standalone_clusters_test.go","sha":"55abe10b05b0cba29f54df6f83b02d849fe19c83"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"f01924f4c18c7595d58621e5e3dcc2128996f44e"},{"name":"test_helpers_kafka.go","path":"test/test_helpers_kafka.go","sha":"474caf79574b27637d376fe5ab65abea9af7eb3c"},{"name":"test_helpers_kafka_connect.go","path":"test/test_helpers_kafka_connect.go","sha":"1ecef92e9a45501fb4bf834579d02ddcc05e7103"},{"name":"test
# Kafka Cluster

This folder contains a Terraform module for running a cluster of [Apache Kafka](https://kafka.apache.org/) brokers.
Under the hood, the cluster is powered by the [server-group module](/repos/terraform-aws-asg/modules/server-group),
so it supports attaching ENIs and EBS Volumes, zero-downtime rolling deployment, and auto-recovery of failed nodes.

## Quick start

- See the [root README](/repos/v0.11.0/package-kafka/README.md) for instructions on using Terraform modules.
- See the [kafka-zookeeper-standalone-clusters example](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-standalone-clusters) for sample usage.
- See [vars.tf](/repos/v0.11.0/package-kafka/modules/kafka-cluster/vars.tf) for all the variables you can set on this module.
- See [Connecting to Kafka brokers](#connecting-to-kafka-brokers) for instructions on reading / writing to Kafka.

## Key considerations for using this module

Here are the key things to take into account when using this module:

- [Kafka AMI](#kafka-ami)
- [User Data](#user-data)
- [ZooKeeper](#zookeeper)
- [Hardware](#hardware)
- [Logs and EBS Volumes](#logs-and-ebs-volumes)
- [Health checks](#health-checks)
- [Rolling deployments](#rolling-deployments)
- [Data backup](#data-backup)

### Kafka AMI

You specify the AMI to run in the cluster using the `ami_id` input variable. We recommend creating a
[Packer](https://www.packer.io/) template to define the AMI with the following modules installed:

- [install-open-jdk](/repos/terraform-aws-zookeeper/modules/install-open-jdk): Install OpenJDK. Note that this
  module is part of [terraform-aws-zookeeper](/repos/terraform-aws-zookeeper).
- [install-supervisord](/repos/terraform-aws-zookeeper/modules/install-supervisord): Install Supervisord as a
  process manager. Note that this module is part of [terraform-aws-zookeeper](/repos/terraform-aws-zookeeper).
- [install-kafka](/repos/v0.11.0/package-kafka/modules/install-kafka): Install Kafka.
- [run-kafka](/repos/v0.11.0/package-kafka/modules/run-kafka): A script used to configure and start Kafka.

See the [kafka-ami example](/repos/v0.11.0/package-kafka/examples/kafka-ami) for working sample code.

### User Data

When your servers are booting, you need to tell them to start Kafka. The easiest way to do that is to specify a
[User Data script](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-api-cli) via the
`user_data` input variable that runs the [run-kafka script](/repos/v0.11.0/package-kafka/modules/run-kafka). See
[kafka-user-data.sh](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-standalone-clusters/user-data/kafka-user-data.sh)
for an example.

### ZooKeeper

Kafka depends on [ZooKeeper](https://zookeeper.apache.org/) to work. The easiest way to run ZooKeeper is with
[terraform-aws-zookeeper](/repos/terraform-aws-zookeeper). Check out the
[kafka-zookeeper-standalone-clusters example](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-standalone-clusters)
for how to run Kafka and ZooKeeper in separate clusters and the
[kafka-zookeeper-confluent-oss-colocated-cluster example](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-confluent-oss-colocated-cluster)
for how to run Kafka and ZooKeeper co-located in the same cluster.

### Hardware

The number and type of servers you need for Kafka depends on your use case and the amount of data you expect to
process. Here are a few basic rules of thumb:

1. Every write to Kafka gets persisted to Kafka's log on disk, so hard drive performance is important. Check out
   [Logs and EBS Volumes](#logs-and-ebs-volumes) for more info.

1. Most writes to Kafka are initially buffered in memory by the OS. Therefore, you need sufficient memory to buffer
   active readers and writers. You can do a back-of-the-envelope estimate: e.g., if you want to be able to buffer for
   30 seconds, then you need at least `write_throughput * 30`, where `write_throughput` is how many MB/s you expect
   to be written to your Kafka cluster. Using 32GB+ machines for Kafka brokers is common.

1. Kafka is not particularly CPU intensive, so getting machines with more cores is typically more efficient than
   machines with higher clock speeds. Note that enabling SSL for Kafka brokers significantly increases CPU usage.

1. In general, `r3.xlarge` or `m4.2xlarge` are a good choice for Kafka brokers.

For more info, see:

- [Kafka Production Deployment](http://docs.confluent.io/current/kafka/deployment.html)
- [Kafka Reference Architecture](https://www.cloudera.com/content/dam/www/marketing/resources/datasheets/kafka-reference-architecture.pdf.landing.html)
- [Design and Deployment Considerations for Deploying Apache Kafka on AWS](https://www.confluent.io/blog/design-and-deployment-considerations-for-deploying-apache-kafka-on-aws/)

### Logs and EBS Volumes

Every write to a Kafka broker is persisted to disk in Kafka's *log*. We recommend using a separate
[EBS Volume](https://aws.amazon.com/ebs/) to store these logs. This ensures the hard drive used for transaction logs
does not have to contend with any other disk operations, which can improve Kafka performance. Moreover, if a Kafka
broker is replaced (e.g., during a deployment or after a crash), it can reattach the same EBS Volume and catch up on
whatever data it missed much faster than if it had to start from scratch (see
[Design and Deployment Considerations for Deploying Apache Kafka on AWS](https://www.confluent.io/blog/design-and-deployment-considerations-for-deploying-apache-kafka-on-aws/)).

This module creates an EBS Volume for each Kafka server and gives each (server, EBS Volume) pair a matching
`ebs-volume-0` tag. You can use the
[persistent-ebs-volume module](/repos/terraform-aws-server/modules/persistent-ebs-volume) in the
[User Data](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-api-cli) of each server to
find an EBS Volume with a matching `ebs-volume-0` tag and attach it to the server during boot. That way, if a server
goes down and is replaced, its replacement reattaches the same EBS Volume.

See [kafka-user-data.sh](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-standalone-clusters/user-data/kafka-user-data.sh)
for an example.

### Health checks

We strongly recommend associating an
[Elastic Load Balancer (ELB)](https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/) with your Kafka
cluster and configuring it to perform TCP health checks on the Kafka broker port (9092 by default). The
`kafka-cluster` module allows you to associate an ELB with Kafka, using the ELB's health checks to perform
[zero-downtime deployments](#rolling-deployments) (i.e., ensuring the previous node is passing health checks before
deploying the next one) and to detect when a server is down and needs to be automatically replaced.

Note that we do NOT recommend connecting to Kafka via the ELB. That's because Kafka clients need to connect to
specific brokers, depending on which topics and partitions they are using, whereas an ELB will randomly round-robin
requests across all brokers.

Check out the
[kafka-zookeeper-standalone-clusters](/repos/v0.11.0/package-kafka/examples/kafka-zookeeper-standalone-clusters)
example for working sample code that includes an ELB.

### Rolling deployments

To deploy updates to a Kafka cluster, such as rolling out a new version of the AMI, you need to do the following:

1. Shut down the Kafka broker on one server.
1. Deploy the new code on the same server.
1. Wait for the new code to come up successfully and start passing health checks.
1. Repeat the process with the remaining servers.

This module can do this process for you automatically by using the
[server-group module's](/repos/terraform-aws-asg/modules/server-group) support for
[zero-downtime rolling deployment](/repos/terraform-aws-asg/modules/server-group#how-does-rolling-deployment-work).

### Data backup

Kafka's primary mechanism for backing up data is replication within the cluster. Typically, the only backup you may
do beyond that is to create a Kafka consumer that dumps all data into a permanent, reliable store such as S3. This
functionality is NOT included with this module.

## Connecting to Kafka brokers

Once you've used this module to deploy the Kafka brokers, you'll want to connect to them from Kafka clients (e.g.,
Kafka consumers and producers in your apps) to read and write data. To do this, you typically need to configure the
`bootstrap.servers` property for your Kafka client with the IP addresses of a few of your Kafka brokers (you don't
need all the IPs, as the rest will be discovered automatically via ZooKeeper):

```
--bootstrap.servers=10.0.0.4:9092,10.0.0.5:9092,10.0.0.6:9092
```

There are two main ways to get the IP addresses of your Kafka brokers:

1. [Find Kafka brokers by tag](#find-kafka-brokers-by-tag)
1. [Find Kafka brokers using ENIs](#find-kafka-brokers-using-enis)

### Find Kafka brokers by tag

Each Kafka broker deployed using this module will have a tag called `ServerGroupName` with the value set to the
`var.name` parameter you pass in. You can automatically discover all the servers with this tag and get their IP
addresses using either the [AWS CLI](https://aws.amazon.com/cli/) or an [AWS SDK](https://aws.amazon.com/tools/).

Here's an example using the AWS CLI:

```bash
aws ec2 describe-instances \
  --region <REGION> \
  --filters \
    "Name=instance-state-name,Values=running" \
    "Name=tag:ServerGroupName,Values=<KAFKA_CLUSTER_NAME>"
```

In the command above, you'll need to replace `<REGION>` with your AWS region (e.g., `us-east-1`) and
`<KAFKA_CLUSTER_NAME>` with the name of your Kafka cluster (i.e., the `var.name` parameter you passed to this module).

The returned data will contain information about all the Kafka brokers, including their private IP addresses.
Extract these IPs, add the Kafka port to each one (default `9092`), and put them into a comma-separated list:

```
--bootstrap.servers=10.0.0.4:9092,10.0.0.5:9092,10.0.0.6:9092
```

### Find Kafka brokers using ENIs

An alternative option is to attach an
[Elastic Network Interface (ENI)](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) to each Kafka
broker so that it has a static IP address. You can enable ENIs using the `attach_eni` parameter:

```hcl
module "kafka_brokers" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-kafka.git//modules/kafka-cluster?ref=v0.0.5"

  cluster_name = "example-kafka-brokers"
  attach_eni   = true

  # (other params omitted)
}
```

With ENIs enabled, this module will output the list of private IPs for your brokers in the `private_ips` output
variable. Attach the port number (default `9092`) to each of these IPs and pass them on to your Kafka clients:

```hcl
bootstrap_servers = "${formatlist("%s:9092", module.kafka_brokers.private_ips)}"
```

The main downside of using ENIs is that if you change the size of your Kafka cluster, and therefore the number of
ENIs, Kafka clients that have the old list of ENIs won't be updated until you re-deploy them with a
`terraform apply`. If you increased the size of your cluster, older clients may not have access to all the available
ENIs, which is typically not a problem, since the ENIs are only used for bootstrapping and you only need a few
anyway. However, if you decreased the size of your cluster, older clients may be trying to connect to ENIs that are
no longer valid.
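To tie the two discovery approaches together, here is a minimal sketch of turning a list of broker IPs into a `bootstrap.servers` value. The IP list and cluster name are placeholders, not values from this module; in practice you would obtain the IPs either from the tag-based AWS CLI lookup shown above or from the `private_ips` output:

```shell
#!/bin/sh
# Sketch: build a bootstrap.servers string from broker private IPs.
# The IPs below are placeholders; fetch real ones with, e.g.:
#   aws ec2 describe-instances --region us-east-1 \
#     --filters "Name=instance-state-name,Values=running" \
#               "Name=tag:ServerGroupName,Values=example-kafka-brokers" \
#     --query "Reservations[].Instances[].PrivateIpAddress" --output text
ips="10.0.0.4 10.0.0.5 10.0.0.6"
port=9092

bootstrap=""
for ip in $ips; do
  # Append "ip:port", inserting a comma before every entry after the first
  bootstrap="${bootstrap:+$bootstrap,}$ip:$port"
done

echo "--bootstrap.servers=$bootstrap"
```

Remember that only a few brokers are needed here; the rest are discovered via ZooKeeper.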
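Separately, to make the memory rule of thumb from the Hardware section concrete, here is the back-of-the-envelope calculation as a tiny script. The throughput figure is a made-up example, not a recommendation:

```shell
#!/bin/sh
# Back-of-the-envelope page-cache sizing (Hardware section rule of thumb).
# write_throughput_mb_s is hypothetical; substitute your expected workload.
write_throughput_mb_s=100   # MB/s you expect to be written to the cluster
buffer_seconds=30           # how long you want the OS to buffer writes

min_buffer_mb=$((write_throughput_mb_s * buffer_seconds))
echo "Minimum buffer memory: ${min_buffer_mb} MB"
```

At 100 MB/s this works out to about 3 GB of buffer memory, which is one reason 32GB+ machines are a comfortable fit once the JVM heap and other processes are accounted for.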