This folder contains a script for configuring and running Kafka. Typically, you would run this script while your
server is booting to start Kafka. This script assumes that the following are already installed:
The run-kafka script will generate a Kafka configuration file (see Kafka config docs
below for details) and then use Supervisord to start Kafka.
This script has been tested on the following operating systems:
Amazon Linux
Ubuntu
There is a good chance it will work on Debian, CentOS, and RHEL as well, but our automated testing for this
module does not cover these other distros at the moment.
Quick start
The easiest way to install the run-kafka script is with the Gruntwork
Installer:
You can install the run-kafka script by running the install.sh
file in the run-kafka module folder. The install.sh script requires the following arguments:
--config-dir-src: The directory containing the Kafka config files to copy.
--log4j-config-dir-src: The directory containing the Log4j config files to copy.
In addition, the following optional arguments are accepted:
--install-dir: The directory where the run-kafka files should be installed. Default: /opt/kafka.
--user: The user who will be set as the owner of --install-dir. Default: kafka.
Run install.sh with the --help option or see the source code to see all additional arguments. If you wish to use SSL
with Kafka, see the SSL Settings section below for additional arguments that are accepted.
If you're using gruntwork-install to install this module, you can pass these arguments using --module-param arguments.
Example:
When you run the run-kafka script, you must provide exactly one of the following arguments:
--zookeeper-eni-tag: The name and value of a tag, in the format name=value, that can be used to find ZooKeeper
server ENI IPs.
--zookeeper-connect: A comma-separated list of the IPs of ZooKeeper nodes to connect to.
The script also accepts a number of optional parameters to customize Kafka's behavior. Run the script with the
--help flag to see all available options. See the Kafka config docs below for the highlights.
In addition, you will most likely want to explicitly specify the following optional arguments:
--config-path: The path to the Kafka config file. Default: /opt/kafka/config/dev.kafka.properties
--log4j-config-path: The path to the Log4j config file. Default: /opt/kafka/config/dev.log4j.properties"
Although the above arguments are optional, in practice, a single server often contains configuration files for many
environments (e.g. dev, stage, prod), and you can use these arguments to specify exactly which environment's configuration
file should be used.
To see all other parameters excpted by run-kafka, run the script with the --help flag. Also, see the
Kafka config docs below for the highlights.
Kafka config
The run-kafka script dynamically fills in the most important values in a Kafka configuration
file. The script focuses primarily on values that differ from
environment to environment (i.e., stage and prod), so to see how to set other values, see
other settings.
Kafka uses ZooKeeper cluster for coordination. You can provide the IPs of the ZooKeeper nodes manually using the
--zookeeper-connect argument, or you can allow the run-kafka script to discover them automatically. To use the
automatic version, you specify the --zookeeper-eni-tag argument with the name and value, in the format name=value,
of a tag used on ZooKeeper ENIs. The latter option is based on the assumption that the ZooKeeper cluster is deployed
using package-zookeeper, which uses the server-group
module under the hood, which assigns
an ENI to each ZooKeeper server with special tags. See Server IPs and
IDs for more
info.
Broker ID
Every Kafka broker needs a unique ID. By default, the run-kafka script automatically figures out the Broker ID by
looking up the ServerGroupIndex tag for the current server. This tag is set by the server-group
module to a unique integer for each
server in the server group. You can override this value with a custom broker ID by specifying the --broker-id
argument.
Number of partitions
Every topic in Kafka consists of one or more partitions, which is one of the most important settings for determining
the throughput, availability, and end-to-end latency for that topic. See How to choose the number of topics/partitions
in a Kafka cluster?
for more info.
You specify the number of partitions when creating a topic (e.g., by using the kafka-topics.sh script). You can
also use the --num-partitions argument in the run-kafka script to configure the default number of partitions for
automatically created topics (i.e., topics that are created when a producer first tries to write to them).
Replication
The main way Kafka achieves durability and availability for your data is to replicate that data across multiple Kafka
brokers. Therefore, if one broker goes down, the data is still available in the other brokers.
You specify replication settings when creating a topic (e.g., by using the kafka-topics.sh script). You can also
specify the following replication settings using the run-kafka script:
--replication-factor: The default replication factor for auto-created topics.
--offsets-replication-factor: The replication factor for the internal __consumer_offsets topic.
--transaction-state-replication-factor: The replication factor for the internal __transaction_state topic.
In production, you typically want to set all of these to > 1 to ensure data isn't lost if a single broker dies.
Availability
The number of brokers you are running and the replication settings for your topics will be the biggest influence on
your Kafka cluster's availability in the face of outages. There are also two other important settings you can set via
the run-kafka script:
--min-in-sync-replicas: The number of replicas that must be in-sync when a producer sets acks to all. For
example, if you have 3 replicas, and you set this setting to 2, the producer will wait for 2 of the replicas to
acknowledge they received the write before the producer considers the write successful. Setting this to a higher
value (e.g., 3 for a topic with 3 replicas) reduces the chance of data loss, but it also reduces availability, as
even a single broker going down means writes will fail. Setting this to 2 for a replication factor of 3 is common.
--unclean-leader-election: If set to true, an out-of-sync replica will be elected as leader when there is no live
in-sync replica (ISR). This preserves the availability of the partition, but there is a chance of data loss. If set
to false and there are no live in-sync replicas, Kafka returns an error and the partition will be unavailable. In
general, if you're optimizing for availability, set this setting to true; if you're optimizing for reducing data
loss, set this setting to false.
JVM memory settings
By default, we configure Kafka to run with 6g of memory. You can override this with the --memory argument. If you
wish to override all JVM settings for Kafka, you can use the --jvm-opts argument.
SSL settings
By default, Kafka brokers communicate over plaintext. If you wish to enable SSL, set the --enable-ssl argument to
true. For SSL to work, you need the following:
A Key Store that contains an SSL certificate and a Trust Store that contains the the CA that signed that SSL
certificate. You can use the generate-key-stores module to generate the Key Store and
Trust Store and you can install them on your server by passing the --key-store-path and --trust-store-path
arguments, respectively, to the install-kafka module.
You must use the --key-store-password argument to provide the run-kafka script with the password you used when
creating the Key Store.
You must use the --trust-store-password argument to provide the run-kafka script with the password you used when
creating the Trust Store.
Log directories
By default, the run-kafka script will configure Kafka to logs in /opt/kafka/kafka-logs/data. We strongly
recommend mounting a separate EBS Volume at /opt/kafka/kafka-logs, as every write to Kafka gets persisted to disk,
and you get much better performance if there is no contention for the hard drive from other processes.
You can override the log directories via the --log-dirs argument.
Other settings
Kafka has many, many configuration settings. The run-kafka
script gives you a convenient way to set just a few of the most important ones, and especially those that may differ
from environment to environment. To set other types of settings, your best bet is to put them into a custom
server.properties file and to install that file using the install-kafka module by setting
the --config argument. You can find the default server.properties file used by install-kafkahere.
Please note that the run-kafka script does a simple search and replace using sed to fill in run-time properties,
so it will replace or add settings to your custom server.properties at run time.
Questions? Ask away.
We're here to talk about our services, answer any questions, give advice, or just to chat.
{"treedata":{"name":"root","toggled":true,"children":[{"name":".circleci","children":[{"name":"config.yml","path":".circleci/config.yml","sha":"dd0f528286142897f203dd952b485df9b9c88d7d"}]},{"name":".gitignore","path":".gitignore","sha":"3ad7ffd2955d69604514f4f6e09972a966f293fd"},{"name":".pre-commit-config.yaml","path":".pre-commit-config.yaml","sha":"2d4efdd2ec44b972a044249e753b3be33c61c9cc"},{"name":"CODEOWNERS","path":"CODEOWNERS","sha":"98ce69cba614ca3100ef584efdddc020568d295b"},{"name":"LICENSE.txt","path":"LICENSE.txt","sha":"689cf10ec98e3297a75bdd9b9fb5da10b7a675f8"},{"name":"README.md","path":"README.md","sha":"2e2c659b74732e71e449fb154a057b8644c966fa"},{"name":"examples","children":[{"name":"confluent-oss-ami","children":[{"name":"README.md","path":"examples/confluent-oss-ami/README.md","sha":"3b87f881e239e9ecc7715542dec35994a9b04134"},{"name":"check-for-kafka-trust-store.sh","path":"examples/confluent-oss-ami/check-for-kafka-trust-store.sh","sha":"e2e7d323e5153471809af13b4434c9edfad1b85b"},{"name":"check-for-key-store.sh","path":"examples/confluent-oss-ami/check-for-key-store.sh","sha":"120539a6d63dfd2ed208e92c0a25a32b6b86a2bd"},{"name":"config","children":[{"name":"README.md","path":"examples/confluent-oss-ami/config/README.md","sha":"bb2d45baa7ae69e53034277a6ed2ea433fbab3c1"},{"name":"kafka-connect","children":[{"name":"config","children":[{"name":"dev.worker-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-connect/config/dev.worker-4.0.x.properties","sha":"3df1dc856e3c2557ba683e90dd10d9b09c8c7b65"},{"name":"prod.worker-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-connect/config/prod.worker-4.0.x.properties","sha":"3df1dc856e3c2557ba683e90dd10d9b09c8c7b65"},{"name":"stage.worker-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-connect/config/stage.worker-4.0.x.properties","sha":"3df1dc856e3c2557ba683e90dd10d9b09c8c7b65"}]},{"name":"log4j","children":[{"name":"dev.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-connect/log4j/dev.log4j.properties","sha":"a23dfdbf369c5a8cba498d9016ab239f3c1c18a8"},{"name":"prod.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-connect/log4j/prod.log4j.properties","sha":"a23dfdbf369c5a8cba498d9016ab239f3c1c18a8"},{"name":"stage.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-connect/log4j/stage.log4j.properties","sha":"a23dfdbf369c5a8cba498d9016ab239f3c1c18a8"}]}]},{"name":"kafka-rest","children":[{"name":"config","children":[{"name":"dev.kafka-rest-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-rest/config/dev.kafka-rest-4.0.x.properties","sha":"de01923125bb6d66ea9f500e32122bde58b8b6cd"},{"name":"prod.kafka-rest-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-rest/config/prod.kafka-rest-4.0.x.properties","sha":"de01923125bb6d66ea9f500e32122bde58b8b6cd"},{"name":"stage.kafka-rest-4.0.x.properties","path":"examples/confluent-oss-ami/config/kafka-rest/config/stage.kafka-rest-4.0.x.properties","sha":"de01923125bb6d66ea9f500e32122bde58b8b6cd"}]},{"name":"log4j","children":[{"name":"dev.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-rest/log4j/dev.log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"},{"name":"prod.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-rest/log4j/prod.log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"},{"name":"stage.log4j.properties","path":"examples/confluent-oss-ami/config/kafka-rest/log4j/stage.log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"}]}]},{"name":"schema-registry","children":[{"name":"config","children":[{"name":"dev.schema-registry-4.0.x.properties","path":"examples/confluent-oss-ami/config/schema-registry/config/dev.schema-registry-4.0.x.properties","sha":"493b9cf71e0bed80a48d22eb46e655aa8eacdb39"},{"name":"prod.schema-registry-4.0.x.properties","path":"examples/confluent-oss-ami/config/schema-registry/config/prod.schema-registry-4.0.x.properties","sha":"493b9cf71e0bed80a48d22eb46e655aa8eacdb39"},{"name":"stage.schema-registry-4.0.x.properties","path":"examples/confluent-oss-ami/config/schema-registry/config/stage.schema-registry-4.0.x.properties","sha":"493b9cf71e0bed80a48d22eb46e655aa8eacdb39"}]},{"name":"log4j","children":[{"name":"dev.log4j.properties","path":"examples/confluent-oss-ami/config/schema-registry/log4j/dev.log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"},{"name":"prod.log4j.properties","path":"examples/confluent-oss-ami/config/schema-registry/log4j/prod.log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"},{"name":"stage.log4j.properties","path":"examples/confluent-oss-ami/config/schema-registry/log4j/stage.log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"}]}]}]},{"name":"configure-common-dependencies.sh","path":"examples/confluent-oss-ami/configure-common-dependencies.sh","sha":"001091d1113afc8cb9781c79c883ce3f76ebfdb6"},{"name":"configure-kafka-connect.sh","path":"examples/confluent-oss-ami/configure-kafka-connect.sh","sha":"b7bce433b54eaed4cf76f6cc12cf970d000d48fc"},{"name":"configure-kafka-rest.sh","path":"examples/confluent-oss-ami/configure-kafka-rest.sh","sha":"3daf9fb56cc1b612f29a0527e1cbee4ae1d21be2"},{"name":"configure-schema-registry.sh","path":"examples/confluent-oss-ami/configure-schema-registry.sh","sha":"6ae98447e1929690e6023d19df04642cdf98e24e"},{"name":"confluent-oss.json","path":"examples/confluent-oss-ami/confluent-oss.json","sha":"727bda7ec33c7c5417b084d758030dff90f7b36c"},{"name":"ssl","children":[{"name":"README.md","path":"examples/confluent-oss-ami/ssl/README.md","sha":"51859e48ac5ba48f1278f479d38112e69e761fa3"},{"name":"ca-cert","path":"examples/confluent-oss-ami/ssl/ca-cert","sha":"fb02e172efcdc4ad4c660e137059be86926108f4"},{"name":"cert","path":"examples/confluent-oss-ami/ssl/cert","sha":"0f486b16f80eebe97d9135542d229404a6b48ddc"},{"name":"kafka-connect","children":[{"name":"keystore","children":[{"name":"dev.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/keystore/dev.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"prod.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/keystore/prod.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"stage.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/keystore/stage.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"}]},{"name":"truststore","children":[{"name":"dev.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/truststore/dev.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"prod.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/truststore/prod.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"stage.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-connect/truststore/stage.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"}]}]},{"name":"kafka-rest","children":[{"name":"keystore","children":[{"name":"dev.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/keystore/dev.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"prod.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/keystore/prod.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"stage.keystore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/keystore/stage.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"}]},{"name":"truststore","children":[{"name":"dev.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/truststore/dev.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"prod.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/truststore/prod.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"stage.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka-rest/truststore/stage.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"}]}]},{"name":"kafka","children":[{"name":"truststore","children":[{"name":"dev.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka/truststore/dev.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"prod.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka/truststore/prod.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"stage.truststore.jks","path":"examples/confluent-oss-ami/ssl/kafka/truststore/stage.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"}]}]},{"name":"schema-registry","children":[{"name":"keystore","children":[{"name":"dev.keystore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/keystore/dev.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"prod.keystore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/keystore/prod.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"stage.keystore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/keystore/stage.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"}]},{"name":"truststore","children":[{"name":"dev.truststore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/truststore/dev.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"prod.truststore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/truststore/prod.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"stage.truststore.jks","path":"examples/confluent-oss-ami/ssl/schema-registry/truststore/stage.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"}]}]}]}]},{"name":"kafka-ami","children":[{"name":"README.md","path":"examples/kafka-ami/README.md","sha":"692e94969d0352ab15fb3f87a67b44085a1285a5"},{"name":"check-for-kafka-key-store.sh","path":"examples/kafka-ami/check-for-kafka-key-store.sh","sha":"27141a3a4a2ffe07fae4e83503d833ef3e3ec36b"},{"name":"config","children":[{"name":"README.md","path":"examples/kafka-ami/config/README.md","sha":"7192b2878bc130644a67bb46bdef1e4a3ea05e90"},{"name":"kafka","children":[{"name":"config","children":[{"name":"dev.server-4.0.x.properties","path":"examples/kafka-ami/config/kafka/config/dev.server-4.0.x.properties","sha":"9475458db43bb335995a061dd91866b23a460a73"},{"name":"prod.server-4.0.x.properties","path":"examples/kafka-ami/config/kafka/config/prod.server-4.0.x.properties","sha":"9475458db43bb335995a061dd91866b23a460a73"},{"name":"stage.server-4.0.x.properties","path":"examples/kafka-ami/config/kafka/config/stage.server-4.0.x.properties","sha":"9475458db43bb335995a061dd91866b23a460a73"}]},{"name":"log4j","children":[{"name":"dev.log4j.properties","path":"examples/kafka-ami/config/kafka/log4j/dev.log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"},{"name":"prod.log4j.properties","path":"examples/kafka-ami/config/kafka/log4j/prod.log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"},{"name":"stage.log4j.properties","path":"examples/kafka-ami/config/kafka/log4j/stage.log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"}]}]}]},{"name":"configure-kafka-server.sh","path":"examples/kafka-ami/configure-kafka-server.sh","sha":"45a46998ce5ae87332a472ebf13fc6e6eb351936"},{"name":"kafka.json","path":"examples/kafka-ami/kafka.json","sha":"4ee7845de22ac23f3431872f4f857b91acbfa364"},{"name":"ssl","children":[{"name":"README.md","path":"examples/kafka-ami/ssl/README.md","sha":"51859e48ac5ba48f1278f479d38112e69e761fa3"},{"name":"kafka","children":[{"name":"ca-cert","path":"examples/kafka-ami/ssl/kafka/ca-cert","sha":"fb02e172efcdc4ad4c660e137059be86926108f4"},{"name":"cert","path":"examples/kafka-ami/ssl/kafka/cert","sha":"0f486b16f80eebe97d9135542d229404a6b48ddc"},{"name":"keystore","children":[{"name":"dev.keystore.jks","path":"examples/kafka-ami/ssl/kafka/keystore/dev.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"prod.keystore.jks","path":"examples/kafka-ami/ssl/kafka/keystore/prod.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"},{"name":"stage.keystore.jks","path":"examples/kafka-ami/ssl/kafka/keystore/stage.keystore.jks","sha":"6283b3e9b655c2a987192e81b3a6172e6c9ea487"}]},{"name":"truststore","children":[{"name":"dev.truststore.jks","path":"examples/kafka-ami/ssl/kafka/truststore/dev.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"prod.truststore.jks","path":"examples/kafka-ami/ssl/kafka/truststore/prod.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"},{"name":"stage.truststore.jks","path":"examples/kafka-ami/ssl/kafka/truststore/stage.truststore.jks","sha":"9545e6ac795144d714c23f252abe79f4811d4d89"}]}]}]}]},{"name":"kafka-zookeeper-confluent-oss-ami","children":[{"name":"README.md","path":"examples/kafka-zookeeper-confluent-oss-ami/README.md","sha":"4365f33065f627f20bf84ad23154d34951dfb830"},{"name":"config","children":[{"name":"README.md","path":"examples/kafka-zookeeper-confluent-oss-ami/config/README.md","sha":"8bc1792593d171ae5f9ca9790a947a8e9684f5bb"},{"name":"kafka-connect","children":[{"name":"config","children":[{"name":"worker-4.0.x.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka-connect/config/worker-4.0.x.properties","sha":"f7fb51f4aaab1f8ee12d474f32c1edf71e0fe777"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka-connect/log4j/log4j.properties","sha":"a23dfdbf369c5a8cba498d9016ab239f3c1c18a8"}]}]},{"name":"kafka-rest","children":[{"name":"config","children":[{"name":"kafka-rest-4.0.x.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka-rest/config/kafka-rest-4.0.x.properties","sha":"de01923125bb6d66ea9f500e32122bde58b8b6cd"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka-rest/log4j/log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"}]}]},{"name":"kafka","children":[{"name":"config","children":[{"name":"server-4.0.x.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka/config/server-4.0.x.properties","sha":"cef1a3644e451e8f49b4bfbcfe31803f715782e2"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/kafka/log4j/log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"}]}]},{"name":"schema-registry","children":[{"name":"config","children":[{"name":"schema-registry-4.0.x.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/schema-registry/config/schema-registry-4.0.x.properties","sha":"996f342fb78bc52ea5cdba6e804e6be078fe6e27"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"examples/kafka-zookeeper-confluent-oss-ami/config/schema-registry/log4j/log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"}]}]}]},{"name":"configure-kafka-zk-confluent-server.sh","path":"examples/kafka-zookeeper-confluent-oss-ami/configure-kafka-zk-confluent-server.sh","sha":"d3be36026a611fba55ccee4650c188862009ab1c"},{"name":"docker-compose.yml","path":"examples/kafka-zookeeper-confluent-oss-ami/docker-compose.yml","sha":"8d042e5c17941461949245afb6844544bce18c03"},{"name":"kafka-zookeeper-confluent-oss.json","path":"examples/kafka-zookeeper-confluent-oss-ami/kafka-zookeeper-confluent-oss.json","sha":"9b18f15eaa39399fc373974f3d4f8cf00c3dccdc"},{"name":"mock","children":[{"name":"README.md","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/README.md","sha":"697cd7cb2605e3cc5f3d7baa953d2746edbd76cf"},{"name":"bash-commons","children":[{"name":"aws.sh","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/bash-commons/aws.sh","sha":"ce067be902c8c7c49b85bb395f8eb50b87a535e6"},{"name":"docker.sh","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/bash-commons/docker.sh","sha":"7827db443288057e5f2df9f43a955bc2afa464a4"}]},{"name":"modules","children":[{"name":"attach-eni","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/modules/attach-eni","sha":"da052caea4586b27c2dc13e521092e9403fcc327"},{"name":"mount-ebs-volume","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/modules/mount-ebs-volume","sha":"9b81549efc7c94e5baf609918e4831dd780eee2f"}]},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/kafka-zookeeper-confluent-oss-ami/mock/user-data/user-data.sh","sha":"0aecfeedce6c1c7c10b6b7e3fd6a518d41845d4d"}]}]},{"name":"wait_for_zk.sh","path":"examples/kafka-zookeeper-confluent-oss-ami/wait_for_zk.sh","sha":"0ac5e9e1bb712d727f14c8777b35c0ccfbfbf59e"}]},{"name":"kafka-zookeeper-confluent-oss-colocated-cluster","children":[{"name":"README.md","path":"examples/kafka-zookeeper-confluent-oss-colocated-cluster/README.md","sha":"f23cb780eb59f6bc20a4ea7435beedca81793da5"},{"name":"main.tf","path":"examples/kafka-zookeeper-confluent-oss-colocated-cluster/main.tf","sha":"6fb273f4af6a64796a1292aac3a5afc6fd368a55"},{"name":"outputs.tf","path":"examples/kafka-zookeeper-confluent-oss-colocated-cluster/outputs.tf","sha":"331d0632825a6f349d01751717ecf84bb08581c7"},{"name":"user-data","children":[{"name":"user-data.sh","path":"examples/kafka-zookeeper-confluent-oss-colocated-cluster/user-data/user-data.sh","sha":"a63ab90111ce199b179f3db24f04e00e6cfeaf69"}]},{"name":"vars.tf","path":"examples/kafka-zookeeper-confluent-oss-colocated-cluster/vars.tf","sha":"4a1c86725b6a1353924b38a8458ded00d91ca884"}]},{"name":"kafka-zookeeper-confluent-oss-standalone-clusters","children":[{"name":"README.md","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/README.md","sha":"ac9a46f4c3eb8fe1de3e4cbc49f7d8c837407462"},{"name":"main.tf","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/main.tf","sha":"1a42f1c1c32fc0c62145b4822c668f724996a8d1"},{"name":"outputs.tf","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/outputs.tf","sha":"3c12deec6a6f70912b60bd14f20ac5f40152046e"},{"name":"user-data","children":[{"name":"confluent-tools-cluster-user-data.sh","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/user-data/confluent-tools-cluster-user-data.sh","sha":"01b77ddab8cebc5367c05af25f361113d1d34a5f"},{"name":"kafka-cluster-user-data.sh","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/user-data/kafka-cluster-user-data.sh","sha":"4216a3d67c3fab1db94c78d290eb685952268b46"},{"name":"zookeeper-cluster-user-data.sh","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/user-data/zookeeper-cluster-user-data.sh","sha":"e5edb3e727f377436ae9c991cd2c7b01fc52baba"}]},{"name":"vars.tf","path":"examples/kafka-zookeeper-confluent-oss-standalone-clusters/vars.tf","sha":"2a0fda4741a9309a6b593f1bae7b9bb64027e52f"}]},{"name":"kafka-zookeeper-standalone-clusters","children":[{"name":"README.md","path":"examples/kafka-zookeeper-standalone-clusters/README.md","sha":"4029b604e4da5c28055f0b028a093f38c4757d22"},{"name":"main.tf","path":"examples/kafka-zookeeper-standalone-clusters/main.tf","sha":"cef030942b9286636c8fe633e50c12c93c6c7c93"},{"name":"outputs.tf","path":"examples/kafka-zookeeper-standalone-clusters/outputs.tf","sha":"051d6e5ddda7cef04fd9f5031a57694124762eca"},{"name":"user-data","children":[{"name":"kafka-user-data.sh","path":"examples/kafka-zookeeper-standalone-clusters/user-data/kafka-user-data.sh","sha":"ccb4d9324cabce27d4617c7e32ea10a10014385a"},{"name":"zookeeper-user-data.sh","path":"examples/kafka-zookeeper-standalone-clusters/user-data/zookeeper-user-data.sh","sha":"2e144c5dc55c4e721ecf595ae60df4553024191f"}]},{"name":"vars.tf","path":"examples/kafka-zookeeper-standalone-clusters/vars.tf","sha":"06bf2b854b3408fc5f5395310d9de6c841b39f6f"}]},{"name":"zookeeper-ami","children":[{"name":"README.md","path":"examples/zookeeper-ami/README.md","sha":"ffe6eaeb2341c27fe6bcc41f4de43a2afa86b760"},{"name":"configure-zookeeper-server.sh","path":"examples/zookeeper-ami/configure-zookeeper-server.sh","sha":"ed6a74e26838305c094bc81396f99f7ab6291a03"},{"name":"zookeeper.json","path":"examples/zookeeper-ami/zookeeper.json","sha":"4c54fcd4a9ecacbd317e1a605be7242d12a6658f"}]}]},{"name":"modules","children":[{"name":"bash-commons","children":[{"name":"README.md","path":"modules/bash-commons/README.md","sha":"5e571b5e344bbe3037fe3b47ab02c58f3c07fc84"},{"name":"install.sh","path":"modules/bash-commons/install.sh","sha":"d49cb442d6b141d64da539de2f655ccfd08663d8"},{"name":"lib","children":[{"name":"array.sh","path":"modules/bash-commons/lib/array.sh","sha":"2d4e0ef22dc608392e99522e8ff0eb68ed1f708c"},{"name":"assert.sh","path":"modules/bash-commons/lib/assert.sh","sha":"bfaf1740050694ed05d03bcff7dbc99724c4fc43"},{"name":"aws.sh","path":"modules/bash-commons/lib/aws.sh","sha":"e6986c813e1fef28dfd5881b0193c4925e8dc66b"},{"name":"file.sh","path":"modules/bash-commons/lib/file.sh","sha":"196b04006ff622844d6d198f78130b2d2fd7c0c6"},{"name":"java.sh","path":"modules/bash-commons/lib/java.sh","sha":"3cc8614fd91c2d9e0816555e558ee12ba0a8c95b"},{"name":"log.sh","path":"modules/bash-commons/lib/log.sh","sha":"1b5887a63f9e7de613707866753e2cfe910da4d1"},{"name":"os.sh","path":"modules/bash-commons/lib/os.sh","sha":"3371306dc7959874cf6b8935d9454e5c1c942c4d"},{"name":"strings.sh","path":"modules/bash-commons/lib/strings.sh","sha":"67a96995df1886ff0d4ce528b5fb26cfbe7b044d"}]}]},{"name":"confluent-tools-cluster","children":[{"name":"README.md","path":"modules/confluent-tools-cluster/README.md","sha":"5827a4de3358f59ed2a232d1255bcbbac40cf035"},{"name":"main.tf","path":"modules/confluent-tools-cluster/main.tf","sha":"b7cc023942bc9c2bcebf70e5fb514620713e3742"},{"name":"outputs.tf","path":"modules/confluent-tools-cluster/outputs.tf","sha":"12f380d0aff454ad6d556c116d84c344a62bb999"},{"name":"vars.tf","path":"modules/confluent-tools-cluster/vars.tf","sha":"f3a0723079f185876dee9b4dad3c60bcda50a2f7"}]},{"name":"confluent-tools-iam-permissions","children":[{"name":"README.md","path":"modules/confluent-tools-iam-permissions/README.md","sha":"0ac84778fee32e8ba436cfa43335637141173f77"},{"name":"main.tf","path":"modules/confluent-tools-iam-permissions/main.tf","sha":"608d5825c77ce8b44969009b0ade953315adcd80"},{"name":"vars.tf","path":"modules/confluent-tools-iam-permissions/vars.tf","sha":"9803fc14dc414f11fc9f236685dbb0db8f5273e6"}]},{"name":"confluent-tools-security-group-rules","children":[{"name":"README.md","path":"modules/confluent-tools-security-group-rules/README.md","sha":"d344146022c62c84ee4d59bcd29ab9564630be2b"},{"name":"main.tf","path":"modules/confluent-tools-security-group-rules/main.tf","sha":"0655a98adc507fdaebe7ccb25f32ab47a3a80819"},{"name":"vars.tf","path":"modules/confluent-tools-security-group-rules/vars.tf","sha":"3faf1ad90e4c918aefbec1545235ecf97ecc08cf"}]},{"name":"generate-key-stores","children":[{"name":"README.md","path":"modules/generate-key-stores/README.md","sha":"222d63d312f46bc4c591fe83ab91316d41de5eb3"},{"name":"generate-key-stores.sh","path":"modules/generate-key-stores/generate-key-stores.sh","sha":"b2076744af89591375d36974d4c3a84ba25f32bb"}]},{"name":"install-confluent-tools","children":[{"name":"README.md","path":"modules/install-confluent-tools/README.md","sha":"c4ab260cf2ec472384414605fdb7d45cb925da3f"},{"name":"install.sh","path":"modules/install-confluent-tools/install.sh","sha":"fd2639e7a7c3c28466a65a5ecf6a26a1bebcbefc"},{"name":"security","children":[{"name":"confluent.key","path":"modules/install-confluent-tools/security/confluent.key","sha":"1025a2c6dfa66f224c0c45ac172fd8d3efce1744"}]}]},{"name":"install-kafka","children":[{"name":"README.md","path":"modules/install-kafka/README.md","sha":"45cd3bd8d5b630c10bf806e0a43cdb985cd0327d"},{"name":"install.sh","path":"modules/install-kafka/install.sh","sha":"921775bea4b8d32f794e611e23867bd4de4b6ed5"}]},{"name":"kafka-cluster","children":[{"name":"README.md","path":"modules/kafka-cluster/README.md","sha":"1ab972e1beb7b9d5fe25235f2dbe2da0133a4aa8"},{"name":"main.tf","path":"modules/kafka-cluster/main.tf","sha":"d0b8666a94027db260d59d10ec972d64daa73079"},{"name":"outputs.tf","path":"modules/kafka-cluster/outputs.tf","sha":"d53790ae73bc413f3a21acb52064bddf30d0d517"},{"name":"vars.tf","path":"modules/kafka-cluster/vars.tf","sha":"b1efaed83ff7fe4535aa9ef6d10056e9a1ff6840"}]},{"name":"kafka-iam-permissions","children":[{"name":"README.md","path":"modules/kafka-iam-permissions/README.md","sha":"3e6fa930e9f98cef246bec891e8b6f7f005a060b"},{"name":"main.tf","path":"modules/kafka-iam-permissions/main.tf","sha":"f088a908908d098238bbb0cfafe7e4afd52a12b1"},{"name":"vars.tf","path":"modules/kafka-iam-permissions/vars.tf","sha":"d29e8e5a07834701c3050e6369607747f85e43d8"}]},{"name":"kafka-security-group-rules","children":[{"name":"README.md","path":"modules/kafka-security-group-rules/README.md","sha":"834343e96b147d606646c8ff5093484bc8095382"},{"name":"main.tf","path":"modules/kafka-security-group-rules/main.tf","sha":"7e9c549026ab05d821fe041491650ddd37911e54"},{"name":"vars.tf","path":"modules/kafka-security-group-rules/vars.tf","sha":"9c27d84eb7bfd6291acafc3e844f0ab7e1ec970f"}]},{"name":"run-health-checker","children":[{"name":"README.md","path":"modules/run-health-checker/README.md","sha":"79eac34492d6f0972c6ddabcdbf709a07e898bfa"},{"name":"bin","children":[{"name":"run-health-checker","path":"modules/run-health-checker/bin/run-health-checker","sha":"9aa7074256cf5686cfa35c34dd17777a60434480"}]},{"name":"install.sh","path":"modules/run-health-checker/install.sh","sha":"4f312a0bfe4635fdda2e2d023756269de7dd3087"}]},{"name":"run-kafka-connect","children":[{"name":"README.md","path":"modules/run-kafka-connect/README.md","sha":"d7c5b8a0aa19257c0da6937ff9077737519dfb8b"},{"name":"bin","children":[{"name":"run-kafka-connect","path":"modules/run-kafka-connect/bin/run-kafka-connect","sha":"cda699c97799d12a5894e1ad9bde8811492581e2"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-kafka-connect/config/README.md","sha":"5c6f2b6e63f1eba41957dd083c0787ac139c9622"},{"name":"kafka-connect","children":[{"name":"worker-3.3.x.properties","path":"modules/run-kafka-connect/config/kafka-connect/worker-3.3.x.properties","sha":"66abc6f293b50ee16ed14ad801bc36a321c1a24e"},{"name":"worker-4.0.x.properties","path":"modules/run-kafka-connect/config/kafka-connect/worker-4.0.x.properties","sha":"3df1dc856e3c2557ba683e90dd10d9b09c8c7b65"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-kafka-connect/config/log4j/log4j.properties","sha":"a23dfdbf369c5a8cba498d9016ab239f3c1c18a8"}]}]},{"name":"install.sh","path":"modules/run-kafka-connect/install.sh","sha":"41f328ce78019caa553b735e34579453c7834fcc"},{"name":"security","children":[{"name":"README.md","path":"modules/run-kafka-connect/security/README.md","sha":"5242a8435552c50055c926e1ec704545ca2c1b24"},{"name":"confluent-3.3.1-2.11.tar.gz.checksum","path":"modules/run-kafka-connect/security/confluent-3.3.1-2.11.tar.gz.checksum","sha":"c7aed490972e7b1565795221488d18449bd0bae1"},{"name":"confluent-4.0.0-2.11.tar.gz.checksum","path":"modules/run-kafka-connect/security/confluent-4.0.0-2.11.tar.gz.checksum","sha":"27b7a13f188475b4157386aa2761915633b72aa3"}]}]},{"name":"run-kafka-rest","children":[{"name":"README.md","path":"modules/run-kafka-rest/README.md","sha":"996361ecaa26a992660ef86a5f9f3d240a2f1c8a"},{"name":"bin","children":[{"name":"run-kafka-rest","path":"modules/run-kafka-rest/bin/run-kafka-rest","sha":"90fd0fb143c65bfbd7b7bfdd3cb870487c201ace"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-kafka-rest/config/README.md","sha":"772b1a95a54aadd3c0f0dce34eac10f3d1967634"},{"name":"kafka-rest","children":[{"name":"kafka-rest-3.3.x.properties","path":"modules/run-kafka-rest/config/kafka-rest/kafka-rest-3.3.x.properties","sha":"e3c4f56b361ab64a9a1869134f227d10ed43710d"},{"name":"kafka-rest-4.0.x.properties","path":"modules/run-kafka-rest/config/kafka-rest/kafka-rest-4.0.x.properties","sha":"de01923125bb6d66ea9f500e32122bde58b8b6cd"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-kafka-rest/config/log4j/log4j.properties","sha":"43c18e3a2eb5bdf7a49c0336919aac1acf5f6b6d"}]}]},{"name":"install.sh","path":"modules/run-kafka-rest/install.sh","sha":"dbdd44f684b225be35fd3f5932b06def213193a4"}]},{"name":"run-kafka","children":[{"name":"README.md","path":"modules/run-kafka/README.md","sha":"63ea86b385c32e6fdf94a3ed802c3ffe24c15abd","toggled":true},{"name":"bin","children":[{"name":"run-kafka","path":"modules/run-kafka/bin/run-kafka","sha":"0fd73251f3a13d2f9d3c456cee6e9ff8577193c7"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-kafka/config/README.md","sha":"e08702423b137b254ad0a0a070f7300f094ba046"},{"name":"kafka","children":[{"name":"server-3.3.x.properties","path":"modules/run-kafka/config/kafka/server-3.3.x.properties","sha":"73f50d56a543c6844fd04c78b9fdb5cad2a17821"},{"name":"server-4.0.x.properties","path":"modules/run-kafka/config/kafka/server-4.0.x.properties","sha":"9475458db43bb335995a061dd91866b23a460a73"}]},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-kafka/config/log4j/log4j.properties","sha":"394c539d46d5922b33ba1e8b3a50db2fbed7e6ef"}]}]},{"name":"install.sh","path":"modules/run-kafka/install.sh","sha":"185411f1db6298c68d8608d6415aa9f30fdb5b36"}],"toggled":true},{"name":"run-schema-registry","children":[{"name":"README.md","path":"modules/run-schema-registry/README.md","sha":"ae763c5f1984bae3aaab4a2e6a14cc68b0269eb0"},{"name":"bin","children":[{"name":"run-schema-registry","path":"modules/run-schema-registry/bin/run-schema-registry","sha":"f2671dea6c6956d51d5afa1db98e784cd88efedb"}]},{"name":"config","children":[{"name":"README.md","path":"modules/run-schema-registry/config/README.md","sha":"93a30f8adf3f778682463bbc8715a38f70908866"},{"name":"log4j","children":[{"name":"log4j.properties","path":"modules/run-schema-registry/config/log4j/log4j.properties","sha":"28fa60645b6ba0ab402433aebbedec8a8a9533e3"}]},{"name":"schema-registry","children":[{"name":"schema-registry.properties","path":"modules/run-schema-registry/config/schema-registry/schema-registry.properties","sha":"493b9cf71e0bed80a48d22eb46e655aa8eacdb39"}]}]},{"name":"install.sh","path":"modules/run-schema-registry/install.sh","sha":"8789a1ad855edb62edf43d1b99cdb037febc81c3"}]}],"toggled":true},{"name":"test","children":[{"name":"Gopkg.lock","path":"test/Gopkg.lock","sha":"36c942930f17f1a71f01cae09781d533740aa85c"},{"name":"Gopkg.toml","path":"test/Gopkg.toml","sha":"81756576186093e3aa91551eb19e1ce7f8e1a472"},{"name":"README.md","path":"test/README.md","sha":"175b4d4742064176dea672c3eb24fce4539e3ca1"},{"name":"generate_key_stores_test.go","path":"test/generate_key_stores_test.go","sha":"e4a448959b8b8e6f5baf4aa5bc02953cf39df6ab"},{"name":"kafka_zookeeper_confluent_colocated_cluster_test.go","path":"test/kafka_zookeeper_confluent_colocated_cluster_test.go","sha":"5abec9625f18ba8c60d6498d5048d2db303e16d2"},{"name":"kafka_zookeeper_confluent_standalone_clusters_test.go","path":"test/kafka_zookeeper_confluent_standalone_clusters_test.go","sha":"ee4dd90d2ad4aff46e4878616108b2bd488ee530"},{"name":"kafka_zookeeper_standalone_clusters_test.go","path":"test/kafka_zookeeper_standalone_clusters_test.go","sha":"8ca3cba7ca9b74f03a6354a6a7e641a1e1835ca4"},{"name":"test_helpers.go","path":"test/test_helpers.go","sha":"f01924f4c18c7595d58621e5e3dcc2128996f44e"},{"name":"test_helpers_kafka.go","path":"test/test_helpers_kafka.go","sha":"474caf79574b27637d376fe5ab65abea9af7eb3c"},{"name":"test_helpers_kafka_connect.go","path":"test/test_helpers_kafka_connect.go","sha":"1ecef92e9a45501fb4bf834579d02ddcc05e7103"},{"name":"test_helpers_keystore.go","path":"test/test_helpers_keystore.go","sha":"02d88327f4021955dca74c5311eb916bed5c7afa"},{"name":"test_helpers_rest_proxy.go","path":"test/test_helpers_rest_proxy.go","sha":"39a5c7d2f96a873615c856cd56861ad5e7920e1c"},{"name":"test_helpers_schema_registry.go","path":"test/test_helpers_schema_registry.go","sha":"302c785e653a1e10ad085240e2e78b500d5e1672"}]}]},"detailsContent":"<h1 class=\"preview__body--title\" id=\"run-kafka-script\">Run Kafka Script</h1><div class=\"preview__body--border\"></div><p>This folder contains a script for configuring and running Kafka. Typically, you would run this script while your\nserver is booting to start Kafka. This script assumes that the following are already installed:</p>\n<ol>\n<li>\n<p>Kafka: see the <a href=\"/repos/v0.6.1/package-kafka/modules/install-kafka\" class=\"preview__body--description--blue\">install-kafka module</a>.</p>\n</li>\n<li>\n<p>Supervisord: see the <a href=\"/repos/package-zookeeper/modules/install-supervisord\" class=\"preview__body--description--blue\">install-supervisord\nmodule</a> in\n<a href=\"/repos/package-zookeeper\" class=\"preview__body--description--blue\">package-zookeeper</a>.</p>\n</li>\n<li>\n<p><a href=\"https://aws.amazon.com/cli/\" class=\"preview__body--description--blue\" target=\"_blank\">AWS CLI</a>.</p>\n</li>\n<li>\n<p><a href=\"https://stedolan.github.io/jq/\" class=\"preview__body--description--blue\" target=\"_blank\">jq</a>.</p>\n</li>\n</ol>\n<p>The <code>run-kafka</code> script will generate a Kafka configuration file (see <a href=\"#kafka-config\" class=\"preview__body--description--blue\">Kafka config docs</a>\nbelow for details) and then use Supervisord to start Kafka.</p>\n<p>This script has been tested on the following operating systems:</p>\n<ul>\n<li>Amazon Linux</li>\n<li>Ubuntu</li>\n</ul>\n<p>There is a good chance it will work on Debian, CentOS, and RHEL as well, but our automated testing for this\nmodule does not cover these other distros at the moment.</p>\n<h2 class=\"preview__body--subtitle\" id=\"quick-start\">Quick start</h2>\n<p>The easiest way to install the run-kafka script is with the <a href=\"/repos/gruntwork-installer\" class=\"preview__body--description--blue\">Gruntwork\nInstaller</a>:</p>\n<pre>gruntwork-install \\\n -<span class=\"ruby\">-<span class=\"hljs-class\"><span class=\"hljs-keyword\">module</span>-<span class=\"hljs-title\">name</span> \"<span class=\"hljs-title\">run</span>-<span class=\"hljs-title\">kafka</span>\" \\</span>\n</span> -<span class=\"ruby\">-repo <span class=\"hljs-string\">\"https://github.com/gruntwork-io/package-kafka\"</span> \\\n</span> -<span class=\"ruby\">-tag <span class=\"hljs-string\">\"v0.0.4\"</span>\n</span></pre>\n<p>We recommend running this module, as well as <a href=\"/repos/v0.6.1/package-kafka/modules/install-kafka\" class=\"preview__body--description--blue\">install-kafka</a>, as part of a\n<a href=\"https://www.packer.io/\" class=\"preview__body--description--blue\" target=\"_blank\">Packer</a> template to create an <a href=\"http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html\" class=\"preview__body--description--blue\" target=\"_blank\">Amazon Machine Image\n(AMI)</a> (see the <a href=\"/repos/v0.6.1/package-kafka/examples/kafka-ami\" class=\"preview__body--description--blue\">kafka-ami\nexample</a> for a fully-working sample code).</p>\n<h2 class=\"preview__body--subtitle\" id=\"install-command-line-arguments\">Install Command Line Arguments</h2>\n<p>You can install the <code>run-kafka</code> script by running the <a href=\"/repos/v0.6.1/package-kafka/modules/run-kafka/install.sh\" class=\"preview__body--description--blue\">install.sh</a>\nfile in the <code>run-kafka</code> module folder. The <code>install.sh</code> script requires the following arguments:</p>\n<ul>\n<li><code>--config-dir-src</code>: The directory containing the Kafka config files to copy.</li>\n<li><code>--log4j-config-dir-src</code>: The directory containing the Log4j config files to copy.</li>\n</ul>\n<p>In addition, the following optional arguments are accepted:</p>\n<ul>\n<li><code>--install-dir</code>: The directory where the <code>run-kafka</code> files should be installed. Default: <code>/opt/kafka</code>.</li>\n<li><code>--user</code>: The user who will be set as the owner of <code>--install-dir</code>. Default: <code>kafka</code>.</li>\n</ul>\n<p>Run <code>install.sh</code> with the <code>--help</code> option or see the source code to see all additional arguments. If you wish to use SSL\nwith Kafka, see the <a href=\"#ssl-settings\" class=\"preview__body--description--blue\">SSL Settings</a> section below for additional arguments that are accepted.</p>\n<p>If you're using <code>gruntwork-install</code> to install this module, you can pass these arguments using <code>--module-param</code> arguments.\nExample:</p>\n<pre>gruntwork-install \\\n -<span class=\"ruby\">-<span class=\"hljs-class\"><span class=\"hljs-keyword\">module</span>-<span class=\"hljs-title\">name</span> \"<span class=\"hljs-title\">run</span>-<span class=\"hljs-title\">kafka</span>\" \\</span>\n</span> -<span class=\"ruby\">-repo <span class=\"hljs-string\">\"https://github.com/gruntwork-io/package-kafka\"</span> \\\n</span> -<span class=\"ruby\">-tag <span class=\"hljs-string\">\"v0.0.4\"</span> \\\n</span> -<span class=\"ruby\">-<span class=\"hljs-class\"><span class=\"hljs-keyword\">module</span>-<span class=\"hljs-title\">param</span> \"<span class=\"hljs-title\">config</span>-<span class=\"hljs-title\">dir</span>-<span class=\"hljs-title\">src</span>=/<span class=\"hljs-title\">tmp</span>/<span class=\"hljs-title\">config</span>/<span class=\"hljs-title\">kafka</span>/<span class=\"hljs-title\">config</span>\" \\</span>\n</span> -<span class=\"ruby\">-<span class=\"hljs-class\"><span class=\"hljs-keyword\">module</span>-<span class=\"hljs-title\">param</span> \"<span class=\"hljs-title\">log4j</span>-<span class=\"hljs-title\">config</span>-<span class=\"hljs-title\">dir</span>-<span class=\"hljs-title\">src</span>=/<span class=\"hljs-title\">tmp</span>/<span class=\"hljs-title\">config</span>/<span class=\"hljs-title\">kafka</span>/<span class=\"hljs-title\">log4j</span>\"</span>\n</span></pre>\n<h2 class=\"preview__body--subtitle\" id=\"run-kafka-command-line-arguments\">run-kafka command line arguments</h2>\n<p>When you run the <code>run-kafka</code> script, you must provide exactly one of the following arguments:</p>\n<ul>\n<li>\n<p><code>--zookeeper-eni-tag</code>: The name and value of a tag, in the format <code>name=value</code>, that can be used to find ZooKeeper\nserver ENI IPs.</p>\n</li>\n<li>\n<p><code>--zookeeper-connect</code>: A comma-separated list of the IPs of ZooKeeper nodes to connect to.</p>\n</li>\n</ul>\n<p>The script also accepts a number of optional parameters to customize Kafka's behavior. Run the script with the\n<code>--help</code> flag to see all available options. See the <a href=\"#kafka-config\" class=\"preview__body--description--blue\">Kafka config docs</a> below for the highlights.</p>\n<p>In addition, you will most likely want to explicitly specify the following optional arguments:</p>\n<ul>\n<li>\n<p><code>--config-path</code>: The path to the Kafka config file. Default: <code>/opt/kafka/config/dev.kafka.properties</code></p>\n</li>\n<li>\n<p><code>--log4j-config-path</code>: The path to the Log4j config file. Default: <code>/opt/kafka/config/dev.log4j.properties</code>"</p>\n</li>\n</ul>\n<p>Although the above arguments are optional, in practice, a single server often contains configuration files for many\nenvironments (e.g. dev, stage, prod), and you can use these arguments to specify exactly which environment's configuration\nfile should be used.</p>\n<p>To see all other parameters excpted by <code>run-kafka</code>, run the script with the <code>--help</code> flag. Also, see the\n<a href=\"#kafka-config\" class=\"preview__body--description--blue\">Kafka config docs</a> below for the highlights.</p>\n<h2 class=\"preview__body--subtitle\" id=\"kafka-config\">Kafka config</h2>\n<p>The <code>run-kafka</code> script dynamically fills in the most important values in a <a href=\"https://kafka.apache.org/documentation/#configuration\" class=\"preview__body--description--blue\" target=\"_blank\">Kafka configuration\nfile</a>. The script focuses primarily on values that differ from\nenvironment to environment (i.e., stage and prod), so to see how to set other values, see\n<a href=\"#other-settings\" class=\"preview__body--description--blue\">other settings</a>.</p>\n<p>Here are the key items to pay attention to:</p>\n<ul>\n<li><a href=\"#zookeeper-ips\" class=\"preview__body--description--blue\">ZooKeeper IPs</a></li>\n<li><a href=\"#broker-id\" class=\"preview__body--description--blue\">Broker ID</a></li>\n<li><a href=\"#number-of-partitions\" class=\"preview__body--description--blue\">Number of partitions</a></li>\n<li><a href=\"#replication\" class=\"preview__body--description--blue\">Replication</a></li>\n<li><a href=\"#availability\" class=\"preview__body--description--blue\">Availability</a></li>\n<li><a href=\"#jvm-memory-settings\" class=\"preview__body--description--blue\">JVM memory settings</a></li>\n<li><a href=\"#ssl-settings\" class=\"preview__body--description--blue\">SSL settings</a></li>\n<li><a href=\"#log-directories\" class=\"preview__body--description--blue\">Log directories</a></li>\n</ul>\n<h1 class=\"preview__body--title\" id=\"other-settings-other-settings\"><a href=\"#other-settings\" class=\"preview__body--description--blue\">Other settings</a></h1><div class=\"preview__body--border\"></div><h3 class=\"preview__body--subtitle\" id=\"zoo-keeper-i-ps\">ZooKeeper IPs</h3>\n<p>Kafka uses ZooKeeper cluster for coordination. You can provide the IPs of the ZooKeeper nodes manually using the\n<code>--zookeeper-connect</code> argument, or you can allow the <code>run-kafka</code> script to discover them automatically. To use the\nautomatic version, you specify the <code>--zookeeper-eni-tag</code> argument with the name and value, in the format <code>name=value</code>,\nof a tag used on ZooKeeper ENIs. The latter option is based on the assumption that the ZooKeeper cluster is deployed\nusing <a href=\"/repos/package-zookeeper\" class=\"preview__body--description--blue\">package-zookeeper</a>, which uses the <a href=\"/repos/module-asg/modules/server-group\" class=\"preview__body--description--blue\">server-group\nmodule</a> under the hood, which assigns\nan ENI to each ZooKeeper server with special tags. See <a href=\"/repos/package-zookeeper/modules/run-exhibitor#server-ips-and-ids\" class=\"preview__body--description--blue\">Server IPs and\nIDs</a> for more\ninfo.</p>\n<h3 class=\"preview__body--subtitle\" id=\"broker-id\">Broker ID</h3>\n<p>Every Kafka broker needs a unique ID. By default, the <code>run-kafka</code> script automatically figures out the Broker ID by\nlooking up the <code>ServerGroupIndex</code> tag for the current server. This tag is set by the <a href=\"/repos/module-asg/modules/server-group\" class=\"preview__body--description--blue\">server-group\nmodule</a> to a unique integer for each\nserver in the server group. You can override this value with a custom broker ID by specifying the <code>--broker-id</code>\nargument.</p>\n<h3 class=\"preview__body--subtitle\" id=\"number-of-partitions\">Number of partitions</h3>\n<p>Every topic in Kafka consists of one or more partitions, which is one of the most important settings for determining\nthe throughput, availability, and end-to-end latency for that topic. See <a href=\"https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/\" class=\"preview__body--description--blue\" target=\"_blank\">How to choose the number of topics/partitions\nin a Kafka cluster?</a>\nfor more info.</p>\n<p>You specify the number of partitions when creating a topic (e.g., by using the <code>kafka-topics.sh</code> script). You can\nalso use the <code>--num-partitions</code> argument in the <code>run-kafka</code> script to configure the default number of partitions for\nautomatically created topics (i.e., topics that are created when a producer first tries to write to them).</p>\n<h3 class=\"preview__body--subtitle\" id=\"replication\">Replication</h3>\n<p>The main way Kafka achieves durability and availability for your data is to replicate that data across multiple Kafka\nbrokers. Therefore, if one broker goes down, the data is still available in the other brokers.</p>\n<p>You specify replication settings when creating a topic (e.g., by using the <code>kafka-topics.sh</code> script). You can also\nspecify the following replication settings using the <code>run-kafka</code> script:</p>\n<ul>\n<li><code>--replication-factor</code>: The default replication factor for auto-created topics.</li>\n<li><code>--offsets-replication-factor</code>: The replication factor for the internal <code>__consumer_offsets topic</code>.</li>\n<li><code>--transaction-state-replication-factor</code>: The replication factor for the internal <code>__transaction_state</code> topic.</li>\n</ul>\n<p>In production, you typically want to set all of these to > 1 to ensure data isn't lost if a single broker dies.</p>\n<h3 class=\"preview__body--subtitle\" id=\"availability\">Availability</h3>\n<p>The number of brokers you are running and the replication settings for your topics will be the biggest influence on\nyour Kafka cluster's availability in the face of outages. There are also two other important settings you can set via\nthe <code>run-kafka</code> script:</p>\n<ul>\n<li>\n<p><code>--min-in-sync-replicas</code>: The number of replicas that must be in-sync when a producer sets <code>acks</code> to <code>all</code>. For\nexample, if you have 3 replicas, and you set this setting to 2, the producer will wait for 2 of the replicas to\nacknowledge they received the write before the producer considers the write successful. Setting this to a higher\nvalue (e.g., 3 for a topic with 3 replicas) reduces the chance of data loss, but it also reduces availability, as\neven a single broker going down means writes will fail. Setting this to 2 for a replication factor of 3 is common.</p>\n</li>\n<li>\n<p><code>--unclean-leader-election</code>: If set to true, an out-of-sync replica will be elected as leader when there is no live\nin-sync replica (ISR). This preserves the availability of the partition, but there is a chance of data loss. If set\nto false and there are no live in-sync replicas, Kafka returns an error and the partition will be unavailable. In\ngeneral, if you're optimizing for availability, set this setting to true; if you're optimizing for reducing data\nloss, set this setting to false.</p>\n</li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"jvm-memory-settings\">JVM memory settings</h3>\n<p>By default, we configure Kafka to run with <code>6g</code> of memory. You can override this with the <code> --memory</code> argument. If you\nwish to override all JVM settings for Kafka, you can use the <code>--jvm-opts</code> argument.</p>\n<h3 class=\"preview__body--subtitle\" id=\"ssl-settings\">SSL settings</h3>\n<p>By default, Kafka brokers communicate over plaintext. If you wish to enable SSL, set the <code>--enable-ssl</code> argument to\n<code>true</code>. For SSL to work, you need the following:</p>\n<ul>\n<li>\n<p>A Key Store that contains an SSL certificate and a Trust Store that contains the the CA that signed that SSL\ncertificate. You can use the <a href=\"/repos/v0.6.1/package-kafka/modules/generate-key-stores\" class=\"preview__body--description--blue\">generate-key-stores</a> module to generate the Key Store and\nTrust Store and you can install them on your server by passing the <code>--key-store-path</code> and <code>--trust-store-path</code>\narguments, respectively, to the <a href=\"/repos/v0.6.1/package-kafka/modules/install-kafka\" class=\"preview__body--description--blue\">install-kafka module</a>.</p>\n</li>\n<li>\n<p>You must use the <code>--key-store-password</code> argument to provide the <code>run-kafka</code> script with the password you used when\ncreating the Key Store.</p>\n</li>\n<li>\n<p>You must use the <code>--trust-store-password</code> argument to provide the <code>run-kafka</code> script with the password you used when\ncreating the Trust Store.</p>\n</li>\n</ul>\n<h3 class=\"preview__body--subtitle\" id=\"log-directories\">Log directories</h3>\n<p>By default, the <code>run-kafka</code> script will configure Kafka to logs in <code>/opt/kafka/kafka-logs/data</code>. We strongly\nrecommend mounting a separate EBS Volume at <code>/opt/kafka/kafka-logs</code>, as every write to Kafka gets persisted to disk,\nand you get much better performance if there is no contention for the hard drive from other processes.</p>\n<p>You can override the log directories via the <code>--log-dirs</code> argument.</p>\n<h2 class=\"preview__body--subtitle\" id=\"other-settings\">Other settings</h2>\n<p>Kafka has <a href=\"https://kafka.apache.org/documentation/#configuration\" class=\"preview__body--description--blue\" target=\"_blank\">many, many configuration settings</a>. The <code>run-kafka</code>\nscript gives you a convenient way to set just a few of the most important ones, and especially those that may differ\nfrom environment to environment. To set other types of settings, your best bet is to put them into a custom\n<code>server.properties</code> file and to install that file using the <a href=\"/repos/v0.6.1/package-kafka/modules/install-kafka\" class=\"preview__body--description--blue\">install-kafka module</a> by setting\nthe <code>--config</code> argument. You can find the default <code>server.properties</code> file used by <code>install-kafka</code>\n<a href=\"/repos/v0.6.1/package-kafka/modules/install-kafka/config/server.properties\" class=\"preview__body--description--blue\">here</a>.</p>\n<p>Please note that the <code>run-kafka</code> script does a simple search and replace using <code>sed</code> to fill in run-time properties,\nso it will replace or add settings to your custom <code>server.properties</code> at run time.</p>\n","repoName":"package-kafka","repoRef":"v0.6.1","serviceDescriptor":{"serviceName":"Apache Kafka and Confluent Tools","serviceRepoName":"package-kafka","serviceRepoOrg":"gruntwork-io","cloudProviders":["aws"],"description":"Deploy a cluster of Kafka brokers. Optionally deploy Confluent tools such as Schema Registry, REST Proxy, and Kafka Connect.","imageUrl":"kafka.png","licenseType":"subscriber","technologies":["Terraform","Bash"],"compliance":[],"tags":[""]},"serviceCategoryName":"Messaging & streaming","fileName":"README.md","filePath":"/modules/run-kafka","title":"Repo Browser: Apache Kafka and Confluent Tools","description":"Browse the repos in the Gruntwork Infrastructure as Code Library."}