Browse the Repo

.circleci
examples
  confluent-oss-ami
  kafka-ami
  kafka-zookeeper-confluent-oss-ami
  kafka-zookeeper-confluent-oss-colocated-cl...
  kafka-zookeeper-confluent-oss-standalone-c...
  kafka-zookeeper-standalone-clusters
    user-data
      kafka-user-data.sh
      zookeeper-user-data.sh
    README.md
    main.tf
    outputs.tf
    vars.tf
  zookeeper-ami
modules
test
.gitignore
.pre-commit-config.yaml
CODEOWNERS
LICENSE.txt
README.md
terraform-cloud-enterprise-private-module-...


Apache Kafka and Confluent Tools

Deploy a cluster of Kafka brokers. Optionally deploy Confluent tools such as Schema Registry, REST Proxy, and Kafka Connect.
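
To give a sense of how this repo is consumed, here is a minimal Terraform sketch of instantiating a Kafka cluster from it. The module source, ref, and every variable name and value below are illustrative assumptions, not the repo's actual interface; the real inputs live in the vars.tf of the example you pick (for instance examples/kafka-zookeeper-standalone-clusters) and in the modules folder.

# Hypothetical usage sketch; module source and variable names are assumptions.
module "kafka_brokers" {
  # Placeholder source; point this at the real module in this repo.
  source = "git::git@github.com:gruntwork-io/terraform-aws-kafka.git//modules/kafka-cluster?ref=v0.6.0"

  cluster_name  = "example-kafka"
  cluster_size  = 3
  instance_type = "t3.large"

  # AMI built from the Packer template under examples/kafka-ami
  ami_id = var.kafka_ami_id

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids

  # User data rendered from user-data/kafka-user-data.sh (see the sketch after the code preview below)
  user_data = local.kafka_user_data
}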

Preview the Code

kafka-user-data.sh
#!/bin/bash
# This script is meant to be run in the User Data of each Kafka broker EC2 Instance while it's booting. The script uses
# the run-kafka script to configure and start Kafka. Note that this script assumes it's running in an AMI built from
# the Packer template in examples/kafka-ami/kafka.json.
#
# Note that many of the variables below are filled in via Terraform interpolation.

set -e

# Send the log output from this script to user-data.log, syslog, and the console
# From: https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

# Mount the EBS volume used for Kafka logs. Every write to Kafka is written to a log file and you get
# better performance if you store these log files on a completely separate disk that does not have to
# contend with any other I/O operations. The log.dirs setting in Kafka should be pointed at this volume.
# http://docs.confluent.io/current/kafka/deployment.html#disks
echo "Mounting EBS volume as device name ${logs_volume_device_name} at ${logs_volume_mount_point}"
/usr/local/bin/mount-ebs-volume \
  --aws-region "${aws_region}" \
  --volume-with-same-tag "ebs-volume-0" \
  --device-name "${logs_volume_device_name}" \
  --mount-point "${logs_volume_mount_point}" \
  --owner "${kafka_user}"
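
The ${...} references in the script above are not shell variables; they are Terraform template interpolations that are filled in when the Terraform code renders the user data. Below is a minimal sketch of how such a template could be rendered with Terraform's built-in templatefile() function; the local name, path, and values are assumptions for illustration, and the example code in this repo may wire this up differently (for example via a template_file data source).

# Hypothetical rendering of the user data template; values are assumptions.
locals {
  kafka_user_data = templatefile("${path.module}/user-data/kafka-user-data.sh", {
    aws_region              = var.aws_region
    kafka_user              = "kafka"
    logs_volume_device_name = "/dev/xvdh"
    logs_volume_mount_point = "/opt/kafka/kafka-logs"
  })
}

# The rendered script is then passed to the broker instances as user data,
# e.g. user_data = local.kafka_user_data on the module or launch configuration.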
