Gruntwork Newsletter, August 2018

Yevgeniy Brikman
Co-Founder
Published July 11, 2018

Once a month, we send out a newsletter to all Gruntwork customers that describes all the updates we’ve made in the last month, news in the DevOps industry, and important security updates. Note that many of the links below go to private repos in the Gruntwork Infrastructure as Code Library and Reference Architecture that are only accessible to customers.

Hello Grunts,

In the last month, we created a set of reusable modules to run your own ELK Stack (Elasticsearch, Logstash, Kibana) in AWS, released our comprehensive guide to authenticating to AWS on the CLI, and fixed a number of bugs. In other news, HashiCorp has released a blog post series outlining the powerful new features coming in Terraform 0.12, AWS has added support for redirects and fixed-content responses to the ALB, and Jenkins has another severe security vulnerability.

As always, if you have any questions or need help, email us at support@gruntwork.io!

Gruntwork Updates

ELK Package

Motivation: Several of our customers wanted to run the ELK stack—Elasticsearch, Logstash, and Kibana—but could not use Amazon’s hosted Elasticsearch Service due to a number of limitations, including:

  1. You can’t install custom Elasticsearch plugins.
  2. Authentication options are limited.
  3. Configuration options are limited.
  4. Monitoring options are limited.
  5. No support for in-place upgrades.
  6. You can only run backups once per day.
  7. Various other problems you can read about here and here.

Solution: We created a set of reusable modules that allow you to deploy and run your own ELK cluster in AWS! These modules can be combined and configured in a variety of ways, such as in the following architecture:

We’ve built modules to do the following:

  • Run an Elasticsearch cluster with EBS Volumes for persistent data storage.
  • Run a Logstash cluster to aggregate all log information.
  • Run a Kibana cluster to visualize data stored in Elasticsearch.
  • Run Filebeat on all your app servers to send log data to Logstash.
  • Run CollectD on all your app servers to send metrics to Logstash.
  • Automatically forward CloudWatch logs to Logstash.
  • Automatically forward CloudTrail logs to Logstash.
  • Backup module to take snapshots of the Elasticsearch cluster on a configurable schedule and store those snapshots in S3, plus a restore module to restore a cluster from saved snapshots.
  • Full control over all plugins, configuration, and monitoring.
  • Support for authentication and end-to-end encryption for all data at rest and in transit. This was hilariously hard to get working, so we’ll write a separate blog post about this later!
  • Thorough documentation, example code, and end-to-end automated tests for all of these modules.

What to do about it: All of this new code is in the package-elk repo. If you’re a Gruntwork subscriber, email us at support@gruntwork.io and we’ll grant you access (and if you’re not a subscriber, sign up now)! package-elk consists of a number of standalone modules that can be mixed and matched as you see fit. See the examples for how to deploy a full end-to-end ELK pipeline with all components included.
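
To give a feel for the workflow, here is a minimal, hypothetical sketch of wiring up one of these modules with Terraform. The module path, ref, and variable names below are illustrative assumptions rather than the actual package-elk interface, so check the examples in the repo for the real usage:

# Hypothetical usage sketch: the module path, ref, and variable names are
# assumptions, not the actual package-elk interface.
module "elasticsearch_cluster" {
  source = "git::git@github.com:gruntwork-io/package-elk.git//modules/elasticsearch-cluster?ref=v0.0.1"

  cluster_name  = "example-es-cluster"
  ami_id        = var.elasticsearch_ami_id   # An AMI with Elasticsearch installed (e.g., built with Packer)
  instance_type = "t2.medium"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids
}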

A Comprehensive Guide to Authenticating to AWS on the Command Line

Motivation: Logging into your AWS account on the web is fairly straightforward: you type in a username and password and you’re done. Logging into your AWS account on the command line (so you can use CLI tools such as aws, terraform, packer, and so on) is much harder. It’s so bad that “how do I access my AWS account?” is the #1 support ticket we get at Gruntwork!

Solution: We’ve put together a blog post series to walk you through the different ways to authenticate to AWS on the command line:

  1. An Intro to AWS Authentication
  2. Authenticating to AWS with the Credentials File
  3. Authenticating to AWS with Environment Variables
  4. Authenticating to AWS with Instance Metadata
  5. Authenticating to AWS with Gruntwork Houston

What to do about it: Read through the blog post series and let us know if you find it helpful or still have questions! Also, if you’d like access to the private beta of Gruntwork Houston, email us at info@gruntwork.io.
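
As a small, concrete illustration of the ideas in the series: if you don’t hard-code any credentials, Terraform’s AWS provider falls back to the standard AWS credential chain, which is what the posts on the credentials file, environment variables, and instance metadata walk through. A minimal sketch:

# No credentials are hard-coded here, so the AWS provider falls back to the
# standard credential chain: the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables, the shared credentials file at ~/.aws/credentials
# (a profile can be selected via AWS_PROFILE), or EC2 instance metadata when
# running on an EC2 instance with an IAM role.
provider "aws" {
  region = "us-east-1"
}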

Open source updates

  • Terragrunt, v0.16.2: Properly exclude modules in xxx-all commands that show up in .terragrunt-cache, a custom download folder specified via --terragrunt-download-dir, or in nested subfolders of either of these.
  • Terragrunt, v0.16.3: Fix a bug where Terragrunt would hit an error trying to download Terraform configurations from source URLs pointing to the root of a repo.
  • Terragrunt, v0.16.4: Add prevent_destroy flag, which you can use in your Terragrunt configuration to protect a module from anyone running terragrunt destroy or terragrunt destroy-all (see the sketch after this list).
  • Terratest, v0.9.15: Add EmptyS3Bucket method.
  • terraform-aws-vault, v0.9.0: If you’re using Vault 0.10.0 or above, the UI will now be enabled by default.
  • terraform-aws-vault, v0.9.1: Fix aws install with yum to use the proper package name.
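
Here is a minimal sketch of the prevent_destroy flag from v0.16.4, assuming the terraform.tfvars-based Terragrunt configuration format used by the 0.16.x releases (the source URL is a placeholder):

# Sketch of a terraform.tfvars Terragrunt config; the source URL is a placeholder.
terragrunt = {
  terraform {
    source = "git::git@github.com:your-org/your-modules.git//mysql?ref=v0.0.1"
  }

  # Protect this module from terragrunt destroy and terragrunt destroy-all
  prevent_destroy = true
}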

Other updates

  • module-asg, v0.6.14: You can now specify custom termination policies in the asg-rolling-deploy module using the new termination_policies input variable (see the sketch after this list).
  • module-asg, v0.6.15: Remove depends_on workaround in asg-rolling-deploy. This should now show the proper value for your ASG desired_capacity during plan.
  • module-ci, v0.12.1: You can now set the --git-user-email and --git-user-name params in terraform-update-variable to specify the email and username for the git commit.
  • package-zookeeper, v0.4.6: You can now configure DNS names for your ZooKeeper nodes using the new (optional) input variables route53_hosted_zone_id, dns_name_common_portion, dns_names, dns_ttl, and enable_elastic_ips.
  • package-kafka, v0.4.1: You can now specify domain names (--domain, may be repeated) as well as IP addresses (--ip, may be repeated) that will be added to the Subject Alternative Name (SAN) field in the certificate generated by package-kafka’s generate-key-stores.sh script. Additionally, you can now optionally export the private key of the generated certificate in pkcs12 or pkcs8 format using the arguments --out-cert-key-path and --out-cert-p8-key-path, respectively.
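
As an illustration of the new termination_policies variable in asg-rolling-deploy, here is a hedged sketch; the module path, ref, and the other arguments shown are assumptions, so consult the module’s own documentation for the real interface:

# Hypothetical sketch: only termination_policies is the new input described above;
# the module path, ref, and other arguments are illustrative assumptions.
module "asg" {
  source = "git::git@github.com:gruntwork-io/module-asg.git//modules/asg-rolling-deploy?ref=v0.6.14"

  # Terminate the oldest instances first when scaling in
  termination_policies = ["OldestInstance", "Default"]

  # ... (other asg-rolling-deploy arguments) ...
}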

DevOps News

Terraform 0.12 preview

What happened: Terraform 0.12 is coming soon, bringing with it a number of major changes. HashiCorp has released a series of blog posts describing these changes.

Why it matters: These changes make Terraform more powerful, consistent, and predictable as a language. Here are a few of the highlights:

First-class expressions mean you don’t have to wrap all expressions with quotes and curly braces ("${}"), so code that used to look like this:

resource "aws_instance" "example" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"
}

Now looks like this:

resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type
}

The for and for-each syntax enables a lot of powerful new capabilities, including dynamic inline blocks:

resource "aws_autoscaling_group" "example" {
  # ...

  dynamic "tag" {
    for_each = local.standard_tags

    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
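
Beyond dynamic blocks, 0.12’s for expressions let you transform lists and maps. A small sketch:

variable "names" {
  type    = list(string)
  default = ["foo", "bar", "baz"]
}

# Produce a list of upper-cased names
output "upper_names" {
  value = [for name in var.names : upper(name)]
}

# Produce a map from each name to its length
output "name_lengths" {
  value = {for name in var.names : name => length(name)}
}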

Conditional operator improvements mean that the ternary syntax is now short-circuiting and supports lists and maps:

buckets = (var.env == "dev" ? [var.foo, var.bar] : [var.baz])

And you can finally mark arguments as “omitted” via null to get the behavior of their default values:

variable "override_private_ip" {
  type    = string
  default = null
}

resource "aws_instance" "example" {
  # ... (other aws_instance arguments) ...

  private_ip = var.override_private_ip
}

The rich value types will allow you to define explicit types for your module’s inputs:

variable "networks" {
  type = map(object({
    network_number    = number
    availability_zone = string
    tags              = map(string)
  }))
}

And pass entire resources as inputs or outputs to other modules:

output "vpc" {
  value = aws_vpc.example
}
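
A calling module can then read any attribute off the exported resource (the module name and subnet below are illustrative):

# Hypothetical sketch: "network" is an illustrative module name; the aws_vpc
# resource exported above is available as a single object on the output.
module "network" {
  source = "./network"
}

resource "aws_subnet" "example" {
  vpc_id     = module.network.vpc.id
  cidr_block = "10.0.1.0/24"
}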

What to do about it: Terraform 0.12 is still in preview mode. Once it approaches a full release, we will update all of our modules, and send upgrade instructions. In the meantime, sit tight!

ALB now supports redirects and fixed responses

What happened: The Application Load Balancer (ALB) now supports redirects and fixed responses.

Why it matters: You can now add listener rules to your ALB to tell it, for example, to redirect /foo to /bar, or to redirect all HTTP traffic to HTTPS. You can also have static responses (e.g., 200 OK) for specific URLs.

What to do about it: The aws_lb_listener_rule resource in Terraform does not yet support redirect or fixed-response actions. Follow this issue to see when this new functionality will be available.

Security Updates

Below is a list of critical security updates that may impact your services. We notify Gruntwork customers of these vulnerabilities as soon as we know of them via the Gruntwork Security Alerts mailing list. It is up to you to scan this list and decide which of these apply and what to do about them, but most of these are severe vulnerabilities, and we recommend patching them ASAP.

Jenkins

  • Jenkins Security Advisory 2018-07-18: Several vulnerabilities have just been announced in Jenkins. The highest priority of these allows an unauthenticated user to send specially crafted HTTP requests and get back the contents of any file on the Jenkins master file system that the Jenkins master process has access to. Many teams have secrets, SSH keys, and source code on their Jenkins servers, so you should treat this as a severe vulnerability, and update immediately. See Jenkins Security Advisory 2018-07-18 for more information. We emailed the Gruntwork Security Alerts mailing list about this on July 17, 2018.
  • Note: vulnerabilities like the one mentioned above are one of the reasons why, in the Reference Architecture, we run Jenkins in a private subnet and only allow access via VPN. It prevents webhooks from running, but it also makes it much harder for attackers to exploit these sorts of vulnerabilities.