Running infrastructure at any scale almost always means managing a dizzying array of components and configurations. To further complicate things, different teams within an organization may need similar infrastructure with slight variations. That infrastructure may be spread across multiple topologies, from on-premises data centers to one or more cloud vendors.
While it is possible to use infrastructure providers’ user or command line interfaces to spin up and configure components one at a time, the end result is typically hard to organize and maintain in a simple and straightforward way.
At Fairwinds, we build Kubernetes infrastructure for clients that is scalable, secure, and reliable. Along with running applications on Kubernetes, our work includes configuring networking, compute resources, and storage, as well as supporting components like monitoring. To ensure best practices are maintained for all our clients, we rely on common patterns that provide consistency while still allowing customization where needed.
Defining Terraform
To terraform means to transform a landscape on another planet into one with characteristics similar to Earth's. Similarly, Terraform is a declarative coding tool that lets you describe the desired end state of your infrastructure and then automates provisioning and managing the resources needed to reach that state.
Terraform is one tool we use to manage the entire lifecycle of infrastructure using infrastructure as code. That means declaring infrastructure components in configuration files that Terraform then uses to provision, adjust, and tear down infrastructure in various cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
For example, say you are working in AWS and want to spin up several Elastic Compute Cloud (EC2) instances of a specific type. You define the type and number of instances in a configuration file, and Terraform uses that to communicate with the AWS application programming interface (API) to create those instances. The same file can then be used to adjust the configuration, for example increasing or decreasing the number of instances.
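As a rough sketch of what that looks like (the AMI ID, instance type, and instance count below are placeholder values for illustration, not recommendations from this post):

# Minimal sketch with placeholder values.
resource "aws_instance" "web" {
  count         = 3                        # number of instances to create
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.medium"              # placeholder instance type

  tags = {
    Name = "web-${count.index}"
  }
}

Changing count from 3 to 5 and running terraform apply again would add two instances; lowering it would remove some.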
Because infrastructure typically includes many more components than just compute instances, we use a Terraform feature called modules to combine multiple infrastructure components into large, reusable, and shareable chunks.
In this four-part blog series, we will demonstrate how Terraform modules can be used to easily set up a basic virtual private cloud (VPC) and Kubernetes cluster in three cloud providers: AWS, GCP, and Azure.
For the remainder of this post, we will set the stage by diving into some of Terraform's core elements. If you have never used Terraform and are looking for more hands-on experience, the tutorials at HashiCorp's site are useful.
Let's talk a little bit about Terraform providers. Terraform supports more than 100 providers; at Fairwinds we use three – AWS, GCP, and Azure. The provider is what interfaces with the specific cloud API and exposes whatever resources you have defined. HashiCorp Configuration Language (HCL) is the common language used to define a resource, no matter which provider you are using.
The first thing that needs to be defined in any project using Terraform is the provider, as this is what gives you access to the API you will be interacting with to create resources. Once you have configured and authenticated the provider (a minimal provider block is sketched after the examples below), a vast number of resources become available to create. Each cloud provider has its own set of resources; a few examples are:
aws_vpc
google_dns_record_set
azuread_application
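A minimal sketch of a provider block for AWS (the region here is a placeholder, and authentication typically comes from outside the configuration itself):

# Placeholder region; credentials are supplied via environment variables,
# a shared credentials file, or an instance profile.
provider "aws" {
  region = "us-east-1"
}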
Resources, according to Terraform, are “the most important element in the Terraform language.” This is where you describe the piece of infrastructure to be created, which can range from a compute instance to specific permissions and much more.
In most cases, you'll want more configuration than the basic examples in the Terraform docs show. The aws_vpc resource documentation details the required and optional arguments, as well as the attributes you will have access to within this resource. Using the docs, you can spin up many resources with Terraform (likely meeting all the needs you have).
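As an illustrative sketch (the CIDR range and tag values are placeholders), an aws_vpc resource using a couple of those optional arguments might look like this:

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"   # placeholder CIDR range
  enable_dns_support   = true            # optional argument
  enable_dns_hostnames = true            # optional argument

  tags = {
    Name = "main-vpc"                    # placeholder tag
  }
}

Once created, exported attributes such as aws_vpc.main.id can be referenced by other resources in the project.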
Now that we’ve outlined how Terraform interacts with various providers to create and manage resources, let’s talk about best practices for maintaining and organizing projects.
Terraform offers a great way to package and reuse common code in the form of modules. Terraform modules are comparable to functions or methods in scripting or programming languages. They present a standard interface for creating resources by providing inputs and returning outputs. Modules also simplify projects by increasing readability and allowing teams to organize infrastructure in logical blocks. Modules can be easily shared and sourced into any Terraform project.
Modules are generally used as a simpler interface for creating and managing multiple resources. This greatly cuts down on the amount of duplicate code in a project, since copying and pasting sections of code around while only changing a few parameters quickly becomes tedious. For example, if tasked with creating multiple VPCs for different environments, you can invoke a single VPC module multiple times rather than creating each individual resource necessary for a fully functioning VPC.
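A rough sketch of that pattern, assuming a hypothetical local ./modules/vpc module that accepts name and cidr_block inputs (not the specific modules covered later in this series):

module "staging_vpc" {
  source     = "./modules/vpc"    # hypothetical local module path
  name       = "staging"
  cidr_block = "10.10.0.0/16"     # placeholder CIDR range
}

module "production_vpc" {
  source     = "./modules/vpc"
  name       = "production"
  cidr_block = "10.20.0.0/16"
}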
Input variables are used to customize the behavior of a module and, potentially, how its resources will be named. Some modules can be very flexible about how, and whether, a resource will be created. For example, our Google Cloud VPC module can change behavior based on what inputs are declared. If a user sets enable_cloud_nat = true, the module will then create additional Cloud NAT resources. Check out the official documentation for more information on input variables.
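Inside a module, that kind of switch is typically implemented with a boolean variable and a conditional count. Here is a minimal sketch of the pattern (the variable, resource names, and values are illustrative, not taken from the Fairwinds module):

variable "enable_cloud_nat" {
  description = "Whether to create Cloud NAT resources"
  type        = bool
  default     = false
}

# Both resources are created only when the caller sets enable_cloud_nat = true.
resource "google_compute_router" "router" {
  count   = var.enable_cloud_nat ? 1 : 0
  name    = "example-router"   # placeholder name
  network = "default"          # placeholder network
  region  = "us-central1"      # placeholder region
}

resource "google_compute_router_nat" "nat" {
  count                              = var.enable_cloud_nat ? 1 : 0
  name                               = "example-nat"
  router                             = google_compute_router.router[0].name
  region                             = "us-central1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}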
Similar to functions, Terraform modules can also return output. This output can be used as input into another module or resource.
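Building on the aws_vpc sketch above, a module might expose the VPC's ID so another module can consume it (module names and paths here are hypothetical):

# Inside the VPC module: expose the ID of the network that was created.
output "vpc_id" {
  description = "ID of the VPC created by this module"
  value       = aws_vpc.main.id
}

# In the calling project: feed one module's output into another.
module "cluster" {
  source = "./modules/cluster"     # hypothetical module path
  vpc_id = module.vpc.vpc_id
}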
State management is a key component of any long-term Terraform project. The Terraform state file keeps track of all changes in an environment. State files can also serve as a data source that other Terraform projects can import. By default, the state file is stored on the local filesystem. However, it is important to keep the state file safe, secure, and backed up – which generally means keeping it in highly available object storage. By leveraging this remote storage, teams can safely share and interact with a single state that is always current. For additional information, check out the Terraform documentation on remote state.
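A minimal sketch of a remote backend configuration using S3 (the bucket name, key, and region are placeholders):

terraform {
  backend "s3" {
    bucket = "example-terraform-state"           # placeholder bucket name
    key    = "infrastructure/terraform.tfstate"  # path to the state file in the bucket
    region = "us-east-1"                         # placeholder region
  }
}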
In the following posts, we will dive deeper into these topics as we demonstrate how to use Terraform to build Kubernetes infrastructure in different cloud providers: Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (Amazon EKS), and Google Kubernetes Engine (GKE). Each post provides a Terraform module to create an AKS, EKS, or GKE cluster and its associated resources. Check out our getting started guides for these major cloud providers:
Originally published July 14, 2020