In a previous blog post, “There is no cloud, It’s just someone else’s computer”, we talked about how using the Cloud and DevOps creates efficiencies in the seven stages of the software delivery process, ultimately reducing the time from Asks-to-Gets.
In this second post we’re going to focus on the most important of these stages, one that is at the core of what we do in a DevOps team: provisioning.
What Do We Mean by Provisioning?
The provisioning tool creates the network and servers that we need to run our applications. Servers without an application aren’t much use, so we can use the provisioning tool to initiate the app’s deployment onto the server. Once the application is deployed we’ll probably want to ensure that it is running as expected, so we use the provisioning tool to install the monitoring and logging services required to maintain our application.
Provisioning our infrastructure using Infrastructure-as-Code best practices gives us complete transparency of the current state of our system. This in itself provides a high level of confidence in how secure the provisioned infrastructure is.
The fact that we write our infrastructure as code also means that we’ll be building, testing and then turning that code into a releasable package. Our infrastructure goes through the same release process as the applications that run on it! As you can see, provisioning really is at the heart of DevOps.
What Are We Provisioning?
A greenfield project typically starts with nothing. First we provision the network in which our servers will run. This includes virtual subnets, routing, firewalls, VPNs and access lists.
Next we provision the data stores (file storage, databases) that our applications will use. These can either be native cloud services offering a managed solution, or bespoke database deployments.
After this we provision the cloud servers themselves, including any required load balancing and the rules which determine how the servers scale with load.
The final part of server provisioning is a hook into the method that will configure the servers the way we want and ultimately deploy the required version of the application.
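To give a flavour of how these steps hang together, here is a minimal sketch using Terraform, the tool we recommend later in this post. All resource names, CIDR ranges, credentials and the AMI ID are illustrative, not a working configuration:

```hcl
# 1. The network in which our servers will run
# (routing, firewalls and access lists would follow the same pattern)
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

# 2. A data store, here a managed database as a native cloud service
resource "aws_db_instance" "app" {
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "appuser"
  password          = "change-me" # illustrative only; use a secrets store in practice
}

# 3. The cloud server itself, with
# 4. a hook into the method that configures it and deploys the app
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.app.id
  user_data     = file("configure-and-deploy.sh")
}
```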
How Do We Provision?
There are a few ways that we can provision Cloud servers…
1. Use the Cloud provider’s interface
Example: the AWS web console.
2. Use the Cloud provider’s command line tools
Examples: AWS CLI, gcloud compute.
3. Automate the use of the command line tools in a script
Examples: Bash, PowerShell, Python.
4. Use the Cloud provider’s native resource templating method
Examples: AWS CloudFormation, Google Cloud Deployment Manager.
5. Write a custom application that automatically generates the Cloud provider’s native resource templates
Examples: Troposphere (Python), SparkleFormation (Ruby).
6. Use a third-party tool
So Which is the Best Choice?
The ordering of the options above is no accident; it reflects the natural progression in how people usually provision Cloud servers: starting with the web GUI, progressing to the command line tools, then putting those commands into a script before moving to the Cloud provider’s native resource templating method (e.g. CloudFormation) and thinking about the best way to write and organise your templates.
This naturally leads to thinking about how to reduce all the repetition and potential errors, so people look to automate the creation of these templates by writing a custom application. If you have a fairly tight set of use cases that isn’t going to change much, you just want to churn out lots and lots of those use cases, and you have the developer resource to maintain that code, then perhaps this is fine for your company. Generally, though, it is the wrong choice.
As much as you may enjoy writing custom provisioning code, in our opinion your time is better spent utilising a tool that already has integrations into all the major Cloud providers. A tool that is well supported in both the community and (optionally) commercially, has a very active open source codebase and is well documented.
That tool is Terraform.
Terraform is a relatively new tool (first released in July 2014), written by HashiCorp to address many of the drawbacks of options 1–5 above. It is open source and has over 1000 contributors.
Terraform defines a number of ‘providers’, e.g. AWS. For each provider you can define the resources that you want to create, e.g. from the AWS provider you can define an EC2 instance resource. Additionally, it’s declarative – you specify what you want your infrastructure to look like, and Terraform gets you there. Unlike a script, running Terraform twice won’t change things that are already how you want them.
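As a minimal illustration of this provider/resource model (the region and AMI ID below are made up), a Terraform configuration for a single EC2 instance looks like this:

```hcl
# Declare the provider we want to create resources in
provider "aws" {
  region = "eu-west-1" # illustrative region
}

# Declare an EC2 instance resource from the AWS provider
resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t2.micro"
}
```

Running `terraform apply` creates the instance; running it again makes no changes, because the real infrastructure already matches the declared state.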
Using Terraform to do the heavy lifting of provisioning allows you to spend more time writing code that adds value to the core application of your business.
HashiCorp themselves write and support the ‘big’ providers: Amazon Web Services, Google Compute Engine, Microsoft Azure and Oracle Public Cloud. Additionally, there are over 60 providers written and maintained by the community. Testing standards are the same for both sets of providers, which ensures the stability of the project.
At Naimuri we’re big fans of HashiCorp’s approach to the design and purpose of their products. They’re very transparent about what each product does and does not do compared to rival products. Their tools are typically written with a fairly limited scope, specialising in what they are designed to do. This lets users easily pick and choose what they want to use, without forcing them into using or configuring features that they have no need for.