How to Provision an AWS Elastic Beanstalk Instance Using Packer, Ansible, Docker & Terraform

If, like me, you enjoy using all the latest tech tools, you’ll enjoy this one for sure. These tools aren’t just for fun: they make our lives much easier when it comes to building infrastructure and releasing code.

Note: all the code used in this article can be found here.

First, let’s have a quick summary of each tool in this post.

Elastic Beanstalk (EB)

A great service from Amazon Web Services that allows you to easily deploy applications in your preferred solution stack. In our example, we’re using the Docker solution stack, along with a load-balanced environment and auto-scaling. EB sets this all up for us, along with the relevant security groups – amazing!

Packer

Packer is a tool from HashiCorp for building your images. It supports various builders, e.g. Docker, Vagrant, etc., in addition to many provisioning tools such as Puppet, Chef and Ansible. You can then export your built image to an AWS AMI, DockerHub, ECR, etc.

Ansible

Ansible is a favourite of mine when it comes to provisioning. I’ve used Puppet & Chef in a number of projects, but for ease of use and simplicity, I always come crawling back to Ansible. Its playbook style makes ‘recipes’ easy to follow, and the configuration options are great.

Docker

Hopefully this one won’t need much of an intro either. Docker is a ‘containerisation’ technology which allows segregation of services on a given host. Gone are the days of mammoth web servers that have every service running in one place.

Terraform

Terraform is another amazing tool from HashiCorp. Gone are the days of opening up the AWS Console to create instances and VPCs; now you can write your infrastructure as code! Define a VPC, define some subnets, define your internet gateway, along with your EB application and environments, and then you’re on your way to a truly automated infrastructure. Awesome.
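
To give you a taste of what’s coming in Part 2, a Terraform definition looks something like this hypothetical sketch – the resource types are from the standard AWS provider, but every name and value here is a placeholder:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_elastic_beanstalk_application" "app" {
  name = "my-app"
}

resource "aws_elastic_beanstalk_environment" "env" {
  name        = "my-app-dev"
  application = "${aws_elastic_beanstalk_application.app.name}"

  # Placeholder – look up a current Docker solution stack name in the EB docs
  solution_stack_name = "64bit Amazon Linux ... running Docker ..."
}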

Part 1: Packer, Ansible & Docker

Getting Started

Firstly, why are we doing this? It’s important to note that this tutorial is built on the premise that you’d like to create a Foundation image to work from, i.e. you’d like to base your Dockerfile on an existing image, but none of the images out there are quite right. For sure, you could provision your entire container using a Dockerfile, but that increases the build time, and therefore the release time. If your Dockerfile simply pulls in your readily-provisioned Foundation image, all it has to do is roll out your new code, and your application is up and running blazingly fast. For more information on Foundation images, see this article from Google.
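
To make that concrete, here’s a minimal, hypothetical Dockerfile built on the Foundation image we’ll push to DockerHub later in this article (the application path is just an example):

FROM bobbydvo/packer-lemp-prod:latest

# Everything is already provisioned in the Foundation image;
# all we do here is roll out the application code.
COPY . /var/www/app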

So, how do we begin? The first step is to set up your initial Packer script:

template.json

{
  "_comment": "Template",
  "builders": [

  ],
  "provisioners": [

  ],
  "post-processors": [

  ]
}

As you can see, it’s built up of three main blocks: builders, provisioners & post-processors. As mentioned at the beginning of this article, we’re going to use Docker, Ansible & DockerHub.

For the builder, I’m using the base CentOS Docker image. It may be that you’re happy using someone else’s foundation image, which has other tools already installed, reducing the amount of provisioning you need to do yourself. I prefer to start with the basics, though!

"builders": [
    {
      "type": "docker",
      "image": "centos:latest",
      "commit": true
    }
  ],

Because I’m using a fresh image, we’re going to have to provision it with a shell command first, so that we can then provision with Ansible.

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm",
        "yum -y update",
        "yum -y install ansible",
        "ansible --version"
      ]
    }
]

You could go ahead and run this right now, and you’ll see Packer grab the centos:latest image and then install Ansible in the container. Wicked, right?!

packer build template.json

The next step is to setup our Ansible provisioning which can be done like so:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-6.noarch.rpm",
        "yum -y update",
        "yum -y install ansible",
        "ansible --version"
      ]
    },{
      "type": "ansible-local",
      "playbook_file": "./ansible/playbook.yml",
      "role_paths": [
          "./ansible/roles/init",
          "./ansible/roles/server",
          "./ansible/roles/mongodb",
          "./ansible/roles/php7",
          "./ansible/roles/nginx",
          "./ansible/roles/supervisord",
          "./ansible/roles/redis"
      ],
      "group_vars": "./ansible/{{ user `stage`}}/group_vars"
      }
]

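For orientation, the playbook_file referenced above simply ties the roles together, so it has roughly this shape (a sketch – the real playbook.yml lives in the repository):

ansible/playbook.yml

---
- hosts: all
  roles:
    - init
    - server
    - mongodb
    - php7
    - nginx
    - supervisord
    - redis
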
For the purpose of this article, I’m not actually going into great detail as to how to use Ansible. However, the ‘group_vars’ parameter allows us to run our provisioning against different environments. I could pass ‘stage’ as ‘dev’ so that Ansible knows to install Xdebug, disable OPcache, etc. When I’m ready to create an identical box with ‘prod-like’ features (i.e. no Xdebug please, OPcache enabled, error reporting switched off, etc.), I can pass ‘prod’ into the stage parameter.
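
As an example, a pair of group_vars files might differ like this – the variable names here are hypothetical, as the real ones are defined by the roles in the repository:

ansible/dev/group_vars/all.yml

---
install_xdebug: true
opcache_enabled: false

ansible/prod/group_vars/all.yml

---
install_xdebug: false
opcache_enabled: true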

Passing parameters into Packer is pretty easy:

packer build -var stage=dev template.json

Also, you’ll notice that we’re provisioning this container with everything you might typically see in a LAMP (or LEMP, rather) stack: Linux, Nginx, MongoDB & PHP. I’ve also got Redis being installed, along with Supervisord to ensure the services stay running. In the real world, you’d probably only want each container running one service rather than all of them, i.e. create a Packer script to create your PHP container, another one to create your MongoDB container, another for Nginx, etc. All the scripts are in the source code, so just pick and choose what you’d like.
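
As a reference for the Supervisord part: keeping a service alive boils down to a [program] block per service in the Supervisord configuration. This is a hypothetical snippet – the exact program names and commands depend on how the roles install each service:

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true

[program:php-fpm]
command=/usr/sbin/php-fpm --nodaemonize
autorestart=true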

Don’t forget though, using these tools is more about reproducible environments than it is about creating the perfect micro-service architecture. Automatically provisioning a machine that runs all your services is much better than a bare-metal machine that’s been manually ‘provisioned’! Use the tools as you see fit to improve your current situation.

If you’ve checked out the source code and followed so far, you’ll now have a container that’s been provisioned – so what do we do with it? My preference is to push it to DockerHub, but you could export it as an AWS AMI, or push it to any other container registry.

"post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "bobbydvo/packer-lemp-{{ user `stage`}}",
        "tag": "latest"
      },
      {
          "type": "docker-push",
          "login": true,
          "login_email":    "{{ user `docker_login_email`}}",
          "login_username": "{{ user `docker_login_username`}}",
          "login_password": "{{ user `docker_login_password`}}"
      }
    ]
  ]

The first post-processor tags your image locally; the second pushes it to DockerHub. This uses ‘user’ variables, which you can declare at the top of your template.json:

"_comment": "Template file pulling from centos7",
  "variables": {
    "docker_login_email":    "{{ env `DOCKER_LOGIN_EMAIL` }}",
    "docker_login_username": "{{ env `DOCKER_LOGIN_USERNAME` }}",
    "docker_login_password": "{{ env `DOCKER_LOGIN_PASSWORD` }}"
  },

You can now run Packer like so:

DOCKER_LOGIN_EMAIL=your@email.com \
DOCKER_LOGIN_USERNAME=yourusername \
DOCKER_LOGIN_PASSWORD=yourpassword \
packer build -var stage=dev template.json

Alternatively, you may wish to put these environment variables in your ~/.bash_profile so you don’t have to keep typing them.
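
For example, using the same placeholder values as above:

export DOCKER_LOGIN_EMAIL=your@email.com
export DOCKER_LOGIN_USERNAME=yourusername
export DOCKER_LOGIN_PASSWORD=yourpassword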

Further to this, you’ll see I have a build.sh in the repository. This is so that I can create both dev-like and prod-like Foundation images:

#!/bin/bash

# Build a Foundation image for each stage
envs=( dev prod )
for i in "${envs[@]}"
do
    PACKER_LOG=1 packer build \
        -var "stage=$i" \
        ./packer/template.json
done

You should now have a brand new Foundation image to work from that you can use in your Dockerfile. Awesome, right?!

That brings us to the end of this article, but in Part 2 we’ll explore how to create our Elastic Beanstalk infrastructure using Terraform. After that, I’ll show you how to set up automated deployments into your Elastic Beanstalk environment!
