Using Packer & Ansible to Build a LEMP Server on Cloud Native Infrastructure Powered by OpenStack

In these days when everyone is using Docker and building containers for every new project, it’s easy to forget that for some projects it’s total overkill. Sometimes you may just want to build a server the old way and create a basic image that you can deploy, without having to worry about learning containers!

Whatever your stance on the above, building a LEMP (Linux, Nginx, MySQL/MariaDB, PHP) server using automated tools such as Packer & Ansible is a great way to get your head around these tools, and it leaves you with a reproducible image that you can store in your image repository.

Packer, from HashiCorp, is another great tool: it lets you select a builder and a provisioner, then push the built image to the destination of your choice. What a great summary in one sentence, eh?

To follow this guide, you may wish to view the code here.

Firstly, make sure you have Packer installed (on macOS, via Homebrew):

brew install packer

The next step is to create the template.json file, which tells Packer what to use as the builder and what to use as the provisioner. The builders and provisioners arrays shown below both sit inside the same top-level JSON object.

 "builders": [
    {
      "type": "openstack",
      "image_name": "centos_lamp_php7",
      "source_image": "0f1785b3-33c3-451e-92ce-13a35d991d60",
      "flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
      "ssh_keypair_name": "ukcloudos",
      "ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
      "use_floating_ip": true,
      "floating_ip_pool": "internet",
      "ssh_username": "centos",
      "ssh_pty" : true
    }
  ], 

This is the part of the template that tells Packer to use the OpenStack builder. image_name is the name you’d like the finished image to have, whereas source_image is the image ID of the base image to build from.

In terms of provisioning we have the following block:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm",
        "sudo yum -y update",
        "sudo yum -y install ansible",
        "ansible --version"
      ]
    },{
      "type": "ansible-local",
      "playbook_file": "./ansible/playbook.yml",
      "role_paths": [
          "./ansible/roles/init",
          "./ansible/roles/server",
          "./ansible/roles/mongodb",
          "./ansible/roles/php7",
          "./ansible/roles/nginx",
          "./ansible/roles/supervisord",
          "./ansible/roles/redis"
      ],
      "group_vars": "./ansible/common/group_vars"
    },{
      "type": "shell",
      "inline": [
        "cd /srv && sudo chown -R nginx:nginx .",
        "sudo curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/bin --filename=composer"
      ]
    }
  ]

The first provisioner is just a series of shell commands to install Ansible. We need Ansible on the server so that we can then apply the ansible-local provisioner.

Let’s take a look at the Playbooks we have (ansible/playbook.yml).

---
- hosts: all
  sudo: true
  vars_files:
    - "group_vars/settings.yml"
  roles:
    - init
    - server
    - php7
    - mongodb
    - nginx
    - supervisord
    - redis

In the settings.yml file I’ve placed a list of PHP & PECL packages that we’d like on our LEMP server:

---
php:
    packages: ["php", "php-fpm", "php-common", "php-mbstring", "php-mcrypt", "php-devel", "php-xml","php-mysqlnd", "php-pdo", "php-opcache", "php-bcmath", "php-pear"]
    pecl_packages: ["php-pecl-memcached", "php-pecl-redis", "php-pecl-zip", "php-pecl-xdebug"]

The Ansible provisioner then works through each entry in role_paths sequentially. Each ‘role’ has a subdirectory called tasks, which in turn has a main.yml containing the instructions to execute.
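
To make the structure concrete, here is a rough sketch of the layout the provisioner expects (abbreviated; the repo linked above is the definitive reference):

ansible/
├── playbook.yml
├── common/
│   └── group_vars/
│       └── settings.yml
└── roles/
    ├── init/
    │   └── tasks/
    │       └── main.yml
    ├── server/
    │   └── tasks/
    │       └── main.yml
    └── ...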

As you can see, the first is init, with the following instructions:

---
- name: Install Remi Repo
  yum: name=http://rpms.famillecollet.com/enterprise/remi-release-7.rpm

- name: Enable Remi Repo
  shell: yum-config-manager --enable remi-php70

Remi is my favourite repository when using CentOS 7; it has all the latest packages and can always be trusted to work out of the box, unlike some others.

We then have the server role, which installs some base packages such as wget and vim:

---
- name: Install System Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items:
    - git
    - wget
    - vim
    - sudo
    - openssl-devel

- name: Configure the timezone
  sudo: yes
  template: src=timezone.tpl dest=/etc/timezone

- name: Allow root to not require password to perform commands
  sudo: yes
  template: src=mysudoers.tpl dest=/etc/sudoers.d/mysudoers

- name: install the 'Development tools' package group
  yum: name="@Development tools" state=present

You can see all the other roles by taking a look at the source code, but the most interesting is the one that installs the PHP packages:

- name: Install PHP Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items: '{{php.packages}}'

- name: Install PHP-Pecl Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items: '{{php.pecl_packages}}'

# Add templates
- name: Change to custom php.ini (dev)
  sudo: yes
  template: src=php-dev.ini.tpl dest=/etc/php-dev.ini

- name: Change to custom php.ini (prod)
  sudo: yes
  template: src=php-prod.ini.tpl dest=/etc/php-prod.ini

- name: Change to custom opcache config
  sudo: yes
  template: src=10-opcache.ini.tpl dest=/etc/php.d/10-opcache.ini

- name: Change to custom php-fpm config
  sudo: yes
  template: src=www.conf.tpl dest=/etc/php-fpm.d/www.conf

You’ll notice that the PHP role uses some templates. This is where you can define settings that you may or may not want to manage later using other tools such as Puppet or Consul. It also includes a template that configures php-fpm to work seamlessly with the Nginx template:

server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;

    root  /srv/web;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~*  \.(jpg|jpeg|png|gif|ico|css|js|woff)$ {
       expires 365d;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 180;
        include fastcgi_params;
    }
}

As you can see, it has everything needed out of the box for a working LEMP server.
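
The Nginx config above expects php-fpm to be listening on the /var/run/php-fpm/php-fpm.sock Unix socket, so the www.conf template has to bind it there. That template isn’t reproduced in this post, but a minimal sketch might look like this (the pool sizing values are illustrative; check www.conf.tpl in the repo for the real settings):

[www]
user = nginx
group = nginx
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = nginx
listen.group = nginx
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3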

To run Packer, all we need to do is head to the root of the project and type:

$ packer build ./packer/template.json
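
If the build fails straight away, it’s worth checking the template syntax first with Packer’s built-in validator:

$ packer validate ./packer/template.json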

Packer will run through the motions of grabbing the CentOS base image from OpenStack, installing Ansible, running our playbooks, and saving the provisioned image in Glance under our chosen image name.

You can then head over to your GUI, view the newly created image, and use it as you please! Alternatively, you can list your images with this OpenStack CLI command:

openstack image list

The true beauty of this is that you now have an automated way to provision your servers. You can put this code in Jenkins and run nightly builds, or you can run it manually whenever you need.

If you have any questions about using Packer or Ansible with Cloud Native Infrastructure just send me a tweet: @bobbyjason


How to create an externally facing server using Terraform on Cloud Native Infrastructure Powered by OpenStack – plus a bonus!

If you’ve been following my previous posts and videos, you may have already seen how to use the Cloud Native Infrastructure GUI to create a simple externally facing server, or you may have already read my post on using the OpenStack CLI.

Let’s be honest though: any self-respecting DevOps engineer doesn’t really want to be writing procedural shell scripts to create infrastructure, and we certainly don’t want to be clicking around the OpenStack GUI.

Diving straight into Terraform (another one of HashiCorp’s awesome tools), we can easily set up the basic environment in which we’d like to spin up a new server.

Firstly, we’ll need to ensure our environment variables are set. I have these in my ~/.zshrc (or ~/.bashrc) file:

export OS_AUTH_URL=https://cor00005.cni.ukcloud.com:13000/v3
export OS_PROJECT_ID=c5223fac91064ac38460171c14eb47ef
export OS_PROJECT_NAME="UKCloud Bobby Demo"
export OS_USER_DOMAIN_NAME="Default"
export OS_DOMAIN_NAME="Default"

export OS_USERNAME="myusername@domain.com"
export OS_PASSWORD=***********
export OS_REGION_NAME="regionOne"

This then means that in our Terraform provider declaration, all we need is:

provider "openstack" {
}

No need to pass anything in, as it’s all read from the environment variables, which is pretty handy!

Setting up the network is fairly trivial:

resource "openstack_networking_network_v2" "example_network1" {
  name           = "example_network_1"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example_subnet1" {
  name            = "example_subnet_1"
  network_id      = "${openstack_networking_network_v2.example_network1.id}"
  cidr            = "10.10.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

Here we’re creating a basic 10.10.0.0/24 subnet and setting the DNS nameservers. Without the nameservers set, your server won’t be able to make any DNS lookups, which would be pretty rubbish!

The next steps are to create the router and the router interface that connect the network to the all-important externally facing network: the Internet.

resource "openstack_networking_router_v2" "example_router_1" {
  name             = "example_router1"
  external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}

resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
  router_id = "${openstack_networking_router_v2.example_router_1.id}"
  subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}

We’re going to want our server to have a static/elastic/floating IP, so let’s grab one from the pool for use later:

resource "openstack_networking_floatingip_v2" "example_floatip_1" {
  pool = "internet"
}

The last part is to set up the security group rules so that we can connect to our instance once we’ve created it:

resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
  name = "example_secgroup_1"
  description = "an example security group"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}

Basically, all we’re doing is allowing ports 22 and 80; that will do to get us started quickly!
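
One thing to note: the instance below references a keypair (test-keypair) that we haven’t declared anywhere. A minimal sketch, assuming you want to upload an existing public key (the name and file path here are illustrative):

resource "openstack_compute_keypair_v2" "test-keypair" {
  name       = "ukcloudos"
  public_key = "${file("~/.ssh/ukcloudos.pub")}"
}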

We now have all the fundamentals in place to create our instance. To fire up the basic stock CentOS image from within OpenStack, you could use the following:

resource "openstack_compute_instance_v2" "example_instance" {
  name            = "example_instance"

  # centos7
  image_id        = "0f1785b3-33c3-451e-92ce-13a35d991d60"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_1.address}"
  }
}

Terraform will tie together everything we’ve created, from the key_pair & the security groups through to the network and floating IP; so go on, let’s try it!

terraform apply
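
If you’d like to see what Terraform intends to do before letting it loose, terraform plan prints the proposed changes without applying anything:

terraform plan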

Connecting to your server is then a piece of cake:

ssh -i ~/.ssh/yourkey centos@123.45.67.8

Bonus for Reading this far

However, we can go one better than that: firing up an automated server might be cool, but it’s even better if it’s serving our app straight out of the box too, right?

What if we could give Terraform the Git repo for our software, and have it deploy the app as soon as the server starts up? We can do exactly that with two files: cloudinit.sh and variables.tf.

Let’s take a look at variables.tf first:

variable "clone_location" {
    default = "/srv"
}

variable "git_repo" {
    default = "https://github.com/bobbydeveaux/dummyphp.git"
}

All we’re doing is defining the location of our app’s GitHub repository, and telling Terraform where to clone it to. We can then make use of these variables in the cloudinit.sh file:

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo chown -R centos:centos ${clone_location}
git clone ${git_repo} ${clone_location}
cd ${clone_location}

export COMPOSER_HOME=${clone_location}
composer install

sudo APPLICATION_ENV=${application_env} /usr/bin/supervisord -n -c /etc/supervisord.conf

For this to work, we also have to use the LEMP image that we built earlier. You can of course use your own, providing it’s capable of serving PHP in this case. At the same time, we need to hand the cloudinit instruction to Terraform:

data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"
    }
}

…and add this to the instance creation:

user_data =  "${data.template_file.cloudinit.rendered}"

…leaving our final instance creation looking like this:

resource "openstack_compute_instance_v2" "example_instance" {
  name            = "example_instance"

  #coreos
  #image_id        = "8e892f81-2197-464a-9b6b-1a5045735f5d"

  # centos7
  #image_id        = "0f1785b3-33c3-451e-92ce-13a35d991d60"

  # docker nginx
  #image_id        = "e24c8d96-4520-4554-b30a-14fec3605bc2"

  # centos7 lamp packer build
  image_id = "912e4218-963a-4580-a27d-72e5e195c4f5"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_1.address}"
  }
}

If you now apply the Terraform changes, your server will boot up and deploy your application. Head to your floating IP and you will see the application being served. It really is as easy as that, and there’s not much more explanation necessary!
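
Rather than hunting for the floating IP in the GUI, you could also have Terraform print it after each apply with an output block (a small optional addition):

output "floating_ip" {
  value = "${openstack_networking_floatingip_v2.example_floatip_1.address}"
}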

To view the full source code for this example, you can check out the GitHub repo.

As always, if you have any questions regarding this tutorial, or you need some pointers, please just tweet me!


What Is Docker Swarm And How To Use It To Scale A Simple PHP App Along With Terraform & Packer on Cloud Native Infrastructure Powered by OpenStack

Note: This is based on Docker 1.12. At the time of writing, whilst Docker 1.13 has been released, it is not yet in the CoreOS builds. As soon as 1.13 is available, I will append a footnote to this blog post and edit this note!

As more and more people jump on the Docker bandwagon, more and more people are wondering just exactly how we scale this thing. Some will have heard of Docker Compose, some will have heard of Docker Swarm, and then there are some folks out there with their Kubernetes and Mesos clusters.

Docker Swarm became native to Docker in v1.12 and makes container orchestration super simple. Not only that, but each node is accessible via its hostname thanks to the built-in DNS and service discovery. With its overlay network and inbuilt routing mesh, every node can accept connections on the published ports for any of the services running in the Swarm. This essentially gives you access to multiple nodes and lets you treat them as one.

Just to top it off, Docker Swarm has built-in load balancing. Send a request to any of the nodes and it will be distributed in round-robin fashion across all the containers running the requested service. Simply amazing, and I’m going to show you how you can get started with this great technology.
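
To give you a taste before we dive into the full setup (we’ll do it properly with Terraform and Packer below), the core of Swarm mode boils down to a couple of commands on any Docker 1.12+ host; the service name and image here are just placeholders:

# initialise a single-node swarm, then run 3 replicas of an nginx service
docker swarm init
docker service create --name web --replicas 3 --publish 80:80 nginx
docker service ls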

For my example, I’ve chosen a PHP application (cue the flames); it’s a great way to show how a real-world app might be scaled using Terraform, Packer & Docker Swarm on OpenStack.

There are a few parts that I will be covering:

  1. Creating base images
  2. Using Docker-Compose in Development
  3. Creating the infrastructure (Terraform)
  4. Creating a base image (Packer)
  5. Deploying
  6. Scaling

1. Creating Base Images

You may already be familiar with keeping provisioned AMIs/images up in the cloud that contain most of the services you need. That’s essentially all a base/foundation image is. The reality is that every time you push your code, you don’t want to wait for a stock CentOS/Ubuntu image to be re-provisioned from scratch. Base images let you create a basic setup that you can use not just on one project, but on multiple projects.

What I’ve done is create a repository called Docker Images, which currently has just two services: Nginx & PHP-FPM. Inside it is a little build script which iterates over each container, builds it, and then pushes it to Docker Hub.

Your foundation images can contain whatever you want. Mine have some simple configuration, such as the nginx/php-fpm config. I have configured Supervisord to ensure that php-fpm is always running. Additionally, as I am placing both dev and prod versions of php.ini on the container, Supervisord accepts an environment parameter so the container can be fired up in dev mode or production-ready.
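
For illustration, a foundation image along these lines would do the job. This is only a sketch under the assumptions just described (the file names and package choices are assumptions, not the actual contents of my repo):

FROM centos:7

# php-fpm plus supervisord to keep it running in the foreground
RUN yum -y install epel-release && \
    yum -y install php-fpm supervisor && \
    yum clean all

# dev and prod php.ini variants, selected via an environment parameter
COPY php-dev.ini php-prod.ini /etc/
COPY supervisord.conf /etc/supervisord.conf

EXPOSE 9000
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]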

This is the build.sh script within the Docker Images repo:

build.sh:

#!/bin/bash
# Build one container image and push it to Docker Hub,
# tagged as both :latest and :$VERSION.$BUILD_NUMBER

VERSION=1
CONTAINER=$1
BUILD_NUMBER=$2

docker build ./$CONTAINER -t bobbydvo/ukc_$CONTAINER:latest
docker tag bobbydvo/ukc_$CONTAINER:latest bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER

docker push bobbydvo/ukc_$CONTAINER:latest
docker push bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
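
Run by hand, an invocation would look something like this (the container name and build number are examples):

./build.sh nginx 42

That would push bobbydvo/ukc_nginx:latest and bobbydvo/ukc_nginx:1.42 to Docker Hub.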

A simple Jenkins job with parameterised builds has been configured to pass the correct arguments to the script:

echo $BUILD_NUMBER
docker -v
whoami
sudo docker login -u bobbydvo -p ********
sudo ./build.sh $CONTAINER $BUILD_NUMBER

Note: You will have to ensure that the Jenkins user is allowed sudo access

You can find the repository here: https://github.com/bobbydeveaux/docker-images

Each time the job is run, it will place new versions of each container here:

https://hub.docker.com/r/bobbydvo

Some may argue that, thanks to the layer cache within Docker, you can skip the base image step. However, I find it a great way to keep jobs isolated, with the added benefit of being able to re-use the containers for other projects. It also gives great visibility when a container build has failed simply because an external package has been updated; in that case your ‘latest’ tag won’t be updated, so your deployments won’t be halted! Google have a great guide on building Foundation Images.

We now need to test our two images/containers with our PHP app.

2. Let’s set up our dev environment with the dummyphp app

This is my repository with a dummy PHP app: https://github.com/bobbydeveaux/dummyphp

If you’re familiar with PHP, you will notice that this is a Slim 3 application using Composer for dependency management. You’ll also find a file, docker-compose.yml, which coordinates Docker to run both of our containers:

docker-compose.yml

version: "2"
services:
  php-fpm:
    tty: true
    build: ./
    image: bobbydvo/dummyapp_php-fpm:latest
    ports:
      - "9000:9000"
    environment:
      - APPLICATION_ENV=dev
  web:
    tty: true
    image: bobbydvo/ukc_nginx:latest
    ports:
      - "80:80"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
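
With that file in place, bringing the whole dev environment up locally is one command (standard Compose usage):

docker-compose up -d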

The php-fpm container will use the Dockerfile in the root of the application to build the image, copy the application files onto the Docker image itself, and save the result locally as a new image, rather than using the base image directly. As i