What Is Docker Swarm And How To Use It To Scale A Simple PHP App Along With Terraform & Packer On Cloud Native Infrastructure Powered By OpenStack

Note: This is based on Docker 1.12 as of the time of writing; whilst Docker 1.13 is now released, it is not yet in the CoreOS builds. As soon as 1.13 is available, I will append a footnote to this blog post and edit this note!

As more and more people jump on the Docker bandwagon, more and more are wondering just how exactly we scale this thing. Some will have heard of Docker Compose, some will have heard of Docker Swarm, and then there are some folks out there with their Kubernetes and Mesos clusters.

Docker Swarm became native to Docker in v1.12 and makes container orchestration super simple. Not only that, but each service is reachable via its hostname thanks to the built-in DNS and service discovery. With its overlay network and inbuilt routing mesh, all the nodes can accept connections on the published ports for any of the services running in the Swarm. This basically gives you multiple nodes and lets you treat them as one.

Just to top it off, Docker Swarm has built-in load balancing. Send a request to any of the nodes and it will be routed, round-robin, to one of the containers running the requested service. Simply amazing, and I’m going to show you how you can get started with this great technology.

For my example, I’ve chosen a PHP application (cue the flames); it’s a great way to show how a real-world app may be scaled using Terraform, Packer & Docker Swarm on OpenStack.

There are a few parts that I will be covering:

  1. Creating base images
  2. Using Docker-Compose in Development
  3. Creating the infrastructure (Terraform)
  4. Creating a base image (Packer)
  5. Deploying
  6. Scaling

1. Creating Base Images

You may already be familiar with keeping provisioned AMIs/images up in the cloud that contain most of the services you need. That’s essentially all a base/foundation image is. The reality is that every time you push your code, you don’t want to have to wait for the stock CentOS/Ubuntu image to be re-provisioned. Base images allow you to create a basic setup that you can use not just on one project, but on multiple projects.

What I’ve done is create a repository called Docker Images, which currently has just two services: Nginx & PHP-FPM. Inside it is a little build script which iterates over each container, builds it and then pushes it to Docker Hub.

Your foundation images can contain whatever you want. Mine have some simple Nginx and PHP-FPM configuration. I have configured Supervisord to ensure that php-fpm is always running. Additionally, as I am placing both dev and prod versions of php.ini on the container, Supervisord accepts environment parameters so the container can be fired up in dev mode or production-ready.
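I haven’t reproduced my full Supervisord config here, but a minimal sketch of the idea looks like this (the php.ini paths and the APPLICATION_ENV variable name are illustrative; the real config lives in the Docker Images repo):

supervisord.conf (excerpt):

[program:php-fpm]
; pick the php.ini matching APPLICATION_ENV (dev/prod), then run php-fpm in the foreground
command=/bin/sh -c "cp /etc/php/php-%(ENV_APPLICATION_ENV)s.ini /etc/php/php.ini && exec php-fpm --nodaemonize"
autostart=true
autorestart=true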

This is the build.sh script within the Docker Images repo:

build.sh:

#!/bin/bash
# Usage: ./build.sh <container> <build_number>

VERSION=1
CONTAINER=$1
BUILD_NUMBER=$2

# Build the image, then tag it with both a moving 'latest' tag and an immutable version tag
docker build ./$CONTAINER -t bobbydvo/ukc_$CONTAINER:latest
docker tag bobbydvo/ukc_$CONTAINER:latest bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER

docker push bobbydvo/ukc_$CONTAINER:latest
docker push bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
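For example, running the script by hand for the nginx container on build 42:

$ ./build.sh nginx 42   # pushes bobbydvo/ukc_nginx:latest and bobbydvo/ukc_nginx:1.42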

A simple Jenkins job with parameterised builds has been configured to pass the correct arguments to the script:

echo $BUILD_NUMBER
docker -v
whoami
sudo docker login -u bobbydvo -p Lr6n9hrGBLNxBm
sudo ./build.sh $CONTAINER $BUILD_NUMBER

Note: You will have to ensure that the Jenkins user is allowed sudo access.
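How you grant that access is up to you; the bluntest approach is a sudoers entry added via visudo (fine for a throwaway box, far too permissive for production, where you should whitelist the specific commands instead):

jenkins ALL=(ALL) NOPASSWD: ALL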

You can find the repository here: https://github.com/bobbydeveaux/docker-images

Each time the job is run, it will place new versions of each container here:

https://hub.docker.com/r/bobbydvo

Some may argue that, due to the layer cache built up within Docker, you can skip the base image step. However, I find it a great way to keep jobs isolated, with the added benefit of being able to re-use the containers for other projects. It also gives great visibility when a container build has failed simply because an external package has been updated: the failed build won’t update your ‘latest’ tag, and so it won’t halt your deployments! Google have a great guide on building Foundation Images.

We now need to test our 2 images/containers with our PHP app.

2. Let’s set up our dev environment with the dummy PHP app

This is my repository with a dummy PHP app: https://github.com/bobbydeveaux/dummyphp

If you’re familiar with PHP, you will notice that this is a Slim 3 application using Composer for dependency management. You’ll also find a file, ‘docker-compose.yml’ – this will coordinate Docker to use both of our containers:

docker-compose.yml

version: "2"
services:
  php-fpm:
    tty: true
    build: ./
    image: bobbydvo/dummyapp_php-fpm:latest
    ports:
      - "9000:9000"
    environment:
      - APPLICATION_ENV=dev
  web:
    tty: true
    image: bobbydvo/ukc_nginx:latest
    ports:
      - "80:80"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80

The php-fpm container will use the Dockerfile in the root of the application to build the image, copying the application files onto the image itself and saving the result locally as a new image, rather than using the base image directly. As it happens, the Nginx container doesn’t need any modification, as it’s only the PHP app that will change when we add code. Of course, you can change this to suit your needs if necessary.
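The Dockerfile itself can stay tiny. I haven’t reproduced the repo’s exact file here, but a minimal sketch just extends the php-fpm base image and bakes the code in (the web-root path is an assumption):

Dockerfile:

FROM bobbydvo/ukc_php-fpm:latest

# bake the application code into the image
COPY . /var/www/html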

Running the application is as simple as typing:

docker-compose up

You can now head over to http://localhost and test the application; it will be lightning fast. However, this means that the code on the container is what was copied over when docker-compose up was executed. Any changes to local code will not be reflected. There is a solution to this, and it’s in the form of ‘dev.yml’. This extends the docker-compose.yml file to mount the local volume onto the web root.
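I haven’t shown the repo’s dev.yml verbatim, but a minimal sketch looks like this (the container web-root path is an assumption; see the repo for the real file):

dev.yml

version: "2"
services:
  php-fpm:
    volumes:
      - ./:/var/www/html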

docker-compose -f docker-compose.yml -f dev.yml up

Now you can head to http://localhost, make some changes, and refresh, and you will see that it’s just as though you’re coding locally. Hurrah!

Note: there is a known bug with Docker for Mac, which means that the mounted volume has a bit of latency which can affect load times unless you make use of OPCache in dev mode. However, this is being worked on.

So now what? We have some shiny Docker containers that are working brilliantly together for our PHP app. Great for development, but what about the real world?

Our next topic will cover how to use Terraform to create our servers: 3 Docker Managers as well as a number of Docker slave nodes.

Unfortunately, the CoreOS image provided (great for Docker) doesn’t have Docker Swarm, as this is still in the Beta channel. So we will have to create a new Docker Swarm-enabled image using Packer; let’s go ahead and do that first!

3. Using Packer to create a new Image in Cloud Native Infrastructure

Packer is another tool from HashiCorp which comprises a set of builders and provisioners. It supports many builders such as AWS (AMI), Azure, DigitalOcean, Docker, Google Cloud, VirtualBox, VMware, and of course the one we need: OpenStack. It supports some others too, which is great if you need them!

In terms of provisioning, you can use most of the popular tools such as Ansible, Puppet or Chef, as well as PowerShell and standard shell scripts.

For us, all we need to do is take the stock image of CoreOS and tell it to use the Beta channel, which includes Docker Swarm. This can be done by modifying this file:

/etc/coreos/update.conf

…with this data:

GROUP=beta

At the time of writing, Docker Swarm doesn’t work with docker-compose.yml files. However, Docker 1.13 will enable this feature. Once it’s made its way into the CoreOS builds I’ll be sure to amend this article. For now I’ll show you how to install Docker Compose onto CoreOS whilst we’re provisioning, as it’s a great tool for testing.

As mentioned, we are going to use the OpenStack builder, so here is our ‘builder’ entry:

"builders": [
    {
      "type": "openstack",
      "image_name": "CoreOS-Docker-Beta-1-12",
      "source_image": "8e892f81-2197-464a-9b6b-1a5045735f5d",
      "flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
      "ssh_keypair_name": "ukcloudos",
      "ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
      "use_floating_ip": true,
      "floating_ip_pool": "internet",
      "ssh_username": "core",
      "ssh_pty" : true
    }
  ],

The type is required and must state the builder type you’re using, whereas image_name should be set to whatever you want your new image to be called. source_image is the original image that is already in Glance. The builder also needs to know which flavor to build with; I’m choosing a small instance as this is only for provisioning.

Note: Ensure that you are using an existing keypair name that is in your OpenStack project.
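If you’re unsure which IDs to use for source_image and flavor, the OpenStack CLI will list what’s available in your project:

$ openstack image list
$ openstack flavor list
$ openstack keypair list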

So, now that we have a builder, along with connectivity, let’s provision it:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo sh -c 'echo GROUP=beta > /etc/coreos/update.conf'",
        "sudo systemctl restart update-engine",
        "sudo update_engine_client -update",
        "sudo sh -c 'mkdir /opt/'",
        "sudo sh -c 'mkdir /opt/bin'",
        "sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose'",
        "sudo sh -c 'chmod +x /opt/bin/docker-compose'"
      ]
    },{
      "type": "file",
      "source": "/Users/bobby/.ssh/ukcloudos",
      "destination": "/home/core/.ssh/key.pem"
    }
  ]

Given the simplicity of what we’re doing, I’m just using shell commands: they update CoreOS to use the beta channel (which in turn installs the latest beta build of Docker), and install Docker Compose.

You’ll also notice that we’re copying over an ssh key. This is an important piece of the puzzle later on when we need multiple servers to be able to communicate with each other.

All you need to do to kick off this build is:

$ packer build ./packer/template.json

If you now view your images, either using the command line or the control panel, you will see your new image is ready to be consumed. Feel free to create a test instance using this image and type the following command:

docker version

You will see you are on at least 1.12.1, which includes Swarm. If you’d like to verify Docker Swarm is working, you can type the following command:

docker swarm init

Hopefully, everything worked perfectly for you. If not, feel free to view the full source code of this example here: https://github.com/UKCloud/openstack-packer/tree/docker-beta

4. Using Terraform to create your Infrastructure

Yet another tool from HashiCorp, and an amazing one: Terraform allows infrastructure to be written as code (aka IaC), and not only that, it’s idempotent. No matter how many times you execute it, you’ll get the same end result. Older tools tend to be more procedural; take a shell script, for example: if you ask the shell script to create 5 servers, and run it 5 times, you’ll end up with 25 servers. Terraform is clever, as it maintains state. If you ask it to create 5 servers, it will create 5. Run it again, and it will know you already have 5. Ask it for 8, and it will calculate that you already have 5 and simply add an extra 3. This flexibility is amazing and can be used for magnificent things.
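In practice that boils down to two commands: plan shows the delta against the state file, and apply converges your infrastructure to match.

$ terraform plan    # e.g. shows '+3' when you raise a count from 5 to 8
$ terraform apply   # creates only the missing 3; running it again changes nothing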

All that being said, this is not a Terraform tutorial. This is a tutorial on how to make use of Terraform to spin up some Docker Managers and some Docker slaves so that we can deploy our dummy PHP app. It’s probably best to first take a look at the full main.tf file:

provider "openstack" {
}

resource "openstack_compute_keypair_v2" "test-keypair" {
  name = "ukcloudos"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDggzO/9DNQzp8aPdvx0W+IqlbmbhpIgv1r2my1xOsVthFgx4HLiTB/2XEuEqVpwh5F+20fDn5Juox9jZAz+z3i5EI63ojpIMCKFDqDfFlIl54QPZVJUJVyQOe7Jzl/pmDJRU7vxTbdtZNYWSwjMjfZmQjGQhDd5mM9spQf3me5HsYY9Tko1vxGXcPE1WUyV60DrqSSBkrkSyf+mILXq43K1GszVj3JuYHCY/BBrupkhA126p6EoPtNKld4EyEJzDDNvK97+oyC38XKEg6lBgAngj4FnmG8cjLRXvbPU4gQNCqmrVUMljr3gYga+ZiPoj81NOuzauYNcbt6j+R1/B9qlze7VgNPYVv3ERzkboBdIx0WxwyTXg+3BHhY+E7zY1jLnO5Bdb40wDwl7AlUsOOriHL6fSBYuz2hRIdp0+upG6CNQnvg8pXNaNXNVPcNFPGLD1PuCJiG6x84+tLC2uAb0GWxAEVtWEMD1sBCp066dHwsivmQrYRxsYRHnlorlvdMSiJxpRo/peyiqEJ9Sa6OPl2A5JeokP1GxXJ6hyOoBn4h5WSuUVL6bS4J2ta7nA0fK6L6YreHV+dMdPZCZzSG0nV5qvSaAkdL7KuM4eeOvwcXAYMwZJPj+dCnGzwdhUIp/FtRy62mSHv5/kr+lVznWv2b2yl8L95SKAdfeOiFiQ== opensource@ukcloud.com"
}

resource "openstack_networking_network_v2" "example_network1" {
  name           = "example_network_1"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example_subnet1" {
  name            = "example_subnet_1"
  network_id      = "${openstack_networking_network_v2.example_network1.id}"
  cidr            = "10.10.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
  name = "example_secgroup_1"
  description = "an example security group"
  rule {
    ip_protocol = "tcp"
    from_port   = 22
    to_port     = 22
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "tcp"
    from_port   = 80
    to_port     = 80
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "icmp"
    from_port   = "-1"
    to_port     = "-1"
    self        = true
  }
  rule {
    ip_protocol = "tcp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
  rule {
    ip_protocol = "udp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
}

resource "openstack_networking_router_v2" "example_router_1" {
  name             = "example_router1"
  external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}

resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
  router_id = "${openstack_networking_router_v2.example_router_1.id}"
  subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}

resource "openstack_networking_floatingip_v2" "example_floatip_manager" {
  pool = "internet"
}

resource "openstack_networking_floatingip_v2" "example_floatip_slaves" {
  pool = "internet"
}

data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

data "template_file" "managerinit" {
    template = "${file("managerinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
    }
}

data "template_file" "slaveinit" {
    template = "${file("slaveinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
        node_count = "${var.swarm_node_count + 3}"
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring up the Swarm and save the join tokens
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}

Alternatively, you can view the full example on GitHub: https://github.com/UKCloud/openstack-terraform/tree/docker-swarm
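Note that main.tf references three input variables: git_repo, clone_location and swarm_node_count. I haven’t reproduced my variables.tf here, but a minimal sketch would be (the defaults are illustrative):

variables.tf

variable "git_repo" {
  default = "https://github.com/bobbydeveaux/dummyphp.git"
}

variable "clone_location" {
  default = "/home/core/dummyphp"
}

variable "swarm_node_count" {
  default = 2
}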

Creating the first Docker Manager node

Assuming you’re all good with the basic setup of a network, security groups, floating IP addresses & routing, we’ll head straight to the creation of our Docker Swarm.

To do this, we’re going to create 1 Docker Manager, which will run the ‘docker swarm init’ command.

main.tf

...
data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring up the Swarm!
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}
...

So, what does this do? Mostly it’s self-explanatory: we’re bringing up an instance using our new CoreOS image, and running a few shell commands. Amongst the shell commands is the swarm init command, which advertises on the IP address allocated to the machine.

The next two commands are the really important ones though; these are the commands which grab the ‘join tokens’ that all the other nodes will need to be able to join the swarm. For now, we’re saving the tokens to the home directory, so that later nodes can SSH to this server and grab the tokens (told you there was a reason we needed the SSH key added to our CoreOS image!).

With just this one instance, we have an active swarm, but one that doesn’t do a great deal. The next thing we need to do is create the services, and for that we’re using a template file to make use of the ‘cloud-init’ functionality within OpenStack. The cloud-init file looks like this:

cloudinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

docker pull bobbydvo/ukc_nginx:latest
docker pull bobbydvo/ukc_php-fpm:latest
docker network create --driver overlay mynet
docker service create --update-delay 10s --replicas 1 -p 80:80 --network mynet --name web bobbydvo/ukc_nginx:latest
docker service create --update-delay 10s --replicas 1 -p 9000:9000  --network mynet --name php-fpm bobbydvo/ukc_php-fpm:latest

#The above services should be created by the DAB bundle..
#..but Docker 1.13 is changing the way bundles & stacks work, so parking for now.

What this does is tell the Docker Manager to fire off these commands when it first boots up.

If you visit the external IP address at this point, you should see some text like this: “Welcome to your php-fpm Docker container.” This is because our application has not yet been deployed; we’ll get to that in a bit.
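You can also SSH to the manager’s floating IP and check that both services are registered and running:

$ docker service ls        # should list 'web' and 'php-fpm' with 1/1 replicas
$ docker service ps web    # shows which node the web task landed on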

First, we need to create some more Docker Managers, some Docker Slaves, and get them all to join the Swarm!

Note: We’re initially deploying the base images, as we’ve not yet configured our Jenkins job to deploy the application. When we get that far, you may want to retrospectively update this cloudinit.sh file with the image names of the built application, but it’s not essential. Don’t worry about it!

Adding more nodes to the Swarm

Adding more Docker Managers is now fairly simple, but we can’t just increase the count of the first Docker Manager, as that one runs the special commands that initiate the Swarm. The second resource below will allow us to configure as many more managers as we desire. Once up and running, these ‘secondary masters’ will be no less important than the first Manager, and we will have 3 identical instances with automatic failover.

Docker Swarm managers maintain consensus using the Raft algorithm, so the manager count matters: a majority (quorum) of managers must be reachable for the swarm to keep functioning. Having at least 3 is important (tolerating the loss of 1), whilst 5 is strongly recommended in production (tolerating the loss of 2). This gives Docker Swarm the ability to keep working whilst some nodes are out of service for whatever reason.

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

The important part now is to instruct each ‘secondary master’ to join the swarm as soon as it has booted up. We can do this with another cloud-init script. For annotation purposes, I have called this ‘managerinit.sh’:

managerinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/manager-token /home/core/manager-token
sudo docker swarm join --token $(cat /home/core/manager-token) ${swarm_manager}

Because this is the first time the server will have connected, we’re passing a few options to stop the scp command prompting for input. Ultimately though, we’re connecting to the ‘primary master’ to grab the join tokens that we mentioned earlier in the article. The join tokens are the only way we can ensure we join the correct swarm. The only parameter we are passing in is the IP address of the first Swarm Manager.

If you were to execute Terraform as-is, without any slaves, and then SSH’d to the floating IP, you could run the following command:

docker node ls

And you will see a list of the managers, one of which will show it’s the leader, whereas the others will show as reachable.

Right now, masters will be able to serve your services in just the same way that slaves will be able to in future. In fact, you could just create a Swarm full of Masters if you like!

Adding Slaves to the Swarm

The code to add more slaves is similar to the masters, only this time the count is coming as an input from the variables.tf file. This is so that we can have as many nodes as we require.

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}
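The slaveinit.sh script referenced above mirrors managerinit.sh, except that it grabs the worker token. A sketch of the important part is below; the full version, including how the node_count variable is used, is in the GitHub repo linked earlier:

slaveinit.sh (excerpt)

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/worker-token /home/core/worker-token
sudo docker swarm join --token $(cat /home/core/worker-token) ${swarm_manager}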