Category Archives: PHP

PHP Tutorials

What Is Docker Swarm And How To Use It To Scale A Simple PHP App Along With Terraform & Packer on Cloud Native Infrastructure powered By OpenStack

Note: This is based on Docker 1.12. At the time of writing, Docker 1.13 has been released but is not yet in the CoreOS builds. As soon as 1.13 is available, I will append a footnote to this blog post and edit this note!

As more and more people jump on the Docker bandwagon, more and more people are wondering just exactly how we scale this thing. Some will have heard of Docker-Compose, some will have heard of Docker Swarm, and then there’s some folks out there with their Kubernetes and Mesos clusters.

Docker Swarm became native to Docker in v1.12 and makes container orchestration super simple. Not only that, but each service is reachable by name thanks to the built-in DNS and service discovery. With its overlay network and inbuilt routing mesh, all the nodes can accept connections on the published ports for any of the services running in the Swarm. This essentially lets you treat multiple nodes as one.

Just to top it off, Docker Swarm has built-in load balancing. Send a request to any of the nodes and it will be routed, round-robin, across all the containers running the requested service. Simply amazing, and I’m going to show you how you can get started with this great technology.

For my example, I’ve chosen a PHP application (cue the flames); it’s a great way to show how a real-world app may be scaled using Terraform, Packer & Docker Swarm on OpenStack.

There are a few parts that I will be covering:

  1. Creating base images
  2. Using Docker-Compose in Development
  3. Creating a base image (Packer)
  4. Creating the infrastructure (Terraform)
  5. Deploying
  6. Scaling

1. Creating Base Images

You may already be familiar with keeping provisioned AMIs/images up in the cloud that contain most of the services you need. That’s essentially all a base/foundation image is. The reality is that every time you push your code, you don’t want to have to wait for the stock CentOS/Ubuntu image to be re-provisioned. Base images allow you to create a basic setup that you can use not just on one project, but on multiple projects.

What I’ve done is create a repository called Docker Images, which currently has just two services: Nginx & PHP-FPM. Inside it is a little build script which iterates over each container, builds it, and then pushes it to Docker Hub.

Your foundation images can contain whatever you want. Mine have some simple configuration, such as the nginx/php-fpm config files. I have configured Supervisord to ensure that php-fpm is always running. Additionally, as I place both dev and prod versions of php.ini on the container, Supervisord accepts an environment parameter so the container can be fired up in dev mode or production-ready.

This is the build.sh script within the Docker Images repo:

build.sh:

#!/bin/bash

VERSION=1
CONTAINER=$1
BUILD_NUMBER=$2

docker build ./$CONTAINER -t bobbydvo/ukc_$CONTAINER:latest
docker tag bobbydvo/ukc_$CONTAINER:latest  bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER

docker push bobbydvo/ukc_$CONTAINER:latest
docker push bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
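
For example, a hypothetical run for the Nginx image on build number 42 would look like this:

./build.sh nginx 42

That would push both bobbydvo/ukc_nginx:latest and a versioned bobbydvo/ukc_nginx:1.42 tag to Docker Hub.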

A simple Jenkins job with parameterised builds has been configured to pass the correct arguments to the script:

echo $BUILD_NUMBER
docker -v
whoami
sudo docker login -u bobbydvo -p Lr6n9hrGBLNxBm
sudo ./build.sh $CONTAINER $BUILD_NUMBER

Note: You will have to ensure that the Jenkins user is allowed sudo access

You can find the repository here: https://github.com/bobbydeveaux/docker-images

Each time the job is run, it will place new versions of each container here:

https://hub.docker.com/r/bobbydvo

Some may argue that, due to the cache built up in layers within Docker, you can skip the base image step. However, I find it to be a great way to keep jobs isolated, with the added benefit of being able to re-use the containers for other projects. It also gives great visibility when a container build has failed simply because an external package has been updated; in that case it won’t update your ‘latest’ tag, and therefore won’t halt your deployments. Google have a great guide on building Foundation Images.

We now need to test our 2 images/containers with our PHP app.

2. Let's set up our dev environment with the dummyphp app

This is my repository with a dummy PHP app: https://github.com/bobbydeveaux/dummyphp

If you’re familiar with PHP, you will notice that this is a Slim 3 application using Composer for dependency management. You’ll also find a file, ‘docker-compose.yml’ – this will coordinate Docker to use both of our containers:

docker-compose.yml

version: "2"
services:
  php-fpm:
    tty: true
    build: ./
    image: bobbydvo/dummyapp_php-fpm:latest
    ports:
      - "9000:9000"
    environment:
      - APPLICATION_ENV=dev
  web:
    tty: true
    image: bobbydvo/ukc_nginx:latest
    ports:
      - "80:80"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80

The php-fpm container will use the Dockerfile in the root of the application to build the image, copy the application files into it, and save it locally as a new image, rather than using the base image directly. As it happens, the Nginx container doesn’t need any modification, as it’s only the PHP app that will change when we add code. Of course, you can change this to suit your needs if necessary.
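
The real Dockerfile lives in the dummyphp repo, but as a rough sketch it needs little more than a FROM and a COPY (the /srv destination is an assumption, based on the paths used by the PHPUnit run later in this article):

# Sketch of the application's Dockerfile: start from the php-fpm base image
# and bake the application code into a new image.
FROM bobbydvo/ukc_php-fpm:latest

# Copy the application into the image; /srv is an assumed location.
COPY . /srv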

Running the application is as simple as typing:

docker-compose up

You can now head over to http://localhost and test the application; it will be lightning fast. However, the code on the container is whatever was copied over when docker-compose up was executed, so any changes to local code will not be reflected. There is a solution to this, and it’s in the form of ‘dev.yml’. This extends the docker-compose.yml file to mount the local volume onto the web root.
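
I haven’t reproduced the full dev.yml here, but a minimal version along these lines does the trick – the service name matches the docker-compose.yml above, and the /srv mount point is an assumption (it should match wherever your Dockerfile puts the code):

version: "2"
services:
  php-fpm:
    volumes:
      - ./:/srv

Then bring the stack up with both files: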

docker-compose -f docker-compose.yml -f dev.yml up

Now you can head to http://localhost, make some changes, and refresh, and you will see that it’s just as though you’re coding locally. Hurrah!

Note: there is a known bug with Docker for Mac, which means that the mounted volume has a bit of latency which can affect load times unless you make use of OPCache in dev mode. However, this is being worked on.

So now what? We have some shiny Docker containers that are working brilliantly together for our PHP app. Great for development, but what about the real world?

Our next topic will cover how to use Terraform to spin up the infrastructure: 3 Docker Manager nodes as well as a number of Docker Slave nodes.

Unfortunately, the CoreOS (great for Docker) image provided doesn’t have Docker Swarm, as this is still in the Beta channel. We will first have to create a new Docker Swarm-enabled image using Packer, so let’s go ahead and do that!

3. Using Packer to create a new Image in Cloud Native Infrastructure

Packer is another tool from Hashicorp, made up of a set of builders and provisioners. It supports many builders such as AWS (AMI), Azure, DigitalOcean, Docker, Google, VirtualBox, VMware, and of course the one we need: OpenStack. There are some others that it supports too, which is great if you need them!

In terms of provisioning, you can use most of the popular tools such as Ansible, Puppet or Chef, as well as PowerShell and standard shell scripts.

For us, all we need to do is take the stock image of CoreOS and tell it to use the Beta channel, which includes Docker Swarm. This can be done by modifying this file:

/etc/coreos/update.conf

…with this data:

GROUP=beta

At the time of writing, Docker Swarm doesn’t work with docker-compose.yml files; however, Docker 1.13 will enable this feature. Once it’s made its way into the CoreOS builds I’ll be sure to amend this article. For now I’ll show you how to install Docker Compose onto CoreOS whilst we’re provisioning, as it’s a great tool for testing.

As mentioned, we are going to use the OpenStack builder, so here is our ‘builder’ entry:

"builders": [
    {
      "type": "openstack",
      "image_name": "CoreOS-Docker-Beta-1-12",
      "source_image": "8e892f81-2197-464a-9b6b-1a5045735f5d",
      "flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
      "ssh_keypair_name": "ukcloudos",
      "ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
      "use_floating_ip": true,
      "floating_ip_pool": "internet",
      "ssh_username": "core",
      "ssh_pty" : true
    }
  ],

The type is required and must state the builder type you’re using, whereas image_name should be set to whatever you want your new image to be called. source_image is the original image that is already in Glance. The builder also wants to know the flavor to build with; I’m choosing a small instance as this is only used for provisioning.

Note: Ensure that you are using an existing keypair name that is in your OpenStack project.

So, now that we have a builder, along with connectivity, let’s provision it:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo sh -c 'echo GROUP=beta > /etc/coreos/update.conf'",
        "sudo systemctl restart update-engine",
        "sudo update_engine_client -update",
        "sudo sh -c 'mkdir /opt/'",
        "sudo sh -c 'mkdir /opt/bin'",
        "sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose'",
        "sudo sh -c 'chmod +x /opt/bin/docker-compose'"
      ]
    },{
      "type": "file",
      "source": "/Users/bobby/.ssh/ukcloudos",
      "destination": "/home/core/.ssh/key.pem"
    }
  ]

Given the simplicity of what we’re doing, I’m just using shell commands, which update CoreOS to use the beta channel (and in turn install the latest beta build of Docker), along with installing Docker Compose.

You’ll also notice that we’re copying over an ssh key. This is an important piece of the puzzle later on when we need multiple servers to be able to communicate with each other.
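
Before kicking anything off, it’s worth running Packer’s built-in validation over the template to catch JSON or configuration mistakes early:

$ packer validate ./packer/template.json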

All you need to do to kick off this build is:

$ packer build ./packer/template.json

If you now view your images, either using the command line or the control panel, you will see your new image is ready to be consumed. Feel free to create a test instance using this image and type the following command:

docker version

You will see you are on at least 1.12.1, which includes Swarm. If you’d like to verify Docker Swarm is working, you can type the following command:

docker swarm init

Hopefully, everything worked perfectly for you. If not, feel free to view the full source code of this example here: https://github.com/UKCloud/openstack-packer/tree/docker-beta

4. Using Terraform to create your Infrastructure

Yet another tool from Hashicorp, and an amazing one: Terraform allows infrastructure to be written as code (IaC), and not only that, it’s declarative and idempotent. No matter how many times you execute it, you’ll get the same end result. Some older tools were more procedural – take a shell script for example; if you ask the shell script to create 5 servers, and run it 5 times, you’ll end up with 25 servers. Terraform is clever, as it maintains state. If you ask it to create 5 servers, it will create 5. Run it again, and it will know you already have 5. Ask it to create 8, it will calculate that you already have 5, and simply add an extra 3. This flexibility is amazing and can be used for magnificent things.

All that being said, this is not a Terraform tutorial. This is a tutorial on how to make use of Terraform to spin up some Docker Managers and some Docker Slaves so that we can deploy our Dummy PHP App. It’s probably best to first take a look at the full main.tf file:

provider "openstack" {
}

resource "openstack_compute_keypair_v2" "test-keypair" {
  name = "ukcloudos"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDggzO/9DNQzp8aPdvx0W+IqlbmbhpIgv1r2my1xOsVthFgx4HLiTB/2XEuEqVpwh5F+20fDn5Juox9jZAz+z3i5EI63ojpIMCKFDqDfFlIl54QPZVJUJVyQOe7Jzl/pmDJRU7vxTbdtZNYWSwjMjfZmQjGQhDd5mM9spQf3me5HsYY9Tko1vxGXcPE1WUyV60DrqSSBkrkSyf+mILXq43K1GszVj3JuYHCY/BBrupkhA126p6EoPtNKld4EyEJzDDNvK97+oyC38XKEg6lBgAngj4FnmG8cjLRXvbPU4gQNCqmrVUMljr3gYga+ZiPoj81NOuzauYNcbt6j+R1/B9qlze7VgNPYVv3ERzkboBdIx0WxwyTXg+3BHhY+E7zY1jLnO5Bdb40wDwl7AlUsOOriHL6fSBYuz2hRIdp0+upG6CNQnvg8pXNaNXNVPcNFPGLD1PuCJiG6x84+tLC2uAb0GWxAEVtWEMD1sBCp066dHwsivmQrYRxsYRHnlorlvdMSiJxpRo/peyiqEJ9Sa6OPl2A5JeokP1GxXJ6hyOoBn4h5WSuUVL6bS4J2ta7nA0fK6L6YreHV+dMdPZCZzSG0nV5qvSaAkdL7KuM4eeOvwcXAYMwZJPj+dCnGzwdhUIp/FtRy62mSHv5/kr+lVznWv2b2yl8L95SKAdfeOiFiQ== opensource@ukcloud.com"
}

resource "openstack_networking_network_v2" "example_network1" {
  name           = "example_network_1"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example_subnet1" {
  name            = "example_subnet_1"
  network_id      = "${openstack_networking_network_v2.example_network1.id}"
  cidr            = "10.10.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
  name = "example_secgroup_1"
  description = "an example security group"
  rule {
    ip_protocol = "tcp"
    from_port   = 22
    to_port     = 22
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "tcp"
    from_port   = 80
    to_port     = 80
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "icmp"
    from_port   = "-1"
    to_port     = "-1"
    self        = true
  }
  rule {
    ip_protocol = "tcp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
  rule {
    ip_protocol = "udp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
}

resource "openstack_networking_router_v2" "example_router_1" {
  name             = "example_router1"
  external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}

resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
  router_id = "${openstack_networking_router_v2.example_router_1.id}"
  subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}

resource "openstack_networking_floatingip_v2" "example_floatip_manager" {
  pool = "internet"
}

resource "openstack_networking_floatingip_v2" "example_floatip_slaves" {
  pool = "internet"
}

data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

data "template_file" "managerinit" {
    template = "${file("managerinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
    }
}

data "template_file" "slaveinit" {
    template = "${file("slaveinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
        node_count = "${var.swarm_node_count + 3}"
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Create TLS certs
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}

Alternatively, you can view the full example on GitHub: https://github.com/UKCloud/openstack-terraform/tree/docker-swarm

Creating the first Docker Manager node

Assuming you’re all good with the basic setup of a network, security groups, floating IP addresses & routing, we’ll head straight to the creation of our Docker Swarm.

To do this, we’re going to create 1 Docker Manager, which will run the ‘docker swarm init’ command.

main.tf

...
data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring up the Swarm!
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}
...

So, what does this do? Mostly it’s self-explanatory: we’re bringing up an instance using the new CoreOS image and running a few shell commands. Amongst the shell commands is the swarm init command, which is advertising on the IP address allocated to the machine.

The next two commands are the really important ones though; these are the commands which grab the ‘join tokens’ that all the other nodes will need in order to join the swarm. For now, we’re saving the tokens to the home directory, so that later nodes can SSH to this server and grab them (told you there was a reason we needed the SSH key added to our CoreOS image!).

With just this one instance, we have an active swarm, but one that doesn’t do a great deal. The next thing we need to do is create the services, and for that we’re using a template file to make use of the ‘cloud init’ functionality within OpenStack. The cloud init file looks like this:

cloudinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

docker pull bobbydvo/ukc_nginx:latest
docker pull bobbydvo/ukc_php-fpm:latest
docker network create --driver overlay mynet
docker service create --update-delay 10s --replicas 1 -p 80:80 --network mynet --name web bobbydvo/ukc_nginx:latest
docker service create --update-delay 10s --replicas 1 -p 9000:9000  --network mynet --name php-fpm bobbydvo/ukc_php-fpm:latest

#The above services should be created by the DAB bundle..
#..but Docker 1.13 is changing the way bundles & stacks work, so parking this for now.

What this does is tell the Docker Manager to fire off these commands when it first boots up.

If you visit the external IP address at this point, you should see some text like, “Welcome to your php-fpm Docker container.” This is because our application has not yet been deployed; we’ll get to that in a bit.
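
If you’d like to double-check the services themselves, SSH to the manager’s floating IP as the core user and list them; assuming the cloudinit script ran, you should see the web and php-fpm services with one replica each:

ssh core@<manager-floating-ip>
docker service ls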

First, we need to create some more Docker Managers, some Docker Slaves, and get them all to join the Swarm!

Note: We’re initially deploying the base images, as we’ve not yet configured our Jenkins job to deploy the application. When we get that far, you may want to retrospectively update this cloudinit file with the image names of the built application, but it’s not essential. Don’t worry about it!

Adding more nodes to the Swarm

Adding more Docker Managers is now fairly simple, but we can’t just increase the count of the first Docker Manager, as that one has special commands to initiate the Swarm. The second resource below will allow us to configure as many additional managers as we desire. Once up and running, these ‘secondary masters’ will be no less important than the first Manager, and we will have 3 identical instances with automatic failover.

Docker Swarm manager nodes use the Raft consensus algorithm, so having at least 3 is important, whilst 5 is strongly recommended in production. This gives Docker Swarm the ability to keep functioning (maintaining a quorum) whilst some manager nodes are out of service for whatever reason.

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

The important part now is to instruct each ‘secondary master’ to join the swarm as soon as it has booted up. We can do this with another cloud init script. For annotation purposes, I have called this ‘managerinit.sh’:

managerinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/manager-token /home/core/manager-token
sudo docker swarm join --token $(cat /home/core/manager-token) ${swarm_manager}

Due to this being the first time the server will have connected, we’re passing a few options to prevent the scp command from prompting for any input. Ultimately though, we’re connecting to the ‘primary master’ to grab the join tokens that we mentioned earlier in the article. The join tokens are the only way we can ensure we join the correct swarm. The only parameter we are passing in is the IP address of the first Swarm Manager.

If you were to execute Terraform as-is, without any slaves, and then SSH’d to the floating IP, you could run the following command:

docker node ls

You will see a list of the manager nodes, one of which will show as the Leader, whereas the others will show as Reachable.

Right now, masters will be able to serve your services in just the same way that slaves will be able to in future. In fact, you could just create a Swarm full of Masters if you like!

Adding Slaves to the Swarm

The code to add more slaves is similar to the masters, only this time the count is coming as an input from the variables.tf file. This is so that we can have as many nodes as we require.

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}

The main difference between the slaves and masters is the cloud init file. In the file below we’re doing a number of things:

  • Copying the worker ‘join token’ from the master
  • Joining the node into the Docker Swarm
  • Scaling the active services down to a minimum of 3
  • Scaling the active services back up to the number of nodes we require

slaveinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.


sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/worker-token /home/core/worker-token
sudo docker swarm join --token $(cat /home/core/worker-token) ${swarm_manager}

# Horrible hack, as Swarm doesn't evenly distribute to new nodes 
# https://github.com/docker/docker/issues/24103
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=3"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=3"

# Scale to the number of instances we should have once the script has finished.
# This means it may scale to 50 even though we only have 10, with 40 still processing.
# Hence the issue above.
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=${node_count}"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=${node_count}"

Copying the token and joining the swarm is fairly trivial, and similar to what happens with the master nodes. What we’re also doing, though, is issuing a command to the Docker Manager instructing it to scale the services across the number of nodes we are scaling to. Without this code, one would have to scale the infrastructure and then scale the Docker services manually. By including the command in the infrastructure-as-code file, we can scale Docker Swarm with just the ‘terraform apply’ command.

Note: As the annotations suggest, the scaling solution here is not so elegant. I will explain more:

Suppose we have 3 Docker Managers and we add 3 Docker Slaves. As the first Docker Slave is created, it will scale the swarm using the ‘docker service scale web=6’ command, as can be seen in the code above. However, the moment the first Docker Slave issues that command, we only have 4 nodes, so we have 6 containers running on 4 nodes. That’s not a big problem, as we’re about to add another 2 Docker Slave nodes. However, when the 2nd and 3rd slave nodes join the swarm, Docker doesn’t allocate any services to those nodes. The only way to allocate services to them is to scale down and back up again, which is precisely what the code above is doing. Docker is aware of this ‘feature’ and is looking at adding a flag to the Docker Swarm join command to redistribute the services.

5. Deploying the Application

We now have 3 Docker Managers and 3 Docker Slaves all running in an active Docker Swarm. We can scale up, and we can scale down. This is simply awesome, but not so fun if we don’t have our app deployed to test this functionality.

To deploy the app we’re going to set up a Jenkins job which will be fired either manually or when a commit has been made.

The Jenkins job should be configured with the commands below; however, if you don’t want to create a Jenkins job, you can always just throw them into a shell script and modify the variables.

set -e

DUMMY_VERSION=$BUILD_VERSION
NGINX_VERSION='latest'


sudo docker-compose build 

sudo docker run -i bobbydvo/dummyapp_php-fpm /srv/vendor/bin/phpunit -c /srv/app/phpunit.xml


# tag & push only if all the above succeeded (set -e)
sudo docker tag bobbydvo/dummyapp_php-fpm:latest  bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:latest

ssh core@51.179.219.14 "docker service update --image bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION php-fpm"
ssh core@51.179.219.14 "docker service update --image bobbydvo/ukc_nginx:$NGINX_VERSION web"

Note: You will have to ensure that the jenkins user is allowed sudo access

What does this job do then? We’re telling docker-compose to build from the docker-compose.yml file that we included in step 2 for our dev environment. This instructs Docker to build a new image containing the latest code, and we then run our unit tests on the newly built container. As we’re using the ‘set -e’ instruction, we only continue to the next step if the previous step succeeded. With that in mind, if our unit tests pass, we tag the latest image and push it to Docker Hub.

The final step is to connect to the Docker Manager and update the service with the latest image. When creating the service, we specified an update delay of 10 seconds, so as soon as this command is issued it will take approximately a minute for all our nodes to be updated.
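
If you’re curious to watch the rolling update in progress, ‘docker service ps’ (available in Docker 1.12 and later) will list the old tasks shutting down as the new ones start – here using the same manager IP as the Jenkins job above:

ssh core@51.179.219.14 "docker service ps php-fpm"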

You can now visit the floating IP that you’ve allocated to the Docker Manager and you will see that Docker automatically load balances the traffic amongst all the nodes. Simply amazing!

6. Scaling the Application

The final step, assuming your application is struggling to cope with the load, is to add more nodes. You can modify the value in the variables.tf:

variables.tf

variable "swarm_node_count" {
    default = 10
}

And apply!

terraform apply

Literally, that simple. You can scale to 10 nodes, 50 nodes, 1000 nodes, and your application will be automatically load balanced via Docker Swarm. What’s better, you know that each and every node running is an exact replica, provisioned in exactly the same way, running the exact same code.

I hope you’ve been able to follow this tutorial, along with understanding all the code examples. However, if you have any comments or questions, please leave them below or tweet me: @bobbyjason.

Many thanks!


HHVM vs PHP vs OPCache vs Go vs Node

Let's start by saying I was just curious as to how fast Go was compared to PHP. I knew it was faster, but wanted some benchmarks.

I found this great article by @jaxbot:
http://jaxbot.me/articles/benchmarks_nodejs_vs_go_vs_php_3_14_2013

He then went on to dismiss PHP as it was too slow; so didn’t include it in the Go 1.1 benchmark test he did later:
http://jaxbot.me/articles/benchmarks_nodejs_vs_go_11_5_27_2013

Fair game, I thought – until a colleague pondered how fast HHVM would be, as he’d recently installed it. Given how easy it is to install HHVM these days, I decided to do my own benchmarking.

For the Node & Go benchmarks, please refer to @jaxbot’s links above.

There is a minor difference in the code, and that is that I am using a function to wrap the work. PHP is generally faster at executing code inside a function, plus HHVM’s JIT doesn’t optimise code that isn’t in a function.

The code:
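
Roughly, it looks like this – a sketch, with the array size and timing code assumed, but with the work wrapped in a function as described above:

<?php

// bubble.php (sketch) – sort a random array and print how long it took.
function bubbleSort(array $data)
{
    $n = count($data);
    for ($i = 0; $i < $n; $i++) {
        for ($j = 0; $j < $n - $i - 1; $j++) {
            if ($data[$j] > $data[$j + 1]) {
                // Swap the out-of-order pair.
                $tmp = $data[$j];
                $data[$j] = $data[$j + 1];
                $data[$j + 1] = $tmp;
            }
        }
    }

    return $data;
}

$data = array();
for ($i = 0; $i < 10000; $i++) {
    $data[] = mt_rand();
}

$start = microtime(true);
bubbleSort($data);
echo 'Run: ' . (microtime(true) - $start) . "\n";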

PHP 5.5 (opcache.enable_cli=0)
$ php bubble.php
Run 1: 24.121762990952
Run 2: 24.156540155411
Run 3: 24.948321819305
Run 4: 26.411414861679
Run 5: 24.790290117264
Average: 24.882

PHP 5.5 w/OPCache (opcache.enable_cli=1)
$ php bubble.php
Run 1: 24.675834178925
Run 2: 25.641896009445
Run 3: 26.468472003937
Run 4: 24.278208017349
Run 5: 24.843347072601
Average: 25.182

HHVM (JIT enabled)
$ hhvm -v Eval.Jit=true bubble.php
Run 1: 2.6463210582733
Run 2: 2.6204199790955
Run 3: 2.563747882843
Run 4: 2.9089078903198
Run 5: 2.6408560276031
Average: 2.672

Interestingly, OPcache didn't fare any better in this bubble sort – which makes sense, as OPcache only skips the compile step; it doesn't speed up execution of CPU-bound code.

Now let's compare to the tests from @jaxbot. He had different hardware, so a direct comparison is almost meaningless... BUT HHVM speaks for itself when compared to the Zend Engine.

                   Node.js    Go 1.1      PHP 5.5    HHVM (JIT)
Avg of 5 trials    430ms      326.26ms    24.88ms    2.67ms
Best               420ms      290.27ms    24.12ms    2.56ms

Hope that helps someone who wonders how good HHVM is?! 🙂

I finish with a quote from @jaxbot: "the same rules apply here as last time; take these with a lattice of Sodium Chloride, as the benchmarks tell us information, not answers."


Installing ZooKeeper for PHP on CentOS 6.3

This post is very short, simply as a reference to anyone out there that would like to install ZooKeeper on CentOS 6.3, and connect via the PHP Bindings.

To download ZooKeeper, you can visit Globocom’s GitHub page for updated versions. Below are the versions I used at the time of writing.

Install ZooKeeper:

curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/zookeeper-3.4.3-2.x86_64.rpm
rpm -ivh zookeeper*
service zookeeper restart

ZooKeeper is now up and running, but you need to install some more stuff before you can connect to it!

curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/libzookeeper-3.4.3-2.x86_64.rpm
rpm -ivh libzookeeper*
curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/libzookeeper-devel-3.4.3-2.x86_64.rpm
rpm -ivh libzookeeper-devel*

Install php-zookeeper from Andrei Zmievski:

git clone https://github.com/andreiz/php-zookeeper.git
cd php-zookeeper
pear build
./configure
make
make install

add zookeeper.so to your php.ini, or create zookeeper.ini and place it in your php.d folder!
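
If you go down the php.d route it’s a one-liner; something like this, followed by a quick check that the module is loaded:

echo 'extension=zookeeper.so' > /etc/php.d/zookeeper.ini
php -m | grep -i zookeeper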

Hope that helps someone!

Cheers


Adding PHPDocumentor (Sami) via Composer

In a previous article I demonstrated setting up a Silex project via composer. In addition to setting up PHPUnit I also mentioned how to get Codesniffer working to PSR2 coding standards. I figured a nice addition to that would be setting up PHPDocumentor.

Note: Using a document generator implies that you need documentation. If you’ve already got code and no documentation, then it’s probably worth highlighting that some would say you’re doing something wrong. Arguably, you should always have some documentation explaining what the API endpoints need to be and what they should return. For the purposes of this article I shall assume that you’re like me – and want to create some pretty documentation automagically.

I’ve actually opted to use Sami over phpDocumentor2, because phpDocumentor didn’t seem to fully support Composer when Silex was installed (conflicting dependencies). It also helps that Sami is from Fabien Potencier of Sensio Labs, who wrote Silex.

Getting Started

Okay, so let’s dive right in – I’m assuming you’ve already got the application running from my previous article mentioned above.

Edit your composer.json:

{
    "minimum-stability": "dev",
    "require": {
        "silex/silex": "1.0.*@dev"
    },
    "autoload": {
        "psr-0": {"DVO": "src/"}
    },
    "require-dev": {
        "phpunit/phpunit": "3.7.*",
        "squizlabs/php_codesniffer": "1.*",
        "sami/sami" : "*",
    }
}

Now give composer a little nudge:

$ composer update --dev

Very quickly you can verify that Sami has been installed correctly:

$ ./vendor/bin/sami.php

If that spits out some useful help information then you’re onto a winner. The next step is to create a config file; I’ve gone for a basic implementation just to get it up and running.

Create a config directory and create a new file called sami.php with the following:

<?php

return new Sami\Sami(__DIR__.'/../src/', array(
    'build_dir' => __DIR__.'/../build/sami/documentation',
    'cache_dir' => __DIR__.'/../build/sami/cache',
));

Give it another whirl:

$ ./vendor/bin/sami.php update config/sami.php

How easy was that? You will see it has dumped a bunch of stuff into the build/sami folder. You can easily browse the documentation and/or set up a vhost to share it.
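
If you don’t fancy setting up a vhost just to have a quick look, PHP’s built-in web server (PHP 5.4+) is enough for local browsing:

$ php -S localhost:8000 -t build/sami/documentation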

Nice and simple. Let me know if you have any thoughts/feedback.

Bobby (@bobbyjason)


Using CruftFlake with ZeroMQ for ID generation instead of Twitter Snowflake

What is CruftFlake?

CruftFlake is essentially a PHP version of Twitter’s Snowflake. However, rather than using Apache Thrift, CruftFlake uses ZeroMQ.

Snowflake and CruftFlake are both used for generating unique ID numbers at high scale. In large, scalable systems – you tend to move away from the likes of MySQL (with its ever so lovely auto-increment), and move to NoSQL solutions such as MongoDB, CouchDB, Redis, Cassandra and Hadoop/Hbase.

There are many database solutions that address the problem of scalability, and one thing you’ll find yourself needing more often than not is the ability to generate a unique ID – and that’s what CruftFlake is for.

Why?

I quote from my PHP UK Conference review post:

“If you use technology that was designed to be resilient, and then build your application atop of that with resilience in mind, then there is a very good chance that your app will also be resilient.”

To be fair, this is more about scalability than resilience, but one could argue they go hand in hand. The point being that you can’t rely on a single auto-increment value if you have 100 database servers; it would be… disgusting.

Installing

ZeroMQ

I’m still running things locally, so the installation below assumes you’re running Mountain Lion.

First things first: sadly, Mountain Lion and the new version of Xcode don’t appear to ship with Autoconf, so you’ll have to install it:

$ cd ~
$ mkdir src
$ cd src
$ curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-latest.tar.gz
$ tar xzf autoconf-latest.tar.gz
$ cd autoconf-*
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

Now that should have sorted that issue out!

Next, you’ll want to install ZeroMQ:

$ cd ~/src
$ curl -v http://download.zeromq.org/zeromq-3.2.2.tar.gz > zeromq.tar.gz
$ tar xzf zeromq.tar.gz
$ cd zeromq-*
$ ./configure
$ make
$ sudo make install

After that you’ll need to install the PHP bindings:

$ sudo pear channel-discover pear.zero.mq
$ sudo pecl install pear.zero.mq/zmq-beta
$ echo 'extension=zmq.so' | sudo tee -a /etc/php.ini

Verify the install:

$ php -i | grep libzmq
libzmq version => 3.2.2

If you’ve managed that with minimal effort then give yourself a huge pat on the back! 🙂

CruftFlake

Now here comes the easy part. I forked davegardnerisme/cruftflake over to my posmena/cruftflake repo and added in some composer love. I should really do a pull request and maybe @davegardnerisme will permit 🙂

Anyway, create a new folder called cruftflake and create a file called composer.json with the following:

{
    "minimum-stability": "dev",
    "require": {
        "posmena/cruftflake": "*"
    }
}

Then:

$ composer install

Generating an ID

Ready for the clever bit? Open two terminals, both in the cruftflake directory. In one of them, do:

$ php vendor/posmena/cruftflake/scripts/cruftflake.php
Claimed machine ID 512 via fixed configuration.
Binding to tcp://*:5599

That will set the service running. So, now if you want an id – you just go to the other window and type:

$ php vendor/posmena/cruftflake/scripts/client.php
153291408887775232

How’s that for a nice unique number? It’s generated from the system time and the configured machine ID, plus a sequence number.
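
If you’re wondering what’s inside that number: assuming CruftFlake follows the standard Snowflake bit layout (41 bits of milliseconds since a custom epoch, 10 bits of machine ID, 12 bits of sequence), you can pull an ID apart with a little bit-shifting. Note the timestamp is relative to the generator’s own epoch, not a Unix timestamp:

<?php

$id = 153291408887775232;

$sequence  = $id & 0xFFF;          // low 12 bits: per-millisecond sequence number
$machineId = ($id >> 12) & 0x3FF;  // next 10 bits: configured machine ID
$timestamp = $id >> 22;            // remaining bits: ms since the generator's epoch

echo "timestamp: $timestamp, machine: $machineId, sequence: $sequence\n";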

To show how fast it is, you can generate 10,000 in less than 2 seconds:

$ php vendor/posmena/cruftflake/scripts/client.php -n 10000

The duration it takes will of course depend on your server, but don’t forget that this is per process. You can have as many of these running as you wish!

Summary

Easy, right?!

I’m sure that as you’ve been following the tutorial, you’ve been looking at the code and seeing what it’s doing. You will see how ZeroMQ is set up as well as how the generator works.

You may notice that I’ve elected to skip using the ZooKeeper configuration. The reason for this is that ZooKeeper is for running multiple nodes; you don’t need multiple nodes for a quick demo!

I’ve found CruftFlake to be a really neat tool. It’s very much overkill for small projects, but the whole point is to play around with this stuff so you are aware of the scalable solutions out there.

Thanks to @davegardnerisme for letting me fork – if I do issue a pull request, I will be sure to update this post accordingly.

I shall definitely be blogging soon when implementing this into a real-world scenario. Stay tuned!
