Category Archives: Tutorials

Here I’ll show you how to do cool stuff.

Using Packer & Ansible to build a LEMP Server on Cloud Native Infrastructure Powered by OpenStack

In these days when everyone is using Docker and building containers for every new project, it’s easy to forget that for some projects it’s total overkill. Sometimes you may just want to build a server the old way: create a basic image that you can deploy to your infrastructure, and not worry about learning containers!

Whatever your stance on the above, building a LEMP (Linux, (E)Nginx, MySQL/MariaDB, PHP) server using automated tools such as Packer & Ansible is a great way to get your head around these tools, as well as giving you a reproducible image that you can store in your image repository.

Packer, from HashiCorp, is another great tool: you select a builder and a provisioner, and it pushes the built image to the destination of your choice. What a great summary in one sentence, eh?

To follow this guide, you may wish to view the code here

Firstly, make sure you have Packer installed:

brew install packer

The next step is to create the template.json file, which tells Packer what to use as the builder and what to use as the provisioner.

 "builders": [
    {
      "type": "openstack",
      "image_name": "centos_lamp_php7",
      "source_image": "0f1785b3-33c3-451e-92ce-13a35d991d60",
      "flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
      "ssh_keypair_name": "ukcloudos",
      "ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
      "use_floating_ip": true,
      "floating_ip_pool": "internet",
      "ssh_username": "centos",
      "ssh_pty" : true
    }
  ], 

This is the element of Packer which tells it to use the OpenStack builder. The image_name is the name you would like the end image to be called, whereas the source_image is the image ID of the base image.
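Before wiring in real values, it can help to sanity-check the template's structure. A minimal skeleton along these lines (the REPLACE-WITH-* values are placeholders, not real image or flavor UUIDs) can be validated as JSON before Packer ever runs:

```shell
# Write a skeleton template.json; the REPLACE-WITH-* values are placeholders,
# not real OpenStack UUIDs.
cat > template.json <<'EOF'
{
  "builders": [
    {
      "type": "openstack",
      "image_name": "centos_lamp_php7",
      "source_image": "REPLACE-WITH-BASE-IMAGE-ID",
      "flavor": "REPLACE-WITH-FLAVOR-ID",
      "ssh_username": "centos"
    }
  ],
  "provisioners": []
}
EOF

# json.tool exits non-zero on malformed JSON, catching stray commas early.
python3 -m json.tool template.json > /dev/null && echo "valid JSON"
```

Once the real IDs are in place, `packer validate template.json` goes one step further and checks the template against Packer's own schema.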

In terms of provisioning we have the following block:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo rpm -iUvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm",
        "sudo yum -y update",
        "sudo yum -y install ansible",
        "ansible --version"
      ]
    },{
      "type": "ansible-local",
      "playbook_file": "./ansible/playbook.yml",
      "role_paths": [
          "./ansible/roles/init",
          "./ansible/roles/server",
          "./ansible/roles/mongodb",
          "./ansible/roles/php7",
          "./ansible/roles/nginx",
          "./ansible/roles/supervisord",
          "./ansible/roles/redis"
      ],
      "group_vars": "./ansible/common/group_vars"
    },{
      "type": "shell",
      "inline": [
        "cd /srv && sudo chown -R nginx:nginx .",
        "sudo curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/bin --filename=composer"
      ]
    }
  ]

The first provisioner is just a shell command to install Ansible. We need this on the server so that we can then apply the ansible-local provisioner.

Let’s take a look at the Playbooks we have (ansible/playbook.yml).

---
- hosts: all
  sudo: true
  vars_files:
    - "group_vars/settings.yml"
  roles:
    - init
    - server
    - php7
    - mongodb
    - nginx
    - supervisord
    - redis

In the settings.yml file I’ve placed a list of PHP & PECL packages that we’d like on our LEMP server:

---
php:
    packages: ["php", "php-fpm", "php-common", "php-mbstring", "php-mcrypt", "php-devel", "php-xml","php-mysqlnd", "php-pdo", "php-opcache", "php-bcmath", "php-pear"]
    pecl_packages: ["php-pecl-memcached", "php-pecl-redis", "php-pecl-zip", "php-pecl-xdebug"]

The Ansible provisioner then works through each role path sequentially. Each role has a subdirectory called tasks, which contains a main.yml with the instructions to execute.
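That layout can be scaffolded in one go; the role names below mirror the role_paths list above, and the empty main.yml files are just placeholders waiting for tasks:

```shell
# Scaffold the directory layout the ansible-local provisioner expects:
# one tasks/main.yml per role, plus templates/ for any .tpl files.
for role in init server mongodb php7 nginx supervisord redis; do
  mkdir -p "ansible/roles/${role}/tasks" "ansible/roles/${role}/templates"
  # An empty YAML list keeps ansible-playbook happy until tasks are added.
  printf -- '---\n[]\n' > "ansible/roles/${role}/tasks/main.yml"
done
mkdir -p ansible/common/group_vars
find ansible/roles -name main.yml
```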

As you can see, the first is init, with the following instructions:

---
- name: Install Remi Repo
  yum: name=http://rpms.famillecollet.com/enterprise/remi-release-7.rpm

- name: Enable Remi Repo
  shell: yum-config-manager --enable remi-php70

Remi is my favourite repository when using CentOS 7; it has all the latest packages and can always be trusted to work out of the box, unlike some others.

We then have the server role, which installs base packages such as wget and vim:

---
- name: Install System Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items:
    - git
    - wget
    - vim
    - sudo
    - openssl-devel

- name: Configure the timezone
  sudo: yes
  template: src=timezone.tpl dest=/etc/timezone

- name: Allow root to not require password to perform commands
  sudo: yes
  template: src=mysudoers.tpl dest=/etc/sudoers.d/mysudoers

- name: install the 'Development tools' package group
  yum: name="@Development tools" state=present

You can see all the other roles by taking a look at the source code, but the interesting one is the one that installs the PHP stuff:

- name: Install PHP Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items: '{{php.packages}}'

- name: Install PHP-Pecl Packages
  sudo: yes
  yum: pkg={{ item }} state=latest
  with_items: '{{php.pecl_packages}}'

# Add templates
- name: Change to custom php.ini (dev)
  sudo: yes
  template: src=php-dev.ini.tpl dest=/etc/php-dev.ini

- name: Change to custom php.ini (prod)
  sudo: yes
  template: src=php-prod.ini.tpl dest=/etc/php-prod.ini

- name: Change to custom opcache config
  sudo: yes
  template: src=10-opcache.ini.tpl dest=/etc/php.d/10-opcache.ini

- name: Change to custom php-fpm config
  sudo: yes
  template: src=www.conf.tpl dest=/etc/php-fpm.d/www.conf

You’ll notice the PHP role has some templates. This is where you can define settings which you may or may not want to manage later with other tools such as Puppet or Consul. It also has a template that configures php-fpm to work seamlessly with the Nginx config:

server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;

    root  /srv/web;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~*  \.(jpg|jpeg|png|gif|ico|css|js|woff)$ {
       expires 365d;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 180;
        include fastcgi_params;
    }
}

As you can see, it has everything out of the box that is needed for a working LEMP server.

To run packer, all we need to do is head to the root of the project and type:

$ packer build ./packer/template.json

Packer will run through the motions: grab the CentOS base image from OpenStack, install Ansible, run our playbooks, and save the provisioned image in Glance under our chosen image name.

You can then head over to your GUI and view the newly created image and use it as you please! Alternatively, you can see the images with this OpenStack CLI command:

openstack image list
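If you want to capture the new image's ID for later use (for example, to feed into Terraform), the table output can be filtered with awk. The sample output below is illustrative; pipe the real `openstack image list` through the same filter:

```shell
# Extract an image ID from `openstack image list` style table output.
# $list_output is a canned sample; replace it with the real command's output.
list_output='+--------------------------------------+------------------+--------+
| ID                                   | Name             | Status |
+--------------------------------------+------------------+--------+
| 912e4218-963a-4580-a27d-72e5e195c4f5 | centos_lamp_php7 | active |
+--------------------------------------+------------------+--------+'

image_id=$(printf '%s\n' "$list_output" \
  | awk -F'|' '$3 ~ /centos_lamp_php7/ { gsub(/ /, "", $2); print $2 }')
echo "$image_id"
```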

The true beauty of this is that you have an automated way to provision your servers. You can place this code in Jenkins and make nightly builds or you can run it manually whenever you need.

If you have any questions about using Packer or Ansible with Cloud Native Infrastructure just send me a tweet: @bobbyjason

Share

How to create an externally facing server using Terraform on Cloud Native Infrastructure Powered by OpenStack – plus a bonus!

If you’ve been following my previous posts and videos, you may have already seen how to use the Cloud Native Infrastructure GUI to create a simple externally facing server, or you may have already read my post on using the OpenStack CLI.

Let’s be honest though, any self-respecting DevOps guy doesn’t really want to be creating procedural shell scripts to create infrastructure and we certainly don’t want to be using the OpenStack GUI.

Diving straight into Terraform (another one of HashiCorp’s awesome tools), we can easily set up the basic environment in which we’d like to spin up a new server.

Firstly, we’ll need to ensure our environment variables are set. I have these in my ~/.zshrc (or ~/.bashrc) file:

export OS_AUTH_URL=https://cor00005.cni.ukcloud.com:13000/v3
export OS_PROJECT_ID=c5223fac91064ac38460171c14eb47ef
export OS_PROJECT_NAME="UKCloud Bobby Demo"
export OS_USER_DOMAIN_NAME="Default"
export OS_DOMAIN_NAME="Default"

export OS_USERNAME="myusername@domain.com"
export OS_PASSWORD=***********
export OS_REGION_NAME="regionOne"

This then means that in our Terraform provider declaration, all we need is:

provider "openstack" {
}

No need to pass anything in, as it’s all read from the environment variables, which is pretty handy!
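Because the provider reads credentials silently, a missing variable only surfaces as an authentication error mid-run. A small guard (variable list taken from the exports above) can fail fast instead:

```shell
# Report any unset OpenStack credential variables before running terraform.
check_os_env() {
  missing=0
  for var in OS_AUTH_URL OS_PROJECT_ID OS_PROJECT_NAME OS_USERNAME OS_PASSWORD OS_REGION_NAME; do
    # Indirect lookup via eval keeps this POSIX-sh compatible.
    if [ -z "$(eval "printf '%s' \"\${$var:-}\"")" ]; then
      echo "missing: $var"
      missing=1
    fi
  done
  return $missing
}

# Typical use before a run: check_os_env && terraform plan
check_os_env || echo "set the missing variables first"
```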

Setting up the network is fairly trivial:

resource "openstack_networking_network_v2" "example_network1" {
  name           = "example_network_1"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example_subnet1" {
  name            = "example_subnet_1"
  network_id      = "${openstack_networking_network_v2.example_network1.id}"
  cidr            = "10.10.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

Here we’re creating a basic subnet of 10.10.0.0/24 and setting the DNS nameservers. Without setting them, your server won’t be able to make any DNS lookups, which would be pretty rubbish!

The next steps are to create the router and the router interface that connect the network to the all-important external network, the Internet:

resource "openstack_networking_router_v2" "example_router_1" {
  name             = "example_router1"
  external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}

resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
  router_id = "${openstack_networking_router_v2.example_router_1.id}"
  subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}

We’re going to want our server to have a static/elastic/floating IP, so let’s grab one from the pool for later use:

resource "openstack_networking_floatingip_v2" "example_floatip_1" {
  pool = "internet"
}

The last part is to set the firewall rules so that we can connect to our instance once we’ve created it:

resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
  name = "example_secgroup_1"
  description = "an example security group"
  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
  rule {
    from_port   = 80
    to_port     = 80
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}

Basically, all we’re doing is allowing ports 22 and 80; that will do to get us started quickly!

We now have all the fundamentals in place to create our instance. To fire up the basic CentOS stock image from within OpenStack, you could use the following:

resource "openstack_compute_instance_v2" "example_instance" {
  name            = "example_instance"

  # centos7
  image_id        = "0f1785b3-33c3-451e-92ce-13a35d991d60"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_1.address}"
  }
}

Terraform will use everything we’ve created, from the key_pair & the security groups through to the network and floating IP; so go on, let’s try it!

terraform apply

Connecting to your server is then a piece of cake:

ssh -i ~/.ssh/yourkey centos@123.45.67.8

Bonus for Reading this far

However, we can go one better than that – firing up an automated server might be cool, but it’s better if it’s serving our app straight out of the box too, right?

What if we could give Terraform the git repo for our software, and have it deploy the app as soon as the server starts up? We can do that very easily with 2 files: cloudinit.sh and variables.tf.

Let’s take a look at variables.tf first:

variable "clone_location" {
    default = "/srv"
}

variable "git_repo" {
    default = "https://github.com/bobbydeveaux/dummyphp.git"
}

All we’re doing is passing in the location of our app’s GitHub repository, and telling it where to clone it to. We can then make use of these variables in the cloudinit.sh file:

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo chown -R centos:centos ${clone_location}
git clone ${git_repo} ${clone_location}
cd ${clone_location}

export COMPOSER_HOME=${clone_location}
composer install

sudo APPLICATION_ENV=${application_env} /usr/bin/supervisord -n -c /etc/supervisord.conf

For this to work, we also have to use the LEMP image we built earlier. You can of course use your own, providing it’s capable of serving PHP in this case. At the same time, we need to hand the cloudinit template to Terraform:

data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"
    }
}

…and add this to the instance creation:

user_data =  "${data.template_file.cloudinit.rendered}"

…leaving our final instance creation looking like this:

resource "openstack_compute_instance_v2" "example_instance" {
  name            = "example_instance"

  #coreos
  #image_id        = "8e892f81-2197-464a-9b6b-1a5045735f5d"

  # centos7
  #image_id        = "0f1785b3-33c3-451e-92ce-13a35d991d60"

  # docker nginx
  #image_id        = "e24c8d96-4520-4554-b30a-14fec3605bc2"

  # centos7 lamp packer build
  image_id = "912e4218-963a-4580-a27d-72e5e195c4f5"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_1.address}"
  }
}

If you now apply the Terraform changes, your server will boot up and deploy your application. Head to your floating IP and you will see the application being served. It really is as easy as that, and not much more explanation is necessary!

To view the full source code for this example, you can check out the github repo.

As always, if you have any questions regarding this tutorial, or you need some pointers, please just tweet me!

Share

What Is Docker Swarm And How To Use It To Scale A Simple PHP App Along With Terraform & Packer on Cloud Native Infrastructure Powered by OpenStack

Note: This is based on Docker 1.12: at the time of writing, whilst Docker 1.13 has been released, it is not yet in the CoreOS builds. As soon as 1.13 is available, I will append a footnote to this blogpost and edit this note!

As more and more people jump on the Docker bandwagon, more and more people are wondering just exactly how we scale this thing. Some will have heard of Docker-Compose, some will have heard of Docker Swarm, and then there’s some folks out there with their Kubernetes and Mesos clusters.

Docker Swarm became native to Docker in v1.12 and makes container orchestration super simple. Not only that, but each node is accessible via its hostname thanks to the built-in DNS and service discovery. With its overlay network and built-in routing mesh, all the nodes can accept connections on the published ports for any of the services running in the Swarm. This basically gives you access to multiple nodes while letting you treat them as one.

Just to top it off, Docker Swarm has built-in load balancing. Send a request to any of the nodes and it will pass the request, round-robin, to the containers running the requested service. Simply amazing, and I’m going to show you how to get started with this great technology.

For my example, I’ve chosen a PHP application (cue the flames), it’s a great way to show how a real-world app may be scaled using Terraform, Packer & Docker Swarm on Openstack.

There are a few parts that I will be covering:

  1. Creating base images
  2. Using Docker-Compose in Development
  3. Creating the infrastructure (Terraform)
  4. Creating a base image (Packer)
  5. Deploying
  6. Scaling

1. Creating Base Images

You may already be familiar with keeping provisioned AMIs/images up in the cloud that contain most of the services you need. That’s essentially all a base/foundation image is. The reality is that every time you push your code, you don’t want to have to wait for the stock CentOS/Ubuntu image to be re-provisioned. Base images allow you to create a basic setup that you can use not just on one project, but on multiple projects.

What I’ve done is create a repository called Docker Images, which currently has just 2 services: Nginx & PHP-FPM. Inside is a little build script which iterates over each container, builds it, and pushes it to Docker Hub.

Your foundation images can contain whatever you want. Mine contain some simple nginx/php-fpm configuration. I have configured Supervisord to ensure that php-fpm is always running. Additionally, as I place both dev and prod versions of php.ini on the container, Supervisord accepts an environment parameter so the container can be fired up in dev mode or production-ready.

This is the build.sh script within the Docker Images repo:

build.sh:

#!/bin/bash

VERSION=1
CONTAINER=$1
BUILD_NUMBER=$2

docker build ./$CONTAINER -t bobbydvo/ukc_$CONTAINER:latest
docker tag bobbydvo/ukc_$CONTAINER:latest  bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER

docker push bobbydvo/ukc_$CONTAINER:latest
docker push bobbydvo/ukc_$CONTAINER:$VERSION.$BUILD_NUMBER
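The tagging convention here (a moving latest plus an immutable VERSION.BUILD_NUMBER) is worth making explicit. A small helper like this (namespace copied from build.sh above, the helper itself is my own illustration) shows exactly which tags each build publishes:

```shell
# Compute the two tags build.sh publishes for a container and build number.
VERSION=1
image_tags() {
  container=$1
  build_number=$2
  echo "bobbydvo/ukc_${container}:latest"
  echo "bobbydvo/ukc_${container}:${VERSION}.${build_number}"
}

image_tags nginx 42
```

The latest tag always points at the newest successful build, while the numbered tag pins a deployment to an exact image.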

A simple Jenkins job with parameterised builds has been configured to pass the correct arguments to the script:

echo $BUILD_NUMBER
docker -v
whoami
sudo docker login -u bobbydvo -p ************
sudo ./build.sh $CONTAINER $BUILD_NUMBER

Note: You will have to ensure that the Jenkins user is allowed sudo access

You can find the repository here: https://github.com/bobbydeveaux/docker-images

Each time the job is run, it will place new versions of each container here:

https://hub.docker.com/r/bobbydvo

Some may argue that, due to Docker’s layer cache, you can skip the base image step. However, I find it a great way to keep jobs isolated, with the added benefit of being able to re-use the containers for other projects. It also gives great visibility when a container build fails simply because an external package has been updated: it won’t update your ‘latest’ tag, and so won’t halt your deployments! Google have a great guide on building Foundation Images.

We now need to test our 2 images/containers with our PHP app.

2. Let’s set up our dev environment with the dummyphp app

This is my repository with a dummy PHP app: https://github.com/bobbydeveaux/dummyphp

If you’re familiar with PHP, you will notice that this is a Slim 3 application using Composer for dependency management. You’ll also find a file, ‘docker-compose.yml’ – this will coordinate Docker to use both of our containers:

docker-compose.yml

version: "2"
services:
  php-fpm:
    tty: true
    build: ./
    image: bobbydvo/dummyapp_php-fpm:latest
    ports:
      - "9000:9000"
    environment:
      - APPLICATION_ENV=dev
  web:
    tty: true
    image: bobbydvo/ukc_nginx:latest
    ports:
      - "80:80"
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80

The php-fpm container will use the Dockerfile in the root of the application to build the image, copying the files onto the Docker image itself and saving the result locally as a new container image, rather than using the base image directly. As it happens, the Nginx container doesn’t need any modification, as only the PHP app changes when we add code. Of course, you can change this to suit your needs if necessary.

Running the application is as simple as typing:

docker-compose up

You can now head over to http://localhost and test the application; it will be lightning fast. However, this means that the code on the container is what was copied over when docker-compose up was executed, so any changes to local code will not be reflected. There is a solution to this, and it’s in the form of ‘dev.yml’. This extends the docker-compose.yml file to mount the local volume onto the web root.
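The dev.yml file itself isn't shown in the post; a minimal sketch could look like the following, assuming the php-fpm service name from docker-compose.yml and a /srv/web web root matching the nginx config from earlier:

```shell
# A sketch of dev.yml: mount the working copy over the code baked into the
# image, so local edits are reflected immediately. The /srv/web path is an
# assumption based on the nginx root used earlier.
cat > dev.yml <<'EOF'
version: "2"
services:
  php-fpm:
    volumes:
      - ./:/srv/web
EOF
cat dev.yml
```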

docker-compose -f docker-compose.yml -f dev.yml up

Now you can head to http://localhost, make some changes, and refresh, and you will see that it’s just as though you’re coding locally. Hurrah!

Note: there is a known bug with Docker for Mac, which means that the mounted volume has a bit of latency which can affect load times unless you make use of OPCache in dev mode. However, this is being worked on.

So now what? We have some shiny Docker containers that are working brilliantly together for our PHP app. Great for development, but what about the real world?

Our next topic will cover how to use Terraform to create a number of servers: 3 Docker Managers as well as a number of Docker Slave nodes.

Unfortunately, the provided CoreOS image (great for Docker) doesn’t include Docker Swarm, as this is still in the Beta channel. So first we’ll have to create a new Docker Swarm enabled image using Packer; let’s go ahead and do that!

3. Using Packer to create a new Image in Cloud Native Infrastructure

Packer is another tool from HashiCorp, comprised of a set of builders and provisioners. It supports many builders, such as AWS (AMI), Azure, DigitalOcean, Docker, Google, VirtualBox, VMware, and of course the one we need: OpenStack. It supports several others too, which is great if you need them!

In terms of provisioning, you can use most of the popular tools such as Ansible, Puppet or Chef, as well as PowerShell and standard shell scripts.

For us, all we need to do is take the stock CoreOS image and tell it to use the Beta channel, which includes Docker Swarm. This can be done by modifying this file:

/etc/coreos/update.conf

…with this data:

GROUP=beta

At the time of writing, Docker Swarm doesn’t work with docker-compose.yml files; Docker 1.13 will enable this feature. Once it’s made its way into the CoreOS builds I’ll be sure to amend this article. For now, I’ll show you how to install Docker Compose onto CoreOS whilst we’re provisioning, as it’s a great tool for testing.

As mentioned, we are going to use the OpenStack builder, so here is our ‘builder’ entry:

"builders": [
    {
      "type": "openstack",
      "image_name": "CoreOS-Docker-Beta-1-12",
      "source_image": "8e892f81-2197-464a-9b6b-1a5045735f5d",
      "flavor": "c46be6d1-979d-4489-8ffe-e421a3c83fdd",
      "ssh_keypair_name": "ukcloudos",
      "ssh_private_key_file": "/Users/bobby/.ssh/ukcloudos",
      "use_floating_ip": true,
      "floating_ip_pool": "internet",
      "ssh_username": "core",
      "ssh_pty" : true
    }
  ],

The type is required and must state the builder type you’re using, whereas image_name should be set to whatever you want your new image to be called. source_image is the original image that is already in Glance. The builder also needs a flavor; I’m choosing a small instance as it’s only used for provisioning.

Note: Ensure that you are using an existing keypair name that is in your OpenStack project.

So, now that we have a builder, along with connectivity, let’s provision it:

"provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo sh -c 'echo GROUP=beta > /etc/coreos/update.conf'",
        "sudo systemctl restart update-engine",
        "sudo update_engine_client -update",
        "sudo sh -c 'mkdir /opt/'",
        "sudo sh -c 'mkdir /opt/bin'",
        "sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /opt/bin/docker-compose'",
        "sudo sh -c 'chmod +x /opt/bin/docker-compose'"
      ]
    },{
      "type": "file",
      "source": "/Users/bobby/.ssh/ukcloudos",
      "destination": "/home/core/.ssh/key.pem"
    }
  ]

Given the simplicity of what we’re doing, I’m just using shell commands to switch CoreOS to the beta channel, which in turn installs the latest beta build of Docker, along with installing Docker Compose.

You’ll also notice that we’re copying over an ssh key. This is an important piece of the puzzle later on when we need multiple servers to be able to communicate with each other.

All you need to do to kick off this build is:

$ packer build ./packer/template.json

If you now view your images, either using the command line or the control panel, you will see your new image is ready to be consumed. Feel free to create a test instance using this image and type the following command:

docker version

You will see you are on at least 1.12.1, which includes Swarm. If you’d like to verify Docker Swarm is working, you can type the following command:

docker swarm init

Hopefully, everything worked perfectly for you. If not, feel free to view the full source code of this example here: https://github.com/UKCloud/openstack-packer/tree/docker-beta

4. Using Terraform to create your Infrastructure

Yet another tool from HashiCorp, and an amazing one. Terraform allows infrastructure to be written as code (IaC), and better still, applying it is idempotent: no matter how many times you execute it, you’ll get the same end result. Older tools tend to be more procedural. Take a shell script, for example: if you ask it to create 5 servers, and run it 5 times, you’ll end up with 25 servers. Terraform is clever, as it maintains state. If you ask it to create 5 servers, it will create 5. Run it again, and it will know you already have 5. Ask it to create 8, and it will calculate that you already have 5 and simply add an extra 3. This flexibility is amazing and can be used for magnificent things.
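The 5-then-8 example boils down to simple reconciliation: Terraform diffs the desired count against what its state file already tracks, and plans only the difference. As a toy illustration of that arithmetic:

```shell
# Toy model of Terraform's plan step: compare desired resource count with
# the count already recorded in state, and report the delta to create.
plan_delta() {
  desired=$1
  in_state=$2
  echo $((desired - in_state))
}

plan_delta 5 0   # first run: plan 5 new servers
plan_delta 5 5   # run again: nothing to add
plan_delta 8 5   # bump to 8: plan only 3 more
```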

All that being said, this is not a Terraform tutorial. This is a tutorial on how to make use of Terraform to spin up some Docker Managers and some Docker Slaves so that we can deploy our Dummy PHP App. It’s probably best to first take a look at the full main.tf file:

provider "openstack" {
}

resource "openstack_compute_keypair_v2" "test-keypair" {
  name = "ukcloudos"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDggzO/9DNQzp8aPdvx0W+IqlbmbhpIgv1r2my1xOsVthFgx4HLiTB/2XEuEqVpwh5F+20fDn5Juox9jZAz+z3i5EI63ojpIMCKFDqDfFlIl54QPZVJUJVyQOe7Jzl/pmDJRU7vxTbdtZNYWSwjMjfZmQjGQhDd5mM9spQf3me5HsYY9Tko1vxGXcPE1WUyV60DrqSSBkrkSyf+mILXq43K1GszVj3JuYHCY/BBrupkhA126p6EoPtNKld4EyEJzDDNvK97+oyC38XKEg6lBgAngj4FnmG8cjLRXvbPU4gQNCqmrVUMljr3gYga+ZiPoj81NOuzauYNcbt6j+R1/B9qlze7VgNPYVv3ERzkboBdIx0WxwyTXg+3BHhY+E7zY1jLnO5Bdb40wDwl7AlUsOOriHL6fSBYuz2hRIdp0+upG6CNQnvg8pXNaNXNVPcNFPGLD1PuCJiG6x84+tLC2uAb0GWxAEVtWEMD1sBCp066dHwsivmQrYRxsYRHnlorlvdMSiJxpRo/peyiqEJ9Sa6OPl2A5JeokP1GxXJ6hyOoBn4h5WSuUVL6bS4J2ta7nA0fK6L6YreHV+dMdPZCZzSG0nV5qvSaAkdL7KuM4eeOvwcXAYMwZJPj+dCnGzwdhUIp/FtRy62mSHv5/kr+lVznWv2b2yl8L95SKAdfeOiFiQ== opensource@ukcloud.com"
}

resource "openstack_networking_network_v2" "example_network1" {
  name           = "example_network_1"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "example_subnet1" {
  name            = "example_subnet_1"
  network_id      = "${openstack_networking_network_v2.example_network1.id}"
  cidr            = "10.10.0.0/24"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "8.8.4.4"]
}

resource "openstack_compute_secgroup_v2" "example_secgroup_1" {
  name = "example_secgroup_1"
  description = "an example security group"
  rule {
    ip_protocol = "tcp"
    from_port   = 22
    to_port     = 22
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "tcp"
    from_port   = 80
    to_port     = 80
    cidr        = "0.0.0.0/0"
  }

  rule {
    ip_protocol = "icmp"
    from_port   = "-1"
    to_port     = "-1"
    self        = true
  }
  rule {
    ip_protocol = "tcp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
  rule {
    ip_protocol = "udp"
    from_port   = "1"
    to_port     = "65535"
    self        = true
  }
}

resource "openstack_networking_router_v2" "example_router_1" {
  name             = "example_router1"
  external_gateway = "893a5b59-081a-4e3a-ac50-1e54e262c3fa"
}

resource "openstack_networking_router_interface_v2" "example_router_interface_1" {
  router_id = "${openstack_networking_router_v2.example_router_1.id}"
  subnet_id = "${openstack_networking_subnet_v2.example_subnet1.id}"
}

resource "openstack_networking_floatingip_v2" "example_floatip_manager" {
  pool = "internet"
}

resource "openstack_networking_floatingip_v2" "example_floatip_slaves" {
  pool = "internet"
}

data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

data "template_file" "managerinit" {
    template = "${file("managerinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
    }
}

data "template_file" "slaveinit" {
    template = "${file("slaveinit.sh")}"
    vars {
        swarm_manager = "${openstack_compute_instance_v2.swarm_manager.access_ip_v4}"
        node_count = "${var.swarm_node_count + 3}"
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring up the Swarm!
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}

Alternatively, you can view the full example on GitHub: https://github.com/UKCloud/openstack-terraform/tree/docker-swarm

Creating the first Docker Manager node

Assuming you’re all good with the basic setup of a network, security groups, floating IP addresses & routing, we’ll head straight to the creation of our Docker Swarm.

To do this, we’re going to create one Docker Manager, which will run the ‘docker swarm init’ command.

main.tf

...
data "template_file" "cloudinit" {
    template = "${file("cloudinit.sh")}"
    vars {
        application_env = "dev"
        git_repo = "${var.git_repo}"
        clone_location = "${var.clone_location}"   
    }
}

resource "openstack_compute_instance_v2" "swarm_manager" {
  name            = "swarm_manager_0"
  count = 1

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.cloudinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
    floating_ip = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring up the Swarm!
      "echo 'IP.1 = ${self.network.0.fixed_ip_v4}' > internalip",
      "docker swarm init --advertise-addr ${self.network.0.fixed_ip_v4}",
      "sudo docker swarm join-token --quiet worker > /home/core/worker-token",
      "sudo docker swarm join-token --quiet manager > /home/core/manager-token"
    ]
    connection {
        user = "core"
        host = "${openstack_networking_floatingip_v2.example_floatip_manager.address}"
    }
  }
}
...

So, what does this do? Mostly it’s self-explanatory: we’re bringing up an instance from the new CoreOS image and running a few shell commands. Amongst those is the ‘docker swarm init’ command, which advertises the swarm on the IP address allocated to the machine.

The next two commands are the really important ones though; these are the commands which grab the ‘join tokens’ that every other node will need in order to join the swarm. For now, we’re saving the tokens to the home directory, so that later nodes can SSH to this server and grab them (told you there was a reason we needed the SSH key added to our CoreOS image!).

With just this one instance, we have an active swarm, but one that doesn’t do a great deal. The next thing we need to do is create the services, and for that we’re using a template file to make use of the cloud-init functionality within OpenStack. The cloud-init file looks like this:

cloudinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

docker pull bobbydvo/ukc_nginx:latest
docker pull bobbydvo/ukc_php-fpm:latest
docker network create --driver overlay mynet
docker service create --update-delay 10s --replicas 1 -p 80:80 --network mynet --name web bobbydvo/ukc_nginx:latest
docker service create --update-delay 10s --replicas 1 -p 9000:9000  --network mynet --name php-fpm bobbydvo/ukc_php-fpm:latest

# The above services should be created by the DAB bundle..
# ..but Docker 1.13 is changing the way bundles & stacks work, so parking this for now.

What this does is tell the Docker Manager to fire off these commands when it first boots up.

If you visit the external IP address at this point, you should see the text “Welcome to your php-fpm Docker container.”. This is because our application has not yet been deployed; we’ll get to that in a bit.

First, we need to create some more Docker Managers, some Docker Slaves, and get them all to join the Swarm!

Note: We’re initially deploying the base images, as we’ve not yet configured our Jenkins job to deploy the application. When we get that far, you may want to retrospectively update this cloud-init file with the image names of the built application, but it’s not essential. Don’t worry about it!

Adding more nodes to the Swarm

Adding more Docker Managers is now fairly simple, but we can’t just increase the count of the first Docker Manager, as that one has special commands to initiate the Swarm. The second resource below will allow us to configure as many additional managers as we desire. Once up and running, these ‘secondary masters’ will be no less important than the first Manager, and we will have 3 identical instances with automatic failover.

Docker Swarm manager nodes maintain the swarm state using the Raft consensus algorithm, so having at least 3 managers is important, whilst 5 are strongly recommended in production. This gives Docker Swarm the ability to keep functioning whilst some manager nodes are out of service for whatever reason.
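The maths behind those numbers is majority quorum: the managers can only keep electing a leader and accepting changes while more than half of them are reachable, so N managers tolerate floor((N-1)/2) failures. A quick sketch of that arithmetic:

```shell
# Manager fault tolerance: floor((N - 1) / 2) failed managers survivable
for n in 1 3 4 5 7; do
  echo "$n manager(s) -> survives $(( (n - 1) / 2 )) failure(s)"
done
```

Note that 4 managers survive no more failures than 3, which is why odd numbers are the convention.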

resource "openstack_compute_instance_v2" "swarm_managerx" {
  name            = "swarm_manager_${count.index+1}"
  count           = 2

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "7d73f524-f9a1-4e80-bedf-57216aae8038"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data       =  "${data.template_file.managerinit.rendered}"

  network {
    name          = "${openstack_networking_network_v2.example_network1.name}"
  }
}

The important part now is to instruct each ‘secondary master’ to join the swarm as soon as it has booted up. We can do this with another cloud-init script. For annotation purposes, I have called this ‘managerinit.sh’:

managerinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.

sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/manager-token /home/core/manager-token
sudo docker swarm join --token $(cat /home/core/manager-token) ${swarm_manager}

Due to this being the first time the server will have connected, we’re passing a few options to prevent the scp command from prompting for any input. Ultimately though, we’re connecting to the ‘primary master’ to grab the join tokens that we mentioned earlier in the article. The join tokens are the only way we can ensure we join the correct swarm. The only parameter we pass into the template is the IP address of the first Swarm Manager.

If you were to execute terraform as-is, without any slaves, and then ssh’d to the floating IP, you could run the following command:

docker node ls

And you will see a list of the managers, one of which will show as the Leader, whereas the others will show as Reachable.
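For example, with three managers up, the output looks something like this (the IDs and hostnames here are illustrative placeholders, and the `*` marks the node you’re connected to):

```
ID                 HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS
1a2b3c4d5e *       swarm-manager-0   Ready    Active         Leader
6f7g8h9i0j         swarm-manager-1   Ready    Active         Reachable
2k3l4m5n6o         swarm-manager-2   Ready    Active         Reachable
```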

Right now, masters will be able to serve your services in just the same way that slaves will be able to in future. In fact, you could just create a Swarm full of Masters if you like!

Adding Slaves to the Swarm

The code to add more slaves is similar to the masters, only this time the count is coming as an input from the variables.tf file. This is so that we can have as many nodes as we require.

resource "openstack_compute_instance_v2" "swarm_slave" {
  name            = "swarm_slave_${count.index}"
  count           = "${var.swarm_node_count}"

  #coreos-docker-beta
  image_id        = "589c614e-32e5-49ad-aeea-69ebce553d8b"

  flavor_id       = "c46be6d1-979d-4489-8ffe-e421a3c83fdd"
  key_pair        = "${openstack_compute_keypair_v2.test-keypair.name}"
  security_groups = ["${openstack_compute_secgroup_v2.example_secgroup_1.name}"]

  user_data =  "${data.template_file.slaveinit.rendered}"

  network {
    name        = "${openstack_networking_network_v2.example_network1.name}"
  }

}

The main difference between the slaves and masters is the cloud-init file. In the file below we’re doing a number of things:

  • Copying the worker ‘join token’ from the master
  • Joining the node into the Docker Swarm
  • Scaling the active services down to a minimum of 3
  • Scaling the active services back up to the number of nodes we require

slaveinit.sh

#!/bin/bash
# Script that will run at first boot via Openstack
# using user_data via cloud-init.


sudo scp -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null -i /home/core/.ssh/key.pem core@${swarm_manager}:/home/core/worker-token /home/core/worker-token
sudo docker swarm join --token $(cat /home/core/worker-token) ${swarm_manager}

# Horrible hack, as Swarm doesn't evenly distribute to new nodes 
# https://github.com/docker/docker/issues/24103
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=3"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=3"

# Scale to the number of instances we should have once the script has finished.
# This means it may scale to 50 even though we only have 10, with 40 still processing.
# Hence the issue above.
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale php-fpm=${node_count}"
ssh -o StrictHostKeyChecking=no -o NoHostAuthenticationForLocalhost=yes -o UserKnownHostsFile=/dev/null  -i /home/core/.ssh/key.pem core@${swarm_manager} "docker service scale web=${node_count}"

Copying the token and joining the swarm is fairly trivial, and very similar to what happens on the manager nodes. What we’re also doing, though, is issuing a command to the Docker Manager instructing it to scale each service across x nodes, i.e. the number of nodes we are scaling to. Without this code, one would have to scale the infrastructure and then scale the Docker services manually. By including the command in our infrastructure-as-code file, we can scale the whole Docker Swarm from a single ‘terraform apply’ command.

Note: As the annotations suggest, the scaling solution here is not so elegant. I will explain more:

Suppose we have 3 Docker Managers and we add 3 Docker Slaves. As the first Docker Slave is created, it will scale the swarm using the ‘docker service scale web=6’ command, as can be seen in the code above. However, the moment the first Docker Slave issues that command we only have 4 nodes, so we have 6 containers running on 4 nodes. Not a big problem, as we’re about to add another 2 Docker Slave nodes. However, when the 2nd and 3rd slave nodes join the swarm, Docker doesn’t allocate any services to them. The only way to allocate services to those nodes is to scale down and back up again, which is precisely what the code above does. Docker is aware of this ‘feature’ and is looking at adding a flag to the ‘docker swarm join’ command to redistribute services.

5. Deploying the Application

We now have 3 Docker Managers and 3 Docker Slaves all running in an active Docker Swarm. We can scale up, and we can scale down. This is simply awesome, but not so fun if we don’t have our app deployed to test this functionality.

To deploy the app we’re going to set up a Jenkins job which will be fired either manually or when a commit has been made.

The Jenkins job should be configured with the commands below. However, if you don’t want to create a Jenkins job, you can always just throw them into a shell script and modify the variables.

set -e

DUMMY_VERSION=$BUILD_VERSION
NGINX_VERSION='latest'


sudo docker-compose build 

sudo docker run -i bobbydvo/dummyapp_php-fpm /srv/vendor/bin/phpunit -c /srv/app/phpunit.xml


# tag & push only if all the above succeeded (set -e)
sudo docker tag bobbydvo/dummyapp_php-fpm:latest  bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION
sudo docker push bobbydvo/dummyapp_php-fpm:latest

ssh core@51.179.219.14 "docker service update --image bobbydvo/dummyapp_php-fpm:$DUMMY_VERSION php-fpm"
ssh core@51.179.219.14 "docker service update --image bobbydvo/ukc_nginx:$NGINX_VERSION web"

Note: You will have to ensure that the jenkins user is allowed sudo access

What does this job do then? We’re telling docker-compose to execute the docker-compose.yml file that we included in step 2 for our dev environment. This instructs Docker to build a new image with the latest code, and we then run our unit tests against the newly built container. As we’re using the ‘set -e’ instruction, we only continue to the next step if the previous step was successful. With that in mind, if our unit tests pass, we tag the latest image and push it to Docker Hub.
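The behaviour ‘set -e’ gives us is worth seeing in isolation: execution stops at the first command that exits non-zero, so a failed build or a failed test run means nothing gets tagged or pushed. A minimal sketch:

```shell
# set -e aborts a script at the first failing command
sh -c 'set -e; echo build; false; echo push' || echo "stopped before push"
```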

The final step, is to connect to the Docker Manager and update the service with the latest container. When creating the service, we specified a rolling update of 10s, so as soon as this command is issued, it will take approximately 1 minute for all our nodes to be updated.
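That one-minute estimate is just replicas times delay: the scheduler replaces one task at a time, waiting the 10 seconds set via --update-delay between each. With six replicas (illustrative numbers, not taken from the article):

```shell
# Back-of-envelope rolling update duration
replicas=6   # e.g. one task per node on 3 managers + 3 slaves
delay=10     # seconds, from --update-delay 10s
echo "~$(( replicas * delay ))s to roll the whole service"
```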

You can now visit the floating IP that you’ve allocated to the Docker Manager and you will see that Docker automatically load balances the traffic amongst all the nodes. Simply amazing!

6. Scaling the Application

The final step, assuming your application is struggling to cope with the load, is to add more nodes. You can modify the value in the variables.tf:

variables.tf

variable "swarm_node_count" {
    default = 10
}

And apply!

terraform apply

It really is that simple. You can scale to 10 nodes, 50 nodes, 1000 nodes, and your application will be automatically load balanced via Docker Swarm. What’s better, you know that each and every node is an exact replica, provisioned in exactly the same way, running the exact same code.

I hope you’ve been able to follow this tutorial, along with understanding all the code examples. However, if you have any comments or questions, please leave them below or tweet me: @bobbyjason.

Many thanks!

Share

How To Create An Externally-Facing Server On The Cloud Native Infrastructure – Powered By OpenStack, Using The OpenStack CLI

In my previous video, I showed how to use the OpenStack GUI to create an instance (or a server to you and me). The components included creating a Network, a Subnet, a Router, an Interface, an SSH keypair, as well as a floating IP to use with the created Instance.

In this article, I’m going to detail how you can get started with the OpenStack set of CLIs to create all the necessary components required for launching an Instance in an OpenStack environment.

Installing The Tools

First things first, you need to install the command line tools. You can do this pretty easily using pip:

$ pip install python-openstackclient
$ pip install python-novaclient
$ pip install python-neutronclient

If you’re unfamiliar with the different tools (i.e. nova, neutron, cinder), keep your eye out as I’ll be covering that topic soon.

Setting Environment Variables

The OS CLI tools require that you have some environment variables set. Make sure you grab the right settings from your control panel:

export OS_AUTH_URL=https://cor00005.cni.ukcloud.com:13000/v2.0
export OS_PASSWORD="password"
export OS_PROJECT_ID=123123123123123
export OS_PROJECT_NAME=ProjectName
export OS_USERNAME=username@domain.com

I recommend placing these in either your .bashrc or .zshrc file to ensure they’re set every time you open your terminal.

Creating An Instance

If you’ve seen the OpenStack GUI Video, you’ll be aware that it’s not possible to create an instance straight out of the box. First we have to create the other resources that the instance will make use of.

Creating An SSH key-pair.

It’s imperative to place your public key on the instance you create, otherwise you’ll never have access to it, making our efforts rather fruitless.

To create a key-pair within OpenStack, and keep the private key on your own machine:

$ nova keypair-add exampleKey > ~/.ssh/exampleKey.pem
$ chmod 0600 ~/.ssh/exampleKey.pem
$ ssh-add ~/.ssh/exampleKey.pem

Create A Network.

You’ll need a network so that you can create a gateway and subnet, and allocate IP addresses to your instances.

$ neutron net-create exampleNetwork

Create A Subnet

When creating a subnet, you can use whatever address range you like, but it’s important to specify it in full CIDR notation:

$ neutron subnet-create exampleNetwork 10.10.0.0/24 --name exampleSubnet

Take note of the ID that is shown once it’s created, as we’ll need that in our next step.

aa8ad9ba-0a58-4f80-9f4d-9aaa0cd9307a
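As a side note on sizing, the /24 prefix above leaves 8 host bits, which in classic subnetting terms gives 2^8 - 2 = 254 usable addresses (OpenStack additionally reserves a few of those for the gateway and DHCP):

```shell
# Usable host addresses for a given prefix length, classic subnetting maths
prefix=24
echo "/$prefix -> $(( (1 << (32 - prefix)) - 2 )) usable host addresses"
```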

Create A Router

Routers allow you to connect different networks. We want to connect our new subnet to the router, whilst at the same time allowing the router to be connected to our default ‘internet’ network. This is our basic gateway to the internet, with the subnet added as an interface.

We can find the ‘internet’ network by asking neutron to list the networks:

$ neutron net-list

From this we can grab our internet network id. In our case, it’s:

893a5b59-081a-4e3a-ac50-1e54e262c3fa

So, let’s create the router:

$ neutron router-create exampleRouter

Take note of the RouterID:

37a2afe1-a49f-4560-bac3-84a36bace670

Now, we give the router a gateway to the internet:

$ neutron router-gateway-set 37a2afe1-a49f-4560-bac3-84a36bace670 893a5b59-081a-4e3a-ac50-1e54e262c3fa

..and attach our subnet to the router too:

$ neutron router-interface-add 37a2afe1-a49f-4560-bac3-84a36bace670 aa8ad9ba-0a58-4f80-9f4d-9aaa0cd9307a

If you’ve got this far, well done! We now have the prerequisites in place to launch our instance!

Launching The Instance

Instances come in a list of pre-defined ‘flavors’; these are the sizes, ranging from ‘nano’ to large. You can see these here:

$ openstack flavor list

Once you’ve decided which flavor you’d like, you’ll also need to choose a pre-baked image. To keep in line with the previous video, I’m going to use the CentOS7 image from the below:

$ openstack image list

Finally, we need to refresh ourselves with the ID of the network, our key-pair name, as well as the security group ID. For the purposes of this article, we’re actually sticking with the default security group; we’ll modify the settings of that later.

$ openstack security group list
$ openstack keypair list
$ neutron net-list

Using the above commands, you will be able to grab the necessary IDs to pass into the important command, the one that will launch our instance:

$ nova boot --nic net-id=b4bd41aa-25b3-4f65-9120-df5891880a95 \
    --flavor c46be6d1-979d-4489-8ffe-e421a3c83fdd \
    --image 0f1785b3-33c3-451e-92ce-13a35d991d60 \
    --key-name bobbynew3 \
    --security-groups 88d0994e-cbee-4bb2-a5f3-73503f545af9 \
    exampleServer

You’ll be mightily impressed with how quickly the server is up and running. You can get its status with this simple command:

$ nova list

Accessing From The Outside World

It’s all very well having a running server with outbound Internet connectivity, but right now it doesn’t have any way of being accessed from the Internet externally. The way we do this, is to create an IP address for use on the world wide web, and map that IP address to the port of our new instance. We’ll then open up the port on the firewall to allow us to SSH into it.

Floating IP

A floating IP is a way for us to have a ‘static’ IP address in our architecture, but at the same time be very flexible in where we send the traffic. We can map this floating IP to various instances & ports but for now we are going to map it against our new instance.

From the previous command (nova list), you’ll have the ID of the instance, which can be passed as a parameter into the following command:

$ neutron port-list --device_id=6371c025-86c4-42b2-a5a8-485e56e3f138

The ID that is returned is the port belonging to the instance, and it’s this port that needs mapping to our new IP address. The following command will create a floating IP within the ‘internet’ network and map it to the port of our new instance:

$ neutron floatingip-create \
    --port-id 0413a947-4d9d-4475-bf7b-72e44f922707 internet

Security Groups

Security groups are the firewalls you can use, and the default one has zero inbound rules. Initially, we just want to open up port 22 to allow us to SSH:

$ openstack security group rule create default \
    --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0

Connecting

If you’ve followed all the above steps, you now have a CentOS7 server running on a fixed IP address that you can SSH to:

ssh -i ~/.ssh/exampleKey.pem centos@51.179.219.44

That concludes the guide to setting up a new server. What you will have noticed is that despite having a brilliant set of command-line tools, it’s still not very automated, and could take a while until you’ve memorised the process.

Keep your eyes peeled, as next we’ll be looking at how to automate these steps with Terraform!

If you have any questions, please comment or email opensource@ukcloud.com

Share

HHVM vs PHP vs OPCache vs Go vs Node

Let's start by saying I was just curious as to how fast Go was compared to PHP. I knew it was faster, but wanted some benchmarks.

I found this great article by @jaxbot:
http://jaxbot.me/articles/benchmarks_nodejs_vs_go_vs_php_3_14_2013

He then went on to dismiss PHP as it was too slow; so didn’t include it in the Go 1.1 benchmark test he did later:
http://jaxbot.me/articles/benchmarks_nodejs_vs_go_11_5_27_2013

Fair game I thought. Until a colleague pondered how fast HHVM would be as he’d recently installed it. Given how easy it is to install HHVM these days I decided to do my own benchmarking.

For the Node & Go benchmarks, please refer to @jaxbot's links above.

There is one minor difference in my code, which is that I am wrapping it in a function. PHP is generally faster at executing code inside a function, plus HHVM ignores code that isn't in a function.

The code:
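The original listing seems to have gone missing from this post. As a stand-in, here is a hypothetical reconstruction (the array size and details are guesses, not the author's original) of a function-wrapped bubble sort with per-run timing, written to bubble.php so it can be run under both engines:

```shell
# Hypothetical reconstruction of bubble.php (not the original listing):
# a bubble sort wrapped in a function, timed per run with microtime().
cat > bubble.php <<'PHP'
<?php
function run() {
    $start = microtime(true);
    $a = range(5000, 1);                 // reversed input: worst case
    $n = count($a);
    for ($i = 0; $i < $n - 1; $i++) {
        for ($j = 0; $j < $n - $i - 1; $j++) {
            if ($a[$j] > $a[$j + 1]) {
                $tmp = $a[$j];
                $a[$j] = $a[$j + 1];
                $a[$j + 1] = $tmp;
            }
        }
    }
    echo 'Run: ', microtime(true) - $start, "\n";
}
for ($r = 1; $r <= 5; $r++) {
    run();
}
PHP
echo "Wrote bubble.php; run with: php bubble.php (or: hhvm -v Eval.Jit=true bubble.php)"
```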

PHP 5.5 (opcache.enable_cli=0)
$ php bubble.php
Run 1: 24.121762990952
Run 2: 24.156540155411
Run 3: 24.948321819305
Run 4: 26.411414861679
Run 5: 24.790290117264
Average: 24.882

PHP 5.5 w/OPCache (opcache.enable_cli=1)
$ php bubble.php
Run 1: 24.675834178925
Run 2: 25.641896009445
Run 3: 26.468472003937
Run 4: 24.278208017349
Run 5: 24.843347072601
Average: 25.182

HHVM (PHP 5.5 OPCache)
$ hhvm -v Eval.Jit=true bubble.php
Run 1: 2.6463210582733
Run 2: 2.6204199790955
Run 3: 2.563747882843
Run 4: 2.9089078903198
Run 5: 2.6408560276031
Average: 2.672
Interestingly, OPcache didn't fare well in this bubble sort...

Now let's compare with the tests from @jaxbot. He had different hardware, so a direct comparison is almost meaningless... BUT HHVM speaks for itself when compared to the Zend Engine.

                  Node.js   Go 1.1     PHP 5.5   HHVM jit
Avg of 5 trials   430ms     326.26ms   24.88s    2.67s
Best              420ms     290.27ms   24.12s    2.56s

Hope that helps someone who wonders how good HHVM is?! 🙂

I finish with a quote from @jaxbot: "the same rules apply here as last time; take these with a lattice of Sodium Chloride, as the benchmarks tell us information, not answers."

Share