Android Numeric Keypad Password Field

Just discovered something, which may be useful to some of you Googlers!

Problem

Getting the numeric keypad up on an iOS device is fairly easy: all you do is apply the pattern attribute. At Sky we use ‘pattern=”\d*”‘ in our codebase. The problem is that this doesn’t work on Android. Android requires that the field has a type attribute of either ‘tel’ or ‘number’. This creates problems: if you want a numeric password field, you can’t get the numeric keypad up on Android… or can you?

Introducing -webkit-text-security:

The solution (and this also works for iOS) is to give all numeric password fields a type of ‘number’ (‘tel’ works too), and apply the following CSS property:

input[type=number] {
  -webkit-text-security: disc;
}

This masks the text with the discs typically found on a password field. This means that you can do away with the ‘pattern’ attribute completely. I’d strongly suggest that you don’t get carried away and use this CSS for all password fields – it should really only be used where you need a numeric keypad for a password!
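Putting it together, a numeric PIN field might be marked up like this (the field name and class are illustrative):

```html
<!-- Android (and iOS) shows the numeric keypad for type="number";
     the -webkit-text-security rule above masks the digits -->
<input type="number" name="pin" autocomplete="off">
```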

I’ve confirmed this works on iOS 6.1.2, Android 2.3.3 and Android 4.1.2.

Hope that helps someone 🙂

Installing ZooKeeper for PHP on CentOS 6.3

This post is very short, simply as a reference for anyone out there who would like to install ZooKeeper on CentOS 6.3 and connect via the PHP bindings.

To download ZooKeeper, you can visit Globocom’s GitHub page for updated versions. Below are the versions I used at the time of writing.

Install ZooKeeper:

curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/zookeeper-3.4.3-2.x86_64.rpm
rpm -ivh zookeeper*
service zookeeper restart

ZooKeeper is now up and running, but you need to install some more stuff before you can connect to it!

curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/libzookeeper-3.4.3-2.x86_64.rpm
rpm -ivh libzookeeper*
curl -O http://cloud.github.com/downloads/globocom/zookeeper-centos-6/libzookeeper-devel-3.4.3-2.x86_64.rpm
rpm -ivh libzookeeper-devel*

Install php-zookeeper from Andrei Zmievski:

git clone https://github.com/andreiz/php-zookeeper.git
cd php-zookeeper
phpize
./configure
make
sudo make install

Add extension=zookeeper.so to your php.ini, or create a zookeeper.ini and place it in your php.d folder!
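If you go the php.d route, the file is just a one-liner (the path assumes a stock CentOS PHP layout):

```ini
; /etc/php.d/zookeeper.ini
extension=zookeeper.so
```

A quick `php -m | grep zookeeper` should then list the extension.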

Hope that helps someone!

Cheers

Adding PHPDocumentor (Sami) via Composer

In a previous article I demonstrated setting up a Silex project via Composer. In addition to setting up PHPUnit, I also mentioned how to get CodeSniffer working to the PSR-2 coding standard. I figured a nice addition to that would be setting up PHPDocumentor.

Note: using a documentation generator implies that you need documentation. If you’ve already got code and no documentation, then it’s probably worth highlighting that some would say you are doing something wrong. Arguably, you should always have some documentation explaining what the API endpoints need to be, and what they should return. For the purposes of this article I shall assume that you’re like me – and want to create some pretty documentation automagically.

I’ve actually opted to use Sami over phpDocumentor2, because phpDocumentor didn’t seem to fully support Composer when Silex was installed (conflicting dependencies). It also helps that Sami is from Fabien Potencier of Sensio Labs, who wrote Silex.

Getting Started

Okay, so let’s dive right in – I’m assuming you’ve already got the application running from my previous article mentioned above.

Edit your composer.json:

{
    "minimum-stability": "dev",
    "require": {
        "silex/silex": "1.0.*@dev"
    },
    "autoload": {
        "psr-0": {"DVO": "src/"}
    },
    "require-dev": {
        "phpunit/phpunit": "3.7.*",
        "squizlabs/php_codesniffer": "1.*",
        "sami/sami": "*"
    }
}

Now give composer a little nudge:

$ composer update --dev

Very quickly you can verify that Sami has been installed correctly:

$ ./vendor/bin/sami.php

If that spits out some useful help information then you’re onto a winner. The next step is to create a config file; I’ve gone for a basic implementation just to get it up and running.

Create a config directory and, inside it, a new file called sami.php with the following:

<?php

return new Sami\Sami(__DIR__.'/../src/', array(
    'build_dir' => __DIR__.'/../build/sami/documentation',
    'cache_dir' => __DIR__.'/../build/sami/cache',
));
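Sami’s config can also take a Symfony Finder instead of a bare directory if you want more control over which files get documented; here’s a slightly fuller sketch based on Sami’s README (the ‘title’ value is illustrative):

```php
<?php

use Sami\Sami;
use Symfony\Component\Finder\Finder;

// Only document *.php files under src/
$iterator = Finder::create()
    ->files()
    ->name('*.php')
    ->in(__DIR__.'/../src');

return new Sami($iterator, array(
    'title'     => 'DVO API',
    'build_dir' => __DIR__.'/../build/sami/documentation',
    'cache_dir' => __DIR__.'/../build/sami/cache',
));
```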

Give it another whirl:

$ ./vendor/bin/sami.php update config/sami.php

How easy was that? You will see it has dumped a bunch of stuff in the build/sami folder. You can easily browse the documentation and/or set up a vhost to share it.

Nice and simple. Let me know if you have any thoughts/feedback.

Bobby (@bobbyjason)

Using CruftFlake with ZeroMQ for ID generation instead of Twitter Snowflake

What is CruftFlake?

CruftFlake is essentially a PHP version of Twitter’s Snowflake. However, rather than using Apache Thrift, CruftFlake uses ZeroMQ.

Snowflake and CruftFlake are both used for generating unique ID numbers at high scale. In large, scalable systems you tend to move away from the likes of MySQL (with its ever-so-lovely auto-increment) and towards NoSQL solutions such as MongoDB, CouchDB, Redis, Cassandra and Hadoop/HBase.

There are many database solutions that address the problem of scalability, and one thing you’ll find yourself needing more often than not is the ability to generate a unique ID – and that’s what CruftFlake is for.

Why?

I quote from my PHP UK Conference review post:

“If you use technology that was designed to be resilient, and then build your application atop of that with resilience in mind, then there is a very good chance that your app will also be resilient.”

To be fair, this is more about scalability than resilience, but one could argue they go hand in hand. The point being that you can’t rely on a single auto-increment value if you have 100 database servers; it would be… disgusting.

Installing

ZeroMQ

I’m still running things locally, so the installation below assumes you’re running Mountain Lion.

First things first: sadly, Mountain Lion and the new version of Xcode don’t appear to ship with Autoconf, so you’ll have to install it:

$ cd ~
$ mkdir src
$ cd src
$ curl -OL http://ftpmirror.gnu.org/autoconf/autoconf-latest.tar.gz
$ tar xzf autoconf-latest.tar.gz
$ cd autoconf-*
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

Now that should have sorted that issue out!

Next, you’ll want to install ZeroMQ:

$ cd ~/src
$ curl -v http://download.zeromq.org/zeromq-3.2.2.tar.gz > zeromq.tar.gz
$ tar xzf zeromq.tar.gz
$ cd zeromq-*
$ ./configure
$ make
$ sudo make install

After that you’ll need to install the PHP bindings:

$ sudo pear channel-discover pear.zero.mq
$ sudo pecl install pear.zero.mq/zmq-beta
$ echo 'extension=zmq.so' | sudo tee -a /etc/php.ini

Verify the install:

$ php -i | grep libzmq
libzmq version => 3.2.2

If you’ve managed that with minimal effort then give yourself a huge pat on the back! 🙂

CruftFlake

Now here comes the easy part. I forked davegardnerisme/cruftflake over to my posmena/cruftflake repo and added in some composer love. I should really do a pull request and maybe @davegardnerisme will permit 🙂

Anyway, create a new folder called cruftflake and create a file called composer.json with the following:

{
    "minimum-stability": "dev",
    "require": {
        "posmena/cruftflake": "*"
    }
}

Then:

$ composer install

Generating an ID

Ready for the clever bit? Open two terminals, both in the cruftflake directory. In one of them, do:

$ php vendor/posmena/cruftflake/scripts/cruftflake.php
Claimed machine ID 512 via fixed configuration.
Binding to tcp://*:5599

That will set the service running. So, now if you want an ID, you just go to the other window and type:

$ php vendor/posmena/cruftflake/scripts/client.php
153291408887775232

How’s that for a nice unique number? It’s built from the system time and the configured machine ID, plus a sequence number.
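You can actually see those components in the number itself. The layout below mirrors Twitter’s Snowflake, which CruftFlake follows (treat the exact bit widths and epoch as assumptions; check the source for specifics) – note that the machine-ID bits decode to 512, matching the startup message above:

```shell
# Decode a Snowflake-style 64-bit ID: roughly 41 bits of timestamp,
# 10 bits of machine ID and 12 bits of sequence number.
id=153291408887775232
echo "sequence:  $(( id & 4095 ))"          # low 12 bits
echo "machine:   $(( (id >> 12) & 1023 ))"  # next 10 bits -> 512, as claimed
echo "timestamp: $(( id >> 22 ))"           # ms since the generator's epoch
```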

To show how fast it is, you can generate 10,000 IDs in less than 2 seconds:

$ php vendor/posmena/cruftflake/scripts/client.php -n 10000

The duration it takes will of course depend on your server, but don’t forget that this is per process. You can have as many of these running as you wish!

Summary

Easy, right?!

I’m sure that as you’ve been following the tutorial, you’ve been looking at the code and seeing what it’s doing. You will see how ZeroMQ is set up as well as how the generator works.

You may notice that I’ve elected to skip using the ZooKeeper configuration. The reason for this is that ZooKeeper is for running multiple nodes; you don’t need multiple nodes for a quick demo!

I’ve found CruftFlake to be a really neat tool. It’s very much overkill for small projects, but the whole point is to play around with this stuff so you’re aware of the scalable solutions out there.

Thanks to @davegardnerisme for letting me fork – if I do issue a pull request, I will be sure to update this post accordingly.

I shall definitely be blogging soon when implementing this into a real-world scenario. Stay tuned!

PHP UK Conference 2013

Sat in a Starbucks in St Pancras station waiting for my train back to Leeds after a great time at the PHP UK Conference, I figured I would use this opportunity to blog my thoughts on the talks instead of sulking with my man flu. I’m not going into great detail – just a quick overview!

NOTE: I don’t have the links to the talks yet but I’ve linked to the slides where possible.

The Talks

Friday’s Keynote – You Are A Designer

Friday’s opening keynote from Aral Balkan was great – I don’t want to play down the rest of the talks, but in my opinion this was one of the best talks of the conference, if not the best. Aral reminded us that user experience should not be an afterthought, but something that is essential from the beginning of the project right through to completion. There were some references to the genius of the late Steve Jobs, which were a great reminder of his excellent design philosophy – a great example being iDVD, where Steve Jobs completely ditched the previous version (from an acquisition) and requested a very simple ‘burn DVD’ button.

It was a really entertaining presentation and I massively recommend you watch the recording – I shall link to it here once the folks at PHPUK have figured out how to upload YouTube videos 🙂 You can check out his website.

Event Stream Processing In PHP

I was sure Ian Barber would do a great job telling us about React PHP, especially after seeing his talk on ZMQ at PHPUK2011. I was right.

For those who don’t know, ReactPHP could be summarised as ‘node for PHP’. Ian had some great examples showing what sort of real-world applications you could apply this to – by coding as he spoke. This made the talk much more effective, as you could see exactly what he was trying to show. We learned about various ways in which you could set up an event stream for varying purposes, and I would very much recommend that you go and watch his video when you get the chance (link here).

One thing that I think was very important to take away from this was that it wasn’t a case of ‘nodejs vs React’ – it was more that if you have a requirement for a real-time system with a low volume of traffic, then React will be perfect for you. If you start to need upwards of hundreds of thousands of requests, into the millions, then you should use the correct technology for the job and learn how to use nodejs. Well, at least that’s what I took away from it anyway – and it’s definitely a great way to get into setting up real-time systems using just PHP.

Cranking Nginx up to 11

A very informative talk from @h – however, I have to admit to never having got round to installing or configuring Nginx before… With this being an advanced talk about squeezing every last bit of performance out of Nginx, it was great, but I didn’t recognise any of the basic config to start with! I believe Helgi did a talk later in the day at the ‘unconference’ for those who wanted some basic Nginx knowledge, but sadly I wasn’t able to make it.

Nonetheless, I took some great tips such as knowing you can connect directly to MySQL and get Nginx to load balance for you. I went away from this talk with a firm assertion that I want to ditch Apache and figure out how to set up Nginx with php-fpm, so thanks Helgi! 🙂 Link to talk here or the slides are here

API Design: It’s Not Rocket Surgery

I consider myself rather knowledgeable when it comes to APIs – and this talk from Dave Ingram (@dmi) pretty much confirmed my thoughts. There wasn’t anything in particular that I noted down for further reading – I was very happy with that, as it means my API knowledge must be pretty good!

In summary, it covered that you should be creating documented, RESTful (CRUD) APIs, using a sensible URL for each endpoint. It was a very informative talk and he covered many points, such as documentation, authentication, headers, formats, caching and versioning of your documented API.

A good point he made was about CORS (Cross-Origin Resource Sharing), which is a way to allow in-browser cross-origin XHR requests.

For those who would like further information – I’d strongly recommend David’s talk: link here or you can view his slides

Bottleneck Analysis

Another informative talk – this time from Ilia Alshanetsky. He went into the stuff that you would expect, such as using inspect element and/or Firebug to see the load times of your resources. Making sure you load things as asynchronously as possible, as well as parallelising over multiple hostnames, was mentioned. One very useful addition for me was the note about ‘Boomerang’, a tool that will send your true page-load time back to your server. It’s all very well knowing that your PHP page loads in 250ms, but what use is that if the user can’t physically interact with your page until 2s have passed? Using Boomerang will give you great insight into how long your users are actually having to wait. There is even an awesome add-on called navtiming.js, which sends back the individual timings of each resource (just like inspect element).

A useful tip I picked up was that if you’re running a redirect on your website to always send your users to the ‘www’ version, in addition to running SSL, then you could be incurring a massive delay in the redirect. At the time of writing, only Chrome is able to show you this delay, as Firebug doesn’t cover SSL.

One point that @mattoddie reiterated to me was that if your PHP error_log has any notices and warnings, then your app will be slowed down massively. Production code should never emit notices and warnings – make sure you check!

We also saw how you can throw these stats into Graphite to keep an eye on your users’ load times, in addition to performing Apache Bench load testing. A very informative talk, and you can view the recording here: link – slides can be found on Ilia’s website.

Saturday – Keynote (the diabolical developer)

I’ll be honest; I didn’t fully understand this. Some said it was very clever sarcasm, some said it will change their approach and others didn’t get it either.

All I could take away from it was that you should always think about what you are doing, and why you are doing it. Don’t just do things because the person next to you is telling you that it’s the cool thing to do. You should be driving what is best for you. Be Better.

The link to the talk is here: link. There’s already a YouTube video from 2 years ago if you want a very shortened version.

The Hypermedia API

I found this to be a very useful talk. The best way I could describe it is as an advanced API talk. Ben Longden briefly covered some of the areas Dave Ingram covered in his talk, but quickly progressed to the ‘Richardson Maturity Model’. Having established that my APIs are typically level 2 (the RESTful ‘CRUD’ type), I discovered that I can basically enhance them by giving them some steroids! The end result is that the response not only contains the data, but also some information about that entity, such as what the URL would be to edit it or delete it.

It’s all centred around the Hypertext Application Language (HAL), which was new to me. Another useful point was reducing requests by using a zoom parameter (the ‘hypertext cache pattern’). So if you’d like to get all the information for a user, in addition to their messages, you could do: website.com/api/v1/user/123?zoom=messages
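To make that concrete, a HAL-style response to that zoomed request might look something like this (the field names and URLs are purely illustrative; `_links` and `_embedded` are HAL’s reserved keys):

```json
{
    "id": 123,
    "name": "Bobby",
    "_links": {
        "self": { "href": "/api/v1/user/123" },
        "messages": { "href": "/api/v1/user/123/messages" }
    },
    "_embedded": {
        "messages": [
            {
                "id": 1,
                "body": "Hello!",
                "_links": { "self": { "href": "/api/v1/message/1" } }
            }
        ]
    }
}
```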

I shall definitely be doing more research into this, but to see the talk, here it is: link. You can already view his slideshow.

Scaling with HipHop

I think this is one that everyone was looking forward to. Sara Golemon (from Facebook) gave us a very informative talk on the history of HipHop before progressing onto how it has evolved.

In brief, HipHop used to be something that you had to use to compile your PHP code before running it. At Facebook this meant waiting 20 minutes for hphpc to build the application on over 100 build servers before being able to test your change; clearly not ideal! On the road to finding the best solution, they created HPHPi for use in dev environments, so that developers didn’t need to wait for the code to build.

However, after much hard work – and lots of very clever people, they came up with HHVM. HHVM uses JIT (just in time) compilation to analyse the data types as the code is executed and generate the necessary machine code for optimal speed.

Sara was very keen to point out that HipHop has obviously been designed for the Facebook codebase; therefore the codebase that will benefit most from it is Facebook. Whilst speed improvements of up to 600% have been noted for FB, you can expect to see only 150-200% speed increase if you’re running WordPress. If you’re a company that has 50 servers to cope with load due to the PHP execution time, I’m pretty sure you’d love to only need 25 servers?

Sara also covered the use of XHP, a PHP extension which makes your frontend code a hell of a lot easier to read, as well as being a massive help with regard to preventing XSS attacks.

I definitely recommend you watch the talk – I shall be doing some blogging on the subject! Talk link here.

Planning to Fail

I loved this talk from David Gardner. The point was that it’s easy enough to make a reliable system, but is it resilient?

David used Hailo (the taxi app) as his example of a portfolio of technology that was designed to be resilient. If you use technology that was designed to be resilient, and then build your application atop of that with resilience in mind, then there is a very good chance that your app will also be resilient.

Netflix famously announced their lessons learned from the AWS Cloud problems with the Chaos Monkey. David showed us a multitude of ways that you can achieve the end goal of having a Chaos Monkey.

I think David’s talk will be the source of many blog posts for me, largely due to the great technologies they use at Hailo, such as Cassandra, ZooKeeper, ElasticSearch, NSQ & CruftFlake.

Keep an eye out for those blog posts; I shall probably be looking at CruftFlake soon, as it’s a great way to generate unique IDs – far nicer than UUIDs and the infamous MySQL auto-increment! It’s essentially a PHP version of Twitter Snowflake, but removes the dependency on Thrift.

Definitely a talk to watch: link here – but he’s already uploaded his slideshow.

You Can’t Optimise What You Can’t Measure

So, Juozas “Joe” Kaziukėnas did a great talk – it expanded massively on what Ilia had covered in the bottleneck-analysis talk: using StatsD and Graphite.

If you write to logs in your application, you’re slowing down your application. End of. With StatsD being a simple NodeJS daemon that utilises UDP, you can be sure that it’s non-blocking in nature, so your app will not suffer. It also means you can literally measure anything, and you don’t need to worry about switching on ‘debug mode’ – you can run it all in your live environment without worrying about performance (in fact, you can measure performance).
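That non-blocking nature falls straight out of the wire format: a StatsD metric is just a single plain-text UDP datagram of the form `<name>:<value>|<type>`. As a sketch, you can fire one off from a bash shell without any client library at all (the metric name is illustrative; 8125 is StatsD’s default port, and `/dev/udp` is a bash feature):

```shell
# "app.logins:1|c" means: increment the counter "app.logins" by 1.
# UDP is fire-and-forget, so this returns immediately whether or not
# anything is listening - which is exactly why it can't block your app.
echo -n "app.logins:1|c" > /dev/udp/localhost/8125
```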

Joe went on to mention the great tool that is Graphite, which hooks up to StatsD perfectly. We actually use this at Sky, but it was great to have an explanation of how it works. Logster (or ‘Lobster’ as Joe likes to call it) is a tool which allows you to throw all of your log files into Graphite if you happen not to be using StatsD. There was also a mention of DataDog – a paid web service for offloading your graphing to a third party.

I loved this talk, and I shall definitely be doing my own research into StatsD. You can view his talk here: (link) – but he’s already put his slides up.

Monitoring At Scale: Intuitive Dashboard Design

Lorenzo Alberton didn’t leave out any details when it came to effective monitoring. It’s difficult for me to summarise as there was simply a lot of information to take in – but I’ll try in the form of bullets (most using his slide headings!)

  • Create surprise with alerts
  • Show, don’t tell
  • Communicate with clarity
  • Too much data and too little information = problem
  • Heuristics
  • Organise information to support meaning
  • Correlate events to add context
  • Shapes, Sounds & Colours do help
  • Realtime – StatsD & Graphite
  • Averages SUCK – use percentiles
  • Patterns our brains should recognise
  • Heatmaps / Cacti
  • Make the subtle obvious
  • Make the complex/busy simple/clean

You should really view his excellent set of slides

Summary

The PHP UK Conference 2013 was great. I loved it. Apologies if I irritated anyone with my coughing/sneezing/nose blowing. Extra apologies if you also now have man flu!

I’ve learned a great deal, and I have plenty of things to blog about. So stay tuned!

Many thanks to everyone who made PHP UK what it was and of course, Sky for paying for my ticket. Definitely looking forward to next year!