Time is running out for IE8

At long last we are nearing the end of Windows XP and Office 2003. This is surely good news. For too long, businesses around the country have been holding on to Windows XP as their primary operating system. In itself, XP is not a bad operating system, but for us web developers it has been the cause of many a night without sleep.

Internet Explorer 8 is the latest version Windows XP supports, and it is already four and a half years old. Come April 2014 it will be five years old, but we shall finally all get to wave it goodbye and get on with building websites for the modern browser.

If you are STILL using Windows XP and IE8, I suggest you take action sooner rather than later. Switch to Chrome or Firefox if you can’t upgrade your copy of Windows just yet, and don’t forget to tell your IT department to get a move on with the upgrade. Web browsers should never really be more than a year old in this day and age – certainly not five!


My Photo Folio

We’ve started working with MyPhotoFolio to develop a custom content management system. This CMS will form the backbone of their business, which plans to allow photographers to create their very own website. It’s early days right now, but we’re very excited to be involved with it.


Concrete5 “Enterprise Ready”? Not entirely…

We’ve written a lot about Concrete5 on Head Energy’s blog, with significant effort spent on documenting our high-level AWS architecture for large Concrete5 sites. However, going back one step more (and stepping over my hate for the phrase “enterprise ready”, which seems to be synonymous with “bloated, cryptic and slow as hell”), we’re forced to ask today: is Concrete5 really an “enterprise ready” system?

Our project, built on v5.6.1, probably has 50k+ lines of code these days, and one of the ever-increasing worries during our development has been slow page loads. We’re seeing uncached page load times creep up from their early renders of 1s-2s to a current average of 4s-6s, worsening linearly with the complexity of the page. The linearity suggests we’re simply introducing a greater workload and have only ourselves to blame, but after a few minutes of looking for high load anywhere in our architecture I was at a loss – until I did the bog-standard check for database queries.


The number of queries Concrete5 generates when using its own API is utterly staggering: a landing page deemed moderately complex generated precisely 6,882 queries to render from start to finish, and the agent wasn’t even logged in! This isn’t a problem on a developer’s machine, where the latency between the MySQL layer and the web server is effectively zero. But add just 0.5ms of latency per query (think AWS load balancers, traffic managers or just networking!) and, with the queries issued serially, that balloons to 3.441 seconds of additional page load time.
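The arithmetic behind that figure is worth making explicit, since it generalises to any per-query network cost (the 0.5ms is our example figure, not a universal constant):

```javascript
// Queries issued serially mean per-query latency adds up linearly.
const queries = 6882;       // queries for one uncached page render
const latencyMs = 0.5;      // example round-trip cost between web and DB tiers
const addedSeconds = (queries * latencyMs) / 1000;
console.log(addedSeconds);  // 3.441 seconds of pure network wait
```

Halving the query count buys back as much time as halving the latency, which is why we attack the query count first.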

What’s worse is that we’ve already taken significant steps to reduce the number of Page List blocks in use, simply because we knew the block was ridiculously heavy on the database with only a few dozen pages in play. We wrote an override which achieves in a single statement what Concrete5 was doing in the Page List with hundreds (without permission checks, however, but more on that in a moment).
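As a flavour of what that override does (this is an illustrative sketch rather than our production code; the table and column names come from a stock 5.6 schema, and the parent page ID is hypothetical), the hundreds of per-page lookups collapse into a single join:

```sql
-- One round trip instead of one query per page: fetch the approved
-- version name of every child page in a single statement.
SELECT p.cID, cv.cvName
FROM Pages p
INNER JOIN CollectionVersions cv
        ON cv.cID = p.cID
       AND cv.cvIsApproved = 1
WHERE p.cParentID = 42   -- hypothetical parent page ID
ORDER BY cv.cvName;
```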

Finding out the number of database queries Concrete5 generates per page

Concrete5 comes packing the ADOdb abstraction layer, which has a nice inbuilt function to log all database queries into a table. To make use of it you’ll probably have to create the logging table manually (we did); the statement for it is:

  CREATE TABLE `adodb_logsql` (
    `created` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
    `sql0` varchar(250) NOT NULL DEFAULT '',
    `sql1` text NOT NULL,
    `params` text NOT NULL,
    `tracer` text NOT NULL,
    `timer` decimal(16,6) NOT NULL DEFAULT '0.000000'
  );

Once created, switching on the logging function within Concrete5 is as simple as calling the following (this is what worked on our 5.6 install, where Loader::db() hands back the ADOdb connection; LogSQL() is ADOdb’s own logging switch):

  $db = Loader::db();
  $db->LogSQL(true);
You can wrap this around particular operations in your code, just to take a peek, or, as a quick solution, throw it right at the bottom of your site configuration file (in our case, config/site.php).
Now, every time you reload a page in Concrete5, the adodb_logsql table will fill up with the queries generated to make that page load. We’ve confirmed our suspicions that the Page::getBy* methods are kicking out a significant chunk of our statements, and plan to replace as many of these as possible over the next few days with customised SQL.
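With the log in place, a couple of ad-hoc queries against the logging table tell you most of what you need to know. Clear the table, reload the page once, then run these (note that sql0 holds only the first 250 characters of each statement, so grouping on it is an approximation):

```sql
-- How many queries did that one page load issue?
SELECT COUNT(*) FROM adodb_logsql;

-- Which statements repeat the most, and how much time do they soak up?
SELECT sql0, COUNT(*) AS hits, SUM(timer) AS total_time
FROM adodb_logsql
GROUP BY sql0
ORDER BY hits DESC
LIMIT 20;
```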

Is Concrete5 Enterprise Ready?

If your environment is very cache friendly, or you don’t have too many pages (arbitrarily, say fewer than 250), then yes, you’ll love the entire journey without the need to get very technical. If this isn’t the case, then you’ll quickly need a set of proficient developers who are happy to replace chunks of Concrete5’s native functionality with more efficient alternative routines (namely, fewer SQL statements!).

That said, we’re still hugely enamoured with our choice of platform, and do not believe any other system offers a better ratio of user friendliness, flexibility, scalability and out of the box CMS features, at least not yet.


Norwich tech conference

There’s nothing like attending a tech or design conference to keep the old creative juices flowing. This has always required us to, at the very least, leave Norfolk – and sometimes the country. So it was with great anticipation that we attended the first ever Norwich tech conference, SyncConf.

We are happy to say that the event was a huge success: some great speakers in attendance, and lots to digest. The main thrust of the conference revolved around Agile development techniques – something we already practise here at Head Energy – but it was really great to hear how others put the methods into practice.

Our favourite talk of the day was from MultiMap founder Sean Phelan, who described how MultiMap went from a part-time back-room operation to a €50 million business that eventually became Microsoft’s Bing Maps.

We’re really looking forward to more events from the team over at SyncNorwich. Big thanks to them!


Christmas sign off

We would like to wish you all a very Merry Christmas. It’s been a good 2012 for us and we plan to make it an even better 2013. Best of luck to you all, in each of your endeavours, for next year and future years.


Bullet proof JavaScript for CMS

A web content management system (CMS) allows editing, styling and publishing of content for a website from a single piece of software. Some common CMS choices include:
  • WordPress
  • Concrete5
  • Joomla
  • Drupal
There are two main ways of editing the content in a web-based CMS: in page and off page.

In page, or in-line editing, allows a user to view their changes as they happen in the context of the web site design. Off page editing has a separate section for editing the content so viewing it on page requires the changes to be either previewed, or even saved first.

Each approach has its advantages, but where some in-page systems really fall down is with regard to JavaScript. JavaScript code executes directly in the web browser to add powerful functionality to a website. The major downside to JavaScript is that it can easily be broken by inexperienced developers, and once a single uncaught error occurs, the remaining JavaScript in that script fails to execute.

An example of this happened to us recently when we were adding a third-party block to a client’s Concrete5 website. The developer of the block had an intermittent error in their JavaScript code, and once we added the block to a page, we could no longer edit the page to remove the block! The Concrete5 user interface relies on JavaScript, and the error in the third-party block was preventing Concrete5’s own code from running. Fortunately, we had the know-how to remove the bad block. For those who are stuck on this: you can often revert your pages to an earlier version (that is, a version that doesn’t contain your troublesome block) by going to Dashboard > Full Sitemap, clicking on the page in trouble, and selecting Versions.

The really annoying thing is that this type of problem is incredibly easy to avoid by writing better JavaScript in the first place: simply detect errors in each code block and deal with them accordingly. This is what we ended up doing with the third-party block in this case.
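A minimal sketch of the kind of defensive wrapper we mean (the function and block names here are illustrative, not Concrete5 API):

```javascript
// Run a block's script in isolation so one broken block cannot halt
// every other script on the page.
function runIsolated(name, fn) {
  try {
    fn();
    return true;
  } catch (err) {
    // Report the failure instead of letting the exception propagate,
    // so the CMS's own editing UI still initialises.
    console.error('Block "' + name + '" failed: ' + err.message);
    return false;
  }
}

// A broken third-party block no longer takes the page down with it:
runIsolated('third-party-gallery', function () {
  throw new Error('intermittent bug');
});
runIsolated('cms-toolbar', function () {
  // ...the editing interface still gets to run...
});
```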

I want to implore anyone who works with a CMS that allows in-page editing to be more diligent with their JavaScript code, and to catch and handle exceptions at all times. Just like we do.

PHP on EC2 (AWS) with Multi A-Z and Multi Region (Part 4)

Looking for Part 1? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region
Looking for Part 2? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-2
Looking for Part 3? See: http://www.headenergy.co.uk/2012/11/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-3

A Quick Intro

There seems to be quite an appetite for this mini-series on the web, so I’ve sped up the writing process a little in the hope of mind dumping what remains of this content before people lose interest.

In my recent posts I’ve discussed the problems of deploying a popular PHP CMS into the AWS cloud hosting platform, the options one might consider in spreading out the database tier in a non-split environment, and finally how we can best leverage our choice of Galera. If you’re buying what we’re selling here, then you’ve got a multi-master set up, with good resiliency to zone, and even region, failure at the database tier. This post will go into more detail regarding our chosen configuration, including some performance notes so we’re better informed of the power of this deployment.

Our Configuration Choice

For those that don’t recall, part 3 of this mini-series handled the different configurations open to us with regard to Galera. Our recommendation reads:

We’ve chosen to go with the DBMS tier cluster along with a distributed load balancer as we cross into different regions. This is primarily due to our platform being EC2, as it provides a resilient, reliable and cost effective load balancer, removing the worry of having a single point of failure. This also simplifies the deployment as more database servers are removed or added to the pool, as we only have to configure a single load balancer.

In reality, we don’t know what our load will be precisely, so we’re going to go with the straightforward minimum 3 production servers for our tests, along with 1 additional server on standby.

Benchmark tests (using sysbench with 15 tables of 2m records each) show the potential Transactions Per Second (TPS) and latency for a Galera cluster of EC2 m1.large instances (7.5GB RAM, 2 cores). They show that a 4-node Galera cluster on EC2 can support almost 1,000 transactions per second with 110 concurrent users, with no serious degradation of latency. As a comparison, the native NDB cluster benchmark with the same tables saw latency increase to in excess of 600ms at the same 110-user concurrency. The following images show our results in greater detail:

As the system scales out it may well prove more efficient to move to the aggregated stack cluster (discussed in the previous post). After examining the stats and performance of Concrete5, we’re confident that a single database server can serve multiple application stacks. This is particularly poignant given we know we’ve got a ceiling on the number of Galera nodes available to us in our cluster: we can easily scale out many application stacks before we need to focus on scaling MySQL.


The Initial EC2 Configuration

Based on our assessment above, and assuming use of EC2, we’re going to use the following configuration:

  1. 1 VPC per site (though this is mandated by AWS)
  2. 3 Large EC2 instances running Galera, with an extra on standby.
  3. 6 EBS volumes using RAID0 for the data files and one EBS for the server application and O/S.

In future posts I’ll outline the full deployment structure we’ve chosen, including GlusterFS, a NAT entry point node, a DNS pair, Load balancer(s) (HA Proxy), underpinned by Puppet for very flexible deployments. We’re also likely to use either ZXTM (in conjunction with the Amazon AWS API) or Elastic Load Balancing to achieve automatic scaling in our deployment.


PHP on EC2 (AWS) with Multi A-Z and Multi Region (Part 3)

Looking for Part 1? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region
Looking for Part 2? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-2
Looking for Part 4? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-4

A Quick Intro

In this part of the mini-series, we progress into the depths of cloud deployment strategies for the now-selected MySQL multi-master replication offering from Codership, Galera. In my previous post I went into detail on why this choice was made; I’d recommend giving it a read if you haven’t!

It is not enough simply to install Galera in the cloud; we need to decide how to leverage the technology within the cloud infrastructure to the best effect for our use case. In this part we’ll discuss four of the deployment configurations that are open to us and work out which is best for our needs.

Stack Cluster
So, let’s start simple: in the Stack Cluster we scaffold an entire dedicated collection of nodes. To scale, we simply add another collection of servers from every layer and connect them together.

Advantages:


1. It’s exceedingly easy to manage. You also have the option of placing an entire branch (/stack) of software onto one box, making a very elegant deployment strategy.

2. There’s a direct connection from the applications to their database nodes, minimising latency overheads.

Disadvantages:


1. Inefficient use of resources for several reasons:

  • Overuse: Database servers usually offer more spare capacity than their application layer counterparts, so dedicating a DBMS to a single branch may be overkill.
  • Bad resource consolidation: One server with a 7Gb buffer pool is much faster than two servers with 4Gb.
  • Increased unproductive overheads: Each server duplicates the work of the others.

2. If a DBMS (Galera) node fails, you’ve lost the entire branch.

3. An increased rollback rate, due to cluster-wide conflicts.

4. Inflexibility: There’s no way here to limit the number of master nodes or perform intelligent load balancing.

DBMS Tier Clustering
To address the shortcomings of the Stack Cluster, we could move the DBMS tier away from the application branches and present it as a single virtual server through the use of a load balancer (Head Energy uses HAProxy, even in the EC2 environment at this stage).

Advantages:


1. If one of the DBMS nodes fails, it fails in isolation, as we no longer lose application servers.

2. Improved resource consolidation.

3. Greatly improved flexibility, as we can now dedicate individual DBMS nodes to perform particular roles, and intelligently load balance when required.

Disadvantages:


1. We now have a new single point of failure in the system, the load balancer. If you deploy this configuration, you’ll need to deploy two load balancer nodes, and arrange a failover between them.

2. Increased management complexity, as we now have to manage the new load balancer, reconfiguring it whenever a node fails or joins the cluster.

3. As we’ve introduced a new layer, the load balancers, we’ll have increased the latency for each query too. This can produce a bottleneck in some applications, and certainly the load balancers should be powerful, well equipped nodes.

4. Spreading this configuration over multiple availability zones (or data centres) may reverse all the benefits of resource consolidation, as each data centre will require at least 2 DBMS nodes.
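For a concrete picture of the load balancer layer, a minimal HAProxy front-end for a three-node Galera pool might look something like this (addresses and names are illustrative; a production config will also want proper health checks and timeouts):

```
listen mysql-galera
    bind 0.0.0.0:3306
    mode tcp
    balance leastconn
    option tcpka
    server db1 10.0.1.10:3306 check
    server db2 10.0.2.10:3306 check
    server db3 10.0.3.10:3306 check
```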

DBMS Tier Clustering (with a distributed load balancer)

This is a slight modification to the above configuration, in which a load balancer is deployed alongside each application node rather than as a central pair.

Advantages:

1. The load balancer is no longer a single point of failure; indeed, it can now scale with the application layer and is unlikely to be a performance bottleneck in and of itself.

2. The latencies on communication between the application and database tiers will be lessened.

Disadvantages:

There are now N load balancers to manage, and to reconfigure when nodes leave and enter the pools. This can be somewhat mitigated with software deployment tools such as Puppet, or by using a more advanced load balancer with configuration replication support.

Aggregated Stack Cluster
Last, but not least, the Aggregated Stack Cluster is a hybrid of the configurations we’ve seen above. It’s tailored to smaller sites which might not need much more than replication across multiple zones / data centres; it is essentially what the previous configuration would look like if we left one DBMS node per stack.

Advantages:


1. It improves resource utilization of an entire stack cluster.

2. It maintains the benefits offered by the Stack Cluster configuration: simplicity and direct DBMS connections.

Disadvantages:


1. Not suitable for larger sites due to the single DBMS node for each stack and lack of load balancing.

Our recommendation

We’ve chosen to go with the DBMS tier cluster along with a distributed load balancer as we cross into different regions. This is primarily due to our platform being EC2, as it provides a resilient, reliable and cost effective load balancer, removing the worry of having a single point of failure. This also simplifies the deployment as more database servers are removed or added to the pool, as we only have to configure a single load balancer.

In my next post I’ll show some basic load testing results and justify the number of Galera boxes I chose to deploy, along with some basic notes about EC2 and general thoughts regarding MySQL deployment on it.




PHP on EC2 (AWS) with Multi A-Z and Multi Region (Part 2)

Looking for Part 1? See: http://www.headenergy.co.uk/2012/10/php-on-ec2-aws-with-multi-a-z-and-multi-region
Looking for Part 3? See: http://www.headenergy.co.uk/2012/11/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-3
Looking for Part 4? See: http://www.headenergy.co.uk/2012/11/php-on-ec2-aws-with-multi-a-z-and-multi-region-part-4

A Quick Intro

In my last post I discussed the need to deploy a popular PHP CMS into the cloud, leveraging the scalability and multi-AZ / multi-region redundancy offered by the excellent AWS. One of the first stumbling blocks to spreading the database load for our CMS (Concrete5 – which sure does generate a lot of queries per page load!) was the inability to elegantly split our reads and writes and send them to different MySQL nodes. After sifting through a few inadequate solutions, we settled on Galera.


Galera is a true multi-master, synchronous replication cluster for MySQL, based on the widely used InnoDB storage engine. Users can deploy a Galera cluster locally in LAN environments, as a geo-cluster over the WAN, or as a virtual cluster on cloud hosting platforms. Even better, Galera is offered as Open Source software and can be downloaded freely from http://www.codership.com.

The marketing blurb states:

“Galera Cluster is deployed by users who need highly available MySQL database back-ends with very fast fail-over time, consistent databases with no data loss and reduced investment in high availability architectures. Galera Cluster is ideal for business critical applications like Social networks, Gaming platforms, Telecom/VoiP solutions, Media and entertainment sites, Business Software as a Service (SaaS), Platform as a Service(PaaS), ERP systems, web-shops, e-commerce solutions or similar critical applications.”


Multi-master replication means that every node can act as the master at any given time; that is to say, we can write to any node in the pool indiscriminately. Unlike MySQL’s native replication, whose single-threaded slave applier is liable to bottleneck, Galera takes a parallel replication approach which reduces slave lag significantly. Applications connecting to any server in the cluster can be confident they are reading from and writing to the same stateful dataset.

As all nodes communicate to all other nodes, transmitting writes as appropriate, we can forget about the read / write splitting otherwise required by Concrete5 for MySQL replication.

Galera also supports WAN clustering, with synchronous replication within that. Unavoidably, and understandably, there may be short periods of lag due to the network round-trip time required by the synchronous approach. Our testing shows that the round trip between Singapore, Ireland and East Coast U.S. is still measured in milliseconds rather than seconds.

For those of you running slightly tighter applications, Galera also assigns a Global Transaction ID to every replicated transaction, so transactions can be uniquely referenced on any node.

Galera also facilitates automatic node joining: the cluster chooses a ‘donor’ node to bring the ‘joiner’ up to date.

The path has been well trodden on EC2, too: Galera has already been well tested and deployed on the Amazon EC2 environment.


To operate effectively, a Galera cluster must start with a minimum of 3 nodes. As Galera will try to promote any server to a master, a cluster of fewer (e.g. 2) would suffer from a ‘split brain’ scenario, wherein both servers are alive but unable to communicate with each other, and neither side can know it is safe to continue. With 3 servers this ‘split brain’ scenario cannot happen, so long as two servers continue to communicate. To extend this, an odd number of nodes is recommended as an application scales up.
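The underlying rule is simple majority voting: after a network partition, a group of nodes keeps serving only if it still holds more than half the cluster. A toy sketch (simplified; Galera's real quorum calculation also supports node weighting):

```javascript
// A partition may continue operating only with a strict majority of nodes.
function hasQuorum(partitionSize, clusterSize) {
  return partitionSize > clusterSize / 2;
}

// Two nodes split 1/1: neither side has a majority, so neither can
// safely continue -- the 'split brain' deadlock described above.
console.log(hasQuorum(1, 2)); // false
// Three nodes split 2/1: the pair keeps serving, the loner stops.
console.log(hasQuorum(2, 3)); // true
```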

The number of Galera nodes in the pool should not exceed 10; the mathematics are sound, if a little complicated, and can be seen on Codership’s site here: http://www.codership.com/content/multi-master-arithmetics.

As mentioned, replication only works with the InnoDB storage engine; writes to other table types simply will not be replicated. We are using this “limitation” to exclude replication for tables we know we don’t need replicated, e.g. non-critical logging tables. Rest assured, however, that DDL statements are replicated at statement level, and so changes to the mysql.* tables made that way are replicated: we can safely issue CREATE USER… or GRANT…, but issuing INSERT INTO mysql.user… would not be replicated. By their nature, non-transactional engines cannot be supported in multi-master replication.

Concrete5 requires some MyISAM tables for its text-search facility, specifically PageSearchIndex. It is relevant to note that MySQL 5.6 supports full-text search within InnoDB tables (http://blogs.innodb.com/wp/2011/12/innodb-full-text-search-in-mysql-5-6-4), and Galera will be releasing an updated version soon to incorporate this. In the interim, we’ve removed these indexes and moved the tables to InnoDB regardless. In our case, we’re already deepening the functionality of the search engine by incorporating Solr, so this has no impact on our existing project roadmap.

The next blog post will cover the various cluster configurations we have open to us on EC2 and, time permitting, some other general considerations for EC2, such as disk io, disaster recovery and more.


Future of Web Apps

We had a great time at the Future of Web Apps conference in London this week. It was a great opportunity to meet up with some old colleagues and hear from some of the greatest people in web design and development right now.

As mentioned in a previous article, we will be putting together a full breakdown of the best sessions relevant to both ourselves and our clients over the coming weeks. We’re working on putting this all together as I type, and so far some clear winners include the sessions on:

  • Writing Software for Humans
  • How to destroy the web
  • The Mobile Revolution
  • Pushing the boundaries without breaking the web
  • Designing an elegant Mobile User Experience across multiple devices and platforms.

We’re really excited about these things and more, and look forward to sharing more with you soon!
