
When do you move WordPress to a multiple server setup?

February 7, 2014 • Ben May

When it comes to scaling your WordPress hosting infrastructure, the question of moving from a single server to multiple servers comes up pretty quickly.

It’s often considered the hardest jump to make; once you’re running two servers, the jump to three, four or five hundred servers is trivial by comparison.

So the question I hear often: when do I need to move from a single-server to a multi-server setup?

A multi-server setup is really the last step you take in setting up an enterprise WordPress (or any app) platform.

Things get much harder

Making the switch to a multi-server architecture is something you really need to think through carefully, weighing all the risks attached and any unique requirements of your WordPress setup.

Ask yourself

  • Will everything work with a shared object cache like Memcached? (APC won’t behave as expected with multiple web servers; see the sketch after this list.)
  • How will we process updates and sync across servers?
  • Does the sync need to be active-active? (Much harder and more dangerous.)
  • Are we going to scale automatically?
  • How much control does our load balancer give us?
  • Does the load balancer support sticky sessions?
  • Does the load balancer allow us to route to nodes based on URL patterns?
  • Are you using something like Gravity Forms for front-end user submissions, and how will that data sync across servers?
  • Do you have an instant / fast deployment system, especially for core updates?
  • Is there an easy rollback for code deploys?
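On the object cache question above: a multi-server setup needs a shared cache backend rather than a per-server one like APC. As a minimal sketch, assuming the classic Memcached Object Cache drop-in (object-cache.php) is installed and using hypothetical addresses, the wp-config.php side looks something like this:

```php
<?php
// wp-config.php excerpt -- a sketch only. Assumes the Memcached Object
// Cache drop-in (object-cache.php) is in place; IPs are hypothetical.

// Every web server points at the same shared Memcached node, so they
// all see one consistent object cache (unlike per-server APC).
$memcached_servers = array(
	'default' => array(
		'10.0.0.10:11211',
	),
);

// Prefix cache keys per site, so several installs can share the node
// without their keys colliding.
define( 'WP_CACHE_KEY_SALT', 'example.com' );
```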

On a single server, you can usually take care of nearly everything above from the command line and quickly look after things.

When you jump to multiple servers, you can’t log in to five terminals and execute commands on all of them at the same time.

You can’t deploy a new WordPress release to one web server at a time either: if the release includes database schema changes, nodes still running the old code against the upgraded database risk major corruption.

Everything has to be automated, using tools like Capistrano. Have you learnt and mastered one?
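The underlying idea is just that the same steps run on every node from one script, so no node is forgotten and a failure stops the rollout. A rough sketch of that idea (hostnames, the document root and the stop-on-failure policy are all illustrative, and this is no substitute for a real deploy tool):

```php
<?php
// deploy.php -- an illustrative sketch, not Capistrano. Hostnames and
// the document root are hypothetical.
$servers = array( 'web1.example.com', 'web2.example.com', 'web3.example.com' );
$release = isset( $argv[1] ) ? $argv[1] : 'master';

foreach ( $servers as $host ) {
	echo "Deploying {$release} to {$host}\n";

	// Every node checks out the same tagged release; keeping the fleet
	// in lock-step avoids mixed code versions behind the load balancer.
	$remote = sprintf(
		'cd /var/www/site && git fetch --all && git checkout --force %s',
		escapeshellarg( $release )
	);
	passthru( 'ssh ' . escapeshellarg( $host ) . ' ' . escapeshellarg( $remote ), $status );

	if ( 0 !== $status ) {
		fwrite( STDERR, "Deploy failed on {$host}; stopping before the fleet diverges.\n" );
		exit( 1 );
	}
}

echo "All nodes updated.\n";
```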

Split the web and database servers easily

There is an easy way to gain some extra web processing power without having to go to multiple web servers. Split the MySQL service onto its own server, and leave the web server to handle just PHP, nginx and so on.

With the services separated, each box carries a lighter load, so when you get a spike the web and MySQL services aren’t competing for the same resources and one can’t jam up everything.
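The WordPress side of the split is a one-line change. A minimal sketch, with a hypothetical private-network address for the new database box (MySQL itself still needs to listen on that interface, with port 3306 firewalled to the web server):

```php
<?php
// wp-config.php excerpt -- a sketch; the address is hypothetical.

// On a single box this was 'localhost'. Now it points at the
// dedicated MySQL server over the private network.
define( 'DB_HOST', '10.0.0.20:3306' );
```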

Don’t underestimate the single web server

The reason I originally wrote this post was the contrast between two sites I manage. Both get considerable traffic.

The multi-server setup works great: traffic can spike and we don’t even get server load alerts. But there is considerable work in making this platform behave. When I say work, I mean ongoing maintenance and capacity planning.

The single-server platform (one web server, one MySQL server) hosts the popular Australian sports news site The Roar. During the Melbourne Cup, one of the largest (if not the largest) sporting events in Australia, the server delivered about 1.13 million page views to about half a million unique visitors over three to six hours.

Because the servers are virtual hardware, we were able to scale up the resources a day before the event, and scale down the day after.

The servers were provisioned with 16GB of RAM and 4 x quad-core vCPUs.

At the peak, Google Analytics reported just over 18,000 concurrent users, and at that point things started to slow down and monitoring warnings began to fire. Thankfully that was the top of the traffic curve, and nothing crashed. It was a refreshing reminder and showcase of what a single web server and a single MySQL server can do.

Caveats

What allowed us to get away with this much traffic on a single web server was that the majority of it was concentrated on five or so pages, so the cache hit rate would have been incredibly high. Nginx and in-memory caching would have absorbed nearly all the requests.
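To make the hit rate concrete: if anonymous traffic to those few hot pages carries even a short cache lifetime, the front-end cache collapses thousands of concurrent readers into a trickle of PHP requests. A minimal sketch of the principle (the hook is standard WordPress; the 60-second TTL is illustrative, not the site’s actual configuration):

```php
<?php
// In a theme or small plugin -- a sketch of the caching principle only.
add_action( 'template_redirect', function () {
	// Logged-in users see dynamic content; never cache them at the edge.
	if ( is_user_logged_in() ) {
		header( 'Cache-Control: private, no-cache' );
		return;
	}

	// Anonymous page views can be held by nginx / a CDN for 60 seconds.
	// With traffic concentrated on ~5 pages, that is roughly one PHP
	// request per page per minute, regardless of concurrent readers.
	header( 'Cache-Control: public, max-age=60' );
} );
```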

This setup would not have worked if those 18,000 visitors were spread across 500 pages, leaving comments and interacting with dynamic content.
