Moving your WordPress website from single-server to multi-server? Here are 4 common mistakes to watch out for
Moving your WordPress website from a single-server to multi-server setup comes with many benefits, including greater speed, security, and reliability for your business and customers. But the transition itself may bring its own set of complex issues.
Operating a website on a single server (e.g. a cPanel account, a vanilla VPS, or a managed WordPress host) lets a developer get away with a lot. A multi-server environment is far less forgiving of error, and will often expose any code design issues very quickly.
We’ve deployed numerous complex WordPress websites to multi-server environments, and have noticed a number of commonly repeated mistakes.
Here are a few things to look out for if you’re moving your WordPress website from single- to multi-server.
Writing files to a local disk
The most common issue in a multi-server environment involves shared files. Once you’re running on multiple servers, you can no longer store files on a local disk.
If a file is stored locally on disk, it will only be accessible by the webserver that created it, not by the others in the network.
Examples of this when dealing with WordPress include:
- Media uploads. By default, WordPress will store your uploads in the local wp-content/uploads directory.
- Plugin-generated files. It is common for plugins to store files locally for caching or other purposes (normally in the uploads/ directory). An example of a plugin that does this is the popular Gravity Forms.
- Theme- or custom-code-generated files. It’s often convenient to write things to file programmatically, either in the theme or via a custom plugin, but this will cause issues when the application is moved to a multi-server environment.
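As an illustration, the following hypothetical snippet shows the pattern that causes trouble: a plugin exporting data to a file under wp-content/uploads, which lands on whichever webserver happened to handle the request (the function name is made up for the example):

```php
<?php
// Hypothetical plugin code: exports form entries to a CSV on local disk.
// On a single server this works fine; on multiple servers, the file only
// exists on the webserver that handled this particular request.
function myplugin_export_entries( array $entries ) {
    $upload_dir = wp_upload_dir(); // resolves to the LOCAL wp-content/uploads
    $path       = $upload_dir['basedir'] . '/exports/entries.csv';

    wp_mkdir_p( dirname( $path ) );

    $fh = fopen( $path, 'w' );
    foreach ( $entries as $entry ) {
        fputcsv( $fh, $entry );
    }
    fclose( $fh );

    return $path; // requests routed to other webservers will 404 on this file
}
```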
How to fix it
Here are some of the solutions we’ve employed to handle this problem:
1) Use a network drive that’s shared between all webservers. This is a good solution if you want to continue storing files on disk, as if in a single-server environment. Within the Amazon Web Services (AWS) ecosystem, this is done with an Elastic File System (EFS); if you’re running your own platform, it can be achieved via technologies like NFS or GlusterFS.
- Pros: Users can continue using files as normal once the shared drive is set up. Less onboarding for your team.
- Cons: Can cause latency (delays), although this isn’t a common problem for static media and uploads.
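For example, on AWS an EFS file system can be mounted over NFS on every webserver so they all share the same uploads directory. A sketch of the relevant /etc/fstab line (the file system ID, region, and paths are placeholders; adjust for your environment):

```
# /etc/fstab — placeholder EFS ID and region
fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/www/html/wp-content/uploads nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev 0 0
```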
2) Use AWS’s S3 object storage. This solution is relatively easy to set up by installing a plugin and making some basic configurations.
- Pros: This solution is cheap and easy.
- Cons: It may not work well with every setup, and more custom code may be needed. There is also a more significant migration path and regression testing.
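As one example (assuming the WP Offload Media plugin, one of several that handle this), the bucket and credentials can be pinned in wp-config.php so the configuration travels with your deployment; the values below are placeholders:

```php
// wp-config.php — placeholder values; assumes the WP Offload Media plugin
define( 'AS3CF_SETTINGS', serialize( array(
    'provider'          => 'aws',
    'access-key-id'     => '********************',
    'secret-access-key' => '****************************************',
    'bucket'            => 'example-wp-uploads',
    'copy-to-s3'        => true, // offload new media uploads to the bucket
    'serve-from-s3'     => true, // rewrite URLs to serve media from the bucket
) ) );
```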
3) Use a Live Syncing Daemon (lsyncd). lsyncd synchronises local directories with remote targets using rsync, but it’s our least preferred option, since the approach has several flaws.
- Pros: No additional infrastructure required.
- Cons: Can be unreliable, and may have intermittent issues while files are being transferred. Reliability suffers as the size and contents of the shared directories grow. It is not suitable for more than two servers or any kind of cloud automation.
As a best practice, we prefer not to use lsyncd, but have used it in the past when circumstances required it.
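If you do end up using lsyncd, the configuration is a short Lua file. A minimal sketch for pushing uploads from one webserver to another over SSH (the hostname and paths are placeholders):

```lua
-- /etc/lsyncd/lsyncd.conf.lua — placeholder host and paths
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status",
}

sync {
    default.rsyncssh,                               -- rsync over SSH
    source    = "/var/www/html/wp-content/uploads",
    host      = "web2.example.com",
    targetdir = "/var/www/html/wp-content/uploads",
    delay     = 5,                                  -- batch changes for 5 seconds
}
```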
Using Native PHP Sessions
If you haven’t moved a WordPress website to a multi-server system before, PHP sessions are a common stumbling block.
A PHP session is a simple way to group and store data, but native session management ($_SESSION) stores the data locally on whatever webserver it’s running on. Similarly to local files, this session data won’t be shared between servers.
Generally, any code that calls session_start() will have issues when you move to a multi-server environment. This function initialises PHP’s native session system, which reads and writes session files on the local disk; a request routed to a different webserver won’t find the session data created on the first.
How to fix it
To fix this issue, have all your session data backed by the database and object cache. In the past, we have used the WP Session Manager plugin, which also provides a nice API.
The WordPress core development team does something similar, storing session data (login tokens, etc.) in the database.
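Under the hood, plugins like WP Session Manager replace PHP’s file-based handler. A simplified sketch of the idea, using PHP’s SessionHandlerInterface backed by the WordPress object cache (error handling and expiry management omitted; this only works once the object cache itself is persistent and shared, as discussed in the next section):

```php
<?php
// Sketch only: a session handler backed by the shared object cache,
// so session data is visible to every webserver.
class Shared_Cache_Session_Handler implements SessionHandlerInterface {
    public function open( string $path, string $name ): bool { return true; }
    public function close(): bool { return true; }

    public function read( string $id ): string|false {
        $data = wp_cache_get( "session_{$id}", 'sessions' );
        return false === $data ? '' : $data;
    }

    public function write( string $id, string $data ): bool {
        return wp_cache_set( "session_{$id}", $data, 'sessions', HOUR_IN_SECONDS );
    }

    public function destroy( string $id ): bool {
        return wp_cache_delete( "session_{$id}", 'sessions' );
    }

    public function gc( int $max_lifetime ): int|false {
        return 0; // the cache backend expires entries itself
    }
}

session_set_save_handler( new Shared_Cache_Session_Handler(), true );
session_start();
```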
Object and Page Caching
Caching objects and pages in WordPress is more problematic in a multi-server environment, compared to websites run on a single-server. We’ll break down the unique problems (and solutions) for both.
WordPress Object Cache
Object caches store the results of repeated queries, and are typically not persistent (unlike, say, cached images on disk). The cached object data prevents unnecessary queries (requests) from hitting your database servers.
If you use WordPress without an object-cache.php drop-in installed, every query will go to your database, increasing load and response times on your database server.
On single-server environments, you can use an APC object cache backend, which is simple to set up, performs well, and doesn’t require any additional infrastructure. However, it doesn’t work in multi-server environments, since cached data can’t be shared between webservers.
How to fix it
One solution we’ve used is to implement a distributed object caching system. We have experience with both Memcached and Redis. In the AWS ecosystem, these services can be provisioned relatively easily using Amazon’s ElastiCache.
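For example, with a Redis-backed object-cache.php drop-in (we’re assuming the popular Redis Object Cache plugin here), pointing WordPress at a shared ElastiCache endpoint is a couple of wp-config.php constants; the hostname below is a placeholder:

```php
// wp-config.php — placeholder endpoint; assumes the Redis Object Cache plugin
define( 'WP_REDIS_HOST', 'my-cluster.abc123.0001.use1.cache.amazonaws.com' );
define( 'WP_REDIS_PORT', 6379 );
```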
WordPress Page Cache
Page caching tools like WP Super Cache store your page output (the HTML your site generates for each page) as static files, so WordPress doesn’t have to regenerate the same page content every time someone loads it.
Using a static page caching tool such as WP Super Cache can become unreliable or behave erratically on a multi-server setup. For example, some users may be served stale content from an old cache, and you must ensure any page cache purges propagate to all servers.
How to fix it
Ideally, any page cache plugin you use should store cached data in the object cache or database (object cache preferred). We like to use Batcache, which is used on WordPress.com, the largest WordPress deployment there is. It also has minimal overhead and is tried and tested in high-performance applications.
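Batcache ships as an advanced-cache.php drop-in configured via a $batcache array, enabled by the WP_CACHE constant. A sketch of a typical setup (the values are illustrative, not recommendations):

```php
// wp-config.php
define( 'WP_CACHE', true );

// wp-content/advanced-cache.php (top of the Batcache drop-in) — illustrative values
$batcache = array(
    'max_age' => 300, // serve cached pages for up to 5 minutes
    'times'   => 2,   // cache a page once it has been requested twice...
    'seconds' => 120, // ...within 120 seconds
);
```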
Parallel Code Deployment
Parallel code deployment means new code and features are rolled out to every webserver simultaneously, so all users see the same version of your website under the same URL.
Deploying code and releasing an update to a site must be carefully orchestrated, so that all webservers and users see the same code at the same time. Releasing code doesn’t have to be overly complex as long as it’s set up correctly.
Tools like AWS CodeDeploy provide ways to automate and release code to servers all at once. Other simple tools like DeployBot or DeployHQ allow you to connect a repository and a pool of servers with deployment targets.
We have often used Capistrano, a remote Ruby-based server automation and deployment tool. It pulls the updated code from the git repository, builds a new release directory on each server, then switches a symlink over to the newest version of the site instantly, ensuring no downtime or partial uploads. This also makes rollbacks significantly faster and safer.
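A minimal Capistrano config/deploy.rb sketch (the application name, repository, and paths are placeholders): linked_dirs keeps the shared uploads directory outside each release, and the current symlink is what flips atomically on deploy.

```ruby
# config/deploy.rb — placeholder application, repository, and paths
set :application, "example-wp"
set :repo_url,    "git@github.com:example/example-wp.git"
set :deploy_to,   "/var/www/example-wp"

# shared between releases rather than re-deployed each time
set :linked_dirs,  %w[wp-content/uploads]
set :linked_files, %w[wp-config.php]

set :keep_releases, 5 # keep old releases around for fast rollbacks
```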
Another thing to look out for is progressive uploads, where a deploy service slowly uploads files one at a time (often via S/FTP), which can cause PHP compile errors while a release is in flight. Imagine updating WordPress core, where there could easily be 100 changed files. All 100 need to go live at the same time; failing that almost guarantees a partially broken site and downtime.