Many thanks to everyone who commented on my recent article and said they’d be interested in a series of posts about more server-oriented PHP topics. There were quite a few requests for a “ten point”-type article introducing the subject, so that seems to be a good place to kick things off.

Building Blocks

There are six classic ways to group and organise the servers that your web-based application runs on.

Shared Hosting

Shared hosting, as the name implies, is where many different websites (normally owned by many different people) are crammed onto the same physical box. The upside is that it's very cheap (because you're not paying for an entire server, only a slice of one), but the downsides are that you can quickly outgrow a shared hosting server, and shared servers are difficult to make really secure. Use shared hosting when you have to, but do your best to trade up to something that you can have all to yourself whenever you can.

Dedicated Server

A dedicated server is a box all to yourself. So long as you can keep the bad guys outside, you have more peace of mind when it comes to security. They cost a bit more than shared hosting, but there are many affordable solutions both in Europe and the US.

A PHP application normally needs no changes at all to move from shared hosting to a dedicated server.

What might need to change is the way you look after the installed application. Unless you go for a managed server, or you are using a server looked after by your customer, it will become your responsibility to look after the operating system installed on the server. You will be responsible for ensuring the server is patched with the latest security fixes. This can eat up quite a bit of time every week, so make sure your customer is paying for this time one way or another!

Two-Tier Architecture

When you outgrow a single server, the next step is to consider moving to a separate database server and web server, also known as a two-tier architecture. This is a popular choice because it’s quick and very painless (just order an additional box, and then move your database onto it), and it buys you the time you need to prepare for scaling up to the next architecture. Even on commodity Intel/AMD servers (I’m talking Xeons and Opterons here, not desktop CPUs!), a two-tier architecture is often enough to handle a public website for a medium-to-large organisation.

The only change a PHP application should need is to update the hostname passed to mysql_connect() et al. Consider moving any background batch jobs you have onto the database server, to further reduce the load on the web server.
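If the connection settings live in one central include file, moving to two tiers really is a one-line change. A minimal sketch, assuming a hypothetical `db_settings()` helper and `DB_HOST` environment variable (period code would call `mysql_connect()`; `mysqli_connect()` is its successor):

```php
<?php
// db.php - one central place for connection settings, so moving the
// database onto its own box means changing a single value (hypothetical
// helper names; many apps keep these in a config file instead).
function db_settings() {
    $host = getenv('DB_HOST');
    return array(
        'host' => ($host !== false && $host !== '') ? $host : 'localhost',
        'user' => 'webapp',
        'pass' => 'secret',
        'name' => 'website',
    );
}

function db_connect() {
    $s = db_settings();
    // Period code: mysql_connect($s['host'], $s['user'], $s['pass']);
    // mysqli_connect() is the modern equivalent.
    return mysqli_connect($s['host'], $s['user'], $s['pass'], $s['name']);
}
```

When the database moves to its own box, only `DB_HOST` (or the one line in the config file) changes; none of the application code needs touching.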

It’s a good idea at this point to split your PHP application into separate publishing and admin components, so that you can move the admin website onto the database server. Splitting these up allows the admin site to function well when the main website is being hit hard, but to make it work you’ve got to start thinking about how to share data between the publisher and admin components – the data that your website publishes, and your sessions too.
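PHP lets you replace its default session storage via `session_set_save_handler()`, which is the usual way to share sessions between the publisher and admin components. A minimal sketch, using a shared directory (an NFS mount, say) as a stand-in for the shared store; a table on the shared database server is the more common choice and works the same way:

```php
<?php
// Point both the publisher and admin sites at the same session store.
// SESSION_DIR is a hypothetical shared path; swap the file operations
// for database queries to store sessions on the shared database server.
define('SESSION_DIR', sys_get_temp_dir() . '/shared_sessions');

function sess_open($save_path, $session_name) {
    if (!is_dir(SESSION_DIR)) {
        mkdir(SESSION_DIR, 0700, true);
    }
    return true;
}
function sess_close() { return true; }
function sess_read($id) {
    $file = SESSION_DIR . '/' . basename($id);
    return file_exists($file) ? file_get_contents($file) : '';
}
function sess_write($id, $data) {
    return file_put_contents(SESSION_DIR . '/' . basename($id), $data) !== false;
}
function sess_destroy($id) {
    $file = SESSION_DIR . '/' . basename($id);
    if (file_exists($file)) {
        unlink($file);
    }
    return true;
}
function sess_gc($max_lifetime) {
    foreach (glob(SESSION_DIR . '/*') as $file) {
        if (time() - filemtime($file) > $max_lifetime) {
            unlink($file);
        }
    }
    return true;
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
```

With both sites registering the same handler, a session started on the publisher is visible to the admin site and vice versa.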

Once you’ve outgrown the two-tier architecture, you can add more capacity on the publishing side by moving to a web farm, and you can make your database server more resilient by upgrading to a cluster. If you don’t mind the extra complexity and the reworking involved, you can also scale further by moving to an n-tier architecture.

Web Farm

Your web servers should run out of spare CPU and RAM before your database server does. When they do, you can scale your website horizontally by adding more and more web servers running the publisher component, creating a web farm. To split traffic up between all the publishing servers, we normally use a load balancer, which your hosting provider should be able to supply. Web farms also have the advantage of giving your customer resilience: if one box in the farm fails, the other servers should be able to carry on just fine until you’ve fixed the fault.

Because you’ve already split your PHP application into separate publisher and admin components, you shouldn’t need to make many more changes to support web farms. Most of the changes will be to further optimize your application, especially by using more and more caching to make sure the database server doesn’t get overloaded!
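Cache-aside is the usual pattern here: check the cache first, and only run the expensive query on a miss. A minimal file-based sketch (the function names are hypothetical; across a real web farm you would more likely use something like memcached so that all the web servers share one cache):

```php
<?php
// Simple cache-aside helpers (hypothetical names). Each web server keeps
// its own file cache here; swap the storage layer for memcached to share
// one cache across the whole farm.
function cache_get($key, $ttl_seconds) {
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    if (file_exists($file) && (time() - filemtime($file)) < $ttl_seconds) {
        return unserialize(file_get_contents($file));
    }
    return false; // cache miss (or entry expired)
}

function cache_set($key, $value) {
    file_put_contents(sys_get_temp_dir() . '/cache_' . md5($key),
                      serialize($value));
}

// Typical usage on a page:
// $articles = cache_get('homepage_articles', 60);
// if ($articles === false) {
//     $articles = load_articles_from_database(); // the expensive query
//     cache_set('homepage_articles', $articles);
// }
```

Even a 60-second cache on a busy home page can turn thousands of database queries a minute into one.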

Clusters – Active/Active and Active/Passive

Many customers aren’t happy running on a single database server. Hardware does fail. If you lose a box from the web farm, your website might seem a bit slow for a while, but the remaining boxes should continue to publish your website. If you lose your database server, chances are that your website will be down until the hardware has been fixed. That can take hours or even days.

The classic way of making single points of failure like the database server (and legacy web applications) more resilient is to add a hot-swap box. One box is the primary box (the active box), and the other box is the fail-over box (the passive box). They share the same disks (either by direct-attached storage or network-attached storage), and their health is monitored via a heartbeat. When the primary box fails, its heartbeat stops. The fail-over box spots this, starts up its own database server, and becomes the primary box. (Manual fail-over, where someone has to start up the database server by hand, is also a common option here!) Two boxes working in this way are known as an active/passive cluster. They are two machines which are clustered together to appear as one machine to your web servers.
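The heartbeat can be as simple as the passive box polling the active box's database port. An illustrative sketch only (the host name and the promote step are hypothetical, and real clustering software such as Linux-HA's Heartbeat does this far more robustly):

```php
<?php
// Crude heartbeat check (illustrative only): can we open a TCP connection
// to the active box's MySQL port within the timeout?
function primary_alive($host, $port = 3306, $timeout_seconds = 2) {
    $conn = @fsockopen($host, $port, $errno, $errstr, $timeout_seconds);
    if ($conn === false) {
        return false; // no answer: the primary may be down
    }
    fclose($conn);
    return true;
}

// Run from cron on the passive box, something like:
// if (!primary_alive('db-active.example.com')) {
//     promote_to_primary(); // take over the shared disks, start MySQL here
// }
```

In practice you would require several consecutive failed checks before failing over, to avoid promoting the passive box on a momentary network blip.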

Unfortunately, when your customer finds out that they’re paying for a box that’s effectively doing nothing most of the time, they tend to get upset about it. They often feel better if both machines are doing something, so some database solutions allow you to run the database server on both boxes at the same time. If one box fails, the other box continues to process database requests until the faulty machine is brought back into service. This is known as active/active clustering, and again it makes two servers which are clustered together appear as one machine to your web servers.

Active/active and active/passive clusters are old-school scalability solutions from the days of vertical scaling, when you got more power by buying a bigger box. (Some boxes are so big, they come with their own team of onsite engineers, and a price tag to match). In the web platform, active/active has been replaced by scaling horizontally through web farms … but active/passive is still a good solution for making your database resilient without having to take things to the next level by splitting things up into write-masters and read-only slaves.

Three-Tier and n-Tier Architectures

Another classic approach from the old-school book of tricks is to split your code across three or more levels of boxes, known as three-tier architectures and n-tier architectures. The idea is to cut down on the amount of work that needs to be done on your web servers, by moving some of the work off onto another set of boxes. It’s a solution popular in the Java world (and, I’d argue, part of the definition of “enterprise”-y software), but it also has a place in the PHP world.

Moving your code to run in an n-tier architecture takes time and serious commitment, but if your website is very popular, then it’s worth considering. It allows you to optimize each group of boxes to do one type of work very quickly (for example, having a group of servers just for downloading static images), and it gives you the freedom to use different sizes of hardware for each group. It doesn’t just apply to your web-based application; you can scale the database servers by moving to an n-tier architecture too, something that MySQL in particular is designed to do well.

Summing Up

There are six classic ways to group the servers that your web-based applications run on. Many PHP shops start small, and scale up their applications when necessary. However popular your customer’s website becomes, any web-based application written in PHP can be refactored to run on any of these classic architectures, allowing you to grow with the demand that you’re experiencing.

This article is part of The Web Platform, an on-going series of blog posts about the environment that you need to create and nurture to run your web-based application in. If you have any topics that you’d like to see covered in future articles, please leave them in the comments on this page.

Comments

  1. Stu says:
    October 15th, 2007 at 8:34 am


  2. Alex@Net says:
    October 15th, 2007 at 10:26 am


    I’d like to have more info about performance and security tips in shared hosting environment and some general info on Web Farm and others.


  3. James says:
    October 15th, 2007 at 7:25 pm

    Great article Stu. One question – are the performance implications of running a web server (albeit mostly idle) on the DB box in a two-tier setup for the admin worth considering?

  4. Stu says:
    October 15th, 2007 at 8:08 pm

    Thanks everyone.

    James, the performance implications are worth considering. If the admin function causes a lot of disk i/o, for example, or is unusually memory-intensive, that can stop the database server software from running well.

    Best regards,

  5. Null is Love » Blog Archive » Six Scaling Server Set-Ups says:
    October 16th, 2007 at 9:38 pm

    […] Herbert has posted six ways to group and organise your web servers. He starts small with simple shared hosting and scales upward to web farms, clusters and n-Tier […]

  6. The Challenge With Securing Shared Hosting | Stu On PHP says:
    November 21st, 2007 at 9:04 am

    […] thanks to everyone for their feedback on my first post in this […]

  7. Jonathan Street says:
    November 23rd, 2007 at 5:55 pm

    I didn’t get a chance to read this when it first came out, and then I had some trouble connecting to your server, so I’m glad you published a second entry in this series as it prompted me to return.

    It looks like a nice start. I was a bit confused by one bit, though:

    “but to make it work you’ve got to start thinking about how to share data between the publisher and admin components – the data that your website publishes, and your sessions too.”

    I don’t understand why you would need to share sessions. You wouldn’t need to share sessions for the users because they should never interact directly with the database/admin server. You could share admin sessions between both servers, but you wouldn’t really *need* to do so.

    Am I missing something?
