13 Steps to peace of mind

If you’re uptight about uptime, if you’re anxious about availability, if you’re nervously watching Nagios, have I got the program for you. Forget 12-step programs, mine has 13!

First let’s talk load balancing. Load balancing is basically distributing some type of load across multiple servers or resources. For the sake of simplicity, we’ll assume that it’s web requests to web servers that we’re talking about, but it’s all equally applicable to databases, mail servers, or just about any other network accessed computing resource (assuming you can handle the back-end synchronization if required, such as with a database).

There are two primary reasons to load balance:

  1. Capacity: one server may not have enough power or resources to handle all of the requests, so you need to break the load out across multiple servers.
  2. Redundancy: relying on a single server means that if it fails, everything is down. A single point of failure can bring down your critical application. Servers fail, in both hardware and software. A blown power supply, bad RAM, a dead NIC: eventually something will go out. No software is 100% bulletproof either. If you have two or more servers handling the load, then even if one fails, the other(s) can take over without your application or users seeing any difficulties.

A load balancer sits between your users (human or machine) and your servers. It typically passes requests through to multiple servers and returns the responses to the end user, acting like a transparent proxy. It can distribute load in many different ways: randomly, round-robin, or via more complex strategies. It usually monitors the servers so that if a server dies it is removed from the pool and no user requests are sent there.

[Diagram: a load balancer sitting between users and a pool of servers]