Red Hat has put a lot of effort into making all types of clustering, from load balancing to shared storage and high availability, a core part of its Linux offering. Its solution for a load-balancing cluster with failover is "Linux Virtual Server" (LVS) combined with "pulse" as the failover manager and "nanny" for cluster membership management. Personally I don't much like pulse and nanny, mainly because no other distribution uses them, but Red Hat's load-balancing solution has one advantage: it can be (almost) completely configured from a web GUI. The GUI is called Piranha and can be installed with "yum install piranha".
In a production system you will need two load-balancers and at least two "real servers" to balance the load across, i.e. a basic two-node cluster. We use two load-balancers to avoid a single point of failure; they will be configured in an active-passive arrangement. You should install piranha on both load-balancers. Once installed, you will need to set a password for the "piranha" web GUI user.
Then start up the Piranha GUI service.
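On a RHEL-based system the password tool and the GUI service are typically named as below; a quick sketch assuming the standard piranha package layout:

```shell
# Set the password for the "piranha" web GUI user
# (the tool ships as /usr/sbin/piranha-passwd in the piranha package)
piranha-passwd

# Start the web GUI (listens on port 3636 by default)
service piranha-gui start

# Optionally, have it start at boot
chkconfig piranha-gui on
```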
Connect to the Piranha front-end by opening a browser at "http://<server ip>:3636" and log in with the username piranha and the password you set. From here on we configure only the primary load-balancer; the last step will be to copy the configuration over to the secondary load-balancer to complete its setup.
Load Balancer Global Settings
The interface is really simple to use. The first screen to configure is "Global Settings". Here you only need to set the primary public IP address of the primary load-balancer. In this example I use a non-routable address range for the public IP. You can leave the private IP empty, or use it if you have a spare network card dedicated to the backup load-balancer, i.e. a redundant network link between the two failover devices. It is also in "Global Settings" that you choose the type of LVS routing you are going to use: NAT, Direct Routing or Tunnelling; see my previous post on these options.
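For reference, the choices made under "Global Settings" end up in the /etc/ha.d/lvs.cf configuration file. A sketch of what the global section might look like, assuming a primary load-balancer at 192.168.1.10 and NAT routing (the addresses and NAT router entry are examples only):

```
serial_no = 1
primary = 192.168.1.10
service = lvs
network = nat
nat_router = 10.0.0.1 eth1:1
debug_level = NONE
```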
Configure Your Linux Virtual Server
Next you need to configure your "Virtual Servers". Click on the virtual server tab and then "add". These are the settings the GUI passes to the ipvsadm utility to set up your Linux Virtual Server (LVS); the equivalent command-line entries are described in my other post on Linux load balancing. First you set up the virtual server itself, i.e. you specify the virtual IP (VIP) address. This is the address that external clients will use to reach your web farm. Don't forget to activate the virtual server when you are finished, otherwise you won't be able to start the pulse service! In this example I assign a second IP to the Ethernet card (eth0:1) as I was using virtual machines for this demo. In a production environment you might use a separate network card or make it an alias on the primary IP's device.
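Under the hood this step is equivalent to bringing up the VIP and defining the virtual service with ipvsadm. A rough sketch, assuming a VIP of 192.168.1.100 serving HTTP with the weighted least-connection scheduler (the address and scheduler choice are illustrative, not from the GUI defaults):

```shell
# Bring up the VIP as a secondary address on eth0 (the eth0:1 alias)
ip addr add 192.168.1.100/24 dev eth0 label eth0:1

# Define the virtual service: TCP on port 80, wlc scheduler
ipvsadm -A -t 192.168.1.100:80 -s wlc
```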
Add Real Servers
After that you need to set up your "real servers", which are the servers in the farm that actually handle the requests sent to the virtual server. Click the "add" button.
These are the details of the real server, pretty self-explanatory.
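Each real server added in the GUI corresponds to an "ipvsadm -a" entry against the virtual service. A sketch assuming a VIP of 192.168.1.100, a real server at 192.168.1.20 and NAT (masquerading) forwarding; all addresses are examples:

```shell
# Add real server 192.168.1.20 behind the VIP: NAT mode (-m), weight 1
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.20:80 -m -w 1

# For Direct Routing use -g instead of -m; for tunnelling use -i
```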
Setup Redundant Load Balancer
Now you can set up the backup load-balancer. This is the front-end load-balancer that will take over if the primary load-balancer (the server you are currently configuring) fails. Click on the "Redundancy" tab and enter the IP details of the backup load-balancer.
Once done, copy the configuration file /etc/ha.d/lvs.cf over to the backup load-balancer. On both servers start the pulse daemon with /etc/init.d/pulse start. You can check the result with "ipvsadm -l", "ipvsadm -l --stats" or "ipvsadm -l -c", or via the "Control/Monitoring" tab. It will take several seconds to initialise.
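The steps above can be sketched as follows; "backup-lb" is a placeholder hostname for your backup load-balancer:

```shell
# On the primary: copy the configuration to the backup load-balancer
scp /etc/ha.d/lvs.cf backup-lb:/etc/ha.d/lvs.cf

# On both load-balancers: start the pulse daemon
service pulse start

# Once pulse has initialised, inspect the IPVS table and counters
ipvsadm -l --stats
```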
Naturally, the real servers will need to be configured to run the service you are virtualising. You can test failover by stopping the network on the active load-balancer: you should see the virtual IP transferred to the backup load-balancer. Check this by running "ip a l" or tailing the last few lines of /var/log/messages. Restarting the primary load-balancer will result in the VIP being transferred back.
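To make the failover check concrete, assuming the example VIP of 192.168.1.100 used earlier:

```shell
# On the backup load-balancer, after stopping networking on the primary,
# the VIP should appear as a secondary address on its interface
ip a l | grep 192.168.1.100

# Watch pulse take over (and later release the VIP) in the logs
tail -f /var/log/messages
```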
And that's all one needs to do to configure load balancing with failover in Red Hat!