Clustering HOWTO

The load balanced cluster I have built was fairly straightforward to configure and get up and running. The standard Enterprise 9 software, installed with the "High Availability" option checked, will install all the software for you.

The heartbeat packages installed are the following:

heartbeat-stonith-1.2.2-0.6
heartbeat-ldirectord-1.2.2-0.6
yast2-heartbeat-2.9.3-0.2
heartbeat-pils-1.2.2-0.6
heartbeat-1.2.2-0.6
Although I am sure not all of them are required; we will need to check dependencies etc. on another server if required.

The principle is the following.

Set up a virtual IP address on every machine within the load balancer; this VIP becomes the IP address used in the cluster to maintain availability.

On the real servers serving the web pages the IP address is configured using an IP tunnel, and the device is configured so that it does not respond to any requests directly on this IP address. This is because the server needs the IP tunnel to serve requests sent to it on the VIP, but it must not announce or answer for that address itself on the network. Failure to set up the tunnel on the real server will result in the IP requests being dropped, as the real server doesn't believe the requests are destined for it.

The script to set this up is as follows and needs to be run at boot. For testing I have hard coded this script into the boot.local file located in "/etc/rc.d", but we can write an init script to do it properly.
---------------------------------------------

#!/bin/sh

# load the IPIP tunnelling module
modprobe ipip

# bring the tunnel device up without an address yet
ifconfig tunl0 0.0.0.0 up

# stop the tunnel device (and the box as a whole) answering ARP for the VIP
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_filter
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore

echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

# bind the VIP to the tunnel device
ifconfig tunl0 192.168.0.148 netmask 255.255.255.255 broadcast 192.168.0.148 up

---------------------------------------------

The VIP in this case is the address 192.168.0.148; the echo commands ensure the device tunl0 doesn't respond to ARP or other requests for the VIP on the network.
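To confirm the tunnel on a real server came up as expected, a quick check along these lines can be run (just a sketch; it assumes the script above has already been executed):
---------------------------------------------

#!/bin/sh

# show the tunnel device and the VIP bound to it
ifconfig tunl0

# confirm the ARP suppression flags took effect (both should print 1)
cat /proc/sys/net/ipv4/conf/tunl0/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_ignore

---------------------------------------------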

If you need to use multiple addresses this script can easily be modified to accommodate multiple VIPs as follows.
---------------------------------------------

#!/bin/sh

modprobe ipip

ifconfig tunl0 0.0.0.0 up
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_filter
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore

echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

ifconfig tunl0 192.168.0.148 netmask 255.255.255.255 broadcast 192.168.0.148 up

ifconfig tunl0:1 0.0.0.0 up
#echo 1 > /proc/sys/net/ipv4/conf/tunl1/arp_announce
#echo 1 > /proc/sys/net/ipv4/conf/tunl1/arp_filter
#echo 1 > /proc/sys/net/ipv4/conf/tunl1/arp_ignore

echo 1 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore

ifconfig tunl0:1 192.168.0.149 netmask 255.255.255.255 broadcast 192.168.0.149 up

---------------------------------------------

The setup will be identical for every real server. Setting this up doesn't mean you will be able to "ping" the real server on the VIP, so it will be difficult to test at this stage. Once the balancer (director) is configured, you can use tcpdump and the log files to ensure it is working correctly.
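For example, once the director is forwarding traffic, a capture like the following on a real server should show the encapsulated packets arriving and the requests for the VIP being handled (a rough sketch; "eth0" is assumed to be the real server's LAN interface):
---------------------------------------------

#!/bin/sh

# capture ten IPIP encapsulated packets arriving from the director
tcpdump -c 10 -n -i eth0 ip proto 4

# capture ten of the decapsulated HTTP requests addressed to the VIP
tcpdump -c 10 -n -i tunl0 host 192.168.0.148 and port 80

---------------------------------------------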

The director requires a script in a similar way to the above. It is possible to use iptables to redirect TCP packets instead, but it is more complex and becomes difficult to manage with multiple VIPs.
---------------------------------------------

#!/bin/sh

# Virtual services for VIP 192.168.0.148

/sbin/ipvsadm -A -t 192.168.0.148:80 -s rr
/sbin/ipvsadm -A -t 192.168.0.148:ssh -s rr

# Real servers for VIP 192.168.0.148

/sbin/ipvsadm -a -t 192.168.0.148:80 -r 192.168.0.145 -i -w 3
/sbin/ipvsadm -a -t 192.168.0.148:ssh -r 192.168.0.145 -i -w 3

/sbin/ipvsadm -a -t 192.168.0.148:80 -r 192.168.0.151 -i -w 3
/sbin/ipvsadm -a -t 192.168.0.148:ssh -r 192.168.0.151 -i -w 3

---------------------------------------------

The first two lines of the script above define the virtual services: the VIP and the ports / services to redirect.
These first two lines also set the scheduler, i.e. what type of balancing this is. The "-s rr" option indicates a "Round Robin" approach, where the director distributes jobs equally amongst the available real servers. The other schedulers available include:
wrr - Weighted Round-Robin
lc  - Least-Connection: assigns more jobs to real servers with fewer active jobs
wlc - Weighted Least-Connection: assigns more jobs to servers with fewer jobs, relative to the real servers' weight (Ci/Wi). This is the default.
nq  - Never Queue: assigns an incoming job to an idle server

and others; see the ipvsadm man page.
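For instance, the scheduler on an existing virtual service can be changed afterwards with ipvsadm's edit option (a small sketch using the VIP from above):
---------------------------------------------

#!/bin/sh

# switch the HTTP virtual service from round robin to weighted least-connection
/sbin/ipvsadm -E -t 192.168.0.148:80 -s wlc

---------------------------------------------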

The last set of lines sets up the redirect, in this case to two real servers on the IPs 192.168.0.145 and 192.168.0.151. It also assigns a weight to each of the real addresses, so one server can be preferred over the other.
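The resulting table, real servers and weights can be inspected, and a weight adjusted by hand if need be, roughly as follows:
---------------------------------------------

#!/bin/sh

# list the current virtual services, real servers and weights
/sbin/ipvsadm -L -n

# temporarily take 192.168.0.145 out of rotation by setting its weight to 0
/sbin/ipvsadm -e -t 192.168.0.148:80 -r 192.168.0.145 -i -w 0

---------------------------------------------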

The interface for the VIP is not set up in this file, but is set up by heartbeat, so that you can cluster the director with a slave machine.

The clustering is set up very simply, and is controlled by the following files.

/etc/ha.d/haresources
/etc/ha.d/ha.cf
/etc/ha.d/authkeys

The authkeys file contains the shared key used to secure the heartbeat traffic between the two servers. I would advise using two network interfaces, so that the heartbeat can run over a private network; that way it will be harder for anyone snooping to see the heartbeats, and it also means you can have multiple heartbeat pairs on the same subnet.
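A minimal authkeys file looks something like the following (the shared secret here is just a placeholder, and the file must be readable by root only, i.e. mode 600):
---------------------------------------------

auth 1
1 sha1 SomeSharedSecret

---------------------------------------------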

The haresources file indicates the resources that will be used when the cluster starts:

---------------------------------------------
plate 192.168.0.148 ldirectord
---------------------------------------------

This must contain the node name given by "uname -n" on the master server, followed by the VIP and the service started. ldirectord is the main monitoring service for the load balancer; it may also be better to add in the service that the real servers are running, details below. If you wish to monitor multiple VIPs, modify the file as below.

---------------------------------------------
plate 192.168.0.148 192.168.0.149 ldirectord
---------------------------------------------

The ha.cf file is fairly self explanatory, and shouldn't be modified after initial setup. This file defines the timeouts, which interface to monitor, and other important settings for heartbeat. It's well documented and easy to set up.
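As a rough example, a minimal ha.cf for this setup might look like the following; the node names must match "uname -n" on each machine, "bowl" is just a placeholder for the secondary, and eth1 is assumed to be the private heartbeat interface:
---------------------------------------------

# timeouts
keepalive 2
deadtime 30
warntime 10
initdead 120

# heartbeat transport over the private interface
udpport 694
bcast eth1

# fail back to the primary when it returns
auto_failback on

logfile /var/log/ha-log

# cluster nodes, as reported by "uname -n"
node plate
node bowl

---------------------------------------------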

All these files should be identical on both the primary and the secondary; if they are not, the cluster will break.
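One easy way to keep them in sync is simply to copy them from the primary to the secondary (the hostname "bowl" for the secondary is assumed here):
---------------------------------------------

#!/bin/sh

# copy the heartbeat configuration to the secondary director
scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys bowl:/etc/ha.d/

---------------------------------------------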

Additional services.

The ldirectord service will monitor a given service depending on how it's configured in the following file:

/etc/ha.d/conf/ldirectord.cf
---------------------------------------------

# Global Directives
checktimeout=3
checkinterval=1
fallback=127.0.0.1:80
autoreload=yes
logfile="/var/log/ldirectord.log"
#logfile="local0"
quiescent=yes

# A sample virtual with a fallback that will override the global setting
virtual=192.168.0.148:80
        real=192.168.0.145:80 ipip 6
        real=192.168.0.151:80 ipip 6
        fallback=127.0.0.1:80
        service=http
        request="index.html"
        receive="Working Fine"
#       virtualhost=some.domain.com.au
        scheduler=wlc
        persistent=120
        netmask=255.255.255.0
        protocol=tcp

---------------------------------------------

In this case the file is only monitoring http. It will request the page indicated here and expect a given result; if it receives that result it will add / update the entry in the IP Virtual Server table set up by the custom balancer script above. If it fails the test it will reduce the weight of the server to 0, so that it is not used when requests are made to the VIP.

When the correct response is received again it will increase the weight, bringing the real server back into operation.
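For this check to pass, each real server needs an index.html whose content contains the expected string, for example (the document root shown is the SUSE default and may differ on your install):
---------------------------------------------

#!/bin/sh

# create the page ldirectord requests on each real server
echo "Working Fine" > /srv/www/htdocs/index.html

---------------------------------------------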

Be cautious: the "scheduler" setting here must match the scheduler ("-s") option given in the balancer script explained above (so in the examples above, the "rr" and "wlc" settings would need to be brought into line).

And that's it, let me know if there are any questions.
