LVS persistence and AOL proxies

LVS has simple IP-based persistence built in that can be used to keep a user on the same real server for a configurable amount of time. This was explained in my previous post, and it works fine, but in real life users will connect from various dynamic connections or even browse the internet through ISP proxy servers. For such situations LVS supports a configurable persistence netmask, allowing us to widen the network mask used in the persistence match (normally we would use /24 for this), so that a larger range of IPs is sent to the same real server. This approach works fine for most cases, where users are allocated IPs from the same class C or the ISP proxies sit in the same network range. Unfortunately it does not work for AOL, because AOL clients are always proxied by the huge AOL proxy cluster, which can send each request from a different real IP. These IPs are not even from the same range and tend to be completely different. This post shows how we can keep these AOL users on the same real server in an LVS-DR setup.
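
As a reminder, a persistent service definition with a widened netmask looks roughly like this (a minimal sketch only; VIP, port 80, the wrr scheduler and the RS1/RS2 real servers are placeholders for your own setup, and -M sets the persistence granularity):

# persistent virtual service, grouping clients by /24 instead of per-IP
ipvsadm -A -t VIP:80 -s wrr -p 3600 -M 255.255.255.0
ipvsadm -a -t VIP:80 -r RS1 -g
ipvsadm -a -t VIP:80 -r RS2 -g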

Normally, had this been a small ISP, I am sure people would have simply ignored their users, the users would have complained back to the ISP that they can't reach some big sites, and in the end the ISP would have found a friendlier solution. Since this is AOL, with its huge client base, we can't really ignore them and we have to find a solution ourselves.

The best long-term solution is to rewrite your application so it is no longer persistence-dependent (requiring each user request to be processed by the same real server). Unfortunately this is not always possible, or it can take a long time to complete, meaning you need a work-around in the meantime.

As shown in my previous post, we can increase the network mask to keep a bigger range of IPs on the same real server, assuming the remote user IPs are at least in the same range. This might help for smaller proxy networks, but for AOL it will not be very useful, as requests arrive from completely different IPs. Without being able to make decisions based on the content of the packet, LVS cannot solve this issue on its own. The solution presented here is a work-around and, to my knowledge, the only one possible when using LVS: we mark all traffic coming from the AOL proxy IPs with iptables and then create a separate virtual service for it using FWMARK. Obviously this breaks any balancing and sends all AOL users to a single real server.
Note: if your AOL traffic is too big for one server to handle, then this solution will not work for you and you will need a higher-level (L7) proxy to make persistence work for AOL users.

AOL publishes a list of their proxy IPs at http://webmaster.info.aol.com/proxyinfo.html. Here is a sample bash script that marks all packets coming from the AOL proxies so they can be handled by one real server (I just aggregated a few ranges to keep the list shorter ;-) ):

#!/bin/bash

# flush any previous rules from the mangle table
iptables -t mangle -F

# AOL proxy networks (check the AOL proxy info page for the current list)
AOLPROXYS="64.12.0.0/16 149.174.160.0/20 \
152.163.0.0/16 195.93.0.0/18 195.93.64.0/18 198.81.0.0/16 \
202.67.64.128/25 205.188.0.0/16 207.200.112.0/21"

# mark HTTP traffic from each AOL range destined for the VIP with fwmark 1
for aolproxys in $AOLPROXYS
do
    iptables -t mangle -A PREROUTING -p tcp -s $aolproxys -d VIP/32 \
        --dport 80 -j MARK --set-mark 1
done
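
To confirm the marks are being applied, you can watch the packet counters on these rules (this assumes only that the script above has been loaded and that AOL traffic is arriving):

iptables -t mangle -L PREROUTING -n -v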

And the IPVS service definition for fwmark 1:

ipvsadm -A -f 1 -s wrr -p 3600
ipvsadm -a -f 1 -r RS1 -g
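
You can verify that the fwmark virtual service and its real server were created with:

ipvsadm -L -n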

With this in place, all AOL traffic goes to RealServer1. If you are using ldirectord you will probably want to set up another server as a backup in case RS1 goes down. You still have to mark the traffic using iptables as shown above; the ldirectord.cf service definition should look like this:

virtual=1
    real=RS1:80 gate
    fallback=RS2:80 gate
    persistent=3600
    protocol=fwm
    ...

I am aware that this is not the nicest solution, but it is the only work-around I found if you have to use LVS and deal with AOL users on a persistent web application. If you find this useful, don't forget to check the AOL proxy IP list from time to time and update your configs if needed.

For more details you can check the LVS FAQ.
