TCP Tuning for Busy Apache Webserver on CentOS5

Recently I was in a situation where a very busy webserver was not responding. Strangely, top showed plenty of CPU available. The server was essentially just sitting there. What to do?

Upon further investigation, it turned out that the network queue was saturated: so many incoming connections were being attempted that they were falling off the end of the queue before Apache could accept them. Some TCP tuning was in order. Fortunately the server was not memory-starved, so allocating more memory to the network stack was not a problem. Here's what ended up in /etc/sysctl.conf and turned the server back into a faithful workhorse.
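If you suspect the same problem, the kernel keeps counters for exactly this. A quick way to check (a sketch; the exact counter wording varies a bit between kernel versions):

# Look for listen queue overflows and dropped SYNs
netstat -s | egrep -i 'overflow|drop|listen'

# One row per CPU; the second column (hex) counts packets
# dropped because the netdev backlog queue was full
cat /proc/net/softnet_stat

If those counters are climbing while the CPU sits idle, you're in the same boat I was.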

# Kernel tuning settings for CentOS5,
# busy webserver with lots of free memory.

# Big queue for the network device
net.core.netdev_max_backlog=30000

# Allow a large number of sockets in TIME-WAIT state
net.ipv4.tcp_max_tw_buckets=2000000

# Bump up send/receive buffer sizes
net.core.rmem_default=262141
net.core.wmem_default=262141
net.core.rmem_max=262141
net.core.wmem_max=262141

# Disable TCP selective acknowledgements
net.ipv4.tcp_sack=0
net.ipv4.tcp_dsack=0

# Decrease the amount of time we spend
# trying to maintain connections
net.ipv4.tcp_retries2=5
net.ipv4.tcp_fin_timeout=60
net.ipv4.tcp_keepalive_time=120
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=3

# Increase the number of incoming connections
# that can queue up before dropping
net.core.somaxconn=256

# Increase option memory buffers
net.core.optmem_max=20480
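To put these live without a reboot (assuming the stock tools on CentOS 5):

# Load the new settings and spot-check one
sysctl -p /etc/sysctl.conf
sysctl net.core.somaxconn

# The listen backlog is sized when Apache calls listen(),
# so restart it to pick up the larger somaxconn
service httpd restart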

There are plenty of other sysctl options to tune, but the above made the most difference.

And netstat -s is your friend.
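For example, to watch the relevant counters move while the server is under load (a rough sketch; tweak the pattern to taste):

# Refresh every two seconds and watch for climbing counters
watch -n 2 "netstat -s | egrep -i 'retrans|overflow|drop'"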

Comments

Hi.

What would you say a perfect sysctl would be for a webserver serving zillions of 5K JavaScript files?

A low write buffer, I guess, but what else?

Kindly

//Marcus

What were the specs of this server?

This was a while ago, and the server actually ended up reincarnating as several VPSes and then several dedicated boxes. I'm not sure where along that path this entry falls.