On Sunday 13 July 2003 23.10, Mahmood Ahmed wrote:
> 2003/07/13 22:12:43| WARNING! Your cache is running out of
> filedescriptors
>
> so the question is "is there any way to increase the number of file
> descriptors without rebuilding Squid?" He is using Red Hat 8.0
> with Squid 2.5.STABLE2. ulimit -a shows the following output.
No, but you can reconfigure Squid to be a little more conservative
with the file descriptors it has.
First try:
half_closed_clients off
If that is not sufficient, try:
pconn_timeout 30 seconds
And if that is still not sufficient, try:
server_persistent_connections off
And finally, if you are still short:
client_persistent_connections off
Warning: the last setting is incompatible with NTLM authentication.
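Put together, a minimal squid.conf sketch of the steps above (the
directive names are the ones listed; apply one step at a time and reload
with "squid -k reconfigure" rather than enabling everything at once):

  # Step 1: stop holding half-closed client sockets open
  half_closed_clients off

  # Step 2: close idle persistent server connections sooner
  pconn_timeout 30 seconds

  # Step 3: disable persistent connections to origin servers
  server_persistent_connections off

  # Step 4 (last resort, breaks NTLM authentication): disable
  # persistent client connections
  client_persistent_connections off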
It may also be the case that Squid is running out of file descriptors
because it runs out of time, with disk I/O blocking the process. If the
Squid is fairly loaded (30 requests/second or more) then the
"cache_dir ufs .." cache store won't be able to keep up and you need to
use either aufs or diskd. If you are lucky, the Squid binary you have is
compiled with support for these (to find out, just replace ufs with aufs
or diskd and run "squid -k parse"); in that case enabling such a cache
store is just a matter of changing squid.conf.
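For example, assuming a typical Red Hat layout with the cache in
/var/spool/squid (the path and the size/directory numbers below are
illustrative; keep whatever your existing cache_dir line uses), the
change is a one-line edit followed by a syntax check:

  # squid.conf, before:
  #   cache_dir ufs /var/spool/squid 10000 16 256
  # after:
  cache_dir aufs /var/spool/squid 10000 16 256

  # then verify the binary accepts the new store type:
  squid -k parse

If "squid -k parse" complains about an unknown cache_dir type, the
binary was built without that store and a rebuild would be needed
after all.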
Regards
Henrik
--
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, info@marasystems.com