We're running squid 1.0.18 on a Sparc 10 running Solaris 2.5.1. Most
of the time it works fine; we handle around 400-500 transactions per
minute. However, every day or two it gets into a state where the
number of transactions per minute drops to nearly zero, and connection
attempts to the proxy usually time out. I'm guessing that this happens
when we get to the file descriptor limit, though I have no hard proof
of this. I've tried to use the cachemgr to get the file descriptor
debug info, but can't make a connection to the server when it's acting
this way, and usually we're in a hurry to start serving requests again
so we don't spend too long debugging. If we kill the squid daemon and
let it restart, things start going normally again.
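For what it's worth, the descriptor ceiling the daemon inherits can be
checked with getrlimit() from the same shell or startup script that
launches squid. A rough sketch (nothing squid-specific, just a plain
getrlimit call):

    /* rough sketch: report the fd limits inherited from this environment */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("soft fd limit: %ld\n", (long) rl.rlim_cur);
        printf("hard fd limit: %ld\n", (long) rl.rlim_max);
        return 0;
    }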
We've temporarily changed our proxy config so that it isn't caching
any documents, but this is still happening, so the problem doesn't
seem to be cache_swap_high being set too far from cache_swap_low.
I see no unusual messages in the cache.log file, though I do notice
that I see this error message quite a bit more often than usual when
we're seeing this problem:
comm_accept: FD 20: accept failure: (71) Protocol error
This could be a symptom of the problem rather than a cause, though.
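From what I can tell, errno 71 on Solaris is EPROTO, which accept()
can apparently return when a client resets its connection while it is
still waiting in the listen queue, exactly the sort of thing that
would pile up if the server stops accepting quickly. A rough sketch
(not squid's actual code) of an accept loop that treats that errno as
a transient condition:

    /* rough sketch (not squid's code): treat EPROTO/ECONNABORTED/EINTR
     * from accept() as transient (the connection was dropped while it
     * sat in the listen queue) and keep going. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int accept_one(int listen_fd)
    {
        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0)
                return fd;        /* got a live connection */
            if (errno == EPROTO || errno == ECONNABORTED || errno == EINTR)
                continue;         /* client gave up while queued; retry */
            perror("accept");     /* anything else is a real failure */
            return -1;
        }
    }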
Any hints as to what I might look for to try to diagnose what the
server is doing when it's going slow like this?
Today we failed to notice the problem for a couple of hours, and it
didn't correct itself during that time, but once again a simple
restart of the daemon got it going normally.
Joe
Received on Wed Oct 23 1996 - 14:08:19 MDT