I'm curious too. I left the box running tests (Duane, Alex and Matthew
were nice enough to handle the last runs for me since I had to leave two
days early--thanks guys!). So I don't know what the final count will
be. I'm giving the fellows a few days before I start pestering them for
the results.
The last run that finished before I left was at 130 reqs/sec with a
somewhat low hit ratio (~45%) but excellent response times. I've
mentioned here before and during the cacheoff that the Polymix-4
workload is significantly harder than previous workloads, particularly
wrt CPU usage. I took a slightly heftier but lower-cost box than at the
previous event, so Squid's price/performance should stack up slightly
better than at previous events, though we're still lagging a bit behind
some (though certainly not all) of the proprietary vendors. But anyway,
I think we probably did 130 with a good hit ratio and good response
times with the last configuration I left running. The box under test
had two 7200 RPM IDE disks, a single 1GHz PIII CPU, and 2GB of RAM. I
haven't actually calculated what the pricing on an equivalent box would
be, but I think it comes out to around $2500 (the base model is our 1U
rackmount 1000 model, which sells for $2289, I think, but this one had
1.5GB of extra RAM and a faster processor, which adds a bit on top). So
price/performance should be up by 15% or so from last time. This isn't as
good as hardware enhancements alone should account for, but Squid 2.5 is
not as fast as the Reiser RAW Squid 2.4 we ran at the last cacheoff.
COSS should make up for that, and then some, by the next event.
Unfortunately, the cacheoff overall was less interesting than previous
events. Far fewer vendors showed up this time. I think the hard times
in the tech industry, coupled with poor price/performance from some of
the still profitable vendors (Inktomi, NetApp, Infolibria and Cacheflow
are all pretty poor performers relative to their price--in fact, given
equal hardware Squid is generally pretty close to those products, and
you don't have a big fat license to pay with Squid) have led to the lack of
participation. The highest performers in the market have their own
reasons for not attending (my guesses: Volera is nearly bankrupt and
has lost two major OEMs, Compaq and Dell, to Inktomi; Microsoft owns a
segment of the market no matter how it performs--marketing is its
tactic, rather than being better or faster; Lucent has economic troubles
and is restructuring right now, which may preclude taking part in the
caching market in a serious way; and there are others I can't think
of right now, I'm sure). This time it was a pretty small contest. iMimic had
several OEMs showing (good price/performance overall, as always),
AraTech was there, as were NAIST from Japan and another group from Japan
whose name escapes me. Hopefully, the industry will pick up again before the
next event...I'd like to see some real competition next time (and I'd
like to have something to run against when COSS and event-io come into
stability).
Things I learned while preparing and participating this time around:
CPU usage in Squid is a big problem. I've noticed this from real world
results as well, but without a consistent, repeatable benchmark it was
hard to quantify. Polymix-4 is not just a disk i/o
test...PM-3 and earlier pushed the disk harder and the CPU less. PM-4
is more real-world looking, in most regards. I upgraded the box to a
PIII 1GHz after the first benchmark run (took me a day to locate a real
computer store in Boulder...who would have thunk that CompUSA wouldn't
carry processors!? I've always known they weren't a real computer
store), and /still/ managed to fully peg the CPU. Definitely a limiting
factor...and CPU bandwidth isn't increasing as fast as I'd like, either.
Event i/o from Henrik will help--though it's not the full picture. I
think we're doing some inefficient things in the handling of
objects...parsing too many times, moving data through too many memory
locations, allocating too many times, etc.
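For anyone who hasn't followed the event-io discussion, the shape of the
idea is roughly this: ask the kernel once which sockets are ready, then
dispatch handlers for just those, instead of paying per-descriptor
overhead on every pass through the comm loop. A minimal sketch in C
(just an illustration, not Henrik's actual event-io code; the fd_table
and handler names here are made up):

    #include <poll.h>

    typedef void read_handler(int fd, void *data);

    struct fd_entry {
        read_handler *handler;  /* called when the fd becomes readable */
        void *data;             /* handler's private state */
    };

    /* fd_table is assumed to be maintained elsewhere, the way Squid
     * keeps its own table of per-descriptor state. */
    extern struct fd_entry fd_table[];

    void
    comm_poll_once(struct pollfd *pfds, int nfds, int timeout_ms)
    {
        int i;
        if (poll(pfds, nfds, timeout_ms) <= 0)
            return;             /* timed out, or error: nothing ready */
        for (i = 0; i < nfds; i++) {
            if (pfds[i].revents & (POLLIN | POLLHUP | POLLERR)) {
                struct fd_entry *e = &fd_table[pfds[i].fd];
                if (e->handler) /* dispatch only the ready sockets */
                    e->handler(pfds[i].fd, e->data);
            }
        }
    }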
Write bandwidth is our worst enemy when it comes to disk usage. We're
pushing out 2-4 times the data we're bringing in, and Squid has to write
every object to disk. I ran with only two disks this time around
(because RAM is cheaper than disks) and a big ram disk to alleviate some
of the write contention--only objects larger than 8KB ever went to a
physical disk.
Even so, the insistence on writing everything to disk is killing our
throughput, because writes are so expensive (think unlinking and
writing--too many ops to be efficient). More intelligent writes will be
a very big win (bigger than I had previously thought...again this
workload is more accurate and pokes Squid a little differently). COSS
should fix this right up, I think.
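For those who haven't followed the COSS discussion: the basic trick is
to turn all those per-object writes and unlinks into one sequential
stream, by appending every object at a single write pointer in one big
file and reclaiming old space implicitly when the pointer wraps.
Roughly this shape (just a sketch of the idea, not the real COSS code,
which also buffers writes into large in-memory stripes; the names here
are made up):

    #include <sys/types.h>
    #include <unistd.h>

    struct coss_file {
        int fd;          /* one big pre-allocated storage file */
        off_t size;      /* total size of that file */
        off_t write_ptr; /* append position; wraps back to 0 at the end */
    };

    /* Store an object at the current write pointer (assumes len fits
     * in the file).  Returns the offset it landed at, for the
     * in-memory index, or -1 on a short write. */
    off_t
    coss_store(struct coss_file *c, const void *obj, size_t len)
    {
        off_t at;
        if (c->write_ptr + (off_t)len > c->size)
            c->write_ptr = 0;   /* wrap: the oldest data gets overwritten */
        at = c->write_ptr;
        if (pwrite(c->fd, obj, len, at) != (ssize_t)len)
            return -1;
        c->write_ptr += len;    /* next object appends right behind us */
        return at;              /* no per-object open or unlink needed */
    }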
The ram disk idea is a very good one. I'm going to tune the least-load
disk selection code to handle ram+disks more effectively, and begin
shipping boxes based on this layout. It works strikingly well at
reducing some of the load on the disks, and I think will allow us to
push up a little closer to the fat pipes that I'd like to support. I
think from a really big box (4GB of RAM, dual processors, four 15k RPM
disks) I can
expect to push 30Mbits in a price competitive way. I'd like to push on
up to 48Mbits (about T3 speed...a sweet spot for caching
performance...ISPs smaller than this are not often coming online
anymore). The
2.5 Squid can probably go up to 35-40Mbits now, but hit rates would
truly suck (probably sub-20%). Our biggest box in use today is
supporting up to 22Mbits of web traffic, and is set up in a pretty
traditional way (no ram disk)...
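The least-load tuning I have in mind is roughly this shape: a small
object always prefers the ram disk cache_dir, and everything else goes
to whichever physical disk currently reports the least load. A toy
sketch (not Squid's actual store selection code; the structure and
names are made up for illustration):

    #include <stddef.h>

    struct swap_dir {
        int is_ram;       /* 1 for the ram disk cache_dir */
        int load;         /* lower is better, as in the least-load policy */
        size_t max_size;  /* largest object this dir accepts; 0 = unlimited */
    };

    /* Pick a cache_dir for an object: the ram disk wins outright when
     * the object fits there, otherwise the least-loaded disk wins. */
    int
    pick_store_dir(struct swap_dir *dirs, int ndirs, size_t obj_size)
    {
        int i, best = -1;
        for (i = 0; i < ndirs; i++) {
            if (dirs[i].max_size && obj_size > dirs[i].max_size)
                continue;                  /* object too big for this dir */
            if (dirs[i].is_ram)
                return i;                  /* small object: ram disk wins */
            if (best < 0 || dirs[i].load < dirs[best].load)
                best = i;
        }
        return best;    /* -1 means nothing could take the object */
    }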
Chemolli Francesco (USI) wrote:
>>For what it's worth, I didn't hit this while preparing for,
>>or at, the
>>cacheoff I don't think.
>>
>
> Whoa there!
>
> Can you give any anticipation on the cacheoff results (limited
> to squid of course)?
>
> I'm curious :)
--
Joe Cooper <joe@swelltech.com>
http://www.swelltech.com
Web Caching Appliances and Support