Thanks, but we have been running this way for some time now and it works
VERY well for our needs. We have recently upgraded to the Gbit NICs and
are running well, but would like to optimize things by cutting down on
the syscall rate.
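
One common way to cut the syscall rate for large static files is sendfile(2), which moves the data kernel-side instead of looping over read()/write() pairs through a userspace buffer. A minimal sketch, assuming Linux (the helper name is mine, not from any server discussed here):

```python
import os

def send_whole_file(out_fd, path):
    """Push a whole file to out_fd with sendfile(2).

    One syscall per chunk and no copy through a userspace buffer,
    versus a read()+write() pair (and a copy) per buffer otherwise.
    """
    in_fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(in_fd).st_size
        offset = 0
        while offset < size:
            # The kernel copies straight from the page cache to out_fd.
            sent = os.sendfile(out_fd, in_fd, offset, size - offset)
            if sent == 0:  # premature EOF (file truncated under us)
                break
            offset += sent
        return offset
    finally:
        os.close(in_fd)
```

In a server, out_fd would be the client socket; on Linux 2.6.33+ it can be any file descriptor, which also makes the helper easy to test against a plain file.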
-mikep
Evan Klitzke wrote:
>
> I think for larger files the need to use a caching server like Squid
> is diminished, because the total time it takes to push the data
> through the network will vastly outweigh the time it takes to open
> the file and do a disk seek. Especially for large files, you'll get
> more than 1 Gbit/s of bandwidth from your disk/storage array anyway.
> Still, if you want to cache such objects and you have enough RAM,
> you might be able to get away with just copying the files to a RAM
> disk and then using a regular HTTP (or what have you) server that
> accesses the RAM disk directly.
>
> There are also a number of network parameters, especially TCP
> tuning, unrelated to file caching that you'll want to look at to
> get optimal throughput.
>
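
On the RAM-disk idea above: Linux normally ships a tmpfs at /dev/shm, so "copying the files to a RAM disk" can be as simple as staging the hot objects there and pointing the HTTP server's docroot at that directory. A rough sketch; the /dev/shm/cache path and the helper name are illustrative assumptions:

```python
import pathlib
import shutil

# Assumed tmpfs location; a dedicated `mount -t tmpfs` would work too.
RAMDISK = pathlib.Path("/dev/shm/cache")

def stage(path):
    """Copy one file into the RAM-backed directory, return its new path.

    Reads of the staged copy never touch the disk, so an ordinary HTTP
    server serving this directory is effectively serving from RAM.
    """
    RAMDISK.mkdir(parents=True, exist_ok=True)
    return shutil.copy2(path, RAMDISK)
```

The trade-off is that everything staged counts against RAM, so this only pays off for a working set that genuinely fits in memory.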
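
On the TCP-tuning point: for Gbit links the usual first check is whether the socket buffers cover the bandwidth-delay product (system-wide via the net.ipv4.tcp_rmem/tcp_wmem and net.core.rmem_max/wmem_max sysctls, or per socket as below). A small per-socket sketch; the 4 MB figure is an assumption for illustration, not a recommendation:

```python
import socket

BUF = 4 * 1024 * 1024  # assumed target; size it to bandwidth * RTT

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Requested values are capped by net.core.wmem_max / rmem_max, so
# read back what the kernel actually granted rather than trusting
# the value you asked for.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
```

Note that setting these options explicitly disables Linux's per-socket buffer autotuning, which on recent kernels is often good enough on its own.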
Received on Tue Jun 12 2007 - 09:39:00 MDT
This archive was generated by hypermail pre-2.1.9 : Sun Jul 01 2007 - 12:00:04 MDT