On 02/16/2014 08:05 AM, Dr.x wrote:
> I have implemented aufs with rock
Please note that such a combination is outside the intended
SMP/Rock/aufs use scope.
> i had that logs at logs of rock cache.log !!!
>
> what does that mean ??
Sorry, I do not know what "logs at logs of rock cache.log" means.
> i have done the following
>
> i have 5 harddisk aufs dirs
> 2 harddisk rock dirs
>
> i have as below :
>
> workers 8
> #dns_v4_first on
> cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9 cores=2,4,6,8,10,12,14,16,18
> #####################################################################
> if ${process_number} = 4
> include /etc/squid/aufs1.conf
> endif
> ###################################################################
> if ${process_number} = 2
> include /etc/squid/aufs2.conf
> endif
> ################################################################
> if ${process_number} = 6
> include /etc/squid/aufs3.conf
> endif
> #################################################################
> if ${process_number} = 7
> include /etc/squid/aufs4.conf
> endif
> #################################################################
> if ${process_number} = 8
> include /etc/squid/aufs5.conf
> endif
> ===========================================================================
>
> each aufs.conf has an aufs cache_dir in it.
>
> but after all of that,
> i still have low bandwidth savings!
Your configuration does not share aufs cache_dirs among workers. With
all other factors being equal, that would decrease hit ratios compared
to the same-total-size cache shared by all workers.
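As a hedged illustration of the sharing point above: aufs cache_dirs cannot be shared among SMP workers, but rock cache_dirs can. A rock cache_dir declared outside any `if ${process_number}` block is used by all workers, so hits populated by one worker are available to the others. Paths and sizes below are made-up placeholders, not recommendations:

```
# Hypothetical sketch, not a tested configuration.
# A rock cache_dir outside any if-block is shared by all 8 workers.
workers 8

# shared rock storage (placeholder path/size/slot limit)
cache_dir rock /cache/rock1 100000 max-size=32768
```

With such a shared dir, a response cached via worker 3 can later be served as a hit by worker 7, which is what per-worker aufs dirs cannot do.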
> are these errors harmful?
>
> *Worker I/O push queue overflow: ipcIo7.30506r9
Yes. Your Squid tries to cache more than your disks can handle. See
Performance Tuning at
http://wiki.squid-cache.org/Features/RockStore
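Among the tuning knobs described on that page, rock cache_dirs accept `max-swap-rate` and `swap-timeout` options that limit how much disk I/O Squid queues, which is the usual remedy for push queue overflows. The values below are placeholders to be tuned against what the disks can actually sustain:

```
# Hypothetical values, not a recommendation:
# max-swap-rate caps disk writes (swaps/sec);
# swap-timeout (msec) skips caching when the disk cannot keep up.
cache_dir rock /cache/rock1 100000 max-swap-rate=200 swap-timeout=300
```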
HTH,
Alex.
Received on Thu Feb 20 2014 - 22:09:14 MST