On 3/26/07, Henrik Nordstrom <henrik@henriknordstrom.net> wrote:
> One way is to set up a separate set of cache_peer for these robots,
> using the no-cache cache_peer option to avoid having that traffic
> cached. Then use cache_peer_access with suitable acls to route the robot
> requests via these peers and deny them from the other normal set of
> peers.
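For illustration, the quoted setup might look roughly like this in squid.conf (the peer host names and the ACL pattern are hypothetical, and `proxy-only` is the cache_peer option in current Squid that keeps a peer's replies out of the local cache, which is presumably the "no-cache" option meant above):

    # Dedicated parent for robot traffic; its replies are not cached locally
    cache_peer robots-parent.example.com parent 3128 0 proxy-only no-query
    # Normal parent for everyone else
    cache_peer parent.example.com parent 3128 0 no-query

    # Identify robot requests, e.g. by User-Agent (pattern is an example)
    acl robots browser -i googlebot|slurp|msnbot

    # Route robot requests via the dedicated peer only
    cache_peer_access robots-parent.example.com allow robots
    cache_peer_access robots-parent.example.com deny all
    cache_peer_access parent.example.com deny robots
    cache_peer_access parent.example.com allow all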
AFAICS, this won't solve the problem, as the robots still won't be able
to access the "global" cache read-only: they should be served existing
cache hits without their own traffic populating the cache.
-- Guillaume

Received on Mon Mar 26 2007 - 03:40:48 MDT