On Mon, 8 Nov 2010 18:32:52 -0500, Kevin Wilcox <kevin.wilcox_at_gmail.com>
wrote:
> Hi all.
>
> This is currently a test environment so making changes isn't an issue.
>
> Initially I had issues with hosts updating <any flavour of Microsoft
> Windows> but solved that with the included squid.conf. I'm even
> getting real cache hits on some of the Windows XP and Windows 7
> updates in my test lab, so the amount of effort I've put in so far is
> pretty well justified. Since the target audience won't have access to
> a local WSUS, I can pretty well count it as a win, even if the rest of
> this email becomes moot.
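For reference, the usual squid.conf ingredients for this are roughly the
sketch below. Your snipped config presumably already does something
similar; the domains and timings here are illustrative assumptions only:

  # fetch the whole object even when the client asks for a Range,
  # otherwise partial .cab/.exe downloads never become cache hits
  range_offset_limit -1
  quick_abort_min -1 KB

  # treat the versioned update files as fresh for a long time
  refresh_pattern -i windowsupdate\.com/.*\.(cab|exe|msi|psf)$ 4320 100% 43200 reload-into-ims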
>
> Then came the big issue - World of Warcraft installation via the
> downloaded client. Things pretty well fell apart. It would install up
> to 20% and crash. Then it would install up to 25% and crash. Then 30%
> and crash. It did that, crashing further in the process each time,
> until it finally installed the base game (roughly 15 crashes). Due to
> clamping down on P2P I disabled that update mechanism and told the
> downloader to use only direct download. I'm averaging 0.00 KB/s with
> bursts from 2 KB/s to 64 KB/s. If I take squid out of the line I get
> speeds between 1 and 3 MB/s+ and things just work - but that sort of
> defeats the purpose of having a device that will cache
> non-authenticated user content. Having one user download a new 1 GB
> patch, and it being available locally for the other couple of hundred,
> would be ideal. Still, it isn't a deal breaker.
>
> I understand that it could be related to the partial content reply for
> the request, or to the <URL>/<foo>? style request. Is the best approach
> to just automatically
> pass anything for blizzard.com/worldofwarcraft.com straight through
> and not attempt to cache the updates? I've seen some comments where
> using
>
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
>
> will cause those requests to not be cached (and I understand why that
> is) but I'm wondering if I should just ignore them altogether,
> especially given the third item - YouTube.
Yes, don't use that QUERY stuff. Dynamic URLs which are cacheable will
have the expiry and control headers needed to make that happen. The others
are caught and discarded properly by the new default refresh_pattern for
cgi-bin and \?.
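For illustration, the stock defaults I mean are these two lines (quoted
from memory, so double-check your shipped squid.conf):

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320

And if you just want the Blizzard downloader left alone while you
experiment, an explicit deny is a simple sketch (the exact domains the
client contacts are an assumption; verify against your access.log):

  acl blizzard dstdomain .blizzard.com .worldofwarcraft.com
  cache deny blizzard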
>
> The target population for this cache is rather large. Typically,
> youtube is a huge culprit for bandwidth usage, and a lot of the time
> it's hundreds of people hitting the same videos. I've been looking at
> how to cache those, and it seems I either need to drop the above ACL
> or to set up another ACL that specifically allows
> youtube.
>
> All of those comments and workarounds have been regarding the 2.x series
> of squid, though. I'm curious if there is a cleaner way to go about
> caching youtube (or, perhaps I should say, video.google.com) in 3.1.x,
> or if it's possible to cache things like the WoW updates now? We're
> looking to experiment with some proprietary devices that claim to be
> able to cache Windows Updates, YouTube/Google Video, etc., but I'm
> wondering if my woes are just because of my inexperience with squid or
> if they're just that far ahead in terms of functionality?
Caching youtube still currently requires the storeurl feature of 2.7,
which has not been ported to 3.x.
The YT URLs embed visitor details and timestamps of when the video was
requested, which fills the cache up with large videos at URLs that will
never be re-requested. This actively prevents totally unrelated web
objects from using the cache space.
It is a good idea to prevent the YT videos from being stored at all unless
you can de-duplicate them.
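A minimal sketch for that (the dstdomain list is an assumption; YT serves
video from several domains, so check access.log for the real ones):

  acl youtube dstdomain .youtube.com .googlevideo.com
  cache deny youtube

On 2.7 the de-duplication itself is done with a storeurl_rewrite_program
helper that maps the per-visitor URLs onto one canonical cache key; 3.1
has no equivalent directive.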
>
> Any hints, tips or suggestions would be more than welcome!
>
> Relevant version information and configuration files:
>
<snip>
>
> # Uncomment and adjust the following to add a disk cache directory.
> cache_dir ufs /var/squid/cache 175000 16 256
>
> # Cache Mem - ideal amount of RAM to use
> cache_mem 2048 MB
>
> # Maximum object size - default is 4MB, not nearly enough to be useful
> maximum_object_size 1024 MB
>
> # Maximum object size in memory - we have 4GB, we can handle larger
> objects
> maximum_object_size_in_memory 512 MB
Um, no, you have 2GB (cache_mem) in which to store these objects. All 4+
of them.
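A more usual shape, if you want memory hits on many small hot objects and
leave the big patch files to the disk cache, is something like this (sizes
are only a sketch for a 4GB box, tune to taste):

  cache_mem 2048 MB
  # keep in-memory objects small so thousands of hot objects fit
  maximum_object_size_in_memory 512 KB
  # large update/patch files still get cached, but on disk
  maximum_object_size 1024 MB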
Amos