What is the right way to get squid3 to pass some web sites straight through and not cache them? Specifically, I'd like security.ubuntu.com and everything in the *.archive.ubuntu.com domain to pass through without caching.
This is my guess but it seems not to be passing the requests straight through to the remote servers:
Code:
acl ubuntu dstdomain security.ubuntu.com
acl ubuntu dstdomain extras.ubuntu.com
acl ubuntu dstdomain .archive.ubuntu.com
always_direct allow ubuntu
Googling 'squid exclude from caching' (without quotes) returns some blogs and forums where a similar question is answered,
or this: http://ubuntuforums.org/showthread.php?t=1409351
you probably need a "no-cache" or "cache deny" acl statement,
e.g. like so:
Code:
acl no_cache_server1 dstdomain .domain.com
no_cache deny no_cache_server1
or
Code:
acl NO-CACHE-SITES dstdomain "/etc/squid/not-to-cache-sites.txt"
no_cache deny NO-CACHE-SITES
man squid or the squid.conf file probably explain more.
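For what it's worth, the file in that second example is just a plain list of domains, one dstdomain entry per line (a leading dot matches the domain and all its subdomains). A minimal sketch of what /etc/squid/not-to-cache-sites.txt could contain for your case:
Code:
security.ubuntu.com
extras.ubuntu.com
.archive.ubuntu.com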
I'm trying those, too, but apt still seems to slow down and stall a little way into 'apt-get update'.
The output from 'apt-get update' looks like this:
Code:
...
Hit http://fi.archive.ubuntu.com saucy-backports Release.gpg
Ign http://fi.archive.ubuntu.com saucy Release
Hit http://fi.archive.ubuntu.com saucy-updates Release
45% [Waiting for headers]
Edit: Even after waiting about 15 minutes, 'apt-get update' has not completed. The squid access log shows:
Code:
1371829494.873  60507 xx.yy.zz.aa TCP_MISS/503 3874 GET http://fi.archive.ubuntu.com/ubuntu/dists/saucy-backports/Release - HIER_DIRECT/22.214.171.124 text/html
1371829494.941     65 xx.yy.zz.aa TCP_MISS/404 503 GET http://fi.archive.ubuntu.com/ubuntu/dists/saucy/main/source/Sources.diff/Index - HIER_DIRECT/126.96.36.199 text/html
Looking at the squid log, you're getting 503 and 404 errors. That's probably why apt-get is taking so long. Try a different mirror as a possible solution.
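If you want to try that quickly, it is just a matter of swapping the mirror hostname in /etc/apt/sources.list — a rough sketch, with archive.ubuntu.com standing in for the fi. mirror from your log (your exact suites and components may differ):
Code:
deb http://archive.ubuntu.com/ubuntu/ saucy main restricted universe
deb http://archive.ubuntu.com/ubuntu/ saucy-updates main restricted universe
then run 'sudo apt-get update' again and see whether the 503/404 errors go away.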
Yes, that's what I thought they meant, too. But if I set the packet filter to skip redirection for the ip numbers of the Ubuntu repositories, APT works just fine. It seems to be tangling with Squid somehow. I'd rather handle that via Squid and not have to keep a list of ip numbers for the filter.
If you have a transparent proxy, one alternative to consider is using iptables to route around Squid for specific remote IP addresses. I think a rule like this, placed ahead of the REDIRECT to port 3128, might work:
Code:
/sbin/iptables -t nat -A PREROUTING -p tcp -d ip.of.archive.site --dport 80 -j ACCEPT
I've used a similar rule, with "-s" rather than "-d", to exempt specific internal hosts from being pushed through the Squid cache.
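For comparison, the "-s" variant I mentioned looks like this — just a sketch, with 192.168.1.50 standing in for whichever internal host you want to exempt:
Code:
# -I inserts the rule at the top of the chain, ahead of the REDIRECT, so it matches first
/sbin/iptables -t nat -I PREROUTING -p tcp -s 192.168.1.50 --dport 80 -j ACCEPT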
So we could assume that something in apt is not happy with the combination of being redirected and proxied, causing your apt-get to hang...
I've never set up or troubleshot transparent proxies before, so I don't know the finer points and I don't immediately see where the problem might be.
If you want to get to the bottom of this, you could perhaps take tcpdumps of a regular apt-get update, a plain proxied one, and a transparently proxied session, and see how they differ.
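Something along these lines would capture each run for later comparison in wireshark — a sketch, assuming the interface is eth0 and using the fi. mirror from your logs; for the proxied runs you would capture tcp port 3128 instead:
Code:
# capture full packets (-s 0) during 'apt-get update' and write them to a file
tcpdump -i eth0 -s 0 -w apt-direct.pcap host fi.archive.ubuntu.com and tcp port 80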
Maybe a workaround:
assuming you redirect port 80, configure apt to use a proxy so it connects to port 3128 directly (avoiding the redirection), then bypass it in squid with always_direct or no_cache or so? That saves you the trouble of listing the archive IP addresses in your packet filter.
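Concretely, that could look something like this — only a sketch, assuming the squid box is at 192.168.1.1 (adjust for your network) and reusing the acl from the first post. On the client:
Code:
// /etc/apt/apt.conf.d/01proxy -- point apt at squid explicitly
Acquire::http::Proxy "http://192.168.1.1:3128/";
and on the squid side:
Code:
acl ubuntu dstdomain security.ubuntu.com extras.ubuntu.com .archive.ubuntu.com
# "cache deny" is the squid3 form; older configs spell it "no_cache deny"
cache deny ubuntu
always_direct allow ubuntu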