Re: [squid-users] Cache digest question

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Tue, 17 Mar 2009 13:07:54 +1200 (NZST)

> Hi,
>
> I'm looking into setting up cache peering - I currently have small
> sets of reverse-proxy squids sitting behind a load balancer, with no
> URI hashing or other content-based switching in play (thanks to a
> nice bug/feature in Foundry's IOS that prevents "graceful" rehashing
> when new servers are added to a VIP). So I'm looking at other ways
> to scale our cache capacity horizontally (and increase hit rates as
> I go), and cache peering in proxy-only mode seems like a good
> solution.
>
> For various reasons, it's looking like cache digests are going to be
> the best way to go in our environment (option #2 is multicast ICP,
> but, ew). However, one big question I have is this: are cache digests
> intended to replace, or to supplement, normal ICP cache query behavior?

I believe it's replace: once Squid holds a valid digest for a peer it
consults that digest instead of sending the peer an ICP query. Though
I may be wrong; I have not seen both in action together yet.
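
Very roughly, a digest-only sibling pair looks something like the
sketch below. Hostnames are placeholders, the digest values shown are
the defaults, and Squid needs to be built with --enable-cache-digests:

  # on squid A (mirror this on squid B, pointing back at A)
  # sibling peer: fetch hits from B, but don't store what B serves us
  # ('proxy-only'); 'no-query' disables ICP to this peer so only the
  # digest is consulted
  cache_peer squidB.example.com sibling 3128 3130 proxy-only no-query

  # digest behaviour; the rebuild/rewrite period is the window in
  # which a peer's digest can be stale, as in your example below
  digest_generation on
  digest_rebuild_period 1 hour
  digest_rewrite_period 1 hour

If you leave 'no-query' off the cache_peer line the ICP port stays
configured as well, but I have not verified how the two interact on a
per-request basis.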

>
> For example, let's say squid A and squid B exchange cache digests
> every 10 minutes. One minute after squid A retrieves squid B's
> digest, squid B fetches and caches a new object, so it is missing
> from the digest A is holding. A minute later (8 minutes before the
> next digest exchange), squid A gets a request for that same URL. The
> object is a local miss for squid A, but it is in squid B's cache even
> though it isn't in the latest digest A received from B.
>
> Will squid A 1. do a normal ICP query to squid B, since the object
> is a local cache miss, or 2. presume that squid B doesn't have the
> object since it wasn't in the last digest, and retrieve it itself?
> In other words, do digest exchanges preclude ICP queries for object
> requests that are local cache misses and are not in the most recent
> cache digest that a squid has received?
>
> Personally, I'm hoping the answer is #1, as #2 can easily result in
> duplicated content between the squids, which is exactly what I'm
> trying to avoid here.

A 2-layer CARP mesh is the 'standard' topology recommended for this,
since Wikipedia had such success with it: the under-layer does all the
caching, and a load-balancing Squid over-layer splits requests across
the under-layer using CARP, so each URL is consistently routed to the
same back-end cache and content is not duplicated.
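
As a rough sketch only (hostnames and ports are placeholders, not a
tested config), the front-end layer of such a mesh looks something
like:

  # front-end / routing layer: CARP across the caching back-ends;
  # the URL hash picks one parent per URL, so nothing is stored twice
  cache_peer cache1.example.com parent 3128 0 carp no-query no-digest
  cache_peer cache2.example.com parent 3128 0 carp no-query no-digest
  cache_peer cache3.example.com parent 3128 0 carp no-query no-digest

  # the front-end stores nothing itself and never goes direct;
  # all caching happens in the back-end layer
  cache deny all
  never_direct allow all

The back-ends are then plain caching Squids with no peering between
themselves.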

Amos
Received on Tue Mar 17 2009 - 01:07:59 MDT
