On Sun, Jan 07, 2001, Henrik Nordstrom wrote:
[snip]
> Or you could simply update the digest you have and hope that it is
> reasonably correct. But unless you can read the store index the digest
> will deteriorate over time, typically forgetting older objects, and you
> might also end up with an overpopulated digest which is not of any use
> to anyone, or an underpopulated one which uses too much space/bandwidth.
>
> The good news is that digest sizes tend to stabilize once populated, so
> with some cleverness we might actually get away with a digest that is
> continuously updated with additions/deletions without having to rebuild
> it, if the goal is to keep an approximation of which objects we have in
> the cache.
>
> For the published digest there is one more criterion which is quite
> important and which AFAICT requires one to have access to the store
> index: we do not wish to publish information about expired objects to
> our neighbours, even if they are in the cache. This is because the
> neighbours are not allowed to initiate refreshes/revalidations in our
> cache.
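(Quick aside on the continuously updated digest idea: a plain bitmap can't
really handle deletions, since clearing a bit may also clear it for some
other object, so it would pretty much have to be a counting filter. Very
rough sketch of what I mean -- this is not the existing CacheDigest code,
the hash is a toy and all the names here are invented:)

#include <stdlib.h>

#define DIGEST_HASHES 4        /* assumption: 4 hash functions per key */

typedef struct {
    unsigned char *counts;     /* one small counter per "bit" */
    size_t capacity;           /* number of counters */
} CountingDigest;

CountingDigest *
countingDigestCreate(size_t capacity)
{
    CountingDigest *d = calloc(1, sizeof(*d));
    d->capacity = capacity;
    d->counts = calloc(capacity, 1);
    return d;
}

/* toy hash -- the real thing would slice up the MD5 store key instead */
static size_t
digestHash(const unsigned char *key, size_t keylen, unsigned seed, size_t capacity)
{
    size_t h = seed * 2654435761UL;
    size_t i;
    for (i = 0; i < keylen; i++)
        h = h * 33 + key[i];
    return h % capacity;
}

void
countingDigestAdd(CountingDigest *d, const unsigned char *key, size_t keylen)
{
    unsigned i;
    for (i = 0; i < DIGEST_HASHES; i++) {
        size_t slot = digestHash(key, keylen, i, d->capacity);
        if (d->counts[slot] < 255)     /* saturate rather than wrap */
            d->counts[slot]++;
    }
}

void
countingDigestDelete(CountingDigest *d, const unsigned char *key, size_t keylen)
{
    unsigned i;
    for (i = 0; i < DIGEST_HASHES; i++) {
        size_t slot = digestHash(key, keylen, i, d->capacity);
        if (d->counts[slot] > 0 && d->counts[slot] < 255)
            d->counts[slot]--;         /* saturated counters have to stay put */
    }
}

int
countingDigestTest(const CountingDigest *d, const unsigned char *key, size_t keylen)
{
    unsigned i;
    for (i = 0; i < DIGEST_HASHES; i++)
        if (d->counts[digestHash(key, keylen, i, d->capacity)] == 0)
            return 0;   /* definitely not in the cache */
    return 1;           /* probably in the cache */
}

(The copy actually published to peers could then be flattened down to one
bit per non-zero counter, so the on-the-wire format wouldn't change.)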
Ok, the important bit here is that for each FS there will be a different
way of generating a digest. Now that the store index is part of the FS
rather than global, it makes sense that you can't stick digest generation
in one global place any more.
So, why not just make digest building part of the FS? Define the digest
dimensions as a global thing, and then merging them should be a simple
case of ORing the digests together. Right?
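Something like the below is what I'm picturing -- just a sketch, none of
these names (SwapDirOps, digest_build, storeDigestMerge) exist anywhere
yet, and the real swapdir function table obviously has a lot more in it:

#include <stdlib.h>
#include <assert.h>

typedef struct {
    size_t mask_size;          /* bytes; fixed globally, same for every FS */
    unsigned char *mask;
} StoreDigest;

/* per-FS hook, sitting alongside the other swapdir callbacks */
typedef struct {
    /* ... existing open/read/write/unlink callbacks ... */
    StoreDigest *(*digest_build)(void *swapdir_private, size_t mask_size);
} SwapDirOps;

/* merging is trivial as long as everyone agrees on the dimensions */
void
storeDigestMerge(StoreDigest *global, const StoreDigest *fs_digest)
{
    size_t i;
    assert(global->mask_size == fs_digest->mask_size);
    for (i = 0; i < global->mask_size; i++)
        global->mask[i] |= fs_digest->mask[i];
}

/* rebuilding the published digest then just walks the swapdirs */
StoreDigest *
storeDigestRebuild(SwapDirOps **dirs, void **dir_private, int ndirs, size_t mask_size)
{
    int i;
    StoreDigest *global = calloc(1, sizeof(*global));
    global->mask_size = mask_size;
    global->mask = calloc(1, mask_size);
    for (i = 0; i < ndirs; i++) {
        /* each FS walks its own index; it can also skip expired
         * objects here, which covers the published-digest case above */
        StoreDigest *d = dirs[i]->digest_build(dir_private[i], mask_size);
        storeDigestMerge(global, d);
        free(d->mask);
        free(d);
    }
    return global;
}

The only thing the FSes have to agree on globally is the mask size and the
hash functions used to set bits; everything else stays behind the FS
interface.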
Adrian
--
Adrian Chadd                    "Here's five for the cake, and
<adrian@creative.net.au>         five to buy a clue."
                                     - Ryan, Whatever it Takes