Adrian Chadd wrote:
> So, why not just make digest building part of the FS? Define the digest
> dimensions as a global thing, and then merging them should be a simple
> case of ORing the digests together. Right?
The actual digest building is the same for all filesystems, regardless of
how they retrieve the information. For a published digest the following
information is required for each object (see the sketch after this list):
* The URL (or its MD5 hash)
* Freshness (when the object goes stale)
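To make this concrete, here is a minimal sketch in C of adding one object
to a Bloom-filter style digest, keyed on the MD5 of the URL and gated on
the freshness test. All names and sizes here (digest_t, DIGEST_BITS,
BITS_PER_ENTRY) are illustrative, not Squid's actual digest code:

#include <stdint.h>
#include <string.h>
#include <time.h>

#define DIGEST_BITS    (1 << 20)   /* digest size in bits; illustrative */
#define BITS_PER_ENTRY 4           /* bit positions set per object */

typedef struct {
    unsigned char bits[DIGEST_BITS / 8];
} digest_t;

/* Derive the i-th bit position from the object's 16-byte MD5 key;
 * each position consumes 4 bytes of the hash. */
static uint32_t digest_bit(const unsigned char md5[16], int i)
{
    uint32_t v;
    memcpy(&v, md5 + 4 * i, 4);
    return v % DIGEST_BITS;
}

/* Add an object, but only if it is still fresh: stale objects must
 * never enter a published digest. */
void digest_add(digest_t *d, const unsigned char md5[16],
                time_t expires, time_t now)
{
    if (expires <= now)
        return;
    for (int i = 0; i < BITS_PER_ENTRY; i++) {
        uint32_t b = digest_bit(md5, i);
        d->bits[b >> 3] |= (unsigned char)(1u << (b & 7));
    }
}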
In order to purge stale entries from the digest one must rebuild it
periodically using the above information. It is NOT sufficient to only
keep track of additions/deletions in the cache.
Yes, you can apply additions/deletions to the digest between rebuilds,
but that only helps to some extent. The digest will still deteriorate
through the passage of time alone, as entries go stale without any
corresponding cache activity.
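Continuing the sketch above, a periodic rebuild simply starts from an
empty digest and re-adds whatever is still fresh, so anything that went
stale since the last rebuild silently drops out. The cache_entry_t list
walk is a stand-in, not Squid's actual store index:

/* Hypothetical store walk; cache_entry_t is a stand-in type. */
typedef struct cache_entry {
    unsigned char md5[16];
    time_t expires;
    struct cache_entry *next;
} cache_entry_t;

/* Periodic rebuild: digest_add() from the sketch above skips stale
 * objects, so they fall out of the new digest automatically. */
void digest_rebuild(digest_t *d, const cache_entry_t *head, time_t now)
{
    memset(d->bits, 0, sizeof d->bits);
    for (const cache_entry_t *e = head; e; e = e->next)
        digest_add(d, e->md5, e->expires, now);
}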
The main reason why deleting digest entries on cache deletes does not
help much is that most deletions from the cache are of stale objects in
the first place, and as such they are likely not even part of the
digest, due to the freshness criterion.
If you can get a feedback channel on objects which are about to go stale
(within 1/2 of the digest exchange interval), or which are deleted
before that, then digest deletes could be made to work, greatly reducing
the need for rebuilds.
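The criterion itself is trivial; the hard part is getting the
notification from the store. A sketch, where exchange_interval is an
assumed parameter rather than an actual Squid tunable:

#include <stdbool.h>

/* True once the object is within half an exchange interval of going
 * stale, i.e. it should be dropped from (or kept out of) the digest. */
bool should_drop_from_digest(time_t expires, time_t now,
                             time_t exchange_interval)
{
    return expires - now <= exchange_interval / 2;
}

Note that clearing bits in a plain one-bit-per-position digest can evict
other objects sharing those bits, so real deletes would also need
something like the multi-bit cells discussed further down.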
And no, having each FS build its own digest and then joining them is in
effect no different from having them all populate the same digest in the
first place.
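This is easy to see from the merge operation itself: since digest_add()
only ever sets bits, ORing the per-FS bit arrays is bit-for-bit
identical to inserting every object into one shared digest (sketch
reuses digest_t from above):

#include <stddef.h>

/* Merge one per-FS digest into another by ORing the bit arrays. */
void digest_merge(digest_t *dst, const digest_t *src)
{
    for (size_t i = 0; i < sizeof dst->bits; i++)
        dst->bits[i] |= src->bits[i];
}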
Now let's try to attack the time problem above. What could be done is to
keep 2-3 bits of information per published bit. This should allow one to
weight the bits by the freshness of the objects they represent, and
thereby let bits expire from the digest over time.
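One possible reading of this idea, reusing digest_bit() and
BITS_PER_ENTRY from the first sketch: store a small freshness class per
cell instead of a single bit, and decay all cells once per exchange
interval, so entries fade out by time alone without a full rebuild.
Purely a sketch, not an actual Squid design; a real implementation would
pack the 2-bit cells rather than spend a byte on each:

#define CELLS (1 << 20)            /* one cell per published bit */

typedef struct {
    unsigned char cell[CELLS];     /* 0 = absent, 1..3 = freshness class */
} weighted_digest_t;

/* Map remaining freshness to a 2-bit weight. */
static unsigned char freshness_class(time_t expires, time_t now,
                                     time_t interval)
{
    time_t left = expires - now;
    if (left <= 0)
        return 0;
    if (left < interval)
        return 1;
    if (left < 2 * interval)
        return 2;
    return 3;
}

void weighted_add(weighted_digest_t *d, const unsigned char md5[16],
                  time_t expires, time_t now, time_t interval)
{
    unsigned char w = freshness_class(expires, now, interval);
    for (int i = 0; i < BITS_PER_ENTRY; i++) {
        uint32_t c = digest_bit(md5, i) % CELLS;
        if (d->cell[c] < w)
            d->cell[c] = w;        /* keep the freshest weight seen */
    }
}

/* Aging pass, run once per exchange interval: every cell steps one
 * class closer to expiry, so entries time out without a rebuild. */
void weighted_age(weighted_digest_t *d)
{
    for (size_t i = 0; i < CELLS; i++)
        if (d->cell[i])
            d->cell[i]--;
}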
/Henrik