I'm using squid in an environment where vast amounts of gigabit ethernet
are available.
An option that would be of great use in our environment would allow me to
cache only objects that came in slowly. Say an object from domain A or
sibling B comes in at 300 KB/sec: it should not be cached, so that I can
use the cache space for an object from domain C or sibling D that averages
only 1 KB/sec. Perhaps such an option should then be combined with a
replacement algorithm in which throughput is a factor in deciding whether
an object should be replaced. The throughput level could then float, like
the age in LRU.
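As a minimal sketch of that idea, the eviction score below multiplies an object's age by the throughput observed when it was fetched, so a fast-to-refetch object "ages" more quickly than a slow one. All names here (CacheObject, evict_candidate) and the scoring formula are my own invention for illustration; this is not Squid's actual replacement code.

```python
import time

class CacheObject:
    def __init__(self, key, size_bytes, fetch_seconds):
        self.key = key
        self.size_bytes = size_bytes
        # Throughput observed when the object was fetched (bytes/sec).
        self.throughput = size_bytes / max(fetch_seconds, 1e-6)
        self.last_ref = time.monotonic()

    def touch(self):
        self.last_ref = time.monotonic()

def eviction_score(obj, now=None):
    """Higher score = better eviction candidate.

    Age plays the same role it has in plain LRU; multiplying by
    throughput means an object that came in fast (cheap to re-fetch)
    accumulates score faster than one that came in slowly, so the
    effective age floats with throughput.
    """
    now = time.monotonic() if now is None else now
    age = now - obj.last_ref
    return age * obj.throughput

def evict_candidate(objects, now=None):
    # Evict the object with the highest age-times-throughput score.
    return max(objects, key=lambda o: eviction_score(o, now))
```

With this scoring, an object fetched at 300 KB/sec is evicted long before an equally old object fetched at 1 KB/sec.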
One could take that a step further and record the average throughput of a
request, just like the ping response times in the net_db, and use it to
decide whether to use a sibling: if the origin host is faster, do not use
the sibling. Further, averaging the throughput would protect you from
surprises on requests from networks which are only sometimes very
congested.
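A sketch of that averaging, in the spirit of the smoothed RTT figures Squid keeps in its netdb: an exponentially weighted moving average of per-host throughput, consulted when choosing between the origin and a sibling. The class name and the 0.5 smoothing weight are assumptions for illustration, not Squid's actual netdb code.

```python
class ThroughputDB:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight given to the newest sample
        self.avg = {}        # host -> smoothed throughput (bytes/sec)

    def record(self, host, size_bytes, seconds):
        sample = size_bytes / max(seconds, 1e-6)
        if host not in self.avg:
            self.avg[host] = sample
        else:
            # EMA: new samples move the estimate, but a single
            # congested (or lucky) transfer cannot swing it wildly.
            self.avg[host] = (self.alpha * sample
                              + (1 - self.alpha) * self.avg[host])

    def prefer_sibling(self, origin, sibling):
        """True if the sibling's smoothed throughput beats the origin's."""
        return self.avg.get(sibling, 0.0) > self.avg.get(origin, 0.0)
```

The smoothing is exactly what gives the protection mentioned above: one slow transfer from a momentarily congested network only nudges the average rather than flipping the origin-vs-sibling decision outright.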
I see all kinds of problems with a configuration like this, as there is no
way to form an expectation about the throughput of a transfer: it depends
on the state of the network a few seconds from now.
Jasper
--
Computer Science Helpdesk: +1 (902) 494 2593 Fax/VoiceMail: +1 (877) 211 5401

Received on Sun Apr 30 2000 - 10:01:05 MDT
This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:12:25 MST