Another thought!!
I set up my Linux box to meter traffic.
The following traffic groups are my concern:
{figure1} traffic class

            [1]            [2]
  client ---------+    +-----------> server
    <----+        |    |    +----------
    [4]  |        V    |    V  [3]
         --------------------
         |   cache server   |
         --------------------
{figure2} tabular format
* !<n> means the port is not <n>
* 80 is the Squid HTTP accel port
Traffic group   Interface   Source         Dest
[1]             IN          anywhere/!80   anywhere/80
[2]             OUT         cache/!80      anywhere/80
[3]             IN          anywhere/80    cache/!80
[4]             OUT         anywhere/80    anywhere/!80
I set up the above rules with ipfwadm accounting.
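For reference, here is a rough sketch of the rule setup (the cache
address 10.0.0.1 is a placeholder, the option syntax is from memory,
and I am not sure ipfwadm can express the !80 negation directly, so
the port condition is simplified here; check ipfwadm(8)):

    # accounting rules, roughly matching the table above
    ipfwadm -A in  -a -P tcp -S 0.0.0.0/0    -D 0.0.0.0/0 80   # [1] clients -> cache
    ipfwadm -A out -a -P tcp -S 10.0.0.1     -D 0.0.0.0/0 80   # [2] cache -> servers
    ipfwadm -A in  -a -P tcp -S 0.0.0.0/0 80 -D 10.0.0.1       # [3] servers -> cache
    ipfwadm -A out -a -P tcp -S 0.0.0.0/0 80 -D 0.0.0.0/0      # [4] cache -> clients
    ipfwadm -A -l -x    # list the rules with exact packet/byte counters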
From the above ipfwadm accounting I found that it shows real
bandwidth savings. See the following sample data:
(1) clients -> cache farm: 7179 bits/sec, 10 packets/sec
(2) cache farm -> servers: 3942 bits/sec, 3 packets/sec
(3) servers -> cache farm: 27167 bits/sec, 5 packets/sec
(4) cache farm -> clients: 62333 bits/sec, 10 packets/sec
The difference (4)-(3) shows the data served "locally" (cache hits):
62333 - 27167 = 35166 bps.
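The bits/sec figures come from sampling the counters over a fixed
interval; a minimal sketch, assuming a 60-second window:

    ipfwadm -A -z                # zero all accounting counters
    sleep 60                     # let traffic accumulate for a minute
    ipfwadm -A -l -x             # read back exact byte counts per rule
    # bits/sec = bytes * 8 / 60 for each rule; the saving is then
    echo $(( 62333 - 27167 ))    # (4) - (3) = 35166 bits/sec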
{figure3} typical caching topology

  {our network} ==== {cache} ------------- upstream ISP
                 (1)              (2)
In figure3, link (2) is the target for saving bandwidth. But the
traffic rate on link (2) does not seem to be affected by the cache(s)
(judging by the MRTG graphs). In the above example the 35 kbps is of
course small, but the saving rate is above 3~4 Mbps when aggregated
over the whole cache farm. Still, I could not see any significant
difference between running with the caches and running without them.
IMHO, the traffic rate on link (2) should immediately drop by the
amount the caches are saving.
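Working the sample numbers above through a quick awk sketch (using
only the figures already quoted):

    awk 'BEGIN {
        to_clients   = 62333        # (4) cache farm -> clients
        from_servers = 27167        # (3) servers -> cache farm
        saved = to_clients - from_servers
        printf "served locally: %d bps (%.0f%% of client traffic)\n",
               saved, 100 * saved / to_clients
        # link (2) should only carry (2)+(3): requests plus misses
        printf "expected HTTP rate on link (2): %d bps\n", 3942 + 27167
    }'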
Are there any points I should consider?
--J