Re: [squid-users] Could this be a potential problem? Squid stops working and requires restart to work

From: Asim Ahmed _at_ Folio3 <_at_>
Date: Tue, 08 Dec 2009 11:23:07 +0500

Asim Ahmed @ Folio3 wrote:
> I am using Red Hat Enterprise Linux Server release 5.3 (Tikanga) with
> shorewall 4.4.4-2 and Squid 3.0.STABLE20-1. My problem is kind of weird:
> Squid stops working after about a day, and I need to restart it before
> users can browse or use the internet again. Any parameters to look for?
> Out of 2 GB RAM, only 200 MB is left free when I find Squid halted
> (before restarting it).
>
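When Squid next hangs, it is worth capturing the cache manager's view of
memory before restarting. A minimal sketch, assuming squidclient is
installed and Squid listens on the default port 3128:

    # general runtime information, including memory accounted by Squid
    squidclient -p 3128 mgr:info

    # per-pool memory utilisation
    squidclient -p 3128 mgr:mem
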
> One more question I have: in just two days my Squid cache has grown to
> 500 MB. I've set cache_dir to 10 GB ... I believe it will not take long
> to reach that limit! What happens then? Does Squid start discarding old
> cache objects, or what?
>
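On the cache_dir question: when the directory approaches its configured
size, Squid evicts old objects by itself, keeping disk usage between the
cache_swap_low and cache_swap_high watermarks according to the configured
replacement policy (LRU by default). A minimal squid.conf sketch, with an
illustrative path and the 10 GB figure mentioned above:

    # 10240 MB on disk, 16 first-level and 256 second-level directories
    cache_dir ufs /var/spool/squid 10240 16 256

    # start gentle eviction at 90% full, evict aggressively above 95%
    cache_swap_low 90
    cache_swap_high 95

    # evict least-recently-used objects first (the default policy)
    cache_replacement_policy lru
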
> Amos Jeffries wrote:
>> On Mon, 07 Dec 2009 14:47:22 -0900, Chris Robertson <crobertson_at_gci.net>
>> wrote:
>>
>>> Asim Ahmed @ Folio3 wrote:
>>>
>>>> Hi,
>>>>
>>>> I found this in cache.log when i restarted squid after a halt!
>>>>
>>>> CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
>>>> Maximum Resident Size: 0 KB
>>>> Page faults with physical i/o: 0
>>>> Memory usage for squid via mallinfo():
>>>> total space in arena: 7452 KB
>>>> Ordinary blocks: 7363 KB 285 blks
>>>> Small blocks: 0 KB 1 blks
>>>> Holding blocks: 14752 KB 94 blks
>>>> Free Small blocks: 0 KB
>>>> Free Ordinary blocks: 88 KB
>>>> Total in use: 22115 KB 297%
>>>> Total free: 88 KB 1%
>>>>
>>> This is not likely the source of your trouble...
>>>
>>> http://www.squid-cache.org/mail-archive/squid-users/200904/0535.html
>>>
>>> Chris
>>>
>>
>> That would be right if the values were negative, or large enough to
>> wrap a 32-bit counter back around to positive.
>>
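For context: glibc's mallinfo() reports every counter as a plain C "int",
so a heap past 2 GB wraps them, and percentages derived from the wrapped
values (like the 297% above) stop meaning anything. A minimal sketch of
reading those fields:

    #include <stdio.h>
    #include <malloc.h>

    int main(void)
    {
        /* struct mallinfo fields are plain "int"; on a 64-bit host
         * with a large heap they overflow and wrap, so treat the
         * reported numbers as hints rather than exact totals. */
        struct mallinfo mi = mallinfo();
        printf("arena (total space):  %d KB\n", mi.arena / 1024);
        printf("uordblks (in use):    %d KB\n", mi.uordblks / 1024);
        printf("fordblks (free):      %d KB\n", mi.fordblks / 1024);
        printf("hblkhd (mmap blocks): %d KB\n", mi.hblkhd / 1024);
        return 0;
    }
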
>> Since it's only ~300%, I'm more inclined to think it's a weird issue
>> with the Squid memory-cache objects.
>>
>> The bug of this week seems to be that a few people are now seeing
>> multiple-100% memory usage in Squid on FreeBSD 7+ 64-bit OS, due to
>> Squid memory-cache objects being very slightly larger than the malloc
>> page size. That causes 2x pages per node instead of just one, and our
>> use of fork() allocates N times the virtual memory, which mallinfo
>> might report.
>>
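To make the rounding effect concrete, a small hypothetical sketch (the
4 KB page and the node size here are illustrative, not Squid's actual
values):

    #include <stdio.h>

    int main(void)
    {
        /* A node even slightly larger than the allocator's page size
         * forces the allocator to hand out two pages per node. */
        unsigned long page_size = 4096;           /* illustrative page */
        unsigned long node_size = page_size + 8;  /* "slightly larger" */
        unsigned long pages = (node_size + page_size - 1) / page_size;
        printf("%lu-byte node on %lu-byte pages -> %lu pages\n",
               node_size, page_size, pages);      /* prints 2 */
        return 0;
    }
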
>> Asim Ahmed: does that match your OS?
>>
>>
>> Amos
>>
>>
>
> --
>
> Regards,
>
Received on Tue Dec 08 2009 - 06:23:17 MST

This archive was generated by hypermail 2.2.0 : Tue Dec 08 2009 - 12:00:02 MST