Solaris and running out of file descriptors

From: Peter K <pko@dont-contact.us>
Date: Fri, 28 Jan 2000 08:57:37 +0200 (UTC)

Hi

It might be that at my advanced age <heheheh> I have overlooked the
solution, so bear with me.

Ultra SPARC 1, Solaris 2.6

GNU'ified, gcc 2.95.2 (including GNU binutils 2.9.1), native
compiler/linker not used.

squid-2.2.STABLE5, including all relevant patches available on
squid.nlanr.net

configured --enable-gnuregex --enable-async-io --disable-icmp
           --enable-kill-parent-hack --enable-snmp
           --disable-cache-digests --enable-heap-replacement
           --disable-forw-via-db

compiled with CFLAGS '-O3 -fomit-frame-pointer'

During ./configure it found that FD_SETSIZE is 1024, yet the maximum
number of file descriptors it could open is 128.

/etc/system contains
  set rlim_fd_max = 1024
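
What I am tempted to add, pending confirmation that it is sane, is the
matching soft limit (only an assumption that raising it to 1024 is safe
here; reportedly a soft limit above 256 can upset some stdio-based
Solaris programs):

  * /etc/system: raise both the hard and soft per-process FD limits
  set rlim_fd_max = 1024
  set rlim_fd_cur = 1024

followed by a reboot and a quick check from the shell that starts Squid
(sh/ksh):

  $ ulimit -n
  1024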

dns_children is set to 8, which seems just enough to cater for peak
times.

As per FAQ 11.4 I added #define SQUID_FD_SETSIZE 1024, albeit somewhat
dubiously, since this is not referenced in any of the .[ch] files; it
seems that DEFAULT_FD_SETSIZE might be more applicable?
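
For anyone wanting to reproduce that check, something like this over
the unpacked tree is what I have in mind (SQUID_MAXFD is only a guess
at another candidate macro; plain find/egrep):

  $ cd squid-2.2.STABLE5
  $ find . -name '*.[ch]' | xargs egrep -n 'FD_SETSIZE|SQUID_MAXFD'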

In ./configure, though, the code which attempts to determine the
maximum number of file descriptors looks solid enough to me, and I am
reluctant to hack at it: this site depends rather heavily on Squid
(small pipe to the 'Net), which rules out much experimentation.
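
If hacking configure is best avoided, what I am tempted to try instead,
pending advice, is to raise the shell's soft limit and re-run configure,
on the assumption that its test simply reflects whatever limit the
shell hands it (the 1024 presumes the rlim_fd_max above already allows
it):

  $ ulimit -n 1024        # sh/ksh; under csh: limit descriptors 1024
  $ ./configure --enable-gnuregex --enable-async-io --disable-icmp \
                --enable-kill-parent-hack --enable-snmp \
                --disable-cache-digests --enable-heap-replacement \
                --disable-forw-via-db
  $ make clean && make

and then start Squid from a shell carrying the same limit.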

It is only occasionally that 'WARNING! Your cache is running out of
filedescriptors' appears in the cache.log files (19 times during the
last 9 days).

However, this should not really have to happen, methinks. The relevant
cache.log entries on startup are attached (the ICP FD is closed shortly
after). Couldst kindly advise?

TIA

Peter Kooiman                    | Voice  : +27-12-547-2846
                                 | Cell   : +27-82-321-3339
Box 81214, DOORNPOORT, 0017, RSA | e-mail : pko@paradigm-sa.com
