I tried gnumalloc, but it didn't help my problem:
Squid 1.1b15 responds very slowly (2-8 s) quite often (several times per minute).
We use Digital Unix 4.0A and ufs.
I used telnet to issue the HTTP requests directly (see the example below), to be sure it is not a client problem.
I was the only user of Squid, but there was a Netscape proxy running on the same machine.
I repeated the test against localhost, so it is not a problem in our local network.
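For reference, this is roughly how I tested by hand (a sketch; the host name and port are assumptions here, 3128 being only Squid's default http_port):

  % telnet www-cache.funet.fi 3128
  GET http://www.csc.fi/ HTTP/1.0
  (empty line ends the request)

The delay between the empty line and the first byte of the reply is what varies from well under a second up to several seconds.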
Example from access.log:
847378220.849 787 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378222.484 625 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378232.239 0 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378309.927 560 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378311.322 660 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378312.651 588 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
847378319.618 6364 128.214.248.23 TCP_HIT/200 2293 GET http://www.csc.fi/ - NONE/-
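Every one of these is a TCP_HIT for the same small object, yet the elapsed time (second field, in milliseconds) jumps from 0 ms up to over 6 seconds. To pull the slow hits out of a large log, something like this works (assuming the native log format above, where field 2 is the elapsed time and field 4 the result code):

  % awk '$4 ~ /TCP_HIT/ && $2 > 1000 { print $2, $1, $7 }' access.log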
And from cache.log:
96/11/07 16:51:52| --> Asking for 0 bytes
96/11/07 16:51:52| Removed 8 objects
96/11/07 16:51:53| Scanned 0 objects, Removed 0 expired objects
96/11/07 16:51:53| storeGetSwapSpace: Starting...
96/11/07 16:51:53| storeGetSwapSpace: Need 0 bytes...
96/11/07 16:51:53| storeGetSwapSpace: After Freeing Size: 3435760 kbytes
96/11/07 16:51:53| storeGetSwapSpace: Nothing to free with 3435760 Kbytes in use.
96/11/07 16:51:53| --> Asking for 0 bytes
96/11/07 16:51:53| Removed 8 objects
96/11/07 16:51:54| Scanned 23 objects, Removed 0 expired objects
96/11/07 16:51:54| storeGetSwapSpace: Starting...
96/11/07 16:51:54| storeGetSwapSpace: Need 0 bytes...
96/11/07 16:51:54| storeGetSwapSpace: After Freeing Size: 3435704 kbytes
96/11/07 16:51:54| storeGetSwapSpace: Nothing to free with 3435704 Kbytes in use
top shows that the squid process is in the "wait" state while the problem is occurring.
iostat and vmstat don't show anything special, except perhaps a bit more idle CPU during the slow periods:
/v/net/www-cache.funet.fi/latest/logs> vmstat 1 40
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
2 66 22 23K 2513 5091 334 15 293 0 8 0 384 841 706 22 17 61
3 65 22 23K 2514 5092 44 0 44 0 0 0 166 444 462 13 11 77
2 65 23 23K 2514 5090 44 0 44 0 0 0 309 504 574 15 12 72
> 2 65 23 23K 2498 5106 44 0 44 0 0 0 110 24 388 1 6 93
> 2 65 23 23K 2497 5106 44 0 44 0 0 0 48 107 311 3 6 91
> 2 65 23 23K 2486 5117 44 0 44 0 0 0 37 23 301 0 6 94
> 2 65 23 23K 2507 5104 44 0 44 0 0 0 87 181 371 11 7 82
5 63 22 23K 2558 5026 44 0 44 0 0 0 347 920 650 46 15 39
3 65 22 23K 2555 5024 44 0 44 0 0 0 692 1K 1K 39 18 43
3 65 22 23K 2545 5023 45 0 45 0 0 0 622 1K 1K 46 19 35
4 64 22 23K 2547 5019 44 0 44 2 0 0 718 1K 1K 49 21 30
3 65 22 23K 2540 5029 44 0 44 0 0 0 708 2K 1K 37 24 39
3 64 22 23K 2495 5028 44 0 44 0 0 0 574 1K 821 58 24 18
3 64 22 23K 2474 5036 43 0 43 0 0 0 467 1K 736 25 19 56
3 64 22 23K 2479 5032 43 0 43 0 0 0 506 1K 794 30 18 52
3 64 22 23K 2507 5030 43 0 43 0 0 0 487 806 837 23 15 63
3 65 22 23K 2520 5027 79 10 61 0 13 0 491 1K 816 25 19 57
3 65 22 23K 2545 5026 45 0 45 0 0 0 512 1K 885 39 21 40
3 65 22 23K 2539 5030 44 0 44 0 0 0 893 3K 1K 25 32 44
2 66 22 23K 2680 5030 45 0 45 0 0 0 417 1K 799 32 16 52
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
2 66 22 23K 2622 5092 44 0 44 0 0 0 333 990 696 17 15 68
2 65 22 23K 2644 5089 44 0 44 0 0 0 387 1K 764 32 16 52
2 65 22 23K 2646 5089 43 0 43 0 0 0 224 694 532 18 12 70
3 64 22 23K 2641 5089 43 0 43 0 0 0 255 704 595 18 12 70
2 64 23 23K 2632 5089 43 0 43 0 0 0 228 765 586 17 12 70
3 64 22 23K 2629 5089 43 0 43 0 0 0 178 414 465 15 11 75
3 64 22 23K 2615 5093 43 0 43 0 0 0 322 750 607 25 13 62
2 65 22 23K 2610 5093 43 0 43 0 0 0 693 574 1K 9 19 72
3 64 22 23K 2611 5092 43 0 43 0 0 0 335 670 661 26 14 60
4 63 22 23K 2605 5096 43 0 43 0 0 0 445 1K 771 28 16 57
2 64 23 23K 2608 5093 43 0 43 0 0 0 390 985 734 19 15 66
> 2 64 23 23K 2604 5096 43 0 43 0 0 0 94 111 371 5 7 87
> 2 64 23 23K 2602 5096 43 0 43 0 0 0 64 174 323 8 7 85
3 64 22 23K 2596 5093 43 0 43 0 0 0 219 638 498 24 12 64
3 64 22 23K 2599 5089 43 0 43 0 0 0 505 1K 789 51 21 28
3 64 22 23K 2596 5090 43 0 43 0 0 0 365 1K 709 16 16 68
3 64 22 23K 2595 5090 43 0 43 0 0 0 529 1K 834 53 19 28
3 64 22 23K 2562 5101 43 0 43 0 0 0 643 1K 916 49 21 31
3 64 22 23K 2554 5096 43 0 43 0 0 0 653 1K 963 39 23 38
I> >% zcat access.log.0.gz | nawk -f access-times.awk
I> >         local cached   remote cached   remote proxied   no proxy,cache
I> >Count:   114132         0               0                253682
I> >Time:    0.8s           0.0s            0.0s             4.3s
I> >Time:    2.0s           0.0s            0.0s             5.7s
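The access-times.awk script itself is not in the quoted mail; here is a rough sketch of an awk program that would produce that kind of breakdown (the category tests and field positions are my guesses, not the original):

  # access-times-sketch.awk: per-category count and mean elapsed time.
  # Assumes native log format: field 2 = elapsed ms, field 4 = result
  # code, field 9 = hierarchy code.
  function add(c) { n[c]++; t[c] += $2 }
  $4 ~ /HIT/     { add("local cached");   next }
  $9 ~ /_HIT/    { add("remote cached");  next }
  $9 ~ /PARENT/  { add("remote proxied"); next }
                 { add("no proxy,cache") }
  END { for (c in n) printf "%-16s %8d  %6.1fs\n", c, n[c], t[c] / n[c] / 1000 }

run as:  % zcat access.log.0.gz | nawk -f access-times-sketch.awk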
> Squid estimates how many store buckets to use based on your 'cache_swap'
> setting. It assumes that the average object size is 20k. If you
> decrease 'store_objects_per_bucket' then you increase the total number
> of buckets, and therefore the rate at which buckets are "cleaned out."
I set store_objects_per_bucket to 30, without any improvement.
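For what it's worth, the back-of-envelope arithmetic behind the quoted advice, using the ~3.4 GB of swap in use from cache.log above and the assumed 20 KB average object size (how Squid actually sizes the hash is only inferred from Duane's description, so treat the numbers as a sketch):

  % nawk 'BEGIN {
      swap_kb = 3435760; avg_kb = 20; per_bucket = 30
      objects = swap_kb / avg_kb
      printf "~%d objects -> ~%d hash buckets\n", objects, objects / per_bucket
  }'
  ~171788 objects -> ~5726 hash buckets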
>
> But I don't think that is your problem. You may want to try building
> squid with GNU libmalloc. Brian Denehy <B-Denehy@adfa.oz.au> reported
> a significant performance improvement on DEC Alpha when using GNU malloc.
>
> Duane W.
>
--
Pekka Järveläinen                        Tel. +358 9 457 2467
Centre for Scientific Computing/FUNET    GSM  +358 40 543 7856
Tietotie 6, P.O. Box 405                 fax. +358 9 457 2302
02101 ESPOO, FINLAND                     jarvelai@csc.fi