> how about lowering memory_pools_limit?
Ok, tried that.
> BTW, imho the default should be quite low, with the user given the
> ability to raise it, not the other way around. Whoever needs a high
> memory_pools_limit typically has high traffic and is about to tweak
> things anyway.
I agree.
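
For anyone following along, lowering the limit is a one-line
squid.conf change. The 5 MB here is only an illustrative value, not a
suggested default:

    memory_pools on
    memory_pools_limit 5 MB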
> Store Mem Buffers idle. This is after a spike of activity. Mempool
> idle limits should handle this. (Could you try chunked_mempools,
> btw? It should keep idle mem lower automatically, and as a side
> effect it tests chunked_mempools in the real world. I have some
> worries about how chunked_mempools behaves with spikes of traffic,
> where free-space fragmentation is harder to deal with. In my tests
> it has been ok.)
I am already walking on the edge here, and I'd prefer not to
mix potential problems. I will do that when I'm entirely satisfied
with the NTLM code - which I _must_ have.
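
For readers unfamiliar with what chunked_mempools would buy here, a
minimal sketch in C of the general technique - not the actual Squid
MemPools code; names and sizes are made up. Objects are carved out of
fixed-size chunks, and a chunk whose slots are all idle is freed as a
whole, which is what keeps idle memory low automatically:

    #include <stdlib.h>
    #include <string.h>

    #define OBJ_SIZE       128  /* example object size */
    #define OBJS_PER_CHUNK 64   /* example chunk capacity */

    typedef struct chunk {
        struct chunk *next;
        int used;                               /* objects handed out */
        unsigned char free_map[OBJS_PER_CHUNK]; /* 1 = slot free */
        unsigned char mem[OBJS_PER_CHUNK * OBJ_SIZE];
    } chunk;

    static chunk *chunks = NULL;

    /* allocate one object, adding a chunk only if all are full */
    static void *pool_alloc(void)
    {
        chunk *c;
        int i;
        for (c = chunks; c; c = c->next) {
            if (c->used == OBJS_PER_CHUNK)
                continue;
            for (i = 0; i < OBJS_PER_CHUNK; i++) {
                if (c->free_map[i]) {
                    c->free_map[i] = 0;
                    c->used++;
                    return c->mem + i * OBJ_SIZE;
                }
            }
        }
        c = malloc(sizeof(*c));
        if (!c)
            return NULL;
        memset(c->free_map, 1, sizeof(c->free_map));
        c->free_map[0] = 0;
        c->used = 1;
        c->next = chunks;
        chunks = c;
        return c->mem;
    }

    /* return an object; a fully idle chunk is freed whole, so idle
     * memory goes back to the allocator instead of accumulating */
    static void pool_free(void *obj)
    {
        chunk *c, **link;
        for (link = &chunks; (c = *link) != NULL; link = &c->next) {
            unsigned char *p = (unsigned char *) obj;
            if (p >= c->mem && p < c->mem + sizeof(c->mem)) {
                c->free_map[(p - c->mem) / OBJ_SIZE] = 1;
                if (--c->used == 0) {
                    *link = c->next;
                    free(c);
                }
                return;
            }
        }
    }

This also makes the spike worry above concrete: a single live object
pins its entire chunk, so scattered survivors after a traffic spike
can keep many mostly-empty chunks alive.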
> A bit worrying is the 235M usage. Mempools account for 156M, and
> because you have very few small allocs, malloc overhead is not an
> issue. It seems like some 100M is dark matter somewhere. It may be
> free-space fragmentation (look at mallinfo() free blocks).
> chunked_mempools tries to reduce free-space fragmentation, but I
> guess memory_pools_limit would also reduce it a lot. Imagine that
> one of the idle items sits 236M deep into the heap: your process
> size is then 236M, even if everything below it is unallocated.
Nod.
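
For reference, the mallinfo() check suggested above is straightforward
(glibc-specific; field names as in <malloc.h>):

    #include <stdio.h>
    #include <malloc.h>  /* glibc-specific */

    /* print heap stats: a large fordblks spread over many ordblks,
     * with a small keepcost, is the classic signature of free-space
     * fragmentation - free memory the process cannot return to the
     * OS because live data sits above it */
    static void report_heap(void)
    {
        struct mallinfo mi = mallinfo();
        printf("heap size (arena):         %d\n", mi.arena);
        printf("free blocks (ordblks):     %d\n", mi.ordblks);
        printf("total free (fordblks):     %d\n", mi.fordblks);
        printf("in use (uordblks):         %d\n", mi.uordblks);
        printf("releasable top (keepcost): %d\n", mi.keepcost);
    }

    int main(void)
    {
        report_heap();
        return 0;
    }

A big fordblks with a small keepcost means the free space is trapped
below live allocations - exactly the "idle item deep into the heap"
situation described above.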
> In any case, memory_pools_limit is too simple for all cases. Imho
> it should account for some kind of time factor, and act per pool.
> But that's for later.
Sure.
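
Purely as a sketch of what such a per-pool time factor might look
like - none of these names exist in Squid, this is just an assumption
about the "for later" idea - each pool remembers when it last saw
demand, and a periodic housekeeping pass releases idle memory only
from pools that have been quiet for a while:

    #include <time.h>

    /* hypothetical per-pool bookkeeping, not the real Squid structs */
    typedef struct mem_pool {
        const char *name;
        size_t idle_bytes;  /* memory held but not handed out */
        time_t last_used;   /* time of last successful allocation */
    } mem_pool;

    #define IDLE_TIMEOUT 300    /* example: 5 minutes of quiet */

    /* called periodically; frees idle memory only in quiet pools, so
     * a busy pool keeps its cache through a spike of traffic */
    static void pool_housekeep(mem_pool *pools, int n, time_t now,
                               void (*release_idle)(mem_pool *))
    {
        int i;
        for (i = 0; i < n; i++) {
            if (pools[i].idle_bytes > 0 &&
                now - pools[i].last_used > IDLE_TIMEOUT)
                release_idle(&pools[i]);
        }
    }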
-- 
    /kinkie