On 03/01/2014 09:11 AM, Niki Gorchilov wrote:
> I'm musing on the performance implications of using rock store on SSD.
There is very little experience with SSDs for Squid, especially when
using Rock store (a relatively new feature). Folks naturally expect SSDs
to be faster, but I have not seen (or do not recall) any high-quality
comparisons specific to Squid and Rock store. Someday, we will make one.
> As per my understanding, the underlying filesystem is unaware of any
> unused blocks in the big rock file, so using fstrim or the "discard"
> mount option will have no effect.
Some Linux file systems know that the blocks are unused (at least the
blocks at the end of the file), but unused blocks ought to be irrelevant
for a cache in a steady state (the common/interesting case) because
there are no unused blocks in that state.
> Once the rock file is full, SSD I/O performance will degrade
> considerably, due to the read-erase-modify-write cycle on every rock
> change.
There should be no erase-modify steps if your rock slot size is a
multiple of OS page and disk block sizes. Only read-write. If that is
not what you see, it may be a bug.
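For example, with the usual 4 KB OS pages and 512 B or 4 KB disk blocks,
a 16 KB slot stays a multiple of both. A minimal sketch, assuming a Squid
build with the large-rock slot-size= cache_dir option (the path and cache
size below are just placeholders):

    # 20 GB rock cache on the SSD; 16 KB slots are a multiple of
    # 4 KB OS pages and 512 B / 4 KB disk blocks
    cache_dir rock /ssd/squid-rock 20000 slot-size=16384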
Also, for a typical large cache, it is probably more like a
write-write-write-read-write-... cycle because a high portion of cache
hits should come from the memory cache while nearly all cacheable misses
are written to disk.
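If you want more hits served from RAM, growing the memory cache is the
usual knob. A rough sketch (the sizes below are arbitrary and should be
tuned to your box):

    # serve hot objects from RAM so most hits never touch the SSD
    cache_mem 2048 MB
    maximum_object_size_in_memory 512 KB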
Alex.