Dealing with store_table shared by ufs-based cache_dirs

From: Alex Rousskov <rousskov_at_measurement-factory.com>
Date: Wed, 17 Oct 2012 11:05:08 -0600

On 10/17/2012, Alex Rousskov wrote:

> As a part of Large Rock work, I will also try to fix at least one of
> Store API quality problems that complicate any Store-related
> improvements:

> 4) Get rid of the global store_table. Make it local to non-shared
> caching code.

> One of the biggest challenges would be to handle a combination of
> original non-shared caches and modern shared caches. I will discuss this
> in a separate email.

All "original non-shared caches" use a single store_table global as
their index.

All "modern shared caches" use private, per-cache indexes.

Let's look at the disk cache coordinator class (the StoreHashIndex
replacement) and its "find StoreEntry by key" get() method specifically.
The method needs to return a StoreEntry if one of the cache_dirs has the
object with the right key. See StoreController::get for the current
[misplaced] code.

Let's focus on a case where we have several non-shared disk caches.
Since all those ufs-based cache_dirs share the single global
store_table index, it is pointless to ask each of them about the same
key -- every repeated question will be answered from the same global
index. This wastes CPU, and the waste might be noticeable in
configurations with many cache_dirs.

I can think of four sane ways to approach this:

1) KISS. Simple but inefficient.

> StoreEntry *Disks::get(const cache_key *key) {
>     // ask each cache_dir
>     for each cache_dir sd {
>         if (StoreEntry *e = sd->get(key))
>             return e;
>     }
>
>     return NULL; // not on disk
> }

Each cache dir will be asked whether it has the object until the object
is found or there are no more cache_dirs to ask. For ufs cache_dirs,
such questions are pointless except for the very first one.

2) Optimize some hits. Screw all misses. Misses are expensive anyway.

> StoreEntry *Disks::get(const cache_key *key) {
>     // XXX: Layering violation. Check the ufs-only global index first.
>     if (StoreEntry *e = GlobalUfsIndex->get(key))
>         return e;
>
>     // ask each cache_dir
>     for each cache_dir sd {
>         if (StoreEntry *e = sd->get(key))
>             return e;
>     }
>
>     return NULL; // not on disk
> }

If the object is not in the global ufs index, each cache_dir will
still be asked whether it has the object. For ufs cache_dirs, such
questions are pointless because the global index has already given the
answer. The current code implements this option.

3) Complex but optimal:

StoreEntry *Disks::get(const cache_key *key) {
    // check one cache_dir from the shared-index group; they all
    // answer from the same index, so asking the first one is enough
    if (!DirsSharingIndex.empty()) {
        if (StoreEntry *e = DirsSharingIndex[0]->get(key))
            return e;
    }

    // ask each cache_dir that maintains an individual index
    for each cache_dir sd in DirsWithPrivateIndexes {
        if (StoreEntry *e = sd->get(key))
            return e;
    }

    return NULL; // not on disk
}

This option avoids any repeated questions but requires each cache_dir
to register in one of two groups: dirs that share the same index and
dirs that have private, isolated indexes. This still leaks the
index-sharing fact from the UFS layer into the core, but in a less
visible way.
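
For example, cache_dir registration could look roughly like this (an
illustrative sketch only; usesSharedGlobalIndex() and the two group
containers are hypothetical names, not existing Squid code):

void Disks::registerDir(SwapDir *sd) {
    // group dirs by how they index their entries so that get() only
    // has to query the shared-index group once
    if (sd->usesSharedGlobalIndex()) // hypothetical predicate
        DirsSharingIndex.push_back(sd);
    else
        DirsWithPrivateIndexes.push_back(sd);
}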

4) Remove the problem. Let each UFS cache_dir have its own private
store_table, at the expense of some RAM/CPU waste.

This solution removes the problem completely because all cache_dirs
become the same when it comes to index sharing. However, it will
probably waste some RAM in configurations with a large number of ufs
cache_dirs, because N small store_tables will probably consume more
space than one large store_table representing N dirs. It will probably
also waste some CPU cycles, because querying N small store_tables is
slower than querying one large store_table representing N dirs.

The code sketch will be the same as in #1 but with different implications.
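
For example, an individual ufs cache_dir lookup might end up looking
roughly like this (a sketch only, assuming each dir gains a private
hash table with the same lookup interface as today's global
store_table; privateIndex is a hypothetical member):

StoreEntry *UfsSwapDir::get(const cache_key *key) {
    // consult this dir's own private table instead of the global
    // store_table; the lookup itself works the same way
    return static_cast<StoreEntry *>(hash_lookup(privateIndex, key));
}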

The same problem may be present in other methods where querying indexes
of individual cache dirs is needed.

I am tempted to do #4. Which approach do you think we should use?

Thank you,

Alex.