The NWayCache is a C++ implementation of an N-way set associative cache. You can think of it as an STL map<>, but it behaves quite differently: it has a fixed capacity, items expire after a timeout, and inserting a new item may evict an older one. OK, an NWayCache is not much like map.
The NWayCache allocates a chunk of memory and subdivides it into buckets, each bucket containing N slots/items.
The template is:
namespace NWayCache {
    template <class K,
              class T,
              size_t N,
              class Hasher = GenericHasher<K>,
              class CacheLockObject = NullLockObject,
              class BucketLockObject = NullLockObject>
    class Cache;
}
The parameter K is the key type by which the items are found. T is the item type that contains the data associated with the key. N is the number of slots in each bucket. Specifying N=1 turns this into a simple 1-way (direct-mapped) cache, which you probably don't want.
Hasher is the hash class for generating a hash value from a key. It must have a method hash(const K&) returning an integral type, preferably an unsigned long. The default hasher is GenericHasher, which simply returns the key. If you #define NWAYCACHE_SUPPORT_STRING before including the header file, a specialized template is included that supports reasonable hashing of std::string.
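If your key is a custom type you supply your own hasher. A rough sketch (Point and PointHasher are made-up names; that hash() may be a const member function is an assumption on my part, based only on the requirement above):

struct Point { int x, y; };

// The hasher only has to provide hash(const K&) returning an integral type.
struct PointHasher {
    unsigned long hash(const Point &p) const {
        return (unsigned long)p.x * 2654435761UL ^ (unsigned long)p.y;   // simple coordinate mix
    }
};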
The parameters CacheLockObject and BucketLockObject specify the locking policy. The default, NullLockObject, is a no-op and disables any kind of locking between threads. If you have included <pthread.h> before including <nwaycache.h> you can use MutexLockObject. Using MutexLockObject as the CacheLockObject keeps the whole cache locked while searching, inserting, erasing and accessing it. Specifying MutexLockObject as the BucketLockObject locks only the relevant bucket and thus increases concurrency between threads. It does not make sense to specify MutexLockObject for both the cache and the bucket lock objects.
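Putting the parameters together, a typical instantiation could look like this sketch. It assumes GenericHasher, NullLockObject and MutexLockObject live in the NWayCache namespace; IntStringCache is just a name picked for the examples below:

#include <pthread.h>      // must come before nwaycache.h for MutexLockObject to be available
#include <string>
#include <nwaycache.h>

// A 4-way cache mapping int to std::string, locking per bucket rather than the whole cache.
typedef NWayCache::Cache<int, std::string, 4,
                         NWayCache::GenericHasher<int>,  // the default hasher: returns the key
                         NWayCache::NullLockObject,      // no whole-cache lock
                         NWayCache::MutexLockObject>     // per-bucket locking
        IntStringCache;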
The Cache class has a number of types available to the outside, among them SlotReference_t and iterator (both used below).
The constructor is simple:
Cache(size_t num_buckets, time_t max_item_age)
The num_buckets argument specifies the number of buckets the cache should contain; the total number of items the cache can hold is num_buckets*N. The argument max_item_age specifies how long an item is valid, in seconds. Specifying 0 as max_item_age is stupid.
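Continuing the sketch from above:

// 1024 buckets of 4 slots each: room for up to 4096 items,
// each valid for 60 seconds after its last insert/update.
IntStringCache cache(1024, 60);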
In the namespace an object NotAvailable is defined. This is a special object used to specify that an item is not present. A slot can be in 3 states: empty, occupied by an item with data, or occupied by an item marked NotAvailable.
And when you ask the cache about a key, the answer will be one of these 3: the item is present (with data), the item is known to be not present, or the item is unknown to the cache.
Using tri-state has advantages over a simple known/not-known scheme: for example, you can cache negative results ("this key does not exist") without confusing them with plain cache misses.
There are two insert methods (see also crupdate later on):
bool insert(const K &key, const T &data);
bool insert(const K &key, const NotAvailable_t&);
Assuming that you have a simple <int,string> cache, you can insert items into it like this:
cache.insert(42, "fortytwo");
cache.insert(43, NotAvailable);
The items will automatically go away when they time out. Both methods will return true on success, and false if they fail (because the item is already in the cache).
The new item may push out older items within the bucket.
The Cache class has two methods that deal with finding items in the cache:
bool find(SlotReference_t &slot_reference, const K &key);
bool exists(const K &key);
exists() is a simple do-we-know-this-item method. It does not differentiate between present and not present: it returns false if the item is unknown, and true if it is known (whether present or not).
find() searches for the key, and returns false if it could not find it. On success it returns true and modifies the SlotReference to point to the item. The SlotReference also locks the cache and bucket - that is why find() cannot return a simple iterator. Example:
SlotReference_t sr;
if(cache.find(sr, 42)) {
    printf("Item found\n");
    if(sr.available())
        printf("Item is available, value=%s\n", sr.data().c_str());
    else
        printf("Item is not available\n");
} else
    printf("Not found\n");
You can update items in the cache with new data by calling Cache::update() or by calling update() on a slot reference.
bool update(const K &key, const T &data);
bool update(const K &key, const NotAvailable_t&);
void update(SlotReference_t &sr, const T &data);
void update(SlotReference_t &sr, const NotAvailable_t&);
The first two forms (those without a slot reference) fail if the item is not in the cache.
You can also update an item with slot reference only:
void SlotReference_t::update(const T &data);
void SlotReference_t::update(const NotAvailable_t&);
void SlotReference_t::update();
The first two forms update the item with new data/no data. The last form tells the cache that you have modified the item's data directly and that its timestamp should be updated (the first two forms do this automatically).
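A small sketch of both styles, using the IntStringCache from the earlier examples:

if(!cache.update(42, "FORTYTWO"))        // by key: fails if 42 is not currently cached
    printf("42 is not in the cache\n");

IntStringCache::SlotReference_t sr;
if(cache.find(sr, 42))
    sr.update("FORTYTWO");               // via the slot reference; refreshes the timestamp as well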
Like updates, you can erase items directly in the cache or via a slot reference:
void erase(const K &key);
void erase(SlotReference_t &slot_reference);
The slot reference can also erase the item it is pointing to:
void erase();
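For example, with the same cache as before:

cache.erase(42);                         // erase by key

IntStringCache::SlotReference_t sr;
if(cache.find(sr, 43))
    sr.erase();                          // erase via the slot reference (cache.erase(sr) also works)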
You may have noticed above that insert fails if the item is already known, and update fails if the item is not known. This could lead to code like this:
if(!cache.insert(42, "fortytwo"))
    cache.update(42, "fortytwo");
This is slightly inefficient and, more importantly, introduces a race condition between threads. To remedy this, the cache has a special operation:
void crupdate(const K &key, const NotAvailable_t&);
void crupdate(const K &key, const T &data);
These two methods will insert the item if not known, or update it if known. crupdate operations are atomic.
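With crupdate the racy insert/update pair shown above becomes a single call:

cache.crupdate(42, "fortytwo");               // inserts 42 if unknown, updates it otherwise
cache.crupdate(44, NWayCache::NotAvailable);  // records that 44 is known to be not present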
The cache keeps statistics about the hit ratio, and you can access them with the following API:
void queryStatistics(unsigned long &num_positive_hits, unsigned long &num_negative_hits, unsigned long &num_misses, bool reset=false)
The statistics only cover how well find() and exists() perform. Please note that erase() and update() call find() themselves, so they affect the statistics too.
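A sketch of reading the counters; counting both positive and negative hits as hits in the ratio is my interpretation:

unsigned long positive_hits, negative_hits, misses;
cache.queryStatistics(positive_hits, negative_hits, misses, true);   // true = reset after reading

unsigned long total = positive_hits + negative_hits + misses;
if(total > 0)
    printf("hit ratio: %.1f%%\n", 100.0 * (positive_hits + negative_hits) / total);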
Some would argue that keeping statistics should be left to the caller ("what you don't use you don't pay for"); the cache keeps the counters anyway.
If you access the cache from multiple threads, keep in mind that the cache statistics are not guaranteed to be coherent unless you use MutexLockObject for the CacheLockObject.
If you use MutexLockObject for CacheLockObject or BucketLockObject, the following will deadlock:
SlotReference_t slot_reference;
cache.find(slot_reference, 42);
...
cache.insert(42, "fortytwo");
The culprit is that the slot reference keeps a lock on the cache/bucket, and insert() needs those locks too. One way to avoid the deadlock is to release the slot reference before calling back into the cache:
SlotReference_t slot_reference;
cache.find(slot_reference, 42);
slot_reference.release();
cache.insert(42, "fortytwo");

It is safe to call release() more than once, including on invalid slot references.
If you copy a slot reference, the old slot reference becomes invalid. This is due to the locks the slot reference holds: when you copy a slot reference the locks are transferred to the copy, which avoids deadlocks.
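In practice that means you must use the copy, not the original, afterwards. A sketch:

IntStringCache::SlotReference_t a;
if(cache.find(a, 42)) {
    IntStringCache::SlotReference_t b = a;   // the locks move to b; a is now invalid
    // keep using b from here on; calling a.release() is harmless (see above)
}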
You can use an iterator to walk through all the items in the cache. This is not thread safe and no locks are taken:
for(Cache<int,string,4>::iterator iter=cache.begin(); iter!=cache.end(); ++iter) ...
An iterator becomes invalid when you insert or remove items in the cache.
nwaycache has been tested (has passed unit tests) with:
I am not happy with the iterators. There should be two of them: const and non-const. They should also do locking of the cache and buckets. That requires transferring locks in copy-cons/assignment-ops, and that will make the iterators a bit too heavyweight for my taste.
There are opportunities for reducing the amount of code generated when instantiating similar templates, but that would require moving some code around.