CacheMap

CacheMap extends the basic hashmap type by placing a limit on the number of entries held at any given time. In addition, CacheMap orders its entries so that the most recently accessed are at the head of an internal queue and the least recently accessed are at the tail. When the queue becomes full, entries are dropped from the tail and their slots are reused to house new cache entries.

In other words, it retains the most recently used (MRU) entries and drops the least recently used (LRU) ones once capacity is reached.

This is useful for keeping commonly accessed items around while limiting the amount of memory used. Typically, the capacity would be set in the thousands (via the constructor).
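For example, a small cache could be constructed and exercised as follows. This is an illustrative sketch only: the module path, the class-style instantiation with new, and the char[]/int template arguments are assumptions and may differ in your version of the library.

```d
// Sketch only: import path and instantiation syntax are assumptions
import tango.util.container.more.CacheMap;

void main ()
{
    // a cache holding at most two entries
    auto cache = new CacheMap!(char[], int)(2);

    cache.add ("a", 1);
    cache.add ("b", 2);
    cache.add ("c", 3);     // capacity reached: the LRU slot ("a") is reused

    int value;
    assert (cache.get ("c", value));    // present; value is now 3
    assert (!cache.get ("a", value));   // "a" was evicted
}
```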

Constructors

this
this(uint capacity)

Construct a cache with the specified maximum number of entries. Additions to the cache beyond this number will reuse the slot of the least-recently-referenced cache entry.

Members

Functions

add
bool add(K key, V value)

Place an entry into the cache and associate it with the provided key. Note that there can be only one entry for any particular key. If two entries are added with the same key, the second effectively overwrites the first.
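The overwrite behaviour might look like this in practice (a sketch; the import path and instantiation details are assumptions, as above):

```d
// Sketch only: import path and instantiation syntax are assumptions
import tango.util.container.more.CacheMap;

void main ()
{
    auto cache = new CacheMap!(char[], int)(10);

    cache.add ("key", 1);
    cache.add ("key", 2);   // same key: effectively overwrites the first entry

    int value;
    assert (cache.get ("key", value));
    assert (value is 2);    // only the second entry remains
}
```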

get
bool get(K key, ref V value)

Get the cache entry identified by the given key, placing it in value. Returns false if there is no such entry.

opApply
int opApply(int delegate(ref K key, ref V value) dg)

Iterate from MRU to LRU entries
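Since opApply is defined, the cache can be traversed with foreach, visiting entries in MRU-to-LRU order. A sketch (instantiation details are assumptions, as above):

```d
// Sketch only: import path and instantiation syntax are assumptions
import tango.util.container.more.CacheMap;

void main ()
{
    auto cache = new CacheMap!(char[], int)(100);
    cache.add ("one", 1);
    cache.add ("two", 2);

    // visits entries from most to least recently used
    int count;
    foreach (key, value; cache)
        ++count;
    assert (count is 2);
}
```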

take
bool take(K key)

Remove the cache entry associated with the provided key. Returns false if there is no such entry.

take
bool take(K key, ref V value)

Remove the cache entry associated with the provided key, placing its value in value. Returns false if there is no such entry.
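The two take overloads might be used together like so (a sketch; instantiation details are assumptions, as above):

```d
// Sketch only: import path and instantiation syntax are assumptions
import tango.util.container.more.CacheMap;

void main ()
{
    auto cache = new CacheMap!(char[], int)(10);
    cache.add ("key", 42);

    int value;
    assert (cache.take ("key", value));  // removes the entry; value is 42
    assert (!cache.take ("key"));        // already removed: returns false
}
```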

Properties

size
uint size [@property getter]

Return the number of entries currently held in the cache

Static functions

reaper
void reaper(K k, R r)

Reaping callback for the underlying hashmap, acting as a trampoline