HashFile

HashFile implements a simple mechanism to store and recover a large quantity of data for the duration of the hosting process. It is intended to act as a local cache for a remote data source, or as a spillover area for large in-memory cache instances.

Note that any and all stored data is rendered invalid the moment a HashFile object is garbage-collected.

The implementation follows a fixed-capacity record scheme, where content can be rewritten in-place until said capacity is reached. At such time, the altered content is moved to a larger capacity record at end-of-file, and a hole remains at the prior location. These holes are not collected, since the lifespan of a HashFile is limited to that of the host process.
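
The following is a rough sketch of that idea in isolation. Record, rewrite, the field names and the capacity-doubling rule are invented here purely for illustration and do not appear in HashFile itself:

// Hypothetical sketch of the record-growth scheme described above;
// none of these names exist in HashFile.
struct Record
{
    ulong offset;    // position of the record within the backing file
    uint  capacity;  // fixed capacity allocated to this record
}

// Rewrite in place while the new content still fits; otherwise move the
// record to a larger slot at end-of-file, leaving a hole at the old offset.
Record rewrite (Record r, size_t contentLength, ref ulong endOfFile)
{
    if (contentLength <= r.capacity)
        return r;                          // overwrite at r.offset

    uint grown = r.capacity;
    while (grown < contentLength)
        grown *= 2;                        // choose the next larger capacity

    Record moved = {endOfFile, grown};     // new slot appended at end-of-file
    endOfFile += grown;                    // the old slot is never reclaimed
    return moved;                          // the index now points at the new slot
}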

All index keys must be unique. Writing to the HashFile with an existing key will overwrite any previous content. What follows is a contrived example:

alias HashFile!(char[], char[]) Bucket;

auto bucket = new Bucket ("bucket.bin", HashFile.HalfK);

// insert some data, and retrieve it again
auto text = "this is a test";
bucket.put ("a key", text);
auto b = cast(char[]) bucket.get ("a key");

assert (b == text);
bucket.close;

Constructors

this
this(const(char[]) path, BlockSize block, uint initialRecords)

Construct a HashFile with the provided path, record-size, and initial record count. The latter causes records to be pre-allocated, saving a certain amount of growth activity. Selecting a record size that roughly matches the serialized content will limit 'thrashing'.
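
A brief usage sketch, reusing the Bucket alias from the introductory example; the record size and count chosen here are arbitrary:

alias HashFile!(char[], char[]) Bucket;

// half-kilobyte records, with space for 10_000 of them pre-allocated
auto bucket = new Bucket ("bucket.bin", HashFile.HalfK, 10_000);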

Members

Functions

close
void close()

Close this HashFile -- all content is lost.

get
V get(K key, bool clear)

Return the serialized data for the provided key, or null if the key was not found.
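
Continuing the introductory example, a lookup on an absent key simply returns null ("no such key" is just an illustrative value, and the single-argument call mirrors that example):

auto hit  = cast(char[]) bucket.get ("a key");
auto miss = cast(char[]) bucket.get ("no such key");

assert (hit == "this is a test");
assert (miss is null);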

length
ulong length()

Return the currently populated size of this HashFile.

path
const(char[]) path()

Return the path where this HashFile is located.

put
void put(K key, V data, K function(K) retain)

Write a serialized block of data, and associate it with the provided key. All keys must be unique, and it is the responsibility of the programmer to ensure this. Reusing an existing key will overwrite previous data.
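
For instance, continuing with the same bucket, writing twice under one key keeps only the most recent data ("config" and the two values are illustrative):

bucket.put ("config", "first version");
bucket.put ("config", "second version");

assert (cast(char[]) bucket.get ("config") == "second version");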

remove
void remove(K key)

Remove the provided key from this HashFile. This leaves a hole in the backing file.
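
Continuing the same sketch, a removed key subsequently behaves like one that was never written ("stale" is illustrative):

bucket.put ("stale", "old data");
bucket.remove ("stale");

// the key is gone, though the space it occupied in the file is not reclaimed
assert (bucket.get ("stale") is null);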

Structs

BlockSize
struct BlockSize

Define the capacity (block-size) of each record.

Variables

EightK
enum BlockSize EightK;
EighthK
enum BlockSize EighthK;
FourK
enum BlockSize FourK;
HalfK
enum BlockSize HalfK;
OneK
enum BlockSize OneK;
QuarterK
enum BlockSize QuarterK;
SixteenK
enum BlockSize SixteenK;
SixtyFourK
enum BlockSize SixtyFourK;
ThirtyTwoK
enum BlockSize ThirtyTwoK;
TwoK
enum BlockSize TwoK;