
Interview question: a data structure for millions of stock prices

This was in answer to a job posting for what turned out to be a hedge-fund quant-like position.  I'll not mention their name, since they appear to still be in business.  My description below betrays my gross inexperience with large-scale data structures, but it is interesting that I was willing to engineer and calculate every page of a multi-million-record data structure instead of just saying how to put it into MySQL (which is what I would say now).
 
We are presented with the problem of ensuring that approximately 26.1 million records of financial data can be accessed quickly and repeatedly. A relational database implementation provides the system designers with a very clear and robust logical model with which to organize the data, but of course this says nothing about how the data themselves are organized in storage. I propose that storing these data in a B+-tree offers the most efficient access. I will omit a detailed description of this structure; suffice it to say that it is a tree of arbitrary order that includes a balancing subroutine as part of its insert function.
Consider these data stored in a relation consisting of the attributes “Day”, “Ticker”, and “Price”. Since each stock can have no more than one closing price per day, we may let the pair (Day, Ticker) uniquely identify each record. Let us assume that a “page” consists of 4096 bytes, that each tree node is represented as a page, that each attribute requires 4 bytes of storage, and that a pointer to a location in memory or on the disk requires 4 bytes of storage. A node holding n 12-byte records must also hold n + 1 child pointers, so we need 16n + 4 ≤ 4096, which gives n = 255. Each node in the tree can therefore hold 255 records, and our tree is of order 256: each node holds 255 records and has 256 children.
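As a sanity check, that capacity figure can be computed directly; here is a minimal sketch in Python under the page and field sizes assumed above (the constant names are my own):

    PAGE_BYTES = 4096
    ATTR_BYTES = 4                  # Day, Ticker, and Price at 4 bytes each
    PTR_BYTES = 4                   # a memory or disk pointer
    RECORD_BYTES = 3 * ATTR_BYTES   # one (Day, Ticker, Price) record

    # A node with n records carries n + 1 child pointers, so we need
    # n * RECORD_BYTES + (n + 1) * PTR_BYTES <= PAGE_BYTES.
    n = (PAGE_BYTES - PTR_BYTES) // (RECORD_BYTES + PTR_BYTES)
    print(n, "records per node;", n + 1, "children")   # 255 records, order 256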
If we assume that actions A), B), and C) make up the bulk of the queries to this database, a prudent choice would be to store the records in the tree sorted by Day, then Ticker. To facilitate action B), we can also include an index on (Ticker, Day). The main strength of the B+-tree is that since the tree maintains a constant depth, each query returns in the same amount of time; its main drawback is the costly process of balancing the tree. However, if we are considering this 10-year block of information as static historical data, balancing need only be performed once, when the data are loaded into storage.
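As a toy illustration of what the two orderings buy us (the tickers, dates, and prices below are invented): tuples compare lexicographically, so the primary order keeps one day's closes contiguous, while the secondary index keeps one ticker's history contiguous.

    # (day, ticker, price in cents) -- sample rows, invented for illustration
    records = [
        (20040105, "MSFT", 2779),
        (20040102, "IBM",  9215),
        (20040105, "IBM",  9318),
        (20040102, "MSFT", 2741),
    ]

    # Primary order (Day, Ticker): each trading day is one contiguous run.
    by_day = sorted(records, key=lambda r: (r[0], r[1]))

    # Index order (Ticker, Day): each ticker's history is one contiguous run.
    by_ticker = sorted(records, key=lambda r: (r[1], r[0]))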
Since the depth of the tree determines how often the disk must be accessed, we must determine how many levels are necessary for our data. The tree's root node stores 255 records and has 256 children, all of which make up the tree's first level. Each of the root's children can itself store 255 records, so level 1 holds 65,280 records and has 65,536 children. Level 2 stores 16,711,680 records and has 16,777,216 children. However, since level 3 only needs to store the remaining 9,322,785 records, we only require level 2 to spawn 36,560 children. So we see that all of our data can fit into three levels of a B+-tree below the root, and our additional index, consisting of fewer fields, can fit into two. Within four disk accesses, then, we can return any single record in our data set.
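The level-by-level count can be reproduced mechanically; a small sketch using the order-256 figures derived above:

    import math

    ORDER = 256                  # children per node
    RECS_PER_NODE = ORDER - 1    # 255 records per node
    TOTAL = 26_100_000

    nodes, stored, level = 1, 0, 0
    while stored < TOTAL:
        stored += nodes * RECS_PER_NODE   # records held at this level
        print(f"level {level}: {nodes:,} nodes, {min(stored, TOTAL):,} records so far")
        nodes *= ORDER
        level += 1

    # Root through level 2 hold 16,777,215 records; the remaining
    # 9,322,785 need only ceil(9,322,785 / 255) = 36,560 level-3 nodes.
    print(math.ceil((TOTAL - 16_777_215) / RECS_PER_NODE))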
But do we need to access a disk at all? Assuming that our conservative estimates of record count and attribute sizes are correct, the primary storage tree takes up approximately 419.4 megabytes, and the index consumes 314 megabytes. This means that prior to intensive use of these data, the whole structure could easily be loaded onto a single stick of DDR SDRAM, eliminating the time-intensive process of seeking the drive head. But even using strictly magnetic media, the B+-tree scheme provides a robust and efficient way to serve this data set.
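The footprint estimate checks out to within rounding if we count full 4096-byte pages. The index fan-out below is my own reconstruction, assuming an 8-byte (Ticker, Day) key plus a 4-byte pointer per entry, which yields 341 entries per page:

    import math

    PAGE = 4096

    # Primary tree: node counts from the level-by-level walk above.
    primary_nodes = 1 + 256 + 65_536 + 36_560
    print(primary_nodes * PAGE / 1e6)       # ~419.2 MB (quoted: 419.4)

    # (Ticker, Day) index: assumed 341 entries per page.
    index_nodes = math.ceil(26_100_000 / 341)
    print(index_nodes * PAGE / 1e6)         # ~313.5 MB (quoted: 314)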
