
101: An Introduction to In-Memory Database Systems

By Steve Graves, Co-Founder and CEO, McObject

In the realm of real-time database systems, DRAM memory is the new storage. Technologists have long recognised that grabbing data from main memory is much quicker than accessing it from I/O-based persistent media (disk, flash, etc.). This is the rationale for database management system (DBMS) caching mechanisms, which seek to predict which records will be requested, and hold this data in memory.

But DBMS caching is not enough for some high-performance systems – think algo trading and other capital markets applications that gain competitive advantage by shaving even microseconds off latency. One flaw in database caching is that it only speeds up information retrieval, or “database reads.” Updates, or “writes,” ultimately impose I/O as they are written through the cache, to disk. And DBMS caching has itself come to be viewed as a source of latency, inasmuch as it adds to the software execution path and eats up CPU cycles that could otherwise serve time-critical application tasks.

Recognising these drawbacks, some DBMS visionaries asked, “If main memory served as the place where data is stored and manipulated – rather than as a temporary storage location, a la caching – wouldn’t that be a lot faster?” And vendors have turned this insight into a new and growing category of DBMS: the in-memory database system, or IMDS, which uses main memory as its primary storage medium. By keeping all records in main memory at all times, IMDSs eliminate obvious sources of latency, such as physical disk I/O and cache management, as well as less obvious ones, such as reliance on the underlying file system to store and organise data. The result is sorting, storage and retrieval of records dramatically faster than is possible with traditional DBMSs.

In-memory database systems are changing the real-time software landscape, cutting latency in application categories as diverse as financial trading, social networking web sites, and embedded telecom, aerospace and industrial control systems. The trend toward IMDSs is also marked by more than a little hype, with dozens of vendors now marketing their variations on traditional DBMS software as “in memory.” This article focuses on characteristics that define a “real” in-memory database system; examines factors driving the technology’s growth; and considers how features that differ even within the IMDS product category can affect system latency.

Designed for Performance

An in-memory database system can provide virtually all of the features of a traditional DBMS. These include a high-level data definition language (DDL); programming interfaces, such as ODBC, JDBC or native C/C++ or Java APIs; transactions that support the ACID (Atomic, Consistent, Isolated and Durable) properties; features for database durability, such as transaction logging; database indexes, event notifications, multi-user concurrency, and more.

The key difference is that IMDSs eschew traditional DBMSs’ reliance on persistent data storage and eliminate sub-systems that support that storage. That enables IMDSs to keep data in one place – main memory – during processing rather than moving it around. Consider the many “handoffs” required for an application to read a piece of data from an on-disk (traditional) database, modify it and write that record back to the database, as shown in Figure 1. These steps require time and CPU cycles, and cannot be avoided in a traditional database. In contrast, the same process using an in-memory database system entails a single data transfer from the database to the application, and back again.

Figure 1. Data Flow in a Traditional “Disk-Based” DBMS
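The difference in handoffs can be mimicked in a toy model that simply counts data transfers. This is a sketch only; the classes and counters below are invented for illustration and stand in for the file system, file-system cache and DBMS cache of Figure 1.

```python
# Toy model contrasting the copy chain of a disk-based DBMS with the
# single transfer of an in-memory database. All names are invented.

class DiskBasedDB:
    """Simulates the handoffs: disk -> FS cache -> DB cache -> application."""
    def __init__(self):
        self.file = {}        # stands in for on-disk storage
        self.fs_cache = {}    # operating system file-system cache
        self.db_cache = {}    # DBMS cache
        self.copies = 0       # count of data transfers

    def read(self, key):
        self.fs_cache[key] = self.file[key]; self.copies += 1      # disk -> FS cache
        self.db_cache[key] = self.fs_cache[key]; self.copies += 1  # FS cache -> DB cache
        value = self.db_cache[key]; self.copies += 1               # DB cache -> app
        return value

class InMemoryDB:
    """Simulates an IMDS: one transfer, database -> application."""
    def __init__(self):
        self.store = {}
        self.copies = 0

    def read(self, key):
        value = self.store[key]; self.copies += 1
        return value

disk = DiskBasedDB(); disk.file["x"] = 42
imds = InMemoryDB(); imds.store["x"] = 42
assert disk.read("x") == imds.read("x") == 42
print(disk.copies, "transfers vs", imds.copies)  # three handoffs vs one
```

Each handoff in the disk-based path is a copy that costs time and CPU cycles; the in-memory path performs exactly one.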

It is obvious that disk I/O, as a mechanical process, would drag down database performance. But do IMDSs’ other efficiencies – such as eliminating caching, file I/O, and the many data transfers illustrated above – really affect performance? That question can be tested. Most operating systems support a RAM-disk capability that enables applications to use a block of main memory as if it were a hard disk. Deploying a traditional database on such a RAM-disk eliminates physical disk I/O, but leaves processes such as caching and file I/O intact. Does its performance then equal that of an in-memory database?

To find out, McObject set up a benchmark comparing an application’s performance in three database storage scenarios: using an on-disk database system and storing data on a hard drive; using the on-disk database with storage on a RAM-disk; and working with data in RAM using an IMDS with characteristics (such as a navigational C API and in-process architecture) similar to the on-disk DBMS. Moving the on-disk database to a RAM drive accelerated database reads by nearly 4x, and database writes by more than 3x. However, deploying the same application with a true in-memory database system led to even more dramatic performance gains. The IMDS outperformed the RAM-disk-deployed DBMS by 4x for database reads and turned in a startling 420x improvement for database writes.
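A rough analog of that experiment can be run in a few lines: time writes that are forced through the file system against writes to a plain in-memory structure. Absolute numbers will vary by machine and file system, so this sketch (purely illustrative, not McObject's benchmark) only demonstrates the direction of the gap.

```python
# Compare write-through-to-file storage with pure in-memory storage.
# Illustrative only; real DBMS and IMDS code paths are far more involved.
import os, tempfile, time

N = 300
records = {f"key{i}": i for i in range(N)}

# "On-disk" store: every write is flushed through the file system.
t0 = time.perf_counter()
path = os.path.join(tempfile.mkdtemp(), "store.db")
with open(path, "w") as f:
    for k, v in records.items():
        f.write(f"{k},{v}\n")
        f.flush()
        os.fsync(f.fileno())   # force each write to reach the medium
disk_secs = time.perf_counter() - t0

# In-memory store: a plain dict, no file system in the path.
t0 = time.perf_counter()
store = {}
for k, v in records.items():
    store[k] = v
mem_secs = time.perf_counter() - t0

print(f"file-backed: {disk_secs:.4f}s  in-memory: {mem_secs:.6f}s")
```

Even with the file cached in RAM, the system-call and file-management overhead per write dwarfs a direct memory update – the same effect the RAM-disk benchmark exposed.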

Will the Real IMDS Please Stand Up?

This performance edge should motivate potential users to check carefully whether they’re getting a “real” IMDS, particularly in light of DBMS vendors’ proliferating IMDS claims. Often, a database system designed for disk storage is recast as an “IMDS” simply by redeploying it with memory-based analogs of the artifacts that originally existed in a file system. One example is the cache management illustrated in Figure 1. It is still present in many DBMSs rechristened as “in-memory,” and it requires substantial memory and CPU cycles, such that even a “cache hit” underperforms a true in-memory database.

Much of an in-memory database system’s performance advantage lies in its origins as an IMDS. The design goals and optimisation strategies used to develop IMDSs are diametrically opposed to those underpinning traditional, file system-based DBMSs. Traditional databases are created using a strategy of minimising file and disk I/O, at the expense of consuming more CPU cycles and memory. This design strategy stems from the vendors’ recognition of I/O as the single biggest threat to their databases’ performance. Executing the CPU instructions necessary to fetch data from a cache is orders of magnitude faster than fetching it from the file system. So using those CPU instructions and memory for the cache is a reasonable trade off to improve performance.

In contrast, in-memory database systems by definition eliminate I/O. With disk and file I/O out of the picture, the overriding optimisation goal for IMDSs is reducing memory and CPU demands. After all, we choose to use an in-memory database to improve performance, so optimising our use of the CPU must be a primary goal. Similarly, when DRAM replaces disk as the storage medium, we want to use it as efficiently as possible, and not (for example) waste it by caching what is already in memory. But, once the optimisation strategy of an on-disk DBMS is “baked into” database system code, it can’t be undone, short of a rewrite.

With Memory in Abundance, IMDSs Scale

For system designers contemplating use of an IMDS for larger-scale applications, the technology – and economic – environments have never been more favorable. A decade or more ago, DRAM’s high price posed a disincentive to its use as primary storage for many categories of automated systems. Today, with server-grade DRAM available at $15/GB or below (particularly when purchased in quantity), the cost is ridiculously low by historic standards. On the horizon, additional hardware advances are expected to make DRAM scale-up even less expensive. Today’s widely-used RDIMM memory suffers from capacity and transfer-rate (MT/s) limits induced by the electrical load on the data bus. However, new Load-Reduced DIMM (LRDIMM) technology goes some distance toward extending these limits. For example, given a server with two processors, three DIMM slots per channel, and four channels per processor, total system memory can be increased by two to three times over RDIMM capacity in the same system.

On the software side, 64-bit OSs are now widely used, and IMDS vendors have embraced this trend, releasing 64-bit products that can work with hundreds of times more data in memory than their 32-bit counterparts. The technology can scale up: in a test using its 64-bit eXtremeDB-64 IMDS, McObject created a 1.17 terabyte, 15.54 billion row database on an SGI server with 80 dual-core 1.6 GHz Itanium processors running SUSE Linux Enterprise Server 9. For a simple SELECT query, the benchmark clocked 87.78 million query transactions per second using its native, navigational application programming interface (API) and 28.14 million transactions per second using a SQL ODBC API. More complex JOIN operations resulted in performance of 11.13 million operations per second with eXtremeDB’s native API, and 4.36 million operations per second using SQL ODBC.

Of course, a 160-core server like the one described above is nobody’s idea of “garden variety,” or even common high-end, hardware. More typical is the test-bed used by Transaction Network Services, a leading global provider of data communications and interoperability solutions for the payments, telecom and financial markets industries, when it evaluated eXtremeDB-64 before integrating the IMDS in one of its real-time software products. TNS’s test platform consisted of Intel Xeon X5570 2.93 GHz hardware, with 8 cores and hyper-threading enabled, running Red Hat Enterprise Linux 4, with 72 GB RAM. In this environment, eXtremeDB’s performance exceeded 2 million queries per second with a 10 million-row database. When TNS upped the challenge by increasing database size 30-fold (to 300 million records), eXtremeDB’s queries-per-second results fell only minimally – by less than 7% – suggesting near-linear performance scalability from small to very large databases.

IMDSs, Clustering and NoSQL

In addition to scaling up, in-memory database systems can also scale out, with some vendors now offering clustering editions that spread IMDS processing across multiple hardware nodes. The benefits of clustered IMDSs include lower latency and higher throughput, achieved by harnessing the net processing power of many CPUs; fault-tolerance, through redundancy; and a cost-effective path to system growth, since deployments can be expanded by adding low-cost (i.e. “commodity”) servers.

Today, any discussion of a highly scalable, distributed data management solution – like a clustering IMDS – also invites consideration of NoSQL, the somewhat amorphous grouping of data management technologies that has emerged to handle the “big data” of large-scale web sites and other applications. Like clustering IMDSs, NoSQL data management products are typically multi-node and horizontally scalable. Also like IMDSs, some NoSQL technologies leverage the speed advantage of in-memory data access.

For example, distributed memory caching systems – as typified by memcached – accelerate IT infrastructure by holding frequently-used items (objects, strings, etc.) in memory outside the enterprise DBMS, spread across multiple nodes or servers. memcached is an in-memory key-value store that is often compared to a distributed hash table. Each node functions independently, but clients treat the memory available on all nodes, in aggregate, as a single memory pool, and any client can immediately “see” changes that other clients have made to the key-value store. Like a clustered IMDS, memcached provides both fast in-memory data access and fault tolerance through redundant servers.
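The “single memory pool” effect rests on a simple mechanism: clients hash each key to pick a node, so every client looks for a given key on the same server. The sketch below uses a naive modulo scheme with invented node addresses; real memcached clients typically use consistent hashing so that adding or removing a node remaps as few keys as possible.

```python
# Minimal sketch of distributed-cache key placement: hash the key,
# pick a server, talk only to that server. Node addresses are invented.
import hashlib

NODES = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def node_for(key: str) -> str:
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each key deterministically lands on one node, so any client that
# hashes the same way "sees" the same value for a given key.
cluster = {n: {} for n in NODES}
cluster[node_for("session:42")]["session:42"] = "alice"
assert cluster[node_for("session:42")]["session:42"] == "alice"
```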

NoSQL solutions such as distributed memory caching can be fast and highly scalable. But in other areas, NoSQL can differ markedly from IMDS and DBMS technologies, with differences including:

Transactions. Clustered database systems that enforce the ACID properties bundle operations into transactions that succeed or fail together, in order to preserve data consistency on all nodes in the cluster. Object-caching (and some other NoSQL) products forego this. Some NoSQL solutions instead implement an “eventually consistent” approach that enables data updates to propagate more freely, increasing parallelism but also posing the risk of working with inconsistent data.
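The all-or-nothing guarantee can be demonstrated with SQLite, an in-process database that – like many IMDSs – can run entirely in memory. The account names and amounts below are invented for illustration; the point is that a failure mid-transaction leaves no half-applied update visible.

```python
# ACID atomicity: a transfer either fully succeeds or fully rolls back.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
con.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
con.commit()

try:
    with con:  # opens a transaction; rolls back on any exception
        con.execute("UPDATE accounts SET balance = balance - 70 "
                    "WHERE name = 'alice'")
        raise RuntimeError("simulated failure before the matching credit")
except RuntimeError:
    pass

# The debit was rolled back along with the whole transaction.
balances = dict(con.execute("SELECT name, balance FROM accounts"))
assert balances == {"alice": 100, "bob": 0}
```

An “eventually consistent” store would instead acknowledge the debit immediately and propagate it later, gaining parallelism at the cost of a window in which readers can observe inconsistent data.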

Storage efficiency. Many NoSQL solutions lack indexes, querying languages, normalised data designs and other “real” database tools to support complex queries or sorts. Users can respond by attempting to anticipate requested views of data, pre-computing these and storing the results until needed, but this imposes higher storage (memory) requirements (i.e. two views of the same data will require 2X the memory). In contrast, IMDSs are well-equipped to perform complex operations on-the-fly, eliminating the need to store two or more views.
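The trade-off can be sketched in a few lines: a store without query support keeps a pre-computed copy of the data for each anticipated view, while a database that can sort and index produces the view on demand from a single copy. Data and field names below are invented for illustration.

```python
# Storage trade-off: materialised views versus on-demand computation.
trades = [{"id": i, "price": (i * 37) % 100} for i in range(1000)]

# NoSQL-style workaround: materialise the sorted view and keep it around,
# consuming additional memory for a second view of the same rows.
by_price = sorted(trades, key=lambda t: t["price"])

# DBMS/IMDS-style: compute the view only when it is requested.
def view_by_price(data):
    return sorted(data, key=lambda t: t["price"])

assert view_by_price(trades) == by_price  # same answer, one stored copy
```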

Recoverability. Many database systems, including IMDSs, support roll-forward recoverability through transaction logging, which is a process of journaling changes to the database as they occur. Some NoSQL technologies including distributed caching lack this feature.
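Roll-forward recovery reduces to two steps: journal each change before applying it, and rebuild state by replaying the journal after a failure. A minimal sketch, with an in-memory buffer standing in for the persistent log file:

```python
# Sketch of transaction logging and roll-forward recovery.
import io, json

log = io.StringIO()  # stands in for a persistent log file on durable media

def apply_and_journal(db, op):
    log.write(json.dumps(op) + "\n")  # journal first (write-ahead)
    db[op["key"]] = op["value"]       # then apply to the in-memory store

db = {}
apply_and_journal(db, {"key": "rate", "value": 1.25})
apply_and_journal(db, {"key": "rate", "value": 1.31})

# Simulated crash: the in-memory store is lost, but the log survives.
db = {}
log.seek(0)
for line in log:
    op = json.loads(line)
    db[op["key"]] = op["value"]       # roll forward to the last committed state

assert db == {"rate": 1.31}
```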

Choosing Among IMDSs

Beyond shared characteristics such as main memory storage, IMDSs exhibit significant technical diversity. If an in-memory database seems like a good conceptual fit for a project, further research is needed in order to choose the right IMDS product. Considerations should include:

IMDS architecture. Some IMDSs embody client/server architecture, in which client applications send requests to a database server. Even when residing on the same computer, client and server processes communicate via inter-process communication (IPC) messaging. Other IMDSs use an in-process architecture in which the database system runs entirely within the application process; instead of separate client and server processes, in-process IMDSs consist of object code libraries that are linked in (compiled with) the application.

Client/server is a mainstay of enterprise DBMSs. Its advantages include the ability to right-size a network by installing the server software on a more powerful computer (clients need not be upsized). As a result, client/server can lend itself naturally to supporting larger numbers of users. In-process architecture is simpler, which means it has a smaller code size (shorter execution path). Simpler code is generally less prone to defects. In-process design eliminates IPC, resulting in lower latency. In-process IMDSs further accelerate performance and reduce complexity by eliminating server tasks such as managing sessions and connections, and allocating and de-allocating resources.
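The latency difference comes down to what sits between the application and its data. A toy contrast, with the server’s request/response cycle simulated in-process via a pipe (all names invented for this sketch):

```python
# In-process access is a direct call into linked-in library code;
# client/server access serialises the request, crosses a process
# boundary (here simulated with a pipe), and deserialises the reply.
import json
from multiprocessing import Pipe

store = {"x": 42}

def in_process_get(key):
    return store[key]            # direct call: no marshalling, no IPC

parent, child = Pipe()

def server_step():               # one request/response cycle of a "server"
    req = json.loads(child.recv())
    child.send(json.dumps(store[req["key"]]))

parent.send(json.dumps({"key": "x"}))   # client marshals the request
server_step()                           # in reality, a separate process
result = json.loads(parent.recv())      # client unmarshals the reply

assert in_process_get("x") == result == 42
```

Every client/server lookup pays for serialisation, an IPC round trip and deserialisation; the in-process lookup is a function call.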

Application programming interfaces. SQL is used widely in finance and other enterprise sectors. Benefits include familiarity and succinctness in implementing some complex queries. On the negative side, SQL relies on a query optimiser to weigh potential execution plans and choose the best one. This optimiser serves as a “black box” for developers seeking to control speed and predictability. The risk of a SQL query bogging down while the optimiser considers execution strategies can make it a poor fit for eliminating latency. And the risk of the optimiser selecting different plans from one execution to another is poisonous to predictability.

As an alternative, some IMDSs provide a navigational API that is closely integrated with programming languages like C/C++. Such APIs navigate through the database one record at a time; their execution plan is defined in the application code, and latency is known and predictable. They are faster, due to bypassing the parsing, optimisation and execution of dynamic SQL. Information is stored as native-language data types (including structures, vectors and arrays), eliminating the overhead of conversion to SQL data types.

When SQL is a requirement, latency is generally lower with a rules-based rather than a cost-based optimiser. Also, some IMDSs support use of both SQL and native APIs, even in the same application (the navigational interface can be “plugged in” for performance-critical operations, for example).
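The two access styles can be contrasted directly. Below, the same lookup runs once as declarative SQL (with the plan chosen by an optimiser) and once navigationally, with the application itself seeking in a sorted index. This is illustrative only – a real IMDS navigational API operates on native structures, not a Python list – and the table and data are invented.

```python
# The same lookup, declarative versus navigational.
import bisect, sqlite3

rows = [(i, f"sym{i % 50}", 100 + i) for i in range(1000)]  # (id, symbol, price)

# SQL route: the optimiser decides how to satisfy the query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quotes (id INT PRIMARY KEY, symbol TEXT, price INT)")
con.executemany("INSERT INTO quotes VALUES (?,?,?)", rows)
sql_result = con.execute(
    "SELECT price FROM quotes WHERE id = ?", (417,)).fetchone()[0]

# Navigational route: the execution plan is the application code itself.
ids = [r[0] for r in rows]           # index on id (already sorted)
pos = bisect.bisect_left(ids, 417)   # O(log n) seek; no parsing, no optimiser
nav_result = rows[pos][2]

assert sql_result == nav_result == 517
```

The navigational path has a known, fixed cost on every execution – which is exactly the predictability that a cost-based optimiser cannot guarantee.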

Persistence/volatility. Data stored in RAM is volatile, but IMDSs can provide features to ameliorate risk in the event that someone pulls the plug. Potential safeguards include transaction logging (discussed above), database replication and use of non-volatile RAM (NV-RAM, or battery-backed RAM). Even within these techniques, a particular IMDS implementation may offer tradeoffs that affect latency, such as the decision whether replication should be synchronous or asynchronous between master and replica nodes.
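The synchronous/asynchronous trade-off can be sketched as follows: a synchronous write is not acknowledged until the replica has it (safer, higher latency), while an asynchronous write is acknowledged immediately and shipped later (faster, but with a crash window). All names below are invented for this toy model.

```python
# Toy contrast of synchronous vs asynchronous master-replica replication.
master, replica = {}, {}
pending = []  # async queue of updates not yet shipped to the replica

def write_sync(key, value):
    master[key] = value
    replica[key] = value          # replica updated before we acknowledge
    return "ack"

def write_async(key, value):
    master[key] = value
    pending.append((key, value))  # shipped to the replica later
    return "ack"

write_sync("a", 1)
write_async("b", 2)
assert "a" in replica             # sync write is already on the replica
assert "b" not in replica         # async write has not arrived yet
for k, v in pending:              # background shipping catches up
    replica[k] = v
assert replica == {"a": 1, "b": 2}
```

A crash between the asynchronous acknowledgement and the background shipping loses the update – the latency saved is exactly the durability risked.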


The developer seeking to minimise data management latency will encounter a number of challenges. One is storage latency, and its major hurdle – the performance “pain” that is unavoidable when using persistent media as primary storage – is overcome by the IMDS technology that is the subject of this article. But the quest for the lowest possible data management latency does not end there, as the preceding discussion of database architectures, APIs and persistence strategies affirms. Additional factors, such as the need for scalability, can send the system designer in other research directions, all within the scope of in-memory database system technology. Truly, the choices that crop up when implementing low-latency data management can seem never-ending. The good news is that so many of the potential performance issues have been worked through, resulting in off-the-shelf IMDS technology that can be successfully applied to many of the toughest latency-reduction challenges.
