Article Details

A New Approach of DBMS Architecture for Data Admittance in Memory Through Anti-Caching Technique

Abul Hasan Khan, Dr. D. K. Pandey, in International Journal of Information Technology and Management

ABSTRACT:

The traditional wisdom for building disk-based relational database management systems (DBMS) is to organize data in heavily-encoded blocks stored on disk, with a main memory block cache. In order to improve performance given high disk latency, these systems use a multi-threaded architecture with dynamic record-level locking that allows multiple transactions to access the database at the same time. Previous research has shown that this results in substantial overhead for on-line transaction processing (OLTP) applications. The next generation of DBMSs seeks to overcome these limitations with an architecture based on main memory resident data. To overcome the restriction that all data fit in main memory, we propose a new technique, called anti-caching, where cold data is moved to disk in a transactionally-safe manner as the database grows in size. Because data initially resides in memory, an anti-caching architecture reverses the traditional storage hierarchy of disk-based systems: main memory is now the primary storage device.

We implemented a prototype of our anti-caching proposal in a high-performance, main memory OLTP DBMS and performed a series of experiments across a range of database sizes, workload skews, and read/write mixes. We compared its performance with an open-source, disk-based DBMS optionally fronted by a distributed main memory cache. Our results show that for higher-skewed workloads the anti-caching architecture has a performance advantage over either of the other architectures tested of up to 9x for a data size 8x larger than memory.

In this paper, we analyze state-of-the-art approaches to achieving this goal for in-memory databases, which we call "anti-caching" to distinguish them from traditional caching mechanisms. We conduct extensive experiments to study the effect of each fine-grained component of the entire anti-caching process on both performance and prediction accuracy. To avoid interference from other, unrelated components of specific systems, we implement these approaches on a uniform platform to ensure a fair comparison. We also study the usability of each approach, and how intrusive it is to the systems that intend to incorporate it. Based on our findings, we propose guidelines for designing a good anti-caching approach, and sketch a general and efficient approach that can be utilized in most in-memory database systems without much code modification.
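To illustrate the core idea described above, the following is a minimal Python sketch of an anti-cached table: hot tuples stay in memory, cold tuples are evicted to a disk-resident block store, and a lightweight tombstone remains so an evicted tuple can be located and brought back on access. The class and method names (AntiCacheTable, _maybe_evict, _unevict) are hypothetical and do not reflect the authors' prototype; the real design performs eviction and merging asynchronously inside the transaction engine, which is simplified away here.

```python
from collections import OrderedDict

class AntiCacheTable:
    """Minimal anti-caching sketch (hypothetical, not the paper's prototype).

    Hot tuples live in an in-memory LRU structure; once the memory budget is
    exceeded, the coldest tuples are packed into a block and written to a
    disk-resident store, leaving tombstones behind for later un-eviction."""

    def __init__(self, memory_budget, block_store):
        self.memory_budget = memory_budget   # max number of tuples kept in memory
        self.hot = OrderedDict()             # key -> tuple, maintained in LRU order
        self.tombstones = {}                 # key -> block id of the evicted tuple
        self.block_store = block_store       # dict-like stand-in for on-disk blocks
        self.next_block_id = 0

    def read(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)        # mark as recently used
            return self.hot[key]
        if key in self.tombstones:
            # In the real design the transaction would abort and restart after
            # an asynchronous merge; here we un-evict synchronously for brevity.
            self._unevict(key)
            return self.hot[key]
        raise KeyError(key)

    def write(self, key, value):
        if key in self.tombstones:
            self._unevict(key)
        self.hot[key] = value
        self.hot.move_to_end(key)
        self._maybe_evict()

    def _maybe_evict(self):
        # Evict the coldest tuples as one block once the memory budget is exceeded.
        if len(self.hot) <= self.memory_budget:
            return
        block_id = self.next_block_id
        self.next_block_id += 1
        block = {}
        while len(self.hot) > self.memory_budget:
            cold_key, cold_val = self.hot.popitem(last=False)  # LRU victim
            block[cold_key] = cold_val
            self.tombstones[cold_key] = block_id
        self.block_store[block_id] = block   # stand-in for a durable disk write

if __name__ == "__main__":
    table = AntiCacheTable(memory_budget=2, block_store={})
    table.write("a", {"val": 1})
    table.write("b", {"val": 2})
    table.write("c", {"val": 3})             # forces eviction of the coldest tuple
    print(table.read("a"))                   # un-evicts "a" from the block store
```

The sketch keeps the storage-hierarchy inversion visible: memory is the primary store, and disk holds only the overflow of cold data rather than acting as the system of record.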