Memory Hierarchy for Big Data Applications


Code: (TIN2015-66979-R) (MINECO/FEDER, UE)
Main Researcher: Pablo Abad
Starting Date: 01-01-2016
Duration: 3 years

This project addresses the need to efficiently combine the future challenges posed by Big Data with emerging technologies such as non-volatile memories based on nanoscale memristor devices.

Some years ago, it would have been hard to imagine the amount of information that will be available in the near future. This explosion in the volume and structure of data sets, known as Big Data, will enable significant progress in many fields, both scientific and economic, including the main challenges identified in the current Spanish Strategy for Science, Technology and Innovation.

One of the big challenges of the near future will be the efficient management of such a vast amount of information. Data analysis and processing will certainly be one of the main pillars supporting most business opportunities related to Big Data. In this context, the competitive advantage will belong to those able to extract the most value from data.

In this process, computers will continue to be one of the key supporting technologies. Fast analysis in Big Data applications will rely mainly on the performance available in next-generation computers. The amount of data we are able to process has grown in parallel with technology, and it is technological evolution that will allow us to overcome the current challenges of Big Data. However, chips built on silicon wafers currently face serious difficulties in continuing to improve performance at the same rate. For this reason, new technologies are being evaluated as alternatives to current fabrication standards. One of the most promising lines focuses on replacing current memory technologies with new components built from memristive devices, able to provide much higher storage density.

From the computer architecture point of view, one of the key challenges will be the efficient combination of emerging technologies with the requirements of Big Data applications, and the optimization of the hardware-software interface is one of the main aspects to be tackled. For this reason, this project explores the definition of a computer architecture whose memory hierarchy is designed for Big Data applications (hence the Big-Mem acronym). Since Big Data applications exhibit particular characteristics that make them sensitive to the organization of the memory hierarchy, a suitable organization could provide a significant performance improvement, as the sketch below illustrates.
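As a minimal, self-contained illustration of this sensitivity (our own sketch, not a result of the project), the following C++ program performs the same amount of work with two different access patterns over a large array; the matrix size and the simple timing scheme are arbitrary choices for the example. On typical hardware the cache-friendly traversal is several times faster, which is exactly the kind of gap a memory hierarchy tuned to an application's access patterns aims to close.

```cpp
// Memory-hierarchy sensitivity: identical work, two access patterns.
// The row-major loop walks memory sequentially and exploits cache lines;
// the column-major loop strides across rows and misses far more often.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 4096;              // 4096 x 4096 doubles (~128 MiB)
    std::vector<double> m(n * n, 1.0);
    double sum = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)      // cache-friendly: row-major
        for (std::size_t j = 0; j < n; ++j)
            sum += m[i * n + j];
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t j = 0; j < n; ++j)      // cache-hostile: column-major
        for (std::size_t i = 0; i < n; ++i)
            sum += m[i * n + j];
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("row-major: %lld ms, column-major: %lld ms (sum=%f)\n",
                (long long)ms(t1 - t0), (long long)ms(t2 - t1), sum);
    return 0;
}
```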

The aim of this project is the evaluation of memory hierarchies implemented with non-volatile memory technologies, analyzing the best way to adapt their structure and mechanisms to optimize the execution of Big Data applications. On the one hand, we will explore the modifications required by mechanisms such as replacement algorithms, coherence protocols and pre-fetching policies. On the other hand, we will propose solutions for dealing with the new limitations imposed by these emerging technologies, such as reliability or asymmetric write/read latencies. One such direction is sketched below.
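To make the first line of work more concrete, the following sketch shows one classic way a replacement algorithm can be adapted to non-volatile memory: since NVM writes are slow and wear out the cells, a "clean-first" variant of LRU prefers evicting a line that needs no write-back. This is a minimal sketch under assumed conditions (a set-associative cache with per-line dirty bits); the CacheLine structure, the pick_victim function and the specific heuristic are illustrative, not the project's actual proposal.

```cpp
// Clean-first LRU for an NVM last-level cache: evicting a dirty line costs
// an expensive NVM write-back, so prefer the oldest clean line when one exists.
#include <cstdint>
#include <cstdio>
#include <vector>

struct CacheLine {
    uint64_t tag = 0;
    uint64_t last_use = 0;   // timestamp for LRU ordering
    bool valid = false;
    bool dirty = false;      // dirty lines trigger an NVM write on eviction
};

// Pick a victim within one set: a free way if any, else the oldest clean
// line, else the plain LRU victim.
int pick_victim(const std::vector<CacheLine>& set) {
    int lru = -1, lru_clean = -1;
    for (int i = 0; i < (int)set.size(); ++i) {
        if (!set[i].valid) return i;                        // free way
        if (lru < 0 || set[i].last_use < set[lru].last_use)
            lru = i;
        if (!set[i].dirty &&
            (lru_clean < 0 || set[i].last_use < set[lru_clean].last_use))
            lru_clean = i;
    }
    return (lru_clean >= 0) ? lru_clean : lru;              // avoid write-back
}

int main() {
    std::vector<CacheLine> set(4);
    for (uint64_t i = 0; i < 4; ++i)
        set[i] = CacheLine{i, 10 + i, true, /*dirty=*/ i % 2 == 0};
    // Strict LRU would evict way 0 (oldest, but dirty); the clean-first
    // policy picks way 1, the oldest clean line, saving an NVM write-back.
    std::printf("victim way = %d\n", pick_victim(set));
    return 0;
}
```

A policy like this trades a slightly worse hit rate (the evicted clean line may be reused sooner than the retained dirty one) for fewer expensive NVM writes, which is the kind of trade-off the project intends to evaluate.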