Why study the memory hierarchy?

Memory is a vital resource for computer systems. Since the adoption of the stored-program concept, memory has fed the processor with data and instructions and received back the results of its computations.

Memory can be seen as a big bottleneck that limits the computer's performance and, at the same time, as an awful villain responsible for raising the cost of computing solutions. Indeed, users would like memories with infinite storage capacity, the lowest possible price, response times close to the CPU's processing time, and no volatility. Oh yeah, everyone desires memories that never screw up!

The memory hierarchy groups different memory technologies in a hierarchical order (obviously), offering computer programs the illusion of a large amount of fast, cheap, and non-volatile memory. The principles of spatial and temporal locality present in our programs are what allow this hierarchy to make sense, improving the performance of computing solutions at more reasonable (cheaper) costs. As for the "never screwing up" part, well, there are ways of making copies, like backups!
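
As a rough illustration of these locality principles, the C sketch below (the matrix size and the code itself are illustrative assumptions, not taken from the original material) sums the same matrix twice. The row-major loop visits adjacent addresses and reuses its accumulator, exploiting spatial and temporal locality; the column-major loop jumps across memory and tends to miss the cache.

    #include <stdio.h>

    #define N 1024  /* illustrative size, chosen only for the example */

    static double a[N][N];

    /* Good spatial locality: a[i][j] and a[i][j+1] are adjacent in memory,
     * so most accesses hit a cache line already brought in. The accumulator
     * 'sum' shows temporal locality: it is reused on every iteration. */
    double sum_row_major(void) {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];
        return sum;
    }

    /* Poor spatial locality: consecutive accesses are N*sizeof(double)
     * bytes apart, so each one tends to touch a different cache line. */
    double sum_column_major(void) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];
        return sum;
    }

    int main(void) {
        printf("%f %f\n", sum_row_major(), sum_column_major());
        return 0;
    }

On typical hardware the row-major version runs noticeably faster, even though both loops perform exactly the same arithmetic; the difference comes only from how each one uses the memory hierarchy.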

Understanding how this memory hierarchy works is essential for computing professionals and, at the same time, a challenge. This is because different hardware and software components, when put to work together, create this abstraction called the memory hierarchy. Comprehending these components' structures, and mainly how they work and affect performance, demands a good dose of imagination. Amnesia supports this comprehension through the use of simulations!

Paulo Sérgio Lopes de Souza
Sarita Mazzini Bruschi
LaSDPC/SSC/ICMC/USP