Moving the Database from Disk to Memory

Objectives
After completing this lesson, you will be able to:

  • Describe how SAP HANA Cloud moves the database from disk to memory

Moving the Database from Disk to Memory

New technology presents opportunities

SAP HANA Cloud takes full advantage of recent trends and advances in computing technology. Many software solutions available in the cloud today are simply conversions of their equivalent on-premise solutions. SAP HANA Cloud is not a conversion of SAP HANA on-premise; it was developed from scratch using cloud-native design principles such as elasticity and fast deployment.

In the past, the high cost of memory meant that only small amounts were available. This limited memory was a serious bottleneck in the flow of data from disk to the CPU: it did not matter how fast the CPU was if data could not reach it quickly from disk. But in recent years, the cost of memory has fallen and continues to fall year on year. Hardware vendors are now shipping huge amounts of memory in their servers, and memory can scale up to many terabytes, whereas previously gigabytes were the norm.

With huge amounts of memory available, we can now store the entire database of even the largest organizations completely in memory instead of on disk. This gives you instant access to all data and eliminates the wait times caused by moving data from disk to memory. We can finally lose the mechanical spinning disk, and the latency it brings, and rely on memory to provide all data instantly to the CPU. Solid-state drive (SSD) storage is faster than spinning disk, but it still cannot compete with memory.
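To put rough numbers on that gap, the following sketch prints commonly cited ballpark access latencies for each storage tier. The figures are illustrative orders of magnitude only, not SAP measurements.

```python
# Illustrative, commonly cited ballpark latencies per storage tier
# (orders of magnitude only, not SAP benchmarks).
latency_ns = {
    "DRAM (memory)":        100,          # ~100 nanoseconds
    "NVMe SSD":             100_000,      # ~100 microseconds
    "Spinning disk (seek)": 10_000_000,   # ~10 milliseconds
}

baseline = latency_ns["DRAM (memory)"]
for tier, ns in latency_ns.items():
    print(f"{tier:22s} ~{ns:>12,} ns  ({ns / baseline:,.0f}x DRAM latency)")
```

On this scale, even a fast SSD is roughly a thousand times slower than memory, and a spinning disk roughly a hundred thousand times slower, which is why keeping data in memory changes what the CPU can do.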

To address large amounts of memory, we also need 64-bit operating systems. A traditional 32-bit operating system can address at most 2^32 bytes, or 4 GB, which is far below the memory sizes now available.
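That limit is simple arithmetic: an n-bit pointer can reference at most 2^n bytes. The short sketch below works out both cases.

```python
# An n-bit address can reference at most 2**n bytes.
for bits in (32, 64):
    max_bytes = 2 ** bits
    print(f"{bits}-bit: 2**{bits} = {max_bytes:,} bytes "
          f"(about {max_bytes / 2**30:,.0f} GiB)")
# 32-bit tops out at 4 GiB; 64-bit allows vastly more address space
# than any server ships with today.
```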

But it is not enough to have huge memory delivering data to the CPU if the bottleneck is then the CPU performance.

So in addition to huge memory, CPU performance continues to improve at a phenomenal rate. We now have high-speed, multi-core CPUs that can take on complex tasks, break them up, and process the parts in parallel to provide fast response times. This means that even the most complex analytical tasks, such as predictive analysis, which used to be long-running batch jobs left overnight, can now be carried out in real time.
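To illustrate the split-and-process-in-parallel idea, here is a minimal Python sketch. The workload, chunk size, and function names are stand-ins chosen for illustration; this is not SAP HANA code.

```python
# Minimal sketch: split one large task into chunks and process the
# chunks on separate CPU cores, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(rows):
    """Stand-in for a CPU-heavy analytical step on one chunk of data."""
    return sum(x * x for x in rows)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 250_000
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)]

    # Each chunk runs in its own process, so a multi-core CPU can
    # execute the chunks in parallel.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(analyze_chunk, chunks))
    print(f"combined result: {total:,}")
```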

So with a combination of huge memory sizes and fast multi-core CPUs, we now have access to enormous amounts of computing power. Traditional disk-based databases were not designed to take advantage of such modern technology.

Put simply, database design needed to catch up with the exciting advances in hardware technology. This required a complete rewrite of the database (the SAP HANA Cloud database), as well as the applications that run on the database (for example, SAP S/4HANA).

SAP collaborated with leading hardware partners such as Intel and IBM, who shared the designs of their new hardware architectures. This enabled SAP to develop SAP HANA Cloud in such a way that it could extract every last drop of power from the hardware.

Moving the database from disk to memory

Watch this video to learn about moving the database from disk to memory.

So disk is not needed?

So if we can move the entire database from disk to memory, does this mean SAP HANA Cloud eliminates disk storage? The answer is: No, SAP HANA Cloud still uses disk. One of the key reasons is as follows:

Even though we can fit the entire database in memory, we usually don't want to do that.

Data in memory is classified as hot. Hot data is data that the business uses frequently and that needs to perform very well, with instant response. It is usually very recent data that is of interest to many parties. Hot data needs to be closest to the CPU for optimum read performance. That is why hot data sits in memory and not on disk.

Data that is not used so often can be classified as warm, which means that access is important but performance is not a priority. Warm data is stored on disk and loaded into memory only when it is needed. Most organizations would not want to store all their historic data in memory: memory costs are certainly falling, but compared to disk, memory is still very expensive. This means that you should deliberately choose a memory size that fits only the hot data, rather than trying to fit the entire historical data set of the whole organization in memory. When customers choose their SAP HANA Cloud size, they need to carefully calculate the memory requirements of the hot and warm data and choose a memory-to-disk ratio that provides great performance on the most important data and acceptable performance on the less important data.

When you deploy SAP HANA Cloud, you choose the key components that determine your required computing power. The key decisions are the number of CPUs, the memory size, and the disk size.
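To make that sizing exercise concrete, here is a hypothetical back-of-the-envelope calculation. The data volumes and the headroom factor are invented for illustration and are not SAP sizing guidance.

```python
# Hypothetical sizing sketch (all figures are assumptions for
# illustration, not SAP sizing guidance).
hot_data_gb = 800       # frequently accessed data: must fit in memory
warm_data_gb = 2_400    # occasionally accessed data: can stay on disk
headroom = 1.2          # assumed growth allowance

memory_gb = hot_data_gb * headroom
disk_gb = (hot_data_gb + warm_data_gb) * headroom

print(f"memory size: {memory_gb:,.0f} GB (hot data plus headroom)")
print(f"disk size:   {disk_gb:,.0f} GB (hot plus warm data plus headroom)")
print(f"memory-to-disk ratio: 1:{disk_gb / memory_gb:.1f}")
```

With these assumed volumes, the calculation lands on a 1:4 memory-to-disk ratio; a real deployment would start from measured hot and warm data footprints instead.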
