New technology presents opportunities
SAP HANA Cloud was developed from scratch with a design that takes full advantage of recent trends and advances in computing technology. It was not built by taking an existing software product and layering on top of it; it was completely re-designed and re-developed to provide a next-generation platform built on the very latest technology.
For example, the historically high cost of memory meant that servers contained only small amounts of it. This limited memory was a serious bottleneck in the flow of data from disk to CPU: it did not matter how fast the CPU was if data could not reach it quickly from disk.
But in recent years, the cost of memory has fallen and continues to fall year-on-year. Hardware vendors now ship huge amounts of memory in their servers. Memory can now scale up to many terabytes, whereas previously gigabytes were the norm.

With huge amounts of memory available, we can now store the entire database of even the largest organizations completely in memory instead of on disk. This gives you instant access to all data and eliminates the wait times caused by moving data from disk to memory. We can finally lose the mechanical spinning disk, and the latency it brings, and rely on memory to provide all data instantly to the CPU. Solid-state drive (SSD) storage is faster than spinning disk, but it still cannot compete with memory.
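To make that difference concrete, the following Python sketch estimates how long a full scan of 1 TB of data would take from each medium. The throughput figures are rough, order-of-magnitude assumptions chosen for illustration, not measurements or vendor specifications.

# Illustrative, order-of-magnitude sequential-read throughputs.
# These numbers are assumptions for comparison, not specifications.
THROUGHPUT_GB_PER_S = {
    "DRAM (memory)": 50.0,  # tens of GB/s
    "NVMe SSD": 3.0,        # a few GB/s
    "Spinning disk": 0.2,   # roughly 200 MB/s
}

DATA_GB = 1_000  # scan a 1 TB table

for medium, gb_per_s in THROUGHPUT_GB_PER_S.items():
    seconds = DATA_GB / gb_per_s
    print(f"{medium:14s}: ~{seconds:,.0f} s to scan 1 TB")

Under these assumptions, a scan that takes seconds from memory takes minutes from SSD and well over an hour from spinning disk.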
To address these large amounts of memory, we also need 64-bit operating systems. Traditional 32-bit operating systems can address only 4 GB of memory and therefore cannot use the large amounts now available.
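A quick calculation shows why the move to 64-bit matters:

# Address-space arithmetic: the theoretical limits of 32-bit vs 64-bit.
addressable_32bit = 2 ** 32  # bytes
addressable_64bit = 2 ** 64  # bytes (theoretical limit)

print(f"32-bit: {addressable_32bit / 2**30:.0f} GiB")    # 4 GiB
print(f"64-bit: {addressable_64bit / 2**40:,.0f} TiB")   # ~16.8 million TiB

In practice, operating systems and CPUs impose lower limits than the theoretical 64-bit maximum, but terabyte-scale memory is comfortably within reach.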
But it is not enough to have huge memory delivering data to the CPU if the bottleneck then becomes CPU performance. Fortunately, CPU performance also continues to improve at a phenomenal rate. We now have high-speed, multi-core CPUs that can take on complex tasks and break them up so they can be processed in parallel, providing incredible response times. Even the most complex analytical tasks, such as predictive analysis, can now be carried out in real time. With this combination of huge memory and fast multi-core CPUs, we have access to enormous amounts of computing power. SAP HANA Cloud exploits multiple CPUs to distribute workloads and achieve optimal performance: as you add more CPUs, performance improves. We call this scaling up.
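To illustrate the idea of breaking a task up across cores, here is a minimal Python sketch that splits a large aggregation into chunks and processes them in parallel. It shows the general divide-and-aggregate pattern only; the function names and chunking scheme are illustrative and are not how the SAP HANA Cloud engine is implemented.

from concurrent.futures import ProcessPoolExecutor
import os

def partial_sum(chunk):
    # Aggregate one partition of the data; each call can run on its own core.
    return sum(chunk)

def parallel_sum(values, workers=None):
    workers = workers or os.cpu_count() or 1
    size = max(1, len(values) // workers)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(parallel_sum(data))  # same total, computed across all cores

Adding more cores lets more chunks run at the same time, which is exactly why performance improves as a server scales up.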
With modern blade-server architecture, cloud providers can now add more memory and more CPUs to their servers quickly and easily. This allows fast scaling up of the hardware to handle bigger workloads and data volumes. Once the scale-up limits of the hardware have been reached, we can then look at scale-out: the deployment of extra worker nodes (more servers) to share the processing load.
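Scale-out can be pictured as partitioning the data across worker nodes so that each node stores and processes only its own share. In the sketch below, the node names and the hash-modulo scheme are illustrative assumptions, not SAP HANA Cloud's actual partitioning algorithm.

import zlib
from collections import defaultdict

NODES = ["worker-1", "worker-2", "worker-3"]  # hypothetical worker nodes

def node_for(key):
    # Deterministically map a row key to one of the worker nodes.
    return NODES[zlib.crc32(key.encode()) % len(NODES)]

rows = [("order-1001", 250), ("order-1002", 80), ("order-1003", 410)]
placement = defaultdict(list)
for key, amount in rows:
    placement[node_for(key)].append((key, amount))

for node, local_rows in sorted(placement.items()):
    print(node, local_rows)  # each node scans only its share of the data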
But most databases were not designed to take advantage of such modern technology and cannot run optimally with large memory and multi-core CPUs. So SAP developed the SAP HANA Cloud database from scratch, specifically to run on the very latest hardware, so that applications could take advantage of in-memory data storage and massively parallel processing.
Put simply, the databases and applications needed to catch up with advances in hardware technology. So, a complete rewrite of the database (SAP HANA Cloud database), as well as the applications that run on the database (e.g. SAP S/4HANA) was required.
SAP built SAP HANA Cloud to fully exploit the latest hardware. SAP collaborated with leading hardware partners who shared the designs of their new CPU and cache architectures. This enabled SAP to develop SAP HANA Cloud in such a way that it could extract every last drop of power from the hardware.
Moving the database from disk to memory
Watch this video to learn about moving the database completely to memory.
So disk is not needed?
We know we can move the entire database from disk to memory. So does this mean SAP HANA Cloud eliminates disk? The answer is: No, SAP HANA Cloud still uses disk.
Even though we can fit the entire database in memory, we usually don't want to do that.
Data in memory is classified as hot. Hot data is frequently used by the business and needs to perform very well. It is usually very recent data that is of interest to many parties, or data processed by customer-facing apps. This data needs to be closest to the CPU for optimal read performance.
Infrequently used data can be classified as warm, which means fast access is less important. Warm data is stored on disk and loaded into memory only when needed. Most organizations would not want all of their data in memory, as they regard only part of it as hot. Memory costs are certainly falling, but compared to disk, memory is still very expensive. This means you should size memory to fit only the hot data, rather than trying to fit the organization's entire data set in memory. When customers choose their SAP HANA Cloud size, they need to carefully calculate the memory requirements of the hot data and the disk requirements of the warm data, and choose a memory-to-disk ratio that provides great performance on the most important data and acceptable performance on the less important data.
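As a back-of-the-envelope illustration of such a sizing exercise, consider the sketch below. The data footprints and the working-memory factor are made-up assumptions for illustration only; real sizing should follow SAP's official sizing tools and guidance.

# Hypothetical data footprints (illustrative numbers, not guidance).
hot_data_gb = 512    # frequently accessed data, kept in memory
warm_data_gb = 2048  # infrequently accessed data, kept on disk

# Assumed headroom factor for intermediate results, caches, and so on.
WORKING_MEMORY_FACTOR = 2.0

memory_needed_gb = hot_data_gb * WORKING_MEMORY_FACTOR
disk_needed_gb = hot_data_gb + warm_data_gb  # persistence covers all data

print(f"Memory to provision: ~{memory_needed_gb:.0f} GB")
print(f"Disk to provision:   ~{disk_needed_gb:.0f} GB")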
When you deploy an SAP HANA Cloud tenant, you choose the key components that determine your required computing power: the number of CPUs, the memory size, and the disk size.