Lesson Overview
In this lesson, you will learn about the concept of elastic scaling and how to change the size of an SAP HANA Cloud, SAP HANA database instance by tailoring compute and storage resources to evolving needs.
Business Case
After starting with an initial SAP HANA Cloud instance, your data set and workload grow over time. As a responsible administrator, you can scale the resources and change the memory size and storage size of an SAP HANA Cloud database instance using SAP HANA Cloud Central.
SAP HANA Cloud - Scalability

Fixed Scale: Customization is difficult with the fixed-scale approach you know from an on-premises SAP HANA instance, where the system size is fixed and determined upfront based on data volume and workload. The peak workload determines the size of the instance, which usually leads to oversized hardware to cover peak workloads and, in turn, to increased costs.
Elastic Scale: Scalability in SAP HANA Cloud is elastic. With SAP HANA Cloud you actively follow a customer-driven resource management approach. As you have learned in the previous lessons, you provision an initial SAP HANA Cloud instance size and then grow with your data and workload, or shrink when your workload decreases. This means that you monitor your workload and adjust the related resources to your needs, resulting in a consumption-based, pay-as-you-go pricing approach.
Characteristics of SAP HANA Cloud - Elastic Scale:
- Dynamic resource management - Elastic compute and storage resources for specific projects.
- No upfront costs - No fixed hardware costs, but a true pay-what-you-use approach.
- No oversized or undersized systems - React quickly to new and/or changed workloads.
Self-service scaling is only possible for an SAP BTP enterprise account.
SAP HANA Cloud - Dimensional Scaling
See the following video for an overview of dimensional scaling in SAP HANA Cloud.

As a user with the role SPACE DEVELOPER, you can edit the memory size and storage size of an SAP HANA Cloud database instance using SAP HANA Cloud Central. After provisioning, the instance can only be scaled up: increasing the memory size is available as a self-service, while down-scaling is possible through a support service request. The disk storage space is automatically allocated according to the memory size of your instance.
Working in the Edit Configuration screen results in the following system behavior:
- Scaling up the memory also increases the number of vCPUs (compute) and the disk (storage) size.
- Scaling down the memory only reduces the number of vCPUs (compute), not the disk (storage) space.
- Scaling the disk (storage) space up or down changes neither the memory nor the number of vCPUs (compute).
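For example, assume a Microsoft Azure instance with 64 GB of memory, 4 vCPUs, and 200 GB of disk storage: increasing the memory to 128 GB also raises the compute to 8 vCPUs and enlarges the disk storage, while reducing the memory back to 64 GB lowers the compute to 4 vCPUs again but leaves the disk storage unchanged. Editing only the disk storage value leaves both memory and vCPUs as they are. (The exact vCPU values follow the step sizes listed further below.)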
After you have set your required values and clicked the Save button, the configuration changes will be carried out immediately and your SAP HANA Cloud database instance will restart.
After provisioning, your SAP HANA Cloud database instance can only be scaled up. If you want to decrease the memory size of the instance, submit a service request. If you are considering a memory downsizing, also plan how you will handle the disk storage size.
- Memory - The amount of memory available for your (compressed) in-memory data in your SAP HANA database. On Microsoft Azure, you can select from 32 GB to 5600 GB of memory. In some regions, only up to 3776 GB may be available. On Amazon Web Services, you can select from 30 GB to 5970 GB of memory. In some regions, only up to 3600 GB may be available.
- Compute - The number of vCPUs of your SAP HANA database. The number of vCPUs is allocated according to the size of memory of your instance.
- Storage - The disk storage space of your SAP HANA database. On Microsoft Azure, values range from 120 GB to 32,767 GB. On Amazon Web Services, values range from 120 GB to 16,384 GB.
The SAP HANA Cloud database instance step sizes are described in the following overview:
SAP HANA Cloud database instance step sizes
Hyperscaler | 1 vCPU per 16 GB (Azure) / 15 GB (AWS) | 4 vCPUs per 64 GB (Azure) / 60 GB (AWS) | 120 vCPUs | 412 vCPUs (Azure) / 420 vCPUs (AWS)
---|---|---|---|---
Microsoft Azure | up to 960 GB | 960 GB - 1920 GB | 3776 GB | 5600 GB
Amazon Web Services | up to 900 GB | 900 GB - 1800 GB | 3600 GB | 5970 GB
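As a worked example based on the table above: a 256 GB instance on Microsoft Azure is allocated 16 vCPUs (1 vCPU per 16 GB), while a 1280 GB instance falls into the 64 GB step range and is allocated 80 vCPUs (20 steps of 4 vCPUs each). On Amazon Web Services, the corresponding ratios are 1 vCPU per 15 GB and 4 vCPUs per 60 GB.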
SAP HANA Cloud trial/free tier instances have a fixed storage size and storage type.
Scale Up a Database Instance Using the Cloud Foundry CLI
It is also possible to scale up an SAP HANA Cloud database instance using the Cloud Foundry CLI. Before you can scale up a database instance, you need to install the Cloud Foundry CLI and log on to SAP BTP as previously explained.

To scale up an SAP HANA Cloud database using the Cloud Foundry CLI, execute the following steps:
- Open a Linux Terminal or Windows PowerShell.
- Log in to SAP BTP using the command
cf login -o <organisation> -s <space> -a <API endpoint>
and enter your credentials.
- To scale up an existing database instance, execute the command:
cf update-service <service name> -c '{"data": { "memory": xx, "vcpu": y, "storage": zzz } }'
Example:
cf update-service HC200-Demo-Database -c '{"data": { "memory": 64, "vcpu": 4, "storage": 200 } }'
The above command scales up the database to 64 GB of memory, 4 vCPUs, and 200 GB of storage.
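To check that the new configuration is in place, you can display the instance details with the standard Cloud Foundry command cf service, for example (using the instance name from the example above):
cf service HC200-Demo-Database
The output shows the service plan of the instance and the status of the last operation.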
Using the CLI, you can only increase the memory size of an SAP HANA database instance. If you want to decrease the memory size of the instance, please submit a service request. For more information, see Service Requests.
See the Change the Size of an SAP HANA Database Instance Using the CLI page for a complete list of parameters that can be used to scale up a database instance.
Scaling the Size of a Data Lake Instance
You can also scale in the storage dimension by adding a data lake. The data lake itself scales independently of the in-memory database, which means that with a data lake you can add additional storage capacity.
To do so, you must be logged on to the SAP BTP cockpit with a user with the role SPACE DEVELOPER and open SAP HANA Cloud Central. From the Actions menu for your instance, choose the Edit button, change the size of the instance, and save your work. You can adjust the following settings:
- Coordinator - Specify the number of vCPUs for your coordinator node. The minimum value is 2 vCPUs and the maximum is 96 vCPUs.
- Workers - Specify the size and number of worker nodes for your instance. Larger worker nodes improve single-user performance (scale-up), while more worker nodes provide higher concurrency (scale-out); see the example after this list. The minimum number of worker nodes is 1 and the maximum is 10. The minimum value of vCPUs per worker node is 2 vCPUs and the maximum is 96 vCPUs.
- Storage - Specify the amount of storage in TB for your instance. The minimum value is 1 TB for Microsoft Azure and 4 TB for Amazon Web Services. The maximum value is 90 TB.
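For example (purely as an illustration of the sizing options above), a data lake instance with 4 worker nodes of 16 vCPUs each provides 64 worker vCPUs in total. To improve the response time of a single large query, you would increase the number of vCPUs per worker node (scale-up); to support more concurrent users, you would add further worker nodes (scale-out).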