Lesson Overview
In this lesson, you'll learn about the concept of elastic scaling and how to change the size of an SAP HANA Cloud, SAP HANA database instance by tailoring compute and storage resources to evolving needs.
Business Case
After you start with an initial SAP HANA Cloud instance, your data set and workload grow over time. As a responsible administrator, you can scale the resources and change the memory size and storage size of an SAP HANA Cloud database instance using SAP HANA Cloud Central.
SAP HANA Cloud – Scalability

Fixed Scale: Customization is difficult from the fixed-scale perspective, which you know from an on-premises SAP HANA instance with a fixed system size, sized up front based on the data volume and the expected workload.
The peak workload determines the size of the instance, which usually leads to oversized hardware to cover peak workloads and, in turn, to increased costs.
Elastic Scale: The scalability in SAP HANA Cloud is elastic. With SAP HANA Cloud, you actively follow a customer-driven resource management path. As you learned in the previous lessons, you start by provisioning an initial SAP HANA Cloud instance size and then grow with your data and workload, or shrink as your workload decreases. This means that you monitor your workload and adjust the related resources to your needs, following a pay-as-you-go, consumption-based pricing approach.
Characteristics of SAP HANA Cloud – Elastic Scale:
- Dynamic resource management: Elastic compute and storage resources for specific projects.
- No upfront costs: No fixed hardware costs, but a true pay-what-you-use approach.
- No oversized or undersized system: React quickly to new and/or changed workloads.
Note
Self-service scaling is only possible for an SAP BTP enterprise account. SAP HANA Cloud trial or free tier instances have a fixed storage size and storage type.
SAP HANA Cloud – Dimensional Scaling
See the following video on dimensional scaling in SAP HANA Cloud.

As a user with the role SPACE DEVELOPER, you can edit the memory size and storage size of an SAP HANA Cloud database instance using SAP HANA Cloud Central. After provisioning, the memory can be scaled up and scaled down. Increasing or decreasing the memory size of an instance is available as a self-service. The disk storage space is automatically allocated according to the memory size of your instance.
When you scale down the memory of an SAP HANA Cloud instance, the disk space remains unchanged to ensure that no data is lost. Working in the Edit Configuration screen, you will observe the following system behavior:
- Scaling up the memory also increases the number of vCPUs (compute) and the disk (storage) values.
- Scaling down the memory only reduces the number of vCPUs (compute), not the disk (storage) space.
- Scaling the disk (storage) space up or down changes neither the memory nor the number of vCPUs (compute).
Activating the Cloud Connector is also available as a self-service option.
After you've set your required values and selected the Save button, the configuration changes will be carried out immediately, and your SAP HANA Cloud database instance will restart.
Database Instance Parameter Values
You can use the following database instance parameter values:
- Memory: The size of your (compressed) in-memory data in your SAP HANA database. On Microsoft Azure, you can select from 32 GB to 5,600 GB of memory. In some regions, only 3,776 GB may be available. On Amazon Web Services, you can select from 30 GB to 5,970 GB of memory. In some regions, only 3,600 GB may be available.
- Compute: The number of vCPUs of your SAP HANA database. The number of vCPUs is allocated according to the size of memory of your instance.
- Storage: The disk storage space of your SAP HANA database. On Microsoft Azure, values range from 120 GB to 32,767 GB. On Amazon Web Services, values range from 120 GB to 16,384 GB.
The SAP HANA Cloud database instance step sizes are shown in the following overview. Where two values are given (for example, 16 GB / 15 GB), the first applies to Microsoft Azure and the second to Amazon Web Services.
SAP HANA Cloud Database Instance Step Sizes

| Hyperscaler | 1 vCPU per 16 GB / 15 GB | 4 vCPUs per 64 GB / 60 GB | 120 vCPUs | 412 vCPUs / 420 vCPUs |
|---|---|---|---|---|
| Microsoft Azure | up to 960 GB | 960 GB - 1,920 GB | 3,776 GB | 5,600 GB |
| Amazon Web Services | up to 900 GB | 900 GB - 1,800 GB | 3,600 GB | 5,970 GB |
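For illustration, the following minimal sketch shows how the vCPU count follows from the memory size, assuming the 1 vCPU per 16 GB ratio that applies on Microsoft Azure for instances up to 960 GB:

```
# Assumption: 1 vCPU per 16 GB of memory (Microsoft Azure, instances up to 960 GB)
MEMORY_GB=256
echo "vCPUs allocated: $((MEMORY_GB / 16))"   # prints: vCPUs allocated: 16
```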
Scale Up a Database Instance Using the Cloud Foundry CLI
It's also possible to scale up an SAP HANA Cloud database instance using the Cloud Foundry CLI. Before you can scale up a database instance, you need to install the Cloud Foundry CLI and log on to SAP BTP as previously explained.

To scale up an SAP HANA Cloud database using the Cloud Foundry CLI, execute the following steps:
- Open a Linux Terminal or Windows PowerShell.
- Log in to SAP BTP using the command cf login -o <organisation> -s <space> -a <API endpoint>, and enter your credentials.
- To scale up an existing database, execute the following command:
  cf update-service <service name> -c '{"data": { "memory": xx, "vcpu": y, "storage": zzz } }'
Example:
cf update-service HC200-Demo-Database -c '{"data": { "memory": 64, "vcpu": 4, "storage": 200 } }'
This command scales the database up to 64 GB of memory, four vCPUs, and 200 GB of storage.
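After you trigger the update, you can follow its progress from the same terminal. The following is a minimal sketch using the standard cf service command with the example instance name from above:

```
# Show details of the service instance, including the last operation (update) and its status
cf service HC200-Demo-Database
```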
For a complete list of parameters that can be used to scale up a database instance, have a look at the Change the Size of an SAP HANA Database Instance Using the CLI page.
Scaling the Size of a Data Lake Instance
You can also scale in the storage dimension by adding a data lake, and the data lake itself can scale independently of the in-memory database. That means that with a data lake, you can add storage capacity as needed.
To do so, you must be logged on to the SAP BTP cockpit with a user that has the role SPACE DEVELOPER and open SAP HANA Cloud Central. From the Actions menu for your instance, choose Edit, change the size of the instance, and save your work.
The storage section provides the following properties:
- Coordinator: Where you can specify the number of vCPUs for your coordinator node. The minimum value is 2 vCPUs, and the maximum is 96 vCPUs.
- Workers: Specify the size and number of worker nodes for your instance. Larger worker nodes improve single-user performance (scale-up), while more worker nodes provide higher concurrency (scale-out). The minimum number of worker nodes is 1 and the maximum is 10. The minimum value per worker node is 2 vCPUs and the maximum is 96 vCPUs.
- Storage: Where you can set the storage size in TB for your instance. The minimum value is 1 TB for Microsoft Azure and 4 TB for Amazon Web Services. The maximum value is 90 TB.
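If you prefer the command line, a data lake instance can, in principle, be resized with the same cf update-service pattern shown earlier for the database instance. The sketch below is illustrative only: the instance name and the JSON parameter names are placeholders, not confirmed SAP parameters, so check the SAP HANA Cloud documentation for the exact keys your data lake instance accepts.

```
# Illustrative sketch only - the instance name and JSON keys are placeholders,
# not verified SAP parameter names; consult the SAP HANA Cloud documentation.
cf update-service HC200-Demo-DataLake -c '{"data": { "workers": 2, "worker_vcpus": 4, "storage_tb": 4 } }'
```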