Exploring Best Practices for Planning Performance in an SAP Analytics Cloud Planning Story

Objective

After completing this lesson, you will be able to describe the performance best practices related to planning stories used in SAP Analytics Cloud.

Before You Start This Unit

When you are working with stories, whether they are built for data visualization or planning, there are many best practices that apply to both. In the Optimizing Performance in Story Design course, we focus on design-specific best practices for pages and widgets when building and viewing stories, laying the groundwork for a story that performs well.

Several topics covered there are important for both analytic and planning story performance. They include best practices such as planning the widget weight per page, limiting the number of cells in a table, avoiding an initial table state with complete drilldown, turning off the cascading filtering effect, and using collapsed input controls and filters in the initial view, since the query is then only sent to the backend system when the user interacts with the input control, instead of on page load.

For this reason, it’s important that you have completed the following units in the Optimizing Performance in SAP Analytics Cloud Story Design course:

  1. Exploring Story Design and its Impact on Performance in SAP Analytics Cloud
  2. Optimizing Performance in Story Design
  3. Optimizing Performance in Data Loading

In this lesson, we explore additional best practice recommendations that are more relevant to planning stories, such as data entry and working with the planning table.

Plan Your Story Design

Keep It Simple

Story design plays an important part in performance, so in addition to general story best practices, there are some things to keep in mind when designing planning stories, the first being simplicity. The majority of end users who interact with a planning story are primarily concerned with entering numbers and want to be able to access them quickly, so consider a simple story layout and limit the number of descriptive columns in your tables. You may have a hierarchy node that, when expanded, slows down performance, so taking steps to simplify the story design can improve actual query performance.

Always keep in mind how your story will be used and plan accordingly. For example, if you have a story that uses a planning model but contains some story widgets that are not used for planning, deselect Planning Enabled in the table's Builder panel. Story developers can leverage the getPlanning().setEnabled API to enable planning at view time whenever it's needed.
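As an illustration of that API, the following SAC script sketch enables planning only when the user actually needs it. This runs only inside SAP Analytics Cloud's scripting environment (for example, in a button's onClick handler), not standalone, and the widget name Table_1 is an assumption:

```js
// SAC script sketch (valid only inside SAP Analytics Cloud scripting,
// e.g. a button's onClick handler); "Table_1" is an assumed widget name.
if (!Table_1.getPlanning().isEnabled()) {
    // Turn planning on only at the moment the user needs to enter data,
    // so the table renders without planning overhead until then.
    Table_1.getPlanning().setEnabled(true);
}
```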

Story with Planning Enabled selected in the builder panel.

Hint

Remember, if you are working with a large model, enable Optimize Story Building Performance in the model preferences so that data updates during story design are prevented, requiring a manual refresh of the data.

Optimize Story Building Performance selected (highlighted) in the model preferences.

Test Your Story Performance

Test your story performance as you build it. For example, without testing hierarchy node expansion, you might not realize that it is affecting the planning story's performance; testing lets you simplify the story design early on to optimize actual query performance. Testing as you build will help you identify potential issues as well as reduce troubleshooting time later on.

Story performance can be impacted by factors like data locking, extensive disaggregation to many member combinations, non-key attribute filters, and cell locks. For this reason, it’s important to test your story’s performance in view time using the Performance Insights tool as you build it when using these elements.

Handle Unbooked Cells Strategically

Minimize the display of unbooked cells, as they are very expensive to render and can easily multiply when multiple dimensions are nested in the rows or columns. Where possible, only show unbooked cells on the inner dimensions. This is because enabling display on outer dimensions first (left or top dimensions) automatically enables it on all inner dimensions (right or bottom). In the following example, you can see that when you deselect unbooked data in either the table or the Builder panel, the unbooked data is no longer displayed, which in this case is Customer C.

Unbooked Data is deselected, and Customer C is no longer visible in the table.

Design Data Action Steps Carefully

If you have a planning story where referencing members is required, use data action copy steps to allow sourcing data from calculated members or parent members to optimize performance. Instead of referencing the source member in each of the data action steps, you can reference the source member in the first step, then set up the additional steps to read the value.

Let’s take a look at an example. You have an allocation in your planning story, and you want to leverage calculated members as the driver on which to spread target values. You want to reduce the reliance on the driver in subsequent steps of the allocation. As the allocation is based on a previous year’s value, you enter the driver value in the current year rather than repeatedly referencing the value through a Lookup calculation, since it’s a value that is unlikely to change. Repeatedly referencing a static value using a calculated account or measure creates unnecessary processing expense with each allocation execution.

A data action for allocation expenses with the Driver Context field highlighted. Reference Dimension is Date (Year, Quarter, Month).

Data actions, multi actions, and calculations also play a big part in performance in planning stories. As such, we have included separate lessons on these topics.

Data Entry and Deletion

One of the key elements to consider when optimizing performance is how users will enter and remove data in the planning story. To prevent timeout errors, it's recommended to avoid copying and pasting large data volumes from Microsoft Excel. Let’s take a look at some common scenarios for data entry and the bulk deletion of data that will optimize data entry performance.

Data Entry Using Mass Data Entry

Mass data entry mode is intended to change many data cells individually and then have them processed at once. To optimize performance when using mass data entry:

  1. Split the update into multiple smaller, segmented updates in order to minimize dependency checks and runtime growth.
  2. Ensure the visible cells in the grid design meet the following conditions so that dependency checks can be skipped:
    • No formulas.
    • No lookups.
    • No dynamic features.
    • No calculated dimensions.
    • Dimensions are restricted to leaves (if available, select the Show only leaves in widget option in the hierarchy settings).

Caution

It’s important to adhere to the SAP Analytics Cloud limit of 30,000 cell changes. When the limit is exceeded, the application produces an error message and data entries are rejected.
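As a rough sanity check before a large paste, you can estimate the number of cell changes in a rectangular region and compare it against the 30,000-cell limit. This helper is an illustrative sketch, not an SAP Analytics Cloud API:

```python
# Illustrative sketch: estimate whether a rectangular paste region would
# exceed the SAP Analytics Cloud limit of 30,000 cell changes.
SAC_CELL_CHANGE_LIMIT = 30_000

def paste_within_limit(rows: int, cols: int, limit: int = SAC_CELL_CHANGE_LIMIT) -> bool:
    """Return True if a rows x cols paste stays within the cell-change limit."""
    return rows * cols <= limit

# A 1,000-row x 12-column region (e.g. cost centers by month) is fine...
print(paste_within_limit(1000, 12))   # True (12,000 changes)
# ...but 5,000 rows x 12 columns would be rejected.
print(paste_within_limit(5000, 12))   # False (60,000 changes)
```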

Data Entry Using Copy and Paste

If you copy a source cell with data, then all underlying values that aggregate up to it are automatically copied and pasted as well. For example, if you copy a Q1 member and paste it to Q2 for the same version, the January values are copied to April, the February values to May, and the March values to June.

When copying and pasting data within a table, there are two types of paste operations. You can choose to paste:

  1. Underlying values: the default mode, which transfers detailed data. Use this when the source cell is visible, and you are copying:
    • Between same-level time periods, for example, December to January
    • Between different leaf members, such as between sales managers
  2. Overall values: transfers aggregated data. When the conditions for pasting underlying values are not met, only overall data is pasted. This works like manual entry: if the target cell already has a value, the data is distributed to the leaf members based on the existing proportions between those members. To paste overall values, either:
    • Select the required cell, then from the menu, select Edit and Paste Overall Values Only, or
    • Right-click the required cell and select Paste Overall Values Only from the context menu.
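The proportional distribution behind an overall-values paste can be illustrated with a small sketch. This is my own simplified illustration of the spreading logic, not SAP Analytics Cloud internals; the even spread for empty leaves is an assumption for the example:

```python
# Illustrative sketch: spread a pasted aggregate to leaf members in
# proportion to their existing values (simplified; not SAC internals).
def disaggregate(total: float, leaf_values: list[float]) -> list[float]:
    """Distribute 'total' across leaves proportionally to existing values."""
    current = sum(leaf_values)
    if current == 0:
        # No existing proportions: spread evenly (simplifying assumption).
        return [total / len(leaf_values)] * len(leaf_values)
    return [total * v / current for v in leaf_values]

# Q1 currently holds 100, split 50/30/20 across Jan/Feb/Mar.
# Pasting an overall value of 200 preserves those proportions.
print(disaggregate(200.0, [50.0, 30.0, 20.0]))   # [100.0, 60.0, 40.0]
```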

Hints for Data Entry Performance

Copy and paste fewer, high-granularity cells. This performs better than copying and pasting many low-granularity cells. For example, copying facts from one year to another leads to the same result as copying at month level and is generally faster.

Limit the number of (leaf) cells to be updated. For data entry processing, it is better to avoid updating too many cells at the same time. For example, if your model contains many dimensions where the unassigned member is not available, spreading data to all leaf members for each dimension may result in slow performance.

Filter dimensions selectively. Another way to optimize performance in a planning story is to filter dimensions selectively and use data actions to pre-populate key combinations. Be sure to filter data or adjust target cells to avoid performance warnings. When updating multiple cells, the application runs dependency checks and verifies potential restrictions pairwise, so it is actually faster to split the update into multiple smaller ones, rather than running a single sizable update in one go.
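The advice to split one sizable update into several smaller ones can be sketched as a simple batching helper. The function and names here are hypothetical illustrations, not an SAP Analytics Cloud API:

```python
# Illustrative sketch: split one large set of cell updates into smaller
# batches, so each batch triggers fewer pairwise dependency checks.
def split_into_batches(cell_updates: list, batch_size: int) -> list[list]:
    """Return successive batches of at most batch_size updates each."""
    return [cell_updates[i:i + batch_size]
            for i in range(0, len(cell_updates), batch_size)]

# Hypothetical example: 25 cost-center updates submitted in batches of 10.
updates = [(f"CostCenter_{i}", 100.0) for i in range(25)]
batches = split_into_batches(updates, 10)
print(len(batches))       # 3 batches: 10 + 10 + 5 updates
print(len(batches[-1]))   # 5
```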

Complex Scenarios

It’s important to take steps to optimize performance in a planning story that contains complex scenarios. For tables with many dimensions on the axes, avoid mass data entry (or copying facts from a Microsoft Excel sheet) onto a high number of empty cells, as it can lead to performance or memory issues. The runtime and memory consumption of a mass data entry depend on the product of the number of unique members for all dimensions on the axes for the area of touched empty cells.

Let’s look at an example. In the following table, you want to enter data for all cells for February and March. The cartesian hull for this area contains 158,400 member combinations (the product of the numbers of unique members per dimension): 1 for Region * 11 for Department * 20 for Employee ID * 18 for Hire Date * 20 for Position * 2 for Date.

While the mass data entry in this example can be executed without problems, consider a similar mass data entry onto 100 rows instead of 20. This might span a cartesian hull of billions of member combinations and lead to performance issues or excessive memory consumption, causing the operation to fail.
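The cartesian-hull size from the example above is just the product of the unique member counts per dimension, which is easy to verify:

```python
# Compute the cartesian-hull size for a mass data entry: the product of
# the number of unique members per dimension in the touched empty area
# (member counts taken from the example in the text).
from math import prod

unique_members = {
    "Region": 1,
    "Department": 11,
    "Employee ID": 20,
    "Hire Date": 18,
    "Position": 20,
    "Date": 2,
}
hull = prod(unique_members.values())
print(hull)   # 158400, matching the example
```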

Example of a story with a large cartesian hull, as described above the image.

There are different mitigations you can use to improve performance, including adding input controls on dimensions to split up the mass data entry into multiple smaller parts, removing unnecessary dimensions from the axes, or using a data import job to import data into the model. Details on this solution can be found in SAP Note 3287839.

Bulk Deletions

When deleting numerous cells, you do not need to restrict the visible dimensions to leaves to achieve faster deletions.

For optimized performance when performing bulk deletions, consider using contiguous selection. Deleting cells is faster if the selection forms a contiguous rectangle without gaps. Dragging from edge to edge achieves this automatically, whereas removing isolated cells within a large area can slow down processing, even if unbooked cells are included.