Controlling Parallelization

Objective

After completing this lesson, you will be able to control parallelization in a data flow.

Controlling Parallelization in a Data Flow

SAP HANA Cloud always attempts to automatically apply parallelization to queries in order to optimize performance. Following good modeling practices ensures that the queries running on your calculation views have the best chance of being parallelized. However, there are cases where the optimizer does not parallelize a complex, long-running data processing step because it cannot understand the business semantics and therefore cannot be sure that parallelization would preserve the results. In those cases it cautiously leaves the processing step sequential, even though this might harm performance.

However, if you can guarantee that the step would run safely in parallel by splitting the data into semantic partitions, then you can dictate where parallelization should occur.

Within a calculation view, it is possible to force parallelization of data processing by setting the Partition Local Execution flag to mark both the start and the end of the section of the data flow where parallelization is required. You do this to improve the performance of a calculation view by generating multiple processing threads at specific positions in the data flow where performance bottlenecks can occur.

Screen capture of a calculation view scenario. The first projection is used to break up data into partitions based on the Country attribute. The second projection, called Currency_convert_In_Parallel, executes a process on three different partitions corresponding to the three Country values: India, France, and Iceland. A union node is then used to merge the data back together.

The parallelization block begins with a calculation view node that is able to define a table as a data source. This node could be a projection node or an aggregation node. It is not possible to use any other type of data source such as a function or a view.

In the Properties of the chosen start node, the Partition Local Execution flag is set to signal that multiple threads should be generated from this point onwards. It is possible to define a source column as a partitioning value. This means that a partition is created for each distinct value of the selected column. For example, if the chosen column is COUNTRY, then a processing thread is created for each country. It makes sense to look for partitioning columns across which the data is evenly distributed. The partitioning column is optional.
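
To judge whether a column such as COUNTRY is a good partitioning candidate, you can first check how evenly its values distribute the rows. The following is a minimal sketch, assuming a source table named SALES with a COUNTRY column (both names are illustrative):

    -- Check how evenly the candidate partitioning column distributes the data
    -- ("SALES" and "COUNTRY" are assumed names)
    SELECT "COUNTRY", COUNT(*) AS "ROW_COUNT"
    FROM "SALES"
    GROUP BY "COUNTRY"
    ORDER BY "ROW_COUNT" DESC;

A heavily skewed distribution means that one thread would do most of the work, which defeats the purpose of partitioning.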

If you do not explicitly define the partitioning column, then the partitioning rules of the underlying table are applied (assuming the table is partitioned).
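
For reference, the kind of table-level partitioning that the block would then fall back on might look like the following sketch. This is not taken from the lesson scenario; the table name, columns, and partition count are assumptions:

    -- A partitioned column table; with no partitioning column selected in the
    -- start node, these hash partitions drive the parallel threads
    CREATE COLUMN TABLE "SALES" (
        "COUNTRY"  NVARCHAR(50),
        "CURRENCY" NVARCHAR(5),
        "AMOUNT"   DECIMAL(15,2)
    )
    PARTITION BY HASH ("COUNTRY") PARTITIONS 4;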

To end the parallelization block, you use a union node. However, unlike a regular union node, which always has at least two data sources, a union node used to terminate the parallel processing block is fed from only one data source. The union node combines the multiple generated threads, but these inputs are not visible in the graphical editor, so the union node appears to have only a single source: the node below it. You cannot combine any other data sources in the union node that terminates the parallelization. In the Properties of the union node, the Partition Local Execution flag is set to signal the end of the parallelization block.

Screen capture of calculation view scenarios illustrating restrictions in the use of parallelization. See the text after the image for details.

There are some restrictions that you should be aware of here.

The parallelization block always starts with a node whose data source must be a table. You can use a table defined in the local container, or a synonym that points to an external table. You cannot use another calculation view or a table function as the data source.
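
At the SQL level, a synonym pointing to an external table can be created as in the following sketch (in an HDI container you would typically model the synonym as a design-time artifact instead; all names here are assumptions):

    -- Synonym in the local container pointing to a table in another schema
    CREATE SYNONYM "SALES_SYN" FOR "EXTERNAL_SCHEMA"."SALES";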

Screen capture of calculation view scenarios illustrating restrictions in the use of parallelization. See the text after the image for details.

Only one parallelization block can be defined per query. This means that you cannot stop one block and then start another, either in the same calculation view or across calculation views in the complete stack. You also cannot nest parallelization blocks, for example, by starting a parallelization block and then starting another one inside the original block.

It is possible to create multiple start nodes within a single parallelization block by choosing different partitioning columns for each start node. If you create multiple start nodes, then all threads that were generated are combined in a single ending union node.

Note

In addition to defining logical partitions using this technique, always remember that SAP HANA will attempt to apply additional parallelization to the logical partitions.

To check the partitioning of the data, you can define a calculated column within the parallelization block with the simple column engine expression partitionid(). In the resulting data set, you will then see a partition number generated for each processing thread.
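
Once such a calculated column is in place, a simple query against the view makes the generated partitions visible. This is a minimal sketch, assuming the view is exposed as CV_SALES and the calculated column is named PARTITION_ID (both names are assumptions):

    -- Each distinct PARTITION_ID value corresponds to one processing thread
    SELECT "COUNTRY", "PARTITION_ID", SUM("AMOUNT") AS "AMOUNT"
    FROM "CV_SALES"
    GROUP BY "COUNTRY", "PARTITION_ID";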

Other techniques to monitor parallelization:

  • You can also collect trace information by adding WITH PARAMETERS ('PLACEHOLDER' = ('$$CE_SUPPORT$$','')) to the end of the query statement, as shown in the sketch after this list.

  • Use the SQL Analyzer to navigate to the nodes that should be parallelized; you should see that the nodes are duplicated according to the number of partitions.
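
As an illustration of the trace option above, the placeholder is simply appended to an otherwise normal query. This sketch reuses the assumed CV_SALES view and columns from the earlier example:

    -- Collect engine trace information for this query
    SELECT "COUNTRY", SUM("AMOUNT") AS "AMOUNT"
    FROM "CV_SALES"
    GROUP BY "COUNTRY"
    WITH PARAMETERS ('PLACEHOLDER' = ('$$CE_SUPPORT$$',''));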

Caution

It is important not to overuse this feature: over-parallelizing can lead to poor overall performance, where other processes are affected.
