Using Structured Query Language (SQL)

WFA on HANA allows scripts to be written in Structured Query Language (SQL). While it is valuable to understand SQL, it is not critical for using this guide, as the required syntax is included in each relevant section.
The Tables and Columns function uses a snippet of SQL: you enter only the [column] portion of the statement, and the function automatically inserts this snippet into the full SQL statement that is required, for example, SELECT GENDER FROM PERSONAL_INFO.
The Dimensions function, in contrast, uses full SQL statements, which you enter directly.
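To make the snippet-insertion behavior concrete, here is a minimal sketch in Python (using sqlite3 purely for illustration; the table and column names come from the example above, and build_statement is a hypothetical helper, not a WFA function):

```python
import sqlite3

def build_statement(column_snippet: str, table: str) -> str:
    # The Tables and Columns function takes only the [column] snippet
    # and wraps it in the full SELECT statement for you.
    return f"SELECT {column_snippet} FROM {table}"

# In-memory table mirroring the SELECT GENDER FROM PERSONAL_INFO example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PERSONAL_INFO (GENDER TEXT)")
conn.executemany("INSERT INTO PERSONAL_INFO VALUES (?)", [("F",), ("M",)])

sql = build_statement("GENDER", "PERSONAL_INFO")
print(sql)  # SELECT GENDER FROM PERSONAL_INFO
print(conn.execute(sql).fetchall())
```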
Note
HA150 - SQL and SQL Script Basics for SAP HANA is a recommended prerequisite for this course.
Restrictions on WFA on HANA Configuration Functionality
There are elements of the WFA on HANA configuration that are restricted to internal SAP admin users. These are:
Calculated Columns: This function is outlined in Create Calculations, however, access is restricted to internal SAP admins.
Custom Metrics: SuccessFactors provides 20 custom metrics that are not part of the standard list of metrics available within the supported Metrics Packs. Additional custom metrics must be purchased. The list of standard metrics available in your environment can be seen in the Standard Measure drop-down in the Add Measures section. More detail is provided in the Custom Measures section later in this unit.
EC User SYS ID and Person ID

There are two ID columns in employee data that are used to identify which records belong to which employee: User Sys ID and Person ID. The Emp Employment Info table has both of these employee ID columns and is used as the mapping between the two. Employees can have more than one record in this table, with the combination of Person ID + User Sys ID serving as the unique identifier for the employee and their employment assignment.
Person ID: The identifier for an individual. Each employee will only have one Person ID which uniquely identifies that employee.
User Sys ID: The identifier for an individual’s employment assignment. Employees can have more than one User Sys ID, typically uniquely identifying each of the employees’ Global Assignments.
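The relationship between the two IDs can be illustrated with a small sketch (the rows below are hypothetical, not real EC data):

```python
# Hypothetical Emp Employment Info rows: one row per employment assignment.
emp_employment_info = [
    {"person_id": "P001", "users_sys_id": "U100"},
    {"person_id": "P001", "users_sys_id": "U101"},  # a second assignment
    {"person_id": "P002", "users_sys_id": "U200"},
]

# Person ID alone does not uniquely identify a row in this table...
person_ids = [r["person_id"] for r in emp_employment_info]
print(len(set(person_ids)))  # 2 distinct people across 3 rows

# ...but Person ID + User Sys ID uniquely identifies each assignment.
keys = {(r["person_id"], r["users_sys_id"]) for r in emp_employment_info}
print(len(keys))  # 3
```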
Note
Global Assignments and Concurrent Employments are NOT currently supported in WFA on HANA. Go to Workforce Analytics on SAP HANA FAQ in the SAP SuccessFactors People Analytics Help Portal for more information.
Employee Central Effective Dating and Transaction Sequencing
Most data in Employee Central (EC) is effective dated, meaning that a begin/start date and a finish/end date are stored on employee records to show the timespan for which the data is applicable. This allows WFA on HANA to derive information for each employee and apply it across a timeline for analysis. Data items such as Name, Address, Department, Division, and Status are all effective dated. Tables that contain this type of information can have multiple records per object (for example, Person or User), with each record spanning a separate time frame.
There is some data that does not change over time and as such will not be Effective Dated. This includes columns like Date of Birth, Date of Death, and Country of Birth. This non-effective dated data is always stored in separate tables to the effective dated data, and will only ever contain one record per object, for example, Person.
Effective End Date can be configured in WFA on HANA in one of two ways:
• Using the 'default' functionality: If a column is not specifically defined as an effective to date (see next option) the default functionality will calculate effective end dates as next record start date – one day.
• Setting the Special Use Type as Effective to Date (ToDate) on the Effective End Date column: This option 'forces' the use of the specified column value as the end date. This option is used when the default next record start date – one day – calculation is not appropriate.
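The default calculation can be sketched as follows, assuming an ordered list of record start dates for one employee (the dates are invented for illustration):

```python
from datetime import date, timedelta

def default_effective_end_dates(start_dates):
    """Default rule: each record's effective end date is the next
    record's start date minus one day; the last record stays open."""
    ends = []
    for i in range(len(start_dates)):
        if i + 1 < len(start_dates):
            ends.append(start_dates[i + 1] - timedelta(days=1))
        else:
            ends.append(None)  # open-ended final record
    return ends

starts = [date(2012, 1, 1), date(2012, 3, 1), date(2012, 9, 1)]
print(default_effective_end_dates(starts))
# [datetime.date(2012, 2, 29), datetime.date(2012, 8, 31), None]
```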
Example of EC Dating and Transaction Sequencing

List from the figure:
• Employee was tenured for <1 year, 1st Jan 2012 to 24th Dec 2012 (incl) = 359 days.
• Employee was tenured in the Support department for 6 months, 1st Mar to 31st Aug.
• Employee spent 3 months on Leave with Pay.
• Employee was tenured for 8 months before receiving a promotion
• Employee had been in promoted role for <3 months before terminating.
Effective Dating is applied at a daily level. If edits to the employee result in more than one record on a single day, Transaction Sequencing is used to determine the most 'current', for example, the last record for an employee.
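Transaction sequencing can be sketched as keeping only the highest sequence number per effective date (the records below are hypothetical):

```python
# Hypothetical same-day edits: (effective date, sequence number, department).
records = [
    ("2012-09-01", 1, "Support"),
    ("2012-09-01", 2, "Sales"),  # a later edit on the same day
    ("2012-10-01", 1, "Sales"),
]

# For each effective date, keep the record with the highest sequence
# number - that is the 'current' record for that day.
current = {}
for eff_date, seq, dept in records:
    if eff_date not in current or seq > current[eff_date][0]:
        current[eff_date] = (seq, dept)

print(current["2012-09-01"])  # (2, 'Sales') - the last record of the day wins
```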
WFA on HANA Data Factory Home Page

1. Company Status: Latest status of the WFA on HANA build.
2. Load Status: Select to access the load log for current and historical builds.
3. Incremental Process: Enable/disable Incremental Processing, set its frequency, and optionally start an out-of-cycle Incremental Build.
4. Configuration: Quick links to jump to each of the configuration pages.
5. Measure/Dimension Arrangement: Access measure and dimension tools.
6. Validation: Run validation on the configuration to confirm configuration can be processed.
7. Import/Export configuration: Export the configuration for backup. You can also import it into another instance, for example, to copy configuration from Test to Production.
8. Edit Hierarchies: Check dimension structures and map and/or relabel nodes.
9. Schedule Initial Load: Schedule an initial load at a preferred date and time range.
10. Debug Process: Processes that are used to perform detailed validation on the configuration.
Workforce Fact Table

To begin, create the structure of what will be included in WFA on HANA by flagging and configuring the different pieces of data that make up the Employee Central data. This will create the Base Input Measures and Analysis & Structural Dimensions that will then be used by WFA on HANA to generate Derived Input and Result Measures.
To start, click on Tables and Columns in the Configuration section of the WFA on HANA Data Factory Screen:
Tables and Columns Overview

a. Return to Data Factory or Analytics Home.
b. List of Fact Tables configured for the instance.
c. Fact Table management.
d. Configure Company Settings and display/hide the validator.
e. List of available source tables from BizX.
f. List of available source data columns for the selected table.
g. Create a calculated column.
h. Edit/configure the selected column.
New Fact Table

Fact tables have the following settings:
Label: Name for the Fact Table.
Type: Use Simple for measures that are used 'as is' from the source, for example, where there is no splicing of data. Use Complex when you need to calculate movements (hires, internal movements, or terminations), or otherwise transform the data, for example, to ensure that only the last record of the day is used.
Null Substitute: Inserted as the cell value if the cell is empty, for example, if an employee does not have a Department, use "??".
Group: (optional) If your Fact Tables are grouped, you can choose which group to save this Fact Table to. This becomes useful as the number of Fact Tables grows.
Data Source: Always use Realms Schema Objects. If Realms Logical Objects appears in the list, do NOT use it, as it is unsupported as a data source for WFA on SAP HANA.
Terminations on Previous Day: Select this option if the employee's Start Date for their Termination record is the first day they are not working, for example, their last day at work was the previous day.
Active: Deselect this if you no longer wish to have this table as part of the HANA OLAP build, that is, the function will no longer process this Fact Table.
To configure the Fact table for Employee Central:
Label: Workforce
Type: Complex
Null Substitute: ??
Group: Default
Data Source: Realms Schema Objects
Terminations on Previous Day: Enabled
Active: Enabled
Adding Data to Workforce Fact Table

The WFA on HANA manual implementation guide walks through adding all the necessary tables and columns for an EC WFA on HANA configuration. This course will focus on common examples. For a full list of configurations, refer to the guide.
When adding a column from a table, you will configure the Is Tenure, Rollup Type, and Special Use Type settings for the column.
Special Use Type:
WFA on HANA associates columns with special uses to support certain WFA on HANA functionality.
The list contains:
- None: No special use for the column.
- PrimaryPerson: Identifies the column as the primary employee ID, or Person ID for EC. Used to build the facts around an employee.
- FromDate: Identifies the column that stores the start date of the record for effective dating, usually the effective start date column.
- ToDate: This option 'forces' the use of the specified column value as the effective end date. Standard functionality (no effective to Date) will calculate effective end date as next record start date minus one day.
- EffectiveSequence: Identifies the column that stores the effective dating sequence number to determine the order of changes that occur on the same date, usually the effective sequence number column.
- SecondaryPerson: Identifies the column as the secondary employee ID, or USER SYS ID for EC. Used to build the facts around an employee.
- DOB: Identifies the column as date of birth for calculating age dimensions.
- HireDate: Identifies the column for calculating the organizational tenure dimension.
- AssignmentType: Identifies the column for assignment type, which stores if an employee has a standard assignment or a global assignment.

1. Open the Employee Central folder and select the Emp Employment Info table. A list of available columns from this table will appear to the right.
2. Select the following columns from the Emp Employment Info columns:
• Assignment Type
• Original Start Date
• Person ID
• Users Sys ID
3. Assign the special use types:





Final table configuration with unused columns hidden for Emp Employment Info.
Rollup Type
The rollup type allows WFA on HANA to handle a record that is spliced while building the fact table: it controls what happens to the column's value when the splicing occurs.
Normal: the value will stay as is, the value will be the same on both records.
SOP: the value is only maintained in the original record. Useful when you want the value of the column to only be applicable on the effective start date of a record. For example, a movement should only be counted on the actual start date of the record. If it is spliced, it should not count as a movement on the new records.
Prorata: used for numeric data types. When the record is split, the value is prorated across the old and new records.
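The three rollup types can be sketched with a toy splice function (a simplification, assuming a numeric value and day-based proration):

```python
from datetime import date

def splice(record, split_date, rollup):
    """Split one effective-dated record at split_date and apply the
    rollup type to its numeric value. A simplified sketch."""
    old = dict(record, end=split_date)
    new = dict(record, start=split_date)
    if rollup == "SOP":
        # Value only applies from the original effective start date.
        new["value"] = 0
    elif rollup == "Prorata":
        total_days = (record["end"] - record["start"]).days
        old_days = (split_date - record["start"]).days
        old["value"] = record["value"] * old_days / total_days
        new["value"] = record["value"] * (total_days - old_days) / total_days
    # "Normal": the value stays the same on both records.
    return old, new

rec = {"start": date(2012, 1, 1), "end": date(2012, 1, 11), "value": 100}
old, new = splice(rec, date(2012, 1, 6), "Prorata")
print(old["value"], new["value"])  # 50.0 50.0
```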
Is Tenure

Position tenure can be calculated from the date an employee begins in a job or position. The standard is to calculate based upon a change in the Position ID in the Emp Job Info T table.
Calculated Columns

Some base measures and dimensions are not simple values in an existing column, but must be created or calculated based upon logic. WFA on HANA can create calculated columns in the fact table using SQL scripts. The manual implementation guide provides labels, data types, and example scripts for the standard EC configuration.
Process to Create a Calculated Column
The general process to create a calculated column is:
1. Select Add Calculated Column to bring up the Column Properties.
2. Complete the Label.
3. Configure the Data Type. Changing the data type will change the formatting options available for the column.
4. Set Is Tenure, Rollup Type, and Special Use Case appropriately. Refer to the previous sections for more information.
5. Configure the data type formatting (if necessary).
6. Select edit to enter the SQL script.
7. Enter the SQL Script in the Editor. Remember the script represents one column in the SELECT statement.
8. Select Validate to ensure the formula has been entered correctly, and save.
Find a Field’s Back-end Field Sourcing using Table Reporting
Most of the time the field setup in the Admin Center → Manage Business Configuration will map one to one with a field in the back-end database with a similar name. However, for generic or custom fields this may not be the case.
You can locate the back-end sourcing by using a Table Report:
- Prerequisite - need to turn on Enable Visual Publisher setting in provisioning.
- Go to Report Center and create a new Table report. Choose the appropriate reporting domain; for example, for EC you may use Person and Employment Information (as of Date).
- Include in the column configuration the field(s) for which you want to find the back-end sourcing.
- Select the Data Sets on the header menu and choose Download XL Template.
Note - the report needs to be saved first before you can use the Data Sets menu.
- Open the downloaded file.

The first row shows the front-end field name, and the second row shows the possible back-end field name. For example, for Ethnic Group column in the Global Info for USA table, the back-end field is SF_VCHAR1.
For comparison, here is what the setting in Manage Business Configuration looks like for Ethnic Group:

Adding Generation with a Script Template

Some common complex scripts may have templates that can be added by dragging the appropriate item from the common algorithms section of the Edit Formula tool. Currently, the generation measure is supplied via common algorithms.
Delete Fact Tables
You may have a scenario where you import a template that generates more fact tables and fact table groups than you need for your customer’s configuration. Therefore, you may prefer to remove these unused tables and groups. In this case you need to perform the process in a specific order.
General Steps to remove fact table(s):
- Remove/delete any Key Mappings that use "to be deleted" fact tables.
- Delete fact tables one by one.
- Delete the fact table group.
It is recommended to keep any logic in the Workforce fact table related to other metrics packs, at least for the Tables and Columns and Lookup configuration, because there may be restricted calculated columns that use these as sourcing. You can delete any dimensions related to other metrics packs that are not used.
Lookup Tables

Most lookups are used only to change/replace internal codes into a label (or external code). There are occasions however, where the label (or external code) is specifically required in the processing of the data. Events and Event Reasons are an example of this, where, to identify when an employee was hired, terminated, or otherwise moved within the organization, the specific Event or Event Reason external code is required. Lookups allow joining tables that do not have a primary or secondary person ID.
General Steps to configure a lookup:
1. Select the Lookups tab in the top navigation bar, then select Add Lookup.
2. Select the appropriate table from the choose lookup table drop down and choose OK. If the table needed doesn’t appear, make sure you have enabled it in the tables and columns section.
3. Select Add Join Column and choose the Lookup Column and the Source Column, then select OK.
Lookup Example

1. Select the Lookups tab in the top navigation bar, then select Add Lookup.
2. Select the FO Event Reason T table from the drop down and choose OK.

3. Select Add Join Column and choose Internal Code for the Lookup Column and Emp Job Info T → Event Reason Filtered for the Source Column.
Standard Lookups for WFA on HANA for Employee Central

Create Events
For WFA on HANA, event lists define employee movement, for example, Hire, Termination, Promotion, Transfer, Demotion, based upon codes. Refer to the Base Input Measures tab of the SF WFA on HANA Data Specification for the required codes used in the instance.
How Do Movements Work?

Configuring movements is completed in two steps.
Step 1 is to configure your Event Lists and the conditions on the Hires, Movements, Terms tab.
This is required so the appropriate events can be picked up as hire/internal movement/termination. The logic required should refer back to captured configuration requirements from the customer documented in the SF WFA on HANA Data Specification. How to perform this configuration is covered in this section.
What if Step 1 is not configured properly? The Data Factory will not be able to capture these events and recognize them as movements that need to be mapped in Step 2. Configuring your event lists is crucial in picking up all actions/events that you potentially want to report regardless of whether the event is hire, promotion, demotion, transfer, termination, or other event.

Step 2 is to map each code in the Recruitment Source and Separation Reason Dimensions.
Once all your events are captured in the system as movements, you will need to map each code from the unmapped grouping into the other groupings in the dimension hierarchy. Recruitment Source allows Movement In measures to be broken down further into categories, for example, Hire or Promotion (In). Separation Reason allows Movement Out measures to be broken down further into categories, for example, Voluntary Termination or Promotion (Out). How to configure code mapping for dimensions is covered in the section Configuring the Hierarchies.
What if Step 2 is not configured properly? Although all events/actions have been captured in Step 1, the system does not know what kind of movements they should be reported as. If you don't map the corresponding codes under promotion, demotion, transfer, and so on, then you will not get results for those metrics, because the Data Factory does NOT know which events should be reported in the appropriate dimension nodes, for example, Hire, Transfer, Promotion, or Voluntary Termination.
Steps to Configure an Event List

1. Navigate to the Events Lists tab on the top navigation bar and choose Add.
2. Label this Event List.
3. Open the Employee Central → Emp Job Info T folders and drag and drop the Event column onto the Event Code Columns bubble. This will populate the descriptions column.
4. Select the Events that correspond to the movement then choose OK to save.
Event List Example

1. Navigate to the Event Lists tab on the top navigation bar and choose Add.
2. Label this Event List Hires.
3. Open the Employee Central → Emp Job Info T folders and drag and drop the Event column onto the Event Code Columns bubble.

4. Select the Events that correspond to an employee’s Hire (here Hire and Rehire have been chosen), then choose OK to save.
Standard Event Lists for WFA on HANA for EC

Create Conditions
In WFA on HANA, conditions identify when an event occurs. The next step is to create conditions for the events.
General Steps to Create a Condition

1. Navigate to the Hires, Movements, Terms tab on the top navigation bar and select Add Condition in the appropriate section.
2. Label this Condition.
3. Drag the event code column onto the Event Code Columns box.
4. Select the Condition Method and configure the resulting method.
5. Choose OK to save.
Condition Methods
A condition can be identified by one of five methods:
- Change in Value - compares values in the event code column, looking for any change in value.
- Event List - compares the values in the event code column to the event lists created in the previous section.
- Starts With - does pattern matching on the event code column.
- Increase in Value - compares values in the event code column, typically used for promotions and demotions.
- Decrease in Value - compares values in the event code column, typically used for promotions and demotions.
Additionally, when comparing values, you should check the Enable Table Filter option. This limits the comparison of records to the table that contains the event code column. It should always be checked.
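The five condition methods can be sketched as one function (a simplification; the real Data Factory evaluates these against the previous and current record for the same employee):

```python
def condition_met(method, prev, curr, event_list=None, prefix=None):
    """Evaluate a condition method against the previous and current
    values of the event code column. Simplified sketch."""
    if method == "Change in Value":
        return curr != prev
    if method == "Event List":
        return curr in (event_list or set())
    if method == "Starts With":
        return str(curr).startswith(prefix or "")
    if method == "Increase in Value":
        return prev is not None and curr > prev
    if method == "Decrease in Value":
        return prev is not None and curr < prev
    raise ValueError(f"Unknown method: {method}")

print(condition_met("Event List", None, "HIR", event_list={"HIR", "REH"}))  # True
print(condition_met("Increase in Value", 2, 3))  # True, e.g. a promotion in pay grade
```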
Condition Example

The Internal Movements condition looks for events that match the Internal Movements event list configuration.
1. Navigate to the Hires, Movements, Terms tab on the top navigation bar and select Add Movements Condition.
2. Label this Condition Internal Movement Events.
3. Drag the Event column from the Emp Job Info T table onto the Event Code Columns box.
4. Select Event List from the Choose Condition Method drop down and choose Internal Movements.
5. Choose OK to save.
Create Calculations
Once the conditions have been created, then calculations are created to provide codes that represent the different types of movements. Calculations are created for Hires, Movements, and Terminations.

1. Navigate to the Hires, Movements, Terms tab on the top navigation bar and choose Edit on the condition.
2. Complete the If/Then/Else configuration for the Condition.
3. Choose OK to save.
Note
For Hire Event conditions, the date for the hire is usually the Effective Start Date that is applicable at the Hire event. For System Uploads however, the Effective Start Date is typically the date of the System Upload (not the hire). Checking Use Hire Date ensures that the actual Hire Date for the employee is used, not the System Upload date.
Calculation Example

The hires calculation will output a concatenation of the Event and Event Reason codes when a Hire condition is met, or a dummy code when a System Upload is identified.
1. Navigate to the Hires, Movements, Terms tab on the top navigation bar and choose Edit on the Hires condition.
2. In the IF statement, pull in the Hire Events condition.
3. In the Move Condition statement, pull in the Emp Job Info T → Event and FO Event Reason T → Code columns.
4. Insert an underscore "_" to visually separate the Event from the Code by dragging in a static character between Event and Code using the A button:

Note
The red border on Move Conditions is not an error and can be ignored.
5. In the ELSE > Move Condition statement, pull in an IF/THEN statement:

6. In the IF statement, pull in the System Uploads condition.
7. In the Move Condition statement, pull in a static string using the A button and enter "HIR_SYS".
8. Leave the final ELSE > Move Condition empty:

9. Select Toggle Options and ensure the Use Hire Date option is checked for the System Uploads condition:
10. OPTIONAL: If the employee's Hire Date is to be used for the Hire Event, rather than the default Effective Start Date applicable for the Hire Event, then also select Use Hire Date for the Hire Event condition.
11. Choose OK to save.
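The If/Then/Else logic in the steps above can be sketched in plain code (the event codes and lists below are illustrative assumptions, not your instance's actual codes):

```python
def hires_calculation(event, event_reason, hire_events, system_uploads):
    """Mirror of the calculation above: Event + '_' + Event Reason code
    for a Hire, the dummy code 'HIR_SYS' for a System Upload, else nothing."""
    if event in hire_events:
        return f"{event}_{event_reason}"
    elif event in system_uploads:
        return "HIR_SYS"
    return None  # the final ELSE is left empty

hire_events = {"HIR", "REH"}       # assumed Hires event list
system_uploads = {"DATA_UPLOAD"}   # assumed System Uploads condition
print(hires_calculation("HIR", "NEWHIRE", hire_events, system_uploads))  # HIR_NEWHIRE
print(hires_calculation("DATA_UPLOAD", "", hire_events, system_uploads))  # HIR_SYS
```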
Standard Conditions and Calculations for WFA on HANA for Employee Central

Ability to Prioritize Internal Movements
WFA on HANA has the option to prioritize internal movements. When configured, if more than one internal event occurs on the same day, then the most significant internal event will get reported in the system. The user will now have an option to report either the last occurred event or the prioritized event.
Steps to configure priority for internal movements:

- On the Event List tab, configure a separate event list for each internal event, such as promotion, transfer, downgrade, and so on.
- Create conditions for each internal event under the Movement section on the Hire, Movements, Terms tab, select the Movement Priority option, and prioritize the internal event with the most significant event on the top.
Note
- If you want to report a Hire over an internal event, then you need to add another Hire condition (replicating the condition from the Hire section) in the Movement section. For example, if an employee's hire and promotion happen on the same day and you want to report the Hire over any internal event, then you need to create the hire condition again in the Movement section and place it at the top of the list.
- If you want to report last occurred event, then select Last event occurred option. In this case, movement priority will not get triggered.
- Create Calculations for the internal events.
- Run an initial load.
Describe Calculations
Note
This function is only available to internal SAP admin users.
This example will use calculations to source and aggregate the necessary Pay Components to create Annual Salary. Then the example will use the Annual Salary to determine the appropriate node in the Salary Range.
This section will explain how Annual Salary is calculated and its corresponding dimension: Salary Range.
Calculation Example

First, review some key fields in the Emp Paycomp Recurring T table:
Default Currency Code: This calculated column configures the desired from/to currency. The 'from' currency should come from the Currency column in this table. The 'to' currency is whatever you would like to convert to; it is not necessarily USD.
Base Salary: This calculated column configures where base salary is sourced from. It could come from one or multiple pay component IDs, depending on your customer's settings. Keep in mind that if a person has multiple pay component IDs available for their base salary, the records will overwrite each other based on their sequence. This usually does not matter: one could be at a weekly frequency and another at a monthly frequency, and the total annual salary after factoring in frequency should reach a similar figure.
Target Bonus Amount: This calculated column configures a potential bonus that sits on top of the salary. Similar to Base Salary, it can only track one bonus value; if there are multiple bonuses at different times, the latter overwrites the former.
InvertCurrencyConversionRate: This calculated column allows you to invert the conversion rate from the currency conversion table in case the rate is represented the other way around.
This is the example code of the Annual Salary:
'Workaround to solve Incremental issue: A = Data("[%EMP_EMPLOYMENT_INFO.ASSIGNMENT_TYPE%]", -2)
'Instantiate forward-filled variables
If Data("[%EMP_JOB_INFO_T.USERS_SYS_ID%]") <> Data("[%EMP_JOB_INFO_T.USERS_SYS_ID%]", -1) Then
Variables("CurrentSalary") = 0.0
Variables("CurrentBonusAmount") = 0.0
End If
'If current record doesn't update salary, then skip.
If Not (Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_4%]") Is Nothing _
And Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_5%]") Is Nothing) Then
'Work out the current currency conversion rate
Dim CurrencyConversionRate As Decimal
Dim DefaultCurrencyCode As String = Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_3%]")
'If conversion rate from the data is not missing
If Not Data("[%MDF_SPLIT_GENERIC_OBJECT_T_CurrencyExchangeRate.SF_FIELD1%]") Is Nothing Then
'Set conversion rate from the data
CurrencyConversionRate = Data("[%MDF_SPLIT_GENERIC_OBJECT_T_CurrencyExchangeRate.SF_FIELD1%]")
'If Currency conversion is unavailable
Else
'Set currency conversion rate as 1 (no conversion)
CurrencyConversionRate = 1.0
End If
'If current record contains salary data
If Not Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_4%]") Is Nothing _
And Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_4%]") <> 0 Then
'Set salary as Salary * Annualization Factor * Currency Conversion Rate
'If the conversion rate needs to be inverted
If Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_6%]") Then
'Invert currency conversion rate
Variables("CurrentSalary") = Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_4%]") * _
Data("[%FO_FREQUENCY_T.ANNUALIZATION_FACTOR%]") / CurrencyConversionRate
Else
'Don't invert currency conversion rate
Variables("CurrentSalary") = Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_4%]") * _
Data("[%FO_FREQUENCY_T.ANNUALIZATION_FACTOR%]") * CurrencyConversionRate
End If
End If
'If current record contains bonus data
If Not Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_5%]") Is Nothing _
And Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_5%]") <> 0 Then
'Set Bonus as bonus * currency conversion
'If the conversion rate needs to be inverted
If Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_6%]") Then
'Invert currency conversion rate
Variables("CurrentBonusAmount") = Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_5%]") / CurrencyConversionRate
Else
'Don't invert currency conversion rate
Variables("CurrentBonusAmount") = Data("[%EMP_PAYCOMP_RECURRING_T.#CALC_COL_5%]") * CurrencyConversionRate
End If
End If
End If
'Return forward-filled salary + bonus amount.
Return Variables("CurrentSalary") + Variables("CurrentBonusAmount")
Second, review a few key parts of the code above:
Conversion rate: Depending on whether the conversion rate table is joined to your salary table properly, and whether there is a valid conversion rate for your desired currency combination, you may not be able to pull back a valid rate. In that case, the Data Factory defaults the conversion rate to 1, which means the original amount from the Recurring table is reported without conversion. You can add columns such as conversion rate and currency to Drill to Detail to help investigate.
CurrentSalary variable: This variable calculates salary based on the data configured in: a) Base Salary, b) annualization factors, c) conversion rate, and d) InvertCurrencyConversionRate. This gives you the salary associated with each record. If the current record is a bonus record, the variable will store 0 because the record has no Base Salary.
CurrentBonusAmount variable: This variable calculates bonus based on the data configured in: a) Target Bonus Amount, b) conversion rate, and c) InvertCurrencyConversionRate.
There are a few common pitfalls:
- Adding too many pay component IDs to Base Salary or Target Bonus Amount. Keep in mind that later records overwrite earlier ones; if you include a pay component ID that is not annual salary, that number may overwrite the real annual salary figure.
- The Default Currency Code must match, letter for letter, the currency code in the exchange rate table. Best practice is to look at the exchange rate table first to understand the currency codes and rates in the table.
- If there are multiple bonuses that you need to track, each type of bonus needs its own column in the Emp PayComp Recurring T table and its own variable in the code.

Salary Range simply uses the value calculated in Annual Salary and converts it into discrete units. The ID produced from this formula is then used in the Salary Range dimension.
In the example, the highest value node that would appear in the dimension would be 175,000+ (with an ID of 175). A new dimension node ID would be generated for every 1,000 values beneath that. If the annual salary value is empty or less than 0, the ID returned would be ??.
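The bucketing described above can be sketched as follows (a sketch of the logic as described; the exact formula in your instance may differ):

```python
def salary_range_id(annual_salary):
    """Convert Annual Salary into a discrete Salary Range node ID:
    one node per 1,000 below the 175,000+ top node; '??' when the
    value is empty or negative."""
    if annual_salary is None or annual_salary < 0:
        return "??"
    if annual_salary >= 175_000:
        return 175
    return int(annual_salary // 1_000)

print(salary_range_id(64_250))   # 64 -> the 64,000-64,999 node
print(salary_range_id(200_000))  # 175 -> the 175,000+ node
print(salary_range_id(None))     # ??
```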
Configure Dimensions
Dimensions for analysis of the data need to be configured in WFA on HANA in accordance with the Hierarchy Options and Analysis Options tabs in the WFA on HANA – Data Specification.

To access the dimension editor, from the Dimensions tab in the top navigation bar, choose Add to open Add Dimension.
For a dimension, complete the following:
1. Standard Dimension: Identifies the standard dimension that this dimension will be linked to, or whether it is a custom dimension. Dimensions identified as a Standard Dimension will automatically include standard nodes for use in Derived Measures (where applicable), Benchmarking (where applicable), and standard labels for internationalization if languages other than English (US) are enabled for the instance.
2. Dimension Type: 'Analytical' for smaller one- or two-level dimensions, or 'Structural' for hierarchical dimensions. Only applies to Custom Dimensions, as Standard Dimensions will be assigned automatically.
3. Dimension Name: Label to be used for the dimension. Only applies to Custom Dimensions, as Standard Dimensions will be assigned automatically.
4. Drop Dimension Columns Here: Shows the column(s) the dimension will be sourced from. Drag columns here from the Available Tables and Columns box. The value of this column in the fact table will match the node ID in the dimension structure.
5. Dimension Structure: Configure ‘Generated’ or ‘Manually Maintained’.
6. Dimension Structure Source: Configure none, ‘SQL’ or ‘Table’.
Dimension Structure Settings

Generated dimension structures use SQL code or Selected Fields to define the nodes and hierarchy for the dimension. Dimensions configured as ‘Generated’ cannot be modified manually in Dimension Editor. The value(s) from the column(s) in the Drop Dimension Columns Here must match a node ID in the generated dimension or they will be placed in the unallocated node.
When creating a generated dimension, you have the following structure sources:
Table: No SQL code required. Select a source table, then select fields from that table in the drop-down list to populate the structure type section.
SQL: Populate the SQL code to generate the dimension nodes and hierarchy. Use this when the logic of the dimension is more complex than selecting fields from a single table.
Generated structures also have a Structure Type of Parent/Child or Flat, regardless of the choice of Dimension Structure Source:
Flat: The structure is built with one or more levels with each level ID and level Name sourced as columns from a SQL statement or table. The structure can be a single or multiple level depth, with each level’s nodes having an ID and Description at that level.
Parent/Child: The structure is built by aligning each node with a parent where applicable, recursively building the structure. The configuration must be able to determine the relationship to other nodes to build the structure. For example, an employee has a supervisor ID in the job information table. Each child node (employee) can be mapped to a parent (manager). Structures can have different levels of depth.

Manually Maintained dimensions can use SQL code or selected fields to define the nodes and hierarchy for the dimension, or the node structure can be built manually in the Dimension Editor. In either case, these dimensions can be modified in the Dimension Editor. The value(s) from the column(s) in Drop Dimension Columns Here can be mapped to the dimension's nodes in the Dimension Editor.
When creating a manually maintained dimension, you have the following Structure Sources:
Table: No SQL code required. Select a source table, then select fields from that table in the drop-down list to populate the structure type section.
SQL: Populate the SQL code to generate the dimension nodes and hierarchy. Use this when the logic of the dimension is more complex than selecting fields from a single table.
None: No configuration on the Edit Dimension screen. You manually create the dimension hierarchy, node ID and descriptions, and mappings in the Dimension Editor.
Picklist Tables to Source Labels

Dimensions that are constructed from a picklist can choose the relevant picklist table from the Choose Source Dropdown box. This list is currently very long and can be hard to navigate. The manual implementation guide has the name of the picklist required for each dimension to enter into the Choose Source Dropdown box.
Note
If a dimension override label or a label other than the standards supplied is used on a dimension configured as standard, translations for those labels will not be automatically supplied.
Note
For an example of utilizing a picklist in the SQL script of a dimension, review the Employment Type dimension in this section.
Configuring Direct Booking Nodes for Structural Dimensions

For structural dimensions with hierarchies that have data directly linked to parent nodes, you must configure a Node for Direct Booking if you want to use the Workforce Analytics connector to populate data in SAP Analytics Cloud (SAC). This is because SAC only allows data records linked to leaf nodes. This configuration ensures that a child node is available at leaf level to which every data record can map.
Note
Direct Booking Nodes can only be configured for specific Structural Dimensions, not Analytical Dimensions.
You customize the name of the direct booking node by adding the necessary suffix along with the node label formula %NODE_LABEL% (for example, %NODE_LABEL% - direct reports). After you customize and configure the direct booking nodes, they appear under the specific structural dimensions when you run a query.
You can individually configure the direct booking nodes for the structural dimensions that require them.
Direct Booking Node Example
Scenario:
- Dev Manager 1 has 527 people reporting directly or indirectly.
- Dev Manager 1 has 33 direct reports.
- Dev Engineer 1 has 1 direct report.
- Dev lead 1 has 491 direct reports.
- d 45 has 1 direct report.
- d 47 has 1 direct report.
Without Direct Booking Configured

In the example above, you can see that there are 527 people reporting to Dev Manager 1. The list comprises only the supervisors under Dev Manager 1 and doesn't include direct reports. Thus, when you take the sum of the EOP Headcount under Dev Engineer 1, Dev lead 1, d 44, d 45, and d 47, the total only comes to 494.
With Direct Booking Configured

After configuring Direct Booking, the configured direct booking node appears as a child node under specific structural dimensions. For example, in the image above, you can see the direct booking node with the custom label Dev Manager 1 - direct reports appears in the query result. The configured Direct Booking Node bridges the gap in the data by showing the number of direct reports to Dev Manager 1. Now when you take the sum of the EOP Headcount, it includes Dev Manager 1 – direct reports and the total comes to 527.
How to Configure Direct Booking
- Go to Workforce Analytics Admin → WFA on SAP HANA Data Factory.
- Under Configurations choose Dimensions.
- Choose to add or edit a structural dimension.
- Select the Add Node for Direct Bookings check box.
- Customize the name of the Direct Booking node by editing the Node label Formula.
- Choose OK.
- Run an Initial Load.
Note
If you’re enabling or disabling direct booking nodes on a structural dimension that has Tree Security configured, then you need to readjust the role-based permissions for all permission roles.
Example of Manually Maintained Dimension without SQL: Gender

1. Choose Add on the Dimensions tab and complete as follows:
Standard Dimension = Gender.
Dimension Column = Emp Personal Info T → Gender.
Dimension Structure = Manually Maintained.
Dimension Structure Source = None.
2. Choose OK to save.
Example of a Manually Maintained Dimension with Table: Employment Status

Choose Add on the Dimensions tab and complete as follows:
Standard Dimension = Employment Status.
Dimension Column = Emp Job Info T → Employment Status.
Dimension Structure = Manually Maintained.
Dimension Structure Source = Table.
- Select Choose Source Table and start entering Picklist - Emp Job Info T.Employment Status.
Set the columns as follows:
ID Column → External Code.
Description Column → Label.
Locale Column → Locale.
- Choose OK to save.
Example of a Manually Maintained Dimension with SQL: Employment Type

Choose Add on the Dimensions tab and complete as follows:
Standard Dimension = Employment Type.
Dimension Column = Emp Job Info T → Regular Temp and Emp Job Info T → Full Part Time.
Dimension Structure = Manually Maintained.
Dimension Structure Source = SQL.
- Choose Edit SQL to add the SQL script that will extract the Employment Type labels and enter the following script:
SELECT REGULAR_TEMP.EXTERNAL_CODE || '_' || A."FULL_PART_TIME" AS ID, REGULAR_TEMP.LABEL
FROM (SELECT DISTINCT CASE WHEN IFNULL(FTE, 0) = 0 THEN '??' --Unallocated
WHEN FTE = 1 THEN 'FT' --Full Timers
ELSE 'PT' END AS FULL_PART_TIME
FROM "[%ODS_DATABASE%]"."EMP_JOB_INFO_T"
WHERE NOT IFNULL(FTE, 0) = 0) A
LEFT OUTER JOIN [%PICKLIST(PICKLIST__EMP_JOB_INFO_T.REGULAR_TEMP)%] REGULAR_TEMP
ON REGULAR_TEMP.LOCALE = 'en_US'
- Choose Validate to confirm syntax and then OK to save.
- Before exiting the Add Dimension dialog, set the ID Column and the Description Column as ID and LABEL respectively.
- Choose OK to save.
Example of a Generated with SQL Parent/Child Dimension: Supervisor

Choose Add on the Dimensions tab and complete as follows:
Standard Dimension = Supervisor.
Dimension Column = Emp Job Info T → Manager ID.
Dimension Structure = Generated.
Dimension Structure Source = SQL.
- Choose Edit SQL to add the SQL script that will extract the Supervisor structure and labels. The script can be found in the manual implementation guide. The script returns 3 columns used in step 5.
- Choose Validate to confirm syntax and then OK to save.
- Before exiting the Add Dimension dialog, select Structure Type as Parent/Child and set the ID Column as USERS_SYS_ID, the Description Column as USERS_SYS_NAME and the Parent Column as MANAGER_ID.
- Choose OK to save.
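The production script lives in the manual implementation guide; as a hedged sketch only, a minimal query returning the three columns used in the structure settings above might take this shape. USERS_SYS_NAME as a label column on this table is an assumption.

```sql
-- Hedged sketch: one row per employee, with the supervisor as parent.
-- The real script in the implementation guide handles labels, locales,
-- and edge cases; USERS_SYS_NAME here is an assumed label column.
SELECT DISTINCT
       USERS_SYS_ID,    -- node ID (the employee)
       USERS_SYS_NAME,  -- node description
       MANAGER_ID       -- parent node ID (the supervisor)
FROM "[%ODS_DATABASE%]"."EMP_JOB_INFO_T"
```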
Example of a Generated with SQL Flat Dimension: Location

Choose Add on the Dimensions tab and complete as follows:
Standard Dimension = Location.
Dimension Column = Emp Job Info T → Job Location.
Dimension Structure = Generated.
Dimension Structure Source = SQL.
Choose Edit SQL to add the SQL script that will extract the Location structure and labels. The script can be found in the manual implementation guide. The script returns 8 columns used in step 5: an ID and a description for each level of the hierarchy. Loosely, the levels are Country, State/Province, City, and Location.
Choose Validate to confirm syntax and then OK to save.
Before exiting the Add Dimension dialog, select Structure Type as Flat.
Add Level 1, select the ID and Description columns for level 1.
Repeat for Levels 2-4.
Choose OK to save.
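The actual Location script is in the manual implementation guide; the following hedged sketch only illustrates the shape of an 8-column flat source (two columns per level). The column names COUNTRY, STATE_PROVINCE, CITY, and JOB_LOCATION are illustrative assumptions.

```sql
-- Hedged sketch: an ID and description pair per level; the leaf-level ID
-- (LEVEL4ID) must match the Job Location values in the fact table.
-- Column names are illustrative assumptions.
SELECT DISTINCT
       COUNTRY                                          AS LEVEL1ID,
       COUNTRY                                          AS LEVEL1NAME,
       COUNTRY || '_' || STATE_PROVINCE                 AS LEVEL2ID,
       STATE_PROVINCE                                   AS LEVEL2NAME,
       COUNTRY || '_' || STATE_PROVINCE || '_' || CITY  AS LEVEL3ID,
       CITY                                             AS LEVEL3NAME,
       JOB_LOCATION                                     AS LEVEL4ID,
       JOB_LOCATION                                     AS LEVEL4NAME
FROM "[%ODS_DATABASE%]"."EMP_JOB_INFO_T"
```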
Add Measures
For a WFA on HANA configuration, the next step is to create the measures.

From the Measures tab in the top navigation bar, choose Add to open Add Measure.
Configure the following:
Standard Measure: Choose the applicable measure from the drop down selection or "None" if a standard measure will not be used.
Measure Name: The label to apply to the measure. Only applies for measures not identified as standard.
Aggregation Type: How the measure will be aggregated/rolled up.
Data Type: The format for the measure.
Edit Code: Opens the canvas to enter the syntax that will determine how the measure is calculated.
Using the Edit Code Function

When adding or editing a measure, select Edit Code:
List of columns available for use in a Measure – these include base columns from the data source, calculations as configured in the implementation and calculations produced by the WFA on HANA engine.
List of functions available for use in the Measure syntax.
Measure syntax determining how the measure is calculated.
Selecting a column that is used in the measure syntax will display details of that column. For example, the screenshot shows details of the Employment Status column.
Examples of Measure Syntax
Measure syntax is used by the WFA on HANA engine to determine whether to include an object in a measure. The following syntax is the template for the EOP Headcount measure and determines whether to include an employee based on their Employment Status and whether the employee has already terminated:
If((in([%EMP_JOB_INFO_T.EMPLOYMENT_STATUS%], 'A', 'U', 'P', 'S')
Checks whether the employee has a status that is considered "active" (the definitions of "active" employment can be found in the specification for the instance) and includes only employees who have a status of A, U, P, or S. When configuring this measure, you may need to change this line to use the column(s) and values appropriate to the instance.
OR ISNULL([%EMP_JOB_INFO_T.EMPLOYMENT_STATUS%]))
Checks whether the employee has no Employment Status and, if so, includes them. When configuring this measure, you may need to change this line to use the column(s) and values appropriate to the instance.
, [%#CFT#.HEAD_COUNT%]
This line uses the headcount value if the above conditions are met.
,0)
Value if not met.
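Reading the annotated lines above as one statement, the complete EOP Headcount template is:

```
If((in([%EMP_JOB_INFO_T.EMPLOYMENT_STATUS%], 'A', 'U', 'P', 'S')
    OR ISNULL([%EMP_JOB_INFO_T.EMPLOYMENT_STATUS%]))
, [%#CFT#.HEAD_COUNT%]
, 0)
```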
Note
For a list of the SQL syntax and measure configuration of all the standard measures in WFA on HANA for Employee Central, go to the manual implementation guide.
Custom Measures
The WFA on HANA cube includes generic measures for each aggregation type so technical consultants can implement custom measures without requiring involvement from SuccessFactors. These generic measures are allocated under a generic category that the consultant cannot change. Additionally, they come with generic names, which can be adjusted using the measure override tool in the WFA administration panel. These generic measures can also be used to aid in testing.
There are five generic measures created in each rollup type:
- SUM: SumG1....SumG5
- AVERAGE: AvgG1....AvgG5
- EOP: EOPG1....EOPG5
- SOP: SOPG1....SOPG5
Follow the same process as standard measures to create the custom measure. Then use the measure override to assign an appropriate name.
If you need additional custom measures beyond the generic measures, need direct help from SAP SuccessFactors Technical Services team building the measure, or want to categorize the custom measures, you can purchase ‘SAP SuccessFactors Workforce Analytics Custom Measure’ from the SAP Store. There are two options:
- If you would like SAP to do the coding in the data factory as well as add the measure to the menu, each purchase will cover one custom measure.
- If you would like to do the coding in the data factory yourself and just need SAP to add it to the menu, each purchase will cover 4 measure definition placeholders.
Process Data and Hierarchies
Now that all the sourcing for the WFA on HANA has been configured, the next step is to build the database. After the cube has been built, then hierarchies need to be mapped. The high level steps for these processes are:
Confirm Company Settings.
Validate the Configuration.
Build the Fact Data and Cube.
Edit Hierarchies.
Confirm Company Settings

The Company Setting sets the date from which WFA on HANA will be processed. The default setting is the January between 3 and 4 years before the first processing, which means WFA on HANA reporting will show data from that date through to today. If a longer (or shorter) total reporting period is required, select Company Settings on the Global Settings menu and adjust accordingly. You can also set the Default Currency Code, which displays next to the measure results. Finally, you can enable displaying 2-digit years for the Fiscal Year time hierarchy.
Validate the Configuration

Validation will check the current configuration and attempt to highlight any issues so that they can be resolved prior to running the full build.
1. From Data Factory Home choose Validation in the Configuration bubble.
2. Validation will run through a set of rules and provide pass/fail statuses for each:

3. Use the Workforce Analytics on HANA Validation Guide as a reference and work through each error and resolve them.
4. Re-run the validator until there are no further issues.
Initial Process – Build Fact Data and Cube

Return to Data Factory Home and select Process Initial Load (Fact Data and Cube) in the Initial Process bubble. This will automatically direct you to the Load Status screen.
Load Status Screen Overview

Set Filters (Type, Date, Status) for which Jobs are to be listed in the Load History.
Choose whether the page is to be refreshed continuously, and the interval. Refreshing continuously is useful when a load is currently running, not refreshing is useful when reading a log.
Load History lists current and previous Jobs within the parameters set in the Filters.
Option to stop a currently running Job. This option only appears if there is a currently running Job.
Log entries for the selected Job.
Note
If your build fails, review the section on Troubleshooting & Debugging Build Issues in the course and/or the Validation Guide included in the WFA on HANA Implementation Kit.
Schedule Initial Load with the Data Factory
You can schedule an initial load at a preferred date and time range from the Regular Full Rebuild section in the WFA on HANA Data Factory page. You can select the frequency of the initial load. You can choose Custom from the Update Frequency drop down if you want to run it only for one day. You can also select the Start Date and Start Time Range. You can select from the following Frequencies:

Frequency Type | Description |
None | No initial load runs. |
Monthly | Initial load runs every month on the same date. |
Every 2 months | Initial load runs once every 2 months from the selected date. |
Every 3 months | Initial load runs once every 3 months from the selected date. |
Weekly | Initial load runs after every 7 days. |
Every 2 weeks | Initial load runs after every 14 days. |
Custom | Initial load runs only on the selected date. |
The feature provides you with the flexibility to schedule the initial load for a selected time frame.
Edit Hierarchies
The following section covers the Hierarchy Editor function. SuccessFactors provides a customer-facing dimension editor in the Admin Center called WFA Dimension Editor. You can learn more about the tool in Using Workforce Analytics Dimension Editor in the SAP Help Portal or course HR886: SAP SuccessFactors Workforce Analytics Administration.
Both the Hierarchy Editor and the WFA Dimension Editor perform the same function; however, the WFA Dimension Editor uses a new UI.

Select Edit Hierarchies to configure/manage any manual dimensions.

Edit Hierarchies allows you to manually control the labels and groupings for the Dimension Nodes in Dimensions that are Manually Maintained:
A. List of Dimensions applicable for the instance. Note that Generated Dimensions are grayed out and cannot be modified.
B. Canvas for re-labelling selected nodes.
C. Toolbar.
D. Nodes that have been copied or cut will appear on the clipboard for re-use.
Standard Nodes

Standard nodes are default nodes automatically generated within dimensions and are identifiable by the "#" in the node ID. Standard nodes are used to identify dimension nodes required for derived measures, for example, EOP Headcount – Male. You can manage which employees are included in these measures by ensuring the right mapping of nodes into these default standard nodes. For example, to ensure all temporary employees are included in the EOP Headcount – Temporary measure, map any Employment Type codes for temporary employees into the #TEMP node in the Employment Type dimension.

Drag and drop – mapping to a parent: When dragging nodes to map them, drag the node on top of the parent node and the parent node will highlight. The above screenshot shows the F node becoming a child of Female.

Drag and drop – rearranging nodes: Nodes can also be dragged to change their order under the same parent. In the above screenshot the M node will be ordered underneath Male.
Code Mapping: Internal Code vs. External Code

A very common question is: Why do all my headcounts fall under the Unmapped category rather than the categories I created in the SQL statement?
Appearing in Unmapped simply means the leaf node IDs generated via SQL don't match the values returned in the data. A common mistake is to return the external code in the SQL statement while the column dragged into the top right window holds the internal code, or vice versa.
In order to link an employee attribute (whether it is a multi-field concatenated ID or a single-field ID), the column(s) you dragged into the window must exactly match the IDs you return in SQL. When in doubt, add the columns in Drill to Detail so you can see the values returned from the fact table.
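One way to check which code a picklist carries is to query the picklist table directly. The following is a hedged sketch: the picklist macro pattern follows the Employment Type example earlier in this unit, but OPTION_ID as the internal code column name is an assumption.

```sql
-- Hedged sketch: compare internal and external codes in a picklist.
-- If the fact column holds the internal code, the dimension SQL must
-- return the internal code; if it holds the external code, return
-- EXTERNAL_CODE. OPTION_ID is an assumed name for the internal code.
SELECT OPTION_ID,     -- internal code (assumed column name)
       EXTERNAL_CODE, -- external code
       LABEL
FROM [%PICKLIST(PICKLIST__EMP_JOB_INFO_T.EMPLOYMENT_STATUS)%]
WHERE LOCALE = 'en_US'
```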
Benchmarked Hierarchy Example: Gender
Dimensions that are Benchmarked will have predefined groupings in which to map the codes applicable for that instance. These predefined groupings are outlined in the WFA on HANA Data Specification. New and unmapped codes found in the WFA on HANA processing will be allocated to the Unmapped grouping. Drag and drop the M and F codes to map them appropriately as children of Male or Female, and delete the Unmapped folder (optional).

WFA on HANA Data Specification
The WFA on HANA Data Specification shows the Benchmark groupings for EEO Job Categories as follows:
Map the Unmapped codes into their appropriate groups per the WFA on HANA Data Specification:
Standard Hierarchies that Require Mapping
Other Standard Hierarchies that Require Mapping
Disability | Grade/Band |
Employment Level | Impact of Loss |
Employment Status | Managerial |
Employment Type | Minority |
Employment Type 2 | Recruitment Source |
Ethnic Background | Risk of Loss |
Future Leader | Separation Reason |
Gender | Veteran Status |
Map remaining dimensions as outlined in the WFA on HANA Data Specification. This list only includes Manually Maintained dimensions that will require some manual re-mapping.
Troubleshooting and Debugging Build Issues

Some issues surface during the Build Fact Data and Cube processing and cause the processing to fail. These issues can often be resolved by using the error message in the Load Status log as a guide.
Debugging Process
What happens when your initial load or incremental load fails? Unfortunately, there isn't a single answer. You will need to check the error message in the Load Status log in the Data Factory. Use the following steps:
- Search "error" or "invalid" in the load status log. This will likely bring you to the first error it had before failing.
- Try to understand the error. If it is a duplicate ID in dimension error, you should be debugging the SQL statement and ensure the uniqueness of each ID. If it is a null value conversion error, check your calculated column and see if you need to check if null before casting it to a different type. If it is an invalid table error, check if that table exists and potentially create a ticket to bring back missing tables that used to be available.
- Check the SAP SuccessFactors Partner Delivery Community (PDC). Search the keywords of your error and see if you can find similar cases from the past.
- If an answer cannot be found from historical posts, create a new post in the SAP SuccessFactors Partner Delivery Community (PDC).
Keep in mind that the configuration that works in one instance may not work in the other due to data differences. This is very common between a customer’s test instance and production instance.
How to Debug a Calculated Measure
Calculated measures are based on input measures within a formula. If a calculated measure doesn't work in the application, check whether all the input measures used in the formula of the calculated measure are working. If any input measures are not working, resolve those issues first.
All input measures must work before the calculated measure can work.
Build Error Case Studies
Take a look at two real-world situations, presented in a case-study format.
Build Error Study Case 1: Duplicate ID Error
From the log in the Data Factory, you find an error message referring to a duplicate ID. This means some nodes have the same ID, which is not allowed when creating a dimension structure. To build a structure, each node needs a path that is unique compared to the others. You can achieve this uniqueness by concatenating IDs, casting null values, and other methods.
The following is a sample error log from data factory load status:
Error Info:
InfoHRM.Girru.Components.InputProcessingException: Uncaught exception during input processing; Component=component "Parent Child Convertor Workforce __ Organizational Unit" (28) ---> System.Exception: Duplicate id generated as 3631545_3639040_UNK_DIV. Here is the parent and child path for duplication- {LEVEL_1_ID='3631545' LEVEL_1_NAME='ROMPETROL DOWNSTREAM' LEVEL_2_ID='3631545_3639040' LEVEL_2_NAME='Supply Chain' LEVEL_3_ID='3631545_3639040_UNK_DIV' LEVEL_3_NAME='Unknown Division' LEVEL_4_ID='3631545_3639040_6710545_UNK_SUBDIV' LEVEL_4_NAME='Unknown Subdivision' LEVEL_5_ID='3631545_3639040_6710545_6710548_UNK_DEP' LEVEL_5_NAME='Unknown Department' } ,{ LEVEL_1_ID='3631545' LEVEL_1_NAME='ROMPETROL DOWNSTREAM' LEVEL_2_ID='3631545_3639040' LEVEL_2_NAME='Supply Chain' LEVEL_3_ID='3631545_3639040_UNK_DIV' LEVEL_3_NAME='Unknown Division' LEVEL_4_ID='' LEVEL_4_NAME='Unknown Subdivision' LEVEL_5_ID='' LEVEL_5_NAME='Unknown Department' }}
The following is part of a sample query that handles null values across the different levels. Each field used in a concatenation should be checked for null; this ensures a unique path for each node in your dimension.
SELECT DISTINCT IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') AS LEVEL1ID
, COALESCE(DEPARTMENT_DESC."LABEL" || ' (' || DEPARTMENT.EXTERNAL_CODE || ')', "EMP_JOB_INFO_T"."DEPARTMENT", 'Unknown') AS LEVEL1NAME
, IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') || '_' || IFNULL("EMP_JOB_INFO_T"."DIVISION", '??') AS LEVEL2ID
, COALESCE(DIVISION_DESC."LABEL" || ' (' || DIVISION.EXTERNAL_CODE || ')', CAST("EMP_JOB_INFO_T"."DIVISION" AS NVARCHAR), 'Unknown') AS LEVEL2NAME
, IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') || '_' || IFNULL("EMP_JOB_INFO_T"."DIVISION", '??') || '_' || IFNULL("EMP_JOB_INFO_T"."CUSTOM_VCHAR17", '??') AS LEVEL3ID
, COALESCE(GROUPS.EXTERNAL_NAME || ' (' || GROUPS.EXTERNAL_CODE || ')', "EMP_JOB_INFO_T"."CUSTOM_VCHAR17", 'Unknown') AS LEVEL3NAME
Build Error Study Case 2: Unmapped Node Error

After building a dimension based on a SQL statement, you notice that all of the headcount falls into the Unmapped category rather than spreading across the levels you just built.
The reason is that the concatenation of the 5 fields (Department, Division, Vchar17, Vchar18, Cost Center) you dragged into the top right window doesn't match the bottom level (Level 5 ID) in your SQL query. When the Data Factory determines which category to assign the employee to in the dimension, the ID has to match exactly between the employee's attributes (in the 'Drop Dimension Columns Here' area) and the bottom level (leaf) ID allocated to your structure (built from your SQL statement).
How to Debug
If you find yourself in this situation, you can use "LevelxID" in both the ID column and the Name column. This shows the ID behind each level so you can see what is not matching between your SQL leaf-level ID and your employee attribute ID.
You should also consider simplifying your query when initially building the dimension. Start with 2 levels only, so you get a better idea of how it works. Then build up to the 5 levels of the entire desired structure. Just keep in mind that for each new level, you need to adjust both your SQL query and the 'Drop Dimension Columns Here' attribute window; they must match each other.
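Following that advice, the case-study query above could first be trimmed to two levels, keeping the Level 2 ID as the leaf that the dragged-in columns must match. The FROM clause follows the pattern used in the Employment Type example.

```sql
-- Simplified 2-level version of the case-study query, for verifying
-- that the leaf IDs match the employee attribute columns.
SELECT DISTINCT
       IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') AS LEVEL1ID,
       IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') AS LEVEL1NAME,
       IFNULL("EMP_JOB_INFO_T"."DEPARTMENT", '??') || '_' ||
       IFNULL("EMP_JOB_INFO_T"."DIVISION", '??')   AS LEVEL2ID,
       IFNULL("EMP_JOB_INFO_T"."DIVISION", '??')   AS LEVEL2NAME
FROM "[%ODS_DATABASE%]"."EMP_JOB_INFO_T"
```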
Measure and Dimension Configuration

The WFA on HANA implementation so far has configured the Dimensions and Base Input Measures as outlined in the specification. These Dimensions and Base Input Measures are now used to generate a much wider set of metrics, including Derived Measures, Restricted Measures, and Result Measures.
Configuring Measure and Dimension Combinations

From Data Factory Home, select Measure/Dimension Arrangement in the Configuration bubble.
Configure the measures and dimensions using the following overview of the tool:
A. Cube Measures: List of the configured Base Input Measures.
B. Show Keys: Toggle the display of the Measure ID or the Dimension ID.
C. Selectable Measures: List of the available Base Input, Derived and Result Measures.
D. Show All Measures: Toggle between displaying all available Measures or only the selected Measures.
E. Turn All On/Turn All Off: Click to select/deselect all displayed Measures/Dimensions.
F. Show All Dimensions: Toggle between displaying all available Dimensions or only the selected Dimensions.
G. Bulk Turn On/Off Dimension: After selecting a dimension, this will allow you to turn on or off multiple measures for that dimension.
H. Mirror Dimensions: After choosing a source dimension, this will allow you to replicate the same measures selected for this dimension onto a target dimension.
WFA on HANA will attempt to remove any measure and dimension combinations that are not appropriate. For example, the Organizational Tenure dimension will not be enabled for derived measures that already use this dimension, like Average Headcount – 1-<2 Years Tenure. However, certain measure and dimension combinations will need to be adjusted manually. These typically include:
Headcount measures with Recruitment Source or Separation Reason dimensions.
Recruitment measures with Separation Reason dimensions.
Termination measures with Recruitment Source dimensions.