Setting Up the Data Mapping Definition and Running Data Mapping - CSV File

Objective

After completing this lesson, you will be able to configure a data mapping definition and run data mapping for a CSV file.

Data Mapping Definition and Run - CSV File

Robert, our new consolidation consultant, wants to:

  • Understand the process of creating and using mapping definitions in the Define Data Mapping app, using a CSV file as the source type.
  • Familiarize himself with the Run Data Mapping app to create mapping jobs and manage the import of data into ACDOCU.
  • Explore the Traceability Log.

With the Define Data Mapping app, you can define mappings between source dimensions from any external system and target dimensions available in the SAP S/4HANA system, for example, using a CSV file as the source type. The target is the ACDOCU table of SAP S/4HANA Finance for group reporting.

The mappings are first created in Microsoft Excel files and then uploaded to the Define Data Mapping app. You use these mappings to transform data and prepare it for import into the ACDOCU table. The import is executed from the Run Data Mapping app.

Note

If you want to add a description in another language, you must display SAP group reporting Data Collection in that language. Modify the language and region setting under User Settings, then enter the description and save the definition. If you have not yet added any translations, the description defaults to English when you display SAP group reporting Data Collection in another language.

If the mapping file contains errors, they display in the Errors section.

You can download your mapping file and correct it in Excel: choose the download mapping file link. After you correct the errors, upload the mapping file again.

Text Identifier
The text identifier lets you select the character that encloses text values in your file. If the text identifier specified in your mapping definition doesn't match the one used in the CSV source file, the data row is rejected.
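The behavior is comparable to the quote character in standard CSV parsing. As a minimal illustration (not SAP's actual implementation), the following Python sketch shows how a mismatched text identifier changes what a parser reads:

```python
import csv
import io

# A CSV row whose text values are enclosed in double quotes (").
row = '"Cash, equivalents";"1000";"US"\n'

# Parsing with the matching text identifier keeps the comma inside
# the first field intact.
ok = next(csv.reader(io.StringIO(row), delimiter=';', quotechar='"'))
print(ok)   # ['Cash, equivalents', '1000', 'US']

# With a non-matching identifier ('), the quotes end up inside the
# data; in the Define Data Mapping app, such a mismatch causes the
# data row to be rejected.
bad = next(csv.reader(io.StringIO(row), delimiter=';', quotechar="'"))
print(bad)  # ['"Cash, equivalents"', '"1000"', '"US"']
```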
Rows to Skip

You can specify the number of rows to skip before the import process starts. For example, if you enter 5, the process skips the first five rows and starts at row 6.

Column Names at Row

Using Column names at row, you can select the row number that indicates where the column names are located.
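A hedged sketch of how these two settings could interact when reading a CSV file; the file name and the merge into records are assumptions for illustration:

```python
import csv

ROWS_TO_SKIP = 5          # entering 5 means the import starts at row 6
COLUMN_NAMES_AT_ROW = 6   # here, the column names sit on that first row

with open('source.csv', newline='') as f:  # hypothetical file name
    rows = list(csv.reader(f, delimiter=';'))

assert COLUMN_NAMES_AT_ROW > ROWS_TO_SKIP  # the header must not be skipped
header = rows[COLUMN_NAMES_AT_ROW - 1]     # row numbers are 1-based
data = rows[COLUMN_NAMES_AT_ROW:]          # data rows follow the header row
records = [dict(zip(header, r)) for r in data]
```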

Number Formatting
Here, you apply the decimal and thousands separators that correspond to the ones used in your CSV source file.
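Normalizing such amounts is a simple two-step substitution. A minimal sketch, assuming a source file that uses a comma as decimal separator and a period as thousands separator:

```python
def parse_amount(raw: str, decimal_sep: str = ',', thousands_sep: str = '.') -> float:
    """Convert a formatted amount such as '1.234,56' to a number,
    using the separators declared in the mapping definition."""
    return float(raw.replace(thousands_sep, '').replace(decimal_sep, '.'))

print(parse_amount('1.234,56'))  # 1234.56
```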
Access Restrictions

When creating your data mapping definition, you can optionally add access restrictions to it. You restrict access by assigning one or several specific dimension values to the definition. Once the restriction is set, the application compares these dimension values with the user's global parameters. If the user's global parameters are included in the access restrictions of the definition, the definition is visible to the user.
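Conceptually, this is an inclusion check between the user's global parameters and the restricted dimension values. A rough sketch of that logic; the function and data shapes are illustrative, not the app's implementation:

```python
def definition_visible(restrictions: dict[str, set[str]],
                       user_globals: dict[str, str]) -> bool:
    """The definition is visible only when, for every restricted
    dimension, the user's global parameter value is included."""
    return all(user_globals.get(dim) in values
               for dim, values in restrictions.items())

restrictions = {'ConsolidationUnit': {'IT01', 'PT01'}}
print(definition_visible(restrictions, {'ConsolidationUnit': 'IT01'}))  # True
print(definition_visible(restrictions, {'ConsolidationUnit': 'ES01'}))  # False
```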

Only users with the SAP_GRDC_DDM_EDIT_RESTRICTIONS role are allowed to set access restrictions. The restrictions defined never apply to these users, whatever their global parameters are.

For more information on roles and user rights, refer to the SAP Help portal at: https://help.sap.com/docs/SAP_Group_Reporting_Data_Collection/e29950571a2b4065b1dacaf881c67413/c3401bd8e9354359a777b264d0b16309.html?version=1.0 .

By default, the following dimensions are automatically loaded, based on the dimensions the user selected in the Global parameters tile:

  • ConsolidationLedger
  • ConsolidationVersion
  • ConsolidationChartOfAccounts
  • FiscalYear
  • FiscalPeriod

You do not need to add these dimensions as target dimensions in your mapping file. The exception is the ConsolidationUnit dimension, which you must enter as a target dimension.

However, you can use FiscalYear, FiscalPeriod, and ConsolidationVersion as Output columns. In that case, the global parameters are ignored.
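The interplay between the global parameters and the output columns can be pictured as a defaulting step. A hypothetical sketch; the merge logic is an assumption for illustration, and the field names follow the dimension list above:

```python
def build_target_row(mapped_output: dict, global_params: dict) -> dict:
    """Global parameters supply the default dimension values; if
    FiscalYear, FiscalPeriod, or ConsolidationVersion are defined as
    output columns, those values win and the globals are ignored."""
    row = dict(global_params)   # e.g. ConsolidationLedger, FiscalYear, ...
    row.update(mapped_output)   # output columns override the defaults
    return row

globals_ = {'FiscalYear': '2024', 'FiscalPeriod': '006'}
mapped = {'ConsolidationUnit': 'IT01', 'FiscalYear': '2023'}
print(build_target_row(mapped, globals_))
# {'FiscalYear': '2023', 'FiscalPeriod': '006', 'ConsolidationUnit': 'IT01'}
```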

Note

Once data from a source type "File" is processed by the mapping, its document type is by default set to DT = 0C.

Parent Definition

You can create one-level hierarchies consisting of parent and child definitions. The child definition inherits the settings of the parent, including the mapping file. By default, the inherited settings are linked to the parent definition, but you can choose to unlink individual settings and define new ones.

Once you have selected the parent definition, you can see that the child definition inherits its settings. Switch buttons are added to each setting, and you can choose to leave them linked to the parent definition or to unlink them and define new settings. You can also choose to leave settings to be fixed at runtime.

In the Access Restrictions section, you can use the Linked switch button if you want to define access restrictions different from those of the parent definition.

Parent definitions are indicated by a chevron in front of them. When you click on the chevron, it expands the structure and displays the hierarchy. The parent definition is located on the first level and the child definitions are underneath it. Child definitions cannot be selected as parent definitions.

Example

50 consolidation units each have a single local accountant. Each local accountant should have access only to their own consolidation unit and should only be able to upload data to the consolidation unit they are in charge of.

A single common standard data mapping file is provided by headquarters to be used by all 50 consolidation units. The maintenance of this file is centralized to ensure that any changes made are applied by all consolidation units.

To meet these requirements, one parent data mapping definition and 50 child data mapping definitions can be created. The child definitions inherit all settings from the parent definition, including the central mapping file.

Access restrictions are set up to ensure that local accountants only see their own definitions and jobs. That way, only one central data mapping file needs to be maintained. When headquarters updates the file, only a synchronization between the parent and child definitions is required for the updated mapping file to be deployed and used group-wide.

With the Run Data Mapping app, you can create mapping jobs that select a mapping definition and its source data to import your data into the ACDOCU table of SAP S/4HANA Finance for group reporting.

Depending on the source type of your data mapping definition, the mapped data takes different document types when uploaded to ACDOCU.

If you used the file source scenario, your data is written into document type 0C.

In the Definition Name field, select the data mapping definition you want to run. You can only select valid mapping definitions.

Once you select the mapping definition, the Source Instance field displays. If the source type of the mapping definition is a CSV file, the Source Instance for CSV field enables you to select a remote web location. This step is optional: you can also select a source instance later, when running your job.

For documentation on configuring destinations for remotely stored CSV files, refer to the SAP Help portal at https://help.sap.com/docs/SAP_Group_Reporting_Data_Collection/e29950571a2b4065b1dacaf881c67413/9e69b6c7b0ca4f488caccc083957f25f.html?version=1.0.

If you want to schedule your mapping job, choose Enable Scheduling. You can then schedule a mapping job to run automatically at a scheduled time, with no manual input.

Note

If a scheduled job runs automatically and fails three times in a row, it is deactivated.

Note

You cannot create a job based on a data mapping definition whose access restrictions exclude you. When you create a job, you can only select data mapping definitions that you have access to.

When you select Enable Scheduling, you can also choose the Advanced Settings button to activate the following options:

  • Allow this job to be run on demand: enables the job to be run manually even if it is scheduled.
  • Don't run this job again if it's already running: skips the next scheduled run while the job has not finished running.

Note

If you do not select Allow this job to be run on demand, you cannot manually run a scheduled job. If you need to, you must open the job and deactivate the scheduling before running it.

Optionally, you can modify the values of the global parameters when you select Enable Scheduling.

When you select Save, you add the job to the Data Mapping Jobs list.

Creating a data mapping job offers different import options.

For example, when uploading data, you can select Update Mode Overwrite. Using this update mode, only the uploaded records are written. Any existing records that share the same item and all of the same sub-assignments are overwritten.
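In key-value terms, overwrite behaves like an upsert on the record key. A minimal sketch, assuming the key is the item plus all of its sub-assignments; the keys and amounts are invented:

```python
def apply_overwrite(existing: dict, uploaded: dict) -> dict:
    """Sketch of Update Mode Overwrite: only the uploaded records are
    written; existing records with the same item and all the same
    sub-assignments are overwritten, all others stay untouched."""
    result = dict(existing)
    result.update(uploaded)
    return result

existing = {('FS1000', 'CC10'): 500.0, ('FS2000', 'CC10'): 200.0}
uploaded = {('FS1000', 'CC10'): 650.0}
print(apply_overwrite(existing, uploaded))
# {('FS1000', 'CC10'): 650.0, ('FS2000', 'CC10'): 200.0}
```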

Note

The Read Document Types parameter enables you to select the document types that you want to consider when calculating the delta between the data that already exists in the ACDOCU table and the data imported by data mapping.

The Write Document Type parameter enables you to select the document type on which you want to import data. This parameter is optional. If you do not specify a document type, it applies the default write document type.

When uploading data, you can also select Delete All FS items and Subassignments. The deletion replaces any existing data imported using the same combination of version, period, consolidation unit, and document type. However, the system does not physically delete any data. Instead, it calculates a delta between the new data and any previously imported data, then creates new records for the delta. All existing records in a block (consisting of ledger, consolidation chart of accounts, version, fiscal year, fiscal period, and consolidation unit) are reset to zero before the new data is loaded.

The data is therefore replaced by posting a delta instead of physically deleting existing postings. You manage the Delete All FS items and Subassignments option at totals level, not at document level.
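The reset-to-zero and delta behavior can be sketched as follows. This is a minimal illustration with invented record keys and amounts; in reality, the block key also includes ledger, chart of accounts, version, year, and period:

```python
def delta_records(existing: dict, incoming: dict) -> dict:
    """Every record in the block is notionally reset to zero, so the
    posted delta is (new amount - existing amount); keys absent from
    the new data receive a reversing delta instead of being deleted."""
    deltas = {}
    for key in existing.keys() | incoming.keys():
        delta = incoming.get(key, 0.0) - existing.get(key, 0.0)
        if delta:
            deltas[key] = delta
    return deltas

existing = {('FS1000', 'CC10'): 500.0, ('FS2000', 'CC10'): 200.0}
incoming = {('FS1000', 'CC10'): 650.0}
print(delta_records(existing, incoming))
# e.g. {('FS1000', 'CC10'): 150.0, ('FS2000', 'CC10'): -200.0} (order may vary)
```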

When you keep the default Periodic option as the period mode, the source data is interpreted as periodic data; that is, it is written to the specified period with no additional calculation.

If you select Year-to-Date as period mode, the source data is interpreted as cumulative (year-to-date) data. The system subtracts the data of the previous periods of the fiscal year from the cumulative data and then writes the delta amount to the specified period.
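As a worked example of the Year-to-Date calculation (the numbers are invented):

```python
def periodic_amount(ytd_amount: float, prior_periods_total: float) -> float:
    """In Year-to-Date mode the source amount is cumulative; the delta
    written to the specified period is the cumulative value minus what
    was already posted in the previous periods of the fiscal year."""
    return ytd_amount - prior_periods_total

# A unit reports 900 YTD through period 003; 600 was already posted
# in periods 001-002, so 300 is written to period 003.
print(periodic_amount(900.0, 600.0))  # 300.0
```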

When all its settings are defined and saved, you can run your mapping job.

In the Select Source File and Run Options dialog box, specify the following details:

  • The network folder of your source file
  • The source instance of your mapping definition (if required)
  • Whether to run with a traceability log (if required)

The Test Run option enables you to check for errors in the import and traceability logs without sending data to the ACDOCU table. Once you correct the errors, you can run a complete job again to send the data.

Note

The Test Run option is only available when the ACDOCU table is defined as the target of your mapping.

In the Runs tab of a job, you can select a run and choose the Cancel button to abort the process. You can only cancel job runs with the status Running or Pending, and you must refresh the page to update the status.

After running your job, you can open it and consult the status of the last run from the Runs tab.

The Mapping Logs section provides you with information on the mapping definition run, the source data used, the general parameters applied, mapping events, and any rules that could not be applied.

The Import Logs section provides you with the import results, including warnings and errors, and can be exported to a spreadsheet.

The header of your Run page provides important information about the status of your last update run.

Run Status displays the status of the data mapping process, that is, reading and selecting the source data, transforming it, and sending the generated data to SAP S/4HANA. It does not display the status of the import: it can show a success status even if the import fails.

Import Status indicates the status reported by the SAP S/4HANA system regarding the import. It displays errors if lines are rejected. The import status might not be known at the same time as the run status; it updates once the import finishes. The import can take a while, so you might need to wait for the import status and the import log to display. In the meantime, the status displayed is Request sent.

Note

When the Aggregated Rows for Upload column displays 0 or is empty, the import status displayed is Not run.

If you select Run with Traceability Log in the Select Source Files and Run Options dialog box, the Download Traceability Log… button is available from the list of data mapping jobs or from within the job. Choose this button to analyze the mapping log and import log entries more deeply.

With traceability logs, you can trace generated amounts back to the source data and the mapping rules that generate them. You can also analyze errors that occur during the mapping process and understand why the SAP S/4HANA back-end rejects the generated data.

The traceability log is generated as a Microsoft Excel file.

The amount of generated data that the Excel file of the traceability log can display is limited to 30 million characters. Data above this limit is excluded from the file.

Note

The traceability log is generated on demand. It is stored only for the last run and for one month.

Note

You cannot use traceability logs for scheduled runs.

The Excel traceability log file is organized into three parts:

  • The source data part, where every other row is highlighted in blue
  • The rule path part, where every other row is highlighted in green
  • The output data part, where every other row is highlighted in pink

Horizontal lines are added inside the file to help you distinguish between the different rows before aggregation.

Note

The output columns displayed in the traceability log correspond to the columns displayed in the Import Logs tab. You can filter these columns with the Settings button.

As a result, if you select the Uploaded Amount in Local Currency column from the import log, then the Uploaded Amount in Local Currency - Total and Uploaded Amount in Local Currency - Detail columns display in the traceability log.

The first part contains the source data, that is, the input row ID and the different source field columns from the CSV source file or the ACDOCA table.

Note

If the source data is a CSV file, the input row ID number corresponds to the row number of the CSV file.

The second part contains the location of the rules that generated the data or that led to inconsistencies. Each column corresponds to one mapping; the traceability log displays as many columns as there are mappings defined in the mapping file of the data mapping definition.

Note

The header text of the rule path columns, such as Mapping (FY), refers to the mapping tab name of the Excel mapping file. For example, R4 refers to the fourth line in that mapping tab.

For generated rows, the Rule Path section indicates the row number of the rule in the Excel mapping file.

For rejected rows, the Rule Path section indicates the column name where the mapping process stops.

The Output Data part contains the data sets uploaded into ACDOCU and includes the following elements:

  • The output field columns of the ACDOCU table
  • The (upload) row ID
  • The amounts
  • The mapping message
  • The import message
  • The Row Number columns

The two message columns contain the import error or warning and the reason for the rejection. These come from the mapping, the import, or the SAP S/4HANA system.

Define Data Mapping and Run Data Mapping for Consolidation Unit IT##

Define Data Mapping and Run Data Mapping for Consolidation Unit PT##

Define Data Mapping and Run Data Mapping for Consolidation Unit ES##
