Event-Based Integration

Objective

After completing this lesson, you will be able to explain how data is programmatically exchanged with BRH using asynchronous communication.

Exchange Data with BRH

Event-based integration from Kafka

As shown in the previous lessons, SAP Batch Release Hub for Life Sciences (hereafter abbreviated as SAP BRH) supports two communication approaches through its API: synchronous and asynchronous.

In Lesson 3, we demonstrated in detail how to use the synchronous API with the Postman tool. In this lesson, we pursue the same objective, but with an event-based source.

You should be comfortable with the content of Lesson 3 before proceeding with this lesson.

Event-driven architecture is a cornerstone of cloud computing: it is the key to avoiding monolithic systems of applications that rely on immediate responses from other applications.

Apache Kafka

Apache Kafka is a system for managing logs of EVENTS. These logs are called TOPICS.

A TOPIC is an ordered collection of EVENTS that is stored in a durable way: topics are replicated, that is, stored on more than one disk on more than one server, so that no single hardware failure can make the data disappear.

Each of those EVENTS represents something happening in the business: a user updates her shipping address, a train unloads its cargo, or a thermostat reports that the temperature has changed.

Kafka Connect is the component that moves data into and out of Kafka, and it is also an ecosystem of ready-made connectors. In addition to Kafka Connect for data integration, Kafka ships out of the box with an API called Kafka Streams for processing events.
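To make these concepts concrete, hereafter is a minimal sketch using the standard Kafka command-line tools. It assumes a local broker listening on localhost:9092 and a topic named "test" (the same topic used later in this tutorial); it is not specific to SAP BRH.

Code Snippet

# Create a durable topic (replication factor 1 is enough for a single local test broker).
kafka-topics --create --topic test --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Produce events to the topic: every line typed (or piped) becomes one event.
kafka-console-producer --topic test --bootstrap-server localhost:9092

# Read the events back from the beginning of the log.
kafka-console-consumer --topic test --bootstrap-server localhost:9092 --from-beginning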

Note

For more information: What is Apache Kafka®? - YouTube

This tutorial is built on the scenario of a Manufacturing Execution System that is only partially integrated with SAP S/4HANA and that, for some reason, must deliver additional information to SAP BRH outside the officially supported SAP S/4HANA integration approach.

As documented, for SAP S/4HANA integration SAP BRH provides a technical component called "Integration Hub Service", which exposes a Data Replication API for an object model fitted to the SAP S/4HANA data model.

A customer of SAP Batch Release Hub for Life Sciences therefore has three approaches for integrating their Manufacturing Execution Systems:

  1. Native MES to SAP S/4HANA integration

    This is the recommended approach for integrating Manufacturing Execution Systems with SAP BRH. It relies on the mechanisms provided by the product to replicate the data into BRH. Technically, it makes use of the Data Replication API, but this is transparent to the end users.

  2. Using the SAP Batch Release Hub for Life Sciences API

    Overview | SAP Batch Release Hub for Life Sciences | SAP Business Accelerator Hub

  3. Using the Data Replication Service for BRH

    While developed and used only for replication from SAP S/4HANA to SAP Batch Release Hub for Life Sciences, this set of APIs has a slightly different payload compared to the native BRH one, since it is tailored to the data model of SAP S/4HANA.

    Overview | Data Replication for Batch Release Hub - Inbound | SAP Business Accelerator Hub

Batch Review

BatchReviewsService             | ReplicationService
sourceIdentifier                | sourceIdentifier
sourceModifiedBy                | sourceModifiedBy
sourceModifiedAt                | sourceModifiedAt
batch_ID                        | batch.id
batch_material_ID               | product.id
batch_plant_ID                  | plant.id
batchReviewCheckStatus_code     | checkStatus.code
manufacturingDate               | manufacturingDate
manufacturingQuantity           | manufacturingQuantity
manufacturingUnitOfMeasure_ID   | manufacturingUnitOfMeasure
expiryDate                      | expiryDate
deviationCheckStatus            | deviationCheckStatus
createdAt (internal)            | createdBy (internal)
modifiedBy (internal)           | modifiedAt (internal)
status_ID (data record status)  | correlationIdentifier (internal)
comments                        | comments
                                | parent

In the left column we can see the schema view of the Batch Review object as provided by the SAP Batch Release Hub for Life Sciences API (Option 2): Schema View | Batch Record Reviews | SAP Business Accelerator Hub

In the right column we can see the schema view of the Batch Review object as provided by the Replication API (Option 3): Schema View | Data Replication for Batch Release Hub - Inbound | SAP Business Accelerator Hub

  1. Overall, the two schemas are quite similar and there is no major difference in the data.
  2. The BatchReviewsService is "flatter" than the ReplicationService: the latter uses object properties for the IDs of batch, product, and plant and for the code of the check status, while the former spells them out directly.
  3. The BatchReviewsService has a few "internal" data fields to enable additional data record provenance tracking.
  4. The ReplicationService schema provides a reference to a "parent" BatchReviews object. In order to convey this information with the BatchReviewsService API, one would need to post a ComponentRelationStage (Schema View | Component Relations | SAP Business Accelerator Hub).

The differences between the two models shouldn't be surprising: when the integration is executed from SAP S/4HANA systems, the data objects received by the ReplicationService are "adapted". In this example, that implies adding the relation between a BatchReview and its "parent" BatchReview through an additional call to the ComponentRelationStage endpoint.
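For illustration only, hereafter is a sketch of how the same Batch Review could look in the two payload shapes. The flat shape follows the sample payload used later in this tutorial; the nested shape is merely a plausible reading of the dotted names (batch.id, product.id, plant.id, checkStatus.code) shown in the table above, not an excerpt from the official Replication API documentation.

Code Snippet

# Option 2 - BatchReviewsService: flat field names.
cat > batchreview_api.json <<'EOF'
{"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001","batchReviewCheckStatus_code":"BCHEC00001","status_ID":"A","sourceModifiedBy":"DEMO","sourceIdentifier":"KAFKA","sourceModifiedAt":"2023-04-13T15:51:04.0000000Z"}
EOF

# Option 3 - ReplicationService: nested objects (assumed shape, for comparison only).
cat > batchreview_replication.json <<'EOF'
{"batch":{"id":"MES0000003"},"product":{"id":"MAT0000001"},"plant":{"id":"PLT0000001"},"checkStatus":{"code":"BCHEC00001"},"sourceModifiedBy":"DEMO","sourceIdentifier":"KAFKA","sourceModifiedAt":"2023-04-13T15:51:04.0000000Z"}
EOF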

The integration approach that we describe in this tutorial makes use of the freely available Solace sink connector and of SAP Advanced Event Mesh. SAP Advanced Event Mesh is part of SAP Integration Suite; it is a complete event streaming, event management, and monitoring platform that makes software event brokers available as a service. With Advanced Event Mesh you can create event broker services in minutes, build a model of your event mesh to design and help implement it, and monitor your services and event mesh to ensure everything runs smoothly.

For this tutorial we choose approach 2, namely the BatchReviewsService endpoint (Overview | Batch Record Reviews | SAP Business Accelerator Hub), since this would also be the approach used by customers that have not licensed SAP S/4HANA and it is therefore of more generic usage.

Customers of SAP S/4HANA are recommended to use the direct integration in SAP S/4HANA and then the Data Replication Framework provided by the add-on that BRH delivers to them.

SAP Integration Suite, advanced event mesh is a fully managed event streaming and management service that enables enterprise-wide and enterprise-grade event-driven architecture.

What is Advanced Event Mesh?

  • Advanced Event Mesh is a distributed mesh of event brokers that can be deployed across environments, both in the cloud and on premises
  • It offers a full set of eventing services covering all relevant use cases
  • AEM supports event streaming, event management, and event monitoring
  • Brokers scale as required and come in T-shirt sizes to fit different needs

What features and benefits does AEM offer?

  • AEM offers enterprise-grade performance, reliability, security and governance.
  • It scales to very large use cases, while you can also start small if needed.
  • SAP Integration Suite, advanced event mesh offers deployment options across different hyperscalers and in private cloud environments. All it takes is a Kubernetes environment.
  • AEM can be configured to form a distributed mesh of event brokers. Events can flow across the mesh to be consumed where desired.
  • It includes a sophisticated toolset to address tasks like cluster management, mesh management and monitoring/tracing.
  • SAP Integration Suite integrates with SAP backends via different options.

A Batch Record Review (BRR) is an essential component for the decision to release a batch of products.

The BRR is produced by a Manufacturing Execution System and the minimal required set of information is:

  • Batch Identifier
  • Material identifier
  • Plant identifier
  • Batch Review Check status code
  • Status ID
  • Source modified By
  • Source Identifier
  • Source modified date and time

This information must then be loaded into SAP BRH and processed there, semi-automatically or automatically, according to the assigned release type.

In our example we'll be using the following data, which in normal operations would be automatically loaded into the Kafka topic.

Code Snippet
{"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001", "batchReviewCheckStatus_code": "BCHEC00001","status_ID": "A","sourceModifiedBy": "DEMO", "sourceIdentifier": "KAFKA", "sourceModifiedAt": "2023-04-13T15:51:04.0000000Z"}

For the purpose of this tutorial we'll therefore create the end-to-end flow from Kafka: assuming that there is a Kafka topic to which Batch Record Reviews are posted, we expect to see that data flowing into SAP BRH when we post it to the Kafka topic.

The corresponding online documentation (Try Out | Batch Record Reviews | SAP Business Accelerator Hub) provides all details also at "source code" level.

Here we see that:

  1. The endpoint for the POST of BatchReviewStage is "/dso_batchreviews/BatchReviewsStage"
  2. The Content-Type is application/json
  3. That endpoint also accepts a custom header, x-sapbrh-autoactivate (see the curl sketch below)

These and other details are needed later for the configuration of the following:

  • REST Consumer
  • Queue Bindings
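Hereafter is a minimal curl sketch of the POST that will ultimately be issued against this endpoint. It is provided only to make the configuration values concrete: <brh-host> stands for the "uri" from the BRH integration service credentials, and $TOKEN for an OAuth 2.0 access token (see the token request sketch later in this lesson).

Code Snippet

# POST a Batch Review to the staging endpoint of SAP BRH.
# <brh-host> and $TOKEN are placeholders; the endpoint path and headers are the ones listed above.
curl -X POST "https://<brh-host>/dso_batchreviews/BatchReviewsStage" \
  -H "Content-Type: application/json" \
  -H "x-sapbrh-autoactivate: false" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001","batchReviewCheckStatus_code":"BCHEC00001","status_ID":"A","sourceModifiedBy":"DEMO","sourceIdentifier":"KAFKA","sourceModifiedAt":"2023-04-13T15:51:04.0000000Z"}'
# With x-sapbrh-autoactivate set to "false" the record stays in the "Stage" area, as explained later.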

The first step, after having received access to SAP Advanced Event Mesh, is to create a service. In this case we called it "SAP BRH" and chose the EU (Frankfurt) AWS instance. This service can be created in several different ways, and also on customer-controlled infrastructure (for example, on premises).

Normally this information is not used for regular operations, but it might be needed for service support.

The "Connect" tab contains important information in order to access the SAP Advanced Event Mesh using different protocols.

The "connectors" that we are going to use have a "native" high performance protocol called "Solace Message Format" and through this page one can not only see the credentials that we'll need to input in the connector properties files, but also sample code in various programming languages on how to use the SMF format from custom code.

The credentials shown on the right side and the "Download PEM" link will be required for the configuration of the Kafka plugin.

Therefore take note of them and download the PEM; it will be used later to properly configure the Kafka connector.

As one can see from this page, several code examples are provided to connect to SAP Advanced Event Mesh; however, for the purpose of this tutorial, and for your integration from Kafka to SAP BRH, there is no need to use the client libraries.

The Solace Web Messaging connection details will be useful later, in order to configure the "Try me!" web client and check the incoming messages as soon as they are provided.

Please take note of the following values, since they will be used for the configuration of the "Try Me" client:

  • Username
  • Password
  • "Public Internet" address (starts with wss://…)

The Kafka Connector will connect to SAP Advanced Event Mesh using the credentials provided above, and then publish into a queue.

The next step is therefore to create a queue to store the incoming messages and this is done through the "Manage" tab.

The queue "brhbrrqueue" will be storing the messages that SAP Advanced Event Mesh is receiving from the specific kafka topic. As you can see above also other Queues can be managed in the same system.

The "brhbrrqueue" summary page shows all core information needed, in order to properly work.

Some of these parameters are set through the "Edit" button (the pencil on the top right) while others are set through additional steps that we are going to cover in detail below.

When a queue is first created, it is deactivated and is just a label.

Note

Be aware that once created, the name of the queue cannot be changed.

By activating "incoming" and "outgoing" you enable the queue to receive and forward messages. Deactivating them allows you to "stop the flow" and check, for instance, whether the data is arriving properly.

Access Type

The access type for delivering messages to consumer flows bound to the Queue.

  • Exclusive

    Exclusive delivery of messages to the first bound consumer flow.

  • Non-Exclusive

    Non-exclusive delivery of messages to all bound consumer flows in a round-robin fashion.

For BRH, typically we would use "Exclusive" delivery with only one consumer flow bound to the queue.

Messages Queued Quota (MB)

The maximum message spool usage allowed by the Queue, in megabytes (MB). A value of 0 only allows spooling of the last message received and disables quota checking.

For BRH, 5000 MB is already quite a large value and is sufficient, since we do not expect 5 GB of messages to accumulate from Kafka before being delivered to BRH. Nevertheless, the topic of resilience has been addressed elsewhere, and consideration could be given to using the Data Replication endpoint instead for large volumes of Kafka POSTs.

Owner

The Client Username that owns the Queue and has permission equivalent to "Delete".

Non-Owner Permission

The permission level for all consumers of the Queue, excluding the owner.

  • No Access: Disallows all access.
  • Read Only: Read-only access to the messages.
  • Consume: Consume (read and remove) messages.
  • Modify Topic: Consume messages or modify the topic/selector.
  • Delete: Consume messages, modify the topic/selector or delete the Client created endpoint altogether.

Maximum Consumer Count

The maximum number of consumer flows that can bind to the Queue. The default is 1000; however, for BRH a smaller number should be sufficient. Each Kafka source could connect independently, and the maximum number of concurrent flows has to be provided here.

Many parameters control the behavior of the Queue, and these can be accessed through the Advanced Mode.

While this is not needed for the purpose of the tutorial, since we are using the default values and these are sufficient, it's good to know where all the limits are and how to access them.

At this point one has to create a Connector, which itself uses a REST Consumer to connect to the REST endpoint of BRH.

The REST Consumer is the component that takes the message from the queue and posts it to the endpoint. The "Host" parameter must be the "uri" from the BRH integration service credentials, without "https://" and without the endpoint path.

Port 443 is to be used when relying on the standard port for the HTTPS protocol. The authentication scheme is "OAuth 2.0 Client Credentials".

It requires the clientid and clientsecret to get a token from the {uaa.url}/oauth/token endpoint.
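As a sketch, such a token can be obtained with a standard OAuth 2.0 client-credentials request; {uaa.url}, clientid, and clientsecret are placeholders for the values found in the BRH integration service credentials.

Code Snippet

# Request an access token using the client-credentials grant.
curl -X POST "{uaa.url}/oauth/token" \
  -u "clientid:clientsecret" \
  -d "grant_type=client_credentials"
# The JSON response contains an "access_token" value, to be sent as "Authorization: Bearer <token>".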

Note

The Token Endpoint URL is not necessarily the same as the Host used for messaging!

After successful enablement of the REST Consumer and of TLS, clicking on "Stats" and then on "Details" provides valuable information, including the connection status and the credential authentication.

Once the Queue is created and the REST Consumer properly setup, the next step needed is to configure the Queue Binding.

In this example we are binding against the corresponding endpoint; please note:

  • The Post Request Target is the address after the {host} in the -url parameter of the curl example
  • The header "Content-Type" must be created and set to "application/json"
  • The header "x-sapbrh-autoactivate" can be created too; if set to "false", the POST will stay in the "Stage" area and not be delivered to the "Active" area, as explained in previous lessons.

If everything has been done correctly, then at this point your connector should look as shown above.

Note

Please note that if the operational state is not "Up", it means that there are errors in the configuration.

There are two different "Try me!" web interfaces, one from the "Open Broker" (shown on the left) and one from Cluster Manager - Services tab (shown on the top right).

They are two truly different applications, with slightly different user interfaces. However, in both cases you need to ensure that they are properly configured.

Since these are web clients, you must use the credentials and broker URL of the "Solace Web Messaging" connection. After having successfully established a connection, you need to subscribe to a topic, in our case "brhbrrqueue". Also be aware that if you close the window or change the page, the credentials are often lost and you need to enter them again.

In this screenshot are shown the connection configurations for the "Try Me" as available from the "Open Broker".

Please remember that this application is generic and not BRH-specific. While technically you could use this user interface to publish data into the queue, you should not do so; instead, use it only as a "subscriber" for testing purposes.

At this point we are set to receive data and publish it into SAP BRH through the API. We now need to close the gap and build the other side of the bridge, from Kafka.

Note

The Solace Sink Connector is *not* an SAP product; it is currently licensed under the Apache License, version 2.0.

The installation instructions are available on GitHub - SolaceProducts/pubsubplus-connector-kafka-sink and are very simple to follow:

  1. Have Kafka properly running, and able to access internet (either directly or through proxies).
  2. Have Java installed, according to the documentation. For this example we are using SAP JVM, running on a MacBook, with Kafka installed via "brew install kafka" and operating as a service.

    In order to test whether it is working properly, use the tool-specific instructions; in our case we ran "brew services info kafka", which returned:

    kafka (homebrew.mxcl.kafka)
    Running: YES
    Loaded: YES
    Schedulable: NO
    User: #####
    PID: 2484
  3. Have downloaded the PEM certificate from the Connect tab, Solace Messaging, as described previously.
  4. Create a jssecacerts store in a well-defined location (in this example /Users/DEMO/jssecacerts) and load the PEM certificate into it, using a carefully chosen password: keytool -importcert -file DigiCertGlobalRootCA.crt.pem -keystore /Users/DEMO/jssecacerts -storepass mypassword
  5. Download the archive (this tutorial uses version 2.3.0, from https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink#downloads)
  6. Create a "connectors" folder (for example, /Users/DEMO/connectors)
  7. Extract the archive in the above-mentioned "connectors" folder; this will generate a subfolder called "/Users/DEMO/connectors/pubsubplus-connector-kafka-sink-2.3.0/"
  8. Edit the configuration file "/Users/DEMO/connectors/pubsubplus-connector-kafka-sink-2.3.0/etc/solace_sink.properties" and properly set the properties, which according to Java rules are specified as key=value pairs using the "=" sign as separator (a filled-in sketch is shown after this list):
    • sol.host=<the address of the Secure SMF host from the "Connection Solace Messaging" connection details>
    • sol.username=<the username in the "Connection Solace Messaging" connection details>
    • sol.password=<the password in the "Connection Solace Messaging" connection details>
    • sol.vpn_name=<the name of the message VPN that has been previously created>
    • sol.ssl_trust_store=<the location of the jssecacerts file previously created. In this tutorial /Users/DEMO/jssecacerts>
    • sol.ssl_trust_store_password=<the password for accessing the jssecacerts file. In this tutorial mypassword>
    • topics=<the local topic that will be forwarded to BRH>
    • sol.topics=<the target topic in the SAP Advanced Event Mesh. In this tutorial brhbrrqueue>
  9. Edit the /etc/kafka/connect-standalone.properties file and specify the location of the connectors: plugin.path=/Users/DEMO/connectors
  10. Restart Kafka.
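As announced in step 8, hereafter is a filled-in sketch of the solace_sink.properties file, written as a shell heredoc for convenience. All values are examples or placeholders to be replaced with those of your own SAP Advanced Event Mesh service, and the connector name is arbitrary; the keys name, connector.class, tasks.max, and topics are the standard Kafka Connect connector settings, and the connector class matches the one reported by the connector-plugins check shown further below.

Code Snippet

# Filled-in sketch of solace_sink.properties; every value below is an example or placeholder.
cat > /Users/DEMO/connectors/pubsubplus-connector-kafka-sink-2.3.0/etc/solace_sink.properties <<'EOF'
# Standard Kafka Connect connector settings.
name=solaceSinkConnector
connector.class=com.solace.connector.kafka.connect.sink.SolaceSinkConnector
tasks.max=1
# Local Kafka topic to forward.
topics=test
# SAP Advanced Event Mesh (Solace Messaging) connection details from the Connect tab.
sol.host=<secured SMF host URI from the Connect tab>
sol.username=<username from the Connect tab>
sol.password=<password from the Connect tab>
sol.vpn_name=<message VPN name>
# Target topic in SAP Advanced Event Mesh.
sol.topics=brhbrrqueue
# Trust store created earlier with keytool.
sol.ssl_trust_store=/Users/DEMO/jssecacerts
sol.ssl_trust_store_password=mypassword
EOF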

At this point everything is set, and one only has to start the Connect service provided by Kafka:

/opt/kafka/3.4.0/bin/connect-standalone /opt/kafka/3.4.0/.bottle/etc/kafka/connect-standalone.properties /Users/DEMO/connectors/pubsubplus-connector-kafka-sink-2.3.0/etc/solace_sink.properties

In order to verify that the Kafka connector is up and running one can use Postman, for example, as shown above.

The same could also be achieved with any other HTTP client, for example: curl http://localhost:8083/connector-plugins

Code Snippet
[{"class":"com.solace.connector.kafka.connect.sink.SolaceSinkConnector","type":"sink","version":"2.3.0"},{"class":"com.solace.connector.kafka.connect.source.SolaceSourceConnector","type":"source","version":"2.3.0"},{"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"3.4.0"},{"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"3.4.0"},{"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"3.4.0"}]

At this point we are set: we expect to send a payload to the local Kafka topic and then see it appear in the BRH web user interface, in the "Stage" area.

Hereafter is the payload that we'll be sending:

Code Snippet
{"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001", "batchReviewCheckStatus_code": "BCHEC00001","status_ID": "A","sourceModifiedBy": "DEMO", "sourceIdentifier": "KafkaSAPAEMBRHAPI", "sourceModifiedAt": "2023-04-13T15:51:04.0000000Z"}

These are the minimal fields that must be provided for the BatchReview object, according to the specification of the BRH API endpoint.

  1. Start the Kafka console producer against the topic "test" (alternatively, see the one-shot sketch after this list): kafka-console-producer --topic test --bootstrap-server localhost:9092
  2. Paste the payload
    Code Snippet
    {"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001", "batchReviewCheckStatus_code": "BCHEC00001","status_ID": "A","sourceModifiedBy": "DEMO", "sourceIdentifier": "KafkaSAPAEMBRHAPI", "sourceModifiedAt": "2023-04-13T15:51:04.0000000Z"}
  3. Observe in the SAP Advanced Event Mesh "Try me!" web client that the payload has been received
  4. Observe in SAP Batch Release Hub, Data Monitoring - Batch Record Review - Staging, that the new record has arrived
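As an alternative to pasting the payload interactively, the same message can be piped to the console producer in one shot; a sketch assuming the same topic and broker as above:

Code Snippet

# Send the Batch Record Review payload as a single Kafka message to the "test" topic.
echo '{"batch_ID":"MES0000003","batch_material_ID":"MAT0000001","batch_plant_ID":"PLT0000001","batchReviewCheckStatus_code":"BCHEC00001","status_ID":"A","sourceModifiedBy":"DEMO","sourceIdentifier":"KafkaSAPAEMBRHAPI","sourceModifiedAt":"2023-04-13T15:51:04.0000000Z"}' \
  | kafka-console-producer --topic test --bootstrap-server localhost:9092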

If for some reason the data is not shown in SAP Advanced Event Mesh, one can use the scripts from GitHub - richard-lawrence/Solace-SEMP-V1-Scripts: Example Solace SEMP V1 Scripts; after appropriately configuring the environment, they collect various logging details that help to resolve issues.

At this point we have concluded the tutorial and demonstrated step by step how to set up and populate the flow from Kafka to SAP BRH using the Solace connector and SAP Advanced Event Mesh.
