Logging Information in SAP BTP

Objectives
After completing this lesson, you will be able to:

  • Explain SAP BTP's logging infrastructure for handling events

Logging in SAP BTP

Two types of events are relevant for logging information, such as what happened, why the cloud application failed, or what the malfunction was:

  • The first type of event helps you understand an application internally and identify the root causes of malfunctions. Typical examples include incoming requests, status codes, and exceptions in the application. To capture these events, you use the application logging services provided by SAP Business Technology Platform (BTP); see the sketch after this list.
  • The second type of event covers audit-relevant events. Typical examples include access to and changes of sensitive data, or changes to critical application settings.
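
As a minimal sketch of the first event type, the following Java class logs an incoming request, a status code, and an exception through SLF4J, a widely used logging facade on the JVM that is also used by the SAP Cloud SDK for Java. The class, method, and message names are illustrative only, not part of any SAP API.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderService.class);

    public void processOrder(String orderId) {
        // Application-internal event: an incoming request
        LOGGER.info("Processing order {}", orderId);
        try {
            // ... business logic, e.g. calling a remote service ...
            LOGGER.debug("Order {} processed, status code {}", orderId, 200);
        } catch (Exception e) {
            // Application-internal event: an exception, including its stack trace
            LOGGER.error("Processing of order {} failed", orderId, e);
        }
    }
}
```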

Although both types of event seem quite similar, their requirements in terms of how the logging infrastructure handles them are quite different. A logger that records application internals must be fast, because logging these events must not degrade application performance; in the best case, the logs are written asynchronously, as illustrated in the sketch below. This logger does not have to be fully reliable: missing a few log entries is not a serious problem. Furthermore, these logs are only valuable when investigating recent problems and can be deleted after some time. For audit logs, however, the requirements are different. For legal reasons, these logs have to be retained longer, and the logger must be reliable and safe. It therefore makes sense to fulfill these requirements with different tools.
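
The asynchronous-writing requirement for application-internal logs can be illustrated with Logback, one of the common SLF4J backends. The sketch below wraps a console appender (Cloud Foundry collects an application's standard output) in Logback's AsyncAppender, so the application thread only enqueues log events while a background thread performs the actual write. The concrete settings are assumptions for illustration, not an SAP-prescribed configuration.

```java
import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import org.slf4j.LoggerFactory;

public class AsyncLoggingSetup {

    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // Console appender that writes the actual log lines (stdout is what
        // Cloud Foundry collects from an application).
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n");
        encoder.start();

        ConsoleAppender<ILoggingEvent> console = new ConsoleAppender<>();
        console.setContext(context);
        console.setEncoder(encoder);
        console.start();

        // Wrap the console appender so the application thread only enqueues
        // the event; a background thread performs the actual write.
        AsyncAppender async = new AsyncAppender();
        async.setContext(context);
        async.addAppender(console);
        // Discard low-priority events when the queue is almost full --
        // acceptable for application-internal logs, not for audit logs.
        async.setQueueSize(512);
        async.setDiscardingThreshold(20);
        async.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.INFO);
        root.addAppender(async);

        LoggerFactory.getLogger(AsyncLoggingSetup.class).info("Asynchronous logging configured");
    }
}
```

Audit-relevant events, in contrast, would not go through such a lossy in-memory queue but through a reliable, dedicated audit logging channel.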

SAP BTP, together with the SAP Cloud SDK, provides the ability to produce and analyze logs.

Since Cloud Foundry encourages a microservice architecture, it uses the open-source Elastic Stack (https://www.elastic.co) as the central log aggregator. The Elastic Stack collects logs from multiple services and enables their analysis in one place using Kibana, its log analysis front end. Kibana provides a dashboard that presents the relevant malfunctions and errors in a clear, detailed format.

Logging with a central logging service consists of two steps. First, the application or each individual microservice provides log messages with a certain structure. Second, a central log service collects the logs and makes them available in Kibana.
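
The first step, producing log messages with enough structure for a central aggregator to index, can be sketched with SLF4J's Mapped Diagnostic Context (MDC). Key-value pairs placed into the MDC are emitted as separate fields when a JSON encoder is configured, for example via SAP's cf-java-logging-support library (assumed here, its configuration is not shown), and can then be filtered in Kibana. The field names below are illustrative only.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(RequestHandler.class);

    public void handle(String correlationId, String tenantId) {
        // Attach contextual key-value pairs; a JSON encoder emits them as
        // separate fields, so Kibana can filter on them (e.g. by correlation id).
        MDC.put("correlation_id", correlationId);
        MDC.put("tenant_id", tenantId);
        try {
            LOGGER.info("Handling incoming request");
            // ... actual request processing ...
        } finally {
            // Always clean up, because the MDC is bound to the current thread.
            MDC.remove("correlation_id");
            MDC.remove("tenant_id");
        }
    }
}
```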
