Collecting Application Logs and Shipping them to SAP Cloud Logging

Objective

After completing this lesson, you will be able to collect application logs in Kyma and ship them to SAP Cloud Logging.

Collecting Application Logs in Kyma and Shipping to SAP Cloud Logging

After deploying your microservice-based application to Kyma, you want to monitor it continuously with Kyma's telemetry capabilities and the integration to SAP Cloud Logging.

In this exercise, you will perform the following tasks:

  1. Deploy a sample application that produces application logs.
  2. Provision an SAP Cloud Logging instance and create a ServiceBinding with credential rotation for it.
  3. Configure a Kyma Telemetry LogPipeline resource that references the binding.
  4. Use SAP Cloud Logging to explore logs.
  5. Add deep linking to Kyma dashboard.

Prerequisites

  • You have successfully created a Kyma runtime instance in the SAP BTP subaccount.
  • You have an entitlement for an SAP Cloud Logging plan.
  • You have added the Telemetry module. See Adding Kyma Modules.
  • You have configured kubectl to work with your Kyma runtime instance.

Task 1: Deploy a sample application that produces application logs

Steps

    In this task, you will create a new hello-kyma deployment and service in a new namespace called telemetry-exercise. The deployment will use the image ghcr.io/sap-samples/kyma-runtime-learning-journey/hello-kyma:1.0.0, which writes application logs for incoming requests to stdout in JSON format.

  1. Create a new namespace called telemetry-exercise:

    Code Snippet
    kubectl create namespace telemetry-exercise
  2. In the telemetry-exercise namespace, create a new deployment called hello-kyma:

    Code Snippet
    kubectl apply -n telemetry-exercise -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-learning-journey/main/unit_9/hello-kyma-deployment-svc.yaml
  3. Verify the successful creation of the deployment:

    Code Snippet
    kubectl get deployments -n telemetry-exercise

    You should see output similar to the following:

    Code Snippet
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    hello-kyma   1/1     1            1           2m
  4. Port-forward the `hello-kyma` service to your local machine:

    Code Snippet
    kubectl port-forward -n telemetry-exercise svc/hello-kyma 8080:8080

    Leave this terminal window open, and open http://localhost:8080 in a new browser tab. You should see a Hello Kyma! (Version 1.0.0) page.

  5. Stop the port-forwarding by pressing Ctrl+C in the terminal window.

  6. Verify that logs are written to stdout for every request:

    Code Snippet
    kubectl logs -n telemetry-exercise -l app=hello-kyma

    The logs retrieved with kubectl are stored only temporarily on the Node file system. They are rotated frequently and are lost when the Pod or the Node is rescheduled. The following setup collects the logs from the Node file system, stores them long-term in SAP Cloud Logging, and makes them searchable across Pods and namespaces.
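
    To generate a few more log entries for later exploration, you can, for example, re-establish the port-forward and send a handful of requests with curl. This is optional and reuses the service and port from step 4:

    Code Snippet
    # In one terminal, re-establish the port-forward
    kubectl port-forward -n telemetry-exercise svc/hello-kyma 8080:8080

    # In a second terminal, send a few requests to produce log lines
    for i in $(seq 1 5); do curl -s http://localhost:8080 > /dev/null; done

    # Inspect the most recent JSON log lines written to stdout
    kubectl logs -n telemetry-exercise -l app=hello-kyma --tail=5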

Task 2: Provision an SAP Cloud Logging instance and create a binding with credential rotation

Steps

  1. Using the SAP BTP Operator module (which is in your Kyma cluster by default), define a new instance for SAP Cloud Logging:

    Code Snippet
    cat <<EOF | kubectl -n telemetry-exercise apply -f -
    apiVersion: services.cloud.sap.com/v1
    kind: ServiceInstance
    metadata:
      name: cloud-logging
    spec:
      serviceOfferingName: cloud-logging
      servicePlanName: standard
      externalName: Cloud Logging
      parameters:
        ingest_otlp: # must be enabled for trace and metric data ingestion
          enabled: true
    EOF
  2. For the instance you just created, define a new Service Binding that has credential rotation enabled:

    Code Snippet
    cat <<EOF | kubectl -n telemetry-exercise apply -f -
    apiVersion: services.cloud.sap.com/v1
    kind: ServiceBinding
    metadata:
      name: cloud-logging
    spec:
      serviceInstanceName: cloud-logging
      externalName: cloud-logging
      secretName: cloud-logging
      credentialsRotationPolicy:
        enabled: true
        rotationFrequency: 720h
        rotatedBindingTTL: 24h
    EOF

    The credential rotation policy rotates the credentials in the generated Secret periodically, keeping the old credentials valid in parallel for the specified TTL. Whenever the content of the Secret changes, the LogPipeline resource defined in the next task reloads the Secret value dynamically. This ensures that the credentials never expire and are rotated regularly.

  3. Verify the setup by checking the state of the instance and binding:

    Code Snippet
    kubectl -n telemetry-exercise get serviceinstance
    Code Snippet
    kubectl -n telemetry-exercise get servicebinding

    It can take several minutes until the instance is provisioned and the binding and the related Kubernetes Secret are created. However, you can already continue with the next task, because the LogPipeline will pick up the Secret value dynamically as soon as it is available.
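
    If you prefer to wait until the provisioning has finished before continuing, you can block on the readiness of both resources. This is only a sketch; it assumes that the SAP BTP Operator reports a Ready condition on ServiceInstance and ServiceBinding resources:

    Code Snippet
    # Wait for the SAP Cloud Logging instance to be provisioned (can take several minutes)
    kubectl -n telemetry-exercise wait --for=condition=Ready serviceinstance/cloud-logging --timeout=15m

    # Wait for the binding; the related Secret is created once the binding is ready
    kubectl -n telemetry-exercise wait --for=condition=Ready servicebinding/cloud-logging --timeout=5m

    # Confirm that the generated Secret exists
    kubectl -n telemetry-exercise get secret cloud-logging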

Task 3: Configure a Kyma Telemetry LogPipeline referencing the binding

Steps

  1. Create a LogPipeline:

    Code Snippet
    cat <<EOF | kubectl apply -f -
    apiVersion: telemetry.kyma-project.io/v1alpha1
    kind: LogPipeline
    metadata:
      name: cloud-logging
    spec:
      output:
        http:
          dedot: true
          host:
            valueFrom:
              secretKeyRef:
                name: cloud-logging
                namespace: telemetry-exercise
                key: ingest-mtls-endpoint
          tls:
            cert:
              valueFrom:
                secretKeyRef:
                  name: cloud-logging
                  namespace: telemetry-exercise
                  key: ingest-mtls-cert
            key:
              valueFrom:
                secretKeyRef:
                  name: cloud-logging
                  namespace: telemetry-exercise
                  key: ingest-mtls-key
          uri: /customindex/kyma
    EOF

    Note

    This example uses a LogPipeline, but the Telemetry module is not limited to logs. You can integrate traces and metrics in a similar way.
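
    For example, a TracePipeline for the same SAP Cloud Logging instance could look like the following sketch. It assumes that OTLP ingestion is enabled on the instance (see the ingest_otlp parameter in Task 2) and that the generated Secret exposes the keys ingest-otlp-endpoint, ingest-otlp-cert, and ingest-otlp-key; verify the exact key names in your Secret before applying it:

    Code Snippet
    cat <<EOF | kubectl apply -f -
    apiVersion: telemetry.kyma-project.io/v1alpha1
    kind: TracePipeline
    metadata:
      name: cloud-logging
    spec:
      output:
        otlp:
          endpoint:
            valueFrom:
              secretKeyRef:
                name: cloud-logging
                namespace: telemetry-exercise
                key: ingest-otlp-endpoint # assumed key name, check your Secret
          tls:
            cert:
              valueFrom:
                secretKeyRef:
                  name: cloud-logging
                  namespace: telemetry-exercise
                  key: ingest-otlp-cert # assumed key name, check your Secret
            key:
              valueFrom:
                secretKeyRef:
                  name: cloud-logging
                  namespace: telemetry-exercise
                  key: ingest-otlp-key # assumed key name, check your Secret
    EOF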

Task 4: Use SAP Cloud Logging to explore logs

Steps

    Setting up SAP Cloud Logging also triggered the creation of a Secret that contains not only the details for pushing data to SAP Cloud Logging, but also the credentials for accessing the SAP Cloud Logging Dashboard.

    Note

    This example simplifies the setup by using Basic Authentication with a shared secret for access to the SAP Cloud Logging Dashboard. However, when you use Kyma in live environments, it's recommended to use single sign-on (SSO), for example with a SAML configuration. If you use SSO, no credentials for accessing the SAP Cloud Logging Dashboard are provided with the generated Secret.

  1. To see your credentials, inspect the Secret:

    Code Snippet
    kubectl -n telemetry-exercise get secret cloud-logging -o go-template='{{range $key, $value := .data}}{{$key}}={{$value|base64decode}}{{"\n"}}{{end}}'

    The attribute dashboards-endpoint contains the URL of the SAP Cloud Logging Dashboard; the attributes dashboards-username and dashboards-password contain the credentials for authentication. A command-line shortcut for extracting the URL is sketched at the end of this task.

  2. Open the URL taken from dashboards-endpoint and authenticate yourself with the credentials.

  3. In the Discover section, inspect the logs-json-kyma-* indexes. All logs are available for querying right away, together with the log attributes used by the application.
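
    As an alternative to copying the URL manually, you can extract it directly from the Secret on the command line. This sketch reuses the go-template approach from step 1 and the key name listed above:

    Code Snippet
    # Print only the decoded dashboard URL
    kubectl -n telemetry-exercise get secret cloud-logging -o go-template='{{index .data "dashboards-endpoint" | base64decode}}{{"\n"}}'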

Task 5: Add deep linking to Kyma dashboard

Steps

  1. To add navigation and deep links to Kyma dashboard, follow Use SAP Cloud Logging Dashboards.

  2. To try out the new links, go to Kyma dashboard, choose a namespace, and select Discover Logs in SAP Cloud Logging.

Result

Bravo! You have completed this exercise. You have used Kyma's Telemetry module to collect and visualize the logs of your extension with SAP Cloud Logging.
