Principles of Cloud-native Application Development

Objectives
After completing this lesson, you will be able to:

  • Describe the four fundamental pillars for building cloud-native applications

The Four Pillars of Cloud-native Applications

The fundamentals of Cloud-native applications are based on four pillars, as shown in the figure below.

A popular way to define Cloud-native applications is to consider the four fundamental pillars of DevOps, microservices, containers, and continuous delivery. Each of these pillars addresses a specific aspect of building modern Cloud applications, ranging from the development mindset to the technologies that act as key enablers for delivering applications with the desired qualities. In this unit, we explore each of these pillars in detail to understand how to build better Cloud-native applications.

Introducing DevOps

Dev vs Ops?

Watch this video for an introduction to DevOps.

Three Core Principles of DevOps

We will now explore the three core principles of DevOps.

Introducing the microservice architecture

Overview

The features of microservices are as follows:

  • Microservices are small, autonomous, and independently deployable services that run in separate processes and communicate with each other over HTTP-based RESTful APIs (a minimal sketch of such a service follows this list).
  • Applications are developed as a distributed system of microservices that are often organized around certain characteristics, such as business domains, organizational team structures, or components that must scale independently of each other.
  • The microservices architecture can offer a higher level of flexibility than a traditional monolithic approach. Microservices emerged from the observation that, as a monolithic application needs to scale and evolve, the ability to quickly deliver changes becomes increasingly difficult. This challenge is often amplified if software engineering principles like clean modularization and a clear separation of concerns have not been applied consistently throughout the monolithic application. In such cases, delivering functionality becomes ever more difficult and risky, because even a small change in the code base can affect a multitude of components in ways that are hard, or even impossible, to predict. Furthermore, monolithic applications do not allow you to scale specific functionality in isolation, but instead require the full replication of all functionality, which leads to overhead and increasing costs. Nevertheless, we should keep in mind that a microservices architecture requires us to weigh the complexity of a distributed system against the flexibility gained through benefits such as the independent scaling and deployment of microservices.
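
To make the first point more concrete, the following minimal sketch (written in Go for brevity; the service name, resource, port, and payload are illustrative assumptions, not part of the course material) shows a single-process service that owns one resource and exposes it over an HTTP-based REST endpoint:

```go
// Minimal sketch of an independently deployable microservice exposing one
// HTTP-based REST endpoint. All names and values here are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Product is a hypothetical resource owned by this service's business domain.
type Product struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

func main() {
	// The service runs in its own process and owns its own endpoint.
	http.HandleFunc("/products", func(w http.ResponseWriter, r *http.Request) {
		products := []Product{{ID: "P-001", Name: "Example product"}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(products)
	})

	// Other services (or an API gateway) call this endpoint over plain HTTP,
	// for example: GET http://localhost:8080/products
	log.Println("product service listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Each such service can be built, deployed, and scaled on its own, which is exactly the flexibility described above.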

Benefits of Microservices

We can understand more about microservices by analyzing their key benefits.

What Should the Architectural Approach Be?

After looking at the benefits, we might ask ourselves: how should we approach a microservices architecture? There are two common approaches: monolith-first and microservices-first.

The monolith-first approach aims to leverage the benefit of gaining experience and knowledge about the application domain by first building a monolithic application. Once the application domain is well understood, the monolith is gradually split into microservices along the discovered service boundaries. This avoids the initial complexity and risk of prematurely decomposing the application into a distributed system along suboptimal dimensions, which can result in an application that is hard to refactor and test across the boundaries of its microservices. Nevertheless, the monolith-first approach also bears some risks.

In contrast, the microservices-first approach recommends starting directly with microservices instead of a monolith, because a monolith can increase the risk of introducing strong inter-dependencies that are hard to resolve and split up afterward. In particular, monolithic applications naturally tend to favor tight coupling. An example of such tight coupling is a shared persistence model, which is usually extremely difficult to separate later on, as it requires migration of code and data.

Therefore, the choice between the two approaches must be carefully considered for each project, based on the requirements of the application and the development team's existing knowledge of and experience with the business domain.

Introducing containers

Containers are the third pillar of building Cloud-native applications. They offer a lightweight approach to running multiple processes in an isolated and specifically restricted way on a single host operating system or virtual machine. Containers are realized with Linux control groups and namespaces, where control groups define what processes can do and namespaces define what processes can see.

Given their lightweight isolation, containers enable an easy, reliable, and flexible way of moving software between different computing environments, from our local laptop up to the data centers in the Cloud. Therefore, containers are a key enabling technology for continuously delivering software in a reproducible and highly automated way.
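
As a rough, Linux-only illustration of the namespace mechanism mentioned above (a simplified sketch, not how production container runtimes are implemented), the following Go program starts a shell in new UTS, PID, and mount namespaces, so that the child process sees its own hostname and process tree. It typically requires root privileges to run, and real container engines add control groups, file system images, networking, and more on top of this building block:

```go
// Linux-only sketch: start a shell in new UTS, PID, and mount namespaces.
// Namespaces restrict what the child process can see; control groups (not
// shown here) restrict what it can do with resources such as CPU and memory.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// Give the child its own hostname (UTS), process tree (PID),
		// and mount table namespaces.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```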

Introducing continuous delivery

Digital transformation demands greater responsiveness to customers: software requires more frequent updates in line with business needs. While traditional software development is based on the waterfall model, which tends to follow rigid, long-term plans with release cycles of several months or years, agile development principles embrace a culture of change and of delivering value to customers in short release cycles of only a few days, ideally allowing changes to be delivered several times per day. Agile methodologies offer a wide variety of benefits over the traditional waterfall approach.

They provide an increased level of transparency and awareness of the work that is ongoing and still pending, which strengthens effective collaboration among cross-functional teams. With short, well-prioritized iterations of usually only one to two weeks, defects can be removed quickly, eventually allowing features to be delivered in a more predictable, business- and value-oriented way. In particular, with agile methodologies, the definition of value is strongly driven and continuously evaluated by the direct feedback of real users early in the development process. The short intervals between deliveries are enabled by a high degree of automation and codified knowledge of the delivery process.

Continuous delivery is the ability to get changes of all types - including new features, configuration changes, bug fixes, and experiments - into production, or into the hands of users, safely and quickly in a sustainable way. It enables us to deliver software quickly and with higher quality, increasing its value for customers while allowing for experimentation with less risk. We will learn more about continuous delivery in the next units.

Introducing the twelve-factor application

A term that is often mentioned in the context of Cloud-native applications is the twelve-factor application. The twelve-factor application is a set of recommendations and best practices for building software-as-a-service applications in the Cloud, following Cloud-native development principles. Let us examine each of these factors.

The twelve-factor application defines twelve best practices for building Cloud-native services and applications, such as the recommendation to manage the codebase with source version control, or to make the dependencies of an application explicit by using, for example, dependency management tools like Maven or Node Package Manager (NPM). A number of the most fundamental guidelines behind the twelve-factor application are as follows:

  • Stateless and self-contained application processes

    The recommendation is to build applications as stateless and self-contained application processes. This allows processes to be created and disposed of with little effort, for example, to provide scalability and resilience.

  • Separation of application code and run-time configuration

    The factors suggest separating application code from run-time configuration. In this way, the consumption of external backing services, such as persistence services or external APIs, is decoupled from the application code: the application binds to these backing services only at run-time, based on configuration retrieved from its environment (see the sketch after this list).

  • Traceability and reproducibility of all changes

    The factors recommend fostering traceability and reproducibility of all changes, in particular by using source version control, making dependencies explicit, and aiming for parity between development and production environments.
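
As a small illustration of the separation of application code and run-time configuration described above, the sketch below reads the location and credentials of a hypothetical backing service from environment variables at start-up instead of hard-coding them. The variable names and fallback values are illustrative assumptions only:

```go
// Sketch of separating application code from run-time configuration: the URL
// and credentials of a hypothetical backing service are read from environment
// variables, so the same code runs unchanged in every environment.
package main

import (
	"log"
	"os"
)

// getenv returns the value of an environment variable, or a fallback for
// local development when the variable is not set.
func getenv(key, fallback string) string {
	if value, ok := os.LookupEnv(key); ok {
		return value
	}
	return fallback
}

func main() {
	// Hypothetical backing services, bound via the environment at run-time.
	dbURL := getenv("DATABASE_URL", "postgres://localhost:5432/dev")
	apiKey := os.Getenv("EXTERNAL_API_KEY")

	log.Printf("connecting to database at %s", dbURL)
	if apiKey == "" {
		log.Println("EXTERNAL_API_KEY not set; calls to the external API are disabled")
	}
	// The application code itself stays identical across environments; only
	// the configuration provided by the platform changes.
}
```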

The underlying guiding principles outlined above play an important role in building Cloud-native applications. In the next units, we will apply these guiding principles to build a full side-by-side extension to SAP S/4HANA from scratch.
