Evaluating Cloud Native and REST

Objectives

After completing this lesson, you will be able to:

  • Explain the features of cloud native.
  • Evaluate the principles of REST architecture.

Cloud Native Principles

Cloud native principles are elasticity, pricing, availability, and SLA.

In this unit, we delve into the ABAP Cloud development model (ABAP Cloud), the model with which cloud native applications can be built and run in ABAP. But as ABAP Cloud is partly inspired by the emergence of cloud computing and the cloud native paradigm, a brief discussion of both topics is warranted.

As discussed in the course S4CP01: Exploring SAP Cloud ERP, in today's business climate, companies need to adapt business processes quickly to respond to changing business conditions and changing customer demands. This need for adaptation requires applications that are scalable, sturdy, and importantly, flexible. Cloud computing environments are one way that this need is addressed. The other is cloud native.

Cloud Computing

Cloud computing is still computing, but it is designed differently from the typical on-premise data center infrastructure that IT personnel have traditionally used. With on-premise infrastructure (often referred to as an on-premise data center), the customer is responsible for installing and maintaining physical elements, such as servers and networking equipment. With cloud computing, these infrastructure components are provided by an external cloud provider.

Generally speaking, the following components are provided by the cloud provider:

  • Network
  • Servers (providing compute and memory capacity)
  • Storage
  • Operating systems and virtualization

The initial setup of these components, as well as their ongoing operations, maintenance, and upgrades, is handled by the cloud provider.

Cloud Computing Principles

These components are made available to the customer using the following principles:

  • Elasticity

    Most organizations experience peaks and valleys in resource usage. Payroll, for example, may run twice a month and during those times, extra network and server capacity is warranted. Cloud providers typically have an elasticity feature for customers as part of their offering. This way, as more resources are needed, they can be allocated and both individual and overall application performance can be maintained at desired levels.

  • Pricing

    Cloud computing components are offered to customers at an agreed upon pricing and consumption plan. This can vary from provider to provider. SAP, for example, offers various runtimes and services as part of its Platform as a Service (PaaS) offering SAP BTP, not only in a subscription-based plan, but also in two different types of consumption-based purchase plans.

  • Availability

    Returning to the payroll example mentioned previously, the need for cloud computing resources there is semi-monthly (assuming payroll runs twice per month). For other types of business processes (supply chain management processes, for example), the need varies depending on the process in question and how the company chooses to operate. Especially for larger organizations, it is fair to say that 24 hours a day, seven days a week, at least one process needs resources in order to execute. That is where availability comes into play. As part of their offering, cloud providers transparently communicate the expected availability of their components to customers. While technically not required to, almost all providers offer a 24x7 availability option, giving customers maximum flexibility when certain operations must execute continuously or at unpredictable intervals.

  • Service Level Agreement (SLA)

    Closely related to availability is the service level agreement (SLA). Whereas availability is usually expressed as a number (24x7), an SLA adds a time dimension (that is, 24x7 for 99.99% of the month). Using this example, a 30-day month translates to 43,200 minutes (30 days times 24 hours per day times 60 minutes per hour). Under a 99.99% SLA, the system may be unavailable (over the course of that month) for about four and a third minutes (0.0001 times 43,200 = 4.32).
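The SLA arithmetic above can be sketched in a few lines of Python (the 30-day month and the 99.99% figure are taken from the example in the text):

```python
# Downtime allowed by a 99.99% SLA over a 30-day month.
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
sla = 0.9999                              # 99.99% availability
allowed_downtime = minutes_per_month * (1 - sla)
print(f"{allowed_downtime:.2f} minutes")  # prints "4.32 minutes"
```

The same calculation works for any SLA figure; at 99.9%, for example, the allowed downtime grows tenfold to about 43 minutes per month.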

Is There a Difference?

While at first it may be tempting to think, "Other than the provider, it seems that there is no practical difference between cloud computing and a customer-provided data center", this is not the case. Having computing infrastructure provided by a cloud provider means that software applications should be designed to be infrastructure independent: they should run the same regardless of the specific server, operating system, or storage system the cloud provider supplies. It is entirely conceivable that a cloud provider changes its infrastructure frequently. A customer may also decide to change cloud providers (because a different provider offers superior SLA performance, for example). Moreover, the need to adapt applications quickly means that testing an application on different types of infrastructure to ensure compatibility should not be necessary; the time and cost of developing different versions of an application just so it can run on different infrastructure types are likewise a non-starter. From the point of view of the developer (and ultimately the end user), the infrastructure components used in cloud computing are an abstraction.

To ensure this abstraction, cloud computing requires a top to bottom rethinking of the programming models, tools, and technologies used for application development and maintenance. Cloud native has emerged as the result of this rethinking.

What Is Cloud Native?

Cloud native principles include infrastructure independent, microservices, and application programming interfaces.

Cloud native is, first and foremost, an approach to developing, deploying, and maintaining software applications in a cloud computing environment that enables fast application adaptability and flexibility. While the specific definition of cloud native varies from source to source, there are a few components that all definitions generally agree on and that become very important in understanding the ABAP Cloud development model:

  • Infrastructure independent
  • Microservices
  • Application Programming Interfaces (APIs)

Infrastructure Independent

As previously mentioned, the cloud provider provides the infrastructure components, which comprise a cloud computing environment (that is, network, servers (providing compute and memory capacity), storage, operating systems, and virtualization). Different cloud providers provide these resources using different techniques, configurations, brands, and so on. A cloud native application runs the same regardless of these differences and regardless of the cloud provider.

Microservices

Many cloud native programming models follow a three-layer approach to application development. The first is the user layer (sometimes referred to as the consumption layer), which is responsible for the visual rendering of the user interface that end users interact with. Second is the data layer, which is where the data that the application needs is permanently stored in some data source (normally a database). In between these two layers is the service layer. The service layer responds to requests triggered by the user layer on one side and in doing so performs operations on the data in its data source at the data layer level. These operations are commonly categorized in the following four types, which are often referred to as CRUD operations:

  • Create
  • Read
  • Update
  • Delete

While not required, these layers are often designed to run in different locations and in different types of runtime environments. Some of these locations and environments may be cloud-based, others on-premise based. This hybrid approach is not uncommon and is used by many customers. Microservices design means that each layer is implemented as its own standalone piece. As such, it can be maintained and adapted separately from the other layers while at the same time being able to communicate and coordinate with them in the context of a complete application.
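The four CRUD operations of a service layer can be sketched as a small, self-contained class, with an in-memory dictionary standing in for the data layer. The class and method names here are purely illustrative, not part of any SAP API:

```python
class AccountService:
    """Illustrative service layer exposing CRUD operations on
    'account' resources. A dict stands in for the data layer
    (normally a database)."""

    def __init__(self):
        self._store = {}    # data layer stand-in
        self._next_id = 1

    def create(self, balance):
        """Create: store a new account, return its generated ID."""
        account_id = self._next_id
        self._next_id += 1
        self._store[account_id] = {"id": account_id, "balance": balance}
        return account_id

    def read(self, account_id):
        """Read: return the account, or None if it does not exist."""
        return self._store.get(account_id)

    def update(self, account_id, balance):
        """Update: change the balance; return True if the account exists."""
        if account_id in self._store:
            self._store[account_id]["balance"] = balance
            return True
        return False

    def delete(self, account_id):
        """Delete: remove the account; return True if it existed."""
        return self._store.pop(account_id, None) is not None
```

In a microservices design, the user layer would not call these methods directly; it would reach them through an API exposed by the service layer, as described in the next section.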

Application Programming Interfaces (APIs)

The communication and coordination referred to in the previous section is carried out by the use of APIs. An API is a technique by which two pieces of software can communicate with each other and exchange or manipulate information using agreed upon data definitions and representations (these are often referred to as a protocol). For example, an app that most people have used is a banking app that allows the user to execute various functions, such as checking the balance of their account or initiating a funds transfer to pay a bill. The app can be accessed freely over the internet using the bank's Web site, or installed on a mobile phone. Either way, the design of the app is such that behind the scenes, it uses one or more APIs to perform the various banking functions requested by the app end user. Each API is designed to perform some task an app may need and is typically available "on demand". The usage is usually based on some form of a "call and response" process (that is, the app makes a call to the API and the API responds in some way).

The bank in this scenario provides the APIs (since they have legal governance of bank accounts) and at their discretion, they may make them available for third-party usage (developers who design products where they wish to add banking features, for example) as well as utilize the APIs themselves in their own app development. APIs have been around for many decades and predate cloud computing. However, the API concept has evolved to encompass the needs of cloud computing and microservices development. One of the ways this evolution has manifested itself is in the adoption of one of the most common API architectures in use today: Representational State Transfer (REST).

REST Architectural Principles

REST API Architecture: REST clients, HTTP methods, load balanced app servers

Continuing with the example of the banking app using an API to check account balances, we can learn important REST terminology. Under REST, there is a special term used to indicate the information an application may need, namely a "resource" (a bank account, in this case). This resource has a "state" (a bank account at all times has a balance) and this state can change (bank balances go up and down), but at any point in time, this state exists. In REST terminology, this state is commonly referred to as the "resource representation". This state can be asked for ("Tell me my balance, please") and even a change to this state can be initiated ("Here's a check to deposit in my account; tell me my updated balance"). Lastly, all of this communication between the app and the API happens over the internet, meaning the state must be "transferred" back and forth. And there you have it: [Resource] "Representational" "State" "Transfer".
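To make the terminology concrete, here is a minimal sketch of a resource (a bank account) whose state is serialized into a representation for transfer. JSON is a common choice of representation for REST APIs; the account number and balance are invented example values:

```python
import json

# The resource: a bank account. Its state (the balance) changes over time.
account = {"account_id": "ACCT-0001", "balance": 1250.75}  # illustrative values

# The representation: the resource's state serialized for transfer.
representation = json.dumps(account)

# The client receives the representation and reconstructs the state.
received_state = json.loads(representation)
```

The server never ships the account "itself", only a representation of its state at that moment, which is exactly what the name Representational State Transfer describes.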

Representational State Transfer Summary

  • A set of architectural constraints
  • A set of rules that developers follow when they create their API
  • A client-server architecture made up of clients, servers, and resources, with requests managed through HTTP
  • Stateless client-server communication, meaning no client information is stored between requests and each request is separate and unconnected

REST Architectural Principles

REST APIs are developed based on an architectural pattern anchored around the following principles:

  • Uniform Interface

    A resource (a bank account, in our case) is uniquely identified using a single addressing mechanism, the Uniform Resource Identifier (URI). For example, http://www.somebankserver.com/Account.

  • Client-Server

    Consistent with a microservices design approach, the client (in this case, the app) and the server (where the API is located) are separate layers communicating with each other over the internet.

  • Stateless

    A stateless transfer procedure means that the client makes a request (an app makes a bank balance check request) and when the server-hosted API transfers the balance as a response, the API will not store any information about that request. If, a split second later, the app requests a bank balance check on the same account again, the API will handle it as a brand new request, utilizing nothing from the previous request (not even being aware of it).

    The most popular application on the internet is the World Wide Web (Web), which uses the HTTP protocol to transfer information between clients and servers. As the HTTP protocol is stateless, it serves as the transfer protocol REST uses.

  • Cacheable

    To improve scalability and performance, caching can be implemented for a REST API. Certain data can be stored in such a way as to enable high-speed access, so that requests are answered more quickly. As an example, bank account balances can be served from a cache instead of being retrieved from a remote database on every request.

  • Layered System

    Intermediary systems are permitted (to perform load balancing or authentication, for example). The client may not be aware of these intermediate systems and client-server communication will not be negatively affected or compromised.

  • Code on Demand

    Servers can transfer executable code (JavaScript, for example) to the client to extend client capabilities. This is the only one of the principles that is optional.

REST and CRUD

As discussed earlier, the service layer sits between the user layer and the data layer, and mediates communication between the two. REST APIs reside at the service layer and operate as the server in client-server communications (with the user layer acting as the client). Since REST uses HTTP as its underlying transfer and communication protocol, all REST API requests are also HTTP requests. Clients use four basic HTTP methods to interact with a REST API, as follows:

  • HTTP POST to perform a create operation on a resource
  • HTTP GET to perform a read operation on a resource
  • HTTP PUT to perform an update operation on a resource
  • HTTP DELETE to delete a resource

Clients send HTTP requests based on these methods to the REST API, which in turn performs the requested operations on the applicable resources and sends HTTP responses back to the clients.
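The method-to-operation mapping can be sketched as a small dispatcher of the kind a REST API at the service layer might use. This is a bare illustration with invented names; real REST frameworks handle this routing (and full HTTP parsing) for you:

```python
# Maps the four HTTP methods to CRUD operations on an in-memory
# resource store (a stand-in for the data layer).
resources = {}

def handle_request(method, resource_id, payload=None):
    """Dispatch an HTTP method to its CRUD operation.
    Returns an (HTTP status code, response body) pair."""
    if method == "POST":                       # Create
        resources[resource_id] = payload
        return 201, payload                    # 201 Created
    if method == "GET":                        # Read
        if resource_id in resources:
            return 200, resources[resource_id] # 200 OK
        return 404, None                       # 404 Not Found
    if method == "PUT":                        # Update
        resources[resource_id] = payload
        return 200, payload
    if method == "DELETE":                     # Delete
        resources.pop(resource_id, None)
        return 204, None                       # 204 No Content
    return 405, None                           # 405 Method Not Allowed
```

Because the handler keeps no memory of previous requests (beyond the resource state itself), each call is handled independently, consistent with the Stateless principle described earlier.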

As previously mentioned, REST has become one of the more popular API architectures used as part of the cloud native approach. Cloud native applications utilizing a microservices design along with REST APIs are well suited to modern consumer expectations for apps that run on mobile devices and desktops with a sleek, modern user interface. SAP, and ABAP in particular, is not immune from those expectations. As a result, the need arose for ABAP to evolve to embrace cloud computing and cloud native concepts.
