AI requires significant computational power, agility for frequent model updates, and a flexible infrastructure to quickly activate and deploy use cases. Modern AI relies on large, pre-trained models that run on scalable cloud infrastructure such as GPU clusters, so AI innovation and the cloud go hand in hand. Furthermore, the overall experience is embedded deeply, out of the box, into SAP's cloud solutions. This is why SAP's AI innovations based on generative AI will only be available in the cloud. On-premise systems present challenges to AI adoption: scaling systems to meet growing AI demand and compute requirements is costly and time-consuming, significant resources and infrastructure are required to grow and maintain hardware and software, and on-premise data silos and structures limit effective data mining and thereby the effectiveness of AI.
Customers are already making this choice themselves: of the 27,000 customers using SAP's AI capabilities today, only about 1% use narrow AI on premise.
Embedded AI Versus Custom Extensions
The following table summarizes the differences between embedded AI and custom extensions.
| Topic | Embedded AI | Custom Extension |
|---|---|---|
| Definition | In an embedded approach, AI functionality is incorporated directly into SAP business applications. | Building AI capabilities involves developing custom AI solutions on SAP BTP, tailored to the specific needs of the business. |
| Characteristics | | |
| Cost and Resources | Embedded solutions are simpler and more straightforward, but have customization limitations. | Building custom solutions provides maximum flexibility, but demands resources and expertise. |
| Implementation | Quick(er) implementation cycles | Dependent on IT infrastructure and SAP BTP integration |
Business AI capabilities need to be directly embedded into applications and extensions. Designed with security, governance, and trust in mind, the AI Foundation on SAP BTP is our new one-stop shop for developers to do exactly that. It provides ready-to-use AI services, AI runtimes, lifecycle management, and tooling for generative AI capabilities and business-data connectivity. As part of the AI Foundation, our generative AI hub provides instant access to powerful large language models (LLMs), such as those available through Azure OpenAI Service and Falcon-40b. In addition, it offers grounding capabilities based on enterprise data to ensure context - for example, by leveraging our new SAP HANA Cloud vector engine, which combines LLMs with relevant organizational data.
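To make the grounding idea concrete, here is a minimal, self-contained sketch of the retrieval step: rank enterprise documents by vector similarity to a question and prepend the best match to the prompt sent to an LLM. The toy bag-of-words embedding and the function names are illustrative assumptions, not SAP APIs; in a real landscape the embeddings would come from an embedding model and the similarity search would run in the SAP HANA Cloud vector engine.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" for illustration only; a production
    # system would call an embedding model and store the vectors in a
    # vector store such as the SAP HANA Cloud vector engine.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def ground_prompt(question: str, documents: list[str], top_k: int = 1) -> str:
    # Retrieve the most similar enterprise documents and prepend them to
    # the question, so the LLM answers with organizational context.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "Travel expenses above 500 EUR require manager approval.",
    "New hires receive laptops within five business days.",
]
print(ground_prompt("What is the approval limit for travel expenses?", docs))
```

The grounded prompt pulls the expense-policy document into the context, which is the essential pattern: the model's answer is anchored to retrieved enterprise data rather than its training data alone.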
Whenever off-the-shelf capabilities are available, they are almost always the more economical choice compared with building those capabilities yourself. Where you encounter white spaces not covered by SAP, it is certainly a viable option to assess the development of bespoke solutions.
According to Gartner, by 2028, more than 50% of enterprises that have built large AI models from scratch will abandon their efforts due to costs, complexity and technical debt in deployments.