Introduction to AI Core
Describing model inferencing operations
Onboarding SAP AI Core and SAP AI Launchpad
Configuring SAP AI Core and SAP AI Launchpad
1 hr 8 min
Quiz
Describing model training operations
Training an ML model
48 min
Quiz
Describing model inferencing operations
Serving an ML Model
47 min
Quiz
Knowledge quiz
It's time to put what you've learned to the test. Get 1 answer right to pass this unit.
1.
In which of the following ways can the costs for serving models be reduced?
Choose the correct answer.
When processing inference requests, Kubernetes allows model servers to be scaled on demand
By applying the same binary string to data used for training
By sending new data to the model every quarter
When using the Autoscale to Zero feature, inference servers are "stopped" until the next request is received
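The two scaling-related answers above can be illustrated with a configuration sketch. SAP AI Core serves models on Kubernetes, and scale-to-zero behavior of the kind described is commonly expressed in a KServe-style predictor spec by setting the minimum replica count to 0. The example below is a generic, hypothetical KServe InferenceService, not an SAP AI Core serving template; the resource name, model format, and storage URI are assumptions for illustration only.

```yaml
# Hypothetical KServe InferenceService illustrating Autoscale to Zero:
# with minReplicas set to 0, idle model servers are "stopped" and a
# new replica is started when the next inference request arrives.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: demo-model            # hypothetical name
spec:
  predictor:
    minReplicas: 0            # allow scale-to-zero when idle
    maxReplicas: 3            # scale out on demand under load
    model:
      modelFormat:
        name: sklearn         # assumed model format
      storageUri: "s3://my-bucket/model"  # hypothetical model location
```

Because no replicas run while the service is idle, compute costs accrue only while requests are being processed, which is the cost-reduction mechanism the quiz answers describe.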