
Commit 9c9f588 (parent: 39b8098)

publishing (#18588)

* publishing
* Update ai-core-metrics.md

3 files changed: 58 additions & 34 deletions

File tree

tutorials/ai-core-metrics/ai-core-metrics.md

@@ -1,6 +1,6 @@
 ---
-title: Log and Compare Machine Learning Model Quality in SAP AI Core
-description: Explore different ways of logging model quality metrics during training and associate custom tags with the generated model for identification.
+title: Generate metrics and compare models in SAP AI Core
+description: Explore different ways of logging metrics during training and of tagging the generated model with them.
 auto_validation: true
 time: 45
 tags: [ tutorial>license, tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-ai-launchpad, software-product>sap-ai-core ]
@@ -135,7 +135,7 @@ aic_connection.metrics.log_metrics(
 )
 ```

-This reflects as shown in SAP AI Launchpad, Please zoom in to view.
+After execution, this is reflected in SAP AI Launchpad as shown; please zoom in to view.

 !![image](img/ail/basic.png)

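For orientation between the hunks above: the `log_metrics` call submits a list of simple name/value records for the execution. A minimal, SDK-free sketch of such a payload follows; the field names and helper are illustrative assumptions for this sketch, not the official SDK schema.

```python
import json
from datetime import datetime, timezone

# Illustrative sketch of the metric records that a call like
# aic_connection.metrics.log_metrics(...) submits for an execution.
# Field names here are assumptions, not the official schema.
def make_metric(name, value, step=None):
    record = {
        "name": name,
        "value": float(value),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if step is not None:
        record["step"] = step  # training step this measurement belongs to
    return record

payload = [
    make_metric("Training Loss", 0.123, step=1),
    make_metric("R2 Score", 0.87),
]
print(json.dumps(payload, indent=2))
```

The optional `step` field is what the tutorial's loop variable `i` supplies in the next step of the diff.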
@@ -163,7 +163,7 @@ The variable `i` is already present in your code to pass to the parameter `st

 [ACCORDION-BEGIN [Step 5: ](Attach metrics to generated model)]

-Add the following snippet to store metrics on step information. The parameter `value="housepricemodel"` is reference of the artifact name, which references the model that is to be stored in AWS S3. Same name (that is `housepricemodel`) to be used in your workflow.
+Add the following snippet to store metrics on step information.

 ```PYTHON
 aic_connection.metrics.log_metrics(
@@ -180,16 +180,21 @@ aic_connection.metrics.log_metrics(
 )
 ```

+The parameter `value="housepricemodel"` refers to the artifact name, which in turn references the model that will be stored in AWS S3. It is vital that this name matches the artifact name that you defined in your YAML workflow.
+
+Your code should resemble:
+
+!![image](img/ail/modelA.png)
+
 When executed, this is reflected in SAP AI Launchpad as shown:
-!![image](img/ail/model.png)
+!![image](img/ail/modelB.png)

 [DONE]
 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 5: ](Custom Metrics for model inspection)]
+[ACCORDION-BEGIN [Step 6: ](Custom Metrics for model inspection)]

-Add the following snippet to store metrics based on customized structure. The structure must be type-cast to `str` (string). Here the structure used is [**permutation feature importance**](https://scikit-learn.org/stable/modules/permutation_importance.html#permutation-importance).
-The variables `r` and `feature_importances` are already created in the template code.
+Add the following snippet to store metrics based on a customized structure.

 ```PYTHON
 aic_connection.metrics.set_custom_info(
@@ -200,6 +205,12 @@ aic_connection.metrics.set_custom_info(
 )
 ```

+The structure must be type-cast to `str` (string). Here, the structure used is [**permutation feature importance**](https://scikit-learn.org/stable/modules/permutation_importance.html#permutation-importance).
+
+The variables `r` and `feature_importances` are already created in the starter code.
+
+When executed, this is reflected in SAP AI Launchpad as shown:
+
 !![image](img/ail/custom.png)

 > ### Permutation Feature Importance
@@ -208,25 +219,28 @@ aic_connection.metrics.set_custom_info(
 >
 > What it is:
 >
-> - Indicates which feature is important relative to the model
-> - How much change in "error" (loss) can be expected in prediction of the model relative to a feature
+> - Indicates, for a given target, model, dataset and task, how much the model depends on a given feature.
+> - Gives an empirical estimate of how much loss can be attributed to the removal of a given feature.
 >
-> What it is not?
+> What it is not:
 >
-> - Importance of feature in predicting relative to all the models/ dataset in general.
-> - Measure of how good a feature is towards predicting the target.
+> - A model-, dataset- or task-agnostic indication of the importance of a given feature. While the method is agnostic, the results apply only to the specific input combination.
+> - A perfectly accurate indication of the importance of a given feature for a specific prediction. While this is the goal of the method, it does not account for weaknesses in the model.
 >
-> Advantage in using
+> Advantages:
 >
-> - Model agonist, global `explainability`.
-> - "error" can be customized, with reference to `scikit` package implementation.
+> - Model agnostic.
+> - Provides global `explainability`, meaning that it estimates each feature's importance to the prediction task.
+> - Contributes to model transparency.
+> - The method or function used to measure "error" can be customized; see the `scikit-learn` implementation for reference.

 [DONE]
 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 6: ](Tags for execution meta after training)]
+[ACCORDION-BEGIN [Step 7: ](Tags for execution metadata after training)]

-Add the following snippet to tag you execution. The `tags` are customizable key-value.
+Add the following snippet to tag your execution. The `tags` are customizable key-value pairs.

 ```PYTHON
 aic_connection.metrics.set_tags(
@@ -237,14 +251,16 @@ aic_connection.metrics.set_tags(
 )
 ```

+When executed, this is reflected in SAP AI Launchpad as shown:
+
 !![image](img/ail/tag.png)

 [DONE]
 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 7: ](Complete Files)]
+[ACCORDION-BEGIN [Step 8: ](Complete Files)]

-Check you modified `main.py` with the following expected `main.py`.
+Check your modified `main.py` by comparing it with the following expected `main.py`.

 ```PYTHON
 import os
@@ -348,15 +364,15 @@ aic_connection.metrics.set_tags(
 )
 ```

-Create `requirements.txt` with following snippet.
+Create `requirements.txt` with the following snippet.

 ```TEXT
 sklearn==0.0
 pandas
 ai-core-sdk>=1.12.0
 ```

-Create `Dockerfile` with following snippet.
+Create a file called `Dockerfile` with the following snippet. This file must not have a file extension or any other name.

 ```TEXT
 # Specify which base layers (default dependencies) to use
@@ -381,14 +397,14 @@ RUN chgrp -R 65534 /app && \
     chmod -R 777 /app
 ```

-Build Docker image and push contents to cloud.
+Build your Docker image and push it to the cloud, using the following commands in the terminal.

 ```BASH
 docker build -t <YOUR_DOCKER_REGISTRY>/<YOUR_DOCKER_USERNAME>/house-price:04 .
 docker push <YOUR_DOCKER_REGISTRY>/<YOUR_DOCKER_USERNAME>/house-price:04
 ```

-Create a AI workflow file in you GitHub repository named `hello-metrics.yaml` with following snippet. Edit with you own Docker registry secret and username.
+Paste the following snippet into a file named `hello-metrics.yaml` in your GitHub repository. Edit it with your own Docker registry secret and username. This file is your AI workflow file.

 ```YAML
 apiVersion: argoproj.io/v1alpha1
@@ -444,9 +460,9 @@ spec:
 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 8: ](Create configuration and execution)]
+[ACCORDION-BEGIN [Step 9: ](Create configuration and execution)]

-Create configuration using the following information. The information is taken from the workflow from previous step.
+Create a configuration using the following information. The information is taken from the workflow in the previous steps. For a reminder of how to create a configuration, see step 11 of [this tutorial](https://developers.sap.com/tutorials/ai-core-data.html).

 | | Value |
 | --- | --- |
@@ -457,24 +473,27 @@ Create a configuration using the following information. The information is taken f
 | Scenario ID | `learning-datalines`
 | Executable ID | `house-metrics-train`

-You can type any value for `Input Parameters` `DT_MAX_DEPTH`.
-Attach your registered artifact to `Input Artifact` `housedataset`.
+The value for the `Input Parameters` field `DT_MAX_DEPTH` is your choice. Until now, this was set using an environment variable; if no value is specified here, the parameter will continue to be defined by the environment variable.
+
+> Information: This parameter can be set to an integer, giving a maximum tree depth, or to `None`, which means that nodes are expanded until all leaves are pure or contain fewer data points than `min_samples_split`. For more information, see [the scikit-learn documentation](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html).

-Create execution from the configuration.
+Attach your registered artifact to `Input Artifact` by specifying `housedataset` for this value.
+
+Create an execution from this configuration.

 [DONE]
 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 9: ](Retrieve metrics)]
+[ACCORDION-BEGIN [Step 10: ](Retrieve metrics)]

 [OPTION BEGIN [SAP AI Launchpad]]

-Click on the `Metrics Resource` tab of your execution.
+Navigate through `ML Operations` > `Executions` > the `Metrics Resource` tab of your execution.

 !![image](img/ail/locate.png)

-For metrics tagged with the artifact name, you can also locate in the **Models** details page of the artifact.
+For metrics tagged with the artifact name, you can also find the metrics on the **Models** details page of the artifact.

 !![image](img/ail/artifact.png)

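Because `DT_MAX_DEPTH` arrives as a string from a configuration parameter or environment variable, the training code has to turn it into a value the decision tree accepts. A small illustrative sketch of that handling; the helper name is an assumption for this sketch, not code from the tutorial.

```python
import os

# Illustrative helper (not from the tutorial): turn the raw DT_MAX_DEPTH
# string into the value a decision tree's max_depth parameter expects.
def parse_max_depth(raw):
    # "None" (or an unset variable) means unlimited tree depth
    if raw is None or raw.strip() == "None":
        return None
    return int(raw)

os.environ["DT_MAX_DEPTH"] = "3"  # what the configuration would inject
max_depth = parse_max_depth(os.environ.get("DT_MAX_DEPTH"))
print(max_depth)  # 3
```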
@@ -483,12 +502,16 @@ For metrics tagged with the artifact name, you can also locate in the **Models**

 [OPTION BEGIN [Postman]]

+Navigate through `AI Core` > `lm` > `metrics` > `Get metrics` and double-check the `executionId`.
+
 !![image](img/postman/metric.png)

 [OPTION END]

 [OPTION BEGIN [SAP AI Core SDK]]

+Paste, edit and execute the following snippet:
+
 ```PYTHON
 response = ai_core_client.metrics.query(
     execution_ids = [
@@ -543,18 +566,19 @@ Unnamed: 0: 0.000 +/- 0.000

 [OPTION END]

 [DONE]

 [ACCORDION-END]

-[ACCORDION-BEGIN [Step 10: ](Compare metrics)]
+[ACCORDION-BEGIN [Step 11: ](Compare metrics (optional))]

-Create two configurations one with `DT_MAX_DEPTH = 3` and another with `DT_MAX_DEPTH = 6`. Then create execution of those configurations.
+Create two configurations: one with `DT_MAX_DEPTH = 3` and another with `DT_MAX_DEPTH = 6`. Then create executions for both of those configurations.

 !![image](img/ail/compare-1.png)

-You can then get a metric wise comparison between the execution from the two different configurations.
+You can then get a metric-wise comparison of the executions from the two different configurations.

 !![image](img/ail/compare-2.png)

 [VALIDATE_1]

 [ACCORDION-END]
Two binary image files changed (243 KB and 191 KB); diffs not shown.
