tutorials/ai-core-data/ai-core-data.md (+38 −5)
@@ -28,7 +28,7 @@ By the end of the tutorial you will have two models trained on two different dat
[ACCORDION-BEGIN [Step 1: ](Modify AI code)]
-Create a new directory named `hello-aicore-data`.
+Create a new directory named `hello-aicore-data`. The code differs from the [previous tutorial](https://developers.sap.com/tutorials/ai-core-code.html/#) in that it reads data from folders (volumes, virtual storage spaces). The content of these volumes is loaded dynamically during the execution of workflows.
Create a file named `main.py`, and paste the following snippet there:
Your code reads the data file `train.csv` from the location `/app/data`, which you will prepare in a later step. It also reads the hyperparameter `DT_MAX_DEPTH` from the environment variables. When generated, your model is stored in the location `/app/model/`. You will also learn how to transport this generated model from SAP AI Core to your own cloud storage.
+> **Recommendation**: Although the dataset file `train.csv` is not present yet, it will be copied dynamically during execution to the volume mounted at `/app/data`. It is recommended to pass the filename (`train.csv`) to your code through an environment variable, so that if your dataset filename changes you can set it dynamically.
+
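The `main.py` snippet itself is not part of this hunk. Below is a minimal sketch of what such a script could look like, assuming a scikit-learn decision tree. The `DATA_SOURCE` variable name and the `target` column are illustrative assumptions; `/app/data`, `/app/model/`, and `DT_MAX_DEPTH` come from the text above.

```PYTHON
import os
import pickle

import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Hypothetical variable holding the dataset filename, per the recommendation above.
data_file = os.environ.get("DATA_SOURCE", "train.csv")

# /app/data is the volume path that is prepared later in this tutorial.
df = pd.read_csv(os.path.join("/app/data", data_file))

# Hyperparameters arrive as environment variables of type string,
# so they must be type cast before use.
max_depth = int(os.environ["DT_MAX_DEPTH"])

# "target" is a placeholder column name; use the label column of your dataset.
X, y = df.drop(columns=["target"]), df["target"]
model = DecisionTreeRegressor(max_depth=max_depth)
model.fit(X, y)

# /app/model is where the generated model is stored for later transport.
with open("/app/model/model.pkl", "wb") as f:
    pickle.dump(model, f)
```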
Create a file named `requirements.txt` as shown below. If you don't specify a particular version for a package, as shown here for `pandas`, the latest version is fetched automatically.
@@ -164,14 +166,19 @@ spec:
- "python /app/src/main.py"
```
-### Understanding your workflow.
+### Understanding changes in your workflow
+
+This change to your workflow creates a placeholder through which you can specify a data path (volume) to the container (the Docker image during execution).
1. A placeholder named `housedataset` is created.
2. You specify the **kind of artifact** that the placeholder can accept. **Artifacts** are covered in detail later in this tutorial.
3. You use a placeholder to specify the **path that you created in your Dockerfile**, which is where you will copy files to your Docker image.
+#### Why do we need to create placeholders?
+SAP AI Core only uses your workflows as an interface, so it is unaware of the volumes/attachments specified in your Docker image. The data path is specified in your Dockerfile and given a placeholder in your workflow; the data is then expected there by the Docker image.
+
[DONE]
[ACCORDION-END]
@@ -210,6 +217,7 @@ spec:
  - - name: mypredictor
      template: mycodeblock1
  - name: mycodeblock1
+   # Add your resource plan here. The annotation should follow metadata > labels > ai.sap.com/resourcePlan: <plan>
    inputs:
      artifacts: # placeholder for cloud storage attachments
      - name: housedataset # a name for the placeholder
@@ -228,7 +236,7 @@ The following shows the new important lines in the workflows.
-##*#Understanding these changes
+#### Understanding these changes
1. A placeholder named `DT_MAX_DEPTH` is created locally in the workflow. It accepts numeric content with input type `string`. The input is type cast to an integer elsewhere in your code, so it must be a string containing an integer. For example, ```"4"``` is acceptable because it is a string whose content can be type cast to an integer.
2. You create an input `env` (environment) variable for your Docker image, named `DT_MAX_DEPTH`. The value of this variable is fed in from `workflow.parameters.DT_MAX_DEPTH`, the local name from the previous point.
@@ -239,6 +247,29 @@ Commit the changes in GitHub.
[ACCORDION-END]
+[ACCORDION-BEGIN [Step 1: ](Set resource plan)]
+
+Add the following snippet to your workflow to specify a resource plan. The resource plan specifies the computing resources (GPU, RAM, and processor) required to run your Docker image. If not specified, the resource plan defaults to `starter`, which is the entry-level [resource plan](https://help.sap.com/docs/AI_CORE/2d6c5984063c40a59eda62f4a9135bee/57f4f19d9b3b46208ee1d72017d0eab6.html?locale=en-US).
+
+```BASH[6-8]
+spec:
+  ...
+  templates:
+    ...
+    - name: mycodeblock1
+      metadata:
+        labels:
+          ai.sap.com/resourcePlan: starter
+      ...
+```
+
+**INFORMATION**: You can always verify the computing resources allocated by running `echo $(lscpu)` within your Docker image. `lscpu` is a Linux shell command that prints the system configuration.
+
+[DONE]
+[ACCORDION-END]
+
[ACCORDION-BEGIN [Step 4: ](Observe your scenario and placeholder)]
[OPTION BEGIN [SAP AI Launchpad]]
@@ -358,7 +389,7 @@ You need to create AWS S3 object store, using one of the following links:
Download and Install the [AWS Command Line Interface (CLI)](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
-Open your terminal and run:
+To configure settings for your AWS CLI, open your terminal and run:
```BASH
aws configure
@@ -368,6 +399,8 @@ aws configure
Enter your AWS credentials. Note that the appearance of the screen will not change as you type. You can leave the `Default output format` entry blank. Press **enter** to submit your credentials.
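For orientation, a typical `aws configure` session looks roughly like this; the key values and region shown are placeholders, not real credentials:

```BASH
$ aws configure
AWS Access Key ID [None]: AKIA************
AWS Secret Access Key [None]: ****************************
Default region name [None]: eu-central-1
Default output format [None]:
```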
+Your credentials are stored on your system and used by the AWS CLI to interact with AWS. For more information, see [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html).
+
[DONE]
[ACCORDION-END]
@@ -506,7 +539,7 @@ With your object store secret created, you can now reference any sub-folders to
[OPTION BEGIN [Postman]]
-Create an artifact for train.csvthat we uploaded to jan folder, by clicking **POST Register artifact** and using the **body** underneath.
+Create an artifact for the `train.csv` file that we uploaded to the `jan` folder, by clicking **POST Register artifact** and using the body underneath.
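The request body itself is not shown in this hunk. As a rough sketch of the same registration performed directly against the AI API with Python (the host, token, scenario ID, and object store secret name are placeholders to substitute with your own values):

```PYTHON
import requests

AI_API_URL = "https://<your-ai-api-host>"  # from your SAP AI Core service key
TOKEN = "<bearer-token>"                   # OAuth token for your instance

# Field names follow the AI API artifact schema; values are placeholders.
body = {
    "name": "house-price-train-data",
    "kind": "dataset",
    "url": "ai://<object-store-secret>/jan",  # the jan folder from this tutorial
    "scenarioId": "<your-scenario-id>",
}

response = requests.post(
    f"{AI_API_URL}/v2/lm/artifacts",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "AI-Resource-Group": "default",  # assumption: the default resource group
    },
    json=body,
)
print(response.status_code, response.json())
```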
tutorials/ai-core-deploy/ai-core-deploy.md (+3 −1)
@@ -102,11 +102,13 @@ if __name__ == "__main__":
#### Understanding your code
Where should you load your model from?
+
- Your code reads files from folder `/mnt/models`. This folder path is hard-coded in SAP AI Core, and cannot be modified.
- Later, you will dynamically place your model file in the path `/mnt/models`.
- You may place multiple files inside `/mnt/models` as part of your model. These files may have multiple formats, such as `.py` or `.pickle`; however, you should not create sub-directories within it. A minimal loading sketch follows this list.
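A minimal loading sketch, assuming the model was pickled as `model.pkl` during training (the filename is an assumption; match whatever you stored):

```PYTHON
import os
import pickle

# /mnt/models is the hard-coded path where SAP AI Core places your model files.
MODEL_PATH = "/mnt/models"

with open(os.path.join(MODEL_PATH, "model.pkl"), "rb") as f:
    model = pickle.load(f)
```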
Which serving engine to use?
+
- Your code uses Flask to create a server; however, you may use another Python library if you prefer.
- Your format for prediction REST calls will depend on the implementation of this deployment server.
- You implement the endpoint `/v2/predict` to make predictions. You may modify the endpoint name and format, but each endpoint must have the prefix `/v<NUMBER>`. For example, if you want an endpoint that greets your server, implement it as `/v2/greet` or `/v1/greet`. A Flask sketch follows this list.
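A minimal Flask sketch of such an endpoint, assuming a pickled scikit-learn model and a request body of the form `{"features": [[...], ...]}`; both are illustrative assumptions, not the tutorial's exact implementation:

```PYTHON
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumption: model.pkl was placed in /mnt/models, as described above.
with open("/mnt/models/model.pkl", "rb") as f:
    model = pickle.load(f)

# The /v<NUMBER> prefix is required; the rest of the route is your choice.
@app.route("/v2/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    prediction = model.predict(payload["features"]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # Port is an assumption; match the port exposed by your serving configuration.
    app.run(host="0.0.0.0", port=9001)
```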
@@ -442,7 +444,7 @@ Switching between deployed models means that you can update the model used in yo
[OPTION BEGIN [SAP AI Launchpad]]
-To create a new configuration, click **ML Operations > Configuration > Create**. Enter the following details amd click **Next**.
+To create a new configuration, click **ML Operations > Configuration > Create**. Enter the following details and click **Next**.
tutorials/ai-core-helloworld/ai-core-helloworld.md (+1 −1)
@@ -217,7 +217,7 @@ The code first takes a public [docker image of python](https://hub.docker.
> ### What is a Docker Image?
-> A Docker Image is a portable Linux environment, similar to a virtual machine. Docker images are layered environments, which means you may just have Linux OS (for example Distrom) as one Docker image or another Docker image which has python layered on top of that Linux.
+> A Docker Image is a portable Linux environment, similar to a virtual machine. Docker images are layered environments, which means you may have just a Linux OS (for example `Distrom`) as one Docker image, or another Docker image that has Python layered on top of that Linux.
>
> While the code in this tutorial is written directly in the workflow, in actual production you will store the code scripts within your Docker image. The number of code files and the programming language are up to you.