`tutorials/ai-core-code/ai-core-code.md` (1 addition, 1 deletion)
@@ -246,7 +246,7 @@ print(response.__dict__)
[OPTION END]
1. **Name**: Enter `credstutorialrepo`. This becomes an identifier for your Docker credentials within SAP AI Core; this value is your Docker registry secret name.
-2. **URL**: If you used your organization's Docker registry, use its URL; otherwise, enter `docker.io`.
+2. **URL**: If you used your organization's Docker registry, use its URL; otherwise, enter `https://index.docker.io`.
3. **Username**: Your Docker username.
4. **Access Token**: The access token generated previously in your Docker account settings.
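The four fields above end up in a Docker registry secret. As a hedged sketch of what that secret carries, the snippet below assembles credentials in the generic `.dockerconfigjson` form that Docker registry secrets commonly use; the field names are an assumption based on that generic format, not an exact SAP AI Core schema, and all values are placeholders.

```python
import base64
import json

# Placeholder values -- substitute your own (these are assumptions, not real credentials).
registry_url = "https://index.docker.io"
username = "your-docker-username"
access_token = "your-docker-access-token"

# Docker registry secrets typically carry a ".dockerconfigjson" document
# keyed by registry URL, with a base64-encoded "user:token" auth string.
auth = base64.b64encode(f"{username}:{access_token}".encode()).decode()
docker_config = {
    "auths": {
        registry_url: {
            "username": username,
            "password": access_token,
            "auth": auth,
        }
    }
}

# Sketch of the secret body, using the Name entered above as the secret name.
secret_body = {
    "name": "credstutorialrepo",
    "data": {".dockerconfigjson": json.dumps(docker_config)},
}
print(json.dumps(secret_body, indent=2))
```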
`tutorials/ai-core-data/ai-core-data.md` (7 additions, 7 deletions)
@@ -74,7 +74,7 @@ Your code reads the data file `train.csv` from the location `/app/data`. This da
-Create the file `requirement.txt` as shown below. If you don't pin a package to a specific version (for example, `pandas` with no version given), the latest version of the package is fetched automatically.
+Create the file `requirements.txt` as shown below. If you don't pin a package to a specific version (for example, `pandas` with no version given), the latest version of the package is fetched automatically.
```TEXT
sklearn==0.0
@@ -110,7 +110,7 @@ RUN chgrp -R 65534 /app && \
-> **IMPORTANT** Your `Dockerfile`createS empty folders to store your datasets and models (example above `/app/data` and `/app/model/` ). Contents from cloud storage will be copied to and from these folders later. If you place any contents in these folders during time of Docker image building, then the contents will be overwritten.
+> **IMPORTANT** Your `Dockerfile` creates empty folders to store your datasets and models (in the example above, `/app/data` and `/app/model/`). Contents from cloud storage will later be copied to and from these folders. Any contents you place in these folders while the Docker image is being built will be overwritten.
Build and upload your Docker image to the Docker repository, using the following code in the terminal.
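The build-and-push step might look like the following shell sketch. The Docker username and image tag are placeholders you must substitute, and the `docker` calls are guarded so the sketch degrades gracefully where Docker is not installed.

```shell
# Placeholder values -- substitute your own Docker ID and preferred image tag.
DOCKER_USER="your-docker-username"
IMAGE="docker.io/${DOCKER_USER}/house-price:01"

if command -v docker >/dev/null 2>&1; then
    docker build -t "$IMAGE" .    # build from the Dockerfile in the current directory
    docker push "$IMAGE"          # upload the image to the registry
else
    echo "docker not found; would build and push $IMAGE"
fi
```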
[ACCORDION-BEGIN [Step 2: ](Create placeholders for datasets in workflows)]
-Create a pipeline (yaml file) named `house-price-train.yaml` in your GitHub repository. You may use the existing GitHub path which is already tracked (auto-synced) by your SAP AI Core application.
+Create a pipeline (YAML file) named `house-price-train.yaml` in your GitHub repository. You may use the existing GitHub path which is already tracked (auto-synced) by your SAP AI Core application.
```YAML
apiVersion: argoproj.io/v1alpha1
@@ -696,11 +696,11 @@ Copy the artifact ID of the January dataset. You will use this value in the plac
[OPTION BEGIN [SAP AI Launchpad]]
-Click through **ML Operations** > **Configuration** > **Create**. Enter the following details as shown in the image below. Click **Next**
+Click through **ML Operations** > **Configuration** > **Create**. Enter the following details as shown in the image below. Click **Next**.
-The field for `DT_MAX_DEPTH` allows your to use the configuration to pass values to placeholders of hyper-parameters that your prepared earlier in your workflows. In this case, type `3`. Click **Next**.
+The `DT_MAX_DEPTH` field allows you to use the configuration to pass values to the hyper-parameter placeholders that you prepared earlier in your workflows. In this case, type `3`. Click **Next**.
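A configuration like the one above pairs a hyper-parameter binding (`DT_MAX_DEPTH`) with an input artifact binding for the dataset. As a hedged sketch, the request body it produces might look like the following; the key names follow the general AI API convention, but the artifact ID, scenario ID, executable ID, and binding key are all placeholder assumptions, not values from this tutorial.

```python
import json

# Placeholder identifiers -- the artifact ID comes from listing your artifacts;
# the scenario/executable IDs come from your workflow. All are assumptions here.
jan_artifact_id = "0a1b2c3d-example-artifact-id"

configuration = {
    "name": "house-price-jan-config",
    "scenarioId": "example-scenario",        # hypothetical scenario ID
    "executableId": "example-pipeline",      # hypothetical executable ID
    # Passes the value 3 to the DT_MAX_DEPTH hyper-parameter placeholder.
    "parameterBindings": [
        {"key": "DT_MAX_DEPTH", "value": "3"},
    ],
    # Binds the jan dataset artifact to a dataset placeholder in the workflow.
    "inputArtifactBindings": [
        {"key": "housedataset", "artifactId": jan_artifact_id},
    ],
}
print(json.dumps(configuration, indent=2))
```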
@@ -749,7 +749,7 @@ Use the artifact ID of the `jan` dataset and the placeholder names to create a c
[OPTION BEGIN [SAP AI Core SDK]]
-Paste and edit the code snippet. The key value pair for `DT_MAX_DEPTH` allows your to use the configuration to pass values to placeholders of hyper-parameters that your prepared earlier in your workflows. In this case, type `3`.
+Paste and edit the code snippet. The key-value pair for `DT_MAX_DEPTH` allows you to use the configuration to pass values to the hyper-parameter placeholders that you prepared earlier in your workflows. In this case, type `3`. You should locate your `jan` dataset artifact ID by listing all artifacts and use the relevant ID.
```PYTHON
from ai_api_client_sdk.models.parameter_binding import ParameterBinding
@@ -1029,7 +1029,7 @@ Paste and edit the snippet below. You should locate your `feb` dataset artifact
[OPTION BEGIN [SAP AI Core SDK]]
-Paste and edit the snippet below. You should locate your `feb` dataset artifact ID by listing all artifacts and using the relevant ID.
+Paste and edit the snippet below. You should locate your `feb` dataset artifact ID by listing all artifacts and use the relevant ID.
```PYTHON
from ai_api_client_sdk.models.parameter_binding import ParameterBinding