endpoint = "s3.eu-central-1.amazonaws.com", # Change this
bucket = "asd-11111111-2222-3333-4444-55555555555", # Change this
region = "eu-central-1", # Change this
data = {
You should see the following response:
> Note that depending on your region, your AWS endpoint syntax may differ from the example above. In the event of an error, try this step again with alternative syntax. For available syntaxes, please see the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.html)
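Both spellings mentioned in the note can be built from the region name; the short sketch below is illustrative only, and which form your account needs depends on the region, so check the AWS documentation linked above:

```python
# Candidate S3 endpoint spellings for this tutorial's region (illustrative).
region = "eu-central-1"
candidates = [
    f"s3.{region}.amazonaws.com",  # dot-delimited form, used earlier in this tutorial
    f"s3-{region}.amazonaws.com",  # dash-delimited form accepted in some regions
]
print(candidates)
```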
### Create workflow to serve your model
Save the following executable file in your local system:
1. Ensure that your `resourcePlan` is set to `infer.s`. This will enable the GPU node in deployment. Find all the available resource plans [here](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/57f4f19d9b3b46208ee1d72017d0eab6.html).
2. Replace `docker-registry-secret` with the name of your docker registry secret. You can create and use multiple docker secrets in SAP AI Core. [See how to create docker registry secret](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/b29c7437a54f46f39c911052b05aabb1.html).
3. Set your docker image URL.
Save your executable.
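The three edits above all land in the serving workflow's template. The trimmed YAML fragment below is a hedged sketch of where such fields typically sit in an SAP AI Core serving template; the exact field placement and the image name are assumptions, so follow your tutorial's actual YAML file:

```yaml
spec:
  template:
    metadata:
      labels: |
        ai.sap.com/resourcePlan: infer.s    # step 1: resource plan
    spec: |
      predictor:
        imagePullSecrets:
          - name: docker-registry-secret    # step 2: your docker registry secret
        containers:
          - name: kserve-container
            # step 3: your docker image URL (placeholder shown)
            image: "docker.io/YOUR_DOCKER_USERNAME/YOUR_IMAGE_NAME:0.0.1"
```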
### Sync workflow with SAP AI Core
You will create a folder in your GitHub repository connected to SAP AI Core, where you will store the workflow (executable). You will then register this folder as an **Application** in SAP AI Core to enable syncing of the workflow as an executable.
> You can create multiple **Applications** in SAP AI Core for syncing multiple folders. This helps you organize separate folders storing workflow YAML files for separate use cases.
1. Create a folder named `tutorial-tf-text-clf` in your GitHub repository connected to SAP AI Core. Place the following workflows inside it:
<!-- border -->
2. Edit and execute the code below to create an **Application** and sync the folder `tutorial-tf-text-clf`.
```PYTHON[4]
response = ai_core_client.applications.create(
    application_name = "tf-clf-app",
    revision = "HEAD",
    repository_url = "https://github.com/YOUR_GITHUB_USERNAME/YOUR_REPO_NAME", # Change this
    path = "tutorial-tf-text-clf"
)

print(response.__dict__)
```
You should then see:
<!-- border -->
3. Verify your workflow sync status, using the following code:
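    The status-check code itself is not visible in this excerpt. As a stand-in, the sketch below shows the polling pattern with a stubbed status callable; with the AI Core SDK, `get_status` would wrap a call such as `ai_core_client.applications.get_status(application_name="tf-clf-app")` (method name assumed — check your SDK version for the exact API).

    ```python
    import time

    def wait_until_synced(get_status, retries=5, delay=2.0):
        """Poll a status callable until it reports 'Synced' (illustrative helper)."""
        for _ in range(retries):
            if get_status() == "Synced":
                return True
            time.sleep(delay)
        return False

    # Stubbed usage: the fake application reports 'Synced' on the second poll.
    states = iter(["Progressing", "Synced"])
    print(wait_until_synced(lambda: next(states), delay=0))  # True
    ```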
After your workflows are synced, your **Scenario** will be created automatically in SAP AI Core. The name and ID of the scenario will be the same as those mentioned in your workflows. After syncing, your workflow will be recognized as an executable.
### Register model as artifact
Follow the steps to upload the files downloaded in step two as a docker image.
<!-- border -->
```PYTHON
def predict() -> str:
    """
    Perform an inference on the model created in initialize

    Returns:
        String prediction of the label for the given test data
    """
    text_process = app.config['text_process']
    model = app.config['model']

    input_data = dict(call_request.json)
    text = str(input_data['text'])

    app.logger.info(f'Requested text: {text}')
    prediction = model.predict(
        text_process.pre_process(text)
    )

    app.logger.info(f"Prediction: {prediction}")

    return text_process.post_process(prediction)


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=9001)


"""
To run and debug locally:

1. Install
    - flask
    - scikit-learn
    - tensorflow==2.10.0

2. Run the server from the project's root directory.
   As an alternative, the server can be run like this:
    $ export SERVE_FILES_PATH=../tf_files && gunicorn --chdir server serve:app -b 0.0.0.0:9001

3. Query the endpoint
    $ curl --location --request POST 'http://localhost:9001/v1/predict' --header 'Content-Type: application/json' --data-raw '{"text": "A restaurant with great ambiance"}'
"""
```
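The same request the curl command sends can be sketched in Python. The HTTP call is left commented because it assumes the server is already running on port 9001; only the JSON payload is built here:

```python
import json

# Payload matching the curl example above.
payload = {"text": "A restaurant with great ambiance"}
body = json.dumps(payload)
print(body)

# To actually call the local server (assumes it is running on port 9001):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:9001/v1/predict",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```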