Commit c76a67a

Merge pull request #23466 from gianfranco-s/fix_ai-core-tensorflow-byod
ai-core-tensorflow-byod - Fix broken steps
2 parents bddefc0 + 5034ad7 commit c76a67a

8 files changed: 227 additions & 249 deletions

File tree

tutorials/ai-core-tensorflow-byod/ai-core-tensorflow-byod.md

Lines changed: 66 additions & 68 deletions
@@ -140,7 +140,7 @@ response = ai_core_client.object_store_secrets.create(
     type = "S3",
     name = "my-s3-secret1",
     path_prefix = "movie-clf",
-    endpoint = "s3-eu-central-1.amazonaws.com", # Change this
+    endpoint = "s3.eu-central-1.amazonaws.com", # Change this
     bucket = "asd-11111111-2222-3333-4444-55555555555", # Change this
     region = "eu-central-1", # Change this
     data = {
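The fix above swaps the legacy dash-style S3 endpoint (`s3-eu-central-1.amazonaws.com`) for the dot-style regional endpoint. A minimal helper that builds the dot-style form from a region name — the function name is hypothetical, not part of the tutorial:

```python
def s3_regional_endpoint(region: str) -> str:
    # Regional S3 endpoints use "s3.<region>", not "s3-<region>"
    return f"s3.{region}.amazonaws.com"

print(s3_regional_endpoint("eu-central-1"))  # s3.eu-central-1.amazonaws.com
```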
@@ -159,6 +159,71 @@ You should see the following response:
 > Note that depending on your region, your AWS endpoint syntax may differ from the example above. In the event of an error, try this step again with alternative syntax. For available syntaxes, please see the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.html)
 
 
+### Create workflow to serve your model
+
+Save the following executable file in your local system:
+
+| Filename | Download link |
+| -------- | ------------- |
+| `serving_executable.yaml` | [LINK](https://raw.githubusercontent.com/sap-tutorials/Tutorials/master/tutorials/ai-core-tensorflow-byod/files/workflow/serving_executable.yaml) |
+
+In the executable, ensure the following.
+
+<!-- border -->![img](img/acs/6_0.png)
+
+1. Ensure that your `resourcePlan` is set to `infer.s`. This enables the GPU node in deployment. Find all the available resource plans [here](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/57f4f19d9b3b46208ee1d72017d0eab6.html).
+
+2. Replace `docker-registry-secret` with the name of your Docker registry secret. You can create and use multiple Docker secrets in SAP AI Core. [See how to create a Docker registry secret](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/b29c7437a54f46f39c911052b05aabb1.html).
+
+3. Set your Docker image URL.
+
+Save your executable.
+
+### Sync workflow with SAP AI Core
+
+You will create a folder in the GitHub repository connected to SAP AI Core, where you will store the workflow (executable). You will then register this folder as an **Application** in SAP AI Core to enable syncing of the workflow as an executable.
+
+> You can create multiple **Applications** in SAP AI Core to sync multiple folders. This helps you organize separate folders of workflow YAML files for separate use cases.
+
+1. Create a folder named `tutorial-tf-text-clf` in your GitHub repository connected to SAP AI Core. Place the following workflows inside it:
+
+<!-- border -->![img](img/acs/6_1.png)
+
+2. Edit and execute the code below to create an **Application** and sync the folder `tutorial-tf-text-clf`.
+
+```PYTHON[4]
+response = ai_core_client.applications.create(
+    application_name = "tf-clf-app",
+    revision = "HEAD",
+    repository_url = "https://github.com/YOUR_GITHUB_USERNAME/YOUR_REPO_NAME", # Change this
+    path = "tutorial-tf-text-clf"
+)
+
+print(response.__dict__)
+```
+
+You should then see:
+
+<!-- border -->![img](img/acs/6_2.png)
+
+3. Verify your workflow sync status using the following code:
+
+```PYTHON
+response = ai_core_client.applications.get_status(application_name = 'tf-clf-app')
+
+print(response.__dict__)
+print('*' * 80)
+print(response.sync_ressources_status[0].__dict__)
+```
+
+You should then see:
+
+<!-- border -->![img](img/acs/6_3.png)
+
+After your workflows are synced, your **Scenario** will be automatically created in SAP AI Core. The name and ID of the scenario will be the same as the ones mentioned in your workflows. After syncing, your workflow will be recognized as an executable.
+
 ### Register model as artifact
 
 
@@ -261,73 +326,6 @@ Follow the steps to upload the files downloaded in step two as a docker image.
 <!-- border -->![img](img/docker-push.png)
 
 
-
-### Create workflow to serve your model
-
-
-Save the following executable file in your local system:
-
-| Filename | Download link |
-| -------- | ------------- |
-| `serving_executable.yaml` | [LINK](https://raw.githubusercontent.com/sap-tutorials/Tutorials/master/tutorials/ai-core-tensorflow-byod/files/workflow/serving_executable.yaml) |
-
-In the executable, ensure the following.
-
-<!-- border -->![img](img/acs/6_0.png)
-
-1. Ensure that your `resourcePlan` is set to `infer.s`. This will enable the GPU node in deployment. Find all the available resource plans [here](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/57f4f19d9b3b46208ee1d72017d0eab6.html).
-
-2. Replace `docker-registry-secret` with the name of your docker registry secret. You can create and use multiple docker secrets in SAP AI Core. [See how to create docker registry secret](https://help.sap.com/viewer/2d6c5984063c40a59eda62f4a9135bee/LATEST/en-US/b29c7437a54f46f39c911052b05aabb1.html).
-
-3. Set your docker image URL.
-
-Save your executable.
-
-
-### Sync workflow with SAP AI Core
-
-
-You will create a folder in your GitHub repository connected to SAP AI Core, where you will store the workflow (executable). You will then register this folder as an **Application** in SAP AI Core to enable syncing of the workflow as an executable.
-
-> You can create multiple **Applications** in SAP AI Core for syncing multiple folders. This helps you organize separate folders for storing workflow YAML files for separate use cases.
-
-1. Create a folder named `tutorial-tf-text-clf` in your GitHub repository connected to SAP AI Core. Place the following workflows inside it:
-
-<!-- border -->![img](img/acs/6_1.png)
-
-2. Edit and execute the code below to create an **Application** and sync the folder `tutorial-tf-text-clf`.
-
-```PYTHON[4]
-response = ai_core_client.applications.create(
-    application_name = "tf-clf-app",
-    revision = "HEAD",
-    repository_url = "https://github.com/YOUR_GITHUB_USERNAME/YOUR_REPO_NAME", # Change this
-    path = "tutorial-tf-text-clf"
-)
-
-print(response.__dict__)
-```
-
-You should then see:
-
-<!-- border -->![img](img/acs/6_2.png)
-
-3. Verify your workflow sync status, using the following code:
-
-```PYTHON
-response = ai_core_client.applications.get_status(application_name = 'tf-clf-app')
-
-print(response.__dict__)
-print('*'*80)
-print(response.sync_ressources_status[0].__dict__)
-```
-
-You should then see:
-
-<!-- border -->![img](img/acs/6_3.png)
-
-After your workflows are synced, your **Scenario** will be automatically created in SAP AI Core. The name and ID of the scenario will be the same as the ones mentioned in your workflows. After syncing, your workflow will be recognized as an executable.
-
-
 ### Create configuration for deployment
 

tutorials/ai-core-tensorflow-byod/files/infer/Dockerfile

Lines changed: 2 additions & 2 deletions

@@ -1,9 +1,9 @@
-FROM tensorflow/tensorflow:latest-gpu
+FROM tensorflow/tensorflow:2.10.0-gpu-jupyter
 
 ENV LANG C.UTF-8
 
 COPY requirements.txt ./requirements.txt
-RUN pip3 install -r requirements.txt
+RUN pip3 install --ignore-installed -r requirements.txt
 
 ENV SERVE_FILES_PATH=/mnt/models

Lines changed: 3 additions & 4 deletions

@@ -1,4 +1,3 @@
-scikit-learn==0.24.2
-joblib==1.2.0
-Flask==2.3.2
-gunicorn==20.1.0
+scikit-learn
+Flask
+gunicorn
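Unpinning the requirements means the versions pip resolves can drift between image builds. One way to make the drift visible is to log the resolved versions at startup; a standard-library-only sketch, where the package list mirrors the new requirements file:

```python
from importlib import metadata

def resolved_versions(packages):
    """Report the version actually installed for each package, or None if absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None  # not installed in this environment
    return versions

print(resolved_versions(["scikit-learn", "Flask", "gunicorn"]))
```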
Lines changed: 47 additions & 56 deletions

@@ -1,77 +1,68 @@
-# -*- coding: utf-8 -*-
-"""
-Inference script that extends from the base infer interface
-"""
-import os
-from os.path import exists
-from joblib import load
 import logging
 
 from flask import Flask
 from flask import request as call_request
 
-from tf_template import Model, TextProcess
+from tf_template import Model, TextProcess, AVAILABLE_GPUS
 
 app = Flask(__name__)
+app.logger.setLevel(logging.INFO)
+# app.logger.addHandler(logging.FileHandler('server.log')) # Uncomment to save logs to file `server.log`
 
-text_process = None
-model = None
-
-@app.before_first_request
-def init():
-    """
-    Load the model if it is available locally
-    """
-    import tensorflow as tf
-    import logging
-    logging.info(f"Num GPUs Available: {len(tf.config.list_physical_devices('GPU'))}")
-
-    global text_process, model
-    #
-    # Load text pre and post processor
-    text_process = TextProcess(os.environ['SERVE_FILES_PATH'])
-    text_process.max_pad_len
-    #
-    # load model
-    model = Model(
-        os.environ['SERVE_FILES_PATH']
-    )
+app_has_run_before: bool = False
+
+
+@app.before_request
+def first_run():
+    global app_has_run_before
+    if not app_has_run_before:
+        app.logger.info(f"Num GPUs Available: {len(AVAILABLE_GPUS)}")
 
-    return None
+        app.config['text_process'] = TextProcess()
+        app.config['model'] = Model()
+
+        app_has_run_before = True
 
 
 @app.route("/v1/predict", methods=["POST"])
-def predict():
-    """
-    Perform an inference on the model created in initialize
-
-    Returns:
-        String prediction of the label for the given test data
-    """
-    global model, text_process
-    #
+def predict() -> str:
+    text_process = app.config['text_process']
+    model = app.config['model']
+
     input_data = dict(call_request.json)
     text = str(input_data['text'])
-    #
-    # Log first
-    logging.info("Requested text: " +
-                 str(text)
-                 )
-    #
-    # Prediction
+
+    app.logger.info(f'Requested text: {text}')
     prediction = model.predict(
-        text_process.pre_process([text]) # Important to pass as list
+        text_process.pre_process(text)
     )
-    logging.info(f"Prediction: {str(prediction)}")
-    #
-    output = text_process.post_process(prediction)
-    #
-    # Response
-    return output
+
+    app.logger.info(f"Prediction: {prediction}")
+
+    return text_process.post_process(prediction)
 
 
 if __name__ == "__main__":
-    init()
     app.run(host="0.0.0.0", debug=True, port=9001)
 
-# curl --location --request POST 'http://localhost:9001/v1/predict' --header 'Content-Type: application/json' --data-raw '{"text": "A restaurant with great ambiance"}'
+"""
+To run and debug locally:
+1. Install
+    - flask
+    - scikit-learn
+    - tensorflow==2.10.0
+
+2. Run the server from the project's root directory
+    $ export SERVE_FILES_PATH=tf_files && python server/serve.py
+
+    As an alternative, the server can be run like this:
+    $ export SERVE_FILES_PATH=../tf_files && gunicorn --chdir server serve:app -b 0.0.0.0:9001
+
+3. Query the endpoint
+    $ curl --location --request POST 'http://localhost:9001/v1/predict' --header 'Content-Type: application/json' --data-raw '{"text": "A restaurant with great ambiance"}'
+
+4. Result should be
+    {
+        "negative": 0.5039926171302795
+    }
+"""
