Commit 0662010: fix typographical issues
1 parent: 3224b19

1 file changed, 16 additions and 16 deletions

File changed: tutorials/ai-core-custom-slm/ai-core-custom-slm.md

@@ -8,29 +8,29 @@ author_name: Dhrubajyoti Paul
 author_profile: https://github.com/dhrubpaul
 ---
 # Using small language models on SAP AI Core
-<!-- description --> In this tutorial we are going to learn on how to deploy a custom LLM on AI core using ollama for the example we would be taking Gemma as a model from hugging face and deploy it on SAP AI core.
+<!-- description --> In this tutorial we are going to learn on how to deploy a custom LLM on AI core using Ollama for the example we would be taking Gemma as a model from hugging face and deploy it on SAP AI core.

 ## You will learn
-- How to Deploy ollama on AI core
-- Add models to ollama and inference models
+- How to Deploy Ollama on AI core
+- Add models to Ollama and inference models

 ## Prerequisites
 Ai core setup and basic knowledge: [Link to documentation](https://developers.sap.com/tutorials/ai-core-setup.html)
 Ai core Instance with Standard Plan or Extended Plan
 Docker Desktop Setup [Download and Install](https://www.docker.com/products/docker-desktop)
-Github Account
+GitHub Account

 ### Architecture Overview
-In this tutorial we are deploying ollama an open-source project that serves as a powerful and user-friendly platform for running LLMs on on SAP AI core. which acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
+In this tutorial we are deploying Ollama an open-source project that serves as a powerful and user-friendly platform for running LLMs on on SAP AI core. which acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.

 ![image](img/solution-architecture.png)

-We can pick any model from the above model hubs and connect it to AI core for the example we are going to deploy ollama on AI core and enable Gemma and inference the same.
+We can pick any model from the above model hubs and connect it to AI core for the example we are going to deploy Ollama on AI core and enable Gemma and inference the same.

 ### Adding workflow file to github
 Workflows for SAP AI Core are created using YAML or JSON files that are compatible with the SAP AI Core schema. Let’s start with adding a Argo Workflow file to manage: `ollama`.

-In your Github Create a new repository, click **Add file** > **Create new file**.
+In your GitHub Create a new repository, click **Add file** > **Create new file**.

 ![image](img/Picture1.png)
@@ -113,10 +113,10 @@ RUN apt-get update && \
 apt-get clean && \
 rm -rf /var/lib/apt/lists/*

-# Install ollama
+# Install Ollama
 RUN curl -fsSL https://ollama.com/install.sh | sh

-# Expose port and set environment variables for ollama
+# Expose port and set environment variables for Ollama
 ENV ollama_HOST=0.0.0.0
 ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
 ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
@@ -153,7 +153,7 @@ RUN mkdir -p /nonexistent/.ollama && \
 chmod -R 770 /nonexistent
 # chmod -R 777 /nonexistent/.ollama

-# Start nginx and ollama service
+# Start nginx and Ollama service
 CMD service nginx start && /usr/local/bin/ollama serve

 ```
@@ -192,7 +192,7 @@ A Pop up will appear on screen and add the following Json with the details to yo
 }
 ```

-### Onboarding Github and application on AI core
+### Onboarding GitHub and application on AI core

 Select on your SAP AI Core connection under **Workspaces app** in the SAP AI Launchpad.

@@ -216,7 +216,7 @@ Use the following information as reference:

 - **Password:** Paste your GitHub Personal Access Token, generated in the previous step.

-> Note: Password does not gets validated at time of Adding Github Repository its just meant to save Github Creds to AI core. Passwords gets validated at time of creating Application or when Application refreshes connection to AI core.
+> Note: Password does not gets validated at time of Adding GitHub Repository its just meant to save GitHub credentials to AI core. Passwords gets validated at time of creating Application or when Application refreshes connection to AI core.

 You will see your GitHub onboarding completed in a few seconds. As a next steps we will enable an application on AI core.

@@ -280,15 +280,15 @@ Once you create the deployment, wait for the current status to be set to RUNNING

 ![image](img/Picture14.png)

-Once the deployment is running, you can access the LLM’s using ollama.
+Once the deployment is running, you can access the LLM’s using Ollama.

 ### Pulling llava-phi3 and Performing Inference

-Now we need to import llava-phi3 to our ollama pod before we can inference the model so here we would be using SAP AI API to call pull model call in Ollama.
+Now we need to import llava-phi3 to our Ollama pod before we can inference the model so here we would be using SAP AI API to call pull model call in Ollama.

 [OPTION BEGIN [Postman]]

-- Setting up AI core Auth Creds
+- Setting up AI core Auth Credentials
 ![img](img/image.png)

 - adding Resource groups to headers
@@ -302,7 +302,7 @@ Now we need to import llava-phi3 to our ollama pod before we can inference the m
 ```
 For your reference, please see the screenshots below.
 ![img](img/image007.png)
-- Once the model is pulled to AI core we can check the list of models deployed under ollama deployment via the following.
+- Once the model is pulled to AI core we can check the list of models deployed under Ollama deployment via the following.
 ```
 Endpoint: {{deploymentUrl}}/v1/api/tags
 ```
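The Postman steps in the diffed tutorial can also be scripted. Below is a minimal Python sketch of the two calls, assuming the endpoint shown in the tutorial (`{{deploymentUrl}}/v1/api/tags`) plus a `/v1/api/pull` endpoint mirroring Ollama's native pull API (the tutorial references the pull call but this hunk does not show its path), a bearer token, and an `AI-Resource-Group` header; the deployment URL, token, and resource group below are placeholders:

```python
import json
from urllib import request

def build_headers(token: str, resource_group: str) -> dict:
    """Headers SAP AI Core requests typically carry: OAuth bearer token
    plus the resource group the deployment belongs to."""
    return {
        "Authorization": f"Bearer {token}",
        "AI-Resource-Group": resource_group,
        "Content-Type": "application/json",
    }

def pull_request(deployment_url: str, model: str, headers: dict) -> request.Request:
    """POST {deploymentUrl}/v1/api/pull with the model name.
    Assumed endpoint, based on Ollama's native /api/pull behind the /v1 prefix."""
    body = json.dumps({"name": model}).encode("utf-8")
    return request.Request(f"{deployment_url}/v1/api/pull",
                           data=body, headers=headers, method="POST")

def tags_request(deployment_url: str, headers: dict) -> request.Request:
    """GET {deploymentUrl}/v1/api/tags to list models available in the pod."""
    return request.Request(f"{deployment_url}/v1/api/tags",
                           headers=headers, method="GET")

# Example wiring (placeholders; actually sending needs a live deployment):
hdrs = build_headers("<token>", "default")
req = pull_request("https://<ai-api-host>/v2/inference/deployments/<id>",
                   "llava-phi3", hdrs)
# with request.urlopen(req) as resp:
#     print(resp.read())
```

Sending the prepared `Request` via `urllib.request.urlopen` (commented out above) would trigger the pull; polling the tags endpoint afterwards confirms the model landed in the pod, as the tutorial does via Postman.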
