diff --git a/README.MD b/README.MD index d337433686..0e0883c6a3 100644 --- a/README.MD +++ b/README.MD @@ -7,7 +7,7 @@ Welcome to the Markdown tutorial catalog for tutorials you can find on the [SAP # How to contribute -Contributions from external authors may be accepted in the future. However, for now you can provide us with feedback (e.g. outdated screens) and suggestions for improvements on existing tutorials by creating a GitHub issue. +Contributions from external authors may be accepted in the future. However, for now you can provide us with feedback (e.g., outdated screens) and suggestions for improvements on existing tutorials by creating a GitHub issue. We have a large tutorial pipeline in the works, but in case you have something in mind right away, please [create a new issue](https://github.com/SAPDocuments/Tutorials/issues/new) with the Label `enhancement` and let us know what you'd like to see covered in a tutorial. diff --git a/tutorials/ai-core-code/ai-core-code.md b/tutorials/ai-core-code/ai-core-code.md index a089e192df..928d081872 100644 --- a/tutorials/ai-core-code/ai-core-code.md +++ b/tutorials/ai-core-code/ai-core-code.md @@ -147,7 +147,7 @@ Open your terminal and navigate to your `hello-aicore-code` directory. You will ![image](img/navigate.png) -Copy and edit the following command to build your docker image. The command follows the format `docker build -t //:`. So for example, if you are using your organization's registry which has the URL `myteam.myorg`, The command should be `docker build -t myteam.myorg/yourusername/house-price:01 .` +Copy and edit the following command to build your Docker image. The command follows the format `docker build -t <registry>/<username>/<image>:<tag>`. For example, if you are using your organization's registry, which has the URL `myteam.myorg`, the command would be `docker build -t myteam.myorg/yourusername/house-price:01 .` ```BASH docker build -t docker.io//house-price:01 . 
@@ -563,7 +563,7 @@ response.__dict__ [OPTION END] -The execution will go from **UNKOWN** to **RUNNING** then to the **DEAD** state. Resolving this is covered in next step. +The execution will go from **UNKNOWN** to **RUNNING** and then to the **DEAD** state. Resolving this is covered in the next step. ### Look for error logs in execution @@ -712,13 +712,13 @@ Check the status of your execution. When the status turns to **COMPLETED**, you ### Scheduling Execution (optional) -AI core Also provides the functionality to auto shedule Executions based on Time. +SAP AI Core also provides the functionality to automatically schedule executions based on time. -To shedule an Execution at particular time of the day visit ML `operations > shedules` and click on Add +To schedule an execution at a particular time of day, visit **ML Operations** > **Schedules** and click **Add**. ![image](img/ail/Schedule1.jpg) -Choose senario as House price and click on next +Choose the scenario **House Price** and click **Next**. ![image](img/ail/Schedule2.jpg) @@ -726,7 +726,7 @@ Choose Executable and click on next. 
![image](img/ail/Schedule3.jpg) -Now a screen will appear where you can choose between the avilable Execution config and click on next +Now a screen will appear where you can choose between the available Execution config and click on next ![image](img/ail/Schedule4.jpg) diff --git a/tutorials/ai-core-custom-llm/ai-core-custom-LLM.md b/tutorials/ai-core-custom-llm/ai-core-custom-LLM.md index 3500746f82..d1cf7fba2e 100644 --- a/tutorials/ai-core-custom-llm/ai-core-custom-LLM.md +++ b/tutorials/ai-core-custom-llm/ai-core-custom-LLM.md @@ -1,398 +1,398 @@ ---- -parser: v2 -auto_validation: true -time: 45 -tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-business-technology-platform, software-product>sap-ai-core ] -primary_tag: software-product>sap-ai-core -author_name: Dhrubajyoti Paul -author_profile: https://github.com/dhrubpaul ---- -# Using Custom models on SAP AI Core VIA ollama - In this tutorial we are going to learn on how to deploy a custom LLM on AI core using ollama for the example we would be taking Gemma as a model from hugging face and deploy it on SAP AI core. - -## You will learn -- How to Deploy ollama on AI core -- Add models to ollama and inference models - -## Prerequisites -Ai core setup and basic knowledge: [Link to documentation](https://developers.sap.com/tutorials/ai-core-setup.html) -Ai core Instance with Standard Plan or Extended Plan -Docker Desktop Setup [Download and Install](https://www.docker.com/products/docker-desktop) -Github Account - -### Architecture Overview -In this tutorial we are deploying ollama an open-source project that serves as a powerful and user-friendly platform for running LLMs on on SAP AI core. which acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience. 
- -![image](img/solution-architecture.png) - -We can pick any model from the above model hubs and connect it to AI core for the example we are going to deploy ollama on AI core and enable Gemma and inference the same. - -### Adding workflow file to github -Workflows for SAP AI Core are created using YAML or JSON files that are compatible with the SAP AI Core schema. Let’s start with adding a Argo Workflow file to manage: `ollama`. - -In your Github Create a new repository, click **Add file** > **Create new file**. - -![image](img/Picture1.png) - -Type `LearningScenarios/ollama.yaml` into the Name your file field. This will automatically create the folder `workflows` and a workflow named `ollama.yaml` inside it. - -![image](img/Picture2.png) - -> CAUTION Do not use the name of your workflow file (`ollama.yaml`) as any other identifier within SAP AI Core. - -![image](img/Picture3.png) - -Now copy and paste the following snippet to the editor. -```yaml -apiVersion: ai.sap.com/v1alpha1 -kind: ServingTemplate -metadata: - name: ollama - annotations: - scenarios.ai.sap.com/description: "Run a ollama server on SAP AI Core" - scenarios.ai.sap.com/name: "ollama" - executables.ai.sap.com/description: "ollama service" - executables.ai.sap.com/name: "ollama" - labels: - scenarios.ai.sap.com/id: "ollama" - ai.sap.com/version: "0.0.1" -spec: - template: - apiVersion: "serving.kserve.io/v1beta1" - metadata: - annotations: | - autoscaling.knative.dev/metric: concurrency - autoscaling.knative.dev/target: 1 - autoscaling.knative.dev/targetBurstCapacity: 0 - labels: | - ai.sap.com/resourcePlan: infer.s - spec: | - predictor: - imagePullSecrets: - - name: - minReplicas: 1 - maxReplicas: 1 - containers: - - name: kserve-container - image: docker.io//ollama:ai-core - ports: - - containerPort: 8080 - protocol: TCP -``` -Replace `` with Default and replace `` with your docker username. - -**NOTE** - we'll generate the docker image referred here in the following steps. 
- -### Create a Docker account and generate a Docker access token and Install Docker -[Sign Up](https://www.docker.com/) for a Docker account. - -Click on the profile button (your profile name) and then select **Account Settings**. - -![image](img/Picture4.png) - -Select **Security** from the navigation bar and click **New Access Token**. - -![image](img/Picture5.png) - -###Creating a Docker Image - -Create a directory (folder) named `custom-llm`. -Create a file `Dockerfile`. Paste the following snippet in the file. - -```dockerfile -# Specify the base layers (default dependencies) to use -ARG BASE_IMAGE=ubuntu:22.04 -FROM ${BASE_IMAGE} - -# Update and install dependencies -RUN apt-get update && \ - apt-get install -y \ - ca-certificates \ - nginx \ - curl && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -# Install ollama -RUN curl -fsSL https://ollama.com/install.sh | sh - -# Expose port and set environment variables for ollama -ENV ollama_HOST=0.0.0.0 -ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 -ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility - -# Configure nginx for reverse proxy -RUN echo "events { use epoll; worker_connections 128; } \ - http { \ - server { \ - listen 8080; \ - location ^~ /v1/api/ { \ - proxy_pass http://localhost:11434/api/; \ - proxy_set_header Host \$host; \ - proxy_set_header X-Real-IP \$remote_addr; \ - proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \ - proxy_set_header X-Forwarded-Proto \$scheme; \ - } \ - location ^~ /v1/chat/ { \ - proxy_pass http://localhost:11434/v1/chat/; \ - proxy_set_header Host \$host; \ - proxy_set_header X-Real-IP \$remote_addr; \ - proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \ - proxy_set_header X-Forwarded-Proto \$scheme; \ - } \ - } \ - }" > /etc/nginx/nginx.conf && \ - chmod -R 777 /var/log/nginx /var/lib/nginx /run - -EXPOSE 8080 - -# Create directory for user nobody SAP 
AI Core run-time -RUN mkdir -p /nonexistent/.ollama && \ - chown -R nobody:nogroup /nonexistent && \ - chmod -R 770 /nonexistent -# chmod -R 777 /nonexistent/.ollama - -# Start nginx and ollama service -CMD service nginx start && /usr/local/bin/ollama serve - -``` - -in the same directory open terminal and run the following commands: - - -1.Login to docker hub -```powershell -docker login -u -p -``` - -2.Build the docker image -```powershell -docker build --platform=linux/amd64 -t docker.io//ollama:ai-core . -``` - -3.Push the docker image to docker hub to be used by deployment in SAP AI Core -```powershell -docker push docker.io//ollama:ai-core -``` - -### Storing docker secrets to AI core - -This step is required once. Storing Docker credentials enables SAP AI Core to pull (download) your Docker images from a private Docker repository. Use of a private Docker image prevents others from seeing your content. - -Select your SAP AI Core connection under the **Workspaces app**. - -Click **Docker Registry Secrets** in the **AI Core Administration app**. Click Add. - -A Pop up will appear on screen and add the following Json with the details to your Docker Creds. - -```json -{ - ".dockerconfigjson": "{\"auths\":{\"YOUR_DOCKER_REGISTRY_URL\":{\"username\":\"YOUR_DOCKER_USERNAME\",\"password\":\"YOUR_DOCKER_ACCESS_TOKEN\"}}}" -} -``` - -### Onboarding Github and application on AI core - -Select on your SAP AI Core connection under **Workspaces app** in the SAP AI Launchpad. - -![image](img/Picture6.png) - -Under the **Git Repositories** section in **AI Core Administration app**, click **Add**. - -> WARNING If you don’t see the AI Core Administration app, check that you had selected your SAP AI Core connection from the Workspaces app. If it is still not visible then ask your SAP AI Launchpad administrator to assign roles to you so that you can access the app. 
- -![image](img/Picture7.png) - -Enter your GitHub repository details (created in the previous step) in the dialog box that appears, and click **Add**. - -![image](img/Picture8.png) - -Use the following information as reference: - -- **URL:** Paste the URL of your GitHub repository and add the suffix /workflows. - -- **Username:** Your GitHub username. - -- **Password:** Paste your GitHub Personal Access Token, generated in the previous step. - -> Note: Password does not gets validated at time of Adding Github Repository its just meant to save Github Creds to AI core. Passwords gets validated at time of creating Application or when Application refreshes connection to AI core. - -You will see your GitHub onboarding completed in a few seconds. As a next steps we will enable an application on AI core. - -![image](img/Picture9.png) - -Go to your SAP AI Launchpad. -In the AI Core **Administration app**, click **Applications** > **Create**. - -![image](img/Picture10.png) - -Using the reference below as a guide, specify the details of your application. This form will create your application on your SAP AI Launchpad. - -![image](img/Picture11.png) - -Use the following information for reference: - - -- **Application Name:** An identifier of your choice. learning-scenarios-app is used as an example of best practice in this tutorial because it is a descriptive name. - -- **Repository URL:** Your GitHub account URL and repository suffix. This helps you select the credentials to access the repository. - -- **Path:** The folder in your GitHub where your workflow is located. For this tutorial it is LearningScenarios. - -- **Revision:** The is the unique ID of your GitHub commit. Set this to HEAD to have it automatically refer to the latest commit. - -**NOTE:** - -1. If creation of application fails, check the ollama.yaml file, and ensure that the names in lines #4, #7, #9, and #11 are unique, and haven't been used previously. - -2. 
Generate a fresh classic git token for authentication, with all necessary privileges provided during creation. - -3. Ensure that you have put the correct url in `YOUR_DOCKER_REGISTRY_URL` while setting up docket secret. - -4. Refresh the launchpad, and create a fresh application. - -### Creating configuration - -Go to **ML Operations** > **Configurations**. Click on the **Create** button. - -![image](img/Picture12.png) - -Enter the model Name and choose the workflow with following parameters - -```json -"name": "ollama", -"scenario_id": "ollama", -"executable_id": "ollama", -``` - -Then click on **next** > **review and create**. - -### Deploying Ollama to AI core - -In the model click on **create deployment**. A screen will appear - -Set duration as standard and click on the **Review** button. - -![image](img/Picture13.png) - -Once you create the deployment, wait for the current status to be set to RUNNING. - -![image](img/Picture14.png) - -Once the deployment is running, you can access the LLM’s using ollama. - -### Pulling Gemma inside Ollama deployment - -Now we need to import Gemma to our ollama pod before we can inference the model so here we would be using SAP AI API to call pull model call in Ollama. - -[OPTION BEGIN [Postman]] - -Setting up AI core Auth Creds -![image](img/setup_auth_creds.png) - -adding Resource groups to headers -![image](img/setup-resource-group.png) - -making the Model to import to POD -```json -{ - "name": "gemma:2b" -} -``` - -![image](img/pulling-model.png) - -Once the model is pulled to AI core we can check the list of models deployed under ollama deployment via the following. -![image](img/check-deployment.png) - -[OPTION END] - - -[OPTION BEGIN [Jupyter Notebook]] - -**NOTE** - Before execution of the following code block, update the url, , and to the corresponding deployment url for the model in use. 
- -``` -import requests -import json - -url = "https://api.ai.prasfodeuonly.aws.ml.hana.ondemand.com/v2/inference/deployments/d78749e2ab8c3/v1/api/pull" - -payload = json.dumps({ - "model": "gemma:2b" -}) -headers = { - 'AI-Resource-Group': , - 'Content-Type': 'application/json', - 'Authorization': 'Bearer ' -} - -response = requests.request("POST", url, headers=headers, data=payload) - -print(response.text) -``` - -``` -# Check the model list -endpoint = f"{inference_base_url}/api/tags" -print(endpoint) - -response = requests.get(endpoint, headers=headers) -print('Result:', response.text) -``` - -``` -completion_api_endpoint = f"{inference_base_url}/api/generate" - -#test ollama's completion api -json_data = { - "model": model, - "prompt": "What color is the sky at different times of the day? Respond in JSON", - "format": "json", #JSON mode - "stream": False #Streaming or not -} - -response = requests.post(url=completion_api_endpoint, headers=headers, json=json_data) -print('Result:', response.text) -``` - -[OPTION END] - -### Inferencing Gemma - -[OPTION BEGIN [Postman]] - -``` -{ - "model": "gemma:2b", - "prompt": "What color is the sky at different times of the day? Respond in JSON", - "format": "json", - "stream": false -} -``` - -![image](img/infrence.png) - -[OPTION END] - -[OPTION BEGIN [Jupyter Notebook]] - -``` -completion_api_endpoint = f"{inference_base_url}/api/generate" -chat_api_endpoint = f"{inference_base_url}/api/chat" -openai_chat_api_endpoint = f"{deployment_url}/v1/chat/completions" - -#test ollama's completion api -json_data = { - "model": model, - "prompt": "What color is the sky at different times of the day? 
Respond in JSON", - "format": "json", #JSON mode - "stream": False #Streaming or not -} - -response = requests.post(url=completion_api_endpoint, headers=headers, json=json_data) -print('Result:', response.text) -``` - -[OPTION END] +--- +parser: v2 +auto_validation: true +time: 45 +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-business-technology-platform, software-product>sap-ai-core ] +primary_tag: software-product>sap-ai-core +author_name: Dhrubajyoti Paul +author_profile: https://github.com/dhrubpaul +--- +# Using Custom Models on SAP AI Core via Ollama + In this tutorial we will learn how to deploy a custom LLM on SAP AI Core using Ollama. As an example, we will take the Gemma model from Hugging Face and deploy it on SAP AI Core. + +## You will learn +- How to deploy Ollama on SAP AI Core +- How to add models to Ollama and run inference against them + +## Prerequisites +SAP AI Core setup and basic knowledge: [Link to documentation](https://developers.sap.com/tutorials/ai-core-setup.html) +SAP AI Core instance with Standard Plan or Extended Plan +Docker Desktop: [Download and Install](https://www.docker.com/products/docker-desktop) +GitHub account + +### Architecture Overview +In this tutorial we deploy Ollama, an open-source project that serves as a powerful and user-friendly platform for running LLMs on SAP AI Core. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience. + +![image](img/solution-architecture.png) + +We can pick any model from the above model hubs and connect it to SAP AI Core. For this example we will deploy Ollama on SAP AI Core, enable Gemma, and run inference against it. + +### Adding the workflow file to GitHub +Workflows for SAP AI Core are created using YAML or JSON files that are compatible with the SAP AI Core schema. Let’s start by adding an Argo workflow file to manage `ollama`. 
+ +In your GitHub account, create a new repository, then click **Add file** > **Create new file**. + +![image](img/Picture1.png) + +Type `LearningScenarios/ollama.yaml` into the **Name your file** field. This will automatically create the folder `LearningScenarios` and a workflow named `ollama.yaml` inside it. + +![image](img/Picture2.png) + +> CAUTION Do not use the name of your workflow file (`ollama.yaml`) as any other identifier within SAP AI Core. + +![image](img/Picture3.png) + +Now copy and paste the following snippet to the editor. +```yaml +apiVersion: ai.sap.com/v1alpha1 +kind: ServingTemplate +metadata: + name: ollama + annotations: + scenarios.ai.sap.com/description: "Run a ollama server on SAP AI Core" + scenarios.ai.sap.com/name: "ollama" + executables.ai.sap.com/description: "ollama service" + executables.ai.sap.com/name: "ollama" + labels: + scenarios.ai.sap.com/id: "ollama" + ai.sap.com/version: "0.0.1" +spec: + template: + apiVersion: "serving.kserve.io/v1beta1" + metadata: + annotations: | + autoscaling.knative.dev/metric: concurrency + autoscaling.knative.dev/target: 1 + autoscaling.knative.dev/targetBurstCapacity: 0 + labels: | + ai.sap.com/resourcePlan: infer.s + spec: | + predictor: + imagePullSecrets: + - name: <image-pull-secret> + minReplicas: 1 + maxReplicas: 1 + containers: + - name: kserve-container + image: docker.io/<username>/ollama:ai-core + ports: + - containerPort: 8080 + protocol: TCP +``` +Replace `<image-pull-secret>` with the name of your Docker registry secret and `<username>` with your Docker username. + +**NOTE** - we'll generate the Docker image referred to here in the following steps. + +### Create a Docker account, generate a Docker access token, and install Docker +[Sign Up](https://www.docker.com/) for a Docker account. + +Click on the profile button (your profile name) and then select **Account Settings**. + +![image](img/Picture4.png) + +Select **Security** from the navigation bar and click **New Access Token**. + +![image](img/Picture5.png) + +### Creating a Docker Image + +Create a directory (folder) named `custom-llm`. 
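Before moving on to the Dockerfile, note that the two manual edits to the serving template above (the image pull secret name and your Docker username) can be sketched programmatically. This is a minimal illustration, not part of the tutorial's files; the helper and values are hypothetical:

```python
# Illustrative sketch: fill the two placeholders in the ServingTemplate
# above. TEMPLATE_SNIPPET reproduces only the lines that need editing.
TEMPLATE_SNIPPET = (
    "imagePullSecrets:\n"
    "  - name: {secret_name}\n"
    "containers:\n"
    "  - name: kserve-container\n"
    "    image: docker.io/{docker_user}/ollama:ai-core\n"
)

def fill_template(secret_name: str, docker_user: str) -> str:
    """Substitute the Docker registry secret name and Docker username."""
    return TEMPLATE_SNIPPET.format(secret_name=secret_name,
                                   docker_user=docker_user)

print(fill_template("my-docker-secret", "janedoe"))
```

The secret name must match the Docker Registry Secret you create later in the AI Core Administration app.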
+Create a file `Dockerfile`. Paste the following snippet in the file. + +```dockerfile +# Specify the base layers (default dependencies) to use +ARG BASE_IMAGE=ubuntu:22.04 +FROM ${BASE_IMAGE} + +# Update and install dependencies +RUN apt-get update && \ + apt-get install -y \ + ca-certificates \ + nginx \ + curl && \ + apt-get clean && \ + rm -rf /var/lib/apt/lists/* + +# Install ollama +RUN curl -fsSL https://ollama.com/install.sh | sh + +# Expose port and set environment variables for ollama +ENV ollama_HOST=0.0.0.0 +ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 +ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility + +# Configure nginx for reverse proxy +RUN echo "events { use epoll; worker_connections 128; } \ + http { \ + server { \ + listen 8080; \ + location ^~ /v1/api/ { \ + proxy_pass http://localhost:11434/api/; \ + proxy_set_header Host \$host; \ + proxy_set_header X-Real-IP \$remote_addr; \ + proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \ + proxy_set_header X-Forwarded-Proto \$scheme; \ + } \ + location ^~ /v1/chat/ { \ + proxy_pass http://localhost:11434/v1/chat/; \ + proxy_set_header Host \$host; \ + proxy_set_header X-Real-IP \$remote_addr; \ + proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for; \ + proxy_set_header X-Forwarded-Proto \$scheme; \ + } \ + } \ + }" > /etc/nginx/nginx.conf && \ + chmod -R 777 /var/log/nginx /var/lib/nginx /run + +EXPOSE 8080 + +# Create directory for user nobody SAP AI Core run-time +RUN mkdir -p /nonexistent/.ollama && \ + chown -R nobody:nogroup /nonexistent && \ + chmod -R 770 /nonexistent +# chmod -R 777 /nonexistent/.ollama + +# Start nginx and ollama service +CMD service nginx start && /usr/local/bin/ollama serve + +``` + +in the same directory open terminal and run the following commands: + + +1.Login to docker hub +```powershell +docker login -u -p +``` + +2.Build the docker image +```powershell +docker 
build --platform=linux/amd64 -t docker.io/<username>/ollama:ai-core . +``` + +3. Push the Docker image to Docker Hub so it can be used by the deployment in SAP AI Core +```powershell +docker push docker.io/<username>/ollama:ai-core +``` + +### Storing Docker secrets in AI Core + +This step is required only once. Storing Docker credentials enables SAP AI Core to pull (download) your Docker images from a private Docker repository. Use of a private Docker image prevents others from seeing your content. + +Select your SAP AI Core connection under the **Workspaces app**. + +Click **Docker Registry Secrets** in the **AI Core Administration app**. Click **Add**. + +A pop-up will appear on screen; add the following JSON, filled in with your Docker credentials. + +```json +{ + ".dockerconfigjson": "{\"auths\":{\"YOUR_DOCKER_REGISTRY_URL\":{\"username\":\"YOUR_DOCKER_USERNAME\",\"password\":\"YOUR_DOCKER_ACCESS_TOKEN\"}}}" +} +``` + +### Onboarding GitHub and an application on AI Core + +Select your SAP AI Core connection under the **Workspaces app** in the SAP AI Launchpad. + +![image](img/Picture6.png) + +Under the **Git Repositories** section in the **AI Core Administration app**, click **Add**. + +> WARNING If you don’t see the AI Core Administration app, check that you have selected your SAP AI Core connection from the Workspaces app. If it is still not visible, ask your SAP AI Launchpad administrator to assign roles to you so that you can access the app. + +![image](img/Picture7.png) + +Enter your GitHub repository details (created in the previous step) in the dialog box that appears, and click **Add**. + +![image](img/Picture8.png) + +Use the following information as reference: + +- **URL:** Paste the URL of your GitHub repository and add the suffix `/workflows`. + +- **Username:** Your GitHub username. + +- **Password:** Paste your GitHub Personal Access Token, generated in the previous step. 
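If you prefer to script this onboarding rather than use the launchpad dialog, the request body mirrors the fields above. A minimal sketch, assuming the SAP AI Core admin repositories endpoint and these field names (verify against the AI API reference before use; the repository name and values below are hypothetical):

```python
import json

def make_repo_payload(name: str, url: str, username: str, token: str) -> str:
    """Assemble the repository-onboarding body described by the bullets above.

    As the tutorial notes, `url` must include the /workflows suffix and
    `token` is a GitHub personal access token.
    """
    return json.dumps({
        "name": name,
        "url": url,
        "username": username,
        "password": token,
    })

payload = make_repo_payload(
    "learning-scenarios-repo",                 # hypothetical name
    "https://github.com/you/repo/workflows",   # your repo URL + /workflows
    "you",
    "<GITHUB_PAT>",                            # never hard-code a real token
)
print(payload)
```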
+ +> Note: The password is not validated when you add the GitHub repository; at this point it is only stored as GitHub credentials in SAP AI Core. Passwords are validated when an application is created or when an application refreshes its connection to SAP AI Core. + +You will see your GitHub onboarding completed in a few seconds. As a next step we will enable an application on SAP AI Core. + +![image](img/Picture9.png) + +Go to your SAP AI Launchpad. +In the AI Core **Administration app**, click **Applications** > **Create**. + +![image](img/Picture10.png) + +Using the reference below as a guide, specify the details of your application. This form will create your application on your SAP AI Launchpad. + +![image](img/Picture11.png) + +Use the following information for reference: + + +- **Application Name:** An identifier of your choice. `learning-scenarios-app` is used in this tutorial as an example of best practice because it is a descriptive name. + +- **Repository URL:** Your GitHub account URL and repository suffix. This helps you select the credentials to access the repository. + +- **Path:** The folder in your GitHub repository where your workflow is located. For this tutorial it is `LearningScenarios`. + +- **Revision:** The unique ID of your GitHub commit. Set this to `HEAD` to have it automatically refer to the latest commit. + +**NOTE:** + +1. If creation of the application fails, check the ollama.yaml file, and ensure that the names in lines #4, #7, #9, and #11 are unique and haven't been used previously. + +2. Generate a fresh classic git token for authentication, with all necessary privileges granted during creation. + +3. Ensure that you have put the correct URL in `YOUR_DOCKER_REGISTRY_URL` while setting up the Docker secret. + +4. Refresh the launchpad, and create a fresh application. + +### Creating a configuration + +Go to **ML Operations** > **Configurations**. Click on the **Create** button. 
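The configuration you are about to create through the launchpad can also be expressed as a small request body. A hedged sketch (the snake_case keys come from the parameters shown in this tutorial; the camelCase keys used by the REST API are an assumption to verify against the AI API reference):

```python
import json

def make_configuration(name: str, scenario_id: str, executable_id: str) -> dict:
    """Build the configuration body used in this tutorial.

    All three values being "ollama" matches the workflow metadata
    defined earlier in ollama.yaml.
    """
    return {
        "name": name,
        "scenarioId": scenario_id,      # field casing is an assumption
        "executableId": executable_id,  # field casing is an assumption
    }

config = make_configuration("ollama", "ollama", "ollama")
print(json.dumps(config))
```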
+ +![image](img/Picture12.png) + +Enter the configuration name and choose the workflow with the following parameters: + +```json +"name": "ollama", +"scenario_id": "ollama", +"executable_id": "ollama", +``` + +Then click **Next** > **Review and Create**. + +### Deploying Ollama to SAP AI Core + +In the configuration, click **Create Deployment**. A screen will appear. + +Set the duration to standard and click the **Review** button. + +![image](img/Picture13.png) + +Once you create the deployment, wait for the current status to be set to RUNNING. + +![image](img/Picture14.png) + +Once the deployment is running, you can access the LLMs using Ollama. + +### Pulling Gemma inside the Ollama deployment + +Before we can run inference against Gemma, we need to import it into our Ollama pod. Here we use the SAP AI API to invoke Ollama's pull-model call. + +[OPTION BEGIN [Postman]] + +Set up the SAP AI Core auth credentials: +![image](img/setup_auth_creds.png) + +Add the resource group to the headers: +![image](img/setup-resource-group.png) + +Specify the model to import into the pod: +```json +{ + "name": "gemma:2b" +} +``` + +![image](img/pulling-model.png) + +Once the model is pulled into SAP AI Core, we can check the list of models deployed under the Ollama deployment as follows: +![image](img/check-deployment.png) + +[OPTION END] + + +[OPTION BEGIN [Jupyter Notebook]] + +**NOTE** - Before executing the following code block, update the URL and the placeholder values to match the deployment for the model in use. 
+ +``` +import requests +import json + +url = "https://api.ai.prasfodeuonly.aws.ml.hana.ondemand.com/v2/inference/deployments/d78749e2ab8c3/v1/api/pull" + +payload = json.dumps({ + "model": "gemma:2b" +}) +headers = { + 'AI-Resource-Group': '<resource-group>', # replace with your resource group + 'Content-Type': 'application/json', + 'Authorization': 'Bearer <access-token>' # replace with your bearer token +} + +response = requests.request("POST", url, headers=headers, data=payload) + +print(response.text) +``` + +``` +# Check the model list +endpoint = f"{inference_base_url}/api/tags" +print(endpoint) + +response = requests.get(endpoint, headers=headers) +print('Result:', response.text) +``` + +``` +completion_api_endpoint = f"{inference_base_url}/api/generate" + +#test ollama's completion api +json_data = { + "model": model, + "prompt": "What color is the sky at different times of the day? Respond in JSON", + "format": "json", #JSON mode + "stream": False #Streaming or not +} + +response = requests.post(url=completion_api_endpoint, headers=headers, json=json_data) +print('Result:', response.text) +``` + +[OPTION END] + +### Inferencing Gemma + +[OPTION BEGIN [Postman]] + +``` +{ + "model": "gemma:2b", + "prompt": "What color is the sky at different times of the day? Respond in JSON", + "format": "json", + "stream": false +} +``` + +![image](img/infrence.png) + +[OPTION END] + +[OPTION BEGIN [Jupyter Notebook]] + +``` +completion_api_endpoint = f"{inference_base_url}/api/generate" +chat_api_endpoint = f"{inference_base_url}/api/chat" +openai_chat_api_endpoint = f"{deployment_url}/v1/chat/completions" + +#test ollama's completion api +json_data = { + "model": model, + "prompt": "What color is the sky at different times of the day? 
Respond in JSON", + "format": "json", #JSON mode + "stream": False #Streaming or not +} + +response = requests.post(url=completion_api_endpoint, headers=headers, json=json_data) +print('Result:', response.text) +``` + +[OPTION END] \ No newline at end of file diff --git a/tutorials/ai-core-custom-slm/ai-core-custom-slm.md b/tutorials/ai-core-custom-slm/ai-core-custom-slm.md index 0c86dad32b..7d48c3bf38 100644 --- a/tutorials/ai-core-custom-slm/ai-core-custom-slm.md +++ b/tutorials/ai-core-custom-slm/ai-core-custom-slm.md @@ -8,29 +8,29 @@ author_name: Dhrubajyoti Paul author_profile: https://github.com/dhrubpaul --- # Using small language models on SAP AI Core - In this tutorial we are going to learn on how to deploy a custom LLM on AI core using ollama for the example we would be taking Gemma as a model from hugging face and deploy it on SAP AI core. + In this tutorial we are going to learn on how to deploy a custom LLM on AI core using Ollama for the example we would be taking Gemma as a model from hugging face and deploy it on SAP AI core. ## You will learn -- How to Deploy ollama on AI core -- Add models to ollama and inference models +- How to Deploy Ollama on AI core +- Add models to Ollama and inference models ## Prerequisites Ai core setup and basic knowledge: [Link to documentation](https://developers.sap.com/tutorials/ai-core-setup.html) Ai core Instance with Standard Plan or Extended Plan Docker Desktop Setup [Download and Install](https://www.docker.com/products/docker-desktop) -Github Account +GitHub Account ### Architecture Overview -In this tutorial we are deploying ollama an open-source project that serves as a powerful and user-friendly platform for running LLMs on on SAP AI core. which acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience. 
+In this tutorial we are deploying Ollama an open-source project that serves as a powerful and user-friendly platform for running LLMs on on SAP AI core. which acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience. ![image](img/solution-architecture.png) -We can pick any model from the above model hubs and connect it to AI core for the example we are going to deploy ollama on AI core and enable Gemma and inference the same. +We can pick any model from the above model hubs and connect it to AI core for the example we are going to deploy Ollama on AI core and enable Gemma and inference the same. ### Adding workflow file to github Workflows for SAP AI Core are created using YAML or JSON files that are compatible with the SAP AI Core schema. Let’s start with adding a Argo Workflow file to manage: `ollama`. -In your Github Create a new repository, click **Add file** > **Create new file**. +In your GitHub Create a new repository, click **Add file** > **Create new file**. ![image](img/Picture1.png) @@ -79,7 +79,7 @@ spec: - containerPort: 8080 protocol: TCP ``` -Replace `` with Default and replace `` with your docker username. +Replace `` with the exact name of the Docker Registry Secret based on your configuration. This name must match the value used in the `imagePullSecrets.name` field of your YAML file. **NOTE** - we'll generate the docker image referred here in the following steps. 
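The Docker Registry Secret referenced here is a JSON document with another JSON document embedded as a string, which makes the escaping easy to get wrong by hand. A small sketch of generating it with `json.dumps` (the registry URL and names below are placeholder values):

```python
import json

def make_docker_secret(registry_url: str, username: str, access_token: str) -> str:
    """Build the ".dockerconfigjson" secret body shown in this tutorial.

    The inner "auths" document is serialized first, then embedded as a
    string, so all quote escaping is handled automatically.
    """
    auths = {"auths": {registry_url: {"username": username,
                                      "password": access_token}}}
    return json.dumps({".dockerconfigjson": json.dumps(auths)})

secret = make_docker_secret("https://index.docker.io", "janedoe", "<TOKEN>")
print(secret)
```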
@@ -113,10 +113,10 @@ RUN apt-get update && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* -# Install ollama +# Install Ollama RUN curl -fsSL https://ollama.com/install.sh | sh -# Expose port and set environment variables for ollama +# Expose port and set environment variables for Ollama ENV ollama_HOST=0.0.0.0 ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ENV LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 @@ -153,7 +153,7 @@ RUN mkdir -p /nonexistent/.ollama && \ chmod -R 770 /nonexistent # chmod -R 777 /nonexistent/.ollama -# Start nginx and ollama service +# Start nginx and Ollama service CMD service nginx start && /usr/local/bin/ollama serve ``` @@ -192,7 +192,7 @@ A Pop up will appear on screen and add the following Json with the details to yo } ``` -### Onboarding Github and application on AI core +### Onboarding GitHub and application on AI Core Select your SAP AI Core connection under **Workspaces app** in the SAP AI Launchpad. @@ -216,7 +216,7 @@ Use the following information as reference: - **Password:** Paste your GitHub Personal Access Token, generated in the previous step. -> Note: Password does not gets validated at time of Adding Github Repository its just meant to save Github Creds to AI core. Passwords gets validated at time of creating Application or when Application refreshes connection to AI core. +> Note: The password is not validated when you add the GitHub repository; it is only stored as your GitHub credentials in AI Core. It is validated when you create an application, or when an application refreshes its connection to AI Core. You will see your GitHub onboarding completed in a few seconds. As a next step, we will enable an application on AI Core. @@ -280,15 +280,15 @@ Once you create the deployment, wait for the current status to be set to RUNNING ![image](img/Picture14.png) -Once the deployment is running, you can access the LLM’s using ollama.
+Once the deployment is running, you can access the LLMs using Ollama. ### Pulling llava-phi3 and Performing Inference -Now we need to import llava-phi3 to our ollama pod before we can inference the model so here we would be using SAP AI API to call pull model call in Ollama. +Now we need to pull llava-phi3 into our Ollama pod before we can run inference on the model, so here we will use the SAP AI API to invoke Ollama's pull-model endpoint. [OPTION BEGIN [Postman]] -- Setting up AI core Auth Creds +- Setting up AI Core auth credentials ![img](img/image.png) - Adding resource groups to headers @@ -297,12 +297,12 @@ Now we need to import llava-phi3 to our ollama pod before we can inference the m - Once you have deployed the model in SAP AI Core, you can use the pull endpoint to load additional models, such as llava-phi3. ```json { - "name": " llava-phi3" + "name": "llava-phi3" } ``` For your reference, please see the screenshots below. ![img](img/image007.png) -- Once the model is pulled to AI core we can check the list of models deployed under ollama deployment via the following. +- Once the model is pulled to AI Core, we can check the list of models deployed under the Ollama deployment via the following.
``` Endpoint: {{deploymentUrl}}/v1/api/tags ``` diff --git a/tutorials/ai-core-data/img/aics/config.png b/tutorials/ai-core-data/img/aics/config.png deleted file mode 100644 index 606c2921c5..0000000000 Binary files a/tutorials/ai-core-data/img/aics/config.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aics/deploy_url.png b/tutorials/ai-core-data/img/aics/deploy_url.png new file mode 100644 index 0000000000..c967789d90 Binary files /dev/null and b/tutorials/ai-core-data/img/aics/deploy_url.png differ diff --git a/tutorials/ai-core-data/img/aics/model.png b/tutorials/ai-core-data/img/aics/model.png deleted file mode 100644 index 5eca5da7e0..0000000000 Binary files a/tutorials/ai-core-data/img/aics/model.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aics/predict.png b/tutorials/ai-core-data/img/aics/predict.png new file mode 100644 index 0000000000..60f30764be Binary files /dev/null and b/tutorials/ai-core-data/img/aics/predict.png differ diff --git a/tutorials/ai-core-data/img/ail/AccountInfoCore.png b/tutorials/ai-core-data/img/ail/AccountInfoCore.png deleted file mode 100644 index 24b417b401..0000000000 Binary files a/tutorials/ai-core-data/img/ail/AccountInfoCore.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/DOSS.png b/tutorials/ai-core-data/img/ail/DOSS.png deleted file mode 100644 index b2b6006ea0..0000000000 Binary files a/tutorials/ai-core-data/img/ail/DOSS.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/OSS.png b/tutorials/ai-core-data/img/ail/OSS.png deleted file mode 100644 index 8f615d8f3a..0000000000 Binary files a/tutorials/ai-core-data/img/ail/OSS.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact1.jpg b/tutorials/ai-core-data/img/ail/artifact1.jpg deleted file mode 100644 index 3b36ff0c3a..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact1.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact2.jpg 
b/tutorials/ai-core-data/img/ail/artifact2.jpg deleted file mode 100644 index 9f4e037538..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact2.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact3.jpg b/tutorials/ai-core-data/img/ail/artifact3.jpg deleted file mode 100644 index e608eb3cb6..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact3.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact4.jpg b/tutorials/ai-core-data/img/ail/artifact4.jpg deleted file mode 100644 index 031ddeaa68..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact4.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact5.jpg b/tutorials/ai-core-data/img/ail/artifact5.jpg deleted file mode 100644 index fbafd97803..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact5.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact6.jpg b/tutorials/ai-core-data/img/ail/artifact6.jpg deleted file mode 100644 index 0b85371304..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact6.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/artifact7.jpg b/tutorials/ai-core-data/img/ail/artifact7.jpg deleted file mode 100644 index 7652377870..0000000000 Binary files a/tutorials/ai-core-data/img/ail/artifact7.jpg and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config-1.png b/tutorials/ai-core-data/img/ail/config-1.png deleted file mode 100644 index 12ae518de3..0000000000 Binary files a/tutorials/ai-core-data/img/ail/config-1.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config-2.png b/tutorials/ai-core-data/img/ail/config-2.png deleted file mode 100644 index 6560af993c..0000000000 Binary files a/tutorials/ai-core-data/img/ail/config-2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config-3.png b/tutorials/ai-core-data/img/ail/config-3.png deleted file mode 100644 index 
756037bf43..0000000000 Binary files a/tutorials/ai-core-data/img/ail/config-3.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config-f-1.png b/tutorials/ai-core-data/img/ail/config-f-1.png deleted file mode 100644 index dcc76593cc..0000000000 Binary files a/tutorials/ai-core-data/img/ail/config-f-1.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config-f-2.png b/tutorials/ai-core-data/img/ail/config-f-2.png deleted file mode 100644 index 8539e1c3a3..0000000000 Binary files a/tutorials/ai-core-data/img/ail/config-f-2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/config1.jpg b/tutorials/ai-core-data/img/ail/config1.jpg new file mode 100644 index 0000000000..814cdc18b1 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/config1.jpg differ diff --git a/tutorials/ai-core-data/img/ail/config2.jpg b/tutorials/ai-core-data/img/ail/config2.jpg new file mode 100644 index 0000000000..5a190375c7 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/config2.jpg differ diff --git a/tutorials/ai-core-data/img/ail/config21.jpg b/tutorials/ai-core-data/img/ail/config21.jpg new file mode 100644 index 0000000000..53b4d24269 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/config21.jpg differ diff --git a/tutorials/ai-core-data/img/ail/config22.jpg b/tutorials/ai-core-data/img/ail/config22.jpg new file mode 100644 index 0000000000..fd01aea6d1 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/config22.jpg differ diff --git a/tutorials/ai-core-data/img/ail/config3.jpg b/tutorials/ai-core-data/img/ail/config3.jpg new file mode 100644 index 0000000000..cf052c06dc Binary files /dev/null and b/tutorials/ai-core-data/img/ail/config3.jpg differ diff --git a/tutorials/ai-core-data/img/ail/create_conf.png b/tutorials/ai-core-data/img/ail/create_conf.png deleted file mode 100644 index dd2769de8e..0000000000 Binary files a/tutorials/ai-core-data/img/ail/create_conf.png and /dev/null differ diff 
--git a/tutorials/ai-core-data/img/ail/create_exec.png b/tutorials/ai-core-data/img/ail/create_exec.png deleted file mode 100644 index ecff6617f8..0000000000 Binary files a/tutorials/ai-core-data/img/ail/create_exec.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/dataset.png b/tutorials/ai-core-data/img/ail/dataset.png deleted file mode 100644 index 2e82e0f09d..0000000000 Binary files a/tutorials/ai-core-data/img/ail/dataset.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/dep_update1.png b/tutorials/ai-core-data/img/ail/dep_update1.png new file mode 100644 index 0000000000..65a509c443 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/dep_update1.png differ diff --git a/tutorials/ai-core-data/img/ail/dep_update2.jpg b/tutorials/ai-core-data/img/ail/dep_update2.jpg new file mode 100644 index 0000000000..8c6c35528e Binary files /dev/null and b/tutorials/ai-core-data/img/ail/dep_update2.jpg differ diff --git a/tutorials/ai-core-data/img/ail/dep_update3.jpg b/tutorials/ai-core-data/img/ail/dep_update3.jpg new file mode 100644 index 0000000000..1976ec5eda Binary files /dev/null and b/tutorials/ai-core-data/img/ail/dep_update3.jpg differ diff --git a/tutorials/ai-core-data/img/ail/deploy1.jpg b/tutorials/ai-core-data/img/ail/deploy1.jpg new file mode 100644 index 0000000000..f8f430c12a Binary files /dev/null and b/tutorials/ai-core-data/img/ail/deploy1.jpg differ diff --git a/tutorials/ai-core-data/img/ail/deploy2.jpg b/tutorials/ai-core-data/img/ail/deploy2.jpg new file mode 100644 index 0000000000..b0667fa8f5 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/deploy2.jpg differ diff --git a/tutorials/ai-core-data/img/ail/deploy3.png b/tutorials/ai-core-data/img/ail/deploy3.png new file mode 100644 index 0000000000..46f98cf6e0 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/deploy3.png differ diff --git a/tutorials/ai-core-data/img/ail/execute-1.png b/tutorials/ai-core-data/img/ail/execute-1.png 
deleted file mode 100644 index d614a33463..0000000000 Binary files a/tutorials/ai-core-data/img/ail/execute-1.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/output.png b/tutorials/ai-core-data/img/ail/output.png deleted file mode 100644 index 8d724705b6..0000000000 Binary files a/tutorials/ai-core-data/img/ail/output.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/output2.png b/tutorials/ai-core-data/img/ail/output2.png deleted file mode 100644 index 7a865e7378..0000000000 Binary files a/tutorials/ai-core-data/img/ail/output2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/placeholder.png b/tutorials/ai-core-data/img/ail/placeholder.png deleted file mode 100644 index bb1ccaac5d..0000000000 Binary files a/tutorials/ai-core-data/img/ail/placeholder.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/predict-url.jpg b/tutorials/ai-core-data/img/ail/predict-url.jpg new file mode 100644 index 0000000000..35d6b2991d Binary files /dev/null and b/tutorials/ai-core-data/img/ail/predict-url.jpg differ diff --git a/tutorials/ai-core-data/img/ail/resource1.jpg b/tutorials/ai-core-data/img/ail/resource1.jpg new file mode 100644 index 0000000000..187097b1c8 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/resource1.jpg differ diff --git a/tutorials/ai-core-data/img/ail/resource2.jpg b/tutorials/ai-core-data/img/ail/resource2.jpg new file mode 100644 index 0000000000..f3c50edf56 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/resource2.jpg differ diff --git a/tutorials/ai-core-data/img/ail/scenario.png b/tutorials/ai-core-data/img/ail/scenario.png deleted file mode 100644 index 5beb1b1fee..0000000000 Binary files a/tutorials/ai-core-data/img/ail/scenario.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch1.png b/tutorials/ai-core-data/img/ail/sch1.png deleted file mode 100644 index 6939b94982..0000000000 Binary files 
a/tutorials/ai-core-data/img/ail/sch1.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch2.png b/tutorials/ai-core-data/img/ail/sch2.png deleted file mode 100644 index 8ef678f544..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch3.png b/tutorials/ai-core-data/img/ail/sch3.png deleted file mode 100644 index 655c607741..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch3.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch4.png b/tutorials/ai-core-data/img/ail/sch4.png deleted file mode 100644 index b23da2178d..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch4.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch5.png b/tutorials/ai-core-data/img/ail/sch5.png deleted file mode 100644 index f94b88cfeb..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch5.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch6.png b/tutorials/ai-core-data/img/ail/sch6.png deleted file mode 100644 index 0d793d06fc..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch6.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/sch7.png b/tutorials/ai-core-data/img/ail/sch7.png deleted file mode 100644 index d410a16ad2..0000000000 Binary files a/tutorials/ai-core-data/img/ail/sch7.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/ail/stop.jpg b/tutorials/ai-core-data/img/ail/stop.jpg new file mode 100644 index 0000000000..e73b35e640 Binary files /dev/null and b/tutorials/ai-core-data/img/ail/stop.jpg differ diff --git a/tutorials/ai-core-data/img/ail/workflow-scn.png b/tutorials/ai-core-data/img/ail/workflow-scn.png deleted file mode 100644 index 38fecb4e04..0000000000 Binary files a/tutorials/ai-core-data/img/ail/workflow-scn.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aws-configure.png b/tutorials/ai-core-data/img/aws-configure.png deleted 
file mode 100644 index ed35086d01..0000000000 Binary files a/tutorials/ai-core-data/img/aws-configure.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aws-model.png b/tutorials/ai-core-data/img/aws-model.png deleted file mode 100644 index 240a9513d3..0000000000 Binary files a/tutorials/ai-core-data/img/aws-model.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aws-upload.png b/tutorials/ai-core-data/img/aws-upload.png deleted file mode 100644 index 17fad99f9c..0000000000 Binary files a/tutorials/ai-core-data/img/aws-upload.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/aws-upload2.png b/tutorials/ai-core-data/img/aws-upload2.png deleted file mode 100644 index 7b2bc1cc63..0000000000 Binary files a/tutorials/ai-core-data/img/aws-upload2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/code-docker.png b/tutorials/ai-core-data/img/code-docker.png deleted file mode 100644 index 0916c94e5a..0000000000 Binary files a/tutorials/ai-core-data/img/code-docker.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/code-main.png b/tutorials/ai-core-data/img/code-main.png deleted file mode 100644 index 2a96735bf3..0000000000 Binary files a/tutorials/ai-core-data/img/code-main.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/code-model.png b/tutorials/ai-core-data/img/code-model.png deleted file mode 100644 index bbf7054b49..0000000000 Binary files a/tutorials/ai-core-data/img/code-model.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/download.png b/tutorials/ai-core-data/img/download.png deleted file mode 100644 index 3ff8ad1b74..0000000000 Binary files a/tutorials/ai-core-data/img/download.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/pipeline.png b/tutorials/ai-core-data/img/pipeline.png deleted file mode 100644 index f8973d6f3f..0000000000 Binary files a/tutorials/ai-core-data/img/pipeline.png and /dev/null differ diff --git 
a/tutorials/ai-core-data/img/pipeline2.png b/tutorials/ai-core-data/img/pipeline2.png deleted file mode 100644 index 649c5da590..0000000000 Binary files a/tutorials/ai-core-data/img/pipeline2.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/postman/artifact.png b/tutorials/ai-core-data/img/postman/artifact.png deleted file mode 100644 index 02d676b2e4..0000000000 Binary files a/tutorials/ai-core-data/img/postman/artifact.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/postman/locate-artifact.png b/tutorials/ai-core-data/img/postman/locate-artifact.png deleted file mode 100644 index 222f4702d3..0000000000 Binary files a/tutorials/ai-core-data/img/postman/locate-artifact.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/postman/model.png b/tutorials/ai-core-data/img/postman/model.png deleted file mode 100644 index 5406c84203..0000000000 Binary files a/tutorials/ai-core-data/img/postman/model.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/postman/predict1.jpg b/tutorials/ai-core-data/img/postman/predict1.jpg new file mode 100644 index 0000000000..dc99978ad4 Binary files /dev/null and b/tutorials/ai-core-data/img/postman/predict1.jpg differ diff --git a/tutorials/ai-core-data/img/postman/s3-download.png b/tutorials/ai-core-data/img/postman/s3-download.png deleted file mode 100644 index 5e3aa7700e..0000000000 Binary files a/tutorials/ai-core-data/img/postman/s3-download.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/postman/s3.png b/tutorials/ai-core-data/img/postman/s3.png deleted file mode 100644 index ddcb998576..0000000000 Binary files a/tutorials/ai-core-data/img/postman/s3.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/save.png b/tutorials/ai-core-data/img/save.png deleted file mode 100644 index a9a0a123b2..0000000000 Binary files a/tutorials/ai-core-data/img/save.png and /dev/null differ diff --git a/tutorials/ai-core-data/img/workflow-env.png 
b/tutorials/ai-core-data/img/workflow-env.png deleted file mode 100644 index 18fb7d9182..0000000000 Binary files a/tutorials/ai-core-data/img/workflow-env.png and /dev/null differ diff --git a/tutorials/ai-core-deploy/ai-core-deploy.md b/tutorials/ai-core-deploy/ai-core-deploy.md index ebd3019b39..daf9eea8d3 100644 --- a/tutorials/ai-core-deploy/ai-core-deploy.md +++ b/tutorials/ai-core-deploy/ai-core-deploy.md @@ -17,9 +17,9 @@ If you are an SAP Developer or SAP employee, please refer to the following links [How to create a BTP Account (internal)](https://me.sap.com/notes/3493139) [SAP AI Core](https://help.sap.com/docs/sap-ai-core?version=INTERNAL&locale=en-US&state=PRODUCTION) If you are an external developer or a customer or a partner kindly refer to this [tutorial](https://developers.sap.com/tutorials/btp-cockpit-entitlements.html) -- You have connected code to the AI workflows of SAP AI Core using [this tutorial](ai-core-code). -- You have trained a model using SAP AI Core, such as the house price predictor model in [this tutorial](ai-core-data), or your own model trained in your local system. If you trained your own local model, follow [this tutorial](ai-core-tensorflow-byod) to use it with SAP AI Core. -- You know how to locate artifacts. This is explained in [this tutorial](ai-core-data). + - You have connected code to the AI workflows of SAP AI Core using [this tutorial](ai-core-code). + - You have trained a model using SAP AI Core, such as the house price predictor model in [this tutorial](ai-core-data), or your own model trained in your local system. If you trained your own local model, follow [this tutorial](ai-core-tensorflow-byod) to use it with SAP AI Core. + - You know how to locate artifacts. This is explained in [this tutorial](ai-core-data). ## You will learn - How to create a deployment server for an AI model @@ -32,7 +32,7 @@ You will create a deployment server for AI models to use in online inferencing.
The deployment server demonstrated in this tutorial can only be used in the backend of your AI project. For security reasons, in your real set up you will not be able to directly make prediction calls from your front end application to the deployment server. Doing so will lead to an inevitable Cross-origin Resource Sharing (CORS) error. As a temporary resolution, please deploy another application between your front end application and this deployment server. This middle application should use the SAP AI Core SDK (python package) to make calls to the deployment server. -Please find downloadable sample notebooks for the tutorials : . Note that these tutorials are for demonstration purposes only and should not be used in production environments. To execute them properly, you'll need to set up your own S3 bucket or provision services from BTP, including an AI Core with a standard plan for narrow AI and an extended plan for GenAI HUB. Ensure you input the service keys of these services into the relevant cells of the notebook. +Please find downloadable sample notebooks for the tutorials : . Note that these tutorials are for demonstration purposes only and should not be used in production environments. To execute them properly, you'll need to set up your own S3 bucket or provision services from BTP, including an AI Core with a standard plan for narrow AI and an extended plan for Generative AI Hub. Ensure you input the service keys of these services into the relevant cells of the notebook. [Link to notebook](https://github.com/SAP-samples/ai-core-samples/blob/main/02_ai_core/tutorials/01_create_your_first_machine_learning_project_using_sap_ai_core/01_05_make_predictions_for_house_prices_with_sap_ai_core/make-prediction.ipynb) --- @@ -339,7 +339,7 @@ Paste and edit the snippet below. 
You should use the configuration ID generated ```PYTHON response = ai_core_client.deployment.create( - configuration_id="YOUR_CONFIGURATIO_ID", + configuration_id="YOUR_CONFIGURATION_ID", resource_group='default' ) @@ -559,10 +559,10 @@ print(response.__dict__) ### Check Running Resources (optional) -You can check the Current running Pods Using in AI Lauchpad Choosing the Deployment and clicking on Scaling tab +You can check the currently running pods in AI Launchpad by choosing the deployment and clicking on the Scaling tab. ![resource](img/ail/resource1.jpg) -Similary if you want to check for resource plan just visit the resources tab +Similarly, if you want to check the resource plan, visit the Resources tab. -![resource](img/ail/resource2.jpg) +![resource](img/ail/resource2.jpg) diff --git a/tutorials/ai-core-deploy/img/ail/dep_update1.png b/tutorials/ai-core-deploy/img/ail/dep_update1.png new file mode 100644 index 0000000000..65a509c443 Binary files /dev/null and b/tutorials/ai-core-deploy/img/ail/dep_update1.png differ diff --git a/tutorials/ai-core-deploy/img/ail/dep_update3.jpg b/tutorials/ai-core-deploy/img/ail/dep_update3.jpg new file mode 100644 index 0000000000..1976ec5eda Binary files /dev/null and b/tutorials/ai-core-deploy/img/ail/dep_update3.jpg differ diff --git a/tutorials/ai-core-deploy/img/ail/deploy3.png b/tutorials/ai-core-deploy/img/ail/deploy3.png new file mode 100644 index 0000000000..46f98cf6e0 Binary files /dev/null and b/tutorials/ai-core-deploy/img/ail/deploy3.png differ diff --git a/tutorials/ai-core-deploy/img/ail/resource1.jpg b/tutorials/ai-core-deploy/img/ail/resource1.jpg new file mode 100644 index 0000000000..187097b1c8 Binary files /dev/null and b/tutorials/ai-core-deploy/img/ail/resource1.jpg differ diff --git a/tutorials/ai-core-deploy/img/ail/resource2.jpg b/tutorials/ai-core-deploy/img/ail/resource2.jpg new file mode 100644 index 0000000000..f3c50edf56 Binary files /dev/null and
b/tutorials/ai-core-deploy/img/ail/resource2.jpg differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/ai-core-prompt-registry.md b/tutorials/ai-core-genaihub-prompt-registry/ai-core-prompt-registry.md new file mode 100644 index 0000000000..973bbc0f32 --- /dev/null +++ b/tutorials/ai-core-genaihub-prompt-registry/ai-core-prompt-registry.md @@ -0,0 +1,350 @@ +--- +parser: v2 +auto_validation: true +time: 45 +primary_tag: software-product>sap-business-technology-platform +tags: [ tutorial>beginner, topic>artificial-intelligence, topic>machine-learning, software-product>sap-business-technology-platform ] +author_name: Smita Naik +author_profile: https://github.com/I321506 +--- + +# Leveraging Prompt Registry for Seamless Orchestration + In this tutorial, we will explore the Prompt Registry feature within Generative AI Hub, focusing on how to efficiently manage and utilize prompt templates in orchestration. You will learn how to register, sync, and integrate prompt templates into the orchestration workflow, ensuring dynamic and structured interactions with AI models. + +## You will learn +- How to register and sync a prompt template in Generative AI Hub. +- How to consume the registered template within an orchestration workflow. +- How to apply Grounding techniques, Data Masking, and Content Filtering to refine responses. + +## Prerequisites +- Setup Environment: +Ensure your instance and AI Core credentials are properly configured according to the steps provided in the initial tutorial +- Orchestration Deployment: +Ensure at least one orchestration deployment is ready to be consumed during this process. 
+- Refer to [this tutorial](https://developers.sap.com/tutorials/ai-core-orchestration-consumption.html) to understand the basic consumption of GenAI models using orchestration. +- Basic Knowledge: +Familiarity with the orchestration workflow is recommended. + +### Prompt Registry + +[OPTION BEGIN [AI Launchpad]] + +A **Prompt Registry** is a centralized system for storing, managing, and versioning prompt templates used in AI-driven applications. It allows developers and teams to reuse, modify, and track changes in prompts efficiently. This is particularly useful in large-scale AI projects where prompts need to be standardized, refined, and deployed across different models or scenarios. + +**Why Use a Prompt Registry?** + +- **Consistency** – Ensures uniform prompts across different use cases. + +- **Version Control** – Tracks prompt iterations and allows rollback if needed. + +- **Collaboration** – Enables teams to work on prompt engineering collaboratively. + +- **Automation** – Integrates prompts seamlessly into AI workflows and CI/CD pipelines. + +There are two key approaches to managing prompts in a **Prompt Registry**: + +1. **Imperative API (Direct API Control for Dynamic Prompt Management)**: The Imperative API allows you to create, update, and manage prompt templates dynamically via API calls. This approach is best suited for interactive design-time use cases, where you need to iteratively refine prompts and track their versions. Each change is explicitly made via CRUD operations, and you can manage versions manually. +2. **Declarative API (Git-based Sync for Automated Prompt Management)**: The Declarative API, on the other hand, integrates with SAP AI Core applications and is ideal for CI/CD pipelines. Instead of managing templates through direct API interactions, you define them as YAML files in a Git repository. The system automatically syncs these templates, ensuring that updates are seamlessly reflected in the prompt registry without manual intervention.
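To make the imperative flavor concrete, here is a toy, in-memory illustration of the versioning semantics described above (create, iterate, roll back). This is purely conceptual — it is not the SAP AI Core prompt-registry API, whose actual endpoints are documented separately:

```python
class ToyPromptRegistry:
    """Conceptual stand-in for a prompt registry: every change is a new, tracked version."""

    def __init__(self):
        self._history = {}  # template name -> ordered list of (version, spec)

    def register(self, name, version, spec):
        """Create or update a template; each call records a new version."""
        self._history.setdefault(name, []).append((version, spec))

    def latest(self, name):
        """The version a consumer would pick up by default."""
        return self._history[name][-1]

    def rollback(self, name, version):
        """Re-promote an earlier version by recording it as the newest entry."""
        spec = next(s for v, s in self._history[name] if v == version)
        self._history[name].append((version, spec))
        return spec

registry = ToyPromptRegistry()
registry.register("multi_task", "1.1.1", [{"role": "system", "content": "{{ ?instruction }}"}])
registry.register("multi_task", "1.1.2", [{"role": "system", "content": "{{ ?instruction }} Be concise."}])
print("latest:", registry.latest("multi_task")[0])

registry.rollback("multi_task", "1.1.1")  # undo the refinement, keeping full history
print("after rollback:", registry.latest("multi_task")[0])
```

The declarative approach delivers the same guarantees, but the "register" step is a Git commit and the sync to the registry happens automatically.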
+ +Next, let's dive into the Declarative approach to creating a prompt template. + +[OPTION END] + +### Create a Prompt Template (Declarative) + +[OPTION BEGIN [AI Launchpad]] + +- The declarative approach allows you to manage prompt templates using Git repositories, ensuring automatic synchronization with the Prompt Registry. Instead of making API calls to create and update templates manually, you define them in YAML files, commit them to Git, and let the system handle synchronization. + +- Create a prompt template and push it to your Git repository. The file must be named in the following format: “**your-template-name**.prompttemplate.ai.sap.yaml”. + +- YAML File Structure: Copy the code below + +```YAML + +name: multi_task +version: 1.1.1 +scenario: multi-task-processing +spec: + template: + - role: "system" + content: "{{ ?instruction }}" + - role: "user" + content: "Take {{ ?user_input }} from here" + defaults: + instruction: "default instruction" + user_input: "default user input" + additionalFields: + isDev: true + validations: + required: true + blockedModels: + - name: "gpt-4" + versions: "gpt-4-vision" + - name: "gpt-4o" + versions: "*" + +``` +![img](img/image001.png) + +**Note** - The defaults and additionalFields fields are optional. The additionalFields field is unstructured and can be used to store metadata or configuration objects. Refer to the screenshot above for reference. + +- Once the YAML file is created and pushed to Git, the system automatically syncs it with the Prompt Registry. + +[OPTION END] + +### Onboarding GitHub and Application on AI Core + +[OPTION BEGIN [AI Launchpad]] + +- Select your SAP AI Core connection under Workspaces app in the SAP AI Launchpad. Under the **Git Repositories** section in **AI Core Administration app**, click **Add**. + +![img](img/image003.png) + +**Note:** If you don’t see the AI Core Administration app, check that you have selected your SAP AI Core connection from the Workspaces app.
If it is still not visible, ask your SAP AI Launchpad administrator to assign roles to you so that you can access the app. + +![img](img/img_1.png) +**Enter your GitHub Repository Details** + +Use the following information as reference: + +- **URL**: Paste the URL of your GitHub repository. + - Example: **https://github.tools.sap/your-username/your-repository** + +- **Username**: Your GitHub username. + - Example: **johndoe** + +- **Password**: Paste your GitHub Personal Access Token. Follow the steps below to create the access token. + +![img](img/img_2.png) +![img](img/img_3.png) +![img](img/img_4.png) +![img](img/img_5.png) +![img](img/img_6.png) +![img](img/img_7.png) +![img](img/img_8.png) + +**Note:** The password is not validated when you add the GitHub repository; it is only stored as your GitHub credentials in AI Core. It is validated when you create an application, or when an application refreshes its connection to AI Core. + +You will see your GitHub onboarding completed in a few seconds. As a next step, we will enable an application on AI Core. + +- Go to your **SAP AI Launchpad**. In the **AI Core Administration app**, click **Applications > Create**. + +![img](img/image007.png) + +- Using the reference below as a guide, specify the details of your application. This form will create your application on your SAP AI Launchpad. + +- Use the following information for reference: + - **Application Name**: An identifier of your choice. + - **Repository URL**: Your GitHub account URL and repository suffix. This helps you select the credentials to access the repository. + - **Path**: The folder in your GitHub where your workflow is located. For this tutorial, it is LearningScenarios. + - **Revision**: This is the unique ID of your GitHub commit. Set this to **HEAD** to have it automatically refer to the latest commit.
**Click on the application you created, then select 'Sync' to synchronize your changes.**
![img](img/image009.png)

After synchronization, navigate to **ML Operations > Scenarios** in the **SAP AI Core Launchpad** and verify your scenario by checking the name specified in your YAML file.

![img](img/image025.png)

[OPTION END]

### Verifying and Consuming the Prompt Template

[OPTION BEGIN [AI Launchpad]]

Once the template is synced to the **AI Core Launchpad**, follow these steps to integrate it into your orchestration:

- Navigate to Generative AI Hub and select the Edit Workflow option. Then, disable the Grounding module.

![img](img/image027.png)

- Click on the Template tab, click on the Select icon, and choose your synced template from the list.

![img](img/image028.png)

- Configure Data Masking by selecting the categories of sensitive information (e.g., Name, Organization) that need to be masked.

![img](img/image029.png)

- Set Input Filtering thresholds for content moderation categories such as Hate, Self-Harm, Sexual, and Violence. Adjust the settings to Allow Safe and Low / Block Medium and High as needed.

![img](img/image030.png)

- Select Model Configuration by choosing the appropriate model for orchestration.

![img](img/image031.png)

- Set Output Filtering using the same threshold settings as input filtering to ensure consistency in moderated responses.

![img](img/image032.png)

- Once all configurations are complete, click Test to validate your orchestration workflow.
  - Instruction: "Provide a brief explanation of SAP AI Core and its key functionalities."
  - User Input: "What are the main capabilities of SAP AI Core?"

- After entering these values, execute the test to verify the response. The system should return relevant details based on your configured prompt template and filtering settings.
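The test values above are substituted into the template's `{{ ?… }}` placeholders, with the `defaults` declared in the YAML used when no value is supplied. The substitution semantics can be illustrated with a small sketch (this is an illustration of the behavior, not the actual orchestration service code):

```python
import re

def render(template: str, values: dict, defaults: dict) -> str:
    # Replace each {{ ?name }} placeholder with the supplied value,
    # falling back to the template's declared default; unknown
    # placeholders are left untouched.
    def substitute(match):
        name = match.group(1)
        return str(values.get(name, defaults.get(name, match.group(0))))
    return re.sub(r"\{\{\s*\?(\w+)\s*\}\}", substitute, template)

defaults = {"instruction": "default instruction",
            "user_input": "default user input"}

print(render("Take {{ ?user_input }} from here",
             {"user_input": "the summary"}, defaults))
# Take the summary from here
print(render("{{ ?instruction }}", {}, defaults))
# default instruction
```

This is why the `defaults` section matters: a test run that omits a variable still produces a well-formed prompt instead of a literal placeholder.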
![img](img/image033.png)

[OPTION END]

### Prompt Templates for Different Use Cases and Reusability

[OPTION BEGIN [AI Launchpad]]

**In Step 4**, we experimented with a single prompt. Now, let's explore some predefined prompt templates designed for various tasks.

To proceed:
- **Go to the Git repository** and edit the YAML file.

- Keep only the following three fields **constant** in the YAML file:

  - **name**

  - **version** (Ensure you increment the version, e.g., from 1.1.1 to 1.1.2, when making updates.)

  - **scenario**

- **Copy and paste** the relevant prompt templates from below into the YAML file. Modify only the **spec** section of the **YAML** file while keeping other sections unchanged.

![img](img/image034.png)
- **Save the file and sync** it to the application.

**NOTE:** Refer to Step 4 for details on modifying the YAML file in Git, syncing it with the application, and ensuring the changes are reflected correctly.

- **Test the different tasks** using these templates to see how they adapt to different use cases.

**Note:** This section provides reusable prompt templates designed for various use cases in SAP AI Launchpad. Each template follows a structured format to ensure consistent and accurate outputs. Below are the prompt templates for different NLP tasks.

#### The Prompt Template Resource

**Template for Text Expansion**

```YAML
spec:
  template:
    - role: "system"
      content: |
        Expand the following short text into a detailed explanation.
        Return output as:
        Expanded Text: {{ expanded_output }}
    - role: "user"
      content: "Text: {{ ?short_text }}"
  additionalFields:
    isDev: true
    validations:
      required: true
```
**Template for Multi-Task Processing**
```YAML
spec:
  template:
    - role: "system"
      content: |
        Perform multiple tasks at once: Detect language and translate to English.
        Respond in the following format:
        Language:
        Converted to English:
    - role: "user"
      content: "Text: {{ ?input_text }}"
  additionalFields:
    isDev: true
    validations:
      required: true
```

**Template for Spell Check and Correction**

```YAML
spec:
  template:
    - role: "system"
      content: "Correct any spelling and grammatical errors in the given text. Corrected Text: {{ corrected_output }}"
    - role: "user"
      content: "{{ ?input_text }}"
  defaults:
    input_text: "default input text"
  additionalFields:
    isDev: true
    validations:
      required: true
```

**Template for Sentiment Analysis**

```YAML
spec:
  template:
    - role: "system"
      content: |
        Classify the sentiment of the given text.
        Respond in the following format:
        Sentiment: {{ classification_output }}
    - role: "user"
      content: "Text: {{ ?input_text }}"
  additionalFields:
    isDev: true
    validations:
      required: true
```

**Template for Text Summarization**

```YAML
spec:
  template:
    - role: "system"
      content: |
        Summarize the following text.
        Respond in the following format:
        Summary: {{ summary_output }}
    - role: "user"
      content: "Text: {{ ?input_text }}"
  additionalFields:
    isDev: true
    validations:
      required: true
```

**Template for Tone Adjustment**

```YAML
spec:
  template:
    - role: "system"
      content: |
        Translate the following input into corporate language.
        Respond in the following format:
        Corporate Version: {{ corporate_output }}
    - role: "user"
      content: "Text: {{ ?input_text }}"
  additionalFields:
    isDev: true
    validations:
      required: true
```

**Template for Question Answering**

```YAML
spec:
  template:
    - role: "system"
      content: |
        Answer the question based on the given context.
+ Respond in the following format: + Answer: {{ answer_output }} + - role: "user" + content: | + Context: {{ ?context }} + Question: {{ ?question }} + additionalFields: + isDev: true + validations: + required: true +``` +**NOTE:** If required, you can create a new YAML file for different tasks instead of modifying the existing one. This helps maintain clarity and version control. + +[OPTION END] + diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image001.png b/tutorials/ai-core-genaihub-prompt-registry/img/image001.png new file mode 100644 index 0000000000..97fb216fd5 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image001.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image003.png b/tutorials/ai-core-genaihub-prompt-registry/img/image003.png new file mode 100644 index 0000000000..73eb90ba2a Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image003.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image005.png b/tutorials/ai-core-genaihub-prompt-registry/img/image005.png new file mode 100644 index 0000000000..3b9376b206 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image005.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image007.png b/tutorials/ai-core-genaihub-prompt-registry/img/image007.png new file mode 100644 index 0000000000..f7f5c76e5e Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image007.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image009.png b/tutorials/ai-core-genaihub-prompt-registry/img/image009.png new file mode 100644 index 0000000000..18908328fe Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image009.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image011.png b/tutorials/ai-core-genaihub-prompt-registry/img/image011.png new file mode 100644 index 0000000000..bd2d5133a0 Binary 
files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image011.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image013.png b/tutorials/ai-core-genaihub-prompt-registry/img/image013.png new file mode 100644 index 0000000000..8bb5ba51aa Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image013.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image015.png b/tutorials/ai-core-genaihub-prompt-registry/img/image015.png new file mode 100644 index 0000000000..b92d5f5d68 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image015.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image017.png b/tutorials/ai-core-genaihub-prompt-registry/img/image017.png new file mode 100644 index 0000000000..0678e5cabe Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image017.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image019.png b/tutorials/ai-core-genaihub-prompt-registry/img/image019.png new file mode 100644 index 0000000000..e7b2749baf Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image019.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image021.png b/tutorials/ai-core-genaihub-prompt-registry/img/image021.png new file mode 100644 index 0000000000..3f34ad289d Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image021.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image023.png b/tutorials/ai-core-genaihub-prompt-registry/img/image023.png new file mode 100644 index 0000000000..8b519ab803 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image023.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image025.png b/tutorials/ai-core-genaihub-prompt-registry/img/image025.png new file mode 100644 index 0000000000..b0d06f8609 Binary files /dev/null and 
b/tutorials/ai-core-genaihub-prompt-registry/img/image025.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image026.png b/tutorials/ai-core-genaihub-prompt-registry/img/image026.png new file mode 100644 index 0000000000..b211654ac7 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image026.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image027.png b/tutorials/ai-core-genaihub-prompt-registry/img/image027.png new file mode 100644 index 0000000000..3442a8c11d Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image027.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image028.png b/tutorials/ai-core-genaihub-prompt-registry/img/image028.png new file mode 100644 index 0000000000..0d6f8a57cd Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image028.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image029.png b/tutorials/ai-core-genaihub-prompt-registry/img/image029.png new file mode 100644 index 0000000000..bf58978294 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image029.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image030.png b/tutorials/ai-core-genaihub-prompt-registry/img/image030.png new file mode 100644 index 0000000000..45a05b1c0d Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image030.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image031.png b/tutorials/ai-core-genaihub-prompt-registry/img/image031.png new file mode 100644 index 0000000000..17fa3b190a Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image031.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image032.png b/tutorials/ai-core-genaihub-prompt-registry/img/image032.png new file mode 100644 index 0000000000..510b1de7fe Binary files /dev/null and 
b/tutorials/ai-core-genaihub-prompt-registry/img/image032.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image033.png b/tutorials/ai-core-genaihub-prompt-registry/img/image033.png new file mode 100644 index 0000000000..fc1f4df455 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image033.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/image034.png b/tutorials/ai-core-genaihub-prompt-registry/img/image034.png new file mode 100644 index 0000000000..22c12452ce Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/image034.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_1.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_1.png new file mode 100644 index 0000000000..752501e180 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_1.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_10.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_10.png new file mode 100644 index 0000000000..c1bd14144d Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_10.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_2.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_2.png new file mode 100644 index 0000000000..d4c5e896bb Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_2.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_3.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_3.png new file mode 100644 index 0000000000..6b12227cd2 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_3.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_4.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_4.png new file mode 100644 index 0000000000..08641f2c65 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_4.png differ diff 
--git a/tutorials/ai-core-genaihub-prompt-registry/img/img_5.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_5.png new file mode 100644 index 0000000000..c02ab4ed2b Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_5.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_6.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_6.png new file mode 100644 index 0000000000..97b5889061 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_6.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_7.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_7.png new file mode 100644 index 0000000000..d9f81eb226 Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_7.png differ diff --git a/tutorials/ai-core-genaihub-prompt-registry/img/img_8.png b/tutorials/ai-core-genaihub-prompt-registry/img/img_8.png new file mode 100644 index 0000000000..1ff3a447ad Binary files /dev/null and b/tutorials/ai-core-genaihub-prompt-registry/img/img_8.png differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_appgen.png b/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_appgen.png deleted file mode 100644 index 78bc7826b0..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_appgen.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_entityselect.png b/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_entityselect.png deleted file mode 100644 index c9e842a6d1..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/SAPUI5freestyle_entityselect.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/btp-app-create-ui-freestyle-sapui5.md b/tutorials/btp-app-create-ui-freestyle-sapui5/btp-app-create-ui-freestyle-sapui5.md deleted file mode 100644 index 0cb79a7c2e..0000000000 --- 
a/tutorials/btp-app-create-ui-freestyle-sapui5/btp-app-create-ui-freestyle-sapui5.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -author_name: Mahati Shankar -author_profile: https://github.com/smahati -title: Create a UI Using Freestyle SAPUI5 -description: This tutorial shows you how to create a Freestyle SAPUI5 app on top of your CAP application. -keywords: cap -auto_validation: true -time: 20 -tags: [ tutorial>beginner, software-product-function>sap-cloud-application-programming-model, programming-tool>node-js, software-product>sap-business-technology-platform, software-product>sap-fiori-tools, software-product>sapui5] -primary_tag: software-product-function>sap-cloud-application-programming-model ---- - -## Prerequisites - - Before you start with this tutorial, you have two options: - - Follow the instructions in **Step 16: Start from an example branch** of [Prepare Your Development Environment for CAP](btp-app-prepare-dev-environment-cap) to checkout the [`cap-business-logic`](https://github.com/SAP-samples/cloud-cap-risk-management/tree/cap-business-logic) branch. - - Complete the previous tutorial [Add Business Logic to Your Application](btp-app-cap-business-logic) with all its prerequisites. - - - -## Details -### You will learn - - How to create a Freestyle SAPUI5 app on top of your CAP application - - How to start the application - ---- -> This tutorial will soon be phased out. 
-> -> For more tutorials about how to develop and deploy a full stack CAP application on SAP BTP, see: -> -> - [Develop a Full-Stack CAP Application Following SAP BTP Developer’s Guide](https://developers.sap.com/group.cap-application-full-stack.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Cloud Foundry Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-application.html) -> - [Deploy a Full-Stack CAP Application in SAP BTP, Kyma Runtime Following SAP BTP Developer’s Guide](https://developers.sap.com/group.deploy-full-stack-cap-kyma-runtime.html) -> -> To continue learning how to implement business applications on SAP BTP, see: -> -> - [SAP BTP Developer’s Guide](https://help.sap.com/docs/btp/btp-developers-guide/what-is-btp-developers-guide?version=Cloud&locale=en-US) -> - [Related Hands-On Experience](https://help.sap.com/docs/btp/btp-developers-guide/related-hands-on-experience?version=Cloud&locale=en-US) -> - [Tutorials for ABAP Cloud](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-abap-cloud?version=Cloud&locale=en-US) -> - [Tutorials for SAP Cloud Application Programming Model](https://help.sap.com/docs/btp/btp-developers-guide/tutorials-for-sap-cloud-application-programming-model?version=Cloud&locale=en-US) - -[ACCORDION-BEGIN [Step 1: ](SAP Fiori elements application vs. freestyle UI5 application)] -What is the difference between a freestyle SAPUI5 app and the SAP Fiori elements based application that you have already built in the tutorial [Create an SAP Fiori Elements-Based UI](btp-app-create-ui-fiori-elements)? As mentioned, both the freestyle app and the SAP Fiori elements app are based on SAPUI5. 
- -SAP Fiori elements app: - -- is built with SAPUI5 where most of the code resides outside your own development project in central components -- much of its logic is controlled by metadata from your OData service -- standard use cases available out of the box -- there are options to adjust your application outside of the possibilities given you via metadata with the so-called "Flexible Programming Model" - -Freestyle UI5 app: - -- lives mainly in your own project - all the views and controllers are in it -- still comes with all the features of SAPUI5 (super rich SAP Fiori compliant [set of UI controls](https://sapui5.hana.ondemand.com/#/controls), [data binding](https://sapui5.hana.ondemand.com/#/topic/e5310932a71f42daa41f3a6143efca9c), [model view controller](https://sapui5.hana.ondemand.com/#/topic/91f233476f4d1014b6dd926db0e91070), and so on) -- you can do what you need to do using SAPUI5, third party, and open-source components -- greater amount of work for standard use cases because you have to program them yourself but also greater freedom and optimization - -Fortunately, you also have a choice of several templates that get your application kick started for freestyle UI5. They copy the initial code into your project and any change necessary for the app can be done manually by you in the code. - -[VALIDATE_1] -[ACCORDION-END] ---- -[ACCORDION-BEGIN [Step 2: ](Creating the application)] -1. In VS Code, invoke the Command Palette ( **View** → **Command Palette** or **⇧⌘P** for macOS / Ctrl + Shift + P for Windows) and choose **Fiori: Open Application Generator**. - - - > In case you get an error launching the SAP Fiori application generator, refer to the [FAQ](https://help.sap.com/viewer/42532dbd1ebb434a80506113970f96e9/Latest/en-US) to find a solution. - -2. Choose template type **Deprecated Templates** and template **SAP Fiori Worklist Application**. - - ![SAPUI5 freestyle](createSAPUI5freestyle_app.png) - - -4. Choose **Next**. - -5. 
In the next dialog, choose **Use a Local CAP Project** and choose your current **`cpapp`** project. - - > In case you get the error: `Node module @sap/cds isn't found. Please install it and try again.` - - > This is an issue with the SAP Fiori application generator not finding the corresponding CAP modules, due to different repositories. This should be a temporary issue. For the meantime you can work around it by opening a command line and running the following command: - - > ```bash - > npm install --global @sap/cds-dk --@sap:registry=https://npmjs.org/ - > ``` - - > See the [CAP Troubleshooting guide](https://cap.cloud.sap/docs/advanced/troubleshooting#npm-installation) for more details. - -5. Select the **`RiskService (Node.js)`** as the OData service and choose **Next**. - - ![CAPpro](datasourceselection.png) - -6. On the **Entity Selection** screen, select the following values and choose **Next**. - - ![Entity Selection](SAPUI5freestyle_entityselect.png) - -7. Enter `mitigations` as the module name and `Mitigations` as the application title. - -8. Enter `ns` as the namespace and `Mitigations` as the description for the application. - -9. Leave the default values for all other settings. - -9. Choose **Finish** to generate the application. - - ![Project Names Miti](SAPUI5freestyle_appgen.png) - -[DONE] -[ACCORDION-END] ---- -[ACCORDION-BEGIN [Step 3: ](Starting the application)] -1. Make sure `cds watch` is still running in the project root folder: - - ```Shell/Bash - cds watch - ``` - -2. Open the URL . - - You now see a new HTML page. - - !![UI5 App](freestylelaunch.png) - -3. Choose the `/mitigations/webapp/index.html` entry. - - !![UI5 App IDs](freestyleguidids.png) - - As a result, you can see a list but you can only see the IDs of the mitigations both in the list view and on the detail page. This is because the freestyle template only got the information from you that the `Object Collection ID` is the `ID` property of the `mitigations` service. 
You now need to add additional SAPUI5 controls that are bound to additional properties of the `mitigations` service. - -4. Open the view of the work list `app/mitigations/webapp/view/Worklist.view.xml` and add the following code, removing the `ID` and `tableUnitNumberColumnTitle` columns and instead adding `Description`, `Owner` and `Timeline` columns: - - ```XML[2-10,19-23] - - - - - - - - - - - - - - - - - - - - - - ``` - -5. Open the view of the object `app/mitigations/webapp/view/Object.view.xml` and also replace `ID` and add `Description`, `Owner`, and `Timeline` using SAPUI5 controls like `ObjectStatus` (you can copy the whole code and replace the existing code in the file): - - ```XML[4,16,28-34] - - - - - - - </semantic:titleHeading> - - <semantic:headerContent> - <ObjectNumber - /> - </semantic:headerContent> - - <semantic:sendEmailAction> - <semantic:SendEmailAction id="shareEmail" press=".onShareEmailPress"/> - </semantic:sendEmailAction> - - <semantic:content> - <l:VerticalLayout> - <ObjectStatus title="Description" text="{description}"/> - <ObjectStatus title="Owner" text="{owner}"/> - <ObjectStatus title="Timeline" text="{timeline}"/> - </l:VerticalLayout> - </semantic:content> - - - </semantic:SemanticPage> - - </mvc:View> - ``` - -6. Refresh the `mitigations` application in your browser. - - You can now see the new content in the work list ... - - !![SAPUI5 App List](freestyleui5list.png) - - ... as well as in the detail object page. - - !![SAPUI5 App Object](freestyleui5object.png) - -[DONE] -The result of this tutorial can be found in the [`create-ui-freestyle-sapui5`](https://github.com/SAP-samples/cloud-cap-risk-management/tree/create-ui-freestyle-sapui5) branch. 
- - -[ACCORDION-END] ---- \ No newline at end of file diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/createSAPUI5freestyle_app.png b/tutorials/btp-app-create-ui-freestyle-sapui5/createSAPUI5freestyle_app.png deleted file mode 100644 index 8b5c5bfa12..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/createSAPUI5freestyle_app.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/datasourceselection.png b/tutorials/btp-app-create-ui-freestyle-sapui5/datasourceselection.png deleted file mode 100644 index f834473be1..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/datasourceselection.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleguidids.png b/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleguidids.png deleted file mode 100644 index 2bdc59670e..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleguidids.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/freestylelaunch.png b/tutorials/btp-app-create-ui-freestyle-sapui5/freestylelaunch.png deleted file mode 100644 index b3ecb3eb7b..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/freestylelaunch.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5list.png b/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5list.png deleted file mode 100644 index 21799c8aa1..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5list.png and /dev/null differ diff --git a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5object.png b/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5object.png deleted file mode 100644 index d3aaae3504..0000000000 Binary files a/tutorials/btp-app-create-ui-freestyle-sapui5/freestyleui5object.png and /dev/null differ diff --git a/tutorials/cp-trial-quick-onboarding/cp-trial-quick-onboarding.md 
b/tutorials/cp-trial-quick-onboarding/cp-trial-quick-onboarding.md index 5c12494278..a395743411 100644 --- a/tutorials/cp-trial-quick-onboarding/cp-trial-quick-onboarding.md +++ b/tutorials/cp-trial-quick-onboarding/cp-trial-quick-onboarding.md @@ -157,3 +157,4 @@ Here are some of the tasks you can use the CLI for: - Subscribing to applications To find out more about the btp CLI, you can have a look at this tutorial: [Get Started with the SAP BTP command line interface (btp CLI)](cp-sapcp-getstarted). + \ No newline at end of file diff --git a/tutorials/tutorial-first-steps/tutorial-first-steps.md b/tutorials/tutorial-first-steps/tutorial-first-steps.md index 131ba43af2..08d13fc81a 100644 --- a/tutorials/tutorial-first-steps/tutorial-first-steps.md +++ b/tutorials/tutorial-first-steps/tutorial-first-steps.md @@ -4,8 +4,8 @@ auto_validation: true time: 10 tags: [ tutorial>beginner ] primary_tag: topic>sap-community -author_name: Joshua Margo -author_profile: https://github.com/jmmargo +author_name: Riley Rainey +author_profile: https://github.com/rbrainey --- # Get to Know SAP Tutorials @@ -197,7 +197,7 @@ Select **View all badges**. ### What other achievements are there? -You can also see a list of key achievements that can you obtain, like doing your first tutorial or completing 3 missions. +You can also see a list of key achievements that you can obtain, like doing your first tutorial or completing 3 missions. Go back to the [developers home page](https://developers.sap.com), and make sure you are logged in with your SAP ID (top right). Scroll down a little and on your right are your achievements.