diff --git a/tutorials/ai-core-genai-hana-vector/ai-core-genai-hana-vector.md b/tutorials/ai-core-genai-hana-vector/ai-core-genai-hana-vector.md
index 395ab5373d..93a11a6f7e 100644
--- a/tutorials/ai-core-genai-hana-vector/ai-core-genai-hana-vector.md
+++ b/tutorials/ai-core-genai-hana-vector/ai-core-genai-hana-vector.md
@@ -13,136 +13,141 @@ author_profile: https://github.com/dhrubpaul
## Prerequisites
-- Access to SAP AI core with SAP extended plan.
-- Access to a Hana DB instance (you can refer to [this tutorial](https://developers.sap.com/tutorials/hana-cloud-deploying.html))
+- Access to SAP AI Core with the SAP extended plan.
- Have python3 installed in your system.
- Have generative-ai-hub-sdk installed in your system.
-- Have hana-ml installed in your system.
## You will learn
- How to create a table and store embeddings in HANA Vector Store.
- How to use the embeddings in Retrieval Augmented Generation.
Please find downloadable sample notebooks for the tutorials below. Note that these tutorials are for demonstration purposes only and should not be used in production environments. To execute them properly, you'll need to set up your own S3 bucket or provision services from BTP, including an AI Core instance with a standard plan for narrow AI and an extended plan for GenAI Hub. Ensure you input the service keys of these services into the relevant cells of the notebook.
-[Link to notebook](https://github.com/SAP-samples/ai-core-samples/blob/main/08_VectorStore/Hana/rag_hana_vector.ipynb)
+[Link to notebook](https://github.com/SAP-samples/ai-core-samples/blob/main/09_BusinessAIWeek/workshop_notebook-final.ipynb)
-### Loading vector data from a csv file
+### Installing generative-ai-hub-sdk
-Download the following ```csv``` file and save it in your system.
+To install the generative-ai-hub-sdk package on your system, open your terminal or command prompt and run the following command.
- [Download File](https://raw.githubusercontent.com/SAP-samples/ai-core-samples/main/09_BusinessAIWeek/files/GRAPH_DOCU_2503.csv)
+```
+pip3 install generative-ai-hub-sdk
+```
-Execute the following python code in the same folder. This will load the data and store it in a data-frame.
+Once the package is installed, you need to configure the proxy so that the SDK can reach the large language models. We recommend providing your AI Core credentials via a configuration file. The default path for this file is ~/.aicore/config.json.
-```PYTHON
-import pandas as pd
-df = pd.read_csv('GRAPH_DOCU_2503.csv', low_memory=False)
-df.head(3)
-```
+Open a text editor and replace the placeholder values in the JSON file with your AI Core service keys, which you downloaded from BTP. Save the file as config.json in the ~/.aicore/ directory.
-
+The default path can be overridden by setting the AICORE_HOME environment variable to the folder path from which the config file should be read.
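As an illustration, a filled-in config.json looks roughly like the following. The key names here follow the generative-ai-hub-sdk documentation on PyPI; the placeholder values must be replaced with the corresponding fields from your own AI Core service key.

```JSON
{
  "AICORE_AUTH_URL": "<url field from your service key>",
  "AICORE_CLIENT_ID": "<clientid field from your service key>",
  "AICORE_CLIENT_SECRET": "<clientsecret field from your service key>",
  "AICORE_BASE_URL": "<AI API url field from your service key>",
  "AICORE_RESOURCE_GROUP": "<your resource group, e.g. default>"
}
```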
-### Connection to the HANA Vector store
+
-Execute the following python code to create a connection to the HANA Vector storage.
+Source: https://pypi.org/project/generative-ai-hub-sdk/
+
+### Loading vector data from a csv file
+
+The dataset in [GRAPH_DOCU_2503.csv](files/GRAPH_DOCU_QRC3.csv) includes pre-computed vector embeddings. We have incorporated these vectors into the CSV file to facilitate a quick start with vector search, eliminating the need to generate embeddings using an additional function.
+
+This dataset is specifically derived from the Graph Engine section of the [SAP HANA Cloud documentation](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-graph-reference/sap-hana-cloud-sap-hana-database-graph-reference), as found on the SAP Help Portal.
```PYTHON
-from hana_ml import ConnectionContext
-# cc = ConnectionContext(userkey='VDB_BETA', encrypt=True)
-cc= ConnectionContext(
- address='
',
- port='',
- user='',
- password='',
- encrypt=True
- )
-print(cc.hana_version())
-print(cc.get_current_schema())
+# Read data from 'GRAPH_DOCU_2503.csv' and store each row in the 'data' list
+import csv
+
+data = []
+with open('GRAPH_DOCU_2503.csv', encoding='utf-8') as csvfile:
+ csv_reader = csv.reader(csvfile)
+ for row in csv_reader:
+ try:
+ data.append(row)
+        except Exception:
+ print(row)
```
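If you want to see what ```csv.reader``` yields before running it on the real file, here is a minimal, self-contained sketch using an inline sample instead of GRAPH_DOCU_2503.csv (the column names and values are made up for illustration):

```PYTHON
# Illustrative only: mimic the structure of the real CSV with an inline sample.
import csv
import io

sample = 'ID1,TEXT,VECTOR_STR\n1,Shortest path basics,"[0.1,0.2]"\n'
rows = list(csv.reader(io.StringIO(sample)))

print(rows[0])  # header row: ['ID1', 'TEXT', 'VECTOR_STR']
print(rows[1])  # data row; the quoted vector string stays a single field
```

Note how the quoted vector string is kept as one field even though it contains commas; this is why the real file can store an embedding per row.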
-
+
-### Creating a table
+### Creating a connection using dbapi
-To create a table, execute the following python command.
+First, you'll need to install the hdbcli library to connect to the SAP HANA database. Run the following command to install it:
```PYTHON
-# Create a table
-cursor = cc.connection.cursor()
-sql_command = '''CREATE TABLE GRAPH_DOCU_QRC3_2201(ID BIGINT, L1 NVARCHAR(3), L2 NVARCHAR(3), L3 NVARCHAR(3), FILENAME NVARCHAR(100), HEADER1 NVARCHAR(5000), HEADER2 NVARCHAR(5000), TEXT NCLOB, VECTOR_STR NCLOB);'''
-cursor.execute(sql_command)
-cursor.close()
+!pip install hdbcli
```
-
-
-### Uploading the data to the database
-Execute the following code to upload the data to the database.
+After the installation is complete, you can establish a secure connection to the HANA Vector storage by replacing the placeholder values with the actual details provided in your credentials:
```PYTHON
-from hana_ml.dataframe import create_dataframe_from_pandas
-v_hdf = create_dataframe_from_pandas(
- connection_context=cc,
- pandas_df=df,
- table_name="GRAPH_DOCU_QRC3_2201",
- allow_bigint=True,
- append=True
+# Establish a secure connection to an SAP HANA database using hdbcli
+import hdbcli
+from hdbcli import dbapi
+
+cc = dbapi.connect(
+ address='',
+ port='443',
+ user='',
+ password='',
+ encrypt=True
)
```
-
+
-### Creating a VECTOR column
+### Creating a table and adding data
-Add a new column ```VECTOR``` to the table to store the vectors. Execute the following python code.
+HANA Vector storage organizes data in tables. To store your data, you can create a table in HANA Vector storage. Replace TABLENAME with a name of your choice, ensuring that it contains only uppercase letters and numbers.
```PYTHON
-# Add REAL_VECTOR column
-cursor = cc.connection.cursor()
-sql_command = '''ALTER TABLE GRAPH_DOCU_QRC3_2201 ADD (VECTOR REAL_VECTOR(1536));'''
+# Create a table
+cursor = cc.cursor()
+sql_command = '''CREATE TABLE TABLENAME(ID1 BIGINT, ID2 BIGINT, L1 NVARCHAR(3), L2 NVARCHAR(3), L3 NVARCHAR(3), FILENAME NVARCHAR(100), HEADER1 NVARCHAR(5000), HEADER2 NVARCHAR(5000), TEXT NCLOB, VECTOR_STR REAL_VECTOR);'''
cursor.execute(sql_command)
cursor.close()
```
-
+
+
+**Note**: In the SQL command above, TABLENAME is a placeholder. You can choose any name for your table, but ensure that you consistently use the same name throughout your operations. Remember that you cannot create multiple tables with the same name.
-### Creating vectors from strings
+Once you have created the table, you can populate it with the data that you read from the CSV file in the previous step. The SQL INSERT command will add all the contents from the CSV file, including text chunks, embeddings, and metadata, into the table. This demonstrates how you can insert text chunks into the HANA Vector database for implementing retrieval-augmented generation (RAG).
-The vectors for the strings can be created using the ```TO_REAL_VECTOR()``` function. Execute the following code to update the VECTOR column with the vectors.
+Replace TABLENAME with the name of your table. This process is for inserting pre-existing embeddings from a file.
```PYTHON
-# Create vectors from strings
-cursor = cc.connection.cursor()
-sql_command = '''UPDATE GRAPH_DOCU_QRC3_2201 SET VECTOR = TO_REAL_VECTOR(VECTOR_STR);'''
-cursor.execute(sql_command)
-cursor.close()
+# Inserting data into the specified table using a prepared SQL statement with real vector conversion.
+cursor = cc.cursor()
+sql_insert = 'INSERT INTO TABLENAME(ID1, ID2, L1, L2, L3, FILENAME, HEADER1, HEADER2, TEXT, VECTOR_STR) VALUES (?,?,?,?,?,?,?,?,?,TO_REAL_VECTOR(?))'
+cursor.executemany(sql_insert, data[1:])
+cursor.close()
```
-
+
### Setting up hana_ml and generative-ai-hub-sdk
-Import the ```hana_ml``` and ```generative-ai-hub-sdk``` packages. Set the proxy version of generative-ai-hub-sdk to ```gen-ai-hub``` for an AI Core proxy.
+First, you'll need to install the hana_ml package to work with SAP HANA and set up the AI Core proxy using generative-ai-hub-sdk. Run the following command to install hana_ml:
+
+```PYTHON
+!pip install hana_ml
+```
-Execute the following python code.
+After installing, you can proceed with the following code to import the necessary packages and initialize the AI Core proxy:
```PYTHON
import hana_ml
print(hana_ml.__version__)
-
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
proxy_client = get_proxy_client('gen-ai-hub') # for an AI Core proxy
```
-
-
+
### Get Embeddings
-Define the function ```get_embedding()``` to generate embeddings for our input texts. Execute the following python code.
+Embeddings are vector representations of text data that capture the semantic meaning of the text. Define a ```get_embedding()``` function to generate embeddings from text data using the ```text-embedding-ada-002``` model. This function will be used to convert user prompts into embeddings.
+
+For example, if a user enters the prompt "How can I run a shortest path algorithm?", ```get_embedding()``` converts the prompt into an embedding, which can then be used to find similar data in the HANA database using similarity search.
```PYTHON
# Get embeddings
+!pip install "generative-ai-hub-sdk[langchain]"
from gen_ai_hub.proxy.native.openai import embeddings
def get_embedding(input, model="text-embedding-ada-002") -> str:
@@ -152,46 +157,40 @@ def get_embedding(input, model="text-embedding-ada-002") -> str:
)
return response.data[0].embedding
```
-
-
-### Running vector search
+
-Define a function ```run_vector_search()```. This function will search the vector database and finds the rows which are most similar to a given query.
+### Running vector search
-Execute the following python code
+Define a function ```run_vector_search()```. This function searches the vector database and finds the rows that are most similar to a given query.
```PYTHON
+# Perform a vector search on the table using the specified metric and return the top k results
+cursor = cc.cursor()
def run_vector_search(query: str, metric="COSINE_SIMILARITY", k=4):
if metric == 'L2DISTANCE':
sort = 'ASC'
else:
sort = 'DESC'
query_vector = get_embedding(query)
- sql = '''SELECT TOP {k} "ID", "HEADER1", "HEADER2", "TEXT"
- FROM "GRAPH_DOCU_QRC3_2201"
- ORDER BY "{metric}"("VECTOR", TO_REAL_VECTOR('{qv}')) {sort}'''.format(k=k, metric=metric, qv=query_vector, sort=sort)
- hdf = cc.sql(sql)
- df_context = hdf.head(k).collect()
- return df_context
+ sql = '''SELECT TOP {k} "ID2", "TEXT"
+ FROM "TABLENAME"
+ ORDER BY "{metric}"("VECTOR_STR", TO_REAL_VECTOR('{qv}')) {sort}'''.format(k=k, metric=metric, qv=query_vector, sort=sort)
+ cursor.execute(sql)
+ hdf = cursor.fetchall()
+ return hdf[:k]
```
-Now we can test the function by sending a query. The function prints the rows that are most similar to the queries.
-
-```PYTHON
-query = "How can I run a shortest path algorithm?"
-df_context = run_vector_search(query=query, metric="COSINE_SIMILARITY",k=4)
-df_context
-```
+
-
+**Note**: By default, we are using cosine similarity for RAG.
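To build intuition for what ```COSINE_SIMILARITY``` computes, here is a plain-Python sketch of the metric (illustrative only, not the HANA implementation): it compares the angle between two vectors, so it is insensitive to their magnitudes.

```PYTHON
# Illustrative only: cosine similarity as used conceptually by the vector search.
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

Because higher cosine similarity means a closer match, the SQL above sorts in descending order for this metric, while L2DISTANCE (where smaller is closer) sorts ascending.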
### Creating a prompt template
Create a prompt template to do retrieval augmented generation on your prompts. Execute the following python code.
```PYTHON
-# Prompt. Do also use your knowledge from outside the given context.
+# Create a prompt template
promptTemplate_fstring = """
You are an SAP HANA Cloud expert.
You are provided multiple context items that are related to the prompt you have to answer.
@@ -205,176 +204,47 @@ from langchain.prompts import PromptTemplate
promptTemplate = PromptTemplate.from_template(promptTemplate_fstring)
```
-
+
### Querying the LLM
-Now create a function ```ask_llm()``` to query the LLM while using the similar vectors as context. Execute the following python code.
-
-```PYTHON
-from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
-from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
-proxy_client = get_proxy_client('gen-ai-hub') # for an AI Core proxy
+Now create a function ```retrieve_and_query_llm()``` to query the LLM while using the most similar vectors as context.
-def ask_llm(query: str, retrieval_augmented_generation: bool, metric='COSINE_SIMILARITY', k = 4) -> str:
-
- class color:
- RED = '\033[91m'
- BLUE = '\033[94m'
- BOLD = '\033[1m'
- END = '\033[0m'
- context = ''
- if retrieval_augmented_generation == True:
- print(color.RED + 'Running retrieval augmented generation.' + color.END)
- print(color.RED + '\nEmbedding the query string and running HANA vector search.' + color.END)
- context = run_vector_search(query, metric, k)
- print(color.RED + '\nHANA vector search returned {k} best matching documents.'.format(k=k) + color.END)
- print(color.RED + '\nGenerating LLM prompt using the context information.' + color.END)
- else:
- print(color.RED + 'Generating LLM prompt WITHOUT context information.' + color.END)
- prompt = promptTemplate.format(query=query, context=context)
- print(color.RED + '\nAsking LLM...' + color.END)
- llm = ChatOpenAI(proxy_model_name='gpt-4', proxy_client=proxy_client)
- response = llm.invoke(prompt).content
- print(color.RED + '...completed.' + color.END)
- print(color.RED + '\nQuery: ' + color.END, query)
- print(color.BLUE + '\nResponse:' + color.BLUE)
- print(response)
-```
-
-
-
-Now you can test the function using a query. Run the following python code.
+The code defines a function ```retrieve_and_query_llm()``` that performs a similarity search and generates a response using a language model. It first retrieves contextually relevant information from a vector search based on the user's query, then constructs a prompt and calculates its token length using the tiktoken library. The function then invokes a GPT-4 model through the ChatOpenAI class to generate and print a response based on the constructed prompt.
```PYTHON
-query = "I want to calculate a shortest path. How do I do that?"
-
-response = ask_llm(query=query, retrieval_augmented_generation=True, k=4)
-```
-
-
+# Import necessary modules, and define a function to query an LLM with a formatted prompt and vector-based context
-### Creating a connection using dbapi
-
-```PYTHON
-import hdbcli
-from hdbcli import dbapi
-
-cc = dbapi.connect(
- address='',
- port='',
- user='',
- password='',
- encrypt=True
- )
-```
-
-### Using insert methods to write directly into the table
-
-Execute the following code to extract data from the csv file and store it in a list
-
-```PYTHON
-import csv
-
-data = []
-with open('GRAPH_DOCU_2503.csv', encoding='utf-8') as csvfile:
- csv_reader = csv.reader(csvfile)
- for row in csv_reader:
- try:
- data.append(row)
- except:
- print(row)
-```
-
-
-
-### Creating a table and adding data
-
-To create a table, execute the following python code.
-
-```PYTHON
-# Create a table
-cursor = cc.cursor()
-sql_command = '''CREATE TABLE TABLE10043(ID1 BIGINT, ID2 BIGINT, L1 NVARCHAR(3), L2 NVARCHAR(3), L3 NVARCHAR(3), FILENAME NVARCHAR(100), HEADER1 NVARCHAR(5000), HEADER2 NVARCHAR(5000), TEXT NCLOB, VECTOR_STR REAL_VECTOR);'''
-cursor.execute(sql_command)
-cursor.close()
-```
-
-Now we insert our data into the table we created
-
-```PYTHON
-cursor = cc.cursor()
-sql_insert = 'INSERT INTO TABLE10043(ID1, ID2, L1, L2, L3, FILENAME, HEADER1, HEADER2, TEXT, VECTOR_STR) VALUES (?,?,?,?,?,?,?,?,?,TO_REAL_VECTOR(?))'
-cursor.executemany(sql_insert,data[1:])
-```
-
-
-
-### Modifying the run_vector_search function
-
-We can modify the run_vector_search function to make use of the VECTOR_STR column for similarity search
-
-```PYTHON
-cursor = cc.cursor()
-def run_vector_search(query: str, metric="COSINE_SIMILARITY", k=4):
- if metric == 'L2DISTANCE':
- sort = 'ASC'
- else:
- sort = 'DESC'
- query_vector = get_embedding(query)
- sql = '''SELECT TOP {k} "ID2", "TEXT"
- FROM "TABLE10043"
- ORDER BY "{metric}"("VECTOR_STR", TO_REAL_VECTOR('{qv}')) {sort}'''.format(k=k, metric=metric, qv=query_vector, sort=sort)
- cursor.execute(sql)
- hdf = cursor.fetchall()
- return hdf[:k]
-```
-
-
-
-### Querying the LLM
-
-Now create a function ```ask_llm()``` to query the LLM while using the similar vectors as context. Execute the following python code.
-
-```PYTHON
+!pip install tiktoken
+import tiktoken
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
-from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
-proxy_client = get_proxy_client('gen-ai-hub') # for an AI Core proxy
-
-def ask_llm(query: str, retrieval_augmented_generation: bool, metric='COSINE_SIMILARITY', k=4) -> str:
- class color:
- RED = '\033[91m'
- BLUE = '\033[94m'
- BOLD = '\033[1m'
- END = '\033[0m'
+def retrieve_and_query_llm(query: str, metric='COSINE_SIMILARITY', k=4):
context = ''
- if retrieval_augmented_generation == True:
- print(color.RED + 'Running retrieval augmented generation.' + color.END)
- print(color.RED + '\nEmbedding the query string and running HANA vector search.' + color.END)
- context = run_vector_search(query, metric, k)
- print(color.RED + '\nHANA vector search returned {k} best matching documents.'.format(k=k) + color.END)
- print(color.RED + '\nGenerating LLM prompt using the context information.' + color.END)
- else:
- print(color.RED + 'Generating LLM prompt WITHOUT context information.' + color.END)
- prompt = promptTemplate.format(query=query, context=' '.join(str(df_context)))
- print(color.RED + '\nAsking LLM...' + color.END)
- llm = ChatOpenAI(proxy_model_name='gpt-4', proxy_client=proxy_client)
+ context = run_vector_search(query, metric, k)
+ prompt = promptTemplate.format(query=query, context=' '.join(str(context)))
+ encoding = tiktoken.get_encoding("cl100k_base")
+ num_tokens = len(encoding.encode(str(prompt)))
+    print('Number of tokens: ' + str(num_tokens))
+    llm = ChatOpenAI(proxy_model_name='gpt-4-32k', max_tokens=8000)
response = llm.invoke(prompt).content
- print(color.RED + '...completed.' + color.END)
- print(color.RED + '\nQuery: ' + color.END, query)
- print(color.BLUE + '\nResponse:' + color.BLUE)
+ print('Query: '+ query)
+ print('\nResponse:')
print(response)
```
-
+
+
+### Testing the function
-Now you can test the function using a query. Run the following python code.
+Now you can test the function using a query. Run the following Python code:
```PYTHON
-query = "I want to calculate a shortest path. How do I do that?"
+# Query the LLM with a request about calculating the shortest path and retrieve the response
-response = ask_llm(query=query, retrieval_augmented_generation=True, k=4)
+query = "I want to calculate a shortest path. How do I do that?"
+retrieve_and_query_llm(query=query, k=4)
```
-
+
\ No newline at end of file
diff --git a/tutorials/ai-core-genai-hana-vector/images/hana1.png b/tutorials/ai-core-genai-hana-vector/images/hana1.png
deleted file mode 100644
index e4d8f35d24..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/hana1.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/hana2.png b/tutorials/ai-core-genai-hana-vector/images/hana2.png
deleted file mode 100644
index df563598c3..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/hana2.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/hana3.png b/tutorials/ai-core-genai-hana-vector/images/hana3.png
deleted file mode 100644
index 189a8a9ee0..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/hana3.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/hana4.png b/tutorials/ai-core-genai-hana-vector/images/hana4.png
deleted file mode 100644
index ca969b39b8..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/hana4.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/hana5.png b/tutorials/ai-core-genai-hana-vector/images/hana5.png
deleted file mode 100644
index 731368b024..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/hana5.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img1.png b/tutorials/ai-core-genai-hana-vector/images/img1.png
deleted file mode 100644
index d7b21a325d..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img1.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img10.png b/tutorials/ai-core-genai-hana-vector/images/img10.png
deleted file mode 100644
index 5eb165c005..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img10.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img11.png b/tutorials/ai-core-genai-hana-vector/images/img11.png
deleted file mode 100644
index 07e3878a81..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img11.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img12.png b/tutorials/ai-core-genai-hana-vector/images/img12.png
deleted file mode 100644
index 4dd15754a2..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img12.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img2.png b/tutorials/ai-core-genai-hana-vector/images/img2.png
deleted file mode 100644
index 8cbaa3faad..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img2.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img3.png b/tutorials/ai-core-genai-hana-vector/images/img3.png
deleted file mode 100644
index 118335f956..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img3.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img4.png b/tutorials/ai-core-genai-hana-vector/images/img4.png
deleted file mode 100644
index aa87b68675..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img4.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img5.png b/tutorials/ai-core-genai-hana-vector/images/img5.png
deleted file mode 100644
index 560c34e36e..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img5.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img6.png b/tutorials/ai-core-genai-hana-vector/images/img6.png
deleted file mode 100644
index ffa1b8e6fe..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img6.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img7.png b/tutorials/ai-core-genai-hana-vector/images/img7.png
deleted file mode 100644
index 139f783ba0..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img7.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img8.png b/tutorials/ai-core-genai-hana-vector/images/img8.png
deleted file mode 100644
index 96fe7a82fd..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img8.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/img9.png b/tutorials/ai-core-genai-hana-vector/images/img9.png
deleted file mode 100644
index 846b009ee0..0000000000
Binary files a/tutorials/ai-core-genai-hana-vector/images/img9.png and /dev/null differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step1.png b/tutorials/ai-core-genai-hana-vector/images/step1.png
new file mode 100644
index 0000000000..74e16191c8
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step1.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step10.png b/tutorials/ai-core-genai-hana-vector/images/step10.png
new file mode 100644
index 0000000000..6f3bc59294
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step10.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step2.png b/tutorials/ai-core-genai-hana-vector/images/step2.png
new file mode 100644
index 0000000000..9d6fff11cb
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step2.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step3.png b/tutorials/ai-core-genai-hana-vector/images/step3.png
new file mode 100644
index 0000000000..068b808592
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step3.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step4.png b/tutorials/ai-core-genai-hana-vector/images/step4.png
new file mode 100644
index 0000000000..acaf02ec06
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step4.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step40.png b/tutorials/ai-core-genai-hana-vector/images/step40.png
new file mode 100644
index 0000000000..594067fdf3
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step40.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step5.png b/tutorials/ai-core-genai-hana-vector/images/step5.png
new file mode 100644
index 0000000000..78d0759323
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step5.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step6.png b/tutorials/ai-core-genai-hana-vector/images/step6.png
new file mode 100644
index 0000000000..47113c69b8
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step6.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step7.png b/tutorials/ai-core-genai-hana-vector/images/step7.png
new file mode 100644
index 0000000000..bc1c01ec92
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step7.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step8.png b/tutorials/ai-core-genai-hana-vector/images/step8.png
new file mode 100644
index 0000000000..1fd200d0c7
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step8.png differ
diff --git a/tutorials/ai-core-genai-hana-vector/images/step9.png b/tutorials/ai-core-genai-hana-vector/images/step9.png
new file mode 100644
index 0000000000..8aab80f13a
Binary files /dev/null and b/tutorials/ai-core-genai-hana-vector/images/step9.png differ