This agent is designed to answer questions related to documents you uploaded to Vertex AI RAG Engine. It utilizes Retrieval-Augmented Generation (RAG) with the Vertex AI RAG Engine to fetch relevant documentation snippets and code references, which are then synthesized by an LLM (Gemini) to provide informative answers with citations.
The agent's workflow is designed to provide informed and context-aware responses. User queries are processed by the Agent Development Kit (ADK). The LLM determines whether external knowledge (the RAG corpus) is required. If so, the `VertexAiRagRetrieval` tool fetches relevant information from the configured Vertex AI RAG Engine corpus. The LLM then synthesizes this retrieved information with its internal knowledge to generate an accurate answer, including citations pointing back to the source documentation URLs.
| Attribute | Details |
|---|---|
| Interaction Type | Conversational |
| Complexity | Intermediate |
| Agent Type | Single Agent |
| Components | Tools, RAG, Evaluation |
| Vertical | Horizontal |
- Retrieval-Augmented Generation (RAG): Leverages Vertex AI RAG Engine to fetch relevant documentation.
- Citation Support: Provides accurate citations for the retrieved content, formatted as URLs.
- Clear Instructions: Adheres to strict guidelines for providing factual answers and proper citations.
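As a toy illustration of the retrieve-then-synthesize flow described above (this is not the agent's actual code; every name below is a hypothetical stand-in for the ADK and `VertexAiRagRetrieval` machinery), the decision-retrieval-citation loop might look like:

```python
# Toy sketch of the workflow: decide whether to retrieve, fetch snippets,
# then compose an answer that carries citations back to the sources.
# All names here are hypothetical stand-ins, not the sample's real code.

def needs_retrieval(query: str) -> bool:
    """Stand-in for the LLM's decision to call the RAG tool."""
    return "10-k" in query.lower() or "report" in query.lower()

def answer(query: str, retrieve) -> str:
    """Compose an answer, appending citations when retrieval was used."""
    if not needs_retrieval(query):
        return "Answered from the model's internal knowledge."
    snippets = retrieve(query)
    citations = ", ".join(sorted({s["source"] for s in snippets}))
    body = " ".join(s["text"] for s in snippets)
    return f"{body} [Source: {citations}]"

# A fake corpus standing in for the Vertex AI RAG Engine backend.
fake_corpus = lambda q: [{"source": "goog-10-k-2025.pdf",
                          "text": "Alphabet operates in three segments."}]

print(answer("What segments are in the 10-K?", fake_corpus))
# → Alphabet operates in three segments. [Source: goog-10-k-2025.pdf]
```

In the real agent, the retrieval step is the `VertexAiRagRetrieval` tool and the decision and synthesis steps are handled by Gemini.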
- Google Cloud Account: You need a Google Cloud account.
- Python 3.10+: Ensure you have Python 3.10 or a later version installed.
- uv: Used for dependency management and packaging. Follow the instructions on the official uv website to install it:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

- Git: Ensure you have git installed.
- Clone the Repository:

  ```bash
  git clone https://github.com/google/adk-samples.git
  cd adk-samples/python/agents/RAG
  ```

- Install Dependencies:

  ```bash
  uv sync
  ```

  This command reads the `pyproject.toml` file and installs all the necessary dependencies into a virtual environment.
- Set up Environment Variables: Rename the file `.env.example` to `.env`, then follow the steps in the file to set up the environment variables.
- Setup Corpus: If you already have a corpus in Vertex AI RAG Engine, set the corpus information in your `.env` file, for example: `RAG_CORPUS='projects/123/locations/us-central1/ragCorpora/456'`.

  If you don't have a corpus set up yet, follow the "How to upload my file to my RAG corpus" section. The `prepare_corpus_and_data.py` script will automatically create a corpus (if needed) and update the `RAG_CORPUS` variable in your `.env` file with the resource name of the created or retrieved corpus.

The `rag/shared_libraries/prepare_corpus_and_data.py` script helps you set up a RAG corpus and upload an initial document. By default, it downloads Alphabet's 2025 10-K PDF and uploads it to a new corpus.
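For intuition, the "update the `RAG_CORPUS` variable in your `.env` file" step amounts to rewriting one line of a dotenv file. A minimal sketch of such a helper (hypothetical; the script's actual implementation may differ):

```python
# Illustrative helper (not the script's actual code): set a KEY='value'
# line in .env-style text, replacing an existing line or appending one.
import re

def set_env_var(env_text: str, key: str, value: str) -> str:
    line = f"{key}='{value}'"
    pattern = re.compile(rf"^{re.escape(key)}=.*$", re.MULTILINE)
    if pattern.search(env_text):
        # Replace the existing assignment in place.
        return pattern.sub(line, env_text)
    # Otherwise append it as a new line at the end.
    return env_text.rstrip("\n") + "\n" + line + "\n"

updated = set_env_var(
    "GOOGLE_CLOUD_PROJECT=my-project\n",
    "RAG_CORPUS",
    "projects/123/locations/us-central1/ragCorpora/456",
)
print(updated)
```

Calling it again with a new corpus resource name overwrites the existing `RAG_CORPUS` line rather than duplicating it.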
- Authenticate with your Google Cloud account:

  ```bash
  gcloud auth application-default login
  ```
- Set up environment variables in your `.env` file: Ensure your `.env` file (copied from `.env.example`) has the following variables set:

  ```bash
  GOOGLE_CLOUD_PROJECT=your-project-id
  GOOGLE_CLOUD_LOCATION=your-location  # e.g., us-central1
  ```
- Configure and run the preparation script:

  - To use the default behavior (upload Alphabet's 10-K PDF), simply run the script:

    ```bash
    uv run python rag/shared_libraries/prepare_corpus_and_data.py
    ```

    This will create a corpus named `Alphabet_10K_2025_corpus` (if it doesn't exist) and upload the PDF `goog-10-k-2025.pdf` downloaded from the URL specified in the script.
  - To upload a different PDF from a URL:

    a. Open the `rag/shared_libraries/prepare_corpus_and_data.py` file.

    b. Modify the following variables at the top of the script:

    ```python
    # --- Please fill in your configurations ---
    # ... project and location are read from .env ...
    CORPUS_DISPLAY_NAME = "Your_Corpus_Name"  # Change as needed
    CORPUS_DESCRIPTION = "Description of your corpus"  # Change as needed
    PDF_URL = "https://path/to/your/document.pdf"  # URL to YOUR PDF document
    PDF_FILENAME = "your_document.pdf"  # Name for the file in the corpus
    # --- Start of the script ---
    ```

    c. Run the script:

    ```bash
    uv run python rag/shared_libraries/prepare_corpus_and_data.py
    ```
  - To upload a local PDF file:

    a. Open the `rag/shared_libraries/prepare_corpus_and_data.py` file.

    b. Modify the `CORPUS_DISPLAY_NAME` and `CORPUS_DESCRIPTION` variables as needed (see above).

    c. Modify the `main()` function at the bottom of the script to directly call `upload_pdf_to_corpus` with your local file details:

    ```python
    def main():
        initialize_vertex_ai()
        corpus = create_or_get_corpus()  # Uses CORPUS_DISPLAY_NAME & CORPUS_DESCRIPTION

        # Upload your local PDF to the corpus
        local_file_path = "/path/to/your/local/file.pdf"  # Set the correct path
        display_name = "Your_File_Name.pdf"  # Set the desired display name
        description = "Description of your file"  # Set the description

        # Ensure the file exists before uploading
        if os.path.exists(local_file_path):
            upload_pdf_to_corpus(
                corpus_name=corpus.name,
                pdf_path=local_file_path,
                display_name=display_name,
                description=description
            )
        else:
            print(f"Error: Local file not found at {local_file_path}")

        # List all files in the corpus
        list_corpus_files(corpus_name=corpus.name)
    ```

    d. Run the script:

    ```bash
    uv run python rag/shared_libraries/prepare_corpus_and_data.py
    ```
More details about managing data in Vertex AI RAG Engine can be found in the official documentation.
You can run the agent using the ADK command in your terminal, from the root project directory:
- Run the agent in the CLI:

  ```bash
  adk run rag
  ```

- Run the agent with the ADK Web UI:

  ```bash
  adk web
  ```

  Then select `rag` from the dropdown.
Here's a quick example of how a user might interact with the agent:
Example 1: Document Information Retrieval
User: What are the key business segments mentioned in Alphabet's 2025 10-K report?
Agent: According to Alphabet's 2025 10-K report, the key business segments are:
- Google Services (including Google Search, YouTube, Google Maps, Play Store)
- Google Cloud (offering cloud computing services, data analytics, and AI solutions)
- Other Bets (including Waymo for autonomous driving technology) [Source: goog-10-k-2025.pdf]
The evaluation can be run from the `rag` directory using the `pytest` module:

```bash
uv sync --dev
uv run pytest eval
```

The evaluation framework consists of three key components:
- `test_eval.py`: The main test script that orchestrates the evaluation process. It uses the `AgentEvaluator` from Google ADK to run the agent against a test dataset and assess its performance based on predefined criteria.
- `conversation.test.json`: Contains a sequence of test cases structured as a conversation. Each test case includes:
  - A user query (e.g., questions about Alphabet's 2025 10-K report)
  - Expected tool usage (which tools the agent should call, and with what parameters)
  - Reference answers (ideal responses the agent should provide)
- `test_config.json`: Defines evaluation criteria and thresholds:
  - `tool_trajectory_avg_score`: Measures how well the agent uses the appropriate tools
  - `response_match_score`: Measures how closely the agent's responses match the reference answers
When you run the evaluation, the system:
- Loads the test cases from `conversation.test.json`
- Sends each query to the agent
- Compares the agent's tool usage against the expected tool usage
- Compares the agent's responses against the reference answers
- Calculates scores based on the criteria in `test_config.json`
This evaluation helps ensure the agent correctly leverages the RAG capabilities to retrieve relevant information and generates accurate responses with proper citations.
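For intuition, here is a minimal sketch of how per-metric scores could be averaged and compared against `test_config.json`-style thresholds. This is a hypothetical illustration, not the ADK `AgentEvaluator` implementation, and the threshold values below are made up:

```python
# Illustrative sketch only: mimics how an evaluation might aggregate
# per-test-case scores and compare them to criteria thresholds.

criteria = {
    "tool_trajectory_avg_score": 1.0,  # hypothetical threshold values
    "response_match_score": 0.8,
}

def evaluate(per_case_scores: dict[str, list[float]],
             criteria: dict[str, float]) -> dict[str, bool]:
    """Average each metric across test cases and check it meets its threshold."""
    results = {}
    for metric, threshold in criteria.items():
        scores = per_case_scores.get(metric, [])
        avg = sum(scores) / len(scores) if scores else 0.0
        results[metric] = avg >= threshold
    return results

# Example: two test cases; the agent called the right tools both times,
# but only one response closely matched the reference answer.
print(evaluate(
    {"tool_trajectory_avg_score": [1.0, 1.0], "response_match_score": [0.9, 0.5]},
    criteria,
))  # → {'tool_trajectory_avg_score': True, 'response_match_score': False}
```

A failed metric here corresponds to a failing `pytest` run in the real framework.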
The agent can be deployed to Vertex AI Agent Engine using the following command:

```bash
uv run python deployment/deploy.py
```

After the agent is deployed, you'll see an INFO log message like the following:

```
Deployed agent to Vertex AI Agent Engine successfully, resource name: projects/<PROJECT_NUMBER>/locations/us-central1/reasoningEngines/<AGENT_ENGINE_ID>
```

Note your Agent Engine resource name and update your `.env` file accordingly, as this is crucial for testing the remote agent. You may also modify the deployment script for your use case.
After deploying the agent, follow these steps to test it:

- Update Environment Variables:
  - Open your `.env` file.
  - The `AGENT_ENGINE_ID` should have been automatically updated by the `deployment/deploy.py` script when you deployed the agent. Verify that it is set correctly:

    ```
    AGENT_ENGINE_ID=projects/<PROJECT_NUMBER>/locations/us-central1/reasoningEngines/<AGENT_ENGINE_ID>
    ```
- Grant RAG Corpus Access Permissions:
  - Ensure your `.env` file has the following variables set correctly:

    ```
    GOOGLE_CLOUD_PROJECT=your-project-id
    RAG_CORPUS=projects/<project-number>/locations/us-central1/ragCorpora/<corpus-id>
    ```

  - Run the permissions script:

    ```bash
    chmod +x rag/shared_libraries/grant_permissions.sh
    ./rag/shared_libraries/grant_permissions.sh
    ```

    This script will:
    - Read the environment variables from your `.env` file
    - Create a custom role with RAG Corpus query permissions
    - Grant the necessary permissions to the AI Platform Reasoning Engine Service Agent
- Test the Remote Agent:
  - Run the test script:

    ```bash
    uv run python deployment/run.py
    ```

    This script will:
    - Connect to your deployed agent
    - Send a series of test queries
    - Display the agent's responses with proper formatting
The test script includes example queries about Alphabet's 2025 10-K report. You can modify the queries in `deployment/run.py` to test different aspects of your deployed agent.
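If you script against the deployed agent yourself, it can help to sanity-check the `AGENT_ENGINE_ID` value before making any remote calls. A small illustrative helper (hypothetical; `deployment/run.py` may do this differently):

```python
# Illustrative sketch: validate an Agent Engine resource name from .env
# and pull out its components. Names here are hypothetical helpers.
import re

RESOURCE_RE = re.compile(
    r"^projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)"
    r"/reasoningEngines/(?P<engine_id>[^/]+)$"
)

def parse_agent_engine_id(resource_name: str) -> dict[str, str]:
    """Split a reasoningEngines resource name into its parts, or raise."""
    match = RESOURCE_RE.match(resource_name)
    if not match:
        raise ValueError(f"Not a valid Agent Engine resource name: {resource_name}")
    return match.groupdict()

parts = parse_agent_engine_id(
    "projects/123456/locations/us-central1/reasoningEngines/7890"
)
print(parts)  # → {'project': '123456', 'location': 'us-central1', 'engine_id': '7890'}
```

Failing fast on a malformed resource name gives a clearer error than a 404 from the remote API.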
The Agent Starter Pack is the recommended way to create and deploy a production-ready version of this agent. We have built custom lifecycle hooks into this template so that the Agent Starter Pack automatically handles building your RAG corpus and granting IAM permissions during deployment.
To create your project using uv:

```bash
uvx agent-starter-pack create my-rag-agent -a adk@RAG -d agent_engine -ds vertex_ai_search
cd my-rag-agent
```

Next, run the installation command. This will prompt you to automatically build the sample RAG corpus and configure your `.env` file:

```bash
make install
```

Finally, deploy the agent to Google Cloud. This will package your agent, push it to Vertex AI Agent Engine, and automatically grant the new Agent Identity permission to query your RAG corpus:

```bash
make backend
```

You can customize the system instruction for the agent and add more tools to suit your needs, for example, Google Search.
See the official Vertex AI RAG Engine documentation for more details on customizing corpora and data.
You can also integrate your preferred retrieval sources to enhance the agent's
capabilities. For instance, you can seamlessly replace or augment the existing
VertexAiRagRetrieval tool with a tool that utilizes Vertex AI Search or any
other retrieval mechanism. This flexibility allows you to tailor the agent to
your specific data sources and retrieval requirements.
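To make the swap concrete, here is the rough shape a replacement retrieval tool could take. A plain Python function like this can be wrapped as an ADK tool; the in-memory "index" below is a stand-in for Vertex AI Search or any other backend, and every name in the sketch is hypothetical:

```python
# Illustrative sketch of a custom retrieval tool's interface. The DOCS
# dict stands in for a real retrieval backend; a production tool would
# query Vertex AI Search or another service instead.

DOCS = {
    "goog-10-k-2025.pdf": "Alphabet reports results in three segments: "
                          "Google Services, Google Cloud, and Other Bets.",
    "faq.md": "The agent cites sources as URLs or file names.",
}

def retrieve_snippets(query: str, top_k: int = 2) -> list[dict]:
    """Return up to top_k documents sharing a keyword with the query."""
    terms = set(query.lower().split())
    hits = []
    for source, text in DOCS.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            hits.append({"source": source, "text": text, "score": overlap})
    hits.sort(key=lambda h: h["score"], reverse=True)
    return hits[:top_k]

results = retrieve_snippets("Which segments does Google Cloud belong to?")
```

What matters for the agent is the contract, not the backend: the tool takes a query string and returns snippets with `source` fields the LLM can cite, just as `VertexAiRagRetrieval` does.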
When running the `prepare_corpus_and_data.py` script, you may encounter an error related to API quotas, such as:

```
Error uploading file ...: 429 Quota exceeded for aiplatform.googleapis.com/online_prediction_requests_per_base_model with base model: textembedding-gecko.
```
This is especially common for new Google Cloud projects that have lower default quotas.
Solution:
You will need to request a quota increase for the model you are using.
- Navigate to the Quotas page in the Google Cloud Console: https://console.cloud.google.com/iam-admin/quotas
- Follow the instructions in the official documentation to request a quota increase: https://cloud.google.com/vertex-ai/docs/quotas#request_a_quota_increase
This agent sample is provided for illustrative purposes only and is not intended for production use. It serves as a basic example of an agent and a foundational starting point for individuals or teams to develop their own agents.
This sample has not been rigorously tested, may contain bugs or limitations, and does not include features or optimizations typically required for a production environment (e.g., robust error handling, security measures, scalability, performance considerations, comprehensive logging, or advanced configuration options).
Users are solely responsible for any further development, testing, security hardening, and deployment of agents based on this sample. We recommend thorough review, testing, and the implementation of appropriate safeguards before using any derived agent in a live or critical system.

