feat(brightdata): add Bright Data integration with 8 tools (#4183)
* feat(brightdata): add Bright Data integration with 8 tools

Add complete Bright Data integration supporting Web Unlocker, SERP API,
Discover API, and Web Scraper dataset operations. Includes scrape URL,
SERP search, discover, sync scrape, scrape dataset, snapshot status,
download snapshot, and cancel snapshot tools.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): address PR review feedback

- Fix truncated "Download Snapshot" description in integrations.json and docs
- Map engine-specific query params (num/count/numdoc, hl/setLang/lang/kl,
  gl/cc/lr) per search engine instead of using Google-specific params for all
- Attempt to parse snapshot_id from cancel/download response bodies instead
  of hardcoding null

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* lint

* fix(agiloft): change bgColor to white; fix docs truncation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): avoid inner quotes in description to fix docs generation

The docs generator regex truncates at inner quotes. Reword the
download_snapshot description to avoid embedded double quotes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(brightdata): disable incompatible DuckDuckGo and Yandex URL params

DuckDuckGo kl expects region-language format (us-en) and Yandex lr
expects numeric region IDs (213), not plain two-letter codes. Disable
these URL-level params since Bright Data normalizes localization through
the body-level country param.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
waleedlatif1 and claude authored Apr 15, 2026
commit a39dc158cfae3ede40523c6423e7411ca05d989e
15 changes: 15 additions & 0 deletions apps/docs/components/icons.tsx
@@ -2087,6 +2087,21 @@ export function BrandfetchIcon(props: SVGProps<SVGSVGElement>) {
)
}

export function BrightDataIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg {...props} viewBox='54 93 22 52' fill='none' xmlns='http://www.w3.org/2000/svg'>
<path
d='M62 95.21c.19 2.16 1.85 3.24 2.82 4.74.25.38.48.11.67-.16.21-.31.6-1.21 1.15-1.28-.35 1.38-.04 3.15.16 4.45.49 3.05-1.22 5.64-4.07 6.18-3.38.65-6.22-2.21-5.6-5.62.23-1.24 1.37-2.5.77-3.7-.85-1.7.54-.52.79-.22 1.04 1.2 1.21.09 1.45-.55.24-.63.31-1.31.47-1.97.19-.77.55-1.4 1.39-1.87z'
fill='currentColor'
/>
<path
d='M66.70 123.37c0 3.69.04 7.38-.03 11.07-.02 1.04.31 1.48 1.32 1.49.29 0 .59.12.88.13.93.01 1.18.47 1.16 1.37-.05 2.19 0 2.19-2.24 2.19-3.48 0-6.96-.04-10.44.03-1.09.02-1.47-.33-1.3-1.36.02-.12.02-.26 0-.38-.28-1.39.39-1.96 1.7-1.9 1.36.06 1.76-.51 1.74-1.88-.09-5.17-.08-10.35 0-15.53.02-1.22-.32-1.87-1.52-2.17-.57-.14-1.47-.11-1.57-.85-.15-1.04-.05-2.11.01-3.17.02-.34.44-.35.73-.39 2.81-.39 5.63-.77 8.44-1.18.92-.14 1.15.2 1.14 1.09-.04 3.8-.02 7.62-.02 11.44z'
fill='currentColor'
/>
</svg>
)
}

export function BrowserUseIcon(props: SVGProps<SVGSVGElement>) {
return (
<svg
2 changes: 2 additions & 0 deletions apps/docs/components/ui/icon-mapping.ts
@@ -23,6 +23,7 @@ import {
BoxCompanyIcon,
BrainIcon,
BrandfetchIcon,
BrightDataIcon,
BrowserUseIcon,
CalComIcon,
CalendlyIcon,
@@ -215,6 +216,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
attio: AttioIcon,
box: BoxCompanyIcon,
brandfetch: BrandfetchIcon,
brightdata: BrightDataIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
201 changes: 201 additions & 0 deletions apps/docs/content/docs/en/tools/brightdata.mdx
@@ -0,0 +1,201 @@
---
title: Bright Data
description: Scrape websites, search engines, and extract structured data
---

import { BlockInfoCard } from "@/components/ui/block-info-card"

<BlockInfoCard
type="brightdata"
color="#FFFFFF"
/>

## Usage Instructions

Integrate Bright Data into the workflow. Scrape any URL with Web Unlocker, search Google and other engines with SERP API, discover web content ranked by intent, or trigger pre-built scrapers for structured data extraction.



## Tools

### `brightdata_scrape_url`

Fetch content from any URL using Bright Data Web Unlocker. Bypasses anti-bot protections, CAPTCHAs, and IP blocks automatically.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `zone` | string | Yes | Web Unlocker zone name from your Bright Data dashboard \(e.g., "web_unlocker1"\) |
| `url` | string | Yes | The URL to scrape \(e.g., "https://example.com/page"\) |
| `format` | string | No | Response format: "raw" for HTML or "json" for parsed content. Defaults to "raw" |
| `country` | string | No | Two-letter country code for geo-targeting \(e.g., "us", "gb"\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | string | The scraped page content \(HTML or JSON depending on format\) |
| `url` | string | The URL that was scraped |
| `statusCode` | number | HTTP status code of the response |
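
The table above maps onto a single JSON request body. As a rough sketch (the field names follow the input table; how the body is sent, and the exact endpoint, depend on your Bright Data setup and are not specified here):

```typescript
// Hypothetical sketch: assemble the request body for brightdata_scrape_url.
// Field names mirror the input table above; this is not the integration's
// actual implementation.
interface ScrapeUrlParams {
  zone: string
  url: string
  format?: 'raw' | 'json'
  country?: string
}

function buildScrapeRequest(params: ScrapeUrlParams): Record<string, string> {
  const body: Record<string, string> = {
    zone: params.zone,
    url: params.url,
    format: params.format ?? 'raw', // defaults to "raw" per the table
  }
  // Optional geo-targeting: only included when set
  if (params.country) body.country = params.country
  return body
}
```

In practice this body would be POSTed with the API token in an `Authorization` header; the response maps onto `content`, `url`, and `statusCode` in the output table.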

### `brightdata_serp_search`

Search Google, Bing, DuckDuckGo, or Yandex and get structured search results using Bright Data SERP API.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `zone` | string | Yes | SERP API zone name from your Bright Data dashboard \(e.g., "serp_api1"\) |
| `query` | string | Yes | The search query \(e.g., "best project management tools"\) |
| `searchEngine` | string | No | Search engine to use: "google", "bing", "duckduckgo", or "yandex". Defaults to "google" |
| `country` | string | No | Two-letter country code for localized results \(e.g., "us", "gb"\) |
| `language` | string | No | Two-letter language code \(e.g., "en", "es"\) |
| `numResults` | number | No | Number of results to return \(e.g., 10, 20\). Defaults to 10 |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Array of search results |
| ↳ `title` | string | Title of the search result |
| ↳ `url` | string | URL of the search result |
| ↳ `description` | string | Snippet or description of the result |
| ↳ `rank` | number | Position in search results |
| `query` | string | The search query that was executed |
| `searchEngine` | string | The search engine that was used |
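
The generic `numResults` and `language` inputs translate to different query parameters per engine (the PR maps num/count/numdoc and hl/setLang/lang/kl, and disables DuckDuckGo's `kl` and Yandex's `lr` because they expect non-standard formats). A hypothetical sketch of that mapping — the exact per-engine parameter names, and which engine lacks a result-count parameter, are assumptions here:

```typescript
// Hypothetical per-engine query-parameter mapping, following the PR notes.
// DuckDuckGo's kl is intentionally skipped (it expects "us-en"-style values),
// so localization for it is handled elsewhere.
type Engine = 'google' | 'bing' | 'duckduckgo' | 'yandex'

function serpParams(
  engine: Engine,
  numResults: number,
  language?: string
): Record<string, string> {
  const p: Record<string, string> = {}
  // Result-count parameter differs by engine; null means "not mapped"
  const numKey = { google: 'num', bing: 'count', duckduckgo: null, yandex: 'numdoc' }[engine]
  if (numKey) p[numKey] = String(numResults)
  // Language parameter also differs; DuckDuckGo's kl is disabled per the PR
  const langKey = { google: 'hl', bing: 'setLang', duckduckgo: null, yandex: 'lang' }[engine]
  if (language && langKey) p[langKey] = language
  return p
}
```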

### `brightdata_discover`

AI-powered web discovery that finds and ranks results by intent. Returns up to 1,000 results with optional cleaned page content for RAG and verification.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `query` | string | Yes | The search query \(e.g., "competitor pricing changes enterprise plan"\) |
| `numResults` | number | No | Number of results to return, up to 1000. Defaults to 10 |
| `intent` | string | No | Describes what the agent is trying to accomplish, used to rank results by relevance \(e.g., "find official pricing pages and change notes"\) |
| `includeContent` | boolean | No | Whether to include cleaned page content in results |
| `format` | string | No | Response format: "json" or "markdown". Defaults to "json" |
| `language` | string | No | Search language code \(e.g., "en", "es", "fr"\). Defaults to "en" |
| `country` | string | No | Two-letter ISO country code for localized results \(e.g., "us", "gb"\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `results` | array | Array of discovered web results ranked by intent relevance |
| ↳ `url` | string | URL of the discovered page |
| ↳ `title` | string | Page title |
| ↳ `description` | string | Page description or snippet |
| ↳ `relevanceScore` | number | AI-calculated relevance score for intent-based ranking |
| ↳ `content` | string | Cleaned page content in the requested format \(when includeContent is true\) |
| `query` | string | The search query that was executed |
| `totalResults` | number | Total number of results returned |

### `brightdata_sync_scrape`

Scrape URLs synchronously using a Bright Data pre-built scraper and get structured results directly. Supports up to 20 URLs with a 1-minute timeout.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `datasetId` | string | Yes | Dataset scraper ID from your Bright Data dashboard \(e.g., "gd_l1viktl72bvl7bjuj0"\) |
| `urls` | string | Yes | JSON array of URL objects to scrape, up to 20 \(e.g., \[\{"url": "https://example.com/product"\}\]\) |
| `format` | string | No | Output format: "json", "ndjson", or "csv". Defaults to "json" |
| `includeErrors` | boolean | No | Whether to include error reports in results |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Array of scraped result objects with fields specific to the dataset scraper used |
| `snapshotId` | string | Snapshot ID returned if the request exceeded the 1-minute timeout and switched to async processing |
| `isAsync` | boolean | Whether the request fell back to async mode \(true means use snapshot ID to retrieve results\) |
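
Because a sync scrape can silently fall back to async after the 1-minute timeout, callers should branch on `isAsync` before using `data`. A minimal sketch of that decision, based on the output table above:

```typescript
// Hypothetical handler for a sync-scrape response. The shape follows the
// output table; the returned action strings are illustrative only.
interface SyncScrapeResult {
  data?: unknown[]
  snapshotId?: string
  isAsync: boolean
}

function nextStep(result: SyncScrapeResult): string {
  if (!result.isAsync) return 'use-data' // results arrived within the timeout
  if (result.snapshotId) return `poll:${result.snapshotId}` // fell back to async
  return 'error' // async fallback without a snapshot ID is unexpected
}
```

When the result is `poll:<id>`, the snapshot ID feeds into `brightdata_snapshot_status` and, once ready, `brightdata_download_snapshot`.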

### `brightdata_scrape_dataset`

Trigger a Bright Data pre-built scraper to extract structured data from URLs. Supports 660+ scrapers for platforms like Amazon, LinkedIn, Instagram, and more.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `datasetId` | string | Yes | Dataset scraper ID from your Bright Data dashboard \(e.g., "gd_l1viktl72bvl7bjuj0"\) |
| `urls` | string | Yes | JSON array of URL objects to scrape \(e.g., \[\{"url": "https://example.com/product"\}\]\) |
| `format` | string | No | Output format: "json" or "csv". Defaults to "json" |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID to retrieve results later |
| `status` | string | Status of the scraping job \(e.g., "triggered", "running"\) |
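
Since `urls` is a JSON string rather than a native array, validating it before triggering avoids burning a job on malformed input. A hypothetical helper (the 20-URL default here comes from the sync-scrape limit above; the async trigger may accept more):

```typescript
// Hypothetical validator for the `urls` input: a JSON string encoding an
// array of { url: string } objects.
function parseUrlsInput(raw: string, max = 20): { url: string }[] {
  const parsed = JSON.parse(raw) // throws on invalid JSON
  if (!Array.isArray(parsed)) throw new Error('urls must be a JSON array')
  if (parsed.length > max) throw new Error(`at most ${max} URLs allowed`)
  for (const item of parsed) {
    if (typeof item?.url !== 'string') throw new Error('each entry needs a url string')
  }
  return parsed
}
```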

### `brightdata_snapshot_status`

Check the progress of an async Bright Data scraping job. Returns status: starting, running, ready, or failed.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID returned when the collection was triggered \(e.g., "s_m4x7enmven8djfqak"\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID that was queried |
| `datasetId` | string | The dataset ID associated with this snapshot |
| `status` | string | Current status of the snapshot: "starting", "running", "ready", or "failed" |
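
Polling logic follows directly from the four statuses in the table. A minimal sketch (the status strings come from the output table; the delay and retry policy are left to the caller):

```typescript
// Hypothetical polling decision for an async scraping job.
type SnapshotStatus = 'starting' | 'running' | 'ready' | 'failed'

function pollAction(status: SnapshotStatus): 'wait' | 'download' | 'give-up' {
  switch (status) {
    case 'starting':
    case 'running':
      return 'wait' // re-check after a delay
    case 'ready':
      return 'download' // hand off to brightdata_download_snapshot
    case 'failed':
      return 'give-up'
  }
}
```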

### `brightdata_download_snapshot`

Download the results of a completed Bright Data scraping job using its snapshot ID. The snapshot must have ready status.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID returned when the collection was triggered \(e.g., "s_m4x7enmven8djfqak"\) |
| `format` | string | No | Output format: "json", "ndjson", "jsonl", or "csv". Defaults to "json" |
| `compress` | boolean | No | Whether to compress the results |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `data` | array | Array of scraped result records |
| `format` | string | The content type of the downloaded data |
| `snapshotId` | string | The snapshot ID that was downloaded |

### `brightdata_cancel_snapshot`

Cancel an active Bright Data scraping job using its snapshot ID. Terminates data collection in progress.

#### Input

| Parameter | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| `apiKey` | string | Yes | Bright Data API token |
| `snapshotId` | string | Yes | The snapshot ID of the collection to cancel \(e.g., "s_m4x7enmven8djfqak"\) |

#### Output

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `snapshotId` | string | The snapshot ID that was cancelled |
| `cancelled` | boolean | Whether the cancellation was successful |


1 change: 1 addition & 0 deletions apps/docs/content/docs/en/tools/meta.json
@@ -18,6 +18,7 @@
"attio",
"box",
"brandfetch",
"brightdata",
"browser_use",
"calcom",
"calendly",
2 changes: 2 additions & 0 deletions apps/sim/app/(landing)/integrations/data/icon-mapping.ts
@@ -23,6 +23,7 @@ import {
BoxCompanyIcon,
BrainIcon,
BrandfetchIcon,
BrightDataIcon,
BrowserUseIcon,
CalComIcon,
CalendlyIcon,
@@ -215,6 +216,7 @@ export const blockTypeToIconMap: Record<string, IconComponent> = {
attio: AttioIcon,
box: BoxCompanyIcon,
brandfetch: BrandfetchIcon,
brightdata: BrightDataIcon,
browser_use: BrowserUseIcon,
calcom: CalComIcon,
calendly: CalendlyIcon,
53 changes: 52 additions & 1 deletion apps/sim/app/(landing)/integrations/data/integrations.json
@@ -214,7 +214,7 @@
"name": "Agiloft",
"description": "Manage records in Agiloft CLM",
"longDescription": "Integrate with Agiloft contract lifecycle management to create, read, update, delete, and search records. Supports file attachments, SQL-based selection, saved searches, and record locking across any table in your knowledge base.",
"bgColor": "#263A5C",
"bgColor": "#FFFFFF",
"iconName": "AgiloftIcon",
"docsUrl": "https://docs.sim.ai/tools/agiloft",
"operations": [
@@ -1743,6 +1743,57 @@
"integrationTypes": ["sales", "analytics"],
"tags": ["enrichment", "marketing"]
},
{
"type": "brightdata",
"slug": "bright-data",
"name": "Bright Data",
"description": "Scrape websites, search engines, and extract structured data",
"longDescription": "Integrate Bright Data into the workflow. Scrape any URL with Web Unlocker, search Google and other engines with SERP API, discover web content ranked by intent, or trigger pre-built scrapers for structured data extraction.",
"bgColor": "#FFFFFF",
"iconName": "BrightDataIcon",
"docsUrl": "https://docs.sim.ai/tools/brightdata",
"operations": [
{
"name": "Scrape URL",
"description": "Fetch content from any URL using Bright Data Web Unlocker. Bypasses anti-bot protections, CAPTCHAs, and IP blocks automatically."
},
{
"name": "SERP Search",
"description": "Search Google, Bing, DuckDuckGo, or Yandex and get structured search results using Bright Data SERP API."
},
{
"name": "Discover",
"description": "AI-powered web discovery that finds and ranks results by intent. Returns up to 1,000 results with optional cleaned page content for RAG and verification."
},
{
"name": "Sync Scrape",
"description": "Scrape URLs synchronously using a Bright Data pre-built scraper and get structured results directly. Supports up to 20 URLs with a 1-minute timeout."
},
{
"name": "Scrape Dataset",
"description": "Trigger a Bright Data pre-built scraper to extract structured data from URLs. Supports 660+ scrapers for platforms like Amazon, LinkedIn, Instagram, and more."
},
{
"name": "Snapshot Status",
"description": "Check the progress of an async Bright Data scraping job. Returns status: starting, running, ready, or failed."
},
{
"name": "Download Snapshot",
"description": "Download the results of a completed Bright Data scraping job using its snapshot ID. The snapshot must have ready status."
},
{
"name": "Cancel Snapshot",
"description": "Cancel an active Bright Data scraping job using its snapshot ID. Terminates data collection in progress."
}
],
"operationCount": 8,
"triggers": [],
"triggerCount": 0,
"authType": "api-key",
"category": "tools",
"integrationTypes": ["search", "developer-tools"],
"tags": ["web-scraping", "automation"]
},
{
"type": "browser_use",
"slug": "browser-use",
2 changes: 1 addition & 1 deletion apps/sim/blocks/blocks/agiloft.ts
@@ -13,7 +13,7 @@ export const AgiloftBlock: BlockConfig = {
category: 'tools',
integrationType: IntegrationType.Productivity,
tags: ['automation'],
bgColor: '#263A5C',
bgColor: '#FFFFFF',
icon: AgiloftIcon,
authMode: AuthMode.ApiKey,
