update
Signed-off-by: hao-xu5 <hxu44@apple.com>
hao-xu5 committed Nov 5, 2025
commit 931fdbdd4c1b4863b0bb03a58e18478a269cf460
107 changes: 107 additions & 0 deletions docs/reference/data-sources/spark.md

Spark data sources are tables or files that can be loaded from a Spark store (e.g. Hive or in-memory). They can also be specified by a SQL query.

**New in Feast:** SparkSource now supports advanced table formats including **Apache Iceberg**, **Delta Lake**, and **Apache Hudi**, enabling ACID transactions, time travel, and schema evolution capabilities.

## Disclaimer

The Spark data source does not yet have full test coverage; do not assume complete stability.

## Examples

### Basic Examples

Using a table reference from SparkSession (for example, either in-memory or a Hive Metastore):

```python
my_spark_source = SparkSource(
)
```

### Table Format Support

SparkSource now supports advanced table formats for modern data lakehouse architectures:

#### Apache Iceberg

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource
from feast.table_format import IcebergFormat

# Basic Iceberg configuration
iceberg_format = IcebergFormat(
    catalog="my_catalog",
    namespace="my_database"
)

my_spark_source = SparkSource(
    name="user_features",
    path="my_catalog.my_database.user_table",
    table_format=iceberg_format,
    timestamp_field="event_timestamp"
)
```
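Reading Iceberg tables requires the SparkSession itself to be configured with an Iceberg catalog; Feast does not do this for you. A sketch of the standard Iceberg-on-Spark settings, assuming the `my_catalog` name from the example above (the warehouse path is hypothetical, and how you pass these — e.g. via the Spark offline store's `spark_conf` — depends on your deployment):

```python
# Standard Iceberg-on-Spark settings, as documented by Apache Iceberg.
# These must reach the SparkSession before the source can be read.
iceberg_spark_conf = {
    "spark.sql.extensions": "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    "spark.sql.catalog.my_catalog": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.my_catalog.type": "hadoop",
    # Hypothetical warehouse location:
    "spark.sql.catalog.my_catalog.warehouse": "s3://my-bucket/warehouse",
}
```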

Time travel with Iceberg:

```python
# Read from a specific snapshot
iceberg_format = IcebergFormat(
    catalog="spark_catalog",
    namespace="lakehouse"
)
iceberg_format.set_property("snapshot-id", "123456789")

my_spark_source = SparkSource(
    name="historical_features",
    path="spark_catalog.lakehouse.features",
    table_format=iceberg_format,
    timestamp_field="event_timestamp"
)
```
```
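For reference, `"snapshot-id"` corresponds to Iceberg's documented Spark read option of the same name; Iceberg also documents `"as-of-timestamp"` (epoch milliseconds), which resolves whichever snapshot was current at that instant. A sketch of the two option shapes (the values are placeholders):

```python
# Iceberg's documented Spark read options for time travel.
# Use one or the other, not both.
snapshot_options = {"snapshot-id": "123456789"}
timestamp_options = {"as-of-timestamp": "1730764800000"}  # epoch milliseconds

# With a live SparkSession these would apply as, e.g.:
# spark.read.format("iceberg").options(**snapshot_options).load("spark_catalog.lakehouse.features")
```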

#### Delta Lake

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource
from feast.table_format import DeltaFormat

# Basic Delta configuration
delta_format = DeltaFormat()

my_spark_source = SparkSource(
    name="transaction_features",
    path="s3://my-bucket/delta-tables/transactions",
    table_format=delta_format,
    timestamp_field="transaction_timestamp"
)
```
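As with Iceberg, the SparkSession must be set up for Delta Lake before the source can be read; Feast does not configure this for you. A sketch of Delta's documented Spark settings (the Delta jars must also be on the classpath, e.g. via `--packages`):

```python
# Delta Lake's documented Spark settings. These must reach the
# SparkSession before Delta tables can be read.
delta_spark_conf = {
    "spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension",
    "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog",
}
```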

Time travel with Delta:

```python
# Read from a specific version
delta_format = DeltaFormat()
delta_format.set_property("versionAsOf", "5")

my_spark_source = SparkSource(
    name="historical_transactions",
    path="s3://my-bucket/delta-tables/transactions",
    table_format=delta_format,
    timestamp_field="transaction_timestamp"
)
```
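For reference, `"versionAsOf"` is Delta's documented Spark read option for pinning a table version; Delta also documents `"timestampAsOf"`, which reads the table as of a timestamp string. A sketch of the two option shapes (the values are placeholders):

```python
# Delta Lake's documented Spark read options for time travel.
# Use one or the other, not both.
version_options = {"versionAsOf": "5"}
timestamp_options = {"timestampAsOf": "2024-01-01 00:00:00"}

# With a live SparkSession these would apply as, e.g.:
# spark.read.format("delta").options(**version_options).load("s3://my-bucket/delta-tables/transactions")
```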

#### Apache Hudi

```python
from feast.infra.offline_stores.contrib.spark_offline_store.spark_source import SparkSource
from feast.table_format import HudiFormat

# Basic Hudi configuration
hudi_format = HudiFormat(
    table_type="COPY_ON_WRITE",  # or "MERGE_ON_READ"
    record_key="user_id",
    precombine_field="updated_at"
)

my_spark_source = SparkSource(
    name="user_profiles",
    path="s3://my-bucket/hudi-tables/user_profiles",
    table_format=hudi_format,
    timestamp_field="event_timestamp"
)
```
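Hudi likewise needs session-level Spark configuration that Feast does not set for you, and it exposes its read modes through documented datasource options. A sketch of both, per the Apache Hudi docs (the Hudi bundle jar must also be on the classpath, and the begin-instant value is a placeholder):

```python
# Apache Hudi's documented Spark settings; these must reach the
# SparkSession before Hudi tables can be read.
hudi_spark_conf = {
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
    "spark.sql.extensions": "org.apache.spark.sql.hudi.HoodieSparkSessionExtension",
    "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.hudi.catalog.HoodieCatalog",
}

# Hudi's documented datasource read options: query type is one of
# "snapshot", "incremental", or "read_optimized"; incremental reads
# start from a commit instant time (placeholder value below).
incremental_options = {
    "hoodie.datasource.query.type": "incremental",
    "hoodie.datasource.read.begin.instanttime": "20240101000000",
}
```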

## Configuration Options

The full set of configuration options is available [here](https://rtd.feast.dev/en/master/#feast.infra.offline_stores.contrib.spark_offline_store.spark_source.SparkSource).

### Table Format Options

- **IcebergFormat**: See [Python API reference](https://rtd.feast.dev/en/master/#feast.table_format.IcebergFormat)
- **DeltaFormat**: See [Python API reference](https://rtd.feast.dev/en/master/#feast.table_format.DeltaFormat)
- **HudiFormat**: See [Python API reference](https://rtd.feast.dev/en/master/#feast.table_format.HudiFormat)

## Supported Types

Spark data sources support all eight primitive types and their corresponding array types.