Welcome! Here we use a basic example to explain key concepts and user flows in Feast.
We focus on a specific example (that does not include online features + models):
- Use case: building a platform for data scientists to share features for training offline models
- Stack: you have data in a combination of data warehouses (to be explored in a future module) and data lakes (e.g. S3)
To support this, you'll need:
| Concept | Requirements |
|---|---|
| Data sources | FileSource (with S3 paths and endpoint overrides) and FeatureViews registered with feast apply |
| Feature views | Feature views tied to data sources that are shared by data scientists, registered with feast apply |
| Provider | In feature_store.yaml, specifying the aws provider to ensure your registry can be stored in S3 |
| Registry | In feature_store.yaml, specifying a path (within an existing S3 bucket) the registry is written to. Users + model servers will pull from this to get the latest registered features + metadata |
| Transformations | Feast supports last mile transformations with OnDemandFeatureViews that can be re-used |
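To make the first two rows concrete, here is a rough sketch of what a file-based data source and feature view definition could look like. The S3 path, endpoint override, entity, and schema below are illustrative rather than the exact definitions in feature_repo_aws/, and constructor arguments vary a bit across Feast versions:

```python
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# Illustrative file source backed by S3; s3_endpoint_override is only needed
# when pointing at an S3-compatible store (e.g. MinIO) instead of AWS S3
driver_stats_source = FileSource(
    path="s3://your-bucket/driver_stats.parquet",
    timestamp_field="event_timestamp",
    s3_endpoint_override="http://localhost:9000",
)

# Illustrative entity; the join key is a placeholder
driver = Entity(name="driver_id", join_keys=["driver_id"])

# Feature view tied to the data source above; registered with `feast apply`
driver_hourly_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    source=driver_stats_source,
)
```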
There are three user groups worth considering here: the ML platform team, the data scientists, and the ML engineers scheduling models in batch.
The ML platform team sets up the centralized Feast feature repository in GitHub. This is what's seen in feature_repo_aws/.
Here, the first thing a platform team needs to do is set up the feature_store.yaml within a version controlled repo like GitHub:
```yaml
project: feast_demo_aws
provider: aws
registry: s3://[YOUR BUCKET]/registry.pb
online_store: null
offline_store:
  type: file
flags:
  alpha_features: true
  on_demand_transforms: true
```

A quick explanation of what's happening here:
| Key | What it does | Example |
|---|---|---|
| `project` | Gives infrastructure isolation via namespacing (e.g. online stores + Feast objects). | any unique name (e.g. feast_demo_aws) |
| `provider` | Defines registry location & sets defaults for offline / online stores | `gcp` enables registries in GCS and sets BigQuery + Datastore as the default offline / online stores. |
| `registry` | Defines the specific path for the registry (local, gcs, s3, etc) | s3://[YOUR BUCKET]/registry.pb |
| `online_store` | Configures online store (if needed) | null, redis, dynamodb, datastore, postgres, hbase (each have their own extra configs) |
| `offline_store` | Configures offline store, which executes point in time joins | bigquery, snowflake.offline, redshift, spark, trino (each have their own extra configs) |
| `flags` | (legacy) Soon to be deprecated way to enable experimental functionality. | |
- Generally, custom offline + online stores and providers are supported and can plug in.
- Project
  - users can only request features from a single project
- Provider
  - defaults can be easily overridden in `feature_store.yaml`.
    - For example, one can use the `aws` provider and specify Snowflake as the offline store.
- Offline Store
  - we recommend users use data warehouses or Spark as their offline store for performant training dataset generation.
    - Here, we use file sources for instructional purposes. This will directly read from files (local or remote) and use Dask to execute point-in-time joins.
  - A project can only support one type of offline store (cannot mix Snowflake + file, for example)
- Online Store
  - If you don't need to power real time models with fresh features, this is not needed.
  - If you are precomputing predictions in batch ("batch scoring"), then the online store is optional. You should be using the offline store and running `feature_store.get_historical_features` instead.
With the `feature_store.yaml` set up, you can now run `feast apply` to create & populate the registry.
We set up CI/CD to automatically manage the registry. You'll want e.g. a GitHub workflow that:
- on pull request, runs `feast plan`
- on PR merge, runs `feast apply`
See feast_plan.yml as an example of a workflow that automatically runs feast plan on new incoming PRs, which alerts you on what changes will occur.
- This is useful for helping PR reviewers understand the effects of a change.
- One example is whether a PR may change features that are already depended on in production by another model (e.g. a `FeatureService`).
This will parse the feature, data source, and feature service definitions and publish them to the registry. It may also set up some tables in the online store to materialize batch features into.
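For reference, `feast apply` roughly corresponds to the following Python SDK calls. This is a minimal sketch: the `driver_repo` module and object names are assumed to match this module's repo, and exact behavior depends on your Feast version:

```python
from feast import FeatureStore

# Illustrative: definitions assumed to live in the feature repo (driver_repo.py)
from driver_repo import driver, driver_hourly_stats_view

# Point at the directory containing feature_store.yaml
store = FeatureStore(repo_path=".")

# Registers the objects in the registry (and updates online store
# infrastructure if needed), roughly what `feast apply` does
store.apply([driver, driver_hourly_stats_view])
```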
Sample output of feast apply:
```
Registered entity driver_id
Registered feature view driver_hourly_stats
Deploying infrastructure for driver_hourly_stats
```

We don't dive into this deeply, but you don't want to allow arbitrary users to clone the feature repository, change definitions and run feast apply. Thus, you should lock down your registry (e.g. with an S3 bucket policy) to only allow changes from your CI/CD user and perhaps some ML engineers.
Feast comes with an experimental Web UI. Users can already spin this up locally with feast ui, but you may want to have a Web UI that is universally available. Here, you'd likely deploy a service that runs feast ui on top of a feature_store.yaml, with some configuration on how frequently the UI should be refreshing its registry.
Many Feast users use tags on objects extensively. Some examples of how this may be used:
- To give more detailed documentation on a `FeatureView`
- To highlight what groups you need to join to gain access to certain feature views.
- To denote whether a feature service is in production or in staging.
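As a rough illustration of the last point, tags are free-form key-value strings on Feast objects. The sketch below builds on the `driver_hourly_stats` view sketched earlier, and the tag keys and values here are made up:

```python
from feast import FeatureService

# Illustrative feature service for a model version; the referenced feature view
# and the tag keys below are examples, not a fixed Feast convention
model_v1_features = FeatureService(
    name="model_v1",
    features=[driver_hourly_stats],  # assumes this FeatureView is defined in the repo
    tags={
        "stage": "production",
        "owner": "ml-platform@company.com",
        "docs": "Features used by the driver ranking model, v1",
    },
)
```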
Additionally, users will often want to have a dev/staging environment that's separate from production. In this case, one pattern that works is to have separate projects:
```
├── .github
│   └── workflows
│       ├── production.yml
│       └── staging.yml
│
├── staging
│   ├── driver_repo.py
│   └── feature_store.yaml
│
└── production
    ├── driver_repo.py
    └── feature_store.yaml
```

Data scientists will be using or authoring features in Feast.
There are two ways they can use Feast:
- Use Feast primarily as a way of pulling production ready features.
  - See the `client/` folder for an example of how users can pull features by only having a `feature_store.yaml` (see the sketch after this list).
  - This is not recommended since data scientists cannot register feature services to indicate they depend on certain features in production.
- [Recommended] Have a local copy of the feature repository (e.g. `git clone`) and author / iterate / re-use features.
  - Data scientists can:
    - iterate on features locally
    - apply features to their own dev project with a local registry & experiment
    - build feature services in preparation for production
    - submit PRs to include features that should be used in production (including A/B experiments, or model training iterations)
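For the first flow, a minimal sketch of what pulling features with only a `feature_store.yaml` might look like. The repo path and feature references are illustrative; they must match what's registered in the remote registry:

```python
import pandas as pd
from feast import FeatureStore

# Point at a directory containing only feature_store.yaml; the registry in S3
# supplies the actual feature definitions
store = FeatureStore(repo_path="client/")

# Entities and timestamps to join features onto
entity_df = pd.DataFrame(
    {
        "driver_id": [1001, 1002],
        "event_timestamp": [pd.Timestamp.now(), pd.Timestamp.now()],
    }
)

# Feature references are placeholders; use views registered in your registry
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=["driver_hourly_stats:conv_rate"],
).to_df()
print(training_df.head())
```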
Data scientists can also investigate other models and their dependent features / data sources / on demand transformations through the repository or through the Web UI (by running `feast ui`).
Data scientists or ML engineers can use the defined FeatureService (corresponding to model versions) and schedule regular jobs that generate batch predictions (or regularly retrain).
Feast currently requires timestamps in `get_historical_features`, so to fetch the latest feature values you'll need to append an event timestamp of now(), e.g.:
```python
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # directory containing feature_store.yaml

# Get the latest feature values for unique entities
entity_df = pd.DataFrame.from_dict({"driver_id": [1001, 1002, 1003, 1004, 1005]})
entity_df["event_timestamp"] = pd.to_datetime("now")
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=store.get_feature_service("model_v2"),
).to_df()

# Make batch predictions (assumes `model` is an already trained model object)
predictions = model.predict(training_df)
```

As a result:
- You have file sources (possibly remote) and a remote registry (e.g. in S3)
- Data scientists are able to author + reuse features based on a centrally managed registry.
- ML engineers are able to use these same features with a reference to the registry to regularly generate predictions on the latest timestamp.
- You have CI/CD setup to automatically update the registry + online store infrastructure when changes are merged into the version controlled feature repo.

