
Commit 810b42d

Add more conceptual descriptions in module 1
Signed-off-by: Danny Chiao <danny@tecton.ai>
1 parent 2fc11a8 commit 810b42d

File tree

1 file changed: +39 −5 lines changed

module_1/README.md

````diff
@@ -12,8 +12,9 @@ In this module, we focus on building features for online serving, and keeping th
 - [Workshop](#workshop)
 - [Step 1: Install Feast](#step-1-install-feast)
-- [Step 2: Spin up Kafka + Redis + Feast services](#step-2-spin-up-kafka--redis--feast-services)
-- [Step 3: Materialize batch features & ingest streaming features](#step-3-materialize-batch-features--ingest-streaming-features)
+- [Step 2: Inspect the `feature_store.yaml`](#step-2-inspect-the-feature_storeyaml)
+- [Step 3: Spin up Kafka + Redis + Feast services](#step-3-spin-up-kafka--redis--feast-services)
+- [Step 4: Materialize batch features & ingest streaming features](#step-4-materialize-batch-features--ingest-streaming-features)
 - [A note on Feast feature servers + push servers](#a-note-on-feast-feature-servers--push-servers)
 - [Conclusion](#conclusion)
 - [FAQ](#faq)
````
````diff
@@ -28,11 +29,44 @@ First, we install Feast with Spark and Redis support:
 pip install "feast[spark,redis]"
 ```
 
-## Step 2: Spin up Kafka + Redis + Feast services
+## Step 2: Inspect the `feature_store.yaml`
+
+```yaml
+project: feast_demo_local
+provider: local
+registry:
+  path: data/local_registry.db
+  cache_ttl_seconds: 5
+online_store:
+  type: redis
+  connection_string: localhost:6379
+offline_store:
+  type: file
+```
+
+The key thing to note for now is that the online store is configured to be Redis. This configuration targets a single Redis node. To use a Redis cluster instead, you'd change it to something like:
+
+```yaml
+project: feast_demo_local
+provider: local
+registry:
+  path: data/local_registry.db
+  cache_ttl_seconds: 5
+online_store:
+  type: redis
+  redis_type: redis_cluster
+  connection_string: "redis1:6379,redis2:6379,ssl=true,password=my_password"
+offline_store:
+  type: file
+```
+
+Because Feast uses `redis-py` under the hood, it also works well with hosted Redis offerings like AWS ElastiCache ([docs](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/ElastiCache-Getting-Started-Tutorials-Connecting.html)).
+
+## Step 3: Spin up Kafka + Redis + Feast services
 
 We then use Docker Compose to spin up a local Kafka cluster and automatically publish events to it.
 - This leverages a script (in `kafka_demo/`) that creates a topic, reads from `feature_repo/data/driver_stats.parquet`, generates newer timestamps, and emits them to the topic.
-- This also deploys a Feast push server (on port 6567) + a Feast feature server (on port 6566). The Dockerfile mostly delegates to calling the `feast serve` CLI command:
+- This also deploys a Feast push server (on port 6567) + a Feast feature server (on port 6566). These servers embed a `feature_store.yaml` file that enables them to connect to a remote registry. The Dockerfile mostly delegates to the `feast serve` CLI command, which starts a Feast Python feature server ([docs](https://docs.feast.dev/reference/feature-servers/python-feature-server)):
 ```yaml
 FROM python:3.7
````
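Conceptually, the `kafka_demo/` script referenced above re-stamps historical rows with fresh event timestamps before emitting them to the topic. A minimal stdlib sketch of that idea (field names, sample values, and the topic name are assumptions; the real script reads `feature_repo/data/driver_stats.parquet`):

```python
# Hypothetical sketch of the kafka_demo timestamp-rewriting step.
# Field names and values are illustrative, not taken from the repo.
import json
from datetime import datetime, timedelta, timezone

def restamp(rows, start=None, step_seconds=1):
    """Return copies of `rows` with event_timestamp rewritten to 'now'-ish values."""
    start = start or datetime.now(timezone.utc)
    out = []
    for i, row in enumerate(rows):
        fresh = dict(row)
        fresh["event_timestamp"] = (start + timedelta(seconds=i * step_seconds)).isoformat()
        out.append(fresh)
    return out

historical = [
    {"driver_id": 1001, "conv_rate": 0.23, "event_timestamp": "2021-04-12T08:12:10"},
    {"driver_id": 1002, "conv_rate": 0.81, "event_timestamp": "2021-04-12T08:12:11"},
]
events = restamp(historical)

# A Kafka producer would then emit each event to the topic, e.g.:
# producer.send("drivers", json.dumps(event).encode("utf-8"))
for event in events:
    print(json.dumps(event))
```

The point of the rewrite is that stream ingestion only matters for rows newer than what was last materialized, so replaying old data verbatim would be a no-op for the online store.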
````diff
@@ -62,7 +96,7 @@ Attaching to zookeeper, redis, broker, feast_push_server, feast_feature_server,
 ...
 ```
 
-## Step 3: Materialize batch features & ingest streaming features
+## Step 4: Materialize batch features & ingest streaming features
 
 We'll switch gears into a Jupyter notebook. This will guide you through:
 - Registering a `FeatureView` that has a single schema across both a batch source (`FileSource`) with aggregate features and a stream source (`PushSource`).
````
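The push server (port 6567) and feature server (port 6566) deployed in this diff speak the Feast Python feature server's HTTP API. A hedged sketch of the JSON bodies a client might build for them (the feature reference, entity key, and push source name are assumptions, not taken from this repo's `feature_repo/`):

```python
# Hypothetical client-side payloads for the two servers; names are assumed.
import json

FEATURE_SERVER = "http://localhost:6566"  # started by `feast serve`
PUSH_SERVER = "http://localhost:6567"     # Feast push server

def online_features_payload(features, entity_rows):
    """Body for POST {FEATURE_SERVER}/get-online-features."""
    return {"features": features, "entities": entity_rows}

def push_payload(push_source_name, rows):
    """Body for POST {PUSH_SERVER}/push; `rows` is a columnar dict."""
    return {"push_source_name": push_source_name, "df": rows}

read_body = online_features_payload(
    ["driver_hourly_stats:conv_rate"],  # assumed feature reference
    {"driver_id": [1001, 1002]},        # assumed entity key
)
write_body = push_payload(
    "driver_stats_push_source",         # assumed push source name
    {"driver_id": [1001], "event_timestamp": ["2022-07-07 10:59:42"], "conv_rate": [0.85]},
)

# With the Docker Compose services running, these would be sent as, e.g.:
# requests.post(f"{PUSH_SERVER}/push", data=json.dumps(write_body))
# requests.post(f"{FEATURE_SERVER}/get-online-features", data=json.dumps(read_body))
print(json.dumps(read_body))
```

Pushing writes the row straight into the Redis online store, so a subsequent `get-online-features` call for the same entity reflects the streamed value.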
