Commit 46738cb

Update order of user groups
Signed-off-by: Danny Chiao <danny@tecton.ai>
1 parent: cdc5bfa

File tree

1 file changed: +10 −10 lines


module_0/README.md

Lines changed: 10 additions & 10 deletions
@@ -10,8 +10,8 @@ We focus on a specific example (that does not include online features + models):
 
 - [Installing Feast](#installing-feast)
 - [Reviewing Feast concepts](#reviewing-feast-concepts)
-- [User flows](#user-flows)
-- [User flow 1: ML Platform Team](#user-flow-1-ml-platform-team)
+- [User groups](#user-groups)
+- [User group 1: ML Platform Team](#user-group-1-ml-platform-team)
 - [Step 0: Setup S3 bucket for registry and file sources](#step-0-setup-s3-bucket-for-registry-and-file-sources)
 - [Step 1: Setup the feature repo](#step-1-setup-the-feature-repo)
 - [Step 1a: Use your configured S3 bucket](#step-1a-use-your-configured-s3-bucket)
@@ -25,14 +25,14 @@ We focus on a specific example (that does not include online features + models):
 - [Step 2d (optional): Setup a Web UI endpoint](#step-2d-optional-setup-a-web-ui-endpoint)
 - [Step 2e (optional): Merge a sample PR in your fork](#step-2e-optional-merge-a-sample-pr-in-your-fork)
 - [Other best practices](#other-best-practices)
-- [User flow 2: ML Engineers](#user-flow-2-ml-engineers)
+- [User group 2: ML Engineers](#user-group-2-ml-engineers)
 - [Step 1: Fetch features for batch scoring](#step-1-fetch-features-for-batch-scoring)
 - [Step 2 (optional): Scaling to large datasets](#step-2-optional-scaling-to-large-datasets)
-- [User flow 3: Data Scientists](#user-flow-3-data-scientists)
+- [User group 3: Data Scientists](#user-group-3-data-scientists)
 - [Conclusion](#conclusion)
 
 # Installing Feast
-Before we get started, first install Feast with AWS dependencies. Due to a bug in Feast 0.21, we'll also need s3fs for this tutorial to directly fetch from an S3 source:
+Before we get started, first install Feast with AWS dependencies. Due to a bug in Feast 0.21, we'll also need s3fs for this tutorial to directly fetch from an S3 data source:
 
 ```bash
 pip install "feast[aws]"
@@ -52,10 +52,10 @@ Let's quickly review some Feast concepts needed to build this use case. You'll n
 | Offline store | The compute that Feast will use to execute point in time joins. Here we use `file` |
 | Online store | The low-latency storage Feast can materialize offline feature values to power online inference. In this module, we do not need one. |
 
-# User flows
-There are three user groups here worth considering. The ML platform team, the data scientists, and the ML engineers scheduling models in batch.
+# User groups
+There are three user groups here worth considering. The ML platform team, the ML engineers running batch inference on models, and the data scientists building the model.
 
-## User flow 1: ML Platform Team
+## User group 1: ML Platform Team
 The team here sets up the centralized Feast feature repository in GitHub. This is what's seen in `feature_repo_aws/`.
 
 ### Step 0: Setup S3 bucket for registry and file sources
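As an aside, the "point in time joins" named in the concepts table above can be illustrated independently of Feast with pandas `merge_asof`. This is only a sketch of the idea; the table names, columns, and values below are made up for illustration:

```python
# Illustrative only: a point-in-time join of the kind an offline store
# performs, sketched with pandas. All names and values here are hypothetical.
import pandas as pd

# Feature values as they became available over time.
features = pd.DataFrame({
    "driver_id": [1, 1, 2],
    "event_timestamp": pd.to_datetime(["2022-01-01", "2022-01-03", "2022-01-02"]),
    "trips_today": [5, 8, 3],
}).sort_values("event_timestamp")

# Entity rows: for each (driver, timestamp), we want the latest feature value
# known at or before that timestamp, so no future data leaks into training.
entities = pd.DataFrame({
    "driver_id": [1, 2],
    "event_timestamp": pd.to_datetime(["2022-01-02", "2022-01-02"]),
}).sort_values("event_timestamp")

# direction="backward" picks the most recent feature row per driver
# whose timestamp is <= the entity timestamp.
joined = pd.merge_asof(
    entities, features,
    on="event_timestamp", by="driver_id", direction="backward",
)
print(joined[["driver_id", "trips_today"]])
```

Driver 1's row at 2022-01-02 picks up the 2022-01-01 value, not the later 2022-01-03 one, which is the leakage-avoidance property the offline store provides.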
@@ -373,7 +373,7 @@ Additionally, users will often want to have a dev/staging environment that's sep
 ├── driver_repo.py
 └── feature_store.yaml
 ```
-## User flow 2: ML Engineers
+## User group 2: ML Engineers
 
 Data scientists or ML engineers can use the defined `FeatureService` (corresponding to model versions) and schedule regular jobs that generate batch predictions (or regularly retrain).
 
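One simple way to "schedule regular jobs" as described above is plain cron; Feast itself does not prescribe a scheduler. This is a hypothetical sketch, where the repository path and `score_batch.py` are placeholders for your own paths and scoring script:

```
# Hypothetical crontab entry: rerun batch scoring nightly at 02:00.
# score_batch.py stands in for whatever script calls
# get_historical_features(...) and writes out predictions.
0 2 * * * cd /path/to/feature_repo_aws && python score_batch.py
```

In practice an orchestrator such as Airflow fills the same role with better retry and monitoring behavior.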
@@ -434,7 +434,7 @@ path = store.get_historical_features(
 # Continue with distributed training or batch predictions from the BigQuery dataset.
 ```
 
-## User flow 3: Data Scientists
+## User group 3: Data Scientists
 Data scientists will be using or authoring features in Feast. They can similarly generate in memory dataframes using `get_historical_features(...).to_df()` or larger datasets with methods like `get_historical_features(...).to_bigquery()` as described above.
 
 We don't need to do anything new here since data scientists will be doing many of the same steps you've seen in previous user flows.

0 commit comments
