pubsub/streaming-analytics/README.md (+10 lines changed: 10 additions & 0 deletions)
@@ -61,6 +61,12 @@ Sample(s) showing how to use [Google Cloud Pub/Sub] with [Google Cloud Dataflow]
     gcloud scheduler jobs run publisher-job
     ```
 
+1. Set `REGION` to a Dataflow [regional endpoint].
+
+   ```
+   export REGION=your-cloud-region
+   ```
+
 ## Setup
 
 The following instructions will help you prepare your development environment.
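Applied in a shell, the added step amounts to exporting a single environment variable. A minimal sketch (the region value here is illustrative; substitute whichever Dataflow regional endpoint is closest to your other resources):

```shell
# Illustrative region value; any supported Dataflow regional endpoint works.
export REGION=us-central1
echo "Dataflow region: $REGION"
```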
@@ -91,6 +97,7 @@ The following instructions will help you prepare your development environment.
 The following example will run a streaming pipeline. It will read messages from a Pub/Sub topic, then window them into fixed-sized intervals, and write one file per window into a GCS location.
 
 `--project`: sets the Google Cloud project ID to run the pipeline on
+`--region`: sets the Dataflow regional endpoint
 `--inputTopic`: sets the input Pub/Sub topic to read messages from
 `--output`: sets the output GCS path prefix to write files to
 `--runner [optional]`: specifies the runner to run the pipeline, defaults to `DirectRunner`
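Putting the documented flags together, an invocation's arguments might be assembled as below. This is only a sketch: the project, topic, and bucket names are hypothetical placeholders, and the actual launch command (e.g. `mvn exec:java` or `python -m ...`) depends on the sample's language and build tool.

```shell
# All resource names below are hypothetical placeholders for illustration.
PIPELINE_ARGS="--project=my-project-id \
--region=us-central1 \
--inputTopic=projects/my-project-id/topics/my-topic \
--output=gs://my-bucket/samples/output \
--runner=DataflowRunner"

# Omitting --runner would fall back to the default DirectRunner.
echo "$PIPELINE_ARGS"
```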