This project is for testing load generation scenarios against Temporal. This is primarily used by the Temporal team to benchmark features and situations. Backwards compatibility may not be maintained.
Omes (pronounced oh-mess) is the Hebrew word for "load" (עומס).
- Go 1.25+
- `protoc` + `protoc-gen-go` or `mise` for the Kitchen Sink workflow
- Java 8+
- TypeScript: Node 16+
- Python: uv
- .NET
And if you're running the fuzzer (see below): Rust, for the `kitchen-sink-gen` tool.
SDK and tool versions are defined in `versions.env`.
This (simplified) diagram shows the main components of Omes:
```mermaid
flowchart TD
    subgraph "CLI"
        RunWorker["run-worker"]
        RunScenario["run-scenario"]
        RunScenarioWithWorker["run-scenario-with-worker"]
    end
    Workers["Worker(s)"]
    WorkflowsAndActivities["Workflows and Activities"]
    RunScenarioWithWorker --> RunWorker
    RunScenarioWithWorker --> RunScenario
    RunWorker --> |"start"| Workers
    RunScenario --> |"start"| Scenario
    Scenario --> |"start"| Executor
    Workers --> |"consume"| WorkflowsAndActivities
    Executor --> |"produce"| WorkflowsAndActivities
```
- Scenario: starts an Executor to run a particular load configuration
- Executor: produces concurrent executions of the workflows and activities requested by the Scenario
- Workers: consume the workflows and activities started by the Executor
Omes includes a process metrics sidecar: a Go-based HTTP server that monitors CPU and memory usage of the worker process. The sidecar is started by `run-worker` (or `run-scenario-with-worker`) after spawning the worker subprocess.
Endpoints:
- `/metrics` - Prometheus-format metrics (`process_cpu_percent`, `process_resident_memory_bytes`)
- `/info` - JSON metadata (`sdk_version`, `build_id`, `language`)
Configuration flags:
- `--worker-process-metrics-address` - Address for the sidecar HTTP server (required to enable)
- `--worker-metrics-version-tag` - SDK version to report in `/info` (defaults to `--version`)
Example:
```
go run ./cmd run-worker --language python --run-id my-run \
    --worker-process-metrics-address :9091 \
    --worker-metrics-version-tag v1.24.0
```

Metrics export:
When using the Prometheus instance (`--prom-instance-addr`), sidecar metrics can be exported to parquet files on shutdown. The export includes the SDK version, build ID, and language from the `/info` endpoint.
- `--prom-export-worker-metrics` - Path to export metrics parquet file
- `--prom-export-process-job` - Prometheus job name for process metrics (default: `omes-worker-process`)
- `--prom-export-worker-info-address` - Address to fetch `/info` from during export (e.g., `localhost:9091`)
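As a minimal sketch of consuming the `/info` payload described above: only the JSON keys (`sdk_version`, `build_id`, `language`) come from this document; the struct and function names here are illustrative, not part of Omes.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WorkerInfo mirrors the JSON metadata served by the sidecar's /info
// endpoint. Field names follow the keys listed above; the struct itself
// exists only for this sketch.
type WorkerInfo struct {
	SDKVersion string `json:"sdk_version"`
	BuildID    string `json:"build_id"`
	Language   string `json:"language"`
}

// parseWorkerInfo decodes an /info response body.
func parseWorkerInfo(body []byte) (WorkerInfo, error) {
	var info WorkerInfo
	err := json.Unmarshal(body, &info)
	return info, err
}

func main() {
	// Example payload of the shape described above.
	body := []byte(`{"sdk_version":"v1.24.0","build_id":"abc123","language":"python"}`)
	info, err := parseWorkerInfo(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s worker, SDK %s, build %s\n", info.Language, info.SDKVersion, info.BuildID)
}
```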
Scenarios are defined using plain Go code and live in the `scenarios` folder. Several are already defined and ready to use.
A scenario must select an Executor. The most common is the KitchenSinkExecutor which is a wrapper on the
GenericExecutor specific for executing the Kitchen Sink workflow. The Kitchen Sink workflow accepts
actions and is implemented in every worker language.
For example, here is `scenarios/workflow_with_single_noop_activity.go`:
```go
func init() {
	loadgen.MustRegisterScenario(loadgen.Scenario{
		Description: "Each iteration executes a single workflow with a noop activity.",
		ExecutorFn: func() loadgen.Executor {
			return loadgen.KitchenSinkExecutor{
				TestInput: &kitchensink.TestInput{
					WorkflowInput: &kitchensink.WorkflowInput{
						InitialActions: []*kitchensink.ActionSet{
							kitchensink.NoOpSingleActivityActionSet(),
						},
					},
				},
			}
		},
	})
}
```

NOTE: The file name where the `Register` function is called will be used as the name of the scenario.
- Use snake case for scenario file names.
- Use `KitchenSinkExecutor` for most basic scenarios, adding common/generic actions as needed, but for unique scenarios use `GenericExecutor`.
- When using `GenericExecutor`, use methods of `*loadgen.Run` in your `Execute` as much as possible.
- Liberally add helpers to the `loadgen` package that will be useful to other scenario authors.
During local development it's typically easiest to run both the worker and the scenario together.
You can do that as follows. If you want an embedded server rather than one you've already started,
pass `--embedded-server`.
```
go run ./cmd run-scenario-with-worker --scenario workflow_with_single_noop_activity --language go
```

Notes:
- Cleanup is not automatically performed here
- Accepts combined flags for the `run-worker` and `run-scenario` commands
```
go run ./cmd run-worker --run-id local-test-run --language go
```

Notes:
- `--embedded-server` can be passed here to start an embedded localhost server
- `--task-queue-suffix-index-start` and `--task-queue-suffix-index-end` represent an inclusive range for running the worker on multiple task queues. The process will create a worker for every task queue from `<task-queue>-<start>` through `<task-queue>-<end>`. This only applies to multi-task-queue scenarios.
```
go run ./cmd run-scenario --scenario workflow_with_single_noop_activity --run-id local-test-run
```

Notes:
- Run ID is used to derive ID prefixes and the task queue name; it should be used to start a worker on the correct task queue and by the cleanup script.
- By default, the number of iterations or the duration comes from the scenario config; both can be overridden with CLI flags.
- See help output for available flags.
```
go run ./cmd cleanup-scenario --scenario workflow_with_single_noop_activity --run-id local-test-run
```

The `--version` flag can be used to specify a version of the SDK to use. It accepts either
a version number like `v1.24.0` or a local path to use a local SDK version.
This is useful while testing unreleased or in-development versions of the SDK.
```
go run ./cmd run-scenario-with-worker --scenario workflow_with_single_noop_activity --language go --version /path/to/go-sdk
```

For example, to build a Go worker image using v1.24.0 of the Temporal Go SDK:
```
go run ./cmd/dev build-worker-image --language go --version v1.24.0
```

This will produce an image tagged like `<current git commit hash>-go-v1.24.0`.
If `--version` is not specified, the SDK version specified in `versions.env` will be used.
Publishing images is done via CI, using the build-push-worker-image command.
See the GHA workflows for more information.
The project scenario is different from the built-in kitchen-sink style scenarios. It is intended
to be a more ergonomic way of writing load tests, akin to writing a Temporal sample.
Instead of driving workflows and activities that Omes owns (through a client that Omes owns), the
project scenario enables you to write a project where you configure the worker(s), client, and load
(workflows, activities, Nexus operations, etc.), and pass them as entrypoints to a harness. Omes calls into
this harness repeatedly (i.e. for 'x' --iterations or 'y' --duration) through a gRPC interface to
generate load.
As such, it's a good fit if you:
- need to write a load test
- are more familiar with Temporal-native code than Omes's framework
and are not restricted by the current limitations:
- Python is the only implemented project language right now
- the load pattern is limited to a steady-rate executor (i.e. "run 'x' times or run for 'y' duration"); more nuanced load patterns will need their own scenario + executor (the existing method)
Writing a project should be fairly similar to writing a Temporal sample, and requires only a little familiarity with the project harness interface.
To get started:
- Projects should be written at `workers/<lang>/projects/tests/<name>`; this is the path convention Omes expects when building your project
- Use the example `helloworld` project as a reference to get started
- Take a look at `workers/python/harness/src/harness/__init__.py` as an entrypoint to the harness and how it works
On the worker-side, it's fairly similar to how Omes starts and runs workers already:
- Omes builds a program that can start a worker via a command
- Omes executes the command that starts the worker process against the built program

The only difference is that the program is now your project + harness.
On the runner-side, it's a bit different than typical Omes scenarios because the runner needs to call into the harness to drive load. Consequently, it too needs to build your project + harness to run. So the flow looks like:
- Omes runner builds a program that can start a gRPC server (`project-server`) via a command; `project-server` is a language-agnostic interface for the runner to tell your project to create load
- Omes runner executes the command that starts the `project-server` in another process
- Omes runner calls into the gRPC server repeatedly over the course of the load test to spawn load
- Omes runner shuts down `project-server` at the conclusion of the load test
To run a project:
- Run your project worker:
```
go run ./cmd run-worker \
    --language python \
    --project-name helloworld \
    --run-id local-project-test \
    --server-address <your server address>
```

Make sure to point your `--server-address` at a running Temporal server. Alternatively, pass `--embedded-server` to spin one up for you.
- Run the project scenario
```
go run ./cmd run-scenario \
    --scenario project \
    --iterations 1 \
    --server-address <your server address> \
    --run-id local-project-test \
    --option language=python \
    --option project-name=helloworld
```

For local all-in-one development, run the worker and scenario together:
```
go run ./cmd run-scenario-with-worker \
    --scenario project \
    --iterations 1 \
    --language python \
    --project-name helloworld \
    --run-id local-project-test \
    --embedded-server \
    --option language=python \
    --option project-name=helloworld
```

To run a project via Docker:
```
docker build \
    -f dockerfiles/python.Dockerfile \
    --build-arg PROJECT_NAME=helloworld \
    --build-arg SDK_VERSION=v1.25.0 \
    -t omes-python-project-helloworld .

docker network create omes-project-net

docker run -d --rm \
    --name omes-python-project-worker \
    --network omes-project-net \
    omes-python-project-helloworld \
    --run-id local-project-test \
    --embedded-server-address 0.0.0.0:7233

docker run --rm \
    --network omes-project-net \
    omes-python-project-helloworld \
    run-scenario \
    --scenario project \
    --run-id local-project-test \
    --server-address omes-python-project-worker:7233 \
    --iterations 1 \
    --connect-timeout 30s

docker stop omes-python-project-worker

docker network rm omes-project-net
```

This Docker workflow is not yet wired into `go run ./cmd/dev build-worker-image`.
The `throughput_stress` scenario can be configured to run "sleep" activities with different configurations.
The configuration is done via a JSON file, which is passed to the scenario with the
`--option sleep-activity-per-priority-json=@<file>` flag. Example:
```
echo '{"count":{"type":"fixed","value":5},"groups":{"high":{"weight":2,"sleepDuration":{"type":"uniform","min":"2s","max":"4s"}},"low":{"weight":3,"sleepDuration":{"type":"discrete","weights":{"5s":3,"10s":1}}}}}' > sleep.json
go run ./cmd run-scenario-with-worker --scenario throughput_stress --language go --option sleep-activity-json=@sleep.json --run-id default-run-id
```
This runs 5 sleep activities per iteration, where "high" has a weight of 2 and sleeps for a random duration between 2-4s, and "low" has a weight of 3 and sleeps for either 5s or 10s.
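For readability, the same configuration from the `echo` one-liner above, pretty-printed:

```json
{
  "count": { "type": "fixed", "value": 5 },
  "groups": {
    "high": {
      "weight": 2,
      "sleepDuration": { "type": "uniform", "min": "2s", "max": "4s" }
    },
    "low": {
      "weight": 3,
      "sleepDuration": { "type": "discrete", "weights": { "5s": 3, "10s": 1 } }
    }
  }
}
```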
Look at `DistributionField` to learn more about the different kinds of distributions.
The `throughput_stress` scenario can generate Nexus load if the scenario is started with `--option nexus-endpoint=my-nexus-endpoint`:

- Create the Nexus endpoint, targeting the scenario's task queue:
```
# --target-namespace: change if needed
temporal operator nexus endpoint create \
    --name my-nexus-endpoint \
    --target-namespace default \
    --target-task-queue throughput_stress:default-run-id
```
- Start the scenario with the given run-id:
```
go run ./cmd run-scenario-with-worker --scenario throughput_stress --language go --option nexus-endpoint=my-nexus-endpoint --run-id default-run-id
```
The fuzzer scenario makes use of the kitchen sink workflow (see below) to exercise a wide
range of possible actions. Actions are pre-generated by the kitchen-sink-gen tool, written in
Rust, and are some combination of actions provided to the workflow as input, and actions to be
run by a client inside the scenario executor.
You can run the fuzzer with new random actions like so:
```
go run ./cmd run-scenario-with-worker --scenario fuzzer --iterations 1 --language cs
```

The fuzzer automatically creates a Nexus endpoint and generates Nexus operations. To use an existing endpoint instead:
```
go run ./cmd run-scenario-with-worker --scenario fuzzer --iterations 1 --language go --option nexus-endpoint=my-endpoint
```

By default, the scenario will spit out a `last_fuzz_run.proto` binary file containing the generated
actions. To re-run the same set of actions, you can pass in such a file like so:
```
go run ./cmd run-scenario-with-worker --scenario fuzzer --iterations 1 --language cs --option input-file=last_fuzz_run.proto
```

Or you can run with a specific seed (seeds are printed at the start of the scenario):
```
go run ./cmd run-scenario-with-worker --scenario fuzzer --iterations 1 --language cs --option seed=131962944538087455
```

However, the fuzzer is also sensitive to its configuration: a seed will only reproduce the exact same set of actions if the config has also not changed. For that reason, prefer saving binary files over seeds.
Please do collect interesting fuzz cases in the `scenarios/fuzz_cases.yaml` file. This file
currently has seeds, but could also easily reference stored binary files instead.
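A hypothetical shape for such an entry, using the seed shown earlier (check `scenarios/fuzz_cases.yaml` for the actual schema; the field names here are illustrative only):

```yaml
# Illustrative only - the real file defines its own schema.
cases:
  - seed: 131962944538087455
```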
The Kitchen Sink workflow accepts a DSL generated by the `kitchen-sink-gen` Rust tool, allowing us
to test a wide variety of scenarios without having to imagine every possible edge case that could
come up in workflows. Input may be saved for regression testing, or hand-written for specific cases.
Build by running `go run ./cmd/dev build kitchensink`.
Test by running `go test -v ./loadgen -run TestKitchenSink`.
Prefix with the env variable `SDK=<sdk>` to test a specific SDK only.
A scenario can only fail if an `Execute` method returns an error, which means control is fully in the scenario
author's hands. To enforce a timeout for a scenario, use options like workflow execution timeouts, or write a
workflow that waits for a signal for a configurable amount of time.
- Nicer output that includes resource utilization for the worker (when running all-in-one)
- Ruby worker
Use the `dev` command for development tasks:

```
go run ./cmd/dev install          # Install tools (default: all)
go run ./cmd/dev lint-and-format  # Lint and format workers (default: all)
go run ./cmd/dev test             # Test workers (default: all)
go run ./cmd/dev build            # Build worker images (default: all)
go run ./cmd/dev clean            # Clean worker artifacts (default: all)
go run ./cmd/dev build-proto      # Build kitchen-sink proto
```

Or target specific languages: `go run ./cmd/dev build go java python`
All versions are defined in `versions.env`.