From 4dced14ad5d5430ff7e256b13b7585ae1ac6fae6 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Mon, 27 Apr 2026 18:13:29 +0100 Subject: [PATCH 1/8] chore: fix CONTRIBUTING.md setup steps and scope db:seed to webapp (#3450) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary Two fixes that together get a fresh-machine setup working from `CONTRIBUTING.md` end-to-end with no manual workarounds: ### `CONTRIBUTING.md` - Fix wrong path in the migration walkthrough: `cd packages/database` → `cd internal-packages/database`. The current path doesn't exist; this breaks step 2 for every contributor adding a migration. - Renumber duplicate `4.` steps in **Adding migrations** and the skipped `5.` in the hello-world **Running** section. - Combine three sequential `pnpm run build --filter ...` calls into one (Turbo parallelizes filters): `pnpm run build --filter webapp --filter trigger.dev --filter @trigger.dev/sdk`. - Add a `pnpm run db:seed` step after migrate. The seed creates the local user, `References` org, and reference projects (including `hello-world` with the stable `proj_rrkpdguyagvsoktglnod`). Removes the manual instruction to edit the `externalRef` column in Postgres. - Mention ClickHouse and the ClickHouse migrator alongside Postgres/Redis in the Docker step (they're already part of `pnpm run docker`, just invisible in the docs). - Remove the V1-era **Add sample jobs** section. `references/job-catalog` no longer exists; the hello-world flow above replaces it. ### `turbo.json` Scope `db:seed` to `webapp#db:seed → webapp#build`. The previous root-level entry queued `build` for every workspace package — including `references-*`, `docs`, `kubernetes-provider`, `coordinator`, etc. Only `webapp` actually has a `db:seed` script, so the rest of those builds were dead weight. Worse: a single broken reference (today, `references-realtime-hooks-test` failing under Turbopack with `node:fs/promises`) kills the whole seed pipeline. 
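For reference, the scoped entry (full diff below) is just:

```json
"webapp#db:seed": {
  "cache": false,
  "dependsOn": ["webapp#build"]
}
```
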
After the change, `turbo run db:seed --dry-run` plan drops from 27 tasks to 20 — only `webapp` and its real transitive workspace deps. Reference projects no longer block seeding. ## Test plan - [x] Fresh-machine setup followed end-to-end on a wiped Postgres + ClickHouse: migrate → seed → build → webapp → CLI login → `trigger dev` → triggered `hello-world`, run completed with `{"message":"Hello, world!"}`. - [x] `turbo run db:seed --dry-run=json` confirms 20 tasks, all webapp deps, no reference packages. - [ ] CI green on the renamed turbo task name. --- CONTRIBUTING.md | 88 +++++++++++++++---------------------------------- turbo.json | 4 +-- 2 files changed, 29 insertions(+), 63 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 88e24cba4f0..4d54b0df9d4 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -71,7 +71,7 @@ branch are tagged into a release periodically. Feel free to update `SESSION_SECRET` and `MAGIC_LINK_SECRET` as well using the same method. -8. Start Docker. This starts the required services like Postgres & Redis. If this is your first time using Docker, consider going through this [guide](DOCKER_INSTALLATION.md) +8. Start Docker. This starts the required services: Postgres, Redis, Electric, and ClickHouse (the ClickHouse migrator runs once on first start). If this is your first time using Docker, consider going through this [guide](DOCKER_INSTALLATION.md). ``` pnpm run docker @@ -81,11 +81,15 @@ branch are tagged into a release periodically. ``` pnpm run db:migrate ``` -10. Build everything +10. Build the webapp, CLI, and SDK ``` - pnpm run build --filter webapp && pnpm run build --filter trigger.dev && pnpm run build --filter @trigger.dev/sdk + pnpm run build --filter webapp --filter trigger.dev --filter @trigger.dev/sdk ``` -11. Run the app. See the section below. +11. Seed the database. This creates a local user, a `References` org, and the reference projects (including `hello-world`) with stable IDs. 
+ ``` + pnpm run db:seed + ``` +12. Run the app. See the section below. ## Running @@ -105,22 +109,17 @@ We use the `/references/hello-world` subdirectory as a staging ground for ### First-time setup -First, make sure you are running the webapp according to the instructions above. Then: - -1. Visit http://localhost:3030 in your browser and create a new project called "hello-world". +First, make sure you are running the webapp according to the instructions above. The seed step from setup already created a `hello-world` project under the `References` org with the stable ref `proj_rrkpdguyagvsoktglnod` — log in at http://localhost:3030 with any email to access it. Then: -2. In Postgres go to the "Projects" table and for the project you create change the `externalRef` to `proj_rrkpdguyagvsoktglnod`. - -3. Build the CLI +1. Build the CLI (skip if you already ran the build step in setup) ```sh -# Build the CLI pnpm run build --filter trigger.dev # Make it accessible to `pnpm exec` pnpm i ``` -4. Change into the `/references/hello-world` directory and authorize the CLI to the local server: +2. Change into the `/references/hello-world` directory and authorize the CLI to the local server: ```sh cd references/hello-world @@ -168,24 +167,24 @@ If you want additional debug logging, you can use the `--log-level debug` flag: pnpm exec trigger dev --log-level debug ``` -6. If you make any changes in the CLI/Core/SDK, you'll need to `CTRL+C` to exit the `dev` command and restart it to pickup changes. Any changes to the files inside of the `hello-world/src/trigger` dir will automatically be rebuilt by the `dev` command. +5. If you make any changes in the CLI/Core/SDK, you'll need to `CTRL+C` to exit the `dev` command and restart it to pickup changes. Any changes to the files inside of the `hello-world/src/trigger` dir will automatically be rebuilt by the `dev` command. -7. 
Navigate to the `hello-world` project in your local dashboard at localhost:3030 and you should see the list of tasks. +6. Navigate to the `hello-world` project in your local dashboard at localhost:3030 and you should see the list of tasks. -8. Go to the "Test" page in the sidebar and select a task. Then enter a payload and click "Run test". You can tell what the payloads should be by looking at the relevant task file inside the `/references/hello-world/src/trigger` folder. Many of them accept an empty payload. +7. Go to the "Test" page in the sidebar and select a task. Then enter a payload and click "Run test". You can tell what the payloads should be by looking at the relevant task file inside the `/references/hello-world/src/trigger` folder. Many of them accept an empty payload. -9. Feel free to add additional files in `hello-world/src/trigger` to test out specific aspects of the system, or add in edge cases. +8. Feel free to add additional files in `hello-world/src/trigger` to test out specific aspects of the system, or add in edge cases. ## Adding and running migrations -1. Modify internal-packages/database/prisma/schema.prisma file -2. Change directory to the packages/database folder +1. Modify `internal-packages/database/prisma/schema.prisma`. +2. Change directory to the database package: ```sh - cd packages/database + cd internal-packages/database ``` -3. Create a migration +3. Create a migration: ``` pnpm run db:migrate:dev:create @@ -193,50 +192,17 @@ pnpm exec trigger dev --log-level debug This creates a migration file. Check the migration file does only what you want. If you're adding any database indexes they must use `CONCURRENTLY`, otherwise they'll lock the table when executed. -4. Run the migration. - -``` -pnpm run db:migrate:deploy -pnpm run generate -``` - -This executes the migrations against your database and applies changes to the database schema(s), and then regenerates the Prisma client. - -4. 
Commit generated migrations as well as changes to the schema.prisma file -5. If you're using VSCode you may need to restart the Typescript server in the webapp to get updated type inference. Open a TypeScript file, then open the Command Palette (View > Command Palette) and run `TypeScript: Restart TS server`. - -## Add sample jobs - -The [references/job-catalog](./references/job-catalog/) project defines simple jobs you can get started with. - -1. `cd` into `references/job-catalog` -2. Create a `.env` file with the following content, - replacing `` with an actual key: +4. Run the migration: -```env -TRIGGER_API_KEY=[TRIGGER_DEV_API_KEY] -TRIGGER_API_URL=http://localhost:3030 -``` - -`TRIGGER_API_URL` is used to configure the URL for your Trigger.dev instance, -where the jobs will be registered. - -3. Run one of the the `job-catalog` files: - -```sh -pnpm run events -``` - -This will open up a local server using `express` on port 8080. Then in a new terminal window you can run the trigger-cli dev command: - -```sh -pnpm run dev:trigger -``` + ``` + pnpm run db:migrate:deploy + pnpm run generate + ``` -See the [Job Catalog](./references/job-catalog/README.md) file for more. + This executes the migrations against your database and applies changes to the database schema(s), and then regenerates the Prisma client. -4. Navigate to your trigger.dev instance ([http://localhost:3030](http://localhost:3030/)), to see the jobs. - You can use the test feature to trigger them. +5. Commit the generated migration files as well as the changes to `schema.prisma`. +6. If you're using VSCode you may need to restart the TypeScript server in the webapp to get updated type inference. Open a TypeScript file, then open the Command Palette (View > Command Palette) and run `TypeScript: Restart TS server`. 
## Making a pull request diff --git a/turbo.json b/turbo.json index 025a7226472..8f2c862d030 100644 --- a/turbo.json +++ b/turbo.json @@ -35,10 +35,10 @@ "db:migrate:deploy": { "cache": false }, - "db:seed": { + "webapp#db:seed": { "cache": false, "dependsOn": [ - "build" + "webapp#build" ] }, "db:studio": { From e8f1a7a0a15986ee2b870f5ea50448fc005d12ca Mon Sep 17 00:00:00 2001 From: ThullyoCunha Date: Mon, 27 Apr 2026 18:15:27 -0300 Subject: [PATCH 2/8] fix(helm): expand CLICKHOUSE_PASSWORD in webapp CLICKHOUSE_URL via kubelet (#3449) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary When the official Helm chart is deployed with an external ClickHouse and `clickhouse.external.existingSecret` set — the documented path for not committing secrets to `values.yaml` — the webapp pod crash-loops on startup: ``` goose run: parse "http://default:${CLICKHOUSE_PASSWORD}@:8123?secure=false": net/url: invalid userinfo ``` Context in vouch request #3443. Re-opening in draft status per bot policy (previous attempt was #3445, closed by automation because it wasn't draft; no changes to the patch). ## Root cause Two pieces interact: 1. `hosting/k8s/helm/templates/_helpers.tpl` renders `CLICKHOUSE_URL` (and `RUN_REPLICATION_CLICKHOUSE_URL`) with a shell-style literal `${CLICKHOUSE_PASSWORD}` expecting bash expansion at container start. 2. `docker/scripts/entrypoint.sh` does `export GOOSE_DBSTRING="$CLICKHOUSE_URL"` — single-pass POSIX sh substitution, so the inner `${...}` survives as literal text and goose rejects it. Reproduces against the latest published chart (`oci://ghcr.io/triggerdotdev/charts/trigger:4.0.5`) and `main`. ## Fix Switch the two helpers (external + `existingSecret` branch) from shell-style `${CLICKHOUSE_PASSWORD}` to Kubernetes' `$(CLICKHOUSE_PASSWORD)`. 
Kubelet substitutes `$(VAR)` at pod-creation time from earlier env entries, and the chart already declares `CLICKHOUSE_PASSWORD` from the Secret immediately before `CLICKHOUSE_URL`, so the URL reaches the entrypoint with the real password already inlined. No entrypoint change, no image change. The plain-password branch (no `existingSecret`) is unchanged. Operator caveat added as template comments: `CLICKHOUSE_PASSWORD` must be URL-userinfo-safe since kubelet substitutes verbatim without percent-encoding. Hex-encoded passwords (e.g. `openssl rand -hex 32`) are safe by construction. ## Verification - `helm template` against `external.existingSecret` now renders `value: "http://default:$(CLICKHOUSE_PASSWORD)@:8123?secure=false"` (was `${CLICKHOUSE_PASSWORD}`). - `helm template` against the plain-password branch is byte-identical to before. - Deployed end-to-end on a staging EKS cluster (Meistrari platform): webapp container reaches `goose: successfully migrated database to version: 6`, Node.js ClickHouse client connects at runtime. ## Alternatives considered - **Change `entrypoint.sh`** to `eval` / `envsubst` the URL — larger surface, touches every deployment mode (Docker Compose + k8s) and every container image. - **Mirror the Postgres pattern** (chart reads the full URL via `valueFrom.secretKeyRef`, as in `trigger-v4.postgres.useSecretUrl`) — cleaner long-term but requires a new `values.yaml` field and a migration path for existing users. Happy to follow up with that as a separate PR if the minimal fix here isn't the preferred direction. ## Changeset None added — the Helm chart isn't versioned through `@changesets/cli` (docs/chart-only PRs historically merge without a changeset, e.g. #2671). Happy to add one if the policy changed. Closes #3443. 
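For illustration, the env block the chart renders ends up roughly as follows (secret and host names here are hypothetical):

```yaml
env:
  - name: CLICKHOUSE_PASSWORD        # declared first, read from the Secret
    valueFrom:
      secretKeyRef:
        name: my-clickhouse-secret   # hypothetical secret name
        key: password
  - name: CLICKHOUSE_URL             # kubelet expands $(CLICKHOUSE_PASSWORD) from the entry above
    value: "http://default:$(CLICKHOUSE_PASSWORD)@clickhouse.example.com:8123?secure=false"
```

Kubernetes leaves an unresolvable `$(VAR)` reference untouched, so the declaration order above is load-bearing.
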
--- hosting/k8s/helm/templates/_helpers.tpl | 20 ++++++++++++++++++-- 1 file changed, 18 insertions(+), 2 deletions(-) diff --git a/hosting/k8s/helm/templates/_helpers.tpl b/hosting/k8s/helm/templates/_helpers.tpl index 09901518086..8615a34e0ae 100644 --- a/hosting/k8s/helm/templates/_helpers.tpl +++ b/hosting/k8s/helm/templates/_helpers.tpl @@ -400,6 +400,19 @@ ClickHouse hostname {{/* ClickHouse URL for application (with secure parameter) + +Note on the external+existingSecret branch: the password is expanded via +Kubernetes' `$(VAR)` syntax, not shell `${VAR}`. Kubelet substitutes +`$(CLICKHOUSE_PASSWORD)` at container-creation time from the +CLICKHOUSE_PASSWORD env var declared just before CLICKHOUSE_URL in +webapp.yaml. Shell-style `${...}` does not work here because +`docker/scripts/entrypoint.sh` assigns CLICKHOUSE_URL to GOOSE_DBSTRING +with a single-pass expansion (`export GOOSE_DBSTRING="$CLICKHOUSE_URL"`), +so any inner `${...}` reaches goose verbatim and fails URL parsing. + +CLICKHOUSE_PASSWORD must contain only URL-userinfo-safe characters — the +value is substituted verbatim, so `@ : / ? # [ ] %` break the URL. Use a +hex-encoded password or percent-encode before storing in the Secret. 
*/}} {{- define "trigger-v4.clickhouse.url" -}} {{- if .Values.clickhouse.deploy -}} @@ -410,7 +423,7 @@ ClickHouse URL for application (with secure parameter) {{- $protocol := ternary "https" "http" .Values.clickhouse.external.secure -}} {{- $secure := ternary "true" "false" .Values.clickhouse.external.secure -}} {{- if .Values.clickhouse.external.existingSecret -}} -{{ $protocol }}://{{ .Values.clickhouse.external.username }}:${CLICKHOUSE_PASSWORD}@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }}?secure={{ $secure }} +{{ $protocol }}://{{ .Values.clickhouse.external.username }}:$(CLICKHOUSE_PASSWORD)@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }}?secure={{ $secure }} {{- else -}} {{ $protocol }}://{{ .Values.clickhouse.external.username }}:{{ .Values.clickhouse.external.password }}@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }}?secure={{ $secure }} {{- end -}} @@ -419,6 +432,9 @@ ClickHouse URL for application (with secure parameter) {{/* ClickHouse URL for replication (without secure parameter) + +See the note on clickhouse.url above — same `$(VAR)` vs `${VAR}` rationale +applies to the replication URL. 
*/}} {{- define "trigger-v4.clickhouse.replication.url" -}} {{- if .Values.clickhouse.deploy -}} @@ -427,7 +443,7 @@ ClickHouse URL for replication (without secure parameter) {{- else if .Values.clickhouse.external.host -}} {{- $protocol := ternary "https" "http" .Values.clickhouse.external.secure -}} {{- if .Values.clickhouse.external.existingSecret -}} -{{ $protocol }}://{{ .Values.clickhouse.external.username }}:${CLICKHOUSE_PASSWORD}@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }} +{{ $protocol }}://{{ .Values.clickhouse.external.username }}:$(CLICKHOUSE_PASSWORD)@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }} {{- else -}} {{ $protocol }}://{{ .Values.clickhouse.external.username }}:{{ .Values.clickhouse.external.password }}@{{ .Values.clickhouse.external.host }}:{{ .Values.clickhouse.external.httpPort | default 8123 }} {{- end -}} From 9e99c81f645dbd65d18bb4230b35874096c3cb40 Mon Sep 17 00:00:00 2001 From: nicktrn <55853254+nicktrn@users.noreply.github.com> Date: Tue, 28 Apr 2026 09:54:16 +0100 Subject: [PATCH 3/8] ci: skip privileged PR jobs on fork PRs (#3458) Fork PRs can't access org secrets or push to GHCR, so these two `pull_request` jobs hard-fail with no path to passing: - `claude-md-audit` - needs `CLAUDE_CODE_OAUTH_TOKEN` - `helm-pr-prerelease` `prerelease` job - needs `packages: write` to push the chart Hit this on #3449. Approving the run didn't help; the jobs ran and failed at the privileged step. The chart-validation `lint-and-test` job is fork-safe and stays untouched - that remains the merge gate for Helm changes. Gate both jobs on same-repo head: ```yaml if: github.event.pull_request.head.repo.full_name == github.repository ``` Other PR workflows already handle forks fine: `pr_checks` (typecheck/units/e2e/sdk-compat) falls back to anonymous DockerHub pulls when secrets are missing. 
--- .github/workflows/claude-md-audit.yml | 4 +++- .github/workflows/helm-pr-prerelease.yml | 1 + 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/.github/workflows/claude-md-audit.yml b/.github/workflows/claude-md-audit.yml index ddba0180401..c03179d4dfd 100644 --- a/.github/workflows/claude-md-audit.yml +++ b/.github/workflows/claude-md-audit.yml @@ -16,7 +16,9 @@ concurrency: jobs: audit: - if: github.event.pull_request.draft == false + if: >- + github.event.pull_request.draft == false && + github.event.pull_request.head.repo.full_name == github.repository runs-on: ubuntu-latest permissions: contents: read diff --git a/.github/workflows/helm-pr-prerelease.yml b/.github/workflows/helm-pr-prerelease.yml index 8df045945e6..f5bbfebde8d 100644 --- a/.github/workflows/helm-pr-prerelease.yml +++ b/.github/workflows/helm-pr-prerelease.yml @@ -54,6 +54,7 @@ jobs: prerelease: needs: lint-and-test + if: github.event.pull_request.head.repo.full_name == github.repository runs-on: ubuntu-latest permissions: contents: read From 91fd8a8a039ffdea80c7159c1f733614ea7aef20 Mon Sep 17 00:00:00 2001 From: nicktrn <55853254+nicktrn@users.noreply.github.com> Date: Tue, 28 Apr 2026 10:22:44 +0100 Subject: [PATCH 4/8] chore(security): close dependabot alerts q2 (#3456) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Closes ~80 dependabot alerts (3 critical, ~25 high, ~31 medium) by bumping direct deps where possible and narrowly overriding the rest. Cloud uses `resend` email transport and Node 20 - all bumps are safe for both cloud and self-hosters. 
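As a shape reminder before the tables: each scoped override below is a bounded entry in the root `package.json` (nested per pnpm's `pnpm.overrides` convention), e.g.:

```json
"pnpm": {
  "overrides": {
    "protobufjs@>=7 <7.5.5": "^7.5.5"
  }
}
```
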
## Direct upgrades | Package | Where | From | To | Why | |---|---|---|---|---| | `vite` | root devDeps | ^5.4.21 | *(removed)* | dead pin; vitest pulls vite transitively | | `dompurify` | apps/webapp | ^3.2.6 | ^3.4.1 | XSS CVEs | | `effect` | apps/webapp | ^3.11.7 | ^3.21.2 | AsyncLocalStorage CVE in Effect fibers | | `nodemailer` | internal-packages/emails | ^7.0.11 | ^8.0.6 | SMTP CRLF injection (only affects self-hosters w/ smtp/aws-ses transport) | | `uuid` | apps/webapp | ^9.0.0 | ^14.0.0 | buffer bounds check; ESM-only but bundled by Remix | | `uuid` + `@types/uuid` | packages/trigger-sdk | ^9.0.0 | *(removed)* | dead deps, no usage | | `@types/uuid` | apps/webapp | ^9.0.0 | *(removed)* | uuid 14 ships its own types | | `tar` | packages/cli-v3 | ^7.5.4 | ^7.5.13 | path traversal CVEs | | `testcontainers` + `@testcontainers/postgresql` + `@testcontainers/redis` | internal-packages/testcontainers | ^10.28.0 | ^11.14.0 | dev/test cleanup; one-line API fix for `RedisContainer(image)` | | `rimraf` | webapp + 6 packages | ^3.0.2 / ^5.0.7 | ^6.0.1 | dev/build tool consolidation | ## Scoped overrides All bound by both `>=` and `<` to avoid major-version yanks. 
| Override | Closes | |---|---| | `tar@>=7 <7.5.11` → `^7.5.11` | supervisor's `@kubernetes/client-node 1.0.0` chain | | `axios@>=1.0.0 <1.15.0` → `^1.15.0` | replaces older 1.9.0 pin | | `systeminformation@>=5.0.0 <5.31.0` → `^5.31.0` | bumps existing 5.27.14 pin | | `lodash@>=4.0.0 <4.18.0` → `^4.18.0` | bumps existing 4.17.23 pin | | `lodash-es@>=4.0.0 <4.18.0` → `^4.18.0` | new (mirrors lodash) | | `dompurify@>=3 <3.4.0` → `^3.4.1` | catches transitive dompurify via mermaid | | `vite@>=5.0.0 <6.4.2` → `^6.4.2` | path traversal; vite 5 has no patch | | `rollup@>=4 <4.59.0` → `^4.59.0` | path traversal in vite/vitest chain | | `flatted@>=3 <3.4.2` → `^3.4.2` | prototype pollution in eslint flat-cache | | `picomatch@>=2 <2.3.2` → `^2.3.2` | ReDoS in 2.x branch (transitive) | | `picomatch@>=4 <4.0.4` → `^4.0.4` | ReDoS in 4.x branch (vitest/tinyglobby) | | `minimatch@>=3 <3.1.3` → `^3.1.3` | ReDoS in eslint 8 chain | | `protobufjs@>=7 <7.5.5` → `^7.5.5` | **critical** RCE via @opentelemetry/otlp-transformer | | `fast-xml-parser@>=4 <4.5.5` → `^4.5.5` | DOCTYPE bypass + others (4.x branch via aws-sdk in supervisor) | | `fast-xml-parser@>=5 <5.7.0` → `^5.7.0` | **critical** + others (5.x branch via aws-sdk in webapp) | | `path-to-regexp@>=0.1 <0.1.13` → `^0.1.13` | ReDoS in express 4 / @remix-run/express | | `ajv@>=8 <8.18.0` → `^8.18.0` | DoS | | `socket.io-parser@>=4 <4.2.6` → `^4.2.6` | DoS in @trigger.dev/core's socket.io | | `postcss@>=8 <8.5.10` → `^8.5.10` | XSS via stringify | | `yaml@>=2 <2.8.3` → `^2.8.3` | DoS | | `semver@>=5 <5.7.2` → `^5.7.2` | ReDoS in 5.x | | `defu@>=6 <6.1.5` → `^6.1.5` | prototype pollution via __proto__ in @prisma/config c12 chain | ## Dismissed (~47) | Reason | Cluster | Count | |---|---|---| | `not_used` | langsmith + next 15.x in references/* | 10 | | `not_used` | minimatch 8.x via prisma-generator-ts-enums (references/prisma-6) | 3 | | `not_used` | basic-ftp via puppeteer in references/hello-world + references/seed | 2 | | 
`not_used` | hono / @hono/node-server / express-rate-limit / path-to-regexp 8.x / @modelcontextprotocol/sdk - all via mcp-sdk chain (dormant in webapp; dev-only localhost in cli-v3) | 22 | | `not_used` | fastify / @fastify/static / file-type via evalite devDep | 5 | | `tolerable_risk` | rollup 3 + minimatch 5/8/9/10 dev/build tooling | 13 | ## Notes - **mcp-sdk chain**: `@vercel/sdk` in webapp imports `Vercel` API client only; `mcp-server/*` subpath isn't loaded at runtime. cli-v3's MCP server runs only via `trigger mcp` on developer machines. Bumping `@modelcontextprotocol/sdk` to latest (1.29.0) wouldn't close these alerts anyway - it ships hono ^4.11.4 which is still vulnerable - so dismissal is the cleaner call. - **References ignore list**: confirmed with current dependabot ignore config; added `references/seed/package.json` (only gap). - **undici** alerts (CVE-2026-1527, 4 alerts) will auto-close: lockfile already at 6.25.0 > patched 6.24.0; just needs Dependabot rescan. - **Effect 3.20 fix** is a runtime-only scheduler fix, no public API changes - verified with research agent against our four `effect/*` imports. - **uuid 14** is ESM-only; we only call `validate`/`version` (no crypto needed) so Node 20 requirement isn't load-bearing for us. ## Public packages (`packages/*`) Minimal surface, deliberately. None of these change published runtime behaviour - all changesets-worthy public package changes are deferred to a regular release pass. 
| Package | Change | Runtime impact | |---|---|---| | `packages/trigger-sdk` | Removed dead `uuid` dep (no source imports) | None - dep was unused | | `packages/cli-v3` | `tar` ^7.5.4 → ^7.5.13 | Patch bump within already-allowed 7.x range; nothing CLI consumers see | | `packages/core` / `packages/build` / `packages/python` / `packages/rsc` / `packages/react-hooks` / `packages/schema-to-json` | `rimraf` ^3.0.2 → ^6.0.1 in devDeps | Build-time only, no runtime change | No changeset added because nothing in these packages affects what published consumers run. ## Validation - Webapp typecheck (forced, no cache) passes after every commit - Smoke-tested testcontainers v11 changes via real `postgresTest` + `redisTest` (sync.test.ts, releaseConcurrency.test.ts) - both pass - Webapp built + verified `require("uuid")` no longer in CJS server output (now bundled inline) - Test env webapp deployed at `dependabot-q2.rc0` (cloud#740) - no issues observed - Test suite run with package prerelease passed --- apps/webapp/package.json | 10 +- apps/webapp/remix.config.js | 1 + internal-packages/emails/package.json | 4 +- internal-packages/otlp-importer/package.json | 2 +- internal-packages/testcontainers/package.json | 6 +- internal-packages/testcontainers/src/utils.ts | 10 +- package.json | 27 +- packages/cli-v3/package.json | 4 +- packages/core/package.json | 2 +- packages/react-hooks/package.json | 2 +- packages/rsc/package.json | 2 +- packages/trigger-sdk/package.json | 4 +- pnpm-lock.yaml | 1642 ++++++++--------- 13 files changed, 791 insertions(+), 925 deletions(-) diff --git a/apps/webapp/package.json b/apps/webapp/package.json index 007c9f39350..0880eb71037 100644 --- a/apps/webapp/package.json +++ b/apps/webapp/package.json @@ -147,9 +147,9 @@ "cross-env": "^7.0.3", "cuid": "^2.1.8", "date-fns": "^4.1.0", - "dompurify": "^3.2.6", + "dompurify": "^3.4.1", "dotenv": "^16.4.5", - "effect": "^3.11.7", + "effect": "^3.21.2", "emails": "workspace:*", "eventsource": "^4.0.0", 
"evt": "^2.4.13", @@ -227,7 +227,7 @@ "tiny-invariant": "^1.2.0", "ulid": "^2.3.0", "ulidx": "^2.2.1", - "uuid": "^9.0.0", + "uuid": "^14.0.0", "ws": "^8.11.0", "zod": "3.25.76", "zod-error": "1.5.0", @@ -249,7 +249,6 @@ "@types/bcryptjs": "^2.4.2", "@types/compression": "^1.7.2", "@types/cookie": "^0.6.0", - "@types/dompurify": "^3.2.0", "@types/eslint": "^8.4.6", "@types/express": "^4.17.13", "@types/humanize-duration": "^3.27.1", @@ -270,7 +269,6 @@ "@types/slug": "^5.0.3", "@types/supertest": "^6.0.2", "@types/tar": "^6.1.4", - "@types/uuid": "^9.0.0", "@types/ws": "^8.5.3", "@typescript-eslint/eslint-plugin": "^5.59.6", "@typescript-eslint/parser": "^5.59.6", @@ -292,7 +290,7 @@ "prettier": "^2.8.8", "prettier-plugin-tailwindcss": "^0.3.0", "prop-types": "^15.8.1", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "style-loader": "^3.3.4", "supertest": "^7.0.0", "tailwind-scrollbar": "^3.0.1", diff --git a/apps/webapp/remix.config.js b/apps/webapp/remix.config.js index a4ad1bd228e..130c1591962 100644 --- a/apps/webapp/remix.config.js +++ b/apps/webapp/remix.config.js @@ -31,6 +31,7 @@ module.exports = { "parse-duration", "uncrypto", "std-env", + "uuid", ], browserNodeBuiltinsPolyfill: { modules: { diff --git a/internal-packages/emails/package.json b/internal-packages/emails/package.json index 68b01563b81..65bc33e9d42 100644 --- a/internal-packages/emails/package.json +++ b/internal-packages/emails/package.json @@ -13,7 +13,7 @@ "@aws-sdk/client-sesv2": "^3.716.0", "@react-email/components": "0.0.16", "@react-email/render": "^0.0.12", - "nodemailer": "^7.0.11", + "nodemailer": "^8.0.6", "react": "^18.2.0", "react-email": "^2.1.1", "resend": "^3.2.0", @@ -21,7 +21,7 @@ "zod": "3.25.76" }, "devDependencies": { - "@types/nodemailer": "^7.0.4", + "@types/nodemailer": "^8.0.0", "@types/react": "18.2.69" }, "engines": { diff --git a/internal-packages/otlp-importer/package.json b/internal-packages/otlp-importer/package.json index 72e46c2f9d3..6f5cd39665f 100644 --- 
a/internal-packages/otlp-importer/package.json +++ b/internal-packages/otlp-importer/package.json @@ -28,7 +28,7 @@ }, "devDependencies": { "@types/node": "^20", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "ts-proto": "^1.167.3" }, "engines": { diff --git a/internal-packages/testcontainers/package.json b/internal-packages/testcontainers/package.json index 0d70ac6a3c2..104f982cc28 100644 --- a/internal-packages/testcontainers/package.json +++ b/internal-packages/testcontainers/package.json @@ -15,11 +15,11 @@ "ioredis": "^5.3.2" }, "devDependencies": { - "@testcontainers/postgresql": "^10.28.0", - "@testcontainers/redis": "^10.28.0", + "@testcontainers/postgresql": "^11.14.0", + "@testcontainers/redis": "^11.14.0", "@trigger.dev/core": "workspace:*", "std-env": "^3.9.0", - "testcontainers": "^10.28.0", + "testcontainers": "^11.14.0", "tinyexec": "^0.3.0" }, "scripts": { diff --git a/internal-packages/testcontainers/src/utils.ts b/internal-packages/testcontainers/src/utils.ts index ea344e63f65..b3f69f77d0a 100644 --- a/internal-packages/testcontainers/src/utils.ts +++ b/internal-packages/testcontainers/src/utils.ts @@ -75,7 +75,9 @@ export async function createRedisContainer({ port?: number; network?: StartedNetwork; }) { - let container = new RedisContainer().withExposedPorts(port ?? 6379).withStartupTimeout(120_000); // 2 minutes + let container = new RedisContainer("redis:7.2") + .withExposedPorts(port ?? 
6379) + .withStartupTimeout(120_000); // 2 minutes if (network) { container = container.withNetwork(network).withNetworkAliases("redis"); @@ -97,7 +99,7 @@ export async function createRedisContainer({ const [error] = await tryCatch(verifyRedisConnection(startedContainer)); if (error) { - await startedContainer.stop({ timeout: 30 }); + await startedContainer.stop({ timeout: 30_000 }); throw new Error("verifyRedisConnection error", { cause: error }); } @@ -236,7 +238,7 @@ export async function useContainer( metadata.useDurationMs = useDurationMs; } finally { // WARNING: Testcontainers by default will not wait until the container has stopped. It will simply issue the stop command and return immediately. - // If you need to wait for the container to be stopped, you can provide a timeout. The unit of timeout option here is second - await logCleanup(name, container.stop({ timeout: 10 }), metadata); + // If you need to wait for the container to be stopped, you can provide a timeout. The unit of timeout option here is milliseconds (changed from seconds in testcontainers v11) + await logCleanup(name, container.stop({ timeout: 10_000 }), metadata); } } diff --git a/package.json b/package.json index ce34f5bad27..ac4290e9236 100644 --- a/package.json +++ b/package.json @@ -62,7 +62,6 @@ "tsx": "^3.7.1", "turbo": "^1.10.3", "typescript": "5.5.4", - "vite": "^5.4.21", "vite-tsconfig-paths": "^4.0.5", "vitest": "3.1.4" }, @@ -90,17 +89,35 @@ "@types/node": "20.14.14", "express@^4>body-parser": "1.20.3", "@remix-run/dev@2.17.4>tar-fs": "2.1.4", - "testcontainers@10.28.0>tar-fs": "3.1.1", + "tar@>=7 <7.5.11": "^7.5.11", "form-data@^2": "2.5.4", "form-data@^3": "3.0.4", "form-data@^4": "4.0.4", - "axios@1.9.0": ">=1.12.0", + "axios@>=1.0.0 <1.15.0": "^1.15.0", "js-yaml@>=3.0.0 <3.14.2": "3.14.2", "js-yaml@>=4.0.0 <4.1.1": "4.1.1", "jws@<3.2.3": "3.2.3", "qs@>=6.0.0 <6.14.1": "6.14.1", - "systeminformation@>=5.0.0 <5.27.14": "5.27.14", - "lodash@>=4.0.0 <4.17.23": "4.17.23" + 
"systeminformation@>=5.0.0 <5.31.0": "^5.31.0", + "lodash@>=4.17 <4.18.0": "^4.18.0", + "lodash-es@>=4.17 <4.18.0": "^4.18.0", + "dompurify@>=3 <3.4.0": "^3.4.1", + "vite@>=5.0.0 <6.4.2": "^6.4.2", + "rollup@>=4 <4.59.0": "^4.59.0", + "flatted@>=3 <3.4.2": "^3.4.2", + "picomatch@>=2 <2.3.2": "^2.3.2", + "picomatch@>=4 <4.0.4": "^4.0.4", + "minimatch@>=3 <3.1.3": "^3.1.3", + "protobufjs@>=7 <7.5.5": "^7.5.5", + "fast-xml-parser@>=4 <4.5.5": "^4.5.5", + "fast-xml-parser@>=5 <5.7.0": "^5.7.0", + "path-to-regexp@>=0.1 <0.1.13": "^0.1.13", + "ajv@>=8 <8.18.0": "^8.18.0", + "socket.io-parser@>=4 <4.2.6": "^4.2.6", + "postcss@>=8 <8.5.10": "^8.5.10", + "yaml@>=2 <2.8.3": "^2.8.3", + "semver@>=5 <5.7.2": "^5.7.2", + "defu@>=6 <6.1.5": "^6.1.5" }, "onlyBuiltDependencies": [ "@depot/cli", diff --git a/packages/cli-v3/package.json b/packages/cli-v3/package.json index 24cc211535d..44047ac1da6 100644 --- a/packages/cli-v3/package.json +++ b/packages/cli-v3/package.json @@ -64,7 +64,7 @@ "cpy-cli": "^5.0.0", "execa": "^8.0.1", "find-up": "^7.0.0", - "rimraf": "^5.0.7", + "rimraf": "^6.0.1", "ts-essentials": "10.0.1", "tshy": "^3.0.2", "tsx": "4.17.0" @@ -140,7 +140,7 @@ "std-env": "^3.7.0", "strip-ansi": "^7.1.0", "supports-color": "^10.0.0", - "tar": "^7.5.4", + "tar": "^7.5.13", "tiny-invariant": "^1.2.0", "tinyexec": "^0.3.1", "tinyglobby": "^0.2.10", diff --git a/packages/core/package.json b/packages/core/package.json index 35e60bd7c89..8c2a5b2143d 100644 --- a/packages/core/package.json +++ b/packages/core/package.json @@ -215,7 +215,7 @@ "ai": "^6.0.0", "defu": "^6.1.4", "esbuild": "^0.23.0", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "superjson": "^2.2.1", "ts-essentials": "10.0.1", "tshy": "^3.0.2", diff --git a/packages/react-hooks/package.json b/packages/react-hooks/package.json index 99a6952537a..837ddaf7cbb 100644 --- a/packages/react-hooks/package.json +++ b/packages/react-hooks/package.json @@ -44,7 +44,7 @@ "@arethetypeswrong/cli": "^0.15.4", "@types/react": "*", 
"@types/react-dom": "*", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "tshy": "^3.0.2", "tsx": "4.17.0" }, diff --git a/packages/rsc/package.json b/packages/rsc/package.json index 17018123bf2..9c9ff4fd486 100644 --- a/packages/rsc/package.json +++ b/packages/rsc/package.json @@ -48,7 +48,7 @@ "@types/node": "^20.14.14", "@types/react": "*", "@types/react-dom": "*", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "tshy": "^3.0.2", "tsx": "4.17.0" }, diff --git a/packages/trigger-sdk/package.json b/packages/trigger-sdk/package.json index cd38b7d2300..751244365cd 100644 --- a/packages/trigger-sdk/package.json +++ b/packages/trigger-sdk/package.json @@ -60,18 +60,16 @@ "slug": "^6.0.0", "ulid": "^2.3.0", "uncrypto": "^0.1.3", - "uuid": "^9.0.0", "ws": "^8.11.0" }, "devDependencies": { "@arethetypeswrong/cli": "^0.15.4", "@types/debug": "^4.1.7", "@types/slug": "^5.0.3", - "@types/uuid": "^9.0.0", "@types/ws": "^8.5.3", "ai": "^6.0.0", "encoding": "^0.1.13", - "rimraf": "^3.0.2", + "rimraf": "^6.0.1", "tshy": "^3.0.2", "tsx": "4.17.0", "typed-emitter": "^2.1.0", diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index f3631a68b63..2ab379c6a8c 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -9,17 +9,35 @@ overrides: '@types/node': 20.14.14 express@^4>body-parser: 1.20.3 '@remix-run/dev@2.17.4>tar-fs': 2.1.4 - testcontainers@10.28.0>tar-fs: 3.1.1 + tar@>=7 <7.5.11: ^7.5.11 form-data@^2: 2.5.4 form-data@^3: 3.0.4 form-data@^4: 4.0.4 - axios@1.9.0: '>=1.12.0' + axios@>=1.0.0 <1.15.0: ^1.15.0 js-yaml@>=3.0.0 <3.14.2: 3.14.2 js-yaml@>=4.0.0 <4.1.1: 4.1.1 jws@<3.2.3: 3.2.3 qs@>=6.0.0 <6.14.1: 6.14.1 - systeminformation@>=5.0.0 <5.27.14: 5.27.14 - lodash@>=4.0.0 <4.17.23: 4.17.23 + systeminformation@>=5.0.0 <5.31.0: ^5.31.0 + lodash@>=4.17 <4.18.0: ^4.18.0 + lodash-es@>=4.17 <4.18.0: ^4.18.0 + dompurify@>=3 <3.4.0: ^3.4.1 + vite@>=5.0.0 <6.4.2: ^6.4.2 + rollup@>=4 <4.59.0: ^4.59.0 + flatted@>=3 <3.4.2: ^3.4.2 + picomatch@>=2 <2.3.2: ^2.3.2 + picomatch@>=4 <4.0.4: ^4.0.4 + 
minimatch@>=3 <3.1.3: ^3.1.3 + protobufjs@>=7 <7.5.5: ^7.5.5 + fast-xml-parser@>=4 <4.5.5: ^4.5.5 + fast-xml-parser@>=5 <5.7.0: ^5.7.0 + path-to-regexp@>=0.1 <0.1.13: ^0.1.13 + ajv@>=8 <8.18.0: ^8.18.0 + socket.io-parser@>=4 <4.2.6: ^4.2.6 + postcss@>=8 <8.5.10: ^8.5.10 + yaml@>=2 <2.8.3: ^2.8.3 + semver@>=5 <5.7.2: ^5.7.2 + defu@>=6 <6.1.5: ^6.1.5 patchedDependencies: '@changesets/assemble-release-plan@5.2.4': @@ -81,10 +99,10 @@ importers: version: 20.14.14 '@vitest/coverage-v8': specifier: 3.1.4 - version: 3.1.4(vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1)) + version: 3.1.4(vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3)) autoprefixer: specifier: ^10.4.12 - version: 10.4.13(postcss@8.5.6) + version: 10.4.13(postcss@8.5.10) eslint-plugin-turbo: specifier: ^2.0.4 version: 2.0.5(eslint@8.31.0) @@ -106,15 +124,12 @@ importers: typescript: specifier: 5.5.4 version: 5.5.4 - vite: - specifier: ^5.4.21 - version: 5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) vite-tsconfig-paths: specifier: ^4.0.5 version: 4.0.5(typescript@5.5.4) vitest: specifier: 3.1.4 - version: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + version: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) apps/coordinator: dependencies: @@ -585,14 +600,14 @@ importers: specifier: ^4.1.0 version: 4.1.0 dompurify: - specifier: ^3.2.6 - version: 3.2.6 + specifier: ^3.4.1 + version: 3.4.1 dotenv: specifier: ^16.4.5 version: 16.4.5 effect: - specifier: ^3.11.7 - version: 3.11.7 + specifier: ^3.21.2 + version: 3.21.2 emails: specifier: workspace:* version: link:../../internal-packages/emails @@ -825,8 +840,8 @@ importers: specifier: ^2.2.1 version: 2.2.1 uuid: - specifier: ^9.0.0 - version: 9.0.1 + specifier: ^14.0.0 + version: 14.0.0 ws: specifier: ^8.11.0 
version: 8.12.0(bufferutil@4.0.9) @@ -851,7 +866,7 @@ importers: version: link:../../internal-packages/testcontainers '@remix-run/dev': specifier: 2.17.4 - version: 2.17.4(@remix-run/react@2.17.4(react-dom@18.2.0(react@18.2.0))(react@18.2.0)(typescript@5.5.4))(@remix-run/serve@2.17.4(typescript@5.5.4))(@types/node@20.14.14)(bufferutil@4.0.9)(lightningcss@1.29.2)(terser@5.44.1)(typescript@5.5.4)(vite@5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1)) + version: 2.17.4(@remix-run/react@2.17.4(react-dom@18.2.0(react@18.2.0))(react@18.2.0)(typescript@5.5.4))(@remix-run/serve@2.17.4(typescript@5.5.4))(@types/node@20.14.14)(bufferutil@4.0.9)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(typescript@5.5.4)(vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3))(yaml@2.8.3) '@remix-run/eslint-config': specifier: 2.17.4 version: 2.17.4(eslint@8.31.0)(react@18.2.0)(typescript@5.5.4) @@ -885,9 +900,6 @@ importers: '@types/cookie': specifier: ^0.6.0 version: 0.6.0 - '@types/dompurify': - specifier: ^3.2.0 - version: 3.2.0 '@types/eslint': specifier: ^8.4.6 version: 8.4.10 @@ -948,9 +960,6 @@ importers: '@types/tar': specifier: ^6.1.4 version: 6.1.4 - '@types/uuid': - specifier: ^9.0.0 - version: 9.0.0 '@types/ws': specifier: ^8.5.3 version: 8.5.4 @@ -965,7 +974,7 @@ importers: version: 0.0.130(encoding@0.1.13)(ws@8.12.0(bufferutil@4.0.9)) autoprefixer: specifier: ^10.4.13 - version: 10.4.13(postcss@8.5.6) + version: 10.4.13(postcss@8.5.10) css-loader: specifier: ^6.10.0 version: 6.10.0(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)) @@ -1001,10 +1010,10 @@ importers: version: 4.1.5 postcss-import: specifier: ^16.0.1 - version: 16.0.1(postcss@8.5.6) + version: 16.0.1(postcss@8.5.10) postcss-loader: specifier: ^8.1.1 - version: 8.1.1(postcss@8.5.6)(typescript@5.5.4)(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)) + version: 
8.1.1(postcss@8.5.10)(typescript@5.5.4)(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)) prettier: specifier: ^2.8.8 version: 2.8.8 @@ -1015,8 +1024,8 @@ importers: specifier: ^15.8.1 version: 15.8.1 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 style-loader: specifier: ^3.3.4 version: 3.3.4(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)) @@ -1131,14 +1140,14 @@ importers: specifier: ^0.0.12 version: 0.0.12 nodemailer: - specifier: ^7.0.11 - version: 7.0.11 + specifier: ^8.0.6 + version: 8.0.6 react: specifier: ^18.2.0 version: 18.3.1 react-email: specifier: ^2.1.1 - version: 2.1.2(@opentelemetry/api@1.9.0)(@swc/helpers@0.5.15)(bufferutil@4.0.9)(eslint@8.31.0) + version: 2.1.2(@opentelemetry/api@1.9.0)(@swc/helpers@0.5.15)(eslint@8.31.0) resend: specifier: ^3.2.0 version: 3.2.0 @@ -1150,8 +1159,8 @@ importers: version: 3.25.76 devDependencies: '@types/nodemailer': - specifier: ^7.0.4 - version: 7.0.4 + specifier: ^8.0.0 + version: 8.0.0 '@types/react': specifier: 18.2.69 version: 18.2.69 @@ -1170,7 +1179,7 @@ importers: version: link:../testcontainers vitest: specifier: 3.1.4 - version: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + version: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) internal-packages/otlp-importer: dependencies: @@ -1178,15 +1187,15 @@ importers: specifier: ^5.2.3 version: 5.2.3 protobufjs: - specifier: ^7.2.6 - version: 7.3.2 + specifier: ^7.5.5 + version: 7.5.5 devDependencies: '@types/node': specifier: 20.14.14 version: 20.14.14 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 ts-proto: specifier: ^1.167.3 version: 1.167.3 @@ -1331,7 +1340,7 @@ importers: version: 5.5.4 vitest: specifier: 3.1.4 - version: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + version: 
3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) internal-packages/testcontainers: dependencies: @@ -1349,11 +1358,11 @@ importers: version: 5.3.2 devDependencies: '@testcontainers/postgresql': - specifier: ^10.28.0 - version: 10.28.0 + specifier: ^11.14.0 + version: 11.14.0 '@testcontainers/redis': - specifier: ^10.28.0 - version: 10.28.0 + specifier: ^11.14.0 + version: 11.14.0 '@trigger.dev/core': specifier: workspace:* version: link:../../packages/core @@ -1361,8 +1370,8 @@ importers: specifier: ^3.9.0 version: 3.9.0 testcontainers: - specifier: ^10.28.0 - version: 10.28.0 + specifier: ^11.14.0 + version: 11.14.0 tinyexec: specifier: ^0.3.0 version: 0.3.0 @@ -1541,8 +1550,8 @@ importers: specifier: ^0.2.2 version: 0.2.2 defu: - specifier: ^6.1.4 - version: 6.1.4 + specifier: ^6.1.5 + version: 6.1.7 dotenv: specifier: ^16.4.5 version: 16.4.5 @@ -1643,8 +1652,8 @@ importers: specifier: ^10.0.0 version: 10.0.0 tar: - specifier: ^7.5.4 - version: 7.5.6 + specifier: ^7.5.13 + version: 7.5.13 tiny-invariant: specifier: ^1.2.0 version: 1.3.1 @@ -1713,8 +1722,8 @@ importers: specifier: ^7.0.0 version: 7.0.0 rimraf: - specifier: ^5.0.7 - version: 5.0.7 + specifier: ^6.0.1 + version: 6.0.1 ts-essentials: specifier: 10.0.1 version: 10.0.1(typescript@5.5.4) @@ -1858,14 +1867,14 @@ importers: specifier: ^6.0.0 version: 6.0.3(zod@3.25.76) defu: - specifier: ^6.1.4 - version: 6.1.4 + specifier: ^6.1.5 + version: 6.1.7 esbuild: specifier: ^0.23.0 version: 0.23.0 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 superjson: specifier: ^2.2.1 version: 2.2.1 @@ -1941,8 +1950,8 @@ importers: specifier: '*' version: 18.2.7 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 tshy: specifier: ^3.0.2 version: 3.0.2 @@ -1994,7 +2003,7 @@ importers: version: 6.0.1 tsup: specifier: ^8.4.0 - version: 
8.4.0(@swc/core@1.3.101(@swc/helpers@0.5.15))(jiti@2.4.2)(postcss@8.5.6)(tsx@4.17.0)(typescript@5.5.4)(yaml@2.7.1) + version: 8.4.0(@swc/core@1.3.101(@swc/helpers@0.5.15))(jiti@2.4.2)(postcss@8.5.10)(tsx@4.17.0)(typescript@5.5.4)(yaml@2.8.3) tsx: specifier: 4.17.0 version: 4.17.0 @@ -2030,8 +2039,8 @@ importers: specifier: '*' version: 18.2.7 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 tshy: specifier: ^3.0.2 version: 3.0.2 @@ -2117,9 +2126,6 @@ importers: uncrypto: specifier: ^0.1.3 version: 0.1.3 - uuid: - specifier: ^9.0.0 - version: 9.0.0 ws: specifier: ^8.11.0 version: 8.12.0(bufferutil@4.0.9) @@ -2133,9 +2139,6 @@ importers: '@types/slug': specifier: ^5.0.3 version: 5.0.3 - '@types/uuid': - specifier: ^9.0.0 - version: 9.0.0 '@types/ws': specifier: ^8.5.3 version: 8.5.4 @@ -2146,8 +2149,8 @@ importers: specifier: ^0.1.13 version: 0.1.13 rimraf: - specifier: ^3.0.2 - version: 3.0.2 + specifier: ^6.0.1 + version: 6.0.1 tshy: specifier: ^3.0.2 version: 3.0.2 @@ -2570,8 +2573,8 @@ importers: specifier: ^18 version: 18.2.7 postcss: - specifier: ^8 - version: 8.4.44 + specifier: ^8.5.10 + version: 8.5.10 tailwindcss: specifier: ^3.4.1 version: 3.4.1 @@ -4284,12 +4287,6 @@ packages: cpu: [ppc64] os: [aix] - '@esbuild/aix-ppc64@0.21.5': - resolution: {integrity: sha512-1SDgH6ZSPTlggy1yI6+Dbkiz8xzpHJEVAlF/AM1tHPLsf5STom9rwtjE4hKAF20FfXXNTFqEYXyJNWh1GiZedQ==} - engines: {node: '>=12'} - cpu: [ppc64] - os: [aix] - '@esbuild/aix-ppc64@0.23.0': resolution: {integrity: sha512-3sG8Zwa5fMcA9bgqB8AfWPQ+HFke6uD3h1s3RIwUNK8EG7a4buxvuFTs3j1IMs2NXAk9F30C/FF4vxRgQCcmoQ==} engines: {node: '>=18'} @@ -4326,12 +4323,6 @@ packages: cpu: [arm64] os: [android] - '@esbuild/android-arm64@0.21.5': - resolution: {integrity: sha512-c0uX9VAUBQ7dTDCjq+wdyGLowMdtR/GoC2U5IYk/7D1H1JYC0qseD7+11iMP2mRLN9RcCMRcjC4YMclCzGwS/A==} - engines: {node: '>=12'} - cpu: [arm64] - os: [android] - '@esbuild/android-arm64@0.23.0': resolution: {integrity: 
sha512-EuHFUYkAVfU4qBdyivULuu03FhJO4IJN9PGuABGrFy4vUuzk91P2d+npxHcFdpUnfYKy0PuV+n6bKIpHOB3prQ==} engines: {node: '>=18'} @@ -4374,12 +4365,6 @@ packages: cpu: [arm] os: [android] - '@esbuild/android-arm@0.21.5': - resolution: {integrity: sha512-vCPvzSjpPHEi1siZdlvAlsPxXl7WbOVUBBAowWug4rJHb68Ox8KualB+1ocNvT5fjv6wpkX6o/iEpbDrf68zcg==} - engines: {node: '>=12'} - cpu: [arm] - os: [android] - '@esbuild/android-arm@0.23.0': resolution: {integrity: sha512-+KuOHTKKyIKgEEqKbGTK8W7mPp+hKinbMBeEnNzjJGyFcWsfrXjSTNluJHCY1RqhxFurdD8uNXQDei7qDlR6+g==} engines: {node: '>=18'} @@ -4416,12 +4401,6 @@ packages: cpu: [x64] os: [android] - '@esbuild/android-x64@0.21.5': - resolution: {integrity: sha512-D7aPRUUNHRBwHxzxRvp856rjUHRFW1SdQATKXH2hqA0kAZb1hKmi02OpYRacl0TxIGz/ZmXWlbZgjwWYaCakTA==} - engines: {node: '>=12'} - cpu: [x64] - os: [android] - '@esbuild/android-x64@0.23.0': resolution: {integrity: sha512-WRrmKidLoKDl56LsbBMhzTTBxrsVwTKdNbKDalbEZr0tcsBgCLbEtoNthOW6PX942YiYq8HzEnb4yWQMLQuipQ==} engines: {node: '>=18'} @@ -4458,12 +4437,6 @@ packages: cpu: [arm64] os: [darwin] - '@esbuild/darwin-arm64@0.21.5': - resolution: {integrity: sha512-DwqXqZyuk5AiWWf3UfLiRDJ5EDd49zg6O9wclZ7kUMv2WRFr4HKjXp/5t8JZ11QbQfUS6/cRCKGwYhtNAY88kQ==} - engines: {node: '>=12'} - cpu: [arm64] - os: [darwin] - '@esbuild/darwin-arm64@0.23.0': resolution: {integrity: sha512-YLntie/IdS31H54Ogdn+v50NuoWF5BDkEUFpiOChVa9UnKpftgwzZRrI4J132ETIi+D8n6xh9IviFV3eXdxfow==} engines: {node: '>=18'} @@ -4500,12 +4473,6 @@ packages: cpu: [x64] os: [darwin] - '@esbuild/darwin-x64@0.21.5': - resolution: {integrity: sha512-se/JjF8NlmKVG4kNIuyWMV/22ZaerB+qaSi5MdrXtd6R08kvs2qCN4C09miupktDitvh8jRFflwGFBQcxZRjbw==} - engines: {node: '>=12'} - cpu: [x64] - os: [darwin] - '@esbuild/darwin-x64@0.23.0': resolution: {integrity: sha512-IMQ6eme4AfznElesHUPDZ+teuGwoRmVuuixu7sv92ZkdQcPbsNHzutd+rAfaBKo8YK3IrBEi9SLLKWJdEvJniQ==} engines: {node: '>=18'} @@ -4542,12 +4509,6 @@ packages: cpu: [arm64] os: [freebsd] - 
'@esbuild/freebsd-arm64@0.21.5': - resolution: {integrity: sha512-5JcRxxRDUJLX8JXp/wcBCy3pENnCgBR9bN6JsY4OmhfUtIHe3ZW0mawA7+RDAcMLrMIZaf03NlQiX9DGyB8h4g==} - engines: {node: '>=12'} - cpu: [arm64] - os: [freebsd] - '@esbuild/freebsd-arm64@0.23.0': resolution: {integrity: sha512-0muYWCng5vqaxobq6LB3YNtevDFSAZGlgtLoAc81PjUfiFz36n4KMpwhtAd4he8ToSI3TGyuhyx5xmiWNYZFyw==} engines: {node: '>=18'} @@ -4584,12 +4545,6 @@ packages: cpu: [x64] os: [freebsd] - '@esbuild/freebsd-x64@0.21.5': - resolution: {integrity: sha512-J95kNBj1zkbMXtHVH29bBriQygMXqoVQOQYA+ISs0/2l3T9/kj42ow2mpqerRBxDJnmkUDCaQT/dfNXWX/ZZCQ==} - engines: {node: '>=12'} - cpu: [x64] - os: [freebsd] - '@esbuild/freebsd-x64@0.23.0': resolution: {integrity: sha512-XKDVu8IsD0/q3foBzsXGt/KjD/yTKBCIwOHE1XwiXmrRwrX6Hbnd5Eqn/WvDekddK21tfszBSrE/WMaZh+1buQ==} engines: {node: '>=18'} @@ -4626,12 +4581,6 @@ packages: cpu: [arm64] os: [linux] - '@esbuild/linux-arm64@0.21.5': - resolution: {integrity: sha512-ibKvmyYzKsBeX8d8I7MH/TMfWDXBF3db4qM6sy+7re0YXya+K1cem3on9XgdT2EQGMu4hQyZhan7TeQ8XkGp4Q==} - engines: {node: '>=12'} - cpu: [arm64] - os: [linux] - '@esbuild/linux-arm64@0.23.0': resolution: {integrity: sha512-j1t5iG8jE7BhonbsEg5d9qOYcVZv/Rv6tghaXM/Ug9xahM0nX/H2gfu6X6z11QRTMT6+aywOMA8TDkhPo8aCGw==} engines: {node: '>=18'} @@ -4668,12 +4617,6 @@ packages: cpu: [arm] os: [linux] - '@esbuild/linux-arm@0.21.5': - resolution: {integrity: sha512-bPb5AHZtbeNGjCKVZ9UGqGwo8EUu4cLq68E95A53KlxAPRmUyYv2D6F0uUI65XisGOL1hBP5mTronbgo+0bFcA==} - engines: {node: '>=12'} - cpu: [arm] - os: [linux] - '@esbuild/linux-arm@0.23.0': resolution: {integrity: sha512-SEELSTEtOFu5LPykzA395Mc+54RMg1EUgXP+iw2SJ72+ooMwVsgfuwXo5Fn0wXNgWZsTVHwY2cg4Vi/bOD88qw==} engines: {node: '>=18'} @@ -4710,12 +4653,6 @@ packages: cpu: [ia32] os: [linux] - '@esbuild/linux-ia32@0.21.5': - resolution: {integrity: sha512-YvjXDqLRqPDl2dvRODYmmhz4rPeVKYvppfGYKSNGdyZkA01046pLWyRKKI3ax8fbJoK5QbxblURkwK/MWY18Tg==} - engines: {node: '>=12'} - cpu: [ia32] - os: [linux] - 
'@esbuild/linux-ia32@0.23.0': resolution: {integrity: sha512-P7O5Tkh2NbgIm2R6x1zGJJsnacDzTFcRWZyTTMgFdVit6E98LTxO+v8LCCLWRvPrjdzXHx9FEOA8oAZPyApWUA==} engines: {node: '>=18'} @@ -4758,12 +4695,6 @@ packages: cpu: [loong64] os: [linux] - '@esbuild/linux-loong64@0.21.5': - resolution: {integrity: sha512-uHf1BmMG8qEvzdrzAqg2SIG/02+4/DHB6a9Kbya0XDvwDEKCoC8ZRWI5JJvNdUjtciBGFQ5PuBlpEOXQj+JQSg==} - engines: {node: '>=12'} - cpu: [loong64] - os: [linux] - '@esbuild/linux-loong64@0.23.0': resolution: {integrity: sha512-InQwepswq6urikQiIC/kkx412fqUZudBO4SYKu0N+tGhXRWUqAx+Q+341tFV6QdBifpjYgUndV1hhMq3WeJi7A==} engines: {node: '>=18'} @@ -4800,12 +4731,6 @@ packages: cpu: [mips64el] os: [linux] - '@esbuild/linux-mips64el@0.21.5': - resolution: {integrity: sha512-IajOmO+KJK23bj52dFSNCMsz1QP1DqM6cwLUv3W1QwyxkyIWecfafnI555fvSGqEKwjMXVLokcV5ygHW5b3Jbg==} - engines: {node: '>=12'} - cpu: [mips64el] - os: [linux] - '@esbuild/linux-mips64el@0.23.0': resolution: {integrity: sha512-J9rflLtqdYrxHv2FqXE2i1ELgNjT+JFURt/uDMoPQLcjWQA5wDKgQA4t/dTqGa88ZVECKaD0TctwsUfHbVoi4w==} engines: {node: '>=18'} @@ -4842,12 +4767,6 @@ packages: cpu: [ppc64] os: [linux] - '@esbuild/linux-ppc64@0.21.5': - resolution: {integrity: sha512-1hHV/Z4OEfMwpLO8rp7CvlhBDnjsC3CttJXIhBi+5Aj5r+MBvy4egg7wCbe//hSsT+RvDAG7s81tAvpL2XAE4w==} - engines: {node: '>=12'} - cpu: [ppc64] - os: [linux] - '@esbuild/linux-ppc64@0.23.0': resolution: {integrity: sha512-cShCXtEOVc5GxU0fM+dsFD10qZ5UpcQ8AM22bYj0u/yaAykWnqXJDpd77ublcX6vdDsWLuweeuSNZk4yUxZwtw==} engines: {node: '>=18'} @@ -4884,12 +4803,6 @@ packages: cpu: [riscv64] os: [linux] - '@esbuild/linux-riscv64@0.21.5': - resolution: {integrity: sha512-2HdXDMd9GMgTGrPWnJzP2ALSokE/0O5HhTUvWIbD3YdjME8JwvSCnNGBnTThKGEB91OZhzrJ4qIIxk/SBmyDDA==} - engines: {node: '>=12'} - cpu: [riscv64] - os: [linux] - '@esbuild/linux-riscv64@0.23.0': resolution: {integrity: sha512-HEtaN7Y5UB4tZPeQmgz/UhzoEyYftbMXrBCUjINGjh3uil+rB/QzzpMshz3cNUxqXN7Vr93zzVtpIDL99t9aRw==} engines: {node: '>=18'} @@ 
-4926,12 +4839,6 @@ packages: cpu: [s390x] os: [linux] - '@esbuild/linux-s390x@0.21.5': - resolution: {integrity: sha512-zus5sxzqBJD3eXxwvjN1yQkRepANgxE9lgOW2qLnmr8ikMTphkjgXu1HR01K4FJg8h1kEEDAqDcZQtbrRnB41A==} - engines: {node: '>=12'} - cpu: [s390x] - os: [linux] - '@esbuild/linux-s390x@0.23.0': resolution: {integrity: sha512-WDi3+NVAuyjg/Wxi+o5KPqRbZY0QhI9TjrEEm+8dmpY9Xir8+HE/HNx2JoLckhKbFopW0RdO2D72w8trZOV+Wg==} engines: {node: '>=18'} @@ -4968,12 +4875,6 @@ packages: cpu: [x64] os: [linux] - '@esbuild/linux-x64@0.21.5': - resolution: {integrity: sha512-1rYdTpyv03iycF1+BhzrzQJCdOuAOtaqHTWJZCWvijKD2N5Xu0TtVC8/+1faWqcP9iBCWOmjmhoH94dH82BxPQ==} - engines: {node: '>=12'} - cpu: [x64] - os: [linux] - '@esbuild/linux-x64@0.23.0': resolution: {integrity: sha512-a3pMQhUEJkITgAw6e0bWA+F+vFtCciMjW/LPtoj99MhVt+Mfb6bbL9hu2wmTZgNd994qTAEw+U/r6k3qHWWaOQ==} engines: {node: '>=18'} @@ -5022,12 +4923,6 @@ packages: cpu: [x64] os: [netbsd] - '@esbuild/netbsd-x64@0.21.5': - resolution: {integrity: sha512-Woi2MXzXjMULccIwMnLciyZH4nCIMpWQAs049KEeMvOcNADVxo0UBIQPfSmxB3CWKedngg7sWZdLvLczpe0tLg==} - engines: {node: '>=12'} - cpu: [x64] - os: [netbsd] - '@esbuild/netbsd-x64@0.23.0': resolution: {integrity: sha512-cRK+YDem7lFTs2Q5nEv/HHc4LnrfBCbH5+JHu6wm2eP+d8OZNoSMYgPZJq78vqQ9g+9+nMuIsAO7skzphRXHyw==} engines: {node: '>=18'} @@ -5082,12 +4977,6 @@ packages: cpu: [x64] os: [openbsd] - '@esbuild/openbsd-x64@0.21.5': - resolution: {integrity: sha512-HLNNw99xsvx12lFBUwoT8EVCsSvRNDVxNpjZ7bPn947b8gJPzeHWyNVhFsaerc0n3TsbOINvRP2byTZ5LKezow==} - engines: {node: '>=12'} - cpu: [x64] - os: [openbsd] - '@esbuild/openbsd-x64@0.23.0': resolution: {integrity: sha512-6p3nHpby0DM/v15IFKMjAaayFhqnXV52aEmv1whZHX56pdkK+MEaLoQWj+H42ssFarP1PcomVhbsR4pkz09qBg==} engines: {node: '>=18'} @@ -5124,12 +5013,6 @@ packages: cpu: [x64] os: [sunos] - '@esbuild/sunos-x64@0.21.5': - resolution: {integrity: sha512-6+gjmFpfy0BHU5Tpptkuh8+uw3mnrvgs+dSPQXQOv3ekbordwnzTVEb4qnIvQcYXq6gzkyTnoZ9dZG+D4garKg==} - engines: 
{node: '>=12'} - cpu: [x64] - os: [sunos] - '@esbuild/sunos-x64@0.23.0': resolution: {integrity: sha512-BFelBGfrBwk6LVrmFzCq1u1dZbG4zy/Kp93w2+y83Q5UGYF1d8sCzeLI9NXjKyujjBBniQa8R8PzLFAUrSM9OA==} engines: {node: '>=18'} @@ -5166,12 +5049,6 @@ packages: cpu: [arm64] os: [win32] - '@esbuild/win32-arm64@0.21.5': - resolution: {integrity: sha512-Z0gOTd75VvXqyq7nsl93zwahcTROgqvuAcYDUr+vOv8uHhNSKROyU961kgtCD1e95IqPKSQKH7tBTslnS3tA8A==} - engines: {node: '>=12'} - cpu: [arm64] - os: [win32] - '@esbuild/win32-arm64@0.23.0': resolution: {integrity: sha512-lY6AC8p4Cnb7xYHuIxQ6iYPe6MfO2CC43XXKo9nBXDb35krYt7KGhQnOkRGar5psxYkircpCqfbNDB4uJbS2jQ==} engines: {node: '>=18'} @@ -5208,12 +5085,6 @@ packages: cpu: [ia32] os: [win32] - '@esbuild/win32-ia32@0.21.5': - resolution: {integrity: sha512-SWXFF1CL2RVNMaVs+BBClwtfZSvDgtL//G/smwAc5oVK/UPu2Gu9tIaRgFmYFFKrmg3SyAjSrElf0TiJ1v8fYA==} - engines: {node: '>=12'} - cpu: [ia32] - os: [win32] - '@esbuild/win32-ia32@0.23.0': resolution: {integrity: sha512-7L1bHlOTcO4ByvI7OXVI5pNN6HSu6pUQq9yodga8izeuB1KcT2UkHaH6118QJwopExPn0rMHIseCTx1CRo/uNA==} engines: {node: '>=18'} @@ -5250,12 +5121,6 @@ packages: cpu: [x64] os: [win32] - '@esbuild/win32-x64@0.21.5': - resolution: {integrity: sha512-tQd/1efJuzPC6rCFwEvLtci/xNFcTZknmXs98FYDfGE4wP9ClFV98nyKrzJKVPMhdDnjzLhdUyMX4PsQAPjwIw==} - engines: {node: '>=12'} - cpu: [x64] - os: [win32] - '@esbuild/win32-x64@0.23.0': resolution: {integrity: sha512-Arm+WgUFLUATuoxCJcahGuk6Yj9Pzxd6l11Zb/2aAuv5kWWvvfhLFo2fni4uSK5vzlUdCGZ/BdV5tH8klj8p8g==} engines: {node: '>=18'} @@ -5860,6 +5725,9 @@ packages: '@kubernetes/client-node@1.0.0': resolution: {integrity: sha512-a8NSvFDSHKFZ0sR1hbPSf8IDFNJwctEU5RodSCNiq/moRXWmrdmqhb1RRQzF+l+TSBaDgHw3YsYNxxE92STBzw==} + '@kwsites/file-exists@1.1.1': + resolution: {integrity: sha512-m9/5YGR18lIwxSFDwfE3oA7bWuq9kdau6ugN4H2rJeyhFQZcG9AgSHkQtSD15a8WvTgfz9aikZMrKPHvbpqFiw==} + '@lezer/common@1.0.2': resolution: {integrity: 
sha512-SVgiGtMnMnW3ActR8SXgsDhw7a0w0ChHSYAyAUxxrOiJ1OqYWEKk/xJd84tTSPo1mo6DXLObAJALNnd0Hrv7Ng==} @@ -6225,6 +6093,9 @@ packages: '@nicolo-ribaudo/eslint-scope-5-internals@5.1.1-v1': resolution: {integrity: sha512-54/JRvkLIzzDWshCWfuhadfrfZVPiElY8Fcgmg1HroEly/EDSszzhBAsarCux+D/kOslTRquNzuyGSmUSTTHGg==} + '@nodable/entities@2.1.0': + resolution: {integrity: sha512-nyT7T3nbMyBI/lvr6L5TyWbFJAI9FTgVRakNoBqCD+PmID8DzFrrNdLLtHMwMszOtqZa8PAOV24ZqDnQrhQINA==} + '@nodelib/fs.scandir@2.1.5': resolution: {integrity: sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==} engines: {node: '>= 8'} @@ -9216,7 +9087,7 @@ packages: '@remix-run/react': ^2.17.0 '@remix-run/serve': ^2.17.0 typescript: 5.5.4 - vite: ^5.1.0 || ^6.0.0 + vite: ^6.4.2 wrangler: ^3.28.2 peerDependenciesMeta: '@remix-run/serve': @@ -9319,119 +9190,152 @@ packages: '@remix-run/web-stream@1.1.0': resolution: {integrity: sha512-KRJtwrjRV5Bb+pM7zxcTJkhIqWWSy+MYsIxHK+0m5atcznsf15YwUBWHWulZerV2+vvHH1Lp1DD7pw6qKW8SgA==} - '@rollup/rollup-android-arm-eabi@4.36.0': - resolution: {integrity: sha512-jgrXjjcEwN6XpZXL0HUeOVGfjXhPyxAbbhD0BlXUB+abTOpbPiN5Wb3kOT7yb+uEtATNYF5x5gIfwutmuBA26w==} + '@rollup/rollup-android-arm-eabi@4.60.1': + resolution: {integrity: sha512-d6FinEBLdIiK+1uACUttJKfgZREXrF0Qc2SmLII7W2AD8FfiZ9Wjd+rD/iRuf5s5dWrr1GgwXCvPqOuDquOowA==} cpu: [arm] os: [android] - '@rollup/rollup-android-arm64@4.36.0': - resolution: {integrity: sha512-NyfuLvdPdNUfUNeYKUwPwKsE5SXa2J6bCt2LdB/N+AxShnkpiczi3tcLJrm5mA+eqpy0HmaIY9F6XCa32N5yzg==} + '@rollup/rollup-android-arm64@4.60.1': + resolution: {integrity: sha512-YjG/EwIDvvYI1YvYbHvDz/BYHtkY4ygUIXHnTdLhG+hKIQFBiosfWiACWortsKPKU/+dUwQQCKQM3qrDe8c9BA==} cpu: [arm64] os: [android] - '@rollup/rollup-darwin-arm64@4.36.0': - resolution: {integrity: sha512-JQ1Jk5G4bGrD4pWJQzWsD8I1n1mgPXq33+/vP4sk8j/z/C2siRuxZtaUA7yMTf71TCZTZl/4e1bfzwUmFb3+rw==} + '@rollup/rollup-darwin-arm64@4.53.2': + resolution: {integrity: 
sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ==} cpu: [arm64] os: [darwin] - '@rollup/rollup-darwin-arm64@4.53.2': - resolution: {integrity: sha512-A6s4gJpomNBtJ2yioj8bflM2oogDwzUiMl2yNJ2v9E7++sHrSrsQ29fOfn5DM/iCzpWcebNYEdXpaK4tr2RhfQ==} + '@rollup/rollup-darwin-arm64@4.60.1': + resolution: {integrity: sha512-mjCpF7GmkRtSJwon+Rq1N8+pI+8l7w5g9Z3vWj4T7abguC4Czwi3Yu/pFaLvA3TTeMVjnu3ctigusqWUfjZzvw==} cpu: [arm64] os: [darwin] - '@rollup/rollup-darwin-x64@4.36.0': - resolution: {integrity: sha512-6c6wMZa1lrtiRsbDziCmjE53YbTkxMYhhnWnSW8R/yqsM7a6mSJ3uAVT0t8Y/DGt7gxUWYuFM4bwWk9XCJrFKA==} + '@rollup/rollup-darwin-x64@4.60.1': + resolution: {integrity: sha512-haZ7hJ1JT4e9hqkoT9R/19XW2QKqjfJVv+i5AGg57S+nLk9lQnJ1F/eZloRO3o9Scy9CM3wQ9l+dkXtcBgN5Ew==} cpu: [x64] os: [darwin] - '@rollup/rollup-freebsd-arm64@4.36.0': - resolution: {integrity: sha512-KXVsijKeJXOl8QzXTsA+sHVDsFOmMCdBRgFmBb+mfEb/7geR7+C8ypAml4fquUt14ZyVXaw2o1FWhqAfOvA4sg==} + '@rollup/rollup-freebsd-arm64@4.60.1': + resolution: {integrity: sha512-czw90wpQq3ZsAVBlinZjAYTKduOjTywlG7fEeWKUA7oCmpA8xdTkxZZlwNJKWqILlq0wehoZcJYfBvOyhPTQ6w==} cpu: [arm64] os: [freebsd] - '@rollup/rollup-freebsd-x64@4.36.0': - resolution: {integrity: sha512-dVeWq1ebbvByI+ndz4IJcD4a09RJgRYmLccwlQ8bPd4olz3Y213uf1iwvc7ZaxNn2ab7bjc08PrtBgMu6nb4pQ==} + '@rollup/rollup-freebsd-x64@4.60.1': + resolution: {integrity: sha512-KVB2rqsxTHuBtfOeySEyzEOB7ltlB/ux38iu2rBQzkjbwRVlkhAGIEDiiYnO2kFOkJp+Z7pUXKyrRRFuFUKt+g==} cpu: [x64] os: [freebsd] - '@rollup/rollup-linux-arm-gnueabihf@4.36.0': - resolution: {integrity: sha512-bvXVU42mOVcF4le6XSjscdXjqx8okv4n5vmwgzcmtvFdifQ5U4dXFYaCB87namDRKlUL9ybVtLQ9ztnawaSzvg==} + '@rollup/rollup-linux-arm-gnueabihf@4.60.1': + resolution: {integrity: sha512-L+34Qqil+v5uC0zEubW7uByo78WOCIrBvci69E7sFASRl0X7b/MB6Cqd1lky/CtcSVTydWa2WZwFuWexjS5o6g==} cpu: [arm] os: [linux] libc: [glibc] - '@rollup/rollup-linux-arm-musleabihf@4.36.0': - resolution: {integrity: 
sha512-JFIQrDJYrxOnyDQGYkqnNBtjDwTgbasdbUiQvcU8JmGDfValfH1lNpng+4FWlhaVIR4KPkeddYjsVVbmJYvDcg==} + '@rollup/rollup-linux-arm-musleabihf@4.60.1': + resolution: {integrity: sha512-n83O8rt4v34hgFzlkb1ycniJh7IR5RCIqt6mz1VRJD6pmhRi0CXdmfnLu9dIUS6buzh60IvACM842Ffb3xd6Gg==} cpu: [arm] os: [linux] libc: [musl] - '@rollup/rollup-linux-arm64-gnu@4.36.0': - resolution: {integrity: sha512-KqjYVh3oM1bj//5X7k79PSCZ6CvaVzb7Qs7VMWS+SlWB5M8p3FqufLP9VNp4CazJ0CsPDLwVD9r3vX7Ci4J56A==} + '@rollup/rollup-linux-arm64-gnu@4.60.1': + resolution: {integrity: sha512-Nql7sTeAzhTAja3QXeAI48+/+GjBJ+QmAH13snn0AJSNL50JsDqotyudHyMbO2RbJkskbMbFJfIJKWA6R1LCJQ==} cpu: [arm64] os: [linux] libc: [glibc] - '@rollup/rollup-linux-arm64-musl@4.36.0': - resolution: {integrity: sha512-QiGnhScND+mAAtfHqeT+cB1S9yFnNQ/EwCg5yE3MzoaZZnIV0RV9O5alJAoJKX/sBONVKeZdMfO8QSaWEygMhw==} + '@rollup/rollup-linux-arm64-musl@4.60.1': + resolution: {integrity: sha512-+pUymDhd0ys9GcKZPPWlFiZ67sTWV5UU6zOJat02M1+PiuSGDziyRuI/pPue3hoUwm2uGfxdL+trT6Z9rxnlMA==} cpu: [arm64] os: [linux] libc: [musl] - '@rollup/rollup-linux-loongarch64-gnu@4.36.0': - resolution: {integrity: sha512-1ZPyEDWF8phd4FQtTzMh8FQwqzvIjLsl6/84gzUxnMNFBtExBtpL51H67mV9xipuxl1AEAerRBgBwFNpkw8+Lg==} + '@rollup/rollup-linux-loong64-gnu@4.60.1': + resolution: {integrity: sha512-VSvgvQeIcsEvY4bKDHEDWcpW4Yw7BtlKG1GUT4FzBUlEKQK0rWHYBqQt6Fm2taXS+1bXvJT6kICu5ZwqKCnvlQ==} cpu: [loong64] os: [linux] libc: [glibc] - '@rollup/rollup-linux-powerpc64le-gnu@4.36.0': - resolution: {integrity: sha512-VMPMEIUpPFKpPI9GZMhJrtu8rxnp6mJR3ZzQPykq4xc2GmdHj3Q4cA+7avMyegXy4n1v+Qynr9fR88BmyO74tg==} + '@rollup/rollup-linux-loong64-musl@4.60.1': + resolution: {integrity: sha512-4LqhUomJqwe641gsPp6xLfhqWMbQV04KtPp7/dIp0nzPxAkNY1AbwL5W0MQpcalLYk07vaW9Kp1PBhdpZYYcEw==} + cpu: [loong64] + os: [linux] + libc: [musl] + + '@rollup/rollup-linux-ppc64-gnu@4.60.1': + resolution: {integrity: sha512-tLQQ9aPvkBxOc/EUT6j3pyeMD6Hb8QF2BTBnCQWP/uu1lhc9AIrIjKnLYMEroIz/JvtGYgI9dF3AxHZNaEH0rw==} cpu: [ppc64] os: 
[linux] libc: [glibc] - '@rollup/rollup-linux-riscv64-gnu@4.36.0': - resolution: {integrity: sha512-ttE6ayb/kHwNRJGYLpuAvB7SMtOeQnVXEIpMtAvx3kepFQeowVED0n1K9nAdraHUPJ5hydEMxBpIR7o4nrm8uA==} + '@rollup/rollup-linux-ppc64-musl@4.60.1': + resolution: {integrity: sha512-RMxFhJwc9fSXP6PqmAz4cbv3kAyvD1etJFjTx4ONqFP9DkTkXsAMU4v3Vyc5BgzC+anz7nS/9tp4obsKfqkDHg==} + cpu: [ppc64] + os: [linux] + libc: [musl] + + '@rollup/rollup-linux-riscv64-gnu@4.60.1': + resolution: {integrity: sha512-QKgFl+Yc1eEk6MmOBfRHYF6lTxiiiV3/z/BRrbSiW2I7AFTXoBFvdMEyglohPj//2mZS4hDOqeB0H1ACh3sBbg==} cpu: [riscv64] os: [linux] libc: [glibc] - '@rollup/rollup-linux-s390x-gnu@4.36.0': - resolution: {integrity: sha512-4a5gf2jpS0AIe7uBjxDeUMNcFmaRTbNv7NxI5xOCs4lhzsVyGR/0qBXduPnoWf6dGC365saTiwag8hP1imTgag==} + '@rollup/rollup-linux-riscv64-musl@4.60.1': + resolution: {integrity: sha512-RAjXjP/8c6ZtzatZcA1RaQr6O1TRhzC+adn8YZDnChliZHviqIjmvFwHcxi4JKPSDAt6Uhf/7vqcBzQJy0PDJg==} + cpu: [riscv64] + os: [linux] + libc: [musl] + + '@rollup/rollup-linux-s390x-gnu@4.60.1': + resolution: {integrity: sha512-wcuocpaOlaL1COBYiA89O6yfjlp3RwKDeTIA0hM7OpmhR1Bjo9j31G1uQVpDlTvwxGn2nQs65fBFL5UFd76FcQ==} cpu: [s390x] os: [linux] libc: [glibc] - '@rollup/rollup-linux-x64-gnu@4.36.0': - resolution: {integrity: sha512-5KtoW8UWmwFKQ96aQL3LlRXX16IMwyzMq/jSSVIIyAANiE1doaQsx/KRyhAvpHlPjPiSU/AYX/8m+lQ9VToxFQ==} + '@rollup/rollup-linux-x64-gnu@4.53.2': + resolution: {integrity: sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw==} cpu: [x64] os: [linux] libc: [glibc] - '@rollup/rollup-linux-x64-gnu@4.53.2': - resolution: {integrity: sha512-yo8d6tdfdeBArzC7T/PnHd7OypfI9cbuZzPnzLJIyKYFhAQ8SvlkKtKBMbXDxe1h03Rcr7u++nFS7tqXz87Gtw==} + '@rollup/rollup-linux-x64-gnu@4.60.1': + resolution: {integrity: sha512-77PpsFQUCOiZR9+LQEFg9GClyfkNXj1MP6wRnzYs0EeWbPcHs02AXu4xuUbM1zhwn3wqaizle3AEYg5aeoohhg==} cpu: [x64] os: [linux] libc: [glibc] - '@rollup/rollup-linux-x64-musl@4.36.0': - resolution: {integrity: 
sha512-sycrYZPrv2ag4OCvaN5js+f01eoZ2U+RmT5as8vhxiFz+kxwlHrsxOwKPSA8WyS+Wc6Epid9QeI/IkQ9NkgYyQ==} + '@rollup/rollup-linux-x64-musl@4.60.1': + resolution: {integrity: sha512-5cIATbk5vynAjqqmyBjlciMJl1+R/CwX9oLk/EyiFXDWd95KpHdrOJT//rnUl4cUcskrd0jCCw3wpZnhIHdD9w==} cpu: [x64] os: [linux] libc: [musl] - '@rollup/rollup-win32-arm64-msvc@4.36.0': - resolution: {integrity: sha512-qbqt4N7tokFwwSVlWDsjfoHgviS3n/vZ8LK0h1uLG9TYIRuUTJC88E1xb3LM2iqZ/WTqNQjYrtmtGmrmmawB6A==} + '@rollup/rollup-openbsd-x64@4.60.1': + resolution: {integrity: sha512-cl0w09WsCi17mcmWqqglez9Gk8isgeWvoUZ3WiJFYSR3zjBQc2J5/ihSjpl+VLjPqjQ/1hJRcqBfLjssREQILw==} + cpu: [x64] + os: [openbsd] + + '@rollup/rollup-openharmony-arm64@4.60.1': + resolution: {integrity: sha512-4Cv23ZrONRbNtbZa37mLSueXUCtN7MXccChtKpUnQNgF010rjrjfHx3QxkS2PI7LqGT5xXyYs1a7LbzAwT0iCA==} + cpu: [arm64] + os: [openharmony] + + '@rollup/rollup-win32-arm64-msvc@4.60.1': + resolution: {integrity: sha512-i1okWYkA4FJICtr7KpYzFpRTHgy5jdDbZiWfvny21iIKky5YExiDXP+zbXzm3dUcFpkEeYNHgQ5fuG236JPq0g==} cpu: [arm64] os: [win32] - '@rollup/rollup-win32-ia32-msvc@4.36.0': - resolution: {integrity: sha512-t+RY0JuRamIocMuQcfwYSOkmdX9dtkr1PbhKW42AMvaDQa+jOdpUYysroTF/nuPpAaQMWp7ye+ndlmmthieJrQ==} + '@rollup/rollup-win32-ia32-msvc@4.60.1': + resolution: {integrity: sha512-u09m3CuwLzShA0EYKMNiFgcjjzwqtUMLmuCJLeZWjjOYA3IT2Di09KaxGBTP9xVztWyIWjVdsB2E9goMjZvTQg==} cpu: [ia32] os: [win32] - '@rollup/rollup-win32-x64-msvc@4.36.0': - resolution: {integrity: sha512-aRXd7tRZkWLqGbChgcMMDEHjOKudo1kChb1Jt1IfR8cY/KIpgNviLeJy5FUb9IpSuQj8dU2fAYNMPW/hLKOSTw==} + '@rollup/rollup-win32-x64-gnu@4.60.1': + resolution: {integrity: sha512-k+600V9Zl1CM7eZxJgMyTUzmrmhB/0XZnF4pRypKAlAgxmedUA+1v9R+XOFv56W4SlHEzfeMtzujLJD22Uz5zg==} + cpu: [x64] + os: [win32] + + '@rollup/rollup-win32-x64-msvc@4.60.1': + resolution: {integrity: sha512-lWMnixq/QzxyhTV6NjQJ4SFo1J6PvOX8vUx5Wb4bBPsEb+8xZ89Bz6kOXpfXj9ak9AHTQVQzlgzBEc1SyM27xQ==} cpu: [x64] os: [win32] @@ -10495,11 +10399,11 @@ packages: 
'@team-plain/typescript-sdk@3.5.0': resolution: {integrity: sha512-9kweiSlYAN31VI7yzILGxdlZqsGJ+FmCEfXyEZ/0/i3r6vOwq45FDqtjadnQJVtFm+rf/8vCFRN+wEYMIEv6Aw==} - '@testcontainers/postgresql@10.28.0': - resolution: {integrity: sha512-NN25rruG5D4Q7pCNIJuHwB+G85OSeJ3xHZ2fWx0O6sPoPEfCYwvpj8mq99cyn68nxFkFYZeyrZJtSFO+FnydiA==} + '@testcontainers/postgresql@11.14.0': + resolution: {integrity: sha512-wYbJn8GRTj8qfqzfVubxioYWlHJU/ImIjuzPwyy9C5Qfo6g3GLduPZAj+BifvqTZjgT3gd4gFVLCPhBji7dc1w==} - '@testcontainers/redis@10.28.0': - resolution: {integrity: sha512-xDNKSJTBmQca/3v5sdHmqSCYr68vjvAGSxoHCuWylha77gAYn88g5nUZK0ocNbUZgBq69KhIzj/f9zlHkw34uA==} + '@testcontainers/redis@11.14.0': + resolution: {integrity: sha512-WX005slz2JMQPw2avbSjf5awVjpmFhOs5xCxeGSYLcV5ia4W1edv/P6MdOw4dZnvDQDuN5LfqNoV/ut3XGb2pA==} '@testing-library/dom@8.19.1': resolution: {integrity: sha512-P6iIPyYQ+qH8CvGauAqanhVnjrnRe0IZFSYCeGkSRW9q3u8bdVn2NPI+lasFyVsEQn1J/IFmp5Aax41+dAP9wg==} @@ -10690,9 +10594,8 @@ packages: '@types/dockerode@3.3.35': resolution: {integrity: sha512-P+DCMASlsH+QaKkDpekKrP5pLls767PPs+/LrlVbKnEnY5tMpEUa2C6U4gRsdFZengOqxdCIqy16R22Q3pLB6Q==} - '@types/dompurify@3.2.0': - resolution: {integrity: sha512-Fgg31wv9QbLDA0SpTOXO3MaxySc4DKGLi8sna4/Utjo4r3ZRPdCt4UQee8BWr+Q5z21yifghREPJGYaEOEIACg==} - deprecated: This is a stub types definition. dompurify provides its own type definitions, so you do not need this installed. 
+ '@types/dockerode@4.0.1': + resolution: {integrity: sha512-cmUpB+dPN955PxBEuXE3f6lKO1hHiIGYJA46IVF3BJpNsZGvtBDcRnlrHYHtOH/B6vtDOyl2kZ2ShAu3mgc27Q==} '@types/eslint-scope@3.7.4': resolution: {integrity: sha512-9K4zoImiZc3HlIp6AVUDE4CWYx22a+lhSZMYNpbjW04+YF0KWj4pJXnEMjdnFTiQibFFmElcsasJXDbdI/EPhA==} @@ -10712,9 +10615,6 @@ packages: '@types/estree@1.0.0': resolution: {integrity: sha512-WulqXMDUTYAXCjZnk6JtIHPigp55cVtDgDrO2gHRwhyJto21+1zbVCtOYB2L1F9w4qCQ0rOGWBnBe0FNTiEJIQ==} - '@types/estree@1.0.6': - resolution: {integrity: sha512-AYnb1nQyY49te+VRAVgmzfcgjYS91mY5P0TKUDCLEM+gNnA+3T6rWITXRLYCpahpqSQbN5cE+gHpnPyXjHWxcw==} - '@types/estree@1.0.8': resolution: {integrity: sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==} @@ -10835,8 +10735,8 @@ packages: '@types/node@20.14.14': resolution: {integrity: sha512-d64f00982fS9YoOgJkAMolK7MN8Iq3TDdVjchbYHdEmjth/DHowx82GnoA+tVUAN+7vxfYUgAzi+JXbKNd2SDQ==} - '@types/nodemailer@7.0.4': - resolution: {integrity: sha512-ee8fxWqOchH+Hv6MDDNNy028kwvVnLplrStm4Zf/3uHWw5zzo8FoYYeffpJtGs2wWysEumMH0ZIdMGMY1eMAow==} + '@types/nodemailer@8.0.0': + resolution: {integrity: sha512-fyf8jWULsCo0d0BuoQ75i6IeoHs47qcqxWc7yUdUcV0pOZGjUTTOvwdG1PRXUDqN/8A64yQdQdnA2pZgcdi+cA==} '@types/normalize-package-data@2.4.1': resolution: {integrity: sha512-Gj7cI7z+98M282Tqmp2K5EIsoouUEzbBJhQQzDE3jSIRk6r9gsz0oUokqIUR4u1R3dMHo0pDHM7sNOHyhulypw==} @@ -10997,9 +10897,6 @@ packages: '@types/uuid@10.0.0': resolution: {integrity: sha512-7gqG38EyHgyP1S+7+xomFtL+ZNHcKv6DwNaCZmJmo1vgMugyF3TCnXVg4t1uk89mLNwnLtnY3TpOpCOyp1/xHQ==} - '@types/uuid@9.0.0': - resolution: {integrity: sha512-kr90f+ERiQtKWMz5rP32ltJ/BtULDI5RVO0uavn1HQUOwjx0R1h0rnDYNL0CepF1zL5bSY6FISAfd9tOdDhU5Q==} - '@types/webpack@5.28.5': resolution: {integrity: sha512-wR87cgvxj3p6D0Crt1r5avwqffqPXUkNlnQ1mjU93G7gCuFjufZR4I6j8cz5g1F1tTYpfOOFvly+cmIQwL9wvw==} @@ -11191,7 +11088,7 @@ packages: resolution: {integrity: 
sha512-8IJ3CvwtSw/EFXqWFL8aCMu+YyYXG2WUSrQbViOZkWTKTVicVwZ/YiEZDSqD00kX+v/+W+OnxhNWoeVKorHygA==} peerDependencies: msw: ^2.4.9 - vite: ^5.0.0 || ^6.0.0 + vite: ^6.4.2 peerDependenciesMeta: msw: optional: true @@ -11515,7 +11412,7 @@ packages: ajv-formats@2.1.1: resolution: {integrity: sha512-Wx0Kx52hxE7C18hkMEggYlEifqWZtYaRgouJor+WMdPnQyEK13vgEWyVNup7SoeeoLMsr4kf5h6dOW11I15MUA==} peerDependencies: - ajv: ^8.0.0 + ajv: ^8.18.0 peerDependenciesMeta: ajv: optional: true @@ -11523,7 +11420,7 @@ packages: ajv-formats@3.0.1: resolution: {integrity: sha512-8iUql50EUR+uUcdRQ3HDqa6EVyo3docL8g5WJ3FNcWmu62IbkGUue/pEyLBW8VGKKucTPgqeks4fIU1DA4yowQ==} peerDependencies: - ajv: ^8.0.0 + ajv: ^8.18.0 peerDependenciesMeta: ajv: optional: true @@ -11536,13 +11433,13 @@ packages: ajv-keywords@5.1.0: resolution: {integrity: sha512-YCS/JNFAUyr5vAuhk1DWm1CBxRHW9LbJ2ozWeemrIqpbsqKjHVxYPyi5GC0rjZIT5JxJ3virVTS8wk4i/Z+krw==} peerDependencies: - ajv: ^8.8.2 + ajv: ^8.18.0 ajv@6.12.6: resolution: {integrity: sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==} - ajv@8.17.1: - resolution: {integrity: sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==} + ajv@8.18.0: + resolution: {integrity: sha512-PlXPeEWMXMZ7sPYOHqmDyCJzcfNrUr3fGNKtezX14ykXOEIvyK81d+qydx89KY5O71FKMPaQ2vBfBFI5NHR63A==} ansi-colors@4.1.3: resolution: {integrity: sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==} @@ -11720,14 +11617,14 @@ packages: engines: {node: ^10 || ^12 || >=14} hasBin: true peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 autoprefixer@10.4.14: resolution: {integrity: sha512-FQzyfOsTlwVzjHxKEqRIAdJx9niO6VCBCoEwax/VLSoQF29ggECcPuBqUMZ+u8jCZOPSy8b8/8KnuFbp0SaFZQ==} engines: {node: ^10 || ^12 || >=14} hasBin: true peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 autoprefixer@9.8.8: resolution: {integrity: 
sha512-eM9d/swFopRt5gdJ7jrpCwgvEMIayITpojhkkSMRsFHYuH5bkSQ4p/9qTEHtmNudUZh22Tehu7I6CxAW0IXTKA==} @@ -11753,8 +11650,8 @@ packages: resolution: {integrity: sha512-b1WlTV8+XKLj9gZy2DZXgQiyDp9xkkoe2a6U6UbYccScq2wgH/YwCeI2/Jq2mgo0HzQxqJOjWZBLeA/mqsk5Mg==} engines: {node: '>=4'} - axios@1.12.2: - resolution: {integrity: sha512-vMJzPewAlRyOgxV2dU0Cuz2O8zzzx9VYtbJOaBgXFeLc4IV/Eg50n4LowmehOOR61S8ZMpc2K5Sa7g6A4jfkUw==} + axios@1.15.1: + resolution: {integrity: sha512-WOG+Jj8ZOvR0a3rAn+Tuf1UQJRxw5venr6DgdbJzngJE3qG7X0kL83CZGpdHMxEm+ZK3seAbvFsw4FfOfP9vxg==} axobject-query@3.2.1: resolution: {integrity: sha512-jsyHu61e6N4Vbz/v18DHwWYKK0bSWLqn47eeDSKPB7m8tqMHF9YJ+mhIk2lVteyZrY8tnSj/jHOv4YiTCuCJgg==} @@ -12932,8 +12829,8 @@ packages: defined@1.0.1: resolution: {integrity: sha512-hsBd2qSVCRE+5PmNdHt1uzyrFu5d3RwmFDKzyNZMFq/EwDNJF7Ee5+D5oEKF0hU6LhtoUF1macFvOe4AskQC1Q==} - defu@6.1.4: - resolution: {integrity: sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg==} + defu@6.1.7: + resolution: {integrity: sha512-7z22QmUWiQ/2d0KkdYmANbRUVABpZ9SNYyH5vx6PZ+nE5bcC0l7uFvEfHlyld/HcGBFTL536ClDt3DEcSlEJAQ==} degenerator@5.0.1: resolution: {integrity: sha512-TllpMR/t0M5sqCXfj85i4XaAzxmS5tVA16dqvdkMwGmzI+dXLXnw3J+3Vdv7VKw+ThlTMboK6i9rnZ6Nntj5CQ==} @@ -13018,14 +12915,22 @@ packages: dlv@1.1.3: resolution: {integrity: sha512-+HlytyjlPKnIG8XuRG8WvmBP8xs8P71y+SKKS6ZXWoEgLuePxtDoUEiH7WkdePWrQ5JBpE6aoVqfZfJUQkjXwA==} - docker-compose@0.24.8: - resolution: {integrity: sha512-plizRs/Vf15H+GCVxq2EUvyPK7ei9b/cVesHvjnX4xaXjM9spHe2Ytq0BitndFgvTJ3E3NljPNUEl7BAN43iZw==} + docker-compose@1.4.2: + resolution: {integrity: sha512-rPHigTKGaEHpkUmfd69QgaOp+Os5vGJwG/Ry8lcr8W/382AmI+z/D7qoa9BybKIkqNppaIbs8RYeHSevdQjWww==} engines: {node: '>= 6.0.0'} docker-modem@5.0.6: resolution: {integrity: sha512-ens7BiayssQz/uAxGzH8zGXCtiV24rRWXdjNha5V4zSOcxmAZsfGVm/PPFbwQdqEkDnhG+SyR9E3zSHUbOKXBQ==} engines: {node: '>= 8.0'} + docker-modem@5.0.7: + resolution: {integrity: 
sha512-XJgGhoR/CLpqshm4d3L7rzH6t8NgDFUIIpztYlLHIApeJjMZKYJMz2zxPsYxnejq5h3ELYSw/RBsi3t5h7gNTA==} + engines: {node: '>= 8.0'} + + dockerode@4.0.10: + resolution: {integrity: sha512-8L/P9JynLBiG7/coiA4FlQXegHltRqS0a+KqI44P1zgQh8QLHTg7FKOwhkBgSJwZTeHsq30WRoVFLuwkfK0YFg==} + engines: {node: '>= 8.0'} + dockerode@4.0.6: resolution: {integrity: sha512-FbVf3Z8fY/kALB9s+P9epCpWhfi/r0N2DgYYcYpsAUlaTxPjdsitsFobnltb+lyCgAIvf9C+4PSWlTnHlJMf1w==} engines: {node: '>= 8.0'} @@ -13057,8 +12962,8 @@ packages: resolution: {integrity: sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w==} engines: {node: '>= 4'} - dompurify@3.2.6: - resolution: {integrity: sha512-/2GogDQlohXPZe6D6NOgQvXLPSYBqIWMnZ8zzOhn09REE4eyAzb+Hed3jhoM9OkuaJ8P6ZGTTVWQKAi8ieIzfQ==} + dompurify@3.4.1: + resolution: {integrity: sha512-JahakDAIg1gyOm7dlgWSDjV4n7Ip2PKR55NIT6jrMfIgLFgWo81vdr1/QGqWtFNRqXP9UV71oVePtjqS2ebnPw==} domutils@3.0.1: resolution: {integrity: sha512-z08c1l761iKhDFtfXO04C7kTdPBLi41zwOZl00WS8b5eiaebNpY00HKbztwBq+e3vyqWNwWF3mP9YLUeqIrF+Q==} @@ -13131,9 +13036,6 @@ packages: ee-first@1.1.1: resolution: {integrity: sha512-WMwm9LhRUo+WUaRN+vRuETqG89IgZphVSNkdFgeb6sS/E4OrDIN7t48CAewSHXc6C8lefD8KKfr5vY61brQlow==} - effect@3.11.7: - resolution: {integrity: sha512-laj+TCxWGn0eOv6jNmS9vavMO01Z4vvRr7v5airaOUfE7Zr5PrHiECpiI5HRvOewxa1im/4EcOvRodOZ1S2Y7Q==} - effect@3.16.12: resolution: {integrity: sha512-N39iBk0K71F9nb442TLbTkjl24FLUzuvx2i1I2RsEAQsdAdUTuUoW0vlfUXgkMTUOnYqKnWcFfqw4hK4Pw27hg==} @@ -13143,6 +13045,9 @@ packages: effect@3.18.4: resolution: {integrity: sha512-b1LXQJLe9D11wfnOKAk3PKxuqYshQ0Heez+y5pnkd3jLj1yx9QhM72zZ9uUrOQyNvrs2GZZd/3maL0ZV18YuDA==} + effect@3.21.2: + resolution: {integrity: sha512-rXd2FGDM8KdjSIrc+mqEELo7ScW7xTVxEf1iInmPSpIde9/nyGuFM710cjTo7/EreGXiUX2MOonPpprbz2XHCg==} + effect@3.7.2: resolution: {integrity: sha512-pV7l1+LSZFvVObj4zuy4nYiBaC7qZOfrKV6s/Ef4p3KueiQwZFgamazklwyZ+x7Nyj2etRDFvHE/xkThTfQD1w==} @@ -13407,11 +13312,6 @@ packages: 
engines: {node: '>=12'} hasBin: true - esbuild@0.21.5: - resolution: {integrity: sha512-mg3OPMV4hXywwpoDxu3Qda5xCKQi+vCTZq8S9J/EpkhB2HzKXq4SNFZE3+NK93JYxc8VMSep+lOUSC/RVKaBqw==} - engines: {node: '>=12'} - hasBin: true - esbuild@0.23.0: resolution: {integrity: sha512-1lvV17H2bMYda/WaFb2jLPeHU3zml2k4/yagNMG8Q/YtfMjCwEUZa2eXXMgZTVSL5q1n4H7sQ0X6CdJDqqeCFA==} engines: {node: '>=18'} @@ -13805,10 +13705,6 @@ packages: resolution: {integrity: sha512-11Ndz7Nv+mvAC1j0ktTa7fAb0vLyGGX+rMHNBYQviQDGU0Hw7lhctJANqbPhu9nV9/izT/IntTgZ7Im/9LJs9g==} engines: {'0': node >=0.6.0} - fast-check@3.22.0: - resolution: {integrity: sha512-8HKz3qXqnHYp/VCNn2qfjHdAdcI8zcSqOyX64GOMukp7SL2bfzfeDKjSd+UyECtejccaZv3LcvZTm9YDD22iCQ==} - engines: {node: '>=8.0.0'} - fast-check@3.23.2: resolution: {integrity: sha512-h5+1OzzfCC3Ef7VbtKdcv7zsstUQwUDlYpUTvjeUsJAssPgLn7QzbboPtL5ro04Mq0rPOsMzl7q5hIbRs2wD1A==} engines: {node: '>=8.0.0'} @@ -13864,16 +13760,15 @@ packages: fast-url-parser@1.1.3: resolution: {integrity: sha512-5jOCVXADYNuRkKFzNJ0dCCewsZiYo0dz8QNYljkOpFC6r2U4OBmKtvm/Tsuh4w1YYdDqDb31a8TVhBJ2OJKdqQ==} - fast-xml-parser@4.2.5: - resolution: {integrity: sha512-B9/wizE4WngqQftFPmdaMYlXoJlJOYxGQOanC77fq9k8+Z0v5dDSVh+3glErdIROP//s/jgb7ZuxKfB8nVyo0g==} - hasBin: true + fast-xml-builder@1.1.5: + resolution: {integrity: sha512-4TJn/8FKLeslLAH3dnohXqE3QSoxkhvaMzepOIZytwJXZO69Bfz0HBdDHzOTOon6G59Zrk6VQ2bEiv1t61rfkA==} - fast-xml-parser@4.4.1: - resolution: {integrity: sha512-xkjOecfnKGkSsOwtZ5Pz7Us/T6mrbPQrq0nh+aCO5V9nk5NLWmasAHumTKjiPJPWANe+kAZ84Jc8ooJkzZ88Sw==} + fast-xml-parser@4.5.6: + resolution: {integrity: sha512-Yd4vkROfJf8AuJrDIVMVmYfULKmIJszVsMv7Vo71aocsKgFxpdlpSHXSaInvyYfgw2PRuObQSW2GFpVMUjxu9A==} hasBin: true - fast-xml-parser@5.2.5: - resolution: {integrity: sha512-pfX9uG9Ki0yekDHx2SiuRIyFdyAr1kMIMitPvb0YBo8SUfKvia7w7FIyd/l6av85pFYRhZscS75MwMnbvY+hcQ==} + fast-xml-parser@5.7.1: + resolution: {integrity: 
sha512-8Cc3f8GUGUULg34pBch/KGyPLglS+OFs05deyOlY7fL2MTagYPKrVQNmR1fLF/yJ9PH5ZSTd3YDF6pnmeZU+zA==} hasBin: true fastest-stable-stringify@2.0.2: @@ -13900,7 +13795,7 @@ packages: fdir@6.2.0: resolution: {integrity: sha512-9XaWcDl0riOX5j2kYfy0kKdg7skw3IY6kA4LFT8Tk2yF9UdrADUy8D6AJuBLtf7ISm/MksumwAHE3WVbMRyCLw==} peerDependencies: - picomatch: ^3 || ^4 + picomatch: ^4.0.4 peerDependenciesMeta: picomatch: optional: true @@ -13908,7 +13803,7 @@ packages: fdir@6.4.3: resolution: {integrity: sha512-PMXmW2y1hDDfTSRc9gaXIuCCRpuoz3Kaz8cUelp3smouvfT632ozg2vrT6lJsHKKOF59YLbOGfAWGUcKEfRMQw==} peerDependencies: - picomatch: ^3 || ^4 + picomatch: ^4.0.4 peerDependenciesMeta: picomatch: optional: true @@ -13916,7 +13811,7 @@ packages: fdir@6.4.4: resolution: {integrity: sha512-1NZP+GK4GfuAv3PqKvxQRDMjdSRZjnkq7KfhlNrCNNlZ0ygQFpebfrnfnq/W7fpUnAv9aGWmY1zKx7FYL3gwhg==} peerDependencies: - picomatch: ^3 || ^4 + picomatch: ^4.0.4 peerDependenciesMeta: picomatch: optional: true @@ -13987,11 +13882,11 @@ packages: resolution: {integrity: sha512-dm9s5Pw7Jc0GvMYbshN6zchCA9RgQlzzEZX3vylR9IqFfS8XciblUXOKfW6SiuJ0e13eDYZoZV5wdrev7P3Nwg==} engines: {node: ^10.12.0 || >=12.0.0} - flatted@3.2.7: - resolution: {integrity: sha512-5nqDSxl8nn5BSNxyR3n4I6eDmbolI6WT+QqR547RwxQapgjQBmtktdP+HTBb/a/zLsbzERTONyUB5pefh5TtjQ==} + flatted@3.4.2: + resolution: {integrity: sha512-PjDse7RzhcPkIJwy5t7KPWQSZ9cAbzQXcafsetQoD7sOJRQlGikNbx7yZp2OotDnJyrDcbyRq3Ttb18iYOqkxA==} - follow-redirects@1.15.9: - resolution: {integrity: sha512-gew4GsXizNgdoRyqmyfMHyAmXsZDk6mHkSxZFCzW9gwlbtOW44CDtYavM+y+72qD/Vq2l550kMF52DT8fOLJqQ==} + follow-redirects@1.16.0: + resolution: {integrity: sha512-y5rN/uOsadFT/JfYwhxRS5R7Qce+g3zG97+JrtFZlC9klX/W5hD7iiLzScI4nZqUS7DNUdhPgw4xI8W2LuXlUw==} engines: {node: '>=4.0'} peerDependencies: debug: '*' @@ -14152,8 +14047,8 @@ packages: resolution: {integrity: sha512-g/Q1aTSDOxFpchXC4i8ZWvxA1lnPqx/JHqcpIw0/LX9T8x/GBbi6YnlN5nhaKIFkT8oFsscUKgDJYxfwfS6QsQ==} engines: {node: '>=8'} - get-port@7.1.0: - 
resolution: {integrity: sha512-QB9NKEeDg3xxVwCCwJQ9+xycaz6pBB6iQ76wiWMl1927n0Kir6alPiP+yuiICLLU4jpMe08dXfpebuQppFA2zw==} + get-port@7.2.0: + resolution: {integrity: sha512-afP4W205ONCuMoPBqcR6PSXnzX35KTcJygfJfcp+QY+uwm3p20p1YczWXhlICIzGMCxYBQcySEcOgsJcrkyobg==} engines: {node: '>=16'} get-proto@1.0.1: @@ -14226,12 +14121,6 @@ packages: glob-to-regexp@0.4.1: resolution: {integrity: sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw==} - glob@10.3.10: - resolution: {integrity: sha512-fa46+tv1Ak0UPK1TOy/pZrIybNNt4HCv7SDzwyfiOZkvZLEbjsZkJBPtDHVshZjbecAoAGSC20MjLDG/qr679g==} - engines: {node: '>=16 || 14 >=14.17'} - deprecated: Old versions of glob are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me - hasBin: true - glob@10.3.4: resolution: {integrity: sha512-6LFElP3A+i/Q8XQKEvZjkEWEOTgAIALR9AO2rwT8bgPhDd1anmqDJDZ6lLddI4ehxxxR1S5RIqKe1uapMQfYaQ==} engines: {node: '>=16 || 14 >=14.17'} @@ -14542,7 +14431,7 @@ packages: resolution: {integrity: sha512-soFhflCVWLfRNOPU3iv5Z9VUdT44xFRbzjLsEzSr5AQmgqPMTHdU3PMT1Cf1ssx8fLNJDA1juftYl+PUcv3MqA==} engines: {node: ^10 || ^12 || >= 14} peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 ieee754@1.2.1: resolution: {integrity: sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==} @@ -15377,8 +15266,8 @@ packages: resolution: {integrity: sha512-gvVijfZvn7R+2qyPX8mAuKcFGDf6Nc61GdvGafQsHL0sBIxfKzA+usWn4GFC/bk+QdwPUD4kWFJLhElipq+0VA==} engines: {node: ^12.20.0 || ^14.13.1 || >=16.0.0} - lodash-es@4.17.21: - resolution: {integrity: sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==} + lodash-es@4.18.1: + resolution: {integrity: sha512-J8xewKD/Gk22OZbhpOVSwcs60zhd95ESDwezOFuA3/099925PdHJ7OFHNTGtajL3AlZkykD32HykiMo+BIBI8A==} 
lodash.camelcase@4.3.0: resolution: {integrity: sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA==} @@ -15450,8 +15339,8 @@ packages: lodash.uniq@4.5.0: resolution: {integrity: sha512-xfBaXQd9ryd9dlSDvnvI0lvxfLJlYAZzXomUYzLKtUeOQvOP5piqAWuGtrhWeqaXK9hhoM/iyJc5AV+XfsX3HQ==} - lodash@4.17.23: - resolution: {integrity: sha512-LgVTMpQtIopCi79SJeDiP0TfWi5CNEc/L/aRdTh3yIvmZXTnheWpKjSZhnvMl8iXbC1tFg9gdHHDMLoV7CnG+w==} + lodash@4.18.1: + resolution: {integrity: sha512-dMInicTPVE8d1e5otfwmmjlxkZoUpiVLwyeTdUsi/Caj/gfzzblBcCE5sRHV/AsjuCmxWrte2TNGSYuCeCq+0Q==} log-symbols@4.1.0: resolution: {integrity: sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==} @@ -15984,8 +15873,8 @@ packages: resolution: {integrity: sha512-ethXTt3SGGR+95gudmqJ1eNhRO7eGEGIgYA9vnPatK4/etz2MEVDno5GMCibdMTuBMyElzIlgxMna3K94XDIDQ==} engines: {node: 20 || >=22} - minimatch@3.1.2: - resolution: {integrity: sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==} + minimatch@3.1.5: + resolution: {integrity: sha512-VgjWUsnnT6n+NUk6eZq77zeFdpW2LWDzP6zFGrCbHXiYNul5Dzqk2HHQ5uFH2DNW5Xbp8+jVzaeNt94ssEEl4w==} minimatch@5.1.6: resolution: {integrity: sha512-lKwV/1brpG6mBUFHtb7NUmtABCb2WZZmm2wNiOA5hAb8VdCS4B3dtMWyvcoViccwAW/COERjXLt0zP1zXUN26g==} @@ -16366,8 +16255,8 @@ packages: node-releases@2.0.27: resolution: {integrity: sha512-nmh3lCkYZ3grZvqcCH+fjmQ7X+H0OeZgP40OierEaAptX4XofMh5kwNbWh7lBduUzCcV/8kZ+NDLCwm2iorIlA==} - nodemailer@7.0.11: - resolution: {integrity: sha512-gnXhNRE0FNhD7wPSCGhdNh46Hs6nm+uTyg+Kq0cZukNQiYdnCsoQjodNP9BQVG9XrcK/v6/MgpAPBUFyzh9pvw==} + nodemailer@8.0.6: + resolution: {integrity: sha512-Nm2XeuDwwy2wi5A+8jPWwQwNzcjNjhWdE3pVLoXEusxJqCnAPAgnBGkSmiLknbnWuOF9qraRpYZjfxqtKZ4tPw==} engines: {node: '>=6.0.0'} non.geist@1.0.2: @@ -16809,6 +16698,10 @@ packages: resolution: {integrity: 
sha512-RjhtfwJOxzcFmNOi6ltcbcu4Iu+FL3zEj83dk4kAS+fVpTxXLO1b38RvJgT/0QwvV/L3aY9TAnyv0EOqW4GoMQ==} engines: {node: ^12.20.0 || ^14.13.1 || >=16.0.0} + path-expression-matcher@1.5.0: + resolution: {integrity: sha512-cbrerZV+6rvdQrrD+iGMcZFEiiSrbv9Tfdkvnusy6y0x0GKBXREFg/Y65GhIfm0tnLntThhzCnfKwp1WRjeCyQ==} + engines: {node: '>=14.0.0'} + path-is-absolute@1.0.1: resolution: {integrity: sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==} engines: {node: '>=0.10.0'} @@ -16836,8 +16729,8 @@ packages: resolution: {integrity: sha512-ypGJsmGtdXUOeM5u93TyeIEfEhM6s+ljAhrk5vAvSx8uyY/02OvrZnA0YNGUrPXfpJMgI1ODd3nwz8Npx4O4cg==} engines: {node: 20 || >=22} - path-to-regexp@0.1.10: - resolution: {integrity: sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w==} + path-to-regexp@0.1.13: + resolution: {integrity: sha512-A/AGNMFN3c8bOlvV9RreMdrv7jsmF9XIfDeCd87+I8RNg6s78BhJxMu69NEMHBSJFxKidViTEdruRwEk/WIKqA==} path-to-regexp@8.2.0: resolution: {integrity: sha512-TdrF7fW9Rphjq4RjrW0Kp2AW0Ahwu9sRGTkS6bvDi0SCwZlEZYmcfDbEsTz8RVk0EHIS/Vd1bv3JhG+1xZuAyQ==} @@ -16969,12 +16862,12 @@ packages: picocolors@1.1.1: resolution: {integrity: sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==} - picomatch@2.3.1: - resolution: {integrity: sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==} + picomatch@2.3.2: + resolution: {integrity: sha512-V7+vQEJ06Z+c5tSye8S+nHUfI51xoXIXjHQ99cQtKUkQqqO1kO/KCJUfZXuB47h/YBlDhah2H3hdUGXn8ie0oA==} engines: {node: '>=8.6'} - picomatch@4.0.2: - resolution: {integrity: sha512-M7BAV6Rlcy5u+m6oPhAPFgJTzAioX/6B0DxyvDlo9l8+T3nLKbrczg2WLUyzd45L8RqfUMyGPzekbMvX2Ldkwg==} + picomatch@4.0.4: + resolution: {integrity: sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==} engines: {node: '>=12'} pidtree@0.3.1: @@ -17059,7 +16952,7 @@ packages: resolution: {integrity: 
sha512-zmX3IoSI2aoenxHV6C7plngHWWhUOV3sP1T8y2ifzxzbtnuhk1EdPwm0S1bIUNaJ2eNbWeGLEwzw8huPD67aQw==} engines: {node: ^10 || ^12 || >=14.0} peerDependencies: - postcss: ^8.2.15 + postcss: ^8.5.10 postcss-functions@3.0.0: resolution: {integrity: sha512-N5yWXWKA+uhpLQ9ZhBRl2bIAdM6oVJYpDojuI1nF2SzXBimJcdjFwiAouBVbO5VuOF3qA6BSFWFc3wXbbj72XQ==} @@ -17068,13 +16961,13 @@ packages: resolution: {integrity: sha512-hpr+J05B2FVYUAXHeK1YyI267J/dDDhMU6B6civm8hSY1jYJnBXxzKDKDswzJmtLHryrjhnDjqqp/49t8FALew==} engines: {node: '>=14.0.0'} peerDependencies: - postcss: ^8.0.0 + postcss: ^8.5.10 postcss-import@16.0.1: resolution: {integrity: sha512-i2Pci0310NaLHr/5JUFSw1j/8hf1CzwMY13g6ZDxgOavmRHQi2ba3PmUHoihO+sjaum+KmCNzskNsw7JDrg03g==} engines: {node: '>=18.0.0'} peerDependencies: - postcss: ^8.0.0 + postcss: ^8.5.10 postcss-js@2.0.3: resolution: {integrity: sha512-zS59pAk3deu6dVHyrGqmC3oDXBdNdajk4k1RyxeVXCrcEDBUBHoIhE4QTsmhxgzXxsaqFDAkUZfmMa5f/N/79w==} @@ -17083,13 +16976,13 @@ packages: resolution: {integrity: sha512-dDLF8pEO191hJMtlHFPRa8xsizHaM82MLfNkUHdUtVEV3tgTp5oj+8qbEqYM57SLfc74KSbw//4SeJma2LRVIw==} engines: {node: ^12 || ^14 || >= 16} peerDependencies: - postcss: ^8.4.21 + postcss: ^8.5.10 postcss-load-config@4.0.2: resolution: {integrity: sha512-bSVhyJGL00wMVoPUzAVAnbEoWyqRxkjv64tUl427SKnPrENtq6hJwUojroMz2VB+Q1edmi4IfrAPpami5VVgMQ==} engines: {node: '>= 14'} peerDependencies: - postcss: '>=8.0.9' + postcss: ^8.5.10 ts-node: '>=9.0.0' peerDependenciesMeta: postcss: @@ -17102,9 +16995,9 @@ packages: engines: {node: '>= 18'} peerDependencies: jiti: '>=1.21.0' - postcss: '>=8.0.9' + postcss: ^8.5.10 tsx: ^4.8.1 - yaml: ^2.4.2 + yaml: ^2.8.3 peerDependenciesMeta: jiti: optional: true @@ -17120,7 +17013,7 @@ packages: engines: {node: '>= 18.12.0'} peerDependencies: '@rspack/core': 0.x || 1.x - postcss: ^7.0.0 || ^8.0.1 + postcss: ^8.5.10 webpack: ^5.0.0 peerDependenciesMeta: '@rspack/core': @@ -17132,30 +17025,30 @@ packages: resolution: {integrity: 
sha512-bdHleFnP3kZ4NYDhuGlVK+CMrQ/pqUm8bx/oGL93K6gVwiclvX5x0n76fYMKuIGKzlABOy13zsvqjb0f92TEXw==} engines: {node: ^10 || ^12 || >= 14} peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 postcss-modules-local-by-default@4.0.4: resolution: {integrity: sha512-L4QzMnOdVwRm1Qb8m4x8jsZzKAaPAgrUF1r/hjDR2Xj7R+8Zsf97jAlSQzWtKx5YNiNGN8QxmPFIc/sh+RQl+Q==} engines: {node: ^10 || ^12 || >= 14} peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 postcss-modules-scope@3.1.1: resolution: {integrity: sha512-uZgqzdTleelWjzJY+Fhti6F3C9iF1JR/dODLs/JDefozYcKTBCdD8BIl6nNPbTbcLnGrk56hzwZC2DaGNvYjzA==} engines: {node: ^10 || ^12 || >= 14} peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 postcss-modules-values@4.0.0: resolution: {integrity: sha512-RDxHkAiEGI78gS2ofyvCsu7iycRv7oqw5xMWn9iMoR0N/7mf9D50ecQqUo5BZ9Zh2vH4bCUR/ktCqbB9m8vJjQ==} engines: {node: ^10 || ^12 || >= 14} peerDependencies: - postcss: ^8.1.0 + postcss: ^8.5.10 postcss-modules@6.0.0: resolution: {integrity: sha512-7DGfnlyi/ju82BRzTIjWS5C4Tafmzl3R79YP/PASiocj+aa6yYphHhhKUOEoXQToId5rgyFgJ88+ccOUydjBXQ==} peerDependencies: - postcss: ^8.0.0 + postcss: ^8.5.10 postcss-nested@4.2.3: resolution: {integrity: sha512-rOv0W1HquRCamWy2kFl3QazJMMe1ku6rCFoAAH+9AcxdbpDeBr6k968MLWuLjvjMcGEip01ak09hKOEgpK9hvw==} @@ -17164,7 +17057,7 @@ packages: resolution: {integrity: sha512-HQbt28KulC5AJzG+cZtj9kvKB93CFCdLvog1WFLf1D+xmMvPGlBstkpTEZfK5+AN9hfJocyBFCNiqyS48bpgzQ==} engines: {node: '>=12.0'} peerDependencies: - postcss: ^8.2.14 + postcss: ^8.5.10 postcss-selector-parser@6.0.10: resolution: {integrity: sha512-IQ7TZdoaqbT+LCpShg46jnZVlhWD2w6iQYAcYXfHARZ7X1t/UGhhceQDs5X0cGqKvYlHNOuv7Oa1xmb0oQuA3w==} @@ -17192,24 +17085,8 @@ packages: resolution: {integrity: sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==} engines: {node: '>=6.0.0'} - postcss@8.4.31: - resolution: {integrity: sha512-PS08Iboia9mts/2ygV3eLpY5ghnUcfLV/EXTOW1E2qYxJKGGBUtNjN76FYHnMs36RmARn41bC0AZmn+rR0OVpQ==} - engines: 
{node: ^10 || ^12 || >=14} - - postcss@8.4.35: - resolution: {integrity: sha512-u5U8qYpBCpN13BsiEB0CbR1Hhh4Gc0zLFuedrHJKMctHCHAGrMdG0PRM/KErzAL3CU6/eckEtmHNB3x6e3c0vA==} - engines: {node: ^10 || ^12 || >=14} - - postcss@8.4.44: - resolution: {integrity: sha512-Aweb9unOEpQ3ezu4Q00DPvvM2ZTUitJdNKeP/+uQgr1IBIqu574IaZoURId7BKtWMREwzKa9OgzPzezWGPWFQw==} - engines: {node: ^10 || ^12 || >=14} - - postcss@8.5.4: - resolution: {integrity: sha512-QSa9EBe+uwlGTFmHsPKokv3B/oEMQZxfqW0QqNCyhpa6mB1afzulwn8hihglqAb2pOw+BJgNlmXQ8la2VeHB7w==} - engines: {node: ^10 || ^12 || >=14} - - postcss@8.5.6: - resolution: {integrity: sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==} + postcss@8.5.10: + resolution: {integrity: sha512-pMMHxBOZKFU6HgAZ4eyGnwXF/EvPGGqUr0MnZ5+99485wwW41kW91A4LOGxSHhgugZmSChL5AlElNdwlNgcnLQ==} engines: {node: ^10 || ^12 || >=14} postgres-array@2.0.0: @@ -17459,9 +17336,9 @@ packages: proper-lockfile@4.1.2: resolution: {integrity: sha512-TjNPblN4BwAWMXU8s9AEz4JmQxnD1NNL7bNOY/AKUzyamc379FWASUhc/K1pL2noVb+XmZKLL68cjzLsiOAMaA==} - properties-reader@2.3.0: - resolution: {integrity: sha512-z597WicA7nDZxK12kZqHr2TcvwNU1GCfA5UwfDY/HDp3hXPoPlb5rlEx9bwGTiJnc0OqbBTkU975jDToth8Gxw==} - engines: {node: '>=14'} + properties-reader@3.0.1: + resolution: {integrity: sha512-WPn+h9RGEExOKdu4bsF4HksG/uzd3cFq3MFtq8PsFeExPse5Ha/VOjQNyHhjboBFwGXGev6muJYTSPAOkROq2g==} + engines: {node: '>=18'} property-expr@2.0.6: resolution: {integrity: sha512-SVtmxhRE/CGkn3eZY1T6pC8Nln6Fr/lu1mKSgRud0eC73whjGfoAogbn78LkD8aFL0zz3bAFerKSnOl7NlErBA==} @@ -17475,8 +17352,8 @@ packages: proto-list@1.2.4: resolution: {integrity: sha512-vtK/94akxsTMhe0/cbfpR+syPuszcuwhqVjJq26CuNDgFGj682oRBXOP5MJpv2r7JtE8MsiepGIqvvOTBwn2vA==} - protobufjs@7.3.2: - resolution: {integrity: sha512-RXyHaACeqXeqAKGLDl68rQKbmObRsTIn4TYVUUug1KfS47YWCo5MacGITEryugIgZqORCvJWEk4l449POg5Txg==} + protobufjs@7.5.5: + resolution: {integrity: 
sha512-3wY1AxV+VBNW8Yypfd1yQY9pXnqTAN+KwQxL8iYm3/BjKYMNg4i0owhEe26PWDOMaIrzeeF98Lqd5NGz4omiIg==} engines: {node: '>=12.0.0'} proxy-addr@2.0.7: @@ -17490,6 +17367,10 @@ packages: proxy-from-env@1.1.0: resolution: {integrity: sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==} + proxy-from-env@2.1.0: + resolution: {integrity: sha512-cJ+oHTW1VAEa8cJslgmUZrc+sjRKgAKl3Zyse6+PV38hZe/V6Z14TbCuXcan9F9ghlz4QrFr2c92TNF82UkYHA==} + engines: {node: '>=10'} + pseudomap@1.0.2: resolution: {integrity: sha512-b/YwNhb8lk1Zz2+bXXpS/LK9OisiZZ1SNsSLxN1x2OXVEhW2Ckr/7mWE5vrC1ZTiJlD9g19jWszTmJsB+oEpFQ==} @@ -18134,11 +18015,6 @@ packages: engines: {node: '>=14'} hasBin: true - rimraf@5.0.7: - resolution: {integrity: sha512-nV6YcJo5wbLW77m+8KjH8aB/7/rxQy9SZ0HY5shnwULfS+9nmTtVXAJET5NdZmCzA4fPI/Hm1wo/Po/4mopOdg==} - engines: {node: '>=14.18'} - hasBin: true - rimraf@6.0.1: resolution: {integrity: sha512-9dkvaxAsk/xNXSJzMgFqqMCuFgt2+KsOFek3TMLfo8NCPfWpBmqwyNn5Y+NX56QUYfCtsyhF3ayiboEoUmJk/A==} engines: {node: 20 || >=22} @@ -18155,8 +18031,8 @@ packages: engines: {node: '>=14.18.0', npm: '>=8.0.0'} hasBin: true - rollup@4.36.0: - resolution: {integrity: sha512-zwATAXNQxUcd40zgtQG0ZafcRK4g004WtEl7kbuhTWPvf07PsfohXl39jVUvPF7jvNAIkKPQ2XrsDlWuxBd++Q==} + rollup@4.60.1: + resolution: {integrity: sha512-VmtB2rFU/GroZ4oL8+ZqXgSA38O6GR8KSIvWmEFv63pQ0G6KaBH9s07PO8XTXP4vI+3UJUEypOfjkGfmSBBR0w==} engines: {node: '>=18.0.0', npm: '>=8.0.0'} hasBin: true @@ -18263,8 +18139,8 @@ packages: sembear@0.5.2: resolution: {integrity: sha512-Ij1vCAdFgWABd7zTg50Xw1/p0JgESNxuLlneEAsmBrKishA06ulTTL/SHGmNy2Zud7+rKrHTKNI6moJsn1ppAQ==} - semver@5.7.1: - resolution: {integrity: sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==} + semver@5.7.2: + resolution: {integrity: sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==} hasBin: true semver@6.3.1: @@ -18461,8 +18337,8 @@ packages: 
resolution: {integrity: sha512-sJ/tqHOCe7Z50JCBCXrsY3I2k03iOiUe+tj1OmKeD2lXPiGH/RUCdTZFoqVyN7l1MnpIzPrGtLcijffmeouNlQ==} engines: {node: '>=10.0.0'} - socket.io-parser@4.2.4: - resolution: {integrity: sha512-/GbIKmo8ioc+NIWIhwdecY0ge+qVBSMdgxGygevmdHj24bsfgtCmcUUcQ5ZzcylGFHsN3k4HB4Cgkl96KVnuew==} + socket.io-parser@4.2.6: + resolution: {integrity: sha512-asJqbVBDsBCJx0pTqw3WfesSY0iRX+2xzWEWzrpcH7L6fLzrhyF8WPI8UaeM4YCuDfpwA/cgsdugMsmtz8EJeg==} engines: {node: '>=10.0.0'} socket.io@4.7.3: @@ -18738,8 +18614,8 @@ packages: strnum@1.0.5: resolution: {integrity: sha512-J8bbNyKKXl5qYcR36TIO8W3mVGVHrmmxsd5PAItGkmyzwJvybiw2IVq5nqd0i4LSNSkB/sx9VHllbfFdr9k1JA==} - strnum@2.1.1: - resolution: {integrity: sha512-7ZvoFTiCnGxBtDqJ//Cu6fWtZtc7Y3x+QOirG15wztbdngGSkht27o2pyGWrVy0b4WAy3jbKmnoK6g5VlVNUUw==} + strnum@2.2.3: + resolution: {integrity: sha512-oKx6RUCuHfT3oyVjtnrmn19H1SiCqgJSg+54XqURKp5aCMbrXrhLjRN9TjuwMjiYstZ0MzDrHqkGZ5dFTKd+zg==} strtok3@9.1.1: resolution: {integrity: sha512-FhwotcEqjr241ZbjFzjlIYg6c5/L/s4yBGWSMvJ9UoExiSqL+FnFA/CaeZx17WGaZMS/4SOZp8wH18jSS4R4lw==} @@ -18860,8 +18736,8 @@ packages: resolution: {integrity: sha512-L1dapNV6vu2s/4Sputv8xGsCdAVlb5nRDMFU/E27D44l5U6cw1g0dGd45uLc+OXjNMmF4ntiMdCimzcjFKQI8Q==} engines: {node: ^14.18.0 || >=16.0.0} - systeminformation@5.27.14: - resolution: {integrity: sha512-3DoNDYSZBLxBwaJtQGWNpq0fonga/VZ47HY1+7/G3YoIPaPz93Df6egSzzTKbEMmlzUpy3eQ0nR9REuYIycXGg==} + systeminformation@5.31.5: + resolution: {integrity: sha512-5SyLdip4/3alxD4Kh+63bUQTJmu7YMfYQTC+koZy7X73HgNqZSD2P4wOZQWtUncvPvcEmnfIjCoygN4MRoEejQ==} engines: {node: '>=8.0.0'} os: [darwin, linux, win32, freebsd, openbsd, netbsd, sunos, android] hasBin: true @@ -18938,12 +18814,12 @@ packages: tar-fs@2.1.4: resolution: {integrity: sha512-mDAjwmZdh7LTT6pNleZ05Yt65HC3E+NiQzl672vQG38jIrehtJk/J3mNwIg+vShQPcLF/LV7CMnDW6vjj6sfYQ==} - tar-fs@3.1.0: - resolution: {integrity: sha512-5Mty5y/sOF1YWj1J6GiBodjlDc05CUR8PKXrsnFAiSG0xA+GHeWLovaZPYUDXkH/1iKRf2+M5+OrRgzC7O9b7w==} - 
tar-fs@3.1.1: resolution: {integrity: sha512-LZA0oaPOc2fVo82Txf3gw+AkEd38szODlptMYejQUhndHMLQ9M059uXR+AfS7DNo0NpINvSqDsvyaCrBVkptWg==} + tar-fs@3.1.2: + resolution: {integrity: sha512-QGxxTxxyleAdyM3kpFs14ymbYmNFrfY+pHj7Z8FgtbZ7w2//VAgLMac7sT6nRpIHjppXO2AwwEOg0bPFVRcmXw==} + tar-stream@2.2.0: resolution: {integrity: sha512-ujeqbceABgwMZxEJnk2HDY2DlnUZ+9oEcb1KzTVfYHio0UE6dG71n60d8D2I4qNvleWrrXpmjpt7vZeF1LnMZQ==} engines: {node: '>=6'} @@ -18961,15 +18837,9 @@ packages: engines: {node: '>=10'} deprecated: Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me - tar@7.4.3: - resolution: {integrity: sha512-5S7Va8hKfV7W5U6g3aYxXmlPoZVAwUMy9AOKyF2fVuZa2UD3qZjg578OrLRt8PcNN1PleVaL/5/yYATNL0ICUw==} + tar@7.5.13: + resolution: {integrity: sha512-tOG/7GyXpFevhXVh8jOPJrmtRpOTsYqUIkVdVooZYJS/z8WhfQUX8RJILmeuJNinGAMSu1veBr4asSHFt5/hng==} engines: {node: '>=18'} - deprecated: Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me - - tar@7.5.6: - resolution: {integrity: sha512-xqUeu2JAIJpXyvskvU3uvQW8PAmHrtXp2KDuMJwQqW8Sqq0CaZBAQ+dKS3RBXVhU4wC5NjAdKrmh84241gO9cA==} - engines: {node: '>=18'} - deprecated: Old versions of tar are not supported, and contain widely publicized security vulnerabilities, which have been fixed in the current version. Please update. 
Support for old versions may be purchased (at exorbitant rates) by contacting i@izs.me tdigest@0.1.2: resolution: {integrity: sha512-+G0LLgjjo9BZX2MfdvPfH+MKLCrxlXSYec5DaPYP1fe6Iyhf0/fSmJ0bFiZ1F8BT6cGXl2LpltQptzjXKWEkKA==} @@ -19019,8 +18889,8 @@ packages: resolution: {integrity: sha512-pFYqmTw68LXVjeWJMST4+borgQP2AyMNbg1BpZh9LbyhUeNkeaPF9gzfPGUAnSMV3qPYdWUwDIjjCLiSDOl7vg==} engines: {node: '>=18'} - testcontainers@10.28.0: - resolution: {integrity: sha512-1fKrRRCsgAQNkarjHCMKzBKXSJFmzNTiTbhb5E/j5hflRXChEtHvkefjaHlgkNUjfw92/Dq8LTgwQn6RDBFbMg==} + testcontainers@11.14.0: + resolution: {integrity: sha512-r9pniwv/iwzyHaI7gwAvAm4Y+IvjJg3vBWdjrUCaDMc2AXIr4jKbq7jJO18Mw2ybs73pZy1Aj7p/4RVBGMRWjg==} text-decoder@1.2.0: resolution: {integrity: sha512-n1yg1mOj9DNpk3NeZOx7T6jchTbyJS3i3cucbNN6FcdPriMZx7NsgrGpWWdWZZGxD7ES1XB+3uoqHMgOKaN+fg==} @@ -19129,6 +18999,10 @@ packages: resolution: {integrity: sha512-nZD7m9iCPC5g0pYmcaxogYKggSfLsdxl8of3Q/oIbqCqLLIO9IAF0GWjX1z9NZRHPiXv8Wex4yDCaZsgEw0Y8w==} engines: {node: '>=14.14'} + tmp@0.2.5: + resolution: {integrity: sha512-voyz6MApa1rQGUxT3E+BK7/ROe8itEx7vD8/HEvt4xwXucvQ5G5oeEiHkmHZJuBO21RpOf+YYm9MOivj709jow==} + engines: {node: '>=14.14'} + to-fast-properties@2.0.0: resolution: {integrity: sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog==} engines: {node: '>=4'} @@ -19288,7 +19162,7 @@ packages: peerDependencies: '@microsoft/api-extractor': ^7.36.0 '@swc/core': ^1 - postcss: ^8.4.12 + postcss: ^8.5.10 typescript: 5.5.4 peerDependenciesMeta: '@microsoft/api-extractor': @@ -19494,6 +19368,10 @@ packages: resolution: {integrity: sha512-ZgpWDC5gmNiuY9CnLVXEH8rl50xhRCuLNA97fAUnKi8RRuV4E6KG31pDTsLVUKnohJE0I3XDrTeEydAXRw47xg==} engines: {node: '>=18.17'} + undici@7.24.6: + resolution: {integrity: sha512-Xi4agocCbRzt0yYMZGMA6ApD7gvtUFaxm4ZmeacWI4cZxaF6C+8I8QfofC20NAePiB/IcvZmzkJ7XPa471AEtA==} + engines: {node: '>=20.18.1'} + unicode-emoji-modifier-base@1.0.0: resolution: {integrity: 
sha512-yLSH4py7oFH3oG/9K+XWrz1pSi3dfUrWEnInbxMfArOfc1+33BlGPQtLsOYwvdMy11AwUBetYuaRxSPqgkq+8g==} engines: {node: '>=4'} @@ -19681,6 +19559,10 @@ packages: resolution: {integrity: sha512-0/A9rDy9P7cJ+8w1c9WD9V//9Wj15Ce2MPz8Ri6032usz+NfePxx5AcN3bN+r6ZL6jEo066/yNYB3tn4pQEx+A==} hasBin: true + uuid@14.0.0: + resolution: {integrity: sha512-Qo+uWgilfSmAhXCMav1uYFynlQO7fMFiMVZsQqZRMIXp0O7rR7qjkj+cPvBHLgBqi960QCoo/PH2/6ZtVqKvrg==} + hasBin: true + uuid@3.4.0: resolution: {integrity: sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==} deprecated: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details. @@ -19690,10 +19572,6 @@ packages: resolution: {integrity: sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==} hasBin: true - uuid@9.0.0: - resolution: {integrity: sha512-MXcSTerfPa4uqyzStbRoTgt5XIe3x5+42+q1sDuy3R5MDk66URdLMOZe5aPX/SQd+kuYAh0FdP/pO28IkQyTeg==} - hasBin: true - uuid@9.0.1: resolution: {integrity: sha512-b+1eJOlsR9K8HJpow9Ok3fiWOWSIcIzXodvv0rQjVoOVNpWMpxf1wZNpt4y9h10odCNrqnYp1OBzRktckBe3sA==} hasBin: true @@ -19805,22 +19683,27 @@ packages: terser: optional: true - vite@5.4.21: - resolution: {integrity: sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==} - engines: {node: ^18.0.0 || >=20.0.0} + vite@6.4.2: + resolution: {integrity: sha512-2N/55r4JDJ4gdrCvGgINMy+HH3iRpNIz8K6SFwVsA+JbQScLiC+clmAxBgwiSPgcG9U15QmvqCGWzMbqda5zGQ==} + engines: {node: ^18.0.0 || ^20.0.0 || >=22.0.0} hasBin: true peerDependencies: '@types/node': 20.14.14 + jiti: '>=1.21.0' less: '*' lightningcss: ^1.21.0 sass: '*' sass-embedded: '*' stylus: '*' sugarss: '*' - terser: ^5.4.0 + terser: ^5.16.0 + tsx: ^4.8.1 + yaml: ^2.8.3 peerDependenciesMeta: '@types/node': optional: true + jiti: + optional: true less: optional: true 
lightningcss: @@ -19835,6 +19718,10 @@ packages: optional: true terser: optional: true + tsx: + optional: true + yaml: + optional: true vitest@3.1.4: resolution: {integrity: sha512-Ta56rT7uWxCSJXlBtKgIlApJnT6e6IGmTYxYcmxjJ4ujuZDI59GUQgVDObXXJujOmPDBYXHK1qmaGtneu6TNIQ==} @@ -20139,9 +20026,9 @@ packages: resolution: {integrity: sha512-YgvUTfwqyc7UXVMrB+SImsVYSmTS8X/tSrtdNZMImM+n7+QTriRXyXim0mBrTXNeqzVF0KWGgHPeiyViFFrNDw==} engines: {node: '>=18'} - yaml@2.7.1: - resolution: {integrity: sha512-10ULxpnOCQXxJvBgxsn9ptjq6uviG/htZKk9veJGhlqn3w/DxQ631zFF+nlQXLwmImeS5amR2dl2U8sg6U9jsQ==} - engines: {node: '>= 14'} + yaml@2.8.3: + resolution: {integrity: sha512-AvbaCLOO2Otw/lW5bmh9d/WEdcDFdQp2Z2ZUH3pX9U2ihyUY0nvLv7J6TrWowklRGPYbB/IuIMfYgxaCPg5Bpg==} + engines: {node: '>= 14.6'} hasBin: true yargs-parser@18.1.3: @@ -21097,7 +20984,7 @@ snapshots: '@smithy/util-endpoints': 1.0.5 '@smithy/util-retry': 2.0.7 '@smithy/util-utf8': 2.3.0 - fast-xml-parser: 4.2.5 + fast-xml-parser: 4.5.6 tslib: 2.8.1 transitivePeerDependencies: - aws-crt @@ -21166,7 +21053,7 @@ snapshots: '@smithy/util-body-length-browser': 4.0.0 '@smithy/util-middleware': 4.0.4 '@smithy/util-utf8': 4.0.0 - fast-xml-parser: 4.4.1 + fast-xml-parser: 4.5.6 tslib: 2.8.1 '@aws-sdk/core@3.840.0': @@ -21184,7 +21071,7 @@ snapshots: '@smithy/util-body-length-browser': 4.2.0 '@smithy/util-middleware': 4.2.5 '@smithy/util-utf8': 4.2.0 - fast-xml-parser: 4.4.1 + fast-xml-parser: 4.5.6 tslib: 2.8.1 '@aws-sdk/core@3.931.0': @@ -22397,7 +22284,7 @@ snapshots: '@aws-sdk/xml-builder@3.930.0': dependencies: '@smithy/types': 4.9.0 - fast-xml-parser: 5.2.5 + fast-xml-parser: 5.7.1 tslib: 2.8.1 '@aws/lambda-invoke-store@0.1.1': {} @@ -22865,12 +22752,12 @@ snapshots: dependencies: '@chevrotain/gast': 11.0.3 '@chevrotain/types': 11.0.3 - lodash-es: 4.17.21 + lodash-es: 4.18.1 '@chevrotain/gast@11.0.3': dependencies: '@chevrotain/types': 11.0.3 - lodash-es: 4.17.21 + lodash-es: 4.18.1 '@chevrotain/regexp-to-ast@11.0.3': {} @@ -23138,9 
+23025,6 @@ snapshots: '@esbuild/aix-ppc64@0.19.11': optional: true - '@esbuild/aix-ppc64@0.21.5': - optional: true - '@esbuild/aix-ppc64@0.23.0': optional: true @@ -23159,9 +23043,6 @@ snapshots: '@esbuild/android-arm64@0.19.11': optional: true - '@esbuild/android-arm64@0.21.5': - optional: true - '@esbuild/android-arm64@0.23.0': optional: true @@ -23183,9 +23064,6 @@ snapshots: '@esbuild/android-arm@0.19.11': optional: true - '@esbuild/android-arm@0.21.5': - optional: true - '@esbuild/android-arm@0.23.0': optional: true @@ -23204,9 +23082,6 @@ snapshots: '@esbuild/android-x64@0.19.11': optional: true - '@esbuild/android-x64@0.21.5': - optional: true - '@esbuild/android-x64@0.23.0': optional: true @@ -23225,9 +23100,6 @@ snapshots: '@esbuild/darwin-arm64@0.19.11': optional: true - '@esbuild/darwin-arm64@0.21.5': - optional: true - '@esbuild/darwin-arm64@0.23.0': optional: true @@ -23246,9 +23118,6 @@ snapshots: '@esbuild/darwin-x64@0.19.11': optional: true - '@esbuild/darwin-x64@0.21.5': - optional: true - '@esbuild/darwin-x64@0.23.0': optional: true @@ -23267,9 +23136,6 @@ snapshots: '@esbuild/freebsd-arm64@0.19.11': optional: true - '@esbuild/freebsd-arm64@0.21.5': - optional: true - '@esbuild/freebsd-arm64@0.23.0': optional: true @@ -23288,9 +23154,6 @@ snapshots: '@esbuild/freebsd-x64@0.19.11': optional: true - '@esbuild/freebsd-x64@0.21.5': - optional: true - '@esbuild/freebsd-x64@0.23.0': optional: true @@ -23309,9 +23172,6 @@ snapshots: '@esbuild/linux-arm64@0.19.11': optional: true - '@esbuild/linux-arm64@0.21.5': - optional: true - '@esbuild/linux-arm64@0.23.0': optional: true @@ -23330,9 +23190,6 @@ snapshots: '@esbuild/linux-arm@0.19.11': optional: true - '@esbuild/linux-arm@0.21.5': - optional: true - '@esbuild/linux-arm@0.23.0': optional: true @@ -23351,9 +23208,6 @@ snapshots: '@esbuild/linux-ia32@0.19.11': optional: true - '@esbuild/linux-ia32@0.21.5': - optional: true - '@esbuild/linux-ia32@0.23.0': optional: true @@ -23375,9 +23229,6 @@ snapshots: 
'@esbuild/linux-loong64@0.19.11': optional: true - '@esbuild/linux-loong64@0.21.5': - optional: true - '@esbuild/linux-loong64@0.23.0': optional: true @@ -23396,9 +23247,6 @@ snapshots: '@esbuild/linux-mips64el@0.19.11': optional: true - '@esbuild/linux-mips64el@0.21.5': - optional: true - '@esbuild/linux-mips64el@0.23.0': optional: true @@ -23417,9 +23265,6 @@ snapshots: '@esbuild/linux-ppc64@0.19.11': optional: true - '@esbuild/linux-ppc64@0.21.5': - optional: true - '@esbuild/linux-ppc64@0.23.0': optional: true @@ -23438,9 +23283,6 @@ snapshots: '@esbuild/linux-riscv64@0.19.11': optional: true - '@esbuild/linux-riscv64@0.21.5': - optional: true - '@esbuild/linux-riscv64@0.23.0': optional: true @@ -23459,9 +23301,6 @@ snapshots: '@esbuild/linux-s390x@0.19.11': optional: true - '@esbuild/linux-s390x@0.21.5': - optional: true - '@esbuild/linux-s390x@0.23.0': optional: true @@ -23480,9 +23319,6 @@ snapshots: '@esbuild/linux-x64@0.19.11': optional: true - '@esbuild/linux-x64@0.21.5': - optional: true - '@esbuild/linux-x64@0.23.0': optional: true @@ -23507,9 +23343,6 @@ snapshots: '@esbuild/netbsd-x64@0.19.11': optional: true - '@esbuild/netbsd-x64@0.21.5': - optional: true - '@esbuild/netbsd-x64@0.23.0': optional: true @@ -23537,9 +23370,6 @@ snapshots: '@esbuild/openbsd-x64@0.19.11': optional: true - '@esbuild/openbsd-x64@0.21.5': - optional: true - '@esbuild/openbsd-x64@0.23.0': optional: true @@ -23558,9 +23388,6 @@ snapshots: '@esbuild/sunos-x64@0.19.11': optional: true - '@esbuild/sunos-x64@0.21.5': - optional: true - '@esbuild/sunos-x64@0.23.0': optional: true @@ -23579,9 +23406,6 @@ snapshots: '@esbuild/win32-arm64@0.19.11': optional: true - '@esbuild/win32-arm64@0.21.5': - optional: true - '@esbuild/win32-arm64@0.23.0': optional: true @@ -23600,9 +23424,6 @@ snapshots: '@esbuild/win32-ia32@0.19.11': optional: true - '@esbuild/win32-ia32@0.21.5': - optional: true - '@esbuild/win32-ia32@0.23.0': optional: true @@ -23621,9 +23442,6 @@ snapshots: 
'@esbuild/win32-x64@0.19.11': optional: true - '@esbuild/win32-x64@0.21.5': - optional: true - '@esbuild/win32-x64@0.23.0': optional: true @@ -23649,7 +23467,7 @@ snapshots: ignore: 5.2.4 import-fresh: 3.3.0 js-yaml: 4.1.1 - minimatch: 3.1.2 + minimatch: 3.1.5 strip-json-comments: 3.1.1 transitivePeerDependencies: - supports-color @@ -23673,8 +23491,8 @@ snapshots: '@fastify/ajv-compiler@4.0.2': dependencies: - ajv: 8.17.1 - ajv-formats: 3.0.1(ajv@8.17.1) + ajv: 8.18.0 + ajv-formats: 3.0.1(ajv@8.18.0) fast-uri: 3.0.6 '@fastify/busboy@2.1.1': {} @@ -23814,7 +23632,7 @@ snapshots: dependencies: lodash.camelcase: 4.3.0 long: 5.2.3 - protobufjs: 7.3.2 + protobufjs: 7.5.5 yargs: 17.7.2 '@hapi/boom@10.0.1': @@ -23878,7 +23696,7 @@ snapshots: dependencies: '@humanwhocodes/object-schema': 1.2.1 debug: 4.4.3(supports-color@10.0.0) - minimatch: 3.1.2 + minimatch: 3.1.5 transitivePeerDependencies: - supports-color @@ -24228,7 +24046,7 @@ snapshots: openid-client: 6.3.3 rfc4648: 1.5.3 stream-buffers: 3.0.2 - tar: 7.4.3 + tar: 7.5.13 tmp-promise: 3.0.3 tslib: 2.6.2 ws: 8.18.0(bufferutil@4.0.9) @@ -24237,6 +24055,12 @@ snapshots: - encoding - utf-8-validate + '@kwsites/file-exists@1.1.1': + dependencies: + debug: 4.4.3(supports-color@10.0.0) + transitivePeerDependencies: + - supports-color + '@lezer/common@1.0.2': {} '@lezer/common@1.3.0': {} @@ -24329,8 +24153,8 @@ snapshots: '@modelcontextprotocol/sdk@1.25.2(hono@4.11.8)(supports-color@10.0.0)(zod@3.25.76)': dependencies: '@hono/node-server': 1.19.9(hono@4.11.8) - ajv: 8.17.1 - ajv-formats: 3.0.1(ajv@8.17.1) + ajv: 8.18.0 + ajv-formats: 3.0.1(ajv@8.18.0) content-type: 1.0.5 cors: 2.8.5 cross-spawn: 7.0.6 @@ -24351,8 +24175,8 @@ snapshots: '@modelcontextprotocol/sdk@1.26.0(zod@3.25.76)': dependencies: '@hono/node-server': 1.19.9(hono@4.11.8) - ajv: 8.17.1 - ajv-formats: 3.0.1(ajv@8.17.1) + ajv: 8.18.0 + ajv-formats: 3.0.1(ajv@8.18.0) content-type: 1.0.5 cors: 2.8.5 cross-spawn: 7.0.6 @@ -24523,6 +24347,8 @@ snapshots: 
dependencies: eslint-scope: 5.1.1 + '@nodable/entities@2.1.0': {} + '@nodelib/fs.scandir@2.1.5': dependencies: '@nodelib/fs.stat': 2.0.5 @@ -24902,7 +24728,7 @@ snapshots: '@opentelemetry/host-metrics@0.37.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - systeminformation: 5.27.14 + systeminformation: 5.31.5 '@opentelemetry/instrumentation-amqplib@0.46.1(@opentelemetry/api@1.9.0)': dependencies: @@ -25204,7 +25030,7 @@ snapshots: '@opentelemetry/sdk-logs': 0.203.0(@opentelemetry/api@1.9.0) '@opentelemetry/sdk-metrics': 2.0.1(@opentelemetry/api@1.9.0) '@opentelemetry/sdk-trace-base': 2.0.1(@opentelemetry/api@1.9.0) - protobufjs: 7.3.2 + protobufjs: 7.5.5 '@opentelemetry/otlp-transformer@0.57.0(@opentelemetry/api@1.9.0)': dependencies: @@ -25215,7 +25041,7 @@ snapshots: '@opentelemetry/sdk-logs': 0.57.0(@opentelemetry/api@1.9.0) '@opentelemetry/sdk-metrics': 1.30.0(@opentelemetry/api@1.9.0) '@opentelemetry/sdk-trace-base': 1.30.0(@opentelemetry/api@1.9.0) - protobufjs: 7.3.2 + protobufjs: 7.5.5 '@opentelemetry/propagation-utils@0.31.3(@opentelemetry/api@1.9.0)': dependencies: @@ -25631,7 +25457,7 @@ snapshots: progress: 2.0.3 proxy-agent: 6.5.0 semver: 7.7.3 - tar-fs: 3.1.0 + tar-fs: 3.1.1 yargs: 17.7.2 transitivePeerDependencies: - bare-abort-controller @@ -29022,7 +28848,7 @@ snapshots: transitivePeerDependencies: - encoding - '@remix-run/dev@2.17.4(@remix-run/react@2.17.4(react-dom@18.2.0(react@18.2.0))(react@18.2.0)(typescript@5.5.4))(@remix-run/serve@2.17.4(typescript@5.5.4))(@types/node@20.14.14)(bufferutil@4.0.9)(lightningcss@1.29.2)(terser@5.44.1)(typescript@5.5.4)(vite@5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1))': + 
'@remix-run/dev@2.17.4(@remix-run/react@2.17.4(react-dom@18.2.0(react@18.2.0))(react@18.2.0)(typescript@5.5.4))(@remix-run/serve@2.17.4(typescript@5.5.4))(@types/node@20.14.14)(bufferutil@4.0.9)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(typescript@5.5.4)(vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3))(yaml@2.8.3)': dependencies: '@babel/core': 7.22.17 '@babel/generator': 7.24.7 @@ -29057,18 +28883,18 @@ snapshots: gunzip-maybe: 1.4.2 jsesc: 3.0.2 json5: 2.2.3 - lodash: 4.17.23 + lodash: 4.18.1 lodash.debounce: 4.0.8 minimatch: 9.0.5 ora: 5.4.1 pathe: 1.1.2 picocolors: 1.1.1 - picomatch: 2.3.1 + picomatch: 2.3.2 pidtree: 0.6.0 - postcss: 8.5.6 - postcss-discard-duplicates: 5.1.0(postcss@8.5.6) - postcss-load-config: 4.0.2(postcss@8.5.6) - postcss-modules: 6.0.0(postcss@8.5.6) + postcss: 8.5.10 + postcss-discard-duplicates: 5.1.0(postcss@8.5.10) + postcss-load-config: 4.0.2(postcss@8.5.10) + postcss-modules: 6.0.0(postcss@8.5.10) prettier: 2.8.8 pretty-ms: 7.0.1 react-refresh: 0.14.0 @@ -29079,16 +28905,17 @@ snapshots: tar-fs: 2.1.4 tsconfig-paths: 4.2.0 valibot: 1.3.1(typescript@5.5.4) - vite-node: 3.1.4(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vite-node: 3.1.4(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) ws: 7.5.10(bufferutil@4.0.9) optionalDependencies: '@remix-run/serve': 2.17.4(typescript@5.5.4) typescript: 5.5.4 - vite: 5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vite: 6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) transitivePeerDependencies: - '@types/node' - bluebird - bufferutil + - jiti - less - lightningcss - sass @@ -29098,7 +28925,9 @@ snapshots: - supports-color - terser - ts-node + - tsx - utf-8-validate + - yaml '@remix-run/eslint-config@2.17.4(eslint@8.31.0)(react@18.2.0)(typescript@5.5.4)': dependencies: @@ -29231,67 +29060,85 @@ 
snapshots: dependencies: web-streams-polyfill: 3.2.1 - '@rollup/rollup-android-arm-eabi@4.36.0': + '@rollup/rollup-android-arm-eabi@4.60.1': optional: true - '@rollup/rollup-android-arm64@4.36.0': + '@rollup/rollup-android-arm64@4.60.1': optional: true - '@rollup/rollup-darwin-arm64@4.36.0': + '@rollup/rollup-darwin-arm64@4.53.2': optional: true - '@rollup/rollup-darwin-arm64@4.53.2': + '@rollup/rollup-darwin-arm64@4.60.1': optional: true - '@rollup/rollup-darwin-x64@4.36.0': + '@rollup/rollup-darwin-x64@4.60.1': optional: true - '@rollup/rollup-freebsd-arm64@4.36.0': + '@rollup/rollup-freebsd-arm64@4.60.1': optional: true - '@rollup/rollup-freebsd-x64@4.36.0': + '@rollup/rollup-freebsd-x64@4.60.1': optional: true - '@rollup/rollup-linux-arm-gnueabihf@4.36.0': + '@rollup/rollup-linux-arm-gnueabihf@4.60.1': optional: true - '@rollup/rollup-linux-arm-musleabihf@4.36.0': + '@rollup/rollup-linux-arm-musleabihf@4.60.1': optional: true - '@rollup/rollup-linux-arm64-gnu@4.36.0': + '@rollup/rollup-linux-arm64-gnu@4.60.1': optional: true - '@rollup/rollup-linux-arm64-musl@4.36.0': + '@rollup/rollup-linux-arm64-musl@4.60.1': optional: true - '@rollup/rollup-linux-loongarch64-gnu@4.36.0': + '@rollup/rollup-linux-loong64-gnu@4.60.1': optional: true - '@rollup/rollup-linux-powerpc64le-gnu@4.36.0': + '@rollup/rollup-linux-loong64-musl@4.60.1': optional: true - '@rollup/rollup-linux-riscv64-gnu@4.36.0': + '@rollup/rollup-linux-ppc64-gnu@4.60.1': optional: true - '@rollup/rollup-linux-s390x-gnu@4.36.0': + '@rollup/rollup-linux-ppc64-musl@4.60.1': optional: true - '@rollup/rollup-linux-x64-gnu@4.36.0': + '@rollup/rollup-linux-riscv64-gnu@4.60.1': + optional: true + + '@rollup/rollup-linux-riscv64-musl@4.60.1': + optional: true + + '@rollup/rollup-linux-s390x-gnu@4.60.1': optional: true '@rollup/rollup-linux-x64-gnu@4.53.2': optional: true - '@rollup/rollup-linux-x64-musl@4.36.0': + '@rollup/rollup-linux-x64-gnu@4.60.1': + optional: true + + '@rollup/rollup-linux-x64-musl@4.60.1': + 
optional: true + + '@rollup/rollup-openbsd-x64@4.60.1': + optional: true + + '@rollup/rollup-openharmony-arm64@4.60.1': optional: true - '@rollup/rollup-win32-arm64-msvc@4.36.0': + '@rollup/rollup-win32-arm64-msvc@4.60.1': optional: true - '@rollup/rollup-win32-ia32-msvc@4.36.0': + '@rollup/rollup-win32-ia32-msvc@4.60.1': optional: true - '@rollup/rollup-win32-x64-msvc@4.36.0': + '@rollup/rollup-win32-x64-gnu@4.60.1': + optional: true + + '@rollup/rollup-win32-x64-msvc@4.60.1': optional: true '@rushstack/eslint-patch@1.2.0': {} @@ -29532,7 +29379,7 @@ snapshots: '@slack/types': 2.14.0 '@types/node': 20.14.14 '@types/retry': 0.12.0 - axios: 1.12.2 + axios: 1.15.1 eventemitter3: 5.0.1 form-data: 4.0.4 is-electron: 2.2.2 @@ -30663,7 +30510,7 @@ snapshots: '@tailwindcss/node': 4.0.17 '@tailwindcss/oxide': 4.0.17 lightningcss: 1.29.2 - postcss: 8.5.6 + postcss: 8.5.10 tailwindcss: 4.0.17 '@tailwindcss/typography@0.5.9(tailwindcss@3.4.1)': @@ -30715,17 +30562,17 @@ snapshots: graphql: 16.6.0 zod: 3.25.76 - '@testcontainers/postgresql@10.28.0': + '@testcontainers/postgresql@11.14.0': dependencies: - testcontainers: 10.28.0 + testcontainers: 11.14.0 transitivePeerDependencies: - bare-abort-controller - bare-buffer - supports-color - '@testcontainers/redis@10.28.0': + '@testcontainers/redis@11.14.0': dependencies: - testcontainers: 10.28.0 + testcontainers: 11.14.0 transitivePeerDependencies: - bare-abort-controller - bare-buffer @@ -30749,7 +30596,7 @@ snapshots: chalk: 3.0.0 css.escape: 1.5.1 dom-accessibility-api: 0.6.3 - lodash: 4.17.23 + lodash: 4.18.1 redent: 3.0.0 '@tokenizer/token@0.3.0': {} @@ -30962,9 +30809,11 @@ snapshots: '@types/node': 20.14.14 '@types/ssh2': 1.15.1 - '@types/dompurify@3.2.0': + '@types/dockerode@4.0.1': dependencies: - dompurify: 3.2.6 + '@types/docker-modem': 3.0.6 + '@types/node': 20.14.14 + '@types/ssh2': 1.15.1 '@types/eslint-scope@3.7.4': dependencies: @@ -30992,8 +30841,6 @@ snapshots: '@types/estree@1.0.0': {} - '@types/estree@1.0.6': 
{} - '@types/estree@1.0.8': {} '@types/eventsource@1.1.15': {} @@ -31121,12 +30968,9 @@ snapshots: dependencies: undici-types: 5.26.5 - '@types/nodemailer@7.0.4': + '@types/nodemailer@8.0.0': dependencies: - '@aws-sdk/client-sesv2': 3.940.0 '@types/node': 20.14.14 - transitivePeerDependencies: - - aws-crt '@types/normalize-package-data@2.4.1': {} @@ -31238,7 +31082,7 @@ snapshots: '@types/rimraf@4.0.5': dependencies: - rimraf: 5.0.7 + rimraf: 6.0.1 '@types/scheduler@0.16.2': {} @@ -31316,8 +31160,6 @@ snapshots: '@types/uuid@10.0.0': {} - '@types/uuid@9.0.0': {} - '@types/webpack@5.28.5(@swc/core@1.3.101(@swc/helpers@0.5.15))(esbuild@0.19.11)': dependencies: '@types/node': 20.14.14 @@ -31528,7 +31370,7 @@ snapshots: eval: 0.1.6 find-up: 5.0.0 javascript-stringify: 2.1.0 - lodash: 4.17.23 + lodash: 4.18.1 mlly: 1.7.4 outdent: 0.8.0 vite: 4.4.9(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) @@ -31577,7 +31419,7 @@ snapshots: - '@cfworker/json-schema' - supports-color - '@vitest/coverage-v8@3.1.4(vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1))': + '@vitest/coverage-v8@3.1.4(vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3))': dependencies: '@ampproject/remapping': 2.3.0 '@bcoe/v8-coverage': 1.0.2 @@ -31591,7 +31433,7 @@ snapshots: std-env: 3.9.0 test-exclude: 7.0.1 tinyrainbow: 2.0.0 - vitest: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vitest: 3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) transitivePeerDependencies: - supports-color @@ -31602,13 +31444,13 @@ snapshots: chai: 5.2.0 tinyrainbow: 2.0.0 - '@vitest/mocker@3.1.4(vite@5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1))': + 
'@vitest/mocker@3.1.4(vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3))': dependencies: '@vitest/spy': 3.1.4 estree-walker: 3.0.3 magic-string: 0.30.21 optionalDependencies: - vite: 5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vite: 6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) '@vitest/pretty-format@2.1.9': dependencies: @@ -32000,21 +31842,21 @@ snapshots: '@opentelemetry/api': 1.9.0 zod: 3.25.76 - ajv-formats@2.1.1(ajv@8.17.1): + ajv-formats@2.1.1(ajv@8.18.0): optionalDependencies: - ajv: 8.17.1 + ajv: 8.18.0 - ajv-formats@3.0.1(ajv@8.17.1): + ajv-formats@3.0.1(ajv@8.18.0): optionalDependencies: - ajv: 8.17.1 + ajv: 8.18.0 ajv-keywords@3.5.2(ajv@6.12.6): dependencies: ajv: 6.12.6 - ajv-keywords@5.1.0(ajv@8.17.1): + ajv-keywords@5.1.0(ajv@8.18.0): dependencies: - ajv: 8.17.1 + ajv: 8.18.0 fast-deep-equal: 3.1.3 ajv@6.12.6: @@ -32024,7 +31866,7 @@ snapshots: json-schema-traverse: 0.4.1 uri-js: 4.4.1 - ajv@8.17.1: + ajv@8.18.0: dependencies: fast-deep-equal: 3.1.3 fast-uri: 3.0.6 @@ -32062,7 +31904,7 @@ snapshots: anymatch@3.1.3: dependencies: normalize-path: 3.0.0 - picomatch: 2.3.1 + picomatch: 2.3.2 archiver-utils@5.0.2: dependencies: @@ -32070,7 +31912,7 @@ snapshots: graceful-fs: 4.2.11 is-stream: 2.0.1 lazystream: 1.0.1 - lodash: 4.17.23 + lodash: 4.18.1 normalize-path: 3.0.0 readable-stream: 4.7.0 @@ -32214,7 +32056,7 @@ snapshots: autoevals@0.0.130(encoding@0.1.13)(ws@8.12.0(bufferutil@4.0.9)): dependencies: - ajv: 8.17.1 + ajv: 8.18.0 compute-cosine-similarity: 1.1.0 js-levenshtein: 1.1.6 js-yaml: 4.1.1 @@ -32227,24 +32069,24 @@ snapshots: - encoding - ws - autoprefixer@10.4.13(postcss@8.5.6): + autoprefixer@10.4.13(postcss@8.5.10): dependencies: browserslist: 4.21.4 caniuse-lite: 1.0.30001577 fraction.js: 4.2.0 normalize-range: 0.1.2 picocolors: 1.0.0 - postcss: 8.5.6 + postcss: 8.5.10 postcss-value-parser: 4.2.0 - 
autoprefixer@10.4.14(postcss@8.4.35): + autoprefixer@10.4.14(postcss@8.5.10): dependencies: browserslist: 4.28.0 caniuse-lite: 1.0.30001754 fraction.js: 4.3.7 normalize-range: 0.1.2 picocolors: 1.1.1 - postcss: 8.4.35 + postcss: 8.5.10 postcss-value-parser: 4.2.0 autoprefixer@9.8.8: @@ -32274,11 +32116,11 @@ snapshots: axe-core@4.6.2: {} - axios@1.12.2: + axios@1.15.1: dependencies: - follow-redirects: 1.15.9 + follow-redirects: 1.16.0 form-data: 4.0.4 - proxy-from-env: 1.1.0 + proxy-from-env: 2.1.0 transitivePeerDependencies: - debug @@ -32532,7 +32374,7 @@ snapshots: dependencies: chokidar: 3.6.0 confbox: 0.1.8 - defu: 6.1.4 + defu: 6.1.7 dotenv: 16.4.7 giget: 1.2.3 jiti: 1.21.6 @@ -32549,7 +32391,7 @@ snapshots: dependencies: chokidar: 4.0.3 confbox: 0.2.2 - defu: 6.1.4 + defu: 6.1.7 dotenv: 16.6.1 exsolve: 1.0.7 giget: 2.0.0 @@ -32687,7 +32529,7 @@ snapshots: chevrotain-allstar@0.3.1(chevrotain@11.0.3): dependencies: chevrotain: 11.0.3 - lodash-es: 4.17.21 + lodash-es: 4.18.1 chevrotain@11.0.3: dependencies: @@ -32696,7 +32538,7 @@ snapshots: '@chevrotain/regexp-to-ast': 11.0.3 '@chevrotain/types': 11.0.3 '@chevrotain/utils': 11.0.3 - lodash-es: 4.17.21 + lodash-es: 4.18.1 chokidar@3.5.3: dependencies: @@ -33093,7 +32935,7 @@ snapshots: dependencies: nice-try: 1.0.5 path-key: 2.0.1 - semver: 5.7.1 + semver: 5.7.2 shebang-command: 1.2.0 which: 1.3.1 @@ -33119,12 +32961,12 @@ snapshots: css-loader@6.10.0(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)): dependencies: - icss-utils: 5.1.0(postcss@8.4.35) - postcss: 8.4.35 - postcss-modules-extract-imports: 3.0.0(postcss@8.4.35) - postcss-modules-local-by-default: 4.0.4(postcss@8.4.35) - postcss-modules-scope: 3.1.1(postcss@8.4.35) - postcss-modules-values: 4.0.0(postcss@8.4.35) + icss-utils: 5.1.0(postcss@8.5.10) + postcss: 8.5.10 + postcss-modules-extract-imports: 3.0.0(postcss@8.5.10) + postcss-modules-local-by-default: 4.0.4(postcss@8.5.10) + postcss-modules-scope: 3.1.1(postcss@8.5.10) + 
postcss-modules-values: 4.0.0(postcss@8.5.10) postcss-value-parser: 4.2.0 semver: 7.6.3 optionalDependencies: @@ -33348,7 +33190,7 @@ snapshots: dagre-d3-es@7.0.11: dependencies: d3: 7.9.0 - lodash-es: 4.17.21 + lodash-es: 4.18.1 damerau-levenshtein@1.0.8: {} @@ -33499,7 +33341,7 @@ snapshots: defined@1.0.1: {} - defu@6.1.4: {} + defu@6.1.7: {} degenerator@5.0.1: dependencies: @@ -33564,9 +33406,9 @@ snapshots: dlv@1.1.3: {} - docker-compose@0.24.8: + docker-compose@1.4.2: dependencies: - yaml: 2.7.1 + yaml: 2.8.3 docker-modem@5.0.6: dependencies: @@ -33577,13 +33419,34 @@ snapshots: transitivePeerDependencies: - supports-color + docker-modem@5.0.7: + dependencies: + debug: 4.4.3(supports-color@10.0.0) + readable-stream: 3.6.2 + split-ca: 1.0.1 + ssh2: 1.16.0 + transitivePeerDependencies: + - supports-color + + dockerode@4.0.10: + dependencies: + '@balena/dockerignore': 1.0.2 + '@grpc/grpc-js': 1.12.6 + '@grpc/proto-loader': 0.7.13 + docker-modem: 5.0.7 + protobufjs: 7.5.5 + tar-fs: 2.1.4 + uuid: 10.0.0 + transitivePeerDependencies: + - supports-color + dockerode@4.0.6: dependencies: '@balena/dockerignore': 1.0.2 '@grpc/grpc-js': 1.12.6 '@grpc/proto-loader': 0.7.13 docker-modem: 5.0.6 - protobufjs: 7.3.2 + protobufjs: 7.5.5 tar-fs: 2.1.3 uuid: 10.0.0 transitivePeerDependencies: @@ -33618,7 +33481,7 @@ snapshots: dependencies: domelementtype: 2.3.0 - dompurify@3.2.6: + dompurify@3.4.1: optionalDependencies: '@types/trusted-types': 2.0.7 @@ -33699,10 +33562,6 @@ snapshots: ee-first@1.1.1: {} - effect@3.11.7: - dependencies: - fast-check: 3.22.0 - effect@3.16.12: dependencies: '@standard-schema/spec': 1.1.0 @@ -33718,6 +33577,11 @@ snapshots: '@standard-schema/spec': 1.1.0 fast-check: 3.23.2 + effect@3.21.2: + dependencies: + '@standard-schema/spec': 1.1.0 + fast-check: 3.23.2 + effect@3.7.2: {} electron-to-chromium@1.4.433: {} @@ -34088,32 +33952,6 @@ snapshots: '@esbuild/win32-ia32': 0.19.11 '@esbuild/win32-x64': 0.19.11 - esbuild@0.21.5: - optionalDependencies: - 
'@esbuild/aix-ppc64': 0.21.5 - '@esbuild/android-arm': 0.21.5 - '@esbuild/android-arm64': 0.21.5 - '@esbuild/android-x64': 0.21.5 - '@esbuild/darwin-arm64': 0.21.5 - '@esbuild/darwin-x64': 0.21.5 - '@esbuild/freebsd-arm64': 0.21.5 - '@esbuild/freebsd-x64': 0.21.5 - '@esbuild/linux-arm': 0.21.5 - '@esbuild/linux-arm64': 0.21.5 - '@esbuild/linux-ia32': 0.21.5 - '@esbuild/linux-loong64': 0.21.5 - '@esbuild/linux-mips64el': 0.21.5 - '@esbuild/linux-ppc64': 0.21.5 - '@esbuild/linux-riscv64': 0.21.5 - '@esbuild/linux-s390x': 0.21.5 - '@esbuild/linux-x64': 0.21.5 - '@esbuild/netbsd-x64': 0.21.5 - '@esbuild/openbsd-x64': 0.21.5 - '@esbuild/sunos-x64': 0.21.5 - '@esbuild/win32-arm64': 0.21.5 - '@esbuild/win32-ia32': 0.21.5 - '@esbuild/win32-x64': 0.21.5 - esbuild@0.23.0: optionalDependencies: '@esbuild/aix-ppc64': 0.23.0 @@ -34304,7 +34142,7 @@ snapshots: hasown: 2.0.2 is-core-module: 2.14.0 is-glob: 4.0.3 - minimatch: 3.1.2 + minimatch: 3.1.5 object.fromentries: 2.0.8 object.groupby: 1.0.3 object.values: 1.2.0 @@ -34349,7 +34187,7 @@ snapshots: has: 1.0.3 jsx-ast-utils: 3.3.3 language-tags: 1.0.5 - minimatch: 3.1.2 + minimatch: 3.1.5 object.entries: 1.1.6 object.fromentries: 2.0.8 semver: 6.3.1 @@ -34360,7 +34198,7 @@ snapshots: eslint-plugin-es: 3.0.1(eslint@8.31.0) eslint-utils: 2.1.0 ignore: 5.2.4 - minimatch: 3.1.2 + minimatch: 3.1.5 resolve: 1.22.8 semver: 6.3.1 @@ -34377,7 +34215,7 @@ snapshots: eslint: 8.31.0 estraverse: 5.3.0 jsx-ast-utils: 3.3.3 - minimatch: 3.1.2 + minimatch: 3.1.5 object.entries: 1.1.6 object.fromentries: 2.0.8 object.hasown: 1.1.2 @@ -34466,7 +34304,7 @@ snapshots: json-stable-stringify-without-jsonify: 1.0.1 levn: 0.4.1 lodash.merge: 4.6.2 - minimatch: 3.1.2 + minimatch: 3.1.5 natural-compare: 1.4.0 optionator: 0.9.1 regexpp: 3.2.0 @@ -34668,7 +34506,7 @@ snapshots: methods: 1.1.2 on-finished: 2.4.1 parseurl: 1.3.3 - path-to-regexp: 0.1.10 + path-to-regexp: 0.1.13 proxy-addr: 2.0.7 qs: 6.14.1 range-parser: 1.2.1 @@ -34777,10 +34615,6 @@ 
snapshots: extsprintf@1.3.0: {} - fast-check@3.22.0: - dependencies: - pure-rand: 6.1.0 - fast-check@3.23.2: dependencies: pure-rand: 6.1.0 @@ -34808,8 +34642,8 @@ snapshots: fast-json-stringify@6.0.1: dependencies: '@fastify/merge-json-schemas': 0.2.1 - ajv: 8.17.1 - ajv-formats: 3.0.1(ajv@8.17.1) + ajv: 8.18.0 + ajv-formats: 3.0.1(ajv@8.18.0) fast-uri: 3.0.6 json-schema-ref-resolver: 2.0.1 rfdc: 1.4.1 @@ -34834,17 +34668,20 @@ snapshots: dependencies: punycode: 1.4.1 - fast-xml-parser@4.2.5: + fast-xml-builder@1.1.5: dependencies: - strnum: 1.0.5 + path-expression-matcher: 1.5.0 - fast-xml-parser@4.4.1: + fast-xml-parser@4.5.6: dependencies: strnum: 1.0.5 - fast-xml-parser@5.2.5: + fast-xml-parser@5.7.1: dependencies: - strnum: 2.1.1 + '@nodable/entities': 2.1.0 + fast-xml-builder: 1.1.5 + path-expression-matcher: 1.5.0 + strnum: 2.2.3 fastest-stable-stringify@2.0.2: {} @@ -34884,17 +34721,17 @@ snapshots: dependencies: pend: 1.2.0 - fdir@6.2.0(picomatch@4.0.2): + fdir@6.2.0(picomatch@4.0.4): optionalDependencies: - picomatch: 4.0.2 + picomatch: 4.0.4 - fdir@6.4.3(picomatch@4.0.2): + fdir@6.4.3(picomatch@4.0.4): optionalDependencies: - picomatch: 4.0.2 + picomatch: 4.0.4 - fdir@6.4.4(picomatch@4.0.2): + fdir@6.4.4(picomatch@4.0.4): optionalDependencies: - picomatch: 4.0.2 + picomatch: 4.0.4 fflate@0.4.8: {} @@ -34981,12 +34818,12 @@ snapshots: flat-cache@3.0.4: dependencies: - flatted: 3.2.7 + flatted: 3.4.2 rimraf: 3.0.2 - flatted@3.2.7: {} + flatted@3.4.2: {} - follow-redirects@1.15.9: {} + follow-redirects@1.16.0: {} for-each@0.3.3: dependencies: @@ -35146,7 +34983,7 @@ snapshots: get-port@5.1.1: {} - get-port@7.1.0: {} + get-port@7.2.0: {} get-proto@1.0.1: dependencies: @@ -35206,7 +35043,7 @@ snapshots: dependencies: citty: 0.1.6 consola: 3.4.2 - defu: 6.1.4 + defu: 6.1.7 node-fetch-native: 1.6.6 nypm: 0.3.9 ohash: 1.1.3 @@ -35217,7 +35054,7 @@ snapshots: dependencies: citty: 0.1.6 consola: 3.4.2 - defu: 6.1.4 + defu: 6.1.7 node-fetch-native: 1.6.6 nypm: 
0.6.1 pathe: 2.0.3 @@ -35236,14 +35073,6 @@ snapshots: glob-to-regexp@0.4.1: {} - glob@10.3.10: - dependencies: - foreground-child: 3.1.1 - jackspeak: 2.3.6 - minimatch: 9.0.5 - minipass: 7.1.2 - path-scurry: 1.11.1 - glob@10.3.4: dependencies: foreground-child: 3.1.1 @@ -35275,7 +35104,7 @@ snapshots: fs.realpath: 1.0.0 inflight: 1.0.6 inherits: 2.0.4 - minimatch: 3.1.2 + minimatch: 3.1.5 once: 1.4.0 path-is-absolute: 1.0.1 @@ -35688,13 +35517,9 @@ snapshots: dependencies: safer-buffer: 2.1.2 - icss-utils@5.1.0(postcss@8.4.35): + icss-utils@5.1.0(postcss@8.5.10): dependencies: - postcss: 8.4.35 - - icss-utils@5.1.0(postcss@8.5.6): - dependencies: - postcss: 8.5.6 + postcss: 8.5.10 ieee754@1.2.1: {} @@ -36439,7 +36264,7 @@ snapshots: dependencies: p-locate: 6.0.0 - lodash-es@4.17.21: {} + lodash-es@4.18.1: {} lodash.camelcase@4.3.0: {} @@ -36487,7 +36312,7 @@ snapshots: lodash.uniq@4.5.0: {} - lodash@4.17.23: {} + lodash@4.18.1: {} log-symbols@4.1.0: dependencies: @@ -36939,10 +36764,10 @@ snapshots: d3-sankey: 0.12.3 dagre-d3-es: 7.0.11 dayjs: 1.11.18 - dompurify: 3.2.6 + dompurify: 3.4.1 katex: 0.16.25 khroma: 2.1.0 - lodash-es: 4.17.21 + lodash-es: 4.18.1 marked: 16.4.1 roughjs: 4.6.6 stylis: 4.3.6 @@ -37366,7 +37191,7 @@ snapshots: micromatch@4.0.8: dependencies: braces: 3.0.3 - picomatch: 2.3.1 + picomatch: 2.3.2 mime-db@1.52.0: {} @@ -37412,7 +37237,7 @@ snapshots: dependencies: brace-expansion: 2.0.1 - minimatch@3.1.2: + minimatch@3.1.5: dependencies: brace-expansion: 1.1.11 @@ -37637,7 +37462,7 @@ snapshots: busboy: 1.6.0 caniuse-lite: 1.0.30001754 graceful-fs: 4.2.11 - postcss: 8.4.31 + postcss: 8.5.10 react: 18.3.1 react-dom: 18.2.0(react@18.3.1) styled-jsx: 5.1.1(react@18.3.1) @@ -37663,7 +37488,7 @@ snapshots: busboy: 1.6.0 caniuse-lite: 1.0.30001699 graceful-fs: 4.2.11 - postcss: 8.4.31 + postcss: 8.5.10 react: 18.3.1 react-dom: 18.2.0(react@18.3.1) styled-jsx: 5.1.1(react@18.3.1) @@ -37690,7 +37515,7 @@ snapshots: '@swc/helpers': 0.5.15 busboy: 1.6.0 
caniuse-lite: 1.0.30001707 - postcss: 8.4.31 + postcss: 8.5.10 react: 19.0.0 react-dom: 19.0.0(react@19.0.0) styled-jsx: 5.1.6(react@19.0.0) @@ -37715,7 +37540,7 @@ snapshots: '@next/env': 15.4.8 '@swc/helpers': 0.5.15 caniuse-lite: 1.0.30001754 - postcss: 8.4.31 + postcss: 8.5.10 react: 19.0.0 react-dom: 19.0.0(react@19.0.0) styled-jsx: 5.1.6(react@19.0.0) @@ -37740,7 +37565,7 @@ snapshots: '@next/env': 15.5.6 '@swc/helpers': 0.5.15 caniuse-lite: 1.0.30001754 - postcss: 8.4.31 + postcss: 8.5.10 react: 19.1.0 react-dom: 19.1.0(react@19.1.0) styled-jsx: 5.1.6(react@19.1.0) @@ -37772,7 +37597,7 @@ snapshots: node-emoji@1.11.0: dependencies: - lodash: 4.17.23 + lodash: 4.18.1 node-emoji@2.1.3: dependencies: @@ -37801,7 +37626,7 @@ snapshots: node-releases@2.0.27: {} - nodemailer@7.0.11: {} + nodemailer@8.0.6: {} non.geist@1.0.2: {} @@ -37813,7 +37638,7 @@ snapshots: dependencies: hosted-git-info: 2.8.9 resolve: 1.22.8 - semver: 5.7.1 + semver: 5.7.2 validate-npm-package-license: 3.0.4 normalize-package-data@5.0.0: @@ -37859,7 +37684,7 @@ snapshots: chalk: 2.4.2 cross-spawn: 6.0.5 memorystream: 0.3.1 - minimatch: 3.1.2 + minimatch: 3.1.5 pidtree: 0.3.1 read-pkg: 3.0.0 shell-quote: 1.8.1 @@ -38318,6 +38143,8 @@ snapshots: path-exists@5.0.0: {} + path-expression-matcher@1.5.0: {} + path-is-absolute@1.0.1: {} path-key@2.0.1: {} @@ -38338,7 +38165,7 @@ snapshots: lru-cache: 11.2.4 minipass: 7.1.2 - path-to-regexp@0.1.10: {} + path-to-regexp@0.1.13: {} path-to-regexp@8.2.0: {} @@ -38465,9 +38292,9 @@ snapshots: picocolors@1.1.1: {} - picomatch@2.3.1: {} + picomatch@2.3.2: {} - picomatch@4.0.2: {} + picomatch@4.0.4: {} pidtree@0.3.1: {} @@ -38545,9 +38372,9 @@ snapshots: possible-typed-array-names@1.0.0: {} - postcss-discard-duplicates@5.1.0(postcss@8.5.6): + postcss-discard-duplicates@5.1.0(postcss@8.5.10): dependencies: - postcss: 8.5.6 + postcss: 8.5.10 postcss-functions@3.0.0: dependencies: @@ -38556,23 +38383,16 @@ snapshots: postcss: 6.0.23 postcss-value-parser: 3.3.1 
- postcss-import@15.1.0(postcss@8.5.4): - dependencies: - postcss: 8.5.4 - postcss-value-parser: 4.2.0 - read-cache: 1.0.0 - resolve: 1.22.8 - - postcss-import@15.1.0(postcss@8.5.6): + postcss-import@15.1.0(postcss@8.5.10): dependencies: - postcss: 8.5.6 + postcss: 8.5.10 postcss-value-parser: 4.2.0 read-cache: 1.0.0 resolve: 1.22.8 - postcss-import@16.0.1(postcss@8.5.6): + postcss-import@16.0.1(postcss@8.5.10): dependencies: - postcss: 8.5.6 + postcss: 8.5.10 postcss-value-parser: 4.2.0 read-cache: 1.0.0 resolve: 1.22.8 @@ -38582,102 +38402,69 @@ snapshots: camelcase-css: 2.0.1 postcss: 7.0.39 - postcss-js@4.0.1(postcss@8.5.4): - dependencies: - camelcase-css: 2.0.1 - postcss: 8.5.4 - - postcss-js@4.0.1(postcss@8.5.6): + postcss-js@4.0.1(postcss@8.5.10): dependencies: camelcase-css: 2.0.1 - postcss: 8.5.6 - - postcss-load-config@4.0.2(postcss@8.5.4): - dependencies: - lilconfig: 3.1.3 - yaml: 2.7.1 - optionalDependencies: - postcss: 8.5.4 + postcss: 8.5.10 - postcss-load-config@4.0.2(postcss@8.5.6): + postcss-load-config@4.0.2(postcss@8.5.10): dependencies: lilconfig: 3.1.3 - yaml: 2.7.1 + yaml: 2.8.3 optionalDependencies: - postcss: 8.5.6 + postcss: 8.5.10 - postcss-load-config@6.0.1(jiti@2.4.2)(postcss@8.5.6)(tsx@4.17.0)(yaml@2.7.1): + postcss-load-config@6.0.1(jiti@2.4.2)(postcss@8.5.10)(tsx@4.17.0)(yaml@2.8.3): dependencies: lilconfig: 3.1.3 optionalDependencies: jiti: 2.4.2 - postcss: 8.5.6 + postcss: 8.5.10 tsx: 4.17.0 - yaml: 2.7.1 + yaml: 2.8.3 - postcss-loader@8.1.1(postcss@8.5.6)(typescript@5.5.4)(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)): + postcss-loader@8.1.1(postcss@8.5.10)(typescript@5.5.4)(webpack@5.102.1(@swc/core@1.3.26)(esbuild@0.15.18)): dependencies: cosmiconfig: 9.0.0(typescript@5.5.4) jiti: 1.21.0 - postcss: 8.5.6 + postcss: 8.5.10 semver: 7.6.3 optionalDependencies: webpack: 5.102.1(@swc/core@1.3.26)(esbuild@0.15.18) transitivePeerDependencies: - typescript - postcss-modules-extract-imports@3.0.0(postcss@8.4.35): + 
postcss-modules-extract-imports@3.0.0(postcss@8.5.10): dependencies: - postcss: 8.4.35 + postcss: 8.5.10 - postcss-modules-extract-imports@3.0.0(postcss@8.5.6): + postcss-modules-local-by-default@4.0.4(postcss@8.5.10): dependencies: - postcss: 8.5.6 - - postcss-modules-local-by-default@4.0.4(postcss@8.4.35): - dependencies: - icss-utils: 5.1.0(postcss@8.4.35) - postcss: 8.4.35 + icss-utils: 5.1.0(postcss@8.5.10) + postcss: 8.5.10 postcss-selector-parser: 6.1.2 postcss-value-parser: 4.2.0 - postcss-modules-local-by-default@4.0.4(postcss@8.5.6): - dependencies: - icss-utils: 5.1.0(postcss@8.5.6) - postcss: 8.5.6 - postcss-selector-parser: 6.1.2 - postcss-value-parser: 4.2.0 - - postcss-modules-scope@3.1.1(postcss@8.4.35): - dependencies: - postcss: 8.4.35 - postcss-selector-parser: 6.1.2 - - postcss-modules-scope@3.1.1(postcss@8.5.6): + postcss-modules-scope@3.1.1(postcss@8.5.10): dependencies: - postcss: 8.5.6 + postcss: 8.5.10 postcss-selector-parser: 6.1.2 - postcss-modules-values@4.0.0(postcss@8.4.35): - dependencies: - icss-utils: 5.1.0(postcss@8.4.35) - postcss: 8.4.35 - - postcss-modules-values@4.0.0(postcss@8.5.6): + postcss-modules-values@4.0.0(postcss@8.5.10): dependencies: - icss-utils: 5.1.0(postcss@8.5.6) - postcss: 8.5.6 + icss-utils: 5.1.0(postcss@8.5.10) + postcss: 8.5.10 - postcss-modules@6.0.0(postcss@8.5.6): + postcss-modules@6.0.0(postcss@8.5.10): dependencies: generic-names: 4.0.0 - icss-utils: 5.1.0(postcss@8.5.6) + icss-utils: 5.1.0(postcss@8.5.10) lodash.camelcase: 4.3.0 - postcss: 8.5.6 - postcss-modules-extract-imports: 3.0.0(postcss@8.5.6) - postcss-modules-local-by-default: 4.0.4(postcss@8.5.6) - postcss-modules-scope: 3.1.1(postcss@8.5.6) - postcss-modules-values: 4.0.0(postcss@8.5.6) + postcss: 8.5.10 + postcss-modules-extract-imports: 3.0.0(postcss@8.5.10) + postcss-modules-local-by-default: 4.0.4(postcss@8.5.10) + postcss-modules-scope: 3.1.1(postcss@8.5.10) + postcss-modules-values: 4.0.0(postcss@8.5.10) string-hash: 1.1.3 
postcss-nested@4.2.3: @@ -38685,14 +38472,9 @@ snapshots: postcss: 7.0.39 postcss-selector-parser: 6.1.2 - postcss-nested@6.2.0(postcss@8.5.4): + postcss-nested@6.2.0(postcss@8.5.10): dependencies: - postcss: 8.5.4 - postcss-selector-parser: 6.1.2 - - postcss-nested@6.2.0(postcss@8.5.6): - dependencies: - postcss: 8.5.6 + postcss: 8.5.10 postcss-selector-parser: 6.1.2 postcss-selector-parser@6.0.10: @@ -38726,31 +38508,7 @@ snapshots: picocolors: 0.2.1 source-map: 0.6.1 - postcss@8.4.31: - dependencies: - nanoid: 3.3.8 - picocolors: 1.1.1 - source-map-js: 1.2.1 - - postcss@8.4.35: - dependencies: - nanoid: 3.3.8 - picocolors: 1.1.1 - source-map-js: 1.2.1 - - postcss@8.4.44: - dependencies: - nanoid: 3.3.8 - picocolors: 1.1.1 - source-map-js: 1.2.0 - - postcss@8.5.4: - dependencies: - nanoid: 3.3.11 - picocolors: 1.1.1 - source-map-js: 1.2.1 - - postcss@8.5.6: + postcss@8.5.10: dependencies: nanoid: 3.3.11 picocolors: 1.1.1 @@ -38786,7 +38544,7 @@ snapshots: posthog-node@4.17.1: dependencies: - axios: 1.12.2 + axios: 1.15.1 transitivePeerDependencies: - debug @@ -38802,7 +38560,7 @@ snapshots: pump: 3.0.2 rc: 1.2.8 simple-get: 4.0.1 - tar-fs: 2.1.3 + tar-fs: 2.1.4 tunnel-agent: 0.6.0 preferred-pm@3.0.3: @@ -38943,9 +38701,12 @@ snapshots: retry: 0.12.0 signal-exit: 3.0.7 - properties-reader@2.3.0: + properties-reader@3.0.1: dependencies: - mkdirp: 1.0.4 + '@kwsites/file-exists': 1.1.1 + mkdirp: 3.0.1 + transitivePeerDependencies: + - supports-color property-expr@2.0.6: {} @@ -38955,7 +38716,7 @@ snapshots: proto-list@1.2.4: {} - protobufjs@7.3.2: + protobufjs@7.5.5: dependencies: '@protobufjs/aspromise': 1.1.2 '@protobufjs/base64': 1.1.2 @@ -38990,6 +38751,8 @@ snapshots: proxy-from-env@1.1.0: {} + proxy-from-env@2.1.0: {} + pseudomap@1.0.2: {} psl@1.9.0: {} @@ -39094,7 +38857,7 @@ snapshots: rc9@2.1.2: dependencies: - defu: 6.1.4 + defu: 6.1.7 destr: 2.0.3 rc@1.2.8: @@ -39233,7 +38996,7 @@ snapshots: react: 18.2.0 react-dom: 18.2.0(react@18.2.0) - 
react-email@2.1.2(@opentelemetry/api@1.9.0)(@swc/helpers@0.5.15)(bufferutil@4.0.9)(eslint@8.31.0): + react-email@2.1.2(@opentelemetry/api@1.9.0)(@swc/helpers@0.5.15)(eslint@8.31.0): dependencies: '@babel/parser': 7.24.1 '@radix-ui/colors': 1.0.1 @@ -39248,7 +39011,7 @@ snapshots: '@types/react': 18.2.69 '@types/react-dom': 18.2.7 '@types/webpack': 5.28.5(@swc/core@1.3.101(@swc/helpers@0.5.15))(esbuild@0.19.11) - autoprefixer: 10.4.14(postcss@8.4.35) + autoprefixer: 10.4.14(postcss@8.5.10) babel-walk: 3.0.0 chalk: 4.1.2 chokidar: 3.5.3 @@ -39265,13 +39028,13 @@ snapshots: next: 14.1.0(@opentelemetry/api@1.9.0)(react-dom@18.2.0(react@18.3.1))(react@18.3.1) normalize-path: 3.0.0 ora: 5.4.1 - postcss: 8.4.35 + postcss: 8.5.10 prism-react-renderer: 2.1.0(react@18.3.1) react: 18.3.1 react-dom: 18.2.0(react@18.3.1) shelljs: 0.8.5 - socket.io: 4.7.3(bufferutil@4.0.9) - socket.io-client: 4.7.3(bufferutil@4.0.9) + socket.io: 4.7.3 + socket.io-client: 4.7.3 sonner: 1.3.1(react-dom@18.2.0(react@18.3.1))(react@18.3.1) source-map-js: 1.0.2 stacktrace-parser: 0.1.10 @@ -39649,7 +39412,7 @@ snapshots: readdirp@3.6.0: dependencies: - picomatch: 2.3.1 + picomatch: 2.3.2 readdirp@4.1.2: {} @@ -39663,7 +39426,7 @@ snapshots: dependencies: clsx: 2.1.1 eventemitter3: 4.0.7 - lodash: 4.17.23 + lodash: 4.18.1 react: 18.2.0 react-dom: 18.2.0(react@18.2.0) react-is: 18.3.1 @@ -40004,10 +39767,6 @@ snapshots: dependencies: glob: 9.3.5 - rimraf@5.0.7: - dependencies: - glob: 10.3.10 - rimraf@6.0.1: dependencies: glob: 11.0.0 @@ -40021,29 +39780,35 @@ snapshots: optionalDependencies: fsevents: 2.3.3 - rollup@4.36.0: - dependencies: - '@types/estree': 1.0.6 - optionalDependencies: - '@rollup/rollup-android-arm-eabi': 4.36.0 - '@rollup/rollup-android-arm64': 4.36.0 - '@rollup/rollup-darwin-arm64': 4.36.0 - '@rollup/rollup-darwin-x64': 4.36.0 - '@rollup/rollup-freebsd-arm64': 4.36.0 - '@rollup/rollup-freebsd-x64': 4.36.0 - '@rollup/rollup-linux-arm-gnueabihf': 4.36.0 - 
'@rollup/rollup-linux-arm-musleabihf': 4.36.0 - '@rollup/rollup-linux-arm64-gnu': 4.36.0 - '@rollup/rollup-linux-arm64-musl': 4.36.0 - '@rollup/rollup-linux-loongarch64-gnu': 4.36.0 - '@rollup/rollup-linux-powerpc64le-gnu': 4.36.0 - '@rollup/rollup-linux-riscv64-gnu': 4.36.0 - '@rollup/rollup-linux-s390x-gnu': 4.36.0 - '@rollup/rollup-linux-x64-gnu': 4.36.0 - '@rollup/rollup-linux-x64-musl': 4.36.0 - '@rollup/rollup-win32-arm64-msvc': 4.36.0 - '@rollup/rollup-win32-ia32-msvc': 4.36.0 - '@rollup/rollup-win32-x64-msvc': 4.36.0 + rollup@4.60.1: + dependencies: + '@types/estree': 1.0.8 + optionalDependencies: + '@rollup/rollup-android-arm-eabi': 4.60.1 + '@rollup/rollup-android-arm64': 4.60.1 + '@rollup/rollup-darwin-arm64': 4.60.1 + '@rollup/rollup-darwin-x64': 4.60.1 + '@rollup/rollup-freebsd-arm64': 4.60.1 + '@rollup/rollup-freebsd-x64': 4.60.1 + '@rollup/rollup-linux-arm-gnueabihf': 4.60.1 + '@rollup/rollup-linux-arm-musleabihf': 4.60.1 + '@rollup/rollup-linux-arm64-gnu': 4.60.1 + '@rollup/rollup-linux-arm64-musl': 4.60.1 + '@rollup/rollup-linux-loong64-gnu': 4.60.1 + '@rollup/rollup-linux-loong64-musl': 4.60.1 + '@rollup/rollup-linux-ppc64-gnu': 4.60.1 + '@rollup/rollup-linux-ppc64-musl': 4.60.1 + '@rollup/rollup-linux-riscv64-gnu': 4.60.1 + '@rollup/rollup-linux-riscv64-musl': 4.60.1 + '@rollup/rollup-linux-s390x-gnu': 4.60.1 + '@rollup/rollup-linux-x64-gnu': 4.60.1 + '@rollup/rollup-linux-x64-musl': 4.60.1 + '@rollup/rollup-openbsd-x64': 4.60.1 + '@rollup/rollup-openharmony-arm64': 4.60.1 + '@rollup/rollup-win32-arm64-msvc': 4.60.1 + '@rollup/rollup-win32-ia32-msvc': 4.60.1 + '@rollup/rollup-win32-x64-gnu': 4.60.1 + '@rollup/rollup-win32-x64-msvc': 4.60.1 fsevents: 2.3.3 roughjs@4.6.6: @@ -40146,9 +39911,9 @@ snapshots: schema-utils@4.3.3: dependencies: '@types/json-schema': 7.0.15 - ajv: 8.17.1 - ajv-formats: 2.1.1(ajv@8.17.1) - ajv-keywords: 5.1.0(ajv@8.17.1) + ajv: 8.18.0 + ajv-formats: 2.1.1(ajv@8.18.0) + ajv-keywords: 5.1.0(ajv@8.18.0) screenfull@5.2.0: {} 
@@ -40167,7 +39932,7 @@ snapshots: '@types/semver': 6.2.3 semver: 6.3.1 - semver@5.7.1: {} + semver@5.7.2: {} semver@6.3.1: {} @@ -40496,12 +40261,12 @@ snapshots: - supports-color - utf-8-validate - socket.io-client@4.7.3(bufferutil@4.0.9): + socket.io-client@4.7.3: dependencies: '@socket.io/component-emitter': 3.1.0 debug: 4.3.7(supports-color@10.0.0) engine.io-client: 6.5.3(bufferutil@4.0.9)(supports-color@10.0.0) - socket.io-parser: 4.2.4(supports-color@10.0.0) + socket.io-parser: 4.2.6(supports-color@10.0.0) transitivePeerDependencies: - bufferutil - supports-color @@ -40512,20 +40277,20 @@ snapshots: '@socket.io/component-emitter': 3.1.0 debug: 4.3.7(supports-color@10.0.0) engine.io-client: 6.5.3(bufferutil@4.0.9)(supports-color@10.0.0) - socket.io-parser: 4.2.4(supports-color@10.0.0) + socket.io-parser: 4.2.6(supports-color@10.0.0) transitivePeerDependencies: - bufferutil - supports-color - utf-8-validate - socket.io-parser@4.2.4(supports-color@10.0.0): + socket.io-parser@4.2.6(supports-color@10.0.0): dependencies: '@socket.io/component-emitter': 3.1.0 - debug: 4.3.7(supports-color@10.0.0) + debug: 4.4.3(supports-color@10.0.0) transitivePeerDependencies: - supports-color - socket.io@4.7.3(bufferutil@4.0.9): + socket.io@4.7.3: dependencies: accepts: 1.3.8 base64id: 2.0.0 @@ -40533,7 +40298,7 @@ snapshots: debug: 4.3.7(supports-color@10.0.0) engine.io: 6.5.4(bufferutil@4.0.9) socket.io-adapter: 2.5.4(bufferutil@4.0.9) - socket.io-parser: 4.2.4(supports-color@10.0.0) + socket.io-parser: 4.2.6(supports-color@10.0.0) transitivePeerDependencies: - bufferutil - supports-color @@ -40547,7 +40312,7 @@ snapshots: debug: 4.3.7(supports-color@10.0.0) engine.io: 6.5.4(bufferutil@4.0.9) socket.io-adapter: 2.5.4(bufferutil@4.0.9) - socket.io-parser: 4.2.4(supports-color@10.0.0) + socket.io-parser: 4.2.6(supports-color@10.0.0) transitivePeerDependencies: - bufferutil - supports-color @@ -40872,7 +40637,7 @@ snapshots: strnum@1.0.5: {} - strnum@2.1.1: {} + strnum@2.2.3: {} 
strtok3@9.1.1: dependencies: @@ -41009,11 +40774,11 @@ snapshots: '@pkgr/utils': 2.3.1 tslib: 2.8.1 - systeminformation@5.27.14: {} + systeminformation@5.31.5: {} table@6.9.0: dependencies: - ajv: 8.17.1 + ajv: 8.18.0 lodash.truncate: 4.4.2 slice-ansi: 4.0.0 string-width: 4.2.3 @@ -41062,7 +40827,7 @@ snapshots: detective: 5.2.1 fs-extra: 8.1.0 html-tags: 3.3.1 - lodash: 4.17.23 + lodash: 4.18.1 node-emoji: 1.11.0 normalize.css: 8.0.1 object-hash: 2.2.0 @@ -41092,11 +40857,11 @@ snapshots: normalize-path: 3.0.0 object-hash: 3.0.0 picocolors: 1.1.1 - postcss: 8.5.6 - postcss-import: 15.1.0(postcss@8.5.6) - postcss-js: 4.0.1(postcss@8.5.6) - postcss-load-config: 4.0.2(postcss@8.5.6) - postcss-nested: 6.2.0(postcss@8.5.6) + postcss: 8.5.10 + postcss-import: 15.1.0(postcss@8.5.10) + postcss-js: 4.0.1(postcss@8.5.10) + postcss-load-config: 4.0.2(postcss@8.5.10) + postcss-nested: 6.2.0(postcss@8.5.10) postcss-selector-parser: 6.1.2 resolve: 1.22.8 sucrase: 3.35.0 @@ -41119,11 +40884,11 @@ snapshots: normalize-path: 3.0.0 object-hash: 3.0.0 picocolors: 1.1.1 - postcss: 8.5.4 - postcss-import: 15.1.0(postcss@8.5.4) - postcss-js: 4.0.1(postcss@8.5.4) - postcss-load-config: 4.0.2(postcss@8.5.4) - postcss-nested: 6.2.0(postcss@8.5.4) + postcss: 8.5.10 + postcss-import: 15.1.0(postcss@8.5.10) + postcss-js: 4.0.1(postcss@8.5.10) + postcss-load-config: 4.0.2(postcss@8.5.10) + postcss-nested: 6.2.0(postcss@8.5.10) postcss-selector-parser: 6.1.2 resolve: 1.22.8 sucrase: 3.35.0 @@ -41148,7 +40913,7 @@ snapshots: pump: 3.0.2 tar-stream: 2.2.0 - tar-fs@3.1.0: + tar-fs@3.1.1: dependencies: pump: 3.0.2 tar-stream: 3.1.7 @@ -41159,7 +40924,7 @@ snapshots: - bare-abort-controller - bare-buffer - tar-fs@3.1.1: + tar-fs@3.1.2: dependencies: pump: 3.0.2 tar-stream: 3.1.7 @@ -41204,16 +40969,7 @@ snapshots: mkdirp: 1.0.4 yallist: 4.0.0 - tar@7.4.3: - dependencies: - '@isaacs/fs-minipass': 4.0.1 - chownr: 3.0.0 - minipass: 7.1.2 - minizlib: 3.1.0 - mkdirp: 3.0.1 - yallist: 5.0.0 - - 
tar@7.5.6: + tar@7.5.13: dependencies: '@isaacs/fs-minipass': 4.0.1 chownr: 3.0.0 @@ -41264,23 +41020,23 @@ snapshots: glob: 10.4.5 minimatch: 9.0.5 - testcontainers@10.28.0: + testcontainers@11.14.0: dependencies: '@balena/dockerignore': 1.0.2 - '@types/dockerode': 3.3.35 + '@types/dockerode': 4.0.1 archiver: 7.0.1 async-lock: 1.4.1 byline: 5.0.0 - debug: 4.4.0 - docker-compose: 0.24.8 - dockerode: 4.0.6 - get-port: 7.1.0 + debug: 4.4.3(supports-color@10.0.0) + docker-compose: 1.4.2 + dockerode: 4.0.10 + get-port: 7.2.0 proper-lockfile: 4.1.2 - properties-reader: 2.3.0 + properties-reader: 3.0.1 ssh-remote-port-forward: 1.0.4 - tar-fs: 3.1.1 - tmp: 0.2.3 - undici: 5.29.0 + tar-fs: 3.1.2 + tmp: 0.2.5 + undici: 7.24.6 transitivePeerDependencies: - bare-abort-controller - bare-buffer @@ -41336,23 +41092,23 @@ snapshots: tinyglobby@0.2.10: dependencies: - fdir: 6.4.3(picomatch@4.0.2) - picomatch: 4.0.2 + fdir: 6.4.3(picomatch@4.0.4) + picomatch: 4.0.4 tinyglobby@0.2.12: dependencies: - fdir: 6.4.4(picomatch@4.0.2) - picomatch: 4.0.2 + fdir: 6.4.4(picomatch@4.0.4) + picomatch: 4.0.4 tinyglobby@0.2.13: dependencies: - fdir: 6.4.4(picomatch@4.0.2) - picomatch: 4.0.2 + fdir: 6.4.4(picomatch@4.0.4) + picomatch: 4.0.4 tinyglobby@0.2.2: dependencies: - fdir: 6.2.0(picomatch@4.0.2) - picomatch: 4.0.2 + fdir: 6.2.0(picomatch@4.0.4) + picomatch: 4.0.4 tinygradient@1.1.5: dependencies: @@ -41383,6 +41139,8 @@ snapshots: tmp@0.2.3: {} + tmp@0.2.5: {} + to-fast-properties@2.0.0: {} to-readable-stream@1.0.0: {} @@ -41452,12 +41210,12 @@ snapshots: ts-proto-descriptors@1.15.0: dependencies: long: 5.2.3 - protobufjs: 7.3.2 + protobufjs: 7.5.5 ts-proto@1.167.3: dependencies: case-anything: 2.1.13 - protobufjs: 7.3.2 + protobufjs: 7.5.5 ts-poet: 6.6.0 ts-proto-descriptors: 1.15.0 @@ -41515,7 +41273,7 @@ snapshots: tslib@2.8.1: {} - tsup@8.4.0(@swc/core@1.3.101(@swc/helpers@0.5.15))(jiti@2.4.2)(postcss@8.5.6)(tsx@4.17.0)(typescript@5.5.4)(yaml@2.7.1): + 
tsup@8.4.0(@swc/core@1.3.101(@swc/helpers@0.5.15))(jiti@2.4.2)(postcss@8.5.10)(tsx@4.17.0)(typescript@5.5.4)(yaml@2.8.3): dependencies: bundle-require: 5.1.0(esbuild@0.25.1) cac: 6.7.14 @@ -41525,9 +41283,9 @@ snapshots: esbuild: 0.25.1 joycon: 3.1.1 picocolors: 1.1.1 - postcss-load-config: 6.0.1(jiti@2.4.2)(postcss@8.5.6)(tsx@4.17.0)(yaml@2.7.1) + postcss-load-config: 6.0.1(jiti@2.4.2)(postcss@8.5.10)(tsx@4.17.0)(yaml@2.8.3) resolve-from: 5.0.0 - rollup: 4.36.0 + rollup: 4.60.1 source-map: 0.8.0-beta.0 sucrase: 3.35.0 tinyexec: 0.3.2 @@ -41535,7 +41293,7 @@ snapshots: tree-kill: 1.2.2 optionalDependencies: '@swc/core': 1.3.101(@swc/helpers@0.5.15) - postcss: 8.5.6 + postcss: 8.5.10 typescript: 5.5.4 transitivePeerDependencies: - jiti @@ -41743,6 +41501,8 @@ snapshots: undici@6.25.0: {} + undici@7.24.6: {} + unicode-emoji-modifier-base@1.0.0: {} unicorn-magic@0.1.0: {} @@ -41980,12 +41740,12 @@ snapshots: uuid@11.1.0: {} + uuid@14.0.0: {} + uuid@3.4.0: {} uuid@8.3.2: {} - uuid@9.0.0: {} - uuid@9.0.1: {} uvu@0.5.6: @@ -42097,15 +41857,16 @@ snapshots: - supports-color - terser - vite-node@3.1.4(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1): + vite-node@3.1.4(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3): dependencies: cac: 6.7.14 debug: 4.4.3(supports-color@10.0.0) es-module-lexer: 1.7.0 pathe: 2.0.3 - vite: 5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vite: 6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) transitivePeerDependencies: - '@types/node' + - jiti - less - lightningcss - sass @@ -42114,6 +41875,29 @@ snapshots: - sugarss - supports-color - terser + - tsx + - yaml + + vite-node@3.1.4(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3): + dependencies: + cac: 6.7.14 + debug: 4.4.3(supports-color@10.0.0) + es-module-lexer: 1.7.0 + pathe: 2.0.3 + vite: 
6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) + transitivePeerDependencies: + - '@types/node' + - jiti + - less + - lightningcss + - sass + - sass-embedded + - stylus + - sugarss + - supports-color + - terser + - tsx + - yaml vite-tsconfig-paths@4.0.5(typescript@5.5.4): dependencies: @@ -42127,7 +41911,7 @@ snapshots: vite@4.4.9(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1): dependencies: esbuild: 0.18.11 - postcss: 8.5.6 + postcss: 8.5.10 rollup: 3.29.1 optionalDependencies: '@types/node': 20.14.14 @@ -42135,21 +41919,84 @@ snapshots: lightningcss: 1.29.2 terser: 5.44.1 - vite@5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1): + vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3): + dependencies: + esbuild: 0.25.1 + fdir: 6.4.4(picomatch@4.0.4) + picomatch: 4.0.4 + postcss: 8.5.10 + rollup: 4.60.1 + tinyglobby: 0.2.13 + optionalDependencies: + '@types/node': 20.14.14 + fsevents: 2.3.3 + jiti: 2.4.2 + lightningcss: 1.29.2 + terser: 5.44.1 + tsx: 3.12.2 + yaml: 2.8.3 + + vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3): dependencies: - esbuild: 0.21.5 - postcss: 8.5.6 - rollup: 4.36.0 + esbuild: 0.25.1 + fdir: 6.4.4(picomatch@4.0.4) + picomatch: 4.0.4 + postcss: 8.5.10 + rollup: 4.60.1 + tinyglobby: 0.2.13 optionalDependencies: '@types/node': 20.14.14 fsevents: 2.3.3 + jiti: 2.4.2 lightningcss: 1.29.2 terser: 5.44.1 + tsx: 4.20.6 + yaml: 2.8.3 + + vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3): + dependencies: + '@vitest/expect': 3.1.4 + '@vitest/mocker': 3.1.4(vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3)) + '@vitest/pretty-format': 3.1.4 + '@vitest/runner': 3.1.4 + '@vitest/snapshot': 3.1.4 + '@vitest/spy': 3.1.4 + '@vitest/utils': 3.1.4 + 
chai: 5.2.0 + debug: 4.4.1 + expect-type: 1.2.1 + magic-string: 0.30.21 + pathe: 2.0.3 + std-env: 3.9.0 + tinybench: 2.9.0 + tinyexec: 0.3.2 + tinyglobby: 0.2.13 + tinypool: 1.0.2 + tinyrainbow: 2.0.0 + vite: 6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) + vite-node: 3.1.4(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3) + why-is-node-running: 2.3.0 + optionalDependencies: + '@types/debug': 4.1.12 + '@types/node': 20.14.14 + transitivePeerDependencies: + - jiti + - less + - lightningcss + - msw + - sass + - sass-embedded + - stylus + - sugarss + - supports-color + - terser + - tsx + - yaml - vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1): + vitest@3.1.4(@types/debug@4.1.12)(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3): dependencies: '@vitest/expect': 3.1.4 - '@vitest/mocker': 3.1.4(vite@5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1)) + '@vitest/mocker': 3.1.4(vite@6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@3.12.2)(yaml@2.8.3)) '@vitest/pretty-format': 3.1.4 '@vitest/runner': 3.1.4 '@vitest/snapshot': 3.1.4 @@ -42166,13 +42013,14 @@ snapshots: tinyglobby: 0.2.13 tinypool: 1.0.2 tinyrainbow: 2.0.0 - vite: 5.4.21(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) - vite-node: 3.1.4(@types/node@20.14.14)(lightningcss@1.29.2)(terser@5.44.1) + vite: 6.4.2(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) + vite-node: 3.1.4(@types/node@20.14.14)(jiti@2.4.2)(lightningcss@1.29.2)(terser@5.44.1)(tsx@4.20.6)(yaml@2.8.3) why-is-node-running: 2.3.0 optionalDependencies: '@types/debug': 4.1.12 '@types/node': 20.14.14 transitivePeerDependencies: + - jiti - less - lightningcss - msw @@ -42182,6 +42030,8 @@ snapshots: - sugarss - supports-color - terser + - tsx + - yaml 
vscode-jsonrpc@8.2.0: {} @@ -42455,7 +42305,7 @@ snapshots: yallist@5.0.0: {} - yaml@2.7.1: {} + yaml@2.8.3: {} yargs-parser@18.1.3: dependencies: From 4b28080ed4aa475c370afb2a7ca6e5ba09e71f8d Mon Sep 17 00:00:00 2001 From: "devin-ai-integration[bot]" <158243242+devin-ai-integration[bot]@users.noreply.github.com> Date: Tue, 28 Apr 2026 11:57:44 +0200 Subject: [PATCH 5/8] feat: add `isReplay` to run context (#3454) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary Adds `isReplay` boolean to the run context (`ctx.run.isReplay`), following the same pattern as the existing `isTest`. The value is derived from the existing `replayedFromTaskRunFriendlyId` database field, so no schema migration is needed. ## ✅ Checklist - [x] I have followed every step in the [contributing guide](https://github.com/triggerdotdev/trigger.dev/blob/main/CONTRIBUTING.md) - [x] The PR title follows the convention. - [x] I ran and tested the code works --- ## Testing - Verified `@trigger.dev/core` builds successfully - Verified `webapp` typechecks successfully - All new fields use `default(false)` for backwards compatibility --- ## Changelog - Added `isReplay` to `TaskRun` and `V3TaskRun` schemas in `common.ts` - Added `RUN_IS_REPLAY` semantic attribute and wired it in `taskContext` - Propagated `isReplay` through the dequeue system, run attempt system, and all execution context construction paths (V1 + V2) - Added `isReplay` to `DequeuedMessage` and `TaskRunExecutionLazyAttemptPayload` schemas - Added patch changeset for `@trigger.dev/core` - Updated docs: added `isReplay` to context reference, added "Detecting replays" section to replaying page --- 💯 Link to Devin session: https://app.devin.ai/sessions/1d6f1b3cc39a4623b72d05bf00f2d70c --------- Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> Co-authored-by: nick <55853254+nicktrn@users.noreply.github.com> --- .changeset/add-is-replay-context.md | 5 ++ 
.../app/presenters/v3/SpanPresenter.server.ts | 15 +--- .../app/v3/marqs/devQueueConsumer.server.ts | 1 + .../v3/marqs/sharedQueueConsumer.server.ts | 4 + .../services/createTaskRunAttempt.server.ts | 1 + docs/context.mdx | 3 + docs/replaying.mdx | 15 ++++ .../src/engine/systems/dequeueSystem.ts | 1 + .../src/engine/systems/runAttemptSystem.ts | 88 ++++++++++--------- packages/core/src/v3/schemas/common.ts | 14 +-- packages/core/src/v3/schemas/runEngine.ts | 1 + packages/core/src/v3/schemas/schemas.ts | 1 + .../core/src/v3/semanticInternalAttributes.ts | 1 + packages/core/src/v3/taskContext/index.ts | 1 + 14 files changed, 92 insertions(+), 59 deletions(-) create mode 100644 .changeset/add-is-replay-context.md diff --git a/.changeset/add-is-replay-context.md b/.changeset/add-is-replay-context.md new file mode 100644 index 00000000000..28f6a01380d --- /dev/null +++ b/.changeset/add-is-replay-context.md @@ -0,0 +1,5 @@ +--- +"@trigger.dev/core": patch +--- + +Add `isReplay` boolean to the run context (`ctx.run.isReplay`), derived from the existing `replayedFromTaskRunFriendlyId` database field. Defaults to `false` for backwards compatibility. diff --git a/apps/webapp/app/presenters/v3/SpanPresenter.server.ts b/apps/webapp/app/presenters/v3/SpanPresenter.server.ts index 0ea9b37ab7f..de41aee4411 100644 --- a/apps/webapp/app/presenters/v3/SpanPresenter.server.ts +++ b/apps/webapp/app/presenters/v3/SpanPresenter.server.ts @@ -42,9 +42,7 @@ export type PromptSpanData = { config?: string; }; -function extractPromptSpanData( - properties: Record -): PromptSpanData | undefined { +function extractPromptSpanData(properties: Record): PromptSpanData | undefined { // Properties come as an unflattened nested object from ClickHouse, // e.g. { prompt: { slug: "...", version: 3, ... } } const prompt = properties.prompt; @@ -592,10 +590,7 @@ export class SpanPresenter extends BasePresenter { triggeredRuns, aiData: span.properties && typeof span.properties === "object" - ? 
extractAISpanData( - span.properties as Record, - span.duration / 1_000_000 - ) + ? extractAISpanData(span.properties as Record, span.duration / 1_000_000) : undefined, }; @@ -739,10 +734,7 @@ export class SpanPresenter extends BasePresenter { "ai.streamObject", ]; - if ( - typeof span.message === "string" && - AI_SUMMARY_MESSAGES.includes(span.message) - ) { + if (typeof span.message === "string" && AI_SUMMARY_MESSAGES.includes(span.message)) { const aiSummaryData = extractAISummarySpanData( span.properties as Record, span.duration / 1_000_000 @@ -899,6 +891,7 @@ export class SpanPresenter extends BasePresenter { createdAt: run.createdAt, tags: run.runTags, isTest: run.isTest, + isReplay: !!run.replayedFromTaskRunFriendlyId, idempotencyKey: getUserProvidedIdempotencyKey(run) ?? undefined, startedAt: run.startedAt ?? run.createdAt, durationMs: run.usageDurationMs, diff --git a/apps/webapp/app/v3/marqs/devQueueConsumer.server.ts b/apps/webapp/app/v3/marqs/devQueueConsumer.server.ts index 2bd80d465b4..3143e40f0de 100644 --- a/apps/webapp/app/v3/marqs/devQueueConsumer.server.ts +++ b/apps/webapp/app/v3/marqs/devQueueConsumer.server.ts @@ -519,6 +519,7 @@ export class DevQueueConsumer { runId: lockedTaskRun.friendlyId, messageId: lockedTaskRun.id, isTest: lockedTaskRun.isTest, + isReplay: !!lockedTaskRun.replayedFromTaskRunFriendlyId, metrics: [ { name: "start", diff --git a/apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts b/apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts index 20ee9daf7da..518b64666d4 100644 --- a/apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts +++ b/apps/webapp/app/v3/marqs/sharedQueueConsumer.server.ts @@ -1640,6 +1640,7 @@ export const AttemptForExecutionGetPayload = { createdAt: true, startedAt: true, isTest: true, + replayedFromTaskRunFriendlyId: true, metadata: true, metadataType: true, idempotencyKey: true, @@ -1726,6 +1727,7 @@ class SharedQueueTasks { startedAt: taskRun.startedAt ?? 
taskRun.createdAt, tags: taskRun.runTags ?? [], isTest: taskRun.isTest, + isReplay: !!taskRun.replayedFromTaskRunFriendlyId, idempotencyKey: taskRun.idempotencyKey ?? undefined, durationMs: taskRun.usageDurationMs, costInCents: taskRun.costInCents, @@ -2045,6 +2047,7 @@ class SharedQueueTasks { traceContext: true, friendlyId: true, isTest: true, + replayedFromTaskRunFriendlyId: true, lockedBy: { select: { machineConfig: true, @@ -2090,6 +2093,7 @@ class SharedQueueTasks { runId: run.friendlyId, messageId: run.id, isTest: run.isTest, + isReplay: !!run.replayedFromTaskRunFriendlyId, attemptCount, metrics: [], } satisfies TaskRunExecutionLazyAttemptPayload; diff --git a/apps/webapp/app/v3/services/createTaskRunAttempt.server.ts b/apps/webapp/app/v3/services/createTaskRunAttempt.server.ts index 7e0b40dd826..8be2b9557cc 100644 --- a/apps/webapp/app/v3/services/createTaskRunAttempt.server.ts +++ b/apps/webapp/app/v3/services/createTaskRunAttempt.server.ts @@ -210,6 +210,7 @@ export class CreateTaskRunAttemptService extends BaseService { createdAt: taskRun.createdAt, tags: taskRun.runTags ?? [], isTest: taskRun.isTest, + isReplay: !!taskRun.replayedFromTaskRunFriendlyId, idempotencyKey: taskRun.idempotencyKey ?? undefined, startedAt: taskRun.startedAt ?? taskRun.createdAt, durationMs: taskRun.usageDurationMs, diff --git a/docs/context.mdx b/docs/context.mdx index 4e4b8f7bac6..f522fd8ccb5 100644 --- a/docs/context.mdx +++ b/docs/context.mdx @@ -81,6 +81,9 @@ export const parentTask = task({ Whether this is a [test run](/run-tests). + + Whether this run is a [replay](/replaying) of a previous run. + The creation time of the task run. 
diff --git a/docs/replaying.mdx b/docs/replaying.mdx index 0e348dcd58e..34e5aac7980 100644 --- a/docs/replaying.mdx +++ b/docs/replaying.mdx @@ -30,6 +30,21 @@ description: "A replay is a copy of a run with the same payload but against the +### Detecting replays in your task + +You can check if a run is a replay using the [context](/context) object: + +```ts +export const myTask = task({ + id: "my-task", + run: async (payload, { ctx }) => { + if (ctx.run.isReplay) { + // This run is a replay of a previous run + } + }, +}); +``` + ### Replaying using the SDK You can replay a run using the SDK: diff --git a/internal-packages/run-engine/src/engine/systems/dequeueSystem.ts b/internal-packages/run-engine/src/engine/systems/dequeueSystem.ts index 15d79e76baa..3fe1ef072cf 100644 --- a/internal-packages/run-engine/src/engine/systems/dequeueSystem.ts +++ b/internal-packages/run-engine/src/engine/systems/dequeueSystem.ts @@ -607,6 +607,7 @@ export class DequeueSystem { id: lockedTaskRun.id, friendlyId: lockedTaskRun.friendlyId, isTest: lockedTaskRun.isTest, + isReplay: !!lockedTaskRun.replayedFromTaskRunFriendlyId, machine: machinePreset, attemptNumber: nextAttemptNumber, // Keeping this for backwards compatibility, but really this should be called workerQueue diff --git a/internal-packages/run-engine/src/engine/systems/runAttemptSystem.ts b/internal-packages/run-engine/src/engine/systems/runAttemptSystem.ts index 8e95519241c..27ddedde006 100644 --- a/internal-packages/run-engine/src/engine/systems/runAttemptSystem.ts +++ b/internal-packages/run-engine/src/engine/systems/runAttemptSystem.ts @@ -196,6 +196,7 @@ export class RunAttemptSystem { machinePreset: true, runTags: true, isTest: true, + replayedFromTaskRunFriendlyId: true, idempotencyKey: true, idempotencyKeyOptions: true, startedAt: true, @@ -232,9 +233,9 @@ export class RunAttemptSystem { run.lockedById ? 
this.#resolveTaskRunExecutionTask(run.lockedById) : Promise.resolve({ - id: run.taskIdentifier, - filePath: "unknown", - }), + id: run.taskIdentifier, + filePath: "unknown", + }), this.#resolveTaskRunExecutionQueue({ lockedQueueId: run.lockedQueueId ?? undefined, queueName: run.queue, @@ -245,13 +246,13 @@ export class RunAttemptSystem { run.lockedById ? this.#resolveTaskRunExecutionMachinePreset(run.lockedById, run.machinePreset) : Promise.resolve( - getMachinePreset({ - defaultMachine: this.options.machines.defaultMachine, - machines: this.options.machines.machines, - config: undefined, - run, - }) - ), + getMachinePreset({ + defaultMachine: this.options.machines.defaultMachine, + machines: this.options.machines.machines, + config: undefined, + run, + }) + ), run.lockedById ? this.#resolveTaskRunExecutionDeployment(run.lockedById) : Promise.resolve(undefined), @@ -262,6 +263,7 @@ export class RunAttemptSystem { id: run.friendlyId, tags: run.runTags, isTest: run.isTest, + isReplay: !!run.replayedFromTaskRunFriendlyId, createdAt: run.createdAt, startedAt: run.startedAt ?? run.createdAt, idempotencyKey: getUserProvidedIdempotencyKey(run) ?? undefined, @@ -426,6 +428,7 @@ export class RunAttemptSystem { payloadType: true, runTags: true, isTest: true, + replayedFromTaskRunFriendlyId: true, idempotencyKey: true, idempotencyKeyOptions: true, startedAt: true, @@ -459,8 +462,9 @@ export class RunAttemptSystem { run, snapshot: { executionStatus: "EXECUTING", - description: `Attempt created, starting execution${isWarmStart ? " (warm start)" : "" - }`, + description: `Attempt created, starting execution${ + isWarmStart ? 
" (warm start)" : "" + }`, }, previousSnapshotId: latestSnapshot.id, environmentId: latestSnapshot.environmentId, @@ -574,6 +578,7 @@ export class RunAttemptSystem { createdAt: updatedRun.createdAt, tags: updatedRun.runTags, isTest: updatedRun.isTest, + isReplay: !!updatedRun.replayedFromTaskRunFriendlyId, idempotencyKey: getUserProvidedIdempotencyKey(updatedRun) ?? undefined, idempotencyKeyScope: extractIdempotencyKeyScope(updatedRun), startedAt: updatedRun.startedAt ?? updatedRun.createdAt, @@ -618,8 +623,8 @@ export class RunAttemptSystem { deployment, batch: updatedRun.batchId ? { - id: BatchId.toFriendlyId(updatedRun.batchId), - } + id: BatchId.toFriendlyId(updatedRun.batchId), + } : undefined, }; @@ -1387,8 +1392,8 @@ export class RunAttemptSystem { error, bulkActionGroupIds: bulkActionId ? { - push: bulkActionId, - } + push: bulkActionId, + } : undefined, ...(usageUpdate && { usageDurationMs: usageUpdate.usageDurationMs, @@ -1876,26 +1881,26 @@ export class RunAttemptSystem { const result = await this.cache.queues.swr(cacheKey, async () => { const queue = params.lockedQueueId ? await this.$.readOnlyPrisma.taskQueue.findFirst({ - where: { - id: params.lockedQueueId, - }, - select: { - id: true, - friendlyId: true, - name: true, - }, - }) + where: { + id: params.lockedQueueId, + }, + select: { + id: true, + friendlyId: true, + name: true, + }, + }) : await this.$.readOnlyPrisma.taskQueue.findFirst({ - where: { - runtimeEnvironmentId: params.runtimeEnvironmentId, - name: params.queueName, - }, - select: { - id: true, - friendlyId: true, - name: true, - }, - }); + where: { + runtimeEnvironmentId: params.runtimeEnvironmentId, + name: params.queueName, + }, + select: { + id: true, + friendlyId: true, + name: true, + }, + }); if (!queue) { // Return synthetic queue so run/span view still loads (e.g. 
createFailedTaskRun with fallback queue) @@ -2068,13 +2073,13 @@ export class RunAttemptSystem { if (environmentType !== "DEVELOPMENT") { const machinePreset = machinePresetName ? machinePresetFromName( - this.options.machines.machines, - machinePresetName as MachinePresetName - ) + this.options.machines.machines, + machinePresetName as MachinePresetName + ) : machinePresetFromName( - this.options.machines.machines, - this.options.machines.defaultMachine - ); + this.options.machines.machines, + this.options.machines.defaultMachine + ); costInCents = currentCostInCents + attemptDurationMs * machinePreset.centsPerMs; } @@ -2084,7 +2089,6 @@ export class RunAttemptSystem { costInCents, }; } - } export function safeParseGitMeta(git: unknown): GitMeta | undefined { diff --git a/packages/core/src/v3/schemas/common.ts b/packages/core/src/v3/schemas/common.ts index f3757208335..8bd22dd4bbb 100644 --- a/packages/core/src/v3/schemas/common.ts +++ b/packages/core/src/v3/schemas/common.ts @@ -215,6 +215,7 @@ export const TaskRun = z.object({ payloadType: z.string(), tags: z.array(z.string()), isTest: z.boolean().default(false), + isReplay: z.boolean().default(false), createdAt: z.coerce.date(), startedAt: z.coerce.date().default(() => new Date()), /** The user-provided idempotency key (not the hash) */ @@ -378,6 +379,7 @@ export const V3TaskRun = z.object({ payloadType: z.string(), tags: z.array(z.string()), isTest: z.boolean().default(false), + isReplay: z.boolean().default(false), createdAt: z.coerce.date(), startedAt: z.coerce.date().default(() => new Date()), /** The user-provided idempotency key (not the hash) */ @@ -538,13 +540,13 @@ export type WaitpointTokenResult = z.infer; export type WaitpointTokenTypedResult = | { - ok: true; - output: T; - } + ok: true; + output: T; + } | { - ok: false; - error: Error; - }; + ok: false; + error: Error; + }; export const SerializedError = z.object({ message: z.string(), diff --git a/packages/core/src/v3/schemas/runEngine.ts 
b/packages/core/src/v3/schemas/runEngine.ts index 9378b290270..b9e41c9a8d7 100644 --- a/packages/core/src/v3/schemas/runEngine.ts +++ b/packages/core/src/v3/schemas/runEngine.ts @@ -277,6 +277,7 @@ export const DequeuedMessage = z.object({ id: z.string(), friendlyId: z.string(), isTest: z.boolean(), + isReplay: z.boolean().default(false), machine: MachinePreset, attemptNumber: z.number(), masterQueue: z.string(), diff --git a/packages/core/src/v3/schemas/schemas.ts b/packages/core/src/v3/schemas/schemas.ts index 4ec559ebf41..5fb85f80ae8 100644 --- a/packages/core/src/v3/schemas/schemas.ts +++ b/packages/core/src/v3/schemas/schemas.ts @@ -292,6 +292,7 @@ export const TaskRunExecutionLazyAttemptPayload = z.object({ attemptCount: z.number().optional(), messageId: z.string(), isTest: z.boolean(), + isReplay: z.boolean().default(false), traceContext: z.record(z.unknown()), environment: z.record(z.string()).optional(), metrics: TaskRunExecutionMetrics.optional(), diff --git a/packages/core/src/v3/semanticInternalAttributes.ts b/packages/core/src/v3/semanticInternalAttributes.ts index 3fb20a06499..2c715a03ea1 100644 --- a/packages/core/src/v3/semanticInternalAttributes.ts +++ b/packages/core/src/v3/semanticInternalAttributes.ts @@ -12,6 +12,7 @@ export const SemanticInternalAttributes = { ATTEMPT_NUMBER: "ctx.attempt.number", RUN_ID: "ctx.run.id", RUN_IS_TEST: "ctx.run.isTest", + RUN_IS_REPLAY: "ctx.run.isReplay", ORIGINAL_RUN_ID: "$original_run_id", BATCH_ID: "ctx.batch.id", TASK_SLUG: "ctx.task.id", diff --git a/packages/core/src/v3/taskContext/index.ts b/packages/core/src/v3/taskContext/index.ts index f76671160a6..92e0194cde9 100644 --- a/packages/core/src/v3/taskContext/index.ts +++ b/packages/core/src/v3/taskContext/index.ts @@ -94,6 +94,7 @@ export class TaskContextAPI { [SemanticInternalAttributes.QUEUE_ID]: this.ctx.queue.id, [SemanticInternalAttributes.RUN_ID]: this.ctx.run.id, [SemanticInternalAttributes.RUN_IS_TEST]: this.ctx.run.isTest, + 
[SemanticInternalAttributes.RUN_IS_REPLAY]: this.ctx.run.isReplay, [SemanticInternalAttributes.BATCH_ID]: this.ctx.batch?.id, [SemanticInternalAttributes.IDEMPOTENCY_KEY]: this.ctx.run.idempotencyKey, }; From e134da7306ef7dc4d7dec3f6b5d5716ee02672bc Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Tue, 28 Apr 2026 11:22:00 +0100 Subject: [PATCH 6/8] fix(run-engine): debounce hot-key lock contention and 5xx feedback loop (#3453) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Changes Three changes in `internal-packages/run-engine/src/engine/systems/debounceSystem.ts`, in order of impact: 1. **Fast-path skip before the lock.** In `handleExistingRun`, do an unlocked read of `delayUntil` (and `createdAt` for the max-duration check) from the run row before entering `runLock.lock("handleDebounce", ...)`. If `newDelayUntil <= currentDelayUntil` and the run is still within its max-duration window, return the existing run immediately without taking the lock. Safe because debounce is monotonic-forward only — a stale read either matches reality or undershoots, both of which decay correctly (re-checked properly inside the lock by whichever caller is actually pushing forward). Trailing-mode triggers carrying `updateData` still take the lock so the data update is applied. 2. **Quantize `newDelayUntil`.** Round the computed `newDelayUntil` to 1-second buckets (configurable via `quantizeNewDelayUntilMs`, set to 0 to disable). Without quantization, every call has a slightly larger `newDelayUntil` than the last and they all pass the fast-path check. With it, concurrent callers on the same key share a target time and ~95% short-circuit. User-visible effect: a debounced run might fire up to 1s earlier than the strict spec — non-issue for typical debounce use cases (chat summarization, batched notifications, etc.). 3. 
**Graceful lock-contention fallback.** Wrap the `runLock.lock(...)` call so `LockAcquisitionTimeoutError` and Redlock `ExecutionError` / `ResourceLockedError` return the existing run id with success instead of propagating a 5xx. Debounce is best-effort: if we can't take the lock, the herd is already updating it for us; fall in line. This kills the 5xx → SDK-retry feedback loop. With (1)+(2) this rarely fires; without them it's the difference between 5xx and 200. Defaults preserve current behaviour aside from quantization (1s) and fast-path (on). Both are configurable via `RunEngineOptions.debounce`. ## ✅ Checklist - [x] I have followed every step in the [contributing guide](https://github.com/triggerdotdev/trigger.dev/blob/main/CONTRIBUTING.md) - [x] The PR title follows the convention. - [x] I ran and tested the code works --- ## Changelog Reduce 5xx feedback loops on hot debounce keys by quantizing `delayUntil`, adding an unlocked fast-path skip before the redlock, and gracefully handling redlock contention in `handleDebounce` so the SDK no longer retries into a herd. 
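The quantize + fast-path interaction described in changes (1) and (2) can be sketched standalone. This is an illustrative model, not the actual `debounceSystem.ts` code: the names `quantize` and `shouldSkipLock` are hypothetical, and the real implementation additionally handles trailing-mode `updateData` and the max-duration window.

```typescript
// Floor an absolute target timestamp to a bucket boundary.
// bucketMs = 0 disables quantization, matching the configurable behaviour
// described in the PR (quantizeNewDelayUntilMs).
function quantize(target: Date, bucketMs: number): Date {
  if (bucketMs <= 0) return target;
  return new Date(Math.floor(target.getTime() / bucketMs) * bucketMs);
}

// Unlocked fast-path check: debounce only ever pushes delayUntil forward,
// so if the new (quantized) target is not later than what is already
// scheduled, the caller can return the existing run without the lock.
function shouldSkipLock(currentDelayUntil: Date | null, newDelayUntil: Date): boolean {
  if (!currentDelayUntil) return false; // nothing scheduled yet: take the lock
  return newDelayUntil.getTime() <= currentDelayUntil.getTime();
}

// Two triggers 300ms apart, both asking for "now + 5s" with 1s buckets,
// land on the same quantized target, so the second one short-circuits.
const bucketMs = 1000;
const t0 = 1_700_000_000_000; // fixed "now" for determinism
const first = quantize(new Date(t0 + 5000), bucketMs);
const second = quantize(new Date(t0 + 300 + 5000), bucketMs);
console.log(shouldSkipLock(first, second)); // true
```

Without quantization, `second` would exceed `first` by exactly 300ms and every caller would pass the monotonic check and contend for the lock; flooring both onto the same 1s boundary is what makes the unlocked skip effective.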
--------- Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com> --- .../debounce-hot-key-lock-contention.md | 8 + apps/webapp/app/env.server.ts | 16 + apps/webapp/app/v3/runEngine.server.ts | 3 + .../run-engine/src/engine/index.ts | 3 + .../src/engine/systems/debounceSystem.ts | 293 +++++++- .../src/engine/tests/debounce.test.ts | 698 ++++++++++++++++++ .../run-engine/src/engine/types.ts | 36 + 7 files changed, 1051 insertions(+), 6 deletions(-) create mode 100644 .server-changes/debounce-hot-key-lock-contention.md diff --git a/.server-changes/debounce-hot-key-lock-contention.md b/.server-changes/debounce-hot-key-lock-contention.md new file mode 100644 index 00000000000..7579ce50adb --- /dev/null +++ b/.server-changes/debounce-hot-key-lock-contention.md @@ -0,0 +1,8 @@ +--- +area: webapp +type: fix +--- + +Reduce 5xx feedback loops on hot debounce keys by quantizing `delayUntil`, +adding an unlocked fast-path skip, and gracefully handling redlock +contention in `handleDebounce` so the SDK no longer retries into a herd. diff --git a/apps/webapp/app/env.server.ts b/apps/webapp/app/env.server.ts index c10446d08ab..031c4795847 100644 --- a/apps/webapp/app/env.server.ts +++ b/apps/webapp/app/env.server.ts @@ -666,6 +666,21 @@ const EnvironmentSchema = z .int() .default(60_000 * 60), // 1 hour + /** + * Bucket size in milliseconds used to quantize the newly computed `delayUntil` + * in the debounce system. Quantization collapses concurrent triggers on the + * same hot debounce key onto the same target time so the unlocked fast-path + * skip is effective. Set to 0 to disable. Default: 1000ms (1s). + */ + RUN_ENGINE_DEBOUNCE_QUANTIZE_NEW_DELAY_UNTIL_MS: z.coerce.number().int().min(0).default(1000), + + /** + * Whether the unlocked fast-path skip is enabled in the debounce system. + * Acts as a kill switch in case the fast-path needs to be disabled in + * production without a redeploy. Default: "1" (enabled). 
+ */ + RUN_ENGINE_DEBOUNCE_FAST_PATH_SKIP_ENABLED: z.string().default("1"), + RUN_ENGINE_WORKER_REDIS_HOST: z .string() .optional() @@ -837,6 +852,7 @@ const EnvironmentSchema = z .default("info"), RUN_ENGINE_TREAT_PRODUCTION_EXECUTION_STALLS_AS_OOM: z.string().default("0"), RUN_ENGINE_READ_REPLICA_SNAPSHOTS_SINCE_ENABLED: z.string().default("0"), + RUN_ENGINE_DEBOUNCE_USE_REPLICA_FOR_FAST_PATH_READ: z.string().default("0"), /** How long should the presence ttl last */ DEV_PRESENCE_SSE_TIMEOUT: z.coerce.number().int().default(30_000), diff --git a/apps/webapp/app/v3/runEngine.server.ts b/apps/webapp/app/v3/runEngine.server.ts index 8db60aed1ac..e97a1dc8ae7 100644 --- a/apps/webapp/app/v3/runEngine.server.ts +++ b/apps/webapp/app/v3/runEngine.server.ts @@ -214,6 +214,9 @@ function createRunEngine() { // Debounce configuration debounce: { maxDebounceDurationMs: env.RUN_ENGINE_MAXIMUM_DEBOUNCE_DURATION_MS, + quantizeNewDelayUntilMs: env.RUN_ENGINE_DEBOUNCE_QUANTIZE_NEW_DELAY_UNTIL_MS, + fastPathSkipEnabled: env.RUN_ENGINE_DEBOUNCE_FAST_PATH_SKIP_ENABLED === "1", + useReplicaForFastPathRead: env.RUN_ENGINE_DEBOUNCE_USE_REPLICA_FOR_FAST_PATH_READ === "1", }, }); diff --git a/internal-packages/run-engine/src/engine/index.ts b/internal-packages/run-engine/src/engine/index.ts index 92cf7365a9c..0da98c3c835 100644 --- a/internal-packages/run-engine/src/engine/index.ts +++ b/internal-packages/run-engine/src/engine/index.ts @@ -324,6 +324,9 @@ export class RunEngine { executionSnapshotSystem: this.executionSnapshotSystem, delayedRunSystem: this.delayedRunSystem, maxDebounceDurationMs: options.debounce?.maxDebounceDurationMs ?? 60 * 60 * 1000, // Default 1 hour + quantizeNewDelayUntilMs: options.debounce?.quantizeNewDelayUntilMs ?? 1000, + fastPathSkipEnabled: options.debounce?.fastPathSkipEnabled ?? true, + useReplicaForFastPathRead: options.debounce?.useReplicaForFastPathRead ?? 
false, }); this.pendingVersionSystem = new PendingVersionSystem({ diff --git a/internal-packages/run-engine/src/engine/systems/debounceSystem.ts b/internal-packages/run-engine/src/engine/systems/debounceSystem.ts index ef711a19577..0e59d1d69df 100644 --- a/internal-packages/run-engine/src/engine/systems/debounceSystem.ts +++ b/internal-packages/run-engine/src/engine/systems/debounceSystem.ts @@ -10,11 +10,17 @@ import { parseNaturalLanguageDuration, parseNaturalLanguageDurationInMs, } from "@trigger.dev/core/v3/isomorphic"; -import { PrismaClientOrTransaction, TaskRun, Waitpoint } from "@trigger.dev/database"; +import { + PrismaClientOrTransaction, + PrismaReplicaClient, + TaskRun, + Waitpoint, +} from "@trigger.dev/database"; import { nanoid } from "nanoid"; import { SystemResources } from "./systems.js"; import { ExecutionSnapshotSystem, getLatestExecutionSnapshot } from "./executionSnapshotSystem.js"; import { DelayedRunSystem } from "./delayedRunSystem.js"; +import { LockAcquisitionTimeoutError } from "../locking.js"; export type DebounceOptions = { key: string; @@ -45,6 +51,22 @@ export type DebounceSystemOptions = { executionSnapshotSystem: ExecutionSnapshotSystem; delayedRunSystem: DelayedRunSystem; maxDebounceDurationMs: number; + /** + * Bucket size in milliseconds used to quantize the newly computed `delayUntil`. + * Set to 0 to disable quantization. + */ + quantizeNewDelayUntilMs?: number; + /** + * When true, read the existing run's `delayUntil` outside the redlock and + * short-circuit if the new (quantized) `delayUntil` is not later than the + * current one. + */ + fastPathSkipEnabled?: boolean; + /** + * When true, route the unlocked fast-path reads (probe + full-run fetch) + * through `readOnlyPrisma` (e.g. an Aurora reader) instead of the writer. 
+ */ + useReplicaForFastPathRead?: boolean; }; export type DebounceResult = @@ -89,6 +111,9 @@ export class DebounceSystem { private readonly executionSnapshotSystem: ExecutionSnapshotSystem; private readonly delayedRunSystem: DelayedRunSystem; private readonly maxDebounceDurationMs: number; + private readonly quantizeNewDelayUntilMs: number; + private readonly fastPathSkipEnabled: boolean; + private readonly useReplicaForFastPathRead: boolean; constructor(options: DebounceSystemOptions) { this.$ = options.resources; @@ -106,6 +131,9 @@ export class DebounceSystem { this.executionSnapshotSystem = options.executionSnapshotSystem; this.delayedRunSystem = options.delayedRunSystem; this.maxDebounceDurationMs = options.maxDebounceDurationMs; + this.quantizeNewDelayUntilMs = Math.max(0, options.quantizeNewDelayUntilMs ?? 1000); + this.fastPathSkipEnabled = options.fastPathSkipEnabled ?? true; + this.useReplicaForFastPathRead = options.useReplicaForFastPathRead ?? false; this.#registerCommands(); } @@ -450,9 +478,264 @@ return 0 debounce: DebounceOptions; tx?: PrismaClientOrTransaction; }): Promise { - return await this.$.runLock.lock("handleDebounce", [existingRunId], async () => { - const prisma = tx ?? this.$.prisma; + const prisma = tx ?? this.$.prisma; + // Reads in the unlocked fast-path can run on `readOnlyPrisma` when + // configured (e.g. an Aurora reader). Replica lag is fine: debounce is + // best-effort and a stale read either falls through to the locked path + // (when delayUntil hasn't replicated yet) or returns the existing run + // (when the run's status is stale). The latter is the same outcome the + // caller would see if their trigger had simply landed a few hundred ms + // earlier, which is within the natural debounce race. Only divert reads + // when the caller isn't inside a tx (where the read needs to see the + // tx's writes). + const fastPathReadPrisma = + tx ?? (this.useReplicaForFastPathRead ? 
this.$.readOnlyPrisma : this.$.prisma); + + // Compute the (quantized) target delayUntil up-front, before taking any lock. + // Quantizing to e.g. 1s buckets collapses many concurrent triggers on the same + // hot debounce key onto the same target time, so the unlocked fast-path skip + // below becomes effective and the redlock is not contended. + const newDelayUntil = this.#computeQuantizedDelayUntil(debounce.delay); + + // Fast-path: read the current delayUntil outside the redlock and short-circuit + // if our (quantized) newDelayUntil isn't later than what's already scheduled. + // Safe because debounce is monotonic-forward only: a stale read either matches + // reality or undershoots, both of which decay correctly (re-checked properly + // inside the lock by whoever is actually pushing forward). + if (this.fastPathSkipEnabled && newDelayUntil) { + const fastPathResult = await this.#tryFastPathSkip({ + existingRunId, + newDelayUntil, + debounce, + prisma: fastPathReadPrisma, + }); + if (fastPathResult) { + return fastPathResult; + } + } + try { + return await this.$.runLock.lock("handleDebounce", [existingRunId], async () => { + return await this.#handleExistingRunLocked({ + existingRunId, + redisKey, + environmentId, + taskIdentifier, + debounce, + newDelayUntil, + prisma, + tx, + }); + }); + } catch (error) { + // Lock contention safety net: if we couldn't take the lock (redlock quorum + // failure or our retry budget exhausted), fall in line with whoever is + // actually updating the run instead of bubbling a 5xx to the SDK and + // amplifying the herd via SDK retries. Debounce is best-effort - dropping + // our contribution to delayUntil here is fine, the herd is updating it for + // us. 
+ if (this.#isLockContentionError(error)) { + return await this.#handleLockContentionFallback({ + existingRunId, + debounce, + error, + prisma, + }); + } + throw error; + } + } + + /** + * Parses the debounce delay and (optionally) quantizes it to a bucket boundary + * by flooring the absolute timestamp. Quantization makes concurrent triggers on + * the same key share a target time, which is what makes the unlocked fast-path + * skip effective. + */ + #computeQuantizedDelayUntil(delay: string): Date | null { + const parsed = parseNaturalLanguageDuration(delay); + if (!parsed) { + return null; + } + if (this.quantizeNewDelayUntilMs <= 0) { + return parsed; + } + const bucket = this.quantizeNewDelayUntilMs; + const quantized = Math.floor(parsed.getTime() / bucket) * bucket; + return new Date(quantized); + } + + #isLockContentionError(error: unknown): boolean { + if (!(error instanceof Error)) return false; + return ( + error instanceof LockAcquisitionTimeoutError || + error.name === "LockAcquisitionTimeoutError" || + error.name === "ExecutionError" || + error.name === "ResourceLockedError" + ); + } + + /** + * Reads `delayUntil`/`status`/`createdAt` outside the redlock and + * short-circuits if the existing scheduled time already covers our target. + * Skips trailing-mode triggers that carry `updateData` since those still need + * the lock to apply their data update. Also falls through when the run has + * already exceeded its max debounce duration so the locked path can return + * `max_duration_exceeded` and let the caller create a new run. + * + * `prisma` may be a read replica - replica lag is acceptable because + * debounce is best-effort. A stale `delayUntil` either matches reality or + * undershoots (we fall through to the locked path); a stale `status` at + * worst returns the existing run, which is the same outcome the caller + * would see if their trigger had landed a few hundred ms earlier. 
+ */ + async #tryFastPathSkip({ + existingRunId, + newDelayUntil, + debounce, + prisma, + }: { + existingRunId: string; + newDelayUntil: Date; + debounce: DebounceOptions; + prisma: PrismaClientOrTransaction | PrismaReplicaClient; + }): Promise { + // Trailing mode with updateData still needs the lock so the data update is + // applied; only short-circuit when there's nothing to update. + if (debounce.mode === "trailing" && debounce.updateData) { + return null; + } + + const probe = await prisma.taskRun.findFirst({ + where: { id: existingRunId }, + select: { status: true, delayUntil: true, createdAt: true }, + }); + if (!probe || probe.status !== "DELAYED" || !probe.delayUntil) { + return null; + } + if (newDelayUntil.getTime() > probe.delayUntil.getTime()) { + return null; + } + + // Fall through to the lock path when newDelayUntil would exceed the run's + // max debounce window so the caller can return max_duration_exceeded and + // create a fresh run. + let maxDurationMs = this.maxDebounceDurationMs; + if (debounce.maxDelay) { + const parsedMaxDelay = parseNaturalLanguageDurationInMs(debounce.maxDelay); + if (parsedMaxDelay !== undefined) { + maxDurationMs = parsedMaxDelay; + } + } + const maxDelayUntilMs = probe.createdAt.getTime() + maxDurationMs; + if (newDelayUntil.getTime() > maxDelayUntilMs) { + return null; + } + + const fullRun = await prisma.taskRun.findFirst({ + where: { id: existingRunId }, + include: { associatedWaitpoint: true }, + }); + if (!fullRun || fullRun.status !== "DELAYED") { + return null; + } + + this.$.logger.debug("handleExistingRun: fast-path skip, existing delayUntil already covers", { + existingRunId, + debounceKey: debounce.key, + newDelayUntil, + currentDelayUntil: fullRun.delayUntil, + }); + + return { + status: "existing", + run: fullRun, + waitpoint: fullRun.associatedWaitpoint, + }; + } + + async #handleLockContentionFallback({ + existingRunId, + debounce, + error, + prisma, + }: { + existingRunId: string; + debounce: 
DebounceOptions; + error: unknown; + prisma: PrismaClientOrTransaction; + }): Promise { + const fullRun = await prisma.taskRun.findFirst({ + where: { id: existingRunId }, + include: { associatedWaitpoint: true }, + }); + + if (!fullRun || fullRun.status !== "DELAYED") { + // The run is no longer in a state we can safely return as "existing" - + // re-throw so the caller surfaces the failure rather than silently + // succeeding on a stale/terminated run. + this.$.logger.warn( + "handleExistingRun: lock contention, but existing run no longer DELAYED - rethrowing", + { + existingRunId, + debounceKey: debounce.key, + status: fullRun?.status, + } + ); + throw error; + } + + // Trailing-mode triggers carrying updateData fall through to the same + // "return existing" path as everything else. Under lock contention some + // other concurrent caller is winning the lock right now and applying + // *its* updateData (which is, by wall-clock, ms-different from ours and + // indistinguishable to the user). Re-throwing here would just produce a + // 5xx that the SDK retries with our now-older payload - more likely to + // result in stale data landing than letting the herd's winner stand. + this.$.logger.warn( + "handleExistingRun: lock contention, returning existing run without rescheduling", + { + existingRunId, + debounceKey: debounce.key, + currentDelayUntil: fullRun.delayUntil, + mode: debounce.mode, + hasUpdateData: !!debounce.updateData, + error: error instanceof Error ? error.message : String(error), + errorName: error instanceof Error ? error.name : undefined, + } + ); + + return { + status: "existing", + run: fullRun, + waitpoint: fullRun.associatedWaitpoint, + }; + } + + /** + * Body of `handleExistingRun` that runs while holding the redlock on the run. + * Receives the (possibly quantized) `newDelayUntil` precomputed by the caller. 
+ */ + async #handleExistingRunLocked({ + existingRunId, + redisKey, + environmentId, + taskIdentifier, + debounce, + newDelayUntil, + prisma, + tx, + }: { + existingRunId: string; + redisKey: string; + environmentId: string; + taskIdentifier: string; + debounce: DebounceOptions; + newDelayUntil: Date | null; + prisma: PrismaClientOrTransaction; + tx?: PrismaClientOrTransaction; + }): Promise { + { // Get the latest execution snapshot let snapshot; try { @@ -514,8 +797,6 @@ return 0 }); } - // Calculate new delay - parseNaturalLanguageDuration returns a Date (now + duration) - const newDelayUntil = parseNaturalLanguageDuration(debounce.delay); if (!newDelayUntil) { this.$.logger.error("handleExistingRun: invalid delay duration", { delay: debounce.delay, @@ -619,7 +900,7 @@ return 0 run: updatedRun, waitpoint: existingRun.associatedWaitpoint, }; - }); + } } /** diff --git a/internal-packages/run-engine/src/engine/tests/debounce.test.ts b/internal-packages/run-engine/src/engine/tests/debounce.test.ts index 1c201c4b4c4..e46f0de07cd 100644 --- a/internal-packages/run-engine/src/engine/tests/debounce.test.ts +++ b/internal-packages/run-engine/src/engine/tests/debounce.test.ts @@ -4,6 +4,7 @@ import { expect } from "vitest"; import { RunEngine } from "../index.js"; import { setTimeout } from "timers/promises"; import { setupAuthenticatedEnvironment, setupBackgroundWorker } from "./setup.js"; +import { createRedisClient } from "@internal/redis"; vi.setConfig({ testTimeout: 60_000 }); @@ -240,6 +241,8 @@ describe("RunEngine debounce", () => { }, debounce: { maxDebounceDurationMs: 60_000, + // Disable quantization so this test can observe sub-second extensions. 
+ quantizeNewDelayUntilMs: 0, }, tracer: trace.getTracer("test", "0.0.0"), }); @@ -2497,5 +2500,700 @@ describe("RunEngine debounce", () => { } } ); + + containerTest( + "Debounce fast-path: subsequent triggers within the same quantization bucket skip the lock", + async ({ prisma, redisOptions }) => { + const authenticatedEnvironment = await setupAuthenticatedEnvironment(prisma, "PRODUCTION"); + + const engine = new RunEngine({ + prisma, + worker: { + redis: redisOptions, + workers: 1, + tasksPerWorker: 10, + pollIntervalMs: 100, + }, + queue: { + redis: redisOptions, + }, + runLock: { + redis: redisOptions, + }, + machines: { + defaultMachine: "small-1x", + machines: { + "small-1x": { + name: "small-1x" as const, + cpu: 0.5, + memory: 0.5, + centsPerMs: 0.0001, + }, + }, + baseCostInCents: 0.0001, + }, + debounce: { + maxDebounceDurationMs: 60_000, + // Wide bucket so the second trigger is guaranteed to land in the same one. + quantizeNewDelayUntilMs: 60_000, + }, + tracer: trace.getTracer("test", "0.0.0"), + }); + + try { + const taskIdentifier = "test-task"; + + await setupBackgroundWorker(engine, authenticatedEnvironment, taskIdentifier); + + const run1 = await engine.trigger( + { + number: 1, + friendlyId: "run_fp1", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "first"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_fp_1", + spanId: "s_fp_1", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "fast-path-key", + delay: "5s", + }, + }, + prisma + ); + + const originalDelayUntil = run1.delayUntil; + assertNonNullable(originalDelayUntil); + + // Update the delayUntil directly to a far-future value, simulating a + // previous trigger that already pushed the bucket forward. 
The next + // call's quantized newDelayUntil will land at-or-before this, so the + // fast-path should skip the lock and leave delayUntil untouched. + const farFuture = new Date(Date.now() + 10 * 60_000); + await prisma.taskRun.update({ + where: { id: run1.id }, + data: { delayUntil: farFuture }, + }); + + const run2 = await engine.trigger( + { + number: 2, + friendlyId: "run_fp2", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "second"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_fp_2", + spanId: "s_fp_2", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "fast-path-key", + delay: "5s", + }, + }, + prisma + ); + + expect(run2.id).toBe(run1.id); + + const updatedRun = await prisma.taskRun.findFirst({ + where: { id: run1.id }, + }); + assertNonNullable(updatedRun); + assertNonNullable(updatedRun.delayUntil); + + // delayUntil must NOT have been bumped backward by the second trigger, + // proving we short-circuited before taking the lock or rescheduling. 
+ expect(updatedRun.delayUntil.getTime()).toBe(farFuture.getTime()); + } finally { + await engine.quit(); + } + } + ); + + containerTest( + "Debounce fast-path: trailing mode with updateData still takes the lock", + async ({ prisma, redisOptions }) => { + const authenticatedEnvironment = await setupAuthenticatedEnvironment(prisma, "PRODUCTION"); + + const engine = new RunEngine({ + prisma, + worker: { + redis: redisOptions, + workers: 1, + tasksPerWorker: 10, + pollIntervalMs: 100, + }, + queue: { + redis: redisOptions, + }, + runLock: { + redis: redisOptions, + }, + machines: { + defaultMachine: "small-1x", + machines: { + "small-1x": { + name: "small-1x" as const, + cpu: 0.5, + memory: 0.5, + centsPerMs: 0.0001, + }, + }, + baseCostInCents: 0.0001, + }, + debounce: { + maxDebounceDurationMs: 60_000, + quantizeNewDelayUntilMs: 60_000, + }, + tracer: trace.getTracer("test", "0.0.0"), + }); + + try { + const taskIdentifier = "test-task"; + + await setupBackgroundWorker(engine, authenticatedEnvironment, taskIdentifier); + + const run1 = await engine.trigger( + { + number: 1, + friendlyId: "run_trfp1", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "first"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_trfp_1", + spanId: "s_trfp_1", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "trailing-fast-path-key", + delay: "5s", + mode: "trailing", + }, + }, + prisma + ); + + // Push delayUntil far forward so the fast-path *would* short-circuit + // for leading mode. Trailing-mode triggers with updateData must still + // take the lock so the data update is applied. 
+ const farFuture = new Date(Date.now() + 10 * 60_000); + await prisma.taskRun.update({ + where: { id: run1.id }, + data: { delayUntil: farFuture }, + }); + + const run2 = await engine.trigger( + { + number: 2, + friendlyId: "run_trfp2", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "second"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_trfp_2", + spanId: "s_trfp_2", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "trailing-fast-path-key", + delay: "5s", + mode: "trailing", + updateData: { + payload: '{"data": "second"}', + payloadType: "application/json", + }, + }, + }, + prisma + ); + + expect(run2.id).toBe(run1.id); + + const updatedRun = await prisma.taskRun.findFirst({ + where: { id: run1.id }, + }); + assertNonNullable(updatedRun); + // Trailing-mode update went through the lock and rewrote the payload. + expect(updatedRun.payload).toBe('{"data": "second"}'); + } finally { + await engine.quit(); + } + } + ); + + containerTest( + "Debounce: quantized newDelayUntil falls on a bucket boundary", + async ({ prisma, redisOptions }) => { + const authenticatedEnvironment = await setupAuthenticatedEnvironment(prisma, "PRODUCTION"); + + const engine = new RunEngine({ + prisma, + worker: { + redis: redisOptions, + workers: 1, + tasksPerWorker: 10, + pollIntervalMs: 100, + }, + queue: { + redis: redisOptions, + }, + runLock: { + redis: redisOptions, + }, + machines: { + defaultMachine: "small-1x", + machines: { + "small-1x": { + name: "small-1x" as const, + cpu: 0.5, + memory: 0.5, + centsPerMs: 0.0001, + }, + }, + baseCostInCents: 0.0001, + }, + debounce: { + maxDebounceDurationMs: 60_000, + quantizeNewDelayUntilMs: 1000, + }, + tracer: trace.getTracer("test", "0.0.0"), + }); + + try { + const taskIdentifier = "test-task"; + + await setupBackgroundWorker(engine, authenticatedEnvironment, 
taskIdentifier); + + const run1 = await engine.trigger( + { + number: 1, + friendlyId: "run_q1", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "first"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_q_1", + spanId: "s_q_1", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "quantize-key", + delay: "5s", + }, + }, + prisma + ); + + // Force a meaningfully-later bucket so the second trigger pushes + // delayUntil forward through the lock. + await prisma.taskRun.update({ + where: { id: run1.id }, + data: { delayUntil: new Date(Date.now() - 1000) }, + }); + + await engine.trigger( + { + number: 2, + friendlyId: "run_q2", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "second"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_q_2", + spanId: "s_q_2", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "quantize-key", + delay: "5s", + }, + }, + prisma + ); + + const updatedRun = await prisma.taskRun.findFirst({ + where: { id: run1.id }, + }); + assertNonNullable(updatedRun); + assertNonNullable(updatedRun.delayUntil); + + // The new delayUntil should be aligned to a 1s bucket boundary. 
+ expect(updatedRun.delayUntil.getTime() % 1000).toBe(0); + } finally { + await engine.quit(); + } + } + ); + + containerTest( + "Debounce: lock contention falls back to returning existing run", + async ({ prisma, redisOptions }) => { + const authenticatedEnvironment = await setupAuthenticatedEnvironment(prisma, "PRODUCTION"); + + const engine = new RunEngine({ + prisma, + worker: { + redis: redisOptions, + workers: 1, + tasksPerWorker: 10, + pollIntervalMs: 100, + }, + queue: { + redis: redisOptions, + }, + runLock: { + redis: redisOptions, + // Force lock acquisition to fail almost immediately so we can + // exercise the contention safety net deterministically. + retryConfig: { + maxAttempts: 0, + baseDelay: 1, + maxDelay: 1, + maxTotalWaitTime: 1, + }, + duration: 30_000, + }, + machines: { + defaultMachine: "small-1x", + machines: { + "small-1x": { + name: "small-1x" as const, + cpu: 0.5, + memory: 0.5, + centsPerMs: 0.0001, + }, + }, + baseCostInCents: 0.0001, + }, + debounce: { + maxDebounceDurationMs: 60_000, + // Disable fast-path so the request is forced through the lock and + // we can prove the contention fallback handles 5xx prevention. 
+ fastPathSkipEnabled: false, + quantizeNewDelayUntilMs: 0, + }, + tracer: trace.getTracer("test", "0.0.0"), + }); + + try { + const taskIdentifier = "test-task"; + + await setupBackgroundWorker(engine, authenticatedEnvironment, taskIdentifier); + + const run1 = await engine.trigger( + { + number: 1, + friendlyId: "run_lc1", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "first"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_lc_1", + spanId: "s_lc_1", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "contention-key", + delay: "5s", + }, + }, + prisma + ); + + // Hold the underlying redlock key from a separate Redis connection so + // the engine's runLock cannot acquire it. Since we configured + // `retryConfig.maxAttempts: 0` and `maxTotalWaitTime: 1`, the second + // trigger should hit the contention fallback rather than bubble a 5xx. + // Note: the prefix template here intentionally matches what the engine + // builds at index.ts:120 (no `?? ""` fallback) so that the keys line up + // even when redisOptions.keyPrefix is undefined. 
+ const blockingRedis = createRedisClient({ + ...redisOptions, + keyPrefix: `${redisOptions.keyPrefix}runlock:`, + }); + + const originalDelayUntil = run1.delayUntil; + assertNonNullable(originalDelayUntil); + + try { + const blockResult = await blockingRedis.set( + run1.id, + "test-blocker", + "PX", + 30_000, + "NX" + ); + expect(blockResult).toBe("OK"); + + const run2 = await engine.trigger( + { + number: 2, + friendlyId: "run_lc2", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "second"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_lc_2", + spanId: "s_lc_2", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 5000), + debounce: { + key: "contention-key", + delay: "5s", + }, + }, + prisma + ); + + // We did NOT 5xx; we returned the existing run. + expect(run2.id).toBe(run1.id); + + // Prove the fallback actually ran rather than the lock being acquired + // normally: the second trigger could not push delayUntil forward + // because rescheduling is skipped on contention. + const updatedRun = await prisma.taskRun.findFirst({ + where: { id: run1.id }, + }); + assertNonNullable(updatedRun); + assertNonNullable(updatedRun.delayUntil); + expect(updatedRun.delayUntil.getTime()).toBe(originalDelayUntil.getTime()); + } finally { + await blockingRedis.del(run1.id); + await blockingRedis.quit(); + } + } finally { + await engine.quit(); + } + } + ); + + // Reproduces the hot-key contention from TRI-8758: fires N concurrent + // triggers on the same debounce key after the run is already DELAYED. + // + // - fixed=true: fast-path skip + 1s quantization on. The herd collapses on + // the unlocked read and onto the same quantized newDelayUntil, so almost + // every call short-circuits and `taskRun.update` is barely written. + // - fixed=false: fast-path off and quantization off (closer to the + // pre-fix behaviour). 
The lock-contention fallback (also part of this + // PR) still catches herd lock failures; this case validates that even + // without the fast-path the system stays correct under stress, just at + // higher Redlock cost. + for (const fixed of [true, false]) { + containerTest( + `Debounce hot-key stress (fixed=${fixed}): N concurrent triggers stay correct`, + async ({ prisma, redisOptions }) => { + const authenticatedEnvironment = await setupAuthenticatedEnvironment(prisma, "PRODUCTION"); + + const engine = new RunEngine({ + prisma, + worker: { + redis: redisOptions, + workers: 1, + tasksPerWorker: 10, + pollIntervalMs: 100, + }, + queue: { + redis: redisOptions, + }, + runLock: { + redis: redisOptions, + }, + machines: { + defaultMachine: "small-1x", + machines: { + "small-1x": { + name: "small-1x" as const, + cpu: 0.5, + memory: 0.5, + centsPerMs: 0.0001, + }, + }, + baseCostInCents: 0.0001, + }, + debounce: { + maxDebounceDurationMs: 10 * 60_000, + fastPathSkipEnabled: fixed, + // 1s buckets - same as the real default - or 0 to mimic the + // pre-fix behaviour where every concurrent trigger has a slightly + // larger newDelayUntil than the last. + quantizeNewDelayUntilMs: fixed ? 1000 : 0, + }, + tracer: trace.getTracer("test", "0.0.0"), + }); + + try { + const taskIdentifier = "test-task"; + await setupBackgroundWorker(engine, authenticatedEnvironment, taskIdentifier); + + // Seed the debounce key with an initial run, then push delayUntil far + // forward so the herd lands well inside the existing window. 
+ const seed = await engine.trigger( + { + number: 0, + friendlyId: "run_stress0", + environment: authenticatedEnvironment, + taskIdentifier, + payload: '{"data": "seed"}', + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: "t_stress_seed", + spanId: "s_stress_seed", + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 30_000), + debounce: { + key: "stress-key", + delay: "30s", + }, + }, + prisma + ); + + // Move delayUntil to a small but safe future offset. The herd's + // newDelayUntil (now + 30s) will be meaningfully later than the + // current value, so the fast-path-off branch reschedules. The + // ~2s buffer keeps the run DELAYED long enough to absorb startup + // jitter before the first trigger writes delayUntil = now + 30s. + await prisma.taskRun.update({ + where: { id: seed.id }, + data: { delayUntil: new Date(Date.now() + 2_000) }, + }); + + // Subscribe to `runDelayRescheduled` so we can count how many times + // the herd actually pushed `delayUntil` forward. Each event corresponds + // to a successful reschedule under the lock - the fast-path/contention + // fallback paths skip the reschedule entirely. We use the engine's + // public eventBus, which is the same observable interface other tests + // in this repo (ttl, trigger, cancelling, waitpoints) use. 
+ let rescheduleCount = 0; + engine.eventBus.on("runDelayRescheduled", () => { + rescheduleCount++; + }); + + const N = 40; + const triggers = Array.from({ length: N }, (_, i) => + engine.trigger( + { + number: i + 1, + friendlyId: `run_stress${i + 1}`, + environment: authenticatedEnvironment, + taskIdentifier, + payload: `{"data": "stress-${i}"}`, + payloadType: "application/json", + context: {}, + traceContext: {}, + traceId: `t_stress_${i}`, + spanId: `s_stress_${i}`, + workerQueue: "main", + queue: "task/test-task", + isTest: false, + tags: [], + delayUntil: new Date(Date.now() + 30_000), + debounce: { + key: "stress-key", + delay: "30s", + }, + }, + prisma + ) + ); + + const start = performance.now(); + const settled = await Promise.allSettled(triggers); + const durationMs = performance.now() - start; + + const fulfilled = settled.filter( + (r): r is PromiseFulfilledResult<{ id: string }> => r.status === "fulfilled" + ); + const rejected = settled.filter((r) => r.status === "rejected"); + + // No 5xx feedback loop: every concurrent trigger succeeds and + // returns the existing run id. + expect(rejected).toHaveLength(0); + expect(fulfilled).toHaveLength(N); + for (const r of fulfilled) { + expect(r.value.id).toBe(seed.id); + } + + // Only one row, regardless of contention path. + const runs = await prisma.taskRun.findMany({ + where: { taskIdentifier, runtimeEnvironmentId: authenticatedEnvironment.id }, + }); + expect(runs.length).toBe(1); + + // Wait briefly for any in-flight reschedule events to flush before + // asserting on the count. EventBus emit is synchronous here, but we + // yield a turn of the event loop just to be safe. + await new Promise((resolve) => setImmediate(resolve)); + + console.log( + `[stress fixed=${fixed}] N=${N} duration=${durationMs.toFixed( + 0 + )}ms reschedules=${rescheduleCount}` + ); + + if (fixed) { + // With fast-path + quantization: the herd collapses onto the + // same quantized newDelayUntil.
Trigger #1 takes the lock and + // pushes delayUntil; every subsequent trigger sees a covering + // delayUntil on the unlocked read and short-circuits without + // emitting a reschedule. So at most one reschedule fires. + expect(rescheduleCount).toBeLessThanOrEqual(1); + } + } finally { + await engine.quit(); + } + } + ); + } }); diff --git a/internal-packages/run-engine/src/engine/types.ts b/internal-packages/run-engine/src/engine/types.ts index 255643ef2f5..15e63368d2e 100644 --- a/internal-packages/run-engine/src/engine/types.ts +++ b/internal-packages/run-engine/src/engine/types.ts @@ -129,6 +129,42 @@ export type RunEngineOptions = { redis?: RedisOptions; /** Maximum duration in milliseconds that a run can be debounced. Default: 1 hour */ maxDebounceDurationMs?: number; + /** + * Bucket size in milliseconds used to quantize the newly computed `delayUntil`. + * Quantization collapses many concurrent triggers on the same hot debounce key + * into the same target time, so that the unlocked fast-path skip becomes + * effective and the redlock on `handleDebounce` is not contended. + * + * A run might fire up to `quantizeNewDelayUntilMs` earlier than the strict + * `now + delay` spec. + * + * Set to 0 to disable quantization. + * + * Default: 1000 (1s). + */ + quantizeNewDelayUntilMs?: number; + /** + * Whether to read the existing run's `delayUntil` outside of the redlock and + * short-circuit when the new (quantized) `delayUntil` is not later than the + * current one. Trailing-mode triggers carrying `updateData` always bypass + * this fast path and take the lock so payload/metadata/tag updates still + * land on the run. + * + * Default: true. + */ + fastPathSkipEnabled?: boolean; + /** + * Whether to route the unlocked fast-path reads (probe + full-run fetch) + * through `readOnlyPrisma` (e.g. an Aurora reader) instead of the writer. 
+ * Safe because debounce is best-effort: a stale `delayUntil` falls + * through to the locked path (the locked path re-checks under the lock), + * and a stale `status` at worst returns the existing run, which is the + * same outcome the caller would see if their trigger had landed a few + * hundred ms earlier. + * + * Default: false. + */ + useReplicaForFastPathRead?: boolean; }; /** If not set then checkpoints won't ever be used */ retryWarmStartThresholdMs?: number; From c69e939c3405690a2cc475f22c2bdb88fd321c27 Mon Sep 17 00:00:00 2001 From: Eric Allam Date: Tue, 28 Apr 2026 12:35:55 +0100 Subject: [PATCH 7/8] feat: Sessions - bidirectional durable agent streams (#3417) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit > ⚠️ **Not released yet.** This PR is the server-side foundation only. The SDK changes that customers will actually use (`chat.agent` migration, `chat.createStartSessionAction`, `useTriggerChatTransport` updates) live on a separate branch and ship together in an upcoming `@trigger.dev/sdk` prerelease. Until that prerelease is published, this surface is reachable only via direct HTTP. ## What this gives Trigger.dev users A new first-class primitive, **Session**, for durable, task-bound, bidirectional I/O that outlives any single run. Sessions are the run manager for `chat.agent` going forward, and they unblock anything else that needs "one identifier, many runs over time" with a stable channel pair the client can write to and subscribe to. ### Use cases unblocked - **Chat agents that persist across many runs.** One session per chat (keyed on your own `chatId` via `externalId`), turns 1..N attach to the same Session, the UI subscribes once and keeps receiving output as new runs take over. - **Approval loops and long-running tasks with user feedback.** The task waits on `.in`, the client writes to `.in`, the server enforces no-writes-after-close. 
- **Workflow progress streams that live past the run.** Subscribe to `.out` after the task finishes to replay history. - **Resume-next-day flows.** A session is a durable row, not a transient stream. Send a message a day later and the server triggers a fresh run on the same session. ### How it works (Session-as-run-manager) A Session row is task-bound (`taskIdentifier` + `triggerConfig` are required) and owns its current run via `currentRunId` + `currentRunVersion` for optimistic claim. Three trigger paths: 1. **Session create** — `POST /api/v1/sessions` creates the row and triggers the first run synchronously. 2. **Append-time probe** — `POST /realtime/v1/sessions/:session/in/append` checks if the current run is alive; if it has terminated (idle exit, crash, etc.), the server triggers a new run before processing the append. 3. **End-and-continue handoff** — `POST /api/v1/sessions/:session/end-and-continue`, called by the running agent, triggers a fresh run and atomically swaps `currentRunId`. Used by `chat.requestUpgrade()` for version handoffs. Every triggered run is recorded in the `SessionRun` audit table with a reason (`initial`, `continuation`, `upgrade`, `manual`). ## Public API surface ### Control plane - `POST /api/v1/sessions` — create. Idempotent on `(env, externalId)`. Triggers the first run, returns the session and a session-scoped public access token. Returns 409 if the upserted row is already closed. - `GET /api/v1/sessions/:session` — retrieve by friendlyId (`session_abc...`) or by your own externalId (server disambiguates by prefix). - `GET /api/v1/sessions` — list with filters (`type`, `tag`, `taskIdentifier`, `externalId`, derived `status` ACTIVE/CLOSED/EXPIRED, created-at range) and cursor pagination. Backed by ClickHouse. - `PATCH /api/v1/sessions/:session` — update tags / metadata / externalId. - `POST /api/v1/sessions/:session/close` — terminate. Idempotent, hard-blocks new server-brokered writes. 
- `POST /api/v1/sessions/:session/end-and-continue` — agent-only handoff to a fresh run. ### Realtime - `PUT /realtime/v1/sessions/:session/:io` — initialize a channel. Returns S2 credentials in headers so high-throughput clients can write directly to S2. - `GET /realtime/v1/sessions/:session/:io` — SSE subscribe. Supports Last-Event-ID resume and an opt-in `X-Peek-Settled: 1` header that fast-closes the stream when the upstream is already settled (`trigger:turn-complete`), eliminating the long-poll wait on reconnect-on-reload paths. - `POST /realtime/v1/sessions/:session/:io/append` — server-side appends. - `POST /api/v1/runs/:runFriendlyId/session-streams/wait` — runs wait on a session stream as a waitpoint, with a race-check to avoid suspending if data already landed. ### Auth scopes `sessions` is a new resource type. `read:sessions:{id}`, `write:sessions:{id}`, `admin:sessions:{id}` flow through the existing JWT validator. Session-scoped public access tokens minted by the server replace browser-held trigger-task tokens for chat-style flows — the browser never sees a run identifier or a run-scoped token in steady state. ## What's coming after this PR - **SDK + chat.agent migration**: separate branch, separate PR, ships in the next `@trigger.dev/sdk` prerelease alongside this server deploy. Customers using the prerelease `chat.agent` will follow the [upgrade guide](https://github.com/triggerdotdev/trigger.dev/blob/docs/tri-7532-ai-sdk-chat-transport-and-chat-task-system/docs/ai-chat/upgrade-guide.mdx). - **Dashboard surfaces**: dedicated agent list, agent playground, agent view on the run dashboard. Tracking separately. ## Implementation notes - **Postgres `Session` table**: scalar scoping columns (`projectId`, `runtimeEnvironmentId`, `environmentType`, `organizationId`) without FKs, matching the January TaskRun FK-removal decision. Point-lookup indexes only — list queries go to ClickHouse. Terminal markers (`closedAt`, `expiresAt`) are write-once.
- **ClickHouse `sessions_v1`**: ReplacingMergeTree, partitioned by month, ordered by `(org_id, project_id, environment_id, created_at, session_id)`. Tags indexed via `tokenbf_v1` skip index. - **`SessionsReplicationService`**: mirrors `RunsReplicationService` exactly — leader-locked logical replication consumer, `ConcurrentFlushScheduler`, retry with exponential backoff + jitter, identical metric shape. Dedicated slot + publication so the two consume independently. - **S2 keys**: `sessions/{addressingKey}/{out|in}`. The existing `runs/{runId}/{streamId}` key format for run-scoped streams is untouched. - **Optimistic claim**: `ensureRunForSession` triggers a run upfront (cheap to cancel if it loses the race), then attempts an `updateMany` keyed on `currentRunVersion`. Loser cancels its triggered run and reuses the winner's. No DB lock held across the trigger. ### What did NOT change Run-scoped `streams.pipe` / `streams.input` and the existing `/realtime/v1/streams/{runId}/...` routes are unchanged. Sessions are net-new — not a reshaping of the current streams API. ## Deploy notes - Set `SESSION_REPLICATION_CLICKHOUSE_URL` and `SESSION_REPLICATION_ENABLED=1` to enable the replication consumer. - The `Session` table needs `REPLICA IDENTITY FULL` set on the prod source DB before the publication is created (same one-time DDL we did for `TaskRun`). Required for delete events to carry full column values. - Cross-form authorization on the `GET /api/v1/sessions/:session` loader (a JWT minted for either form authorizes both URL forms). Action routes are URL-form-specific, matching how the SDK mints PATs. ## Verification - Webapp typecheck clean (10/10). - `apps/webapp/test/sessionsReplicationService.test.ts` — round-trip tests for insert/update/delete through Postgres logical replication into ClickHouse via testcontainers. 
- Live end-to-end against local dev: create + retrieve (both forms) + update + close, `.out.initialize` + `.out.append` x2 + `.in.send` + `.out.subscribe` over SSE, list with all filter combinations + pagination, `end-and-continue` swap, `X-Peek-Settled` fast-close (verified in browser via reconnect-on-reload and via curl). Replicated row lands in ClickHouse within ~1s. - Multi-round Devin + CodeRabbit review feedback addressed (read-after-write paths use `prisma` writer, info-leak on auth-routes masked as 403, peek-settled discriminator parsing fix, etc.). ## Test plan - [ ] `pnpm run typecheck --filter webapp` - [ ] `pnpm run test --filter webapp ./test/sessionsReplicationService.test.ts --run` - [ ] Start the webapp with `SESSION_REPLICATION_CLICKHOUSE_URL` and `SESSION_REPLICATION_ENABLED=1`. Confirm the slot and publication auto-create on boot. - [ ] `POST /api/v1/sessions` and verify the row replicates to `trigger_dev.sessions_v1` within a couple of seconds. - [ ] `POST /api/v1/sessions/:id/close`, then confirm `POST /realtime/v1/sessions/:id/out/append` returns 400. - [ ] Reuse a closed session's `externalId` on `POST /api/v1/sessions` and confirm 409. - [ ] `GET /realtime/v1/sessions/:id/out` with `X-Peek-Settled: 1` after a turn completes and confirm `X-Session-Settled: true` response header + immediate close. 
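The optimistic claim described in the implementation notes can be sketched as below. This is an illustrative in-memory TypeScript model, not the real webapp code: `Session`, `ensureRunForSession`, and the `cancelRun` callback are hypothetical stand-ins, and the real implementation performs the compare-and-swap through Prisma's `updateMany` keyed on `currentRunVersion`.

```typescript
// Sketch of the trigger-first optimistic claim: each caller triggers a run
// speculatively (cheap to cancel), then attempts to install it as the
// session's current run via a version-guarded swap. The loser cancels its
// run and adopts the winner's. All names here are illustrative.

type Session = { id: string; currentRunId: string | null; currentRunVersion: number };

function ensureRunForSession(
  session: Session,
  candidateRunId: string, // a run we already triggered, before the swap
  readVersion: number, // currentRunVersion observed before triggering
  cancelRun: (runId: string) => void
): string {
  // Stands in for `updateMany({ where: { id, currentRunVersion: readVersion } })`:
  // the swap only succeeds for the caller whose read version is still current.
  if (session.currentRunVersion === readVersion) {
    session.currentRunId = candidateRunId;
    session.currentRunVersion = readVersion + 1;
    return candidateRunId; // winner: its triggered run becomes current
  }
  cancelRun(candidateRunId); // loser: cancel the speculative run...
  return session.currentRunId!; // ...and reuse the winner's
}

// Two concurrent callers both observe version 0 and trigger a run each:
const session: Session = { id: "session_1", currentRunId: null, currentRunVersion: 0 };
const cancelled: string[] = [];
const a = ensureRunForSession(session, "run_a", 0, (id) => cancelled.push(id));
const b = ensureRunForSession(session, "run_b", 0, (id) => cancelled.push(id));
console.log(a, b, cancelled); // run_a wins; run_b is cancelled, its caller returns run_a
```

Triggering before claiming means no DB lock is held across the trigger call, at the cost of occasionally cancelling a just-triggered run when two callers race.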
--- .changeset/session-primitive.md | 5 + .server-changes/session-primitive.md | 6 + apps/webapp/app/entry.server.tsx | 38 + apps/webapp/app/env.server.ts | 32 + ...uns.$runFriendlyId.session-streams.wait.ts | 188 +++++ .../routes/api.v1.sessions.$session.close.ts | 79 ++ ...i.v1.sessions.$session.end-and-continue.ts | 135 ++++ .../app/routes/api.v1.sessions.$session.ts | 108 +++ apps/webapp/app/routes/api.v1.sessions.ts | 238 ++++++ ...ealtime.v1.sessions.$session.$io.append.ts | 183 +++++ .../realtime.v1.sessions.$session.$io.ts | 181 +++++ ...streams.$runId.$target.$streamId.append.ts | 8 + .../app/services/authorization.server.ts | 2 +- .../realtime/mintSessionToken.server.ts | 40 + .../realtime/s2realtimeStreams.server.ts | 207 ++++- .../realtime/sessionRunManager.server.ts | 375 +++++++++ .../app/services/realtime/sessions.server.ts | 127 +++ apps/webapp/app/services/realtime/types.ts | 11 + .../routeBuilders/apiBuilder.server.ts | 40 +- .../sessionStreamWaitpointCache.server.ts | 147 ++++ .../sessionsReplicationInstance.server.ts | 72 ++ .../sessionsReplicationService.server.ts | 763 ++++++++++++++++++ .../clickhouseSessionsRepository.server.ts | 254 ++++++ .../sessionsRepository.server.ts | 198 +++++ .../app/v3/services/adminWorker.server.ts | 6 + .../test/sessionsReplicationService.test.ts | 212 +++++ .../schema/030_create_sessions_v1.sql | 42 + internal-packages/clickhouse/src/index.ts | 18 + internal-packages/clickhouse/src/sessions.ts | 184 +++++ .../migration.sql | 33 + .../migration.sql | 31 + .../migration.sql | 3 + .../database/prisma/schema.prisma | 86 ++ packages/core/src/v3/isomorphic/friendlyId.ts | 1 + packages/core/src/v3/schemas/api.ts | 245 ++++++ 35 files changed, 4284 insertions(+), 14 deletions(-) create mode 100644 .changeset/session-primitive.md create mode 100644 .server-changes/session-primitive.md create mode 100644 apps/webapp/app/routes/api.v1.runs.$runFriendlyId.session-streams.wait.ts create mode 100644 
apps/webapp/app/routes/api.v1.sessions.$session.close.ts create mode 100644 apps/webapp/app/routes/api.v1.sessions.$session.end-and-continue.ts create mode 100644 apps/webapp/app/routes/api.v1.sessions.$session.ts create mode 100644 apps/webapp/app/routes/api.v1.sessions.ts create mode 100644 apps/webapp/app/routes/realtime.v1.sessions.$session.$io.append.ts create mode 100644 apps/webapp/app/routes/realtime.v1.sessions.$session.$io.ts create mode 100644 apps/webapp/app/services/realtime/mintSessionToken.server.ts create mode 100644 apps/webapp/app/services/realtime/sessionRunManager.server.ts create mode 100644 apps/webapp/app/services/realtime/sessions.server.ts create mode 100644 apps/webapp/app/services/sessionStreamWaitpointCache.server.ts create mode 100644 apps/webapp/app/services/sessionsReplicationInstance.server.ts create mode 100644 apps/webapp/app/services/sessionsReplicationService.server.ts create mode 100644 apps/webapp/app/services/sessionsRepository/clickhouseSessionsRepository.server.ts create mode 100644 apps/webapp/app/services/sessionsRepository/sessionsRepository.server.ts create mode 100644 apps/webapp/test/sessionsReplicationService.test.ts create mode 100644 internal-packages/clickhouse/schema/030_create_sessions_v1.sql create mode 100644 internal-packages/clickhouse/src/sessions.ts create mode 100644 internal-packages/database/prisma/migrations/20260419000000_add_sessions_table/migration.sql create mode 100644 internal-packages/database/prisma/migrations/20260426190818_sessions_as_run_manager/migration.sql create mode 100644 internal-packages/database/prisma/migrations/20260426190819_session_current_run_id_index/migration.sql diff --git a/.changeset/session-primitive.md b/.changeset/session-primitive.md new file mode 100644 index 00000000000..0f56fc65ad1 --- /dev/null +++ b/.changeset/session-primitive.md @@ -0,0 +1,5 @@ +--- +"@trigger.dev/core": patch +--- + +Add `SessionId` friendly ID generator and schemas for the new durable Session 
primitive. Exported from `@trigger.dev/core/v3/isomorphic` alongside `RunId`, `BatchId`, etc. Ships the `CreateSessionStreamWaitpoint` request/response schemas alongside the main Session CRUD. diff --git a/.server-changes/session-primitive.md b/.server-changes/session-primitive.md new file mode 100644 index 00000000000..a4d8b606ee2 --- /dev/null +++ b/.server-changes/session-primitive.md @@ -0,0 +1,6 @@ +--- +area: webapp +type: feature +--- + +Add the `Session` primitive — a durable, task-bound, bidirectional I/O channel that outlives a single run and acts as the run manager for `chat.agent`. Ships the Postgres `Session` + `SessionRun` tables, ClickHouse `sessions_v1` + replication service, the `sessions` JWT scope, and the public CRUD + realtime routes (`/api/v1/sessions`, `/realtime/v1/sessions/:session/:io`) including `end-and-continue` for server-orchestrated run handoffs and session-stream waitpoints. diff --git a/apps/webapp/app/entry.server.tsx b/apps/webapp/app/entry.server.tsx index 87171011e03..436ec288211 100644 --- a/apps/webapp/app/entry.server.tsx +++ b/apps/webapp/app/entry.server.tsx @@ -23,6 +23,44 @@ import { registerRunEngineEventBusHandlers, setupBatchQueueCallbacks, } from "./v3/runEngineHandlers.server"; +import { sessionsReplicationInstance } from "./services/sessionsReplicationInstance.server"; +import { signalsEmitter } from "./services/signals.server"; + +// Start the sessions replication service (subscribes to the logical replication +// slot, runs leader election, flushes to ClickHouse). Done at entry level so it +// runs deterministically on webapp boot rather than lazily via a singleton +// reference elsewhere in the module graph. +if (sessionsReplicationInstance && env.SESSION_REPLICATION_ENABLED === "1") { + // Capture a non-nullable reference so the shutdown closure below + // doesn't need to re-null-check (TS narrowing doesn't follow through + // an inner function scope). 
+ const replicator = sessionsReplicationInstance; + replicator + .start() + .then(() => { + console.log("🗃️ Sessions replication service started"); + }) + .catch((error) => { + console.error("🗃️ Sessions replication service failed to start", { + error, + }); + }); + + // Wrap the async shutdown in a sync handler that catches rejections — + // SIGTERM/SIGINT fire during process teardown, and an unhandled + // promise rejection from `_replicationClient.stop()` there would + // bubble up past the process exit. Matches the pattern in + // dynamicFlushScheduler.server.ts. + const shutdownSessionsReplication = () => { + replicator.shutdown().catch((error) => { + console.error("🗃️ Sessions replication service shutdown error", { + error, + }); + }); + }; + signalsEmitter.on("SIGTERM", shutdownSessionsReplication); + signalsEmitter.on("SIGINT", shutdownSessionsReplication); +} const ABORT_DELAY = 30000; diff --git a/apps/webapp/app/env.server.ts b/apps/webapp/app/env.server.ts index 031c4795847..1807f0a54c4 100644 --- a/apps/webapp/app/env.server.ts +++ b/apps/webapp/app/env.server.ts @@ -1237,6 +1237,38 @@ const EnvironmentSchema = z RUN_REPLICATION_DISABLE_PAYLOAD_INSERT: z.string().default("0"), RUN_REPLICATION_DISABLE_ERROR_FINGERPRINTING: z.string().default("0"), + // Session replication (Postgres → ClickHouse sessions_v1). Shares Redis + // with the runs replicator for leader locking but has its own slot and + // publication so the two consume independently. 
+ SESSION_REPLICATION_CLICKHOUSE_URL: z.string().optional(), + SESSION_REPLICATION_ENABLED: z.string().default("0"), + SESSION_REPLICATION_SLOT_NAME: z.string().default("sessions_to_clickhouse_v1"), + SESSION_REPLICATION_PUBLICATION_NAME: z + .string() + .default("sessions_to_clickhouse_v1_publication"), + SESSION_REPLICATION_MAX_FLUSH_CONCURRENCY: z.coerce.number().int().default(1), + SESSION_REPLICATION_FLUSH_INTERVAL_MS: z.coerce.number().int().default(1000), + SESSION_REPLICATION_FLUSH_BATCH_SIZE: z.coerce.number().int().default(100), + SESSION_REPLICATION_LEADER_LOCK_TIMEOUT_MS: z.coerce.number().int().default(30_000), + SESSION_REPLICATION_LEADER_LOCK_EXTEND_INTERVAL_MS: z.coerce.number().int().default(10_000), + SESSION_REPLICATION_LEADER_LOCK_ADDITIONAL_TIME_MS: z.coerce.number().int().default(10_000), + SESSION_REPLICATION_LEADER_LOCK_RETRY_INTERVAL_MS: z.coerce.number().int().default(500), + SESSION_REPLICATION_ACK_INTERVAL_SECONDS: z.coerce.number().int().default(10), + SESSION_REPLICATION_LOG_LEVEL: z + .enum(["log", "error", "warn", "info", "debug"]) + .default("info"), + SESSION_REPLICATION_CLICKHOUSE_LOG_LEVEL: z + .enum(["log", "error", "warn", "info", "debug"]) + .default("info"), + SESSION_REPLICATION_WAIT_FOR_ASYNC_INSERT: z.string().default("0"), + SESSION_REPLICATION_KEEP_ALIVE_ENABLED: z.string().default("0"), + SESSION_REPLICATION_KEEP_ALIVE_IDLE_SOCKET_TTL_MS: z.coerce.number().int().optional(), + SESSION_REPLICATION_MAX_OPEN_CONNECTIONS: z.coerce.number().int().default(10), + SESSION_REPLICATION_INSERT_STRATEGY: z.enum(["insert", "insert_async"]).default("insert"), + SESSION_REPLICATION_INSERT_MAX_RETRIES: z.coerce.number().int().default(3), + SESSION_REPLICATION_INSERT_BASE_DELAY_MS: z.coerce.number().int().default(100), + SESSION_REPLICATION_INSERT_MAX_DELAY_MS: z.coerce.number().int().default(2000), + // Clickhouse CLICKHOUSE_URL: z.string(), CLICKHOUSE_KEEP_ALIVE_ENABLED: z.string().default("1"), diff --git 
a/apps/webapp/app/routes/api.v1.runs.$runFriendlyId.session-streams.wait.ts b/apps/webapp/app/routes/api.v1.runs.$runFriendlyId.session-streams.wait.ts new file mode 100644 index 00000000000..18034caab47 --- /dev/null +++ b/apps/webapp/app/routes/api.v1.runs.$runFriendlyId.session-streams.wait.ts @@ -0,0 +1,188 @@ +import { json } from "@remix-run/server-runtime"; +import { + CreateSessionStreamWaitpointRequestBody, + type CreateSessionStreamWaitpointResponseBody, +} from "@trigger.dev/core/v3"; +import { WaitpointId } from "@trigger.dev/core/v3/isomorphic"; +import { z } from "zod"; +import { $replica } from "~/db.server"; +import { createWaitpointTag, MAX_TAGS_PER_WAITPOINT } from "~/models/waitpointTag.server"; +import { + canonicalSessionAddressingKey, + isSessionFriendlyIdForm, + resolveSessionByIdOrExternalId, +} from "~/services/realtime/sessions.server"; +import { S2RealtimeStreams } from "~/services/realtime/s2realtimeStreams.server"; +import { getRealtimeStreamInstance } from "~/services/realtime/v1StreamsGlobal.server"; +import { + addSessionStreamWaitpoint, + removeSessionStreamWaitpoint, +} from "~/services/sessionStreamWaitpointCache.server"; +import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server"; +import { logger } from "~/services/logger.server"; +import { parseDelay } from "~/utils/delays"; +import { resolveIdempotencyKeyTTL } from "~/utils/idempotencyKeys.server"; +import { engine } from "~/v3/runEngine.server"; +import { ServiceValidationError } from "~/v3/services/baseService.server"; + +const ParamsSchema = z.object({ + runFriendlyId: z.string(), +}); + +const { action, loader } = createActionApiRoute( + { + params: ParamsSchema, + body: CreateSessionStreamWaitpointRequestBody, + maxContentLength: 1024 * 10, // 10KB + method: "POST", + }, + async ({ authentication, body, params }) => { + try { + const run = await $replica.taskRun.findFirst({ + where: { + friendlyId: params.runFriendlyId, + runtimeEnvironmentId: 
authentication.environment.id, + }, + select: { + id: true, + friendlyId: true, + realtimeStreamsVersion: true, + }, + }); + + if (!run) { + return json({ error: "Run not found" }, { status: 404 }); + } + + // Row-optional addressing — see the .out / .in.append handlers. + // The waitpoint cache + S2 stream key derive from the row's + // canonical identity (externalId if set, else friendlyId), so + // the agent's wait registration and the append-side drain + // converge regardless of which URL form each side used. + const maybeSession = await resolveSessionByIdOrExternalId( + $replica, + authentication.environment.id, + body.session + ); + + if (!maybeSession && isSessionFriendlyIdForm(body.session)) { + return json({ error: "Session not found" }, { status: 404 }); + } + + const addressingKey = canonicalSessionAddressingKey(maybeSession, body.session); + + const idempotencyKeyExpiresAt = body.idempotencyKeyTTL + ? resolveIdempotencyKeyTTL(body.idempotencyKeyTTL) + : undefined; + + const timeout = await parseDelay(body.timeout); + + const bodyTags = typeof body.tags === "string" ? [body.tags] : body.tags; + + if (bodyTags && bodyTags.length > MAX_TAGS_PER_WAITPOINT) { + throw new ServiceValidationError( + `Waitpoints can only have ${MAX_TAGS_PER_WAITPOINT} tags, you're trying to set ${bodyTags.length}.` + ); + } + + if (bodyTags && bodyTags.length > 0) { + for (const tag of bodyTags) { + await createWaitpointTag({ + tag, + environmentId: authentication.environment.id, + projectId: authentication.environment.projectId, + }); + } + } + + // Step 1: Create the waitpoint. + const result = await engine.createManualWaitpoint({ + environmentId: authentication.environment.id, + projectId: authentication.environment.projectId, + idempotencyKey: body.idempotencyKey, + idempotencyKeyExpiresAt, + timeout, + tags: bodyTags, + }); + + // Step 2: Register the waitpoint on the session channel so the next + // append fires it. 
Keyed by (addressingKey, io) — the canonical + // string for the row. The append handler drains by the same + // canonical key, so writers and readers converge regardless of + // which URL form the agent vs. the appending caller used. + const ttlMs = timeout ? timeout.getTime() - Date.now() : undefined; + await addSessionStreamWaitpoint( + addressingKey, + body.io, + result.waitpoint.id, + ttlMs && ttlMs > 0 ? ttlMs : undefined + ); + + // Step 3: Race-check. If a record landed on the channel before this + // .wait() call, complete the waitpoint synchronously with that data + // and remove the pending registration. + if (!result.isCached) { + try { + // Session streams are always v2 (S2) — the writer in + // `appendPartToSessionStream` and the SSE subscribe both + // hardcode "v2", so the race-check reader has to match. + // Don't fall through to the run's own `realtimeStreamsVersion`, + // which only describes the run's run-scoped streams. + const realtimeStream = getRealtimeStreamInstance(authentication.environment, "v2"); + + if (realtimeStream instanceof S2RealtimeStreams) { + const records = await realtimeStream.readSessionStreamRecords( + addressingKey, + body.io, + body.lastSeqNum + ); + + if (records.length > 0) { + const record = records[0]!; + + await engine.completeWaitpoint({ + id: result.waitpoint.id, + output: { + value: record.data, + type: "application/json", + isError: false, + }, + }); + + await removeSessionStreamWaitpoint( + addressingKey, + body.io, + result.waitpoint.id + ); + } + } + } catch (error) { + // Non-fatal: pending registration stays in Redis; the next append + // will complete the waitpoint via the append handler path. Log so + // a broken race-check doesn't silently degrade to timeout-only. 
+ logger.warn("session-stream wait race-check failed", { + addressingKey, + io: body.io, + waitpointId: WaitpointId.toFriendlyId(result.waitpoint.id), + error, + }); + } + } + + return json({ + waitpointId: WaitpointId.toFriendlyId(result.waitpoint.id), + isCached: result.isCached, + }); + } catch (error) { + if (error instanceof ServiceValidationError) { + return json({ error: error.message }, { status: 422 }); + } + // Don't forward raw internal error messages (could leak Prisma/engine + // details). Log server-side and return a generic 500. + logger.error("Failed to create session-stream waitpoint", { error }); + return json({ error: "Something went wrong" }, { status: 500 }); + } + } +); + +export { action, loader }; diff --git a/apps/webapp/app/routes/api.v1.sessions.$session.close.ts b/apps/webapp/app/routes/api.v1.sessions.$session.close.ts new file mode 100644 index 00000000000..16d8a6d93d1 --- /dev/null +++ b/apps/webapp/app/routes/api.v1.sessions.$session.close.ts @@ -0,0 +1,79 @@ +import { json } from "@remix-run/server-runtime"; +import { + CloseSessionRequestBody, + type RetrieveSessionResponseBody, +} from "@trigger.dev/core/v3"; +import { z } from "zod"; +import { $replica, prisma } from "~/db.server"; +import { + resolveSessionByIdOrExternalId, + serializeSessionWithFriendlyRunId, +} from "~/services/realtime/sessions.server"; +import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server"; + +const ParamsSchema = z.object({ + session: z.string(), +}); + +const { action, loader } = createActionApiRoute( + { + params: ParamsSchema, + body: CloseSessionRequestBody, + maxContentLength: 1024, + method: "POST", + allowJWT: true, + corsStrategy: "all", + authorization: { + action: "admin", + resource: (params) => ({ sessions: params.session }), + superScopes: ["admin:sessions", "admin:all", "admin"], + }, + }, + async ({ authentication, params, body }) => { + const existing = await resolveSessionByIdOrExternalId( + $replica, + 
authentication.environment.id, + params.session + ); + + if (!existing) { + return json({ error: "Session not found" }, { status: 404 }); + } + + // Idempotent: if already closed, return the current row without clobbering + // the original closedAt / closedReason. + if (existing.closedAt) { + return json( + await serializeSessionWithFriendlyRunId(existing) + ); + } + + // `closedAt: null` on the where clause makes the update conditional at + // the DB level. Two concurrent closes race through the earlier read, + // but only one can win this update — the loser hits `count === 0` and + // falls back to reading the winning row. Closedness is write-once. + const { count } = await prisma.session.updateMany({ + where: { id: existing.id, closedAt: null }, + data: { + closedAt: new Date(), + closedReason: body.reason ?? null, + }, + }); + + if (count === 0) { + const final = await prisma.session.findFirst({ where: { id: existing.id } }); + if (!final) return json({ error: "Session not found" }, { status: 404 }); + return json( + await serializeSessionWithFriendlyRunId(final) + ); + } + + const updated = await prisma.session.findFirst({ where: { id: existing.id } }); + if (!updated) return json({ error: "Session not found" }, { status: 404 }); + return json( + await serializeSessionWithFriendlyRunId(updated) + ); + } +); + +export { action, loader }; diff --git a/apps/webapp/app/routes/api.v1.sessions.$session.end-and-continue.ts b/apps/webapp/app/routes/api.v1.sessions.$session.end-and-continue.ts new file mode 100644 index 00000000000..cdc9c9e8dc7 --- /dev/null +++ b/apps/webapp/app/routes/api.v1.sessions.$session.end-and-continue.ts @@ -0,0 +1,135 @@ +import { json } from "@remix-run/server-runtime"; +import { + EndAndContinueSessionRequestBody, + type EndAndContinueSessionResponseBody, +} from "@trigger.dev/core/v3"; +import { z } from "zod"; +import { $replica, prisma } from "~/db.server"; +import { logger } from "~/services/logger.server"; +import { swapSessionRun } 
from "~/services/realtime/sessionRunManager.server"; +import { resolveSessionByIdOrExternalId } from "~/services/realtime/sessions.server"; +import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server"; + +const ParamsSchema = z.object({ + session: z.string(), +}); + +// POST /api/v1/sessions/:session/end-and-continue +// +// Generic "the running run is exiting; please trigger a fresh one for +// this session and swap `currentRunId` to it" endpoint. The agent calls +// this from `chat.requestUpgrade` and other planned-handoff paths. The +// transport's `.out` SSE keeps streaming across the swap because S2 is +// keyed on the session, not the run — v1's last chunks land, v2's new +// chunks land on the same stream. +// +// Auth: `write:sessions:{ext}` — the running agent's internal API key +// (PRIVATE) bypasses authorization; a browser holding the session PAT +// can also reach this endpoint, which is fine: if you have the session +// PAT, you own the chat. +const { action, loader } = createActionApiRoute( + { + params: ParamsSchema, + body: EndAndContinueSessionRequestBody, + method: "POST", + maxContentLength: 1024, + allowJWT: true, + corsStrategy: "all", + // Resolved before authorization so the auth scope can expand to both + // addressing forms (friendlyId + externalId). Handler reads the row + // from `resource` instead of re-fetching. + findResource: async (params, auth) => + resolveSessionByIdOrExternalId($replica, auth.environment.id, params.session), + authorization: { + action: "write", + resource: (params, _, __, ___, session) => { + const ids = new Set([params.session]); + if (session) { + ids.add(session.friendlyId); + if (session.externalId) ids.add(session.externalId); + } + return { sessions: [...ids] }; + }, + superScopes: ["write:sessions", "write:all", "admin"], + }, + }, + async ({ authentication, params, body, resource: session }) => { + if (!session) { + // Unreachable — `findResource` 404s before this runs. Type narrow. 
+ return json({ error: "Session not found" }, { status: 404 }); + } + + if (session.closedAt) { + return json( + { error: "Cannot end-and-continue a closed session" }, + { status: 400 } + ); + } + + if (session.expiresAt && session.expiresAt.getTime() < Date.now()) { + return json( + { error: "Cannot end-and-continue an expired session" }, + { status: 400 } + ); + } + + // The wire `callingRunId` is a friendlyId (that's what the agent + // SDK exposes via `ctx.run.id`). Internally `Session.currentRunId` + // stores the TaskRun.id cuid, so resolve before handing to the + // optimistic-claim service. + const callingRun = await $replica.taskRun.findFirst({ + where: { + friendlyId: body.callingRunId, + runtimeEnvironmentId: authentication.environment.id, + }, + select: { id: true }, + }); + if (!callingRun) { + return json({ error: "callingRunId not found in this environment" }, { status: 404 }); + } + + try { + // Body's `reason` is free-form for forward-compat (audit metadata + // only); narrow into the closed `EnsureRunReason` set, defaulting + // to `"manual"` for unknown labels. + const reason: "initial" | "continuation" | "upgrade" | "manual" = + body.reason === "upgrade" || + body.reason === "continuation" || + body.reason === "initial" || + body.reason === "manual" + ? body.reason + : "manual"; + + const result = await swapSessionRun({ + session, + callingRunId: callingRun.id, + environment: authentication.environment, + reason, + }); + + // Read-after-write: the swap just triggered (or claimed) the + // run on the writer, so read it from `prisma` rather than + // `$replica`. A replica miss here would silently fall back to + // returning the internal cuid, which the public API contract + // says is a friendlyId. + const run = await prisma.taskRun.findFirst({ + where: { id: result.runId }, + select: { friendlyId: true }, + }); + + const responseBody: EndAndContinueSessionResponseBody = { + runId: run?.friendlyId ?? 
result.runId, + swapped: result.swapped, + }; + return json(responseBody); + } catch (error) { + logger.error("Failed end-and-continue", { + sessionId: session.id, + error, + }); + return json({ error: "Failed to swap session run" }, { status: 500 }); + } + } +); + +export { action, loader }; diff --git a/apps/webapp/app/routes/api.v1.sessions.$session.ts b/apps/webapp/app/routes/api.v1.sessions.$session.ts new file mode 100644 index 00000000000..800ee32b99b --- /dev/null +++ b/apps/webapp/app/routes/api.v1.sessions.$session.ts @@ -0,0 +1,108 @@ +import { json } from "@remix-run/server-runtime"; +import { + type RetrieveSessionResponseBody, + UpdateSessionRequestBody, +} from "@trigger.dev/core/v3"; +import { Prisma } from "@trigger.dev/database"; +import { z } from "zod"; +import { $replica, prisma } from "~/db.server"; +import { + resolveSessionByIdOrExternalId, + serializeSessionWithFriendlyRunId, +} from "~/services/realtime/sessions.server"; +import { + createActionApiRoute, + createLoaderApiRoute, +} from "~/services/routeBuilders/apiBuilder.server"; + +const ParamsSchema = z.object({ + session: z.string(), +}); + +export const loader = createLoaderApiRoute( + { + params: ParamsSchema, + allowJWT: true, + corsStrategy: "all", + findResource: async (params, auth) => { + return resolveSessionByIdOrExternalId($replica, auth.environment.id, params.session); + }, + authorization: { + action: "read", + resource: (session) => ({ sessions: [session.friendlyId, session.externalId ?? 
""] }), + superScopes: ["read:sessions", "read:all", "admin"], + }, + }, + async ({ resource: session }) => { + return json( + await serializeSessionWithFriendlyRunId(session) + ); + } +); + +const { action } = createActionApiRoute( + { + params: ParamsSchema, + body: UpdateSessionRequestBody, + maxContentLength: 1024 * 32, + method: "PATCH", + allowJWT: true, + corsStrategy: "all", + authorization: { + action: "admin", + resource: (params) => ({ sessions: params.session }), + superScopes: ["admin:sessions", "admin:all", "admin"], + }, + }, + async ({ authentication, params, body }) => { + const existing = await resolveSessionByIdOrExternalId( + $replica, + authentication.environment.id, + params.session + ); + + if (!existing) { + return json({ error: "Session not found" }, { status: 404 }); + } + + try { + const updated = await prisma.session.update({ + where: { id: existing.id }, + data: { + ...(body.tags !== undefined ? { tags: body.tags } : {}), + ...(body.metadata !== undefined + ? { + metadata: + body.metadata === null + ? Prisma.JsonNull + : (body.metadata as Prisma.InputJsonValue), + } + : {}), + ...(body.externalId !== undefined ? { externalId: body.externalId } : {}), + }, + }); + + return json( + await serializeSessionWithFriendlyRunId(updated) + ); + } catch (error) { + // A duplicate externalId in the same environment violates the + // `(runtimeEnvironmentId, externalId)` unique constraint. Surface that + // as a 409 rather than a generic 500. + if ( + error instanceof Prisma.PrismaClientKnownRequestError && + error.code === "P2002" && + Array.isArray((error.meta as { target?: string[] })?.target) && + ((error.meta as { target?: string[] }).target ?? 
[]).includes("externalId") + ) { + return json( + { error: "A session with this externalId already exists in this environment" }, + { status: 409 } + ); + } + throw error; + } + } +); + +export { action }; diff --git a/apps/webapp/app/routes/api.v1.sessions.ts b/apps/webapp/app/routes/api.v1.sessions.ts new file mode 100644 index 00000000000..6251ff3ac38 --- /dev/null +++ b/apps/webapp/app/routes/api.v1.sessions.ts @@ -0,0 +1,238 @@ +import { json } from "@remix-run/server-runtime"; +import { + CreateSessionRequestBody, + type CreatedSessionResponseBody, + ListSessionsQueryParams, + type ListSessionsResponseBody, + type SessionItem, + type SessionStatus, +} from "@trigger.dev/core/v3"; +import { SessionId } from "@trigger.dev/core/v3/isomorphic"; +import type { Prisma, Session } from "@trigger.dev/database"; +import { $replica, prisma, type PrismaClient } from "~/db.server"; +import { clickhouseClient } from "~/services/clickhouseInstance.server"; +import { logger } from "~/services/logger.server"; +import { mintSessionToken } from "~/services/realtime/mintSessionToken.server"; +import { + ensureRunForSession, + type SessionTriggerConfig, +} from "~/services/realtime/sessionRunManager.server"; +import { serializeSession } from "~/services/realtime/sessions.server"; +import { SessionsRepository } from "~/services/sessionsRepository/sessionsRepository.server"; +import { + createActionApiRoute, + createLoaderApiRoute, +} from "~/services/routeBuilders/apiBuilder.server"; +import { ServiceValidationError } from "~/v3/services/common.server"; + +function asArray(value: T | T[] | undefined): T[] | undefined { + if (value === undefined) return undefined; + return Array.isArray(value) ? 
value : [value]; +} + +export const loader = createLoaderApiRoute( + { + searchParams: ListSessionsQueryParams, + allowJWT: true, + corsStrategy: "all", + authorization: { + action: "read", + resource: (_, __, searchParams) => ({ tasks: searchParams["filter[taskIdentifier]"] }), + superScopes: ["read:sessions", "read:all", "admin"], + }, + findResource: async () => 1, + }, + async ({ searchParams, authentication }) => { + const repository = new SessionsRepository({ + clickhouse: clickhouseClient, + prisma: $replica as PrismaClient, + }); + + // `page[after]` is the forward cursor, `page[before]` is the backward + // cursor. The repository internally keys off `{cursor, direction}`. + const cursor = searchParams["page[after]"] ?? searchParams["page[before]"]; + const direction = searchParams["page[before]"] ? "backward" : "forward"; + + const { sessions: rows, pagination } = await repository.listSessions({ + organizationId: authentication.environment.organizationId, + projectId: authentication.environment.projectId, + environmentId: authentication.environment.id, + types: asArray(searchParams["filter[type]"]), + tags: asArray(searchParams["filter[tags]"]), + taskIdentifiers: asArray(searchParams["filter[taskIdentifier]"]), + externalId: searchParams["filter[externalId]"], + statuses: asArray(searchParams["filter[status]"]) as SessionStatus[] | undefined, + period: searchParams["filter[createdAt][period]"], + from: searchParams["filter[createdAt][from]"], + to: searchParams["filter[createdAt][to]"], + page: { + size: searchParams["page[size]"], + cursor, + direction, + }, + }); + + return json({ + data: rows.map((row) => + serializeSession({ + ...row, + // Columns the list query doesn't select — filled so `serializeSession` + // can operate on a narrowed payload without type errors. 
+ projectId: authentication.environment.projectId, + environmentType: authentication.environment.type, + organizationId: authentication.environment.organizationId, + } as Session) + ), + pagination: { + ...(pagination.nextCursor ? { next: pagination.nextCursor } : {}), + ...(pagination.previousCursor ? { previous: pagination.previousCursor } : {}), + }, + }); + } +); + +const { action } = createActionApiRoute( + { + body: CreateSessionRequestBody, + method: "POST", + maxContentLength: 1024 * 32, // 32KB — metadata is the only thing that grows + // Secret-key only. Customer's server (typically wrapping + // `chat.createStartSessionAction`) owns session creation so any + // authorization decision (per-user/plan/quota) sits server-side + // alongside whatever DB write the customer pairs with the create. + // The session-scoped PAT returned in the response body is what the + // browser uses thereafter against `.in/append`, `.out` SSE, + // `end-and-continue`, etc. + corsStrategy: "all", + }, + async ({ authentication, body }) => { + try { + const { id, friendlyId } = SessionId.generate(); + + // Idempotent on (env, externalId): two concurrent POSTs converge + // to the same row. We refresh `triggerConfig` on the cached path + // so newly-deployed schema changes (e.g. an updated + // `clientDataSchema` on the agent) propagate to subsequent runs + // — the next `ensureRunForSession` reads back the latest config. + let session: Session; + let isCached = false; + + const triggerConfigJson = body.triggerConfig as unknown as Prisma.InputJsonValue; + + if (body.externalId) { + session = await prisma.session.upsert({ + where: { + runtimeEnvironmentId_externalId: { + runtimeEnvironmentId: authentication.environment.id, + externalId: body.externalId, + }, + }, + create: { + id, + friendlyId, + externalId: body.externalId, + type: body.type, + taskIdentifier: body.taskIdentifier, + triggerConfig: triggerConfigJson, + tags: body.tags ?? 
[], + metadata: body.metadata as Prisma.InputJsonValue | undefined, + expiresAt: body.expiresAt ?? null, + projectId: authentication.environment.projectId, + runtimeEnvironmentId: authentication.environment.id, + environmentType: authentication.environment.type, + organizationId: authentication.environment.organizationId, + }, + update: { triggerConfig: triggerConfigJson }, + }); + isCached = session.id !== id; + } else { + session = await prisma.session.create({ + data: { + id, + friendlyId, + type: body.type, + taskIdentifier: body.taskIdentifier, + triggerConfig: triggerConfigJson, + tags: body.tags ?? [], + metadata: body.metadata as Prisma.InputJsonValue | undefined, + expiresAt: body.expiresAt ?? null, + projectId: authentication.environment.projectId, + runtimeEnvironmentId: authentication.environment.id, + environmentType: authentication.environment.type, + organizationId: authentication.environment.organizationId, + }, + }); + } + + // Reject create on a closed session. The upsert path will return + // an already-closed row when the caller reuses an externalId, and + // without this guard `ensureRunForSession` would trigger a fresh + // run that can't receive `.in` input (the append handler 409s on + // closed sessions). Force the caller to use a different externalId + // — `close` is one-way. + if (session.closedAt) { + return json( + { error: "Session is closed; use a different externalId to create a new session" }, + { status: 409 } + ); + } + + // Session is task-bound — every session has a live run by + // construction. `ensureRunForSession` is idempotent: on the + // cached path it sees `currentRunId` is alive and returns it + // without re-triggering. + const ensureResult = await ensureRunForSession({ + session, + environment: authentication.environment, + reason: isCached ? "continuation" : "initial", + }); + + // Read-after-write: the run was just triggered in this request, + // so go to the writer rather than $replica. 
Replica lag here + // would null this out and turn a successful create into a 500. + const run = await prisma.taskRun.findFirst({ + where: { id: ensureResult.runId }, + select: { friendlyId: true }, + }); + if (!run) { + throw new Error(`Triggered run ${ensureResult.runId} not found`); + } + + // Mint a session-scoped PAT keyed on the addressing string the + // transport will use everywhere (`.in/append`, `.out` SSE, + // `end-and-continue`). For sessions with an externalId, that's + // the externalId; otherwise the friendlyId. Mirrors the + // canonical addressing key used server-side. + const addressingKey = session.externalId ?? session.friendlyId; + const publicAccessToken = await mintSessionToken( + authentication.environment, + addressingKey + ); + + const sessionItem: SessionItem = { + ...serializeSession(session), + triggerConfig: session.triggerConfig as unknown as SessionTriggerConfig, + currentRunId: run.friendlyId, + }; + + const responseBody: CreatedSessionResponseBody = { + ...sessionItem, + runId: run.friendlyId, + publicAccessToken, + isCached, + }; + + return json(responseBody, { + status: isCached ? 
200 : 201, + }); + } catch (error) { + if (error instanceof ServiceValidationError) { + return json({ error: error.message }, { status: 422 }); + } + logger.error("Failed to create session", { error }); + return json({ error: "Something went wrong" }, { status: 500 }); + } + } +); + +export { action }; diff --git a/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.append.ts b/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.append.ts new file mode 100644 index 00000000000..4251baae91e --- /dev/null +++ b/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.append.ts @@ -0,0 +1,183 @@ +import { json } from "@remix-run/server-runtime"; +import { tryCatch } from "@trigger.dev/core/utils"; +import { nanoid } from "nanoid"; +import { z } from "zod"; +import { $replica } from "~/db.server"; +import { logger } from "~/services/logger.server"; +import { S2RealtimeStreams } from "~/services/realtime/s2realtimeStreams.server"; +import { ensureRunForSession } from "~/services/realtime/sessionRunManager.server"; +import { + canonicalSessionAddressingKey, + resolveSessionByIdOrExternalId, +} from "~/services/realtime/sessions.server"; +import { getRealtimeStreamInstance } from "~/services/realtime/v1StreamsGlobal.server"; +import { drainSessionStreamWaitpoints } from "~/services/sessionStreamWaitpointCache.server"; +import { createActionApiRoute } from "~/services/routeBuilders/apiBuilder.server"; +import { engine } from "~/v3/runEngine.server"; +import { ServiceValidationError } from "~/v3/services/common.server"; + +const ParamsSchema = z.object({ + session: z.string(), + io: z.enum(["out", "in"]), +}); + +// POST: server-side append of a single record to a session channel. Mirrors +// the existing /realtime/v1/streams/:runId/:target/:streamId/append route, +// scoped to a Session primitive. +// S2 enforces a 1 MiB per-record limit (metered as +// `8 + 2*H + Σ(header name+value) + body`). 
We cap the raw HTTP body at +// 512 KiB so the JSON wrapper (`{"data":"...","id":"..."}`), string +// escaping, and any future per-record header additions all stay comfortably +// below S2's ceiling. See https://s2.dev/docs/limits. +const MAX_APPEND_BODY_BYTES = 1024 * 512; + +const { action, loader } = createActionApiRoute( + { + params: ParamsSchema, + method: "POST", + maxContentLength: MAX_APPEND_BODY_BYTES, + allowJWT: true, + corsStrategy: "all", + // Sessions are task-bound (created by `POST /api/v1/sessions` which + // also triggers the first run). The row exists before any caller + // can reach `.in/append` — no row, no append. Resolved here so the + // authorization scope can expand to both addressing forms (friendlyId + // + externalId) and the handler can skip its own lookup. + findResource: async (params, auth) => + resolveSessionByIdOrExternalId($replica, auth.environment.id, params.session), + authorization: { + action: "write", + // Authorize against the union of the URL form, friendlyId, and + // externalId so a JWT scoped to any form authorizes any URL. + resource: (params, _, __, ___, session) => { + const ids = new Set([params.session]); + if (session) { + ids.add(session.friendlyId); + if (session.externalId) ids.add(session.externalId); + } + return { sessions: [...ids] }; + }, + superScopes: ["write:sessions", "write:all", "admin"], + }, + }, + async ({ request, params, authentication, resource: session }) => { + if (!session) { + // Unreachable — `findResource` short-circuits to 404 before this + // handler runs. Type-narrow the rest of the body. 
+ return new Response("Session not found", { status: 404 }); + } + + if (session.closedAt) { + return json( + { ok: false, error: "Cannot append to a closed session" }, + { status: 400 } + ); + } + + if (session.expiresAt && session.expiresAt.getTime() < Date.now()) { + return json( + { ok: false, error: "Cannot append to an expired session" }, + { status: 400 } + ); + } + + const realtimeStream = getRealtimeStreamInstance(authentication.environment, "v2"); + + if (!(realtimeStream instanceof S2RealtimeStreams)) { + return json( + { ok: false, error: "Session channels require the S2 realtime backend" }, + { status: 501 } + ); + } + + // Probe + ensure a live run before appending. The append itself is + // run-independent (S2 stream is durable, keyed on the session) but + // the message is useless if no run is alive to consume it. The + // probe is a single Prisma read; ensureRunForSession is no-op when + // currentRunId is alive, so the steady-state cost is one extra + // read in the hot path. + // + // Best-effort: if ensureRunForSession throws (e.g. the trigger + // call fails transiently), still append to S2 — the record is + // durable and the next append will retry the ensure step. Don't + // surface the error to the caller; the SSE tail just won't deliver + // it until a run boots. + const [ensureError] = await tryCatch( + ensureRunForSession({ + session, + environment: authentication.environment, + reason: "continuation", + }) + ); + if (ensureError) { + logger.error("Failed to ensureRunForSession on .in/append", { + sessionId: session.id, + externalId: session.externalId, + error: ensureError, + }); + } + + const addressingKey = canonicalSessionAddressingKey(session, params.session); + + const part = await request.text(); + const partId = request.headers.get("X-Part-Id") ?? 
nanoid(7); + + const [appendError] = await tryCatch( + realtimeStream.appendPartToSessionStream(part, partId, addressingKey, params.io) + ); + + if (appendError) { + if (appendError instanceof ServiceValidationError) { + return json( + { ok: false, error: appendError.message }, + { status: appendError.status ?? 422 } + ); + } + return json({ ok: false, error: appendError.message }, { status: 500 }); + } + + // Fire any run-scoped waitpoints registered against this channel. Best + // effort — a failure here must not fail the append (the record is + // durable in S2; the SSE tail will still deliver it). Waitpoints are + // keyed on the canonical addressing key the agent registered with via + // `sessions.open(...).in.wait()`, so writers and readers converge + // regardless of which URL form they used. + const [drainError, waitpointIds] = await tryCatch( + drainSessionStreamWaitpoints(addressingKey, params.io) + ); + if (drainError) { + logger.error("Failed to drain session stream waitpoints", { + addressingKey, + io: params.io, + error: drainError, + }); + } else if (waitpointIds && waitpointIds.length > 0) { + await Promise.all( + waitpointIds.map(async (waitpointId) => { + const [completeError] = await tryCatch( + engine.completeWaitpoint({ + id: waitpointId, + output: { + value: part, + type: "application/json", + isError: false, + }, + }) + ); + if (completeError) { + logger.error("Failed to complete session stream waitpoint", { + addressingKey, + io: params.io, + waitpointId, + error: completeError, + }); + } + }) + ); + } + + return json({ ok: true }, { status: 200 }); + } +); + +export { action, loader }; diff --git a/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.ts b/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.ts new file mode 100644 index 00000000000..c04992f7f14 --- /dev/null +++ b/apps/webapp/app/routes/realtime.v1.sessions.$session.$io.ts @@ -0,0 +1,181 @@ +import { json } from "@remix-run/server-runtime"; +import { z } from "zod"; 
+import { $replica } from "~/db.server"; +import { getRequestAbortSignal } from "~/services/httpAsyncStorage.server"; +import { S2RealtimeStreams } from "~/services/realtime/s2realtimeStreams.server"; +import { + canonicalSessionAddressingKey, + isSessionFriendlyIdForm, + resolveSessionByIdOrExternalId, +} from "~/services/realtime/sessions.server"; +import { getRealtimeStreamInstance } from "~/services/realtime/v1StreamsGlobal.server"; +import { + createActionApiRoute, + createLoaderApiRoute, +} from "~/services/routeBuilders/apiBuilder.server"; + +const ParamsSchema = z.object({ + session: z.string(), + io: z.enum(["out", "in"]), +}); + +// PUT: initialize the S2 channel for this (session, io) pair — returns S2 +// credentials in response headers so the caller can write/read directly +// against S2. GET is handled by the loader below. +const { action } = createActionApiRoute( + { + params: ParamsSchema, + method: "PUT", + allowJWT: true, + corsStrategy: "all", + authorization: { + action: "write", + resource: (params) => ({ sessions: params.session }), + superScopes: ["write:sessions", "write:all", "admin"], + }, + }, + async ({ params, authentication }) => { + // Row-optional addressing. The agent calls PUT initialize as part + // of `session.out.writer()`, by which time it has already created + // the row at bind, so a missing row here is an unusual case + // (manual init from outside chat.agent). Require a real row only + // for opaque friendlyIds, and treat closedAt as a soft reject only + // when a row exists. The S2 stream key is built from the row's + // canonical key (externalId if set, else friendlyId) so writers + // and readers converge regardless of URL form. 
+ const maybeSession = await resolveSessionByIdOrExternalId( + $replica, + authentication.environment.id, + params.session + ); + + if (!maybeSession && isSessionFriendlyIdForm(params.session)) { + return new Response("Session not found", { status: 404 }); + } + + if (maybeSession?.closedAt) { + return new Response("Cannot initialize a channel on a closed session", { + status: 400, + }); + } + + const realtimeStream = getRealtimeStreamInstance(authentication.environment, "v2"); + + if (!(realtimeStream instanceof S2RealtimeStreams)) { + return new Response("Session channels require the S2 realtime backend", { + status: 501, + }); + } + + const addressingKey = canonicalSessionAddressingKey(maybeSession, params.session); + + const { responseHeaders } = await realtimeStream.initializeSessionStream( + addressingKey, + params.io + ); + + return json({ version: "v2" }, { status: 202, headers: responseHeaders }); + } +); + +// GET: SSE subscribe to a session channel. HEAD returns the last chunk index +// for resume semantics, mirroring the existing run-stream route. +// +// Subscribes are row-optional: the chat.agent transport opens the SSE on +// `chatId` (externalId) before the agent has booted and upserted the +// Session row. The S2 stream is keyed on the row's *canonical* identity +// (externalId if set, else friendlyId) so two callers addressing the +// same row via different URL forms converge on the same stream. We +// short-circuit to 404 only for opaque `session_*` friendlyIds (those +// must come from a real mint). 
+const loader = createLoaderApiRoute( + { + params: ParamsSchema, + allowJWT: true, + corsStrategy: "all", + findResource: async (params, auth) => { + const row = await resolveSessionByIdOrExternalId( + $replica, + auth.environment.id, + params.session + ); + if (!row && isSessionFriendlyIdForm(params.session)) { + return undefined; // 404 — opaque friendlyId must reference a real row + } + // Non-null wrapper so missing row doesn't 404 for externalId form. + return { + row, + addressingKey: canonicalSessionAddressingKey(row, params.session), + }; + }, + authorization: { + action: "read", + resource: ({ row, addressingKey }) => { + const ids = new Set([addressingKey]); + if (row) { + ids.add(row.friendlyId); + if (row.externalId) ids.add(row.externalId); + } + return { sessions: [...ids] }; + }, + superScopes: ["read:sessions", "read:all", "admin"], + }, + }, + async ({ params, request, authentication, resource }) => { + const realtimeStream = getRealtimeStreamInstance(authentication.environment, "v2"); + + if (!(realtimeStream instanceof S2RealtimeStreams)) { + return new Response("Session channels require the S2 realtime backend", { + status: 501, + }); + } + + if (request.method === "HEAD") { + // No last-chunk-index on the S2 backend (clients resume via Last-Event-ID + // on the SSE stream directly). Return 200 with a zero index for + // compatibility with the run-stream shape. + return new Response(null, { + status: 200, + headers: { "X-Last-Chunk-Index": "0" }, + }); + } + + const lastEventId = request.headers.get("Last-Event-ID") ?? undefined; + + const timeoutInSecondsRaw = request.headers.get("Timeout-Seconds"); + let timeoutInSeconds: number | undefined; + if (timeoutInSecondsRaw) { + // `Number()` rejects `"10abc"` as NaN; `parseInt` would silently accept + // the trailing garbage and bypass the bounds checks below. 
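As a standalone illustration of the `Number()` vs `parseInt` distinction the comment relies on (bounds 1–600 as in the handler):

```typescript
// Illustrative helper mirroring the validation below — not the route's code.
function parseTimeoutSeconds(raw: string): number | { error: string } {
  // Number("10abc") is NaN, so trailing garbage is rejected outright;
  // parseInt would return 10 and silently bypass the bounds checks.
  const parsed = Number(raw);
  if (!Number.isFinite(parsed) || !Number.isInteger(parsed)) {
    return { error: "Invalid timeout seconds" };
  }
  if (parsed < 1) return { error: "Timeout seconds must be greater than 0" };
  if (parsed > 600) return { error: "Timeout seconds must be less than 600" };
  return parsed;
}
```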
+ const parsed = Number(timeoutInSecondsRaw); + if (!Number.isFinite(parsed) || !Number.isInteger(parsed)) { + return new Response("Invalid timeout seconds", { status: 400 }); + } + if (parsed < 1) { + return new Response("Timeout seconds must be greater than 0", { status: 400 }); + } + if (parsed > 600) { + return new Response("Timeout seconds must be less than 600", { status: 400 }); + } + timeoutInSeconds = parsed; + } + + // Opt-in: only consider the settled-peek shortcut when the client + // asks for it via `X-Peek-Settled: 1`. Reconnect-on-reload paths + // (`TriggerChatTransport.reconnectToStream`) set this; the active + // send-a-message path (`sendMessages → subscribeToSessionStream`) + // does not — otherwise the peek races with the newly-triggered + // turn's first chunk and the SSE closes before records land. + const peekSettled = request.headers.get("X-Peek-Settled") === "1"; + + return realtimeStream.streamResponseFromSessionStream( + request, + resource.addressingKey, + params.io, + getRequestAbortSignal(), + { lastEventId, timeoutInSeconds, peekSettled } + ); + } +); + +export { action, loader }; diff --git a/apps/webapp/app/routes/realtime.v1.streams.$runId.$target.$streamId.append.ts b/apps/webapp/app/routes/realtime.v1.streams.$runId.$target.$streamId.append.ts index facb6dd664f..deefbc20773 100644 --- a/apps/webapp/app/routes/realtime.v1.streams.$runId.$target.$streamId.append.ts +++ b/apps/webapp/app/routes/realtime.v1.streams.$runId.$target.$streamId.append.ts @@ -13,9 +13,17 @@ const ParamsSchema = z.object({ streamId: z.string(), }); +// S2 enforces a 1 MiB per-record limit (metered as +// `8 + 2*H + Σ(header name+value) + body`). Cap the raw HTTP body at +// 512 KiB so the JSON wrapper, string escaping, and any future per-record +// header additions all stay well under S2's ceiling. +// See https://s2.dev/docs/limits. 
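A rough sanity check of that budget, using the metered-size formula quoted above. `meteredBytes` is an illustrative helper (not S2 client code), and the byte counts assume ASCII header names and values:

```typescript
// Sketch: S2 meters a record as 8 + 2*H + Σ(header name+value) + body.
const MAX_APPEND_BODY_BYTES = 1024 * 512;
const S2_RECORD_LIMIT_BYTES = 1024 * 1024;

function meteredBytes(headers: Array<[string, string]>, body: string): number {
  const headerBytes = headers.reduce(
    (sum, [name, value]) => sum + name.length + value.length,
    0
  );
  return 8 + 2 * headers.length + headerBytes + body.length;
}
```

Even with the JSON envelope and escaping roughly doubling the raw body, a 512 KiB cap leaves comfortable headroom under the 1 MiB record limit.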
+const MAX_APPEND_BODY_BYTES = 1024 * 512;
+
 const { action } = createActionApiRoute(
   {
     params: ParamsSchema,
+    maxContentLength: MAX_APPEND_BODY_BYTES,
   },
   async ({ request, params, authentication }) => {
     const run = await $replica.taskRun.findFirst({
diff --git a/apps/webapp/app/services/authorization.server.ts b/apps/webapp/app/services/authorization.server.ts
index 0406c02438e..786cc161ed9 100644
--- a/apps/webapp/app/services/authorization.server.ts
+++ b/apps/webapp/app/services/authorization.server.ts
@@ -1,6 +1,6 @@
 export type AuthorizationAction = "read" | "write" | string; // Add more actions as needed
 
-const ResourceTypes = ["tasks", "tags", "runs", "batch", "waitpoints", "deployments", "inputStreams", "query", "prompts"] as const;
+const ResourceTypes = ["tasks", "tags", "runs", "batch", "waitpoints", "deployments", "inputStreams", "query", "prompts", "sessions"] as const;
 
 export type AuthorizationResources = {
   [key in (typeof ResourceTypes)[number]]?: string | string[];
diff --git a/apps/webapp/app/services/realtime/mintSessionToken.server.ts b/apps/webapp/app/services/realtime/mintSessionToken.server.ts
new file mode 100644
index 00000000000..d69b36b7710
--- /dev/null
+++ b/apps/webapp/app/services/realtime/mintSessionToken.server.ts
@@ -0,0 +1,40 @@
+import { generateJWT as internal_generateJWT } from "@trigger.dev/core/v3";
+import { extractJwtSigningSecretKey } from "./jwtAuth.server";
+
+type Environment = Parameters<typeof extractJwtSigningSecretKey>[0];
+
+export type MintSessionTokenOptions = {
+  /** Token expiration. Defaults to "1h". */
+  expirationTime?: string;
+};
+
+/**
+ * Mint a session-scoped public access token (JWT) covering both `.in`
+ * append and `.out` subscribe for a session's realtime channels.
+ *
+ * Returned by `POST /api/v1/sessions` so the browser holds a single
+ * long-lived token that survives across runs (sessions outlive any
+ * single run). Includes both read and write scopes since the transport
+ * needs both: read for SSE subscribe on `.out`, write for `.in` appends
+ * (`stop`, follow-up messages, action chunks).
+ */
+export async function mintSessionToken(
+  environment: Environment,
+  sessionAddressingKey: string,
+  options: MintSessionTokenOptions = {}
+): Promise<string> {
+  const scopes = [
+    `read:sessions:${sessionAddressingKey}`,
+    `write:sessions:${sessionAddressingKey}`,
+  ];
+
+  return internal_generateJWT({
+    secretKey: extractJwtSigningSecretKey(environment),
+    payload: {
+      sub: environment.id,
+      pub: true,
+      scopes,
+    },
+    expirationTime: options.expirationTime ?? "1h",
+  });
+}
diff --git a/apps/webapp/app/services/realtime/s2realtimeStreams.server.ts b/apps/webapp/app/services/realtime/s2realtimeStreams.server.ts
index 4a7acb60606..46c7f3854a1 100644
--- a/apps/webapp/app/services/realtime/s2realtimeStreams.server.ts
+++ b/apps/webapp/app/services/realtime/s2realtimeStreams.server.ts
@@ -88,9 +88,42 @@
     return `${this.streamPrefix}/runs/${runId}/${streamId}`;
   }
 
+  /**
+   * Build an S2 stream name for a `Session`-primitive channel, addressed by
+   * the session's `friendlyId` and the I/O direction. Used by the session
+   * realtime routes to route traffic to `sessions/{friendlyId}/{out|in}`.
+   */
+  public toSessionStreamName(friendlyId: string, io: "out" | "in"): string {
+    return `${this.streamPrefix}/sessions/${friendlyId}/${io}`;
+  }
+
   async initializeStream(
     runId: string,
     streamId: string
+  ): Promise<{ responseHeaders?: Record<string, string> }> {
+    return this.#initializeStreamByName(
+      this.toStreamName(runId, streamId),
+      `/runs/${runId}/${streamId}`
+    );
+  }
+
+  /**
+   * Initialize an S2 stream by `(sessionFriendlyId, io)` — mirrors
+   * {@link initializeStream} but addresses the new `sessions/*` key format.
+   */
+  async initializeSessionStream(
+    friendlyId: string,
+    io: "out" | "in"
+  ): Promise<{ responseHeaders?: Record<string, string> }> {
+    return this.#initializeStreamByName(
+      this.toSessionStreamName(friendlyId, io),
+      `/sessions/${friendlyId}/${io}`
+    );
+  }
+
+  async #initializeStreamByName(
+    prefixedName: string,
+    relativeName: string
   ): Promise<{ responseHeaders?: Record<string, string> }> {
     const accessToken = this.skipAccessTokens
       ? this.token
@@ -99,9 +132,7 @@
     return {
       responseHeaders: {
         "X-S2-Access-Token": accessToken,
-        "X-S2-Stream-Name": this.skipAccessTokens
-          ? this.toStreamName(runId, streamId)
-          : `/runs/${runId}/${streamId}`,
+        "X-S2-Stream-Name": this.skipAccessTokens ? prefixedName : relativeName,
         "X-S2-Basin": this.basin,
         "X-S2-Flush-Interval-Ms": this.flushIntervalMs.toString(),
         "X-S2-Max-Retries": this.maxRetries.toString(),
@@ -121,8 +152,22 @@
   }
 
   async appendPart(part: string, partId: string, runId: string, streamId: string): Promise<void> {
-    const s2Stream = this.toStreamName(runId, streamId);
+    return this.#appendPartByName(part, partId, this.toStreamName(runId, streamId));
+  }
+
+  /**
+   * Append a single record to a `Session`-primitive channel.
+   */
+  async appendPartToSessionStream(
+    part: string,
+    partId: string,
+    friendlyId: string,
+    io: "out" | "in"
+  ): Promise<void> {
+    return this.#appendPartByName(part, partId, this.toSessionStreamName(friendlyId, io));
+  }
+
+  async #appendPartByName(part: string, partId: string, s2Stream: string): Promise<void> {
     this.logger.debug(`S2 appending to stream`, { part, stream: s2Stream });
 
     const result = await this.s2Append(s2Stream, {
@@ -141,7 +186,22 @@
     streamId: string,
     afterSeqNum?: number
   ): Promise {
-    const s2Stream = this.toStreamName(runId, streamId);
+    return this.#readRecordsByName(this.toStreamName(runId, streamId), afterSeqNum);
+  }
+
+  /**
+   * Read records from a `Session`-primitive channel starting after the
+   * given sequence number. Used by the `.wait()` race-check path.
+   */
+  async readSessionStreamRecords(
+    friendlyId: string,
+    io: "out" | "in",
+    afterSeqNum?: number
+  ): Promise {
+    return this.#readRecordsByName(this.toSessionStreamName(friendlyId, io), afterSeqNum);
+  }
+
+  async #readRecordsByName(s2Stream: string, afterSeqNum?: number): Promise {
     const startSeq = afterSeqNum != null ? afterSeqNum + 1 : 0;
 
     const qs = new URLSearchParams();
@@ -227,7 +287,142 @@
     signal: AbortSignal,
     options?: StreamResponseOptions
   ): Promise<Response> {
-    const s2Stream = this.toStreamName(runId, streamId);
+    return this.#streamResponseByName(this.toStreamName(runId, streamId), signal, options);
+  }
+
+  /**
+   * Serve SSE from a `Session`-primitive channel addressed by
+   * `(friendlyId, io)`.
+   *
+   * For `io=out`, peek the tail record first. If it's
+   * `trigger:turn-complete`, the agent has finished a turn and is
+   * either idle-waiting on `.in` or has exited — either way, no more
+   * chunks will arrive without further user action.
We switch the
+   * downstream S2 read to `wait=0` (drain whatever's left, close fast)
+   * and set `X-Session-Settled: true` so the client knows this SSE
+   * close is terminal instead of the normal 60s long-poll cycle.
+   *
+   * Mid-turn tail (streaming UIMessageChunk) falls through to the
+   * long-poll path; a crashed-mid-turn stream is indistinguishable
+   * here and behaves like today (client sees wait=60 close, retries).
+   */
+  async streamResponseFromSessionStream(
+    request: Request,
+    friendlyId: string,
+    io: "out" | "in",
+    signal: AbortSignal,
+    options?: StreamResponseOptions
+  ): Promise<Response> {
+    const s2Stream = this.toSessionStreamName(friendlyId, io);
+
+    let waitSeconds = options?.timeoutInSeconds ?? this.s2WaitSeconds;
+    let settled = false;
+
+    // Only peek + settle when the client opts in via `options.peekSettled`.
+    // Reconnect-on-reload paths (`TriggerChatTransport.reconnectToStream`)
+    // set it; active send-a-message paths don't — otherwise the peek
+    // races the newly-triggered turn's first chunk and the SSE closes
+    // before records land.
+    if (io === "out" && options?.peekSettled) {
+      const lastChunk = await this.#peekLastChunkBody(s2Stream);
+      const lastChunkType =
+        lastChunk != null && typeof lastChunk === "object"
+          ? (lastChunk as { type?: unknown }).type
+          : null;
+      if (lastChunkType === "trigger:turn-complete") {
+        settled = true;
+        waitSeconds = 0;
+      }
+    }
+
+    const s2Response = await this.#streamResponseByName(s2Stream, signal, {
+      ...options,
+      timeoutInSeconds: waitSeconds,
+    });
+
+    if (!settled) return s2Response;
+
+    const headers = new Headers(s2Response.headers);
+    headers.set("X-Session-Settled", "true");
+    return new Response(s2Response.body, {
+      status: s2Response.status,
+      statusText: s2Response.statusText,
+      headers,
+    });
+  }
+
+  async #peekLastChunkBody(s2Stream: string): Promise<unknown> {
+    const qs = new URLSearchParams();
+    // `tail_offset=1` reads one record before the next seq — i.e. the
+    // most recently appended record.
`count=1` caps it to just that + // record. `wait=0` returns immediately with no long-poll. + qs.set("tail_offset", "1"); + qs.set("count", "1"); + qs.set("wait", "0"); + + let res: Response; + try { + res = await fetch( + `${this.baseUrl}/streams/${encodeURIComponent(s2Stream)}/records?${qs}`, + { + method: "GET", + headers: { + Authorization: `Bearer ${this.token}`, + Accept: "application/json", + "S2-Format": "raw", + "S2-Basin": this.basin, + }, + } + ); + } catch (err) { + this.logger.warn("S2 peek last record: fetch failed", { err, stream: s2Stream }); + return null; + } + + if (!res.ok) { + // 404: stream has never been written to. 416: range not + // satisfiable (empty stream). Both mean "nothing to peek." + if (res.status === 404 || res.status === 416) return null; + const text = await res.text().catch(() => ""); + this.logger.warn("S2 peek last record failed", { + status: res.status, + statusText: res.statusText, + text, + stream: s2Stream, + }); + return null; + } + + try { + const json = (await res.json()) as { + records?: Array<{ body: string; seq_num: number; timestamp: number }>; + }; + const record = json.records?.[0]; + if (!record) return null; + // The record body is a JSON string `{data: , id: partId}`. + // The agent-side writer (`StreamsWriterV2`) hands `appendPart` an + // already-JSON-stringified chunk, so `data` round-trips as a string, + // not an object. Parse it once more to surface the chunk shape. 
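The double-parse described above can be shown in isolation. `unwrapChunk` is an illustrative stand-in for the parsing step, not the class's actual method:

```typescript
// Sketch: the record body is `{data, id}` where `data` is itself a
// JSON-stringified chunk written by the agent-side writer.
function unwrapChunk(recordBody: string): unknown {
  const envelope = JSON.parse(recordBody) as { data: unknown; id: string };
  if (typeof envelope.data === "string") {
    try {
      return JSON.parse(envelope.data); // surface the chunk's shape
    } catch {
      return envelope.data; // plain string chunk, not JSON
    }
  }
  return envelope.data;
}
```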
+      const envelope = JSON.parse(record.body) as { data: unknown; id: string };
+      if (typeof envelope.data === "string") {
+        try {
+          return JSON.parse(envelope.data);
+        } catch {
+          return envelope.data;
+        }
+      }
+      return envelope.data;
+    } catch (err) {
+      this.logger.warn("S2 peek last record: parse failed", { err, stream: s2Stream });
+      return null;
+    }
+  }
+
+  async #streamResponseByName(
+    s2Stream: string,
+    signal: AbortSignal,
+    options?: StreamResponseOptions
+  ): Promise<Response> {
     const startSeq = this.parseLastEventId(options?.lastEventId);
 
     this.logger.info(`S2 streaming records from stream`, { stream: s2Stream, startSeq });
diff --git a/apps/webapp/app/services/realtime/sessionRunManager.server.ts b/apps/webapp/app/services/realtime/sessionRunManager.server.ts
new file mode 100644
index 00000000000..58513460b14
--- /dev/null
+++ b/apps/webapp/app/services/realtime/sessionRunManager.server.ts
@@ -0,0 +1,375 @@
+import type { Session, TaskRunStatus } from "@trigger.dev/database";
+import { SessionTriggerConfig as SessionTriggerConfigZod } from "@trigger.dev/core/v3";
+import { z } from "zod";
+import { prisma, $replica } from "~/db.server";
+import type { AuthenticatedEnvironment } from "~/services/apiAuth.server";
+import { logger } from "~/services/logger.server";
+import { CancelTaskRunService } from "~/v3/services/cancelTaskRun.server";
+import { TriggerTaskService } from "~/v3/services/triggerTask.server";
+import { isFinalRunStatus } from "~/v3/taskStatus";
+
+/**
+ * Schema for `Session.triggerConfig` (stored as JSONB). The wire-format
+ * source of truth lives in `@trigger.dev/core/v3` as `SessionTriggerConfig`;
+ * we re-export it here for the trigger machinery to validate on read.
+ *
+ * `basePayload` carries the customer's wire payload (for chat.agent:
+ * `{ chatId, ...clientData, idleTimeoutInSeconds? }`). Runtime fields
+ * specific to a particular trigger (e.g.
`trigger: "trigger" | "preload"`,
+ * an `isContinuation` flag) come in via the `payloadOverrides` argument
+ * to `ensureRunForSession` and shallow-merge on top of `basePayload`.
+ */
+export const SessionTriggerConfigSchema = SessionTriggerConfigZod;
+
+export type SessionTriggerConfig = z.infer<typeof SessionTriggerConfigSchema>;
+
+export type EnsureRunReason = "initial" | "continuation" | "upgrade" | "manual";
+
+/**
+ * Hard cap on how many times `ensureRunForSession` will recurse on the
+ * pathological "we lost the claim race AND the winner's run was already
+ * terminal" path. In practice progress through the run engine bounds
+ * this, but a misconfigured task that crashes before it can be dequeued
+ * could otherwise loop without limit. After this many attempts we
+ * surface `SessionRunManagerError` so the caller can 5xx instead of
+ * blowing the stack.
+ */
+const ENSURE_RUN_FOR_SESSION_MAX_ATTEMPTS = 3;
+
+type EnsureRunForSessionParams = {
+  /**
+   * Session row to operate on. Caller is responsible for the env match —
+   * we don't re-check `runtimeEnvironmentId` against `environment.id`.
+   */
+  session: Pick<
+    Session,
+    "id" | "taskIdentifier" | "triggerConfig" | "currentRunId" | "currentRunVersion"
+  >;
+  environment: AuthenticatedEnvironment;
+  reason: EnsureRunReason;
+  /**
+   * Shallow-merged on top of `triggerConfig.basePayload`. Runtime fields
+   * only — caller-controlled data that varies per trigger (`trigger:
+   * "preload"` vs `"trigger"`, etc).
+   */
+  payloadOverrides?: Record<string, unknown>;
+  /**
+   * @internal Recursion-guard counter for the lost-claim-race retry path.
+   * Public callers should leave this unset; the function recurses with
+   * an incremented value on the pathological "winner's run was already
+   * terminal" branch and throws once it exceeds
+   * {@link ENSURE_RUN_FOR_SESSION_MAX_ATTEMPTS}.
+   */
+  _attempt?: number;
+};
+
+export type EnsureRunResult = {
+  runId: string;
+  /** True if this call triggered a fresh run; false if it reused an alive existing one. */
+  triggered: boolean;
+};
+
+/**
+ * Idempotently make sure the session has a live run.
+ *
+ * Algorithm:
+ *  1. If `currentRunId` is set, probe its status. Alive → return as-is.
+ *  2. Trigger a new run upfront (cheap to cancel if we lose the race).
+ *  3. Atomic claim via `updateMany` keyed on `currentRunVersion`.
+ *     - Won: return new runId, record SessionRun audit row.
+ *     - Lost: cancel our triggered run, re-read session, reuse winner's
+ *       run if alive. If pathological (winner's run already terminal),
+ *       recurse.
+ *
+ * No DB lock is held across the trigger call. Wasted-trigger window is
+ * the rare multi-tab race on a dead run; cancel cost is negligible and
+ * the run-engine handles it gracefully.
+ */
+export async function ensureRunForSession(
+  params: EnsureRunForSessionParams
+): Promise<EnsureRunResult> {
+  const { session, environment, reason, payloadOverrides, _attempt = 1 } = params;
+
+  if (_attempt > ENSURE_RUN_FOR_SESSION_MAX_ATTEMPTS) {
+    throw new SessionRunManagerError(
+      `ensureRunForSession exceeded ${ENSURE_RUN_FOR_SESSION_MAX_ATTEMPTS} attempts for session ${session.id} — every triggered run reached a terminal state before claim could resolve`
+    );
+  }
+
+  // 1. Probe currentRunId.
+  if (session.currentRunId) {
+    const status = await getRunStatus(session.currentRunId);
+    if (status && !isFinalRunStatus(status)) {
+      return { runId: session.currentRunId, triggered: false };
+    }
+  }
+
+  // 2. Validate config + trigger upfront.
+  const config = SessionTriggerConfigSchema.parse(session.triggerConfig);
+  const triggered = await triggerSessionRun({
+    session,
+    config,
+    environment,
+    payloadOverrides,
+  });
+
+  // 3. Try to claim the slot atomically.
+  const claim = await prisma.session.updateMany({
+    where: {
+      id: session.id,
+      currentRunVersion: session.currentRunVersion,
+    },
+    data: {
+      currentRunId: triggered.id,
+      currentRunVersion: { increment: 1 },
+    },
+  });
+
+  if (claim.count === 1) {
+    // Won. Audit the SessionRun.
Best-effort — failure here doesn't + // invalidate the live run, just leaves a missing audit row. + prisma.sessionRun + .create({ + data: { sessionId: session.id, runId: triggered.id, reason }, + }) + .catch((error) => { + logger.warn("Failed to record SessionRun audit row", { + sessionId: session.id, + runId: triggered.id, + reason, + error, + }); + }); + + return { runId: triggered.id, triggered: true }; + } + + // 4. Lost the race. Cancel our triggered run; reuse the winner's. + cancelLostRaceRun(triggered.id, environment).catch((error) => { + logger.warn("Failed to cancel lost-race session run", { + sessionId: session.id, + runId: triggered.id, + error, + }); + }); + + // Read-after-write: the winner just wrote `currentRunId` / + // `currentRunVersion` on the writer. Reading from `$replica` could + // return pre-race state and cause us to recurse with the same stale + // version, losing the next claim, until we exhaust max attempts. + const fresh = await prisma.session.findFirst({ + where: { id: session.id }, + select: { + id: true, + taskIdentifier: true, + triggerConfig: true, + currentRunId: true, + currentRunVersion: true, + }, + }); + + if (!fresh) { + // Session vanished mid-flight. Surface as an error — caller decides + // whether to 404 or retry. + throw new SessionRunManagerError(`Session ${session.id} not found after lost claim race`); + } + + if (fresh.currentRunId) { + const status = await getRunStatus(fresh.currentRunId); + if (status && !isFinalRunStatus(status)) { + return { runId: fresh.currentRunId, triggered: false }; + } + } + + // Pathological: winner's run already terminal. Recurse with the fresh + // version. Bounded by `ENSURE_RUN_FOR_SESSION_MAX_ATTEMPTS` so a task + // that always crashes before being dequeued surfaces as an error + // instead of a stack overflow. + return ensureRunForSession({ + session: fresh, + environment, + reason, + payloadOverrides, + _attempt: _attempt + 1, + }); +} + +/** + * Trigger a single run for a session. 
Builds `TriggerTaskRequestBody`
+ * by shallow-merging `payloadOverrides` over `config.basePayload` and
+ * threading `config`'s machine/queue/tags through the trigger options.
+ */
+async function triggerSessionRun(params: {
+  session: Pick<Session, "taskIdentifier">;
+  config: SessionTriggerConfig;
+  environment: AuthenticatedEnvironment;
+  payloadOverrides?: Record<string, unknown>;
+}): Promise<{ id: string; friendlyId: string }> {
+  const { session, config, environment, payloadOverrides } = params;
+
+  const payload = {
+    ...config.basePayload,
+    ...(config.idleTimeoutInSeconds !== undefined
+      ? { idleTimeoutInSeconds: config.idleTimeoutInSeconds }
+      : {}),
+    ...(payloadOverrides ?? {}),
+  };
+
+  const body = {
+    payload,
+    context: {},
+    options: {
+      ...(config.machine ? { machine: config.machine as never } : {}),
+      ...(config.queue ? { queue: { name: config.queue } } : {}),
+      ...(config.tags ? { tags: config.tags } : {}),
+      ...(config.maxAttempts !== undefined ? { maxAttempts: config.maxAttempts } : {}),
+    },
+  };
+
+  const service = new TriggerTaskService();
+  const result = await service.call(session.taskIdentifier, environment, body, {
+    triggerSource: "session",
+    triggerAction: "trigger",
+  });
+
+  if (!result) {
+    throw new SessionRunManagerError(
+      `TriggerTaskService returned no result for taskIdentifier=${session.taskIdentifier}`
+    );
+  }
+
+  return { id: result.run.id, friendlyId: result.run.friendlyId };
+}
+
+type SwapSessionRunParams = {
+  session: Pick<
+    Session,
+    "id" | "taskIdentifier" | "triggerConfig" | "currentRunId" | "currentRunVersion"
+  >;
+  /**
+   * The run requesting the swap. Optimistic claim requires
+   * `Session.currentRunId === callingRunId` so the swap can't clobber
+   * a run triggered out-of-band (e.g. a parallel `.in/append` probe
+   * that already replaced the dead run).
+   */
+  callingRunId: string;
+  environment: AuthenticatedEnvironment;
+  reason: EnsureRunReason;
+  payloadOverrides?: Record<string, unknown>;
+};
+
+export type SwapSessionRunResult = {
+  /** runId of the newly-triggered run that has taken over the session. */
+  runId: string;
+  /**
+   * False when the swap was preempted (currentRunId is no longer the
+   * calling run). The caller should treat this as "someone else
+   * already moved on" — exit cleanly without expecting to drive the
+   * next run.
+   */
+  swapped: boolean;
+};
+
+/**
+ * Force-swap the session to a freshly-triggered run, regardless of
+ * whether the current run is alive. Called by `end-and-continue` when
+ * the running agent wants a clean handoff (typically version upgrade).
+ *
+ * Differs from `ensureRunForSession`: never reuses the current run.
+ * The optimistic claim is keyed on `currentRunId === callingRunId`, so
+ * a parallel append-time probe that already swapped to a different
+ * run wins the race and `swapped: false` is surfaced.
+ */
+export async function swapSessionRun(
+  params: SwapSessionRunParams
+): Promise<SwapSessionRunResult> {
+  const { session, callingRunId, environment, reason, payloadOverrides } = params;
+
+  const config = SessionTriggerConfigSchema.parse(session.triggerConfig);
+  const triggered = await triggerSessionRun({
+    session,
+    config,
+    environment,
+    payloadOverrides,
+  });
+
+  const claim = await prisma.session.updateMany({
+    where: {
+      id: session.id,
+      currentRunId: callingRunId,
+      currentRunVersion: session.currentRunVersion,
+    },
+    data: {
+      currentRunId: triggered.id,
+      currentRunVersion: { increment: 1 },
+    },
+  });
+
+  if (claim.count === 1) {
+    prisma.sessionRun
+      .create({
+        data: { sessionId: session.id, runId: triggered.id, reason },
+      })
+      .catch((error) => {
+        logger.warn("Failed to record SessionRun audit row", {
+          sessionId: session.id,
+          runId: triggered.id,
+          reason,
+          error,
+        });
+      });
+    return { runId: triggered.id, swapped: true };
+  }
+
+  // Lost the race — someone else already swapped to a new run. Cancel
+  // ours, surface the existing winner.
+  cancelLostRaceRun(triggered.id, environment).catch((error) => {
+    logger.warn("Failed to cancel preempted swap run", {
+      sessionId: session.id,
+      runId: triggered.id,
+      error,
+    });
+  });
+
+  // Read-after-write: the winner's swap was just committed on the
+  // writer. A replica read could return the pre-swap `currentRunId`
+  // (often `callingRunId` itself), which would tell the caller it is
+  // still the canonical run when in fact a different run has taken
+  // over.
+  const fresh = await prisma.session.findFirst({
+    where: { id: session.id },
+    select: { currentRunId: true },
+  });
+
+  return {
+    runId: fresh?.currentRunId ?? callingRunId,
+    swapped: false,
+  };
+}
+
+async function getRunStatus(runId: string): Promise<TaskRunStatus | null> {
+  // Use the read replica — this is a hot-path probe and stale-by-ms is
+  // fine. The append handler re-checks if it ends up reusing the runId.
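The optimistic claim used by both `ensureRunForSession` and `swapSessionRun` — an `updateMany` keyed on `currentRunVersion` — behaves like a compare-and-swap. An in-memory sketch of those semantics (illustrative only, no Prisma):

```typescript
// Sketch: exactly one of two racing callers holding the same stale
// version can win the claim; the loser must re-read and retry.
type SessionSlot = { currentRunId: string | null; currentRunVersion: number };

function claimSlot(slot: SessionSlot, expectedVersion: number, newRunId: string): boolean {
  if (slot.currentRunVersion !== expectedVersion) return false; // lost the race
  slot.currentRunId = newRunId;
  slot.currentRunVersion += 1; // mirrors `currentRunVersion: { increment: 1 }`
  return true;
}
```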
+  const row = await $replica.taskRun.findFirst({
+    where: { id: runId },
+    select: { status: true },
+  });
+  return row?.status ?? null;
+}
+
+async function cancelLostRaceRun(
+  runId: string,
+  environment: AuthenticatedEnvironment
+): Promise<void> {
+  const service = new CancelTaskRunService();
+  // Read-after-write: the run was just triggered on the writer, so go
+  // through `prisma`. A `$replica` miss here would silently no-op the
+  // cancel and leak an orphan run that no session is going to claim.
+  const run = await prisma.taskRun.findFirst({ where: { id: runId } });
+  if (!run) return;
+  await service.call(run, { reason: "Lost session-run claim race" });
+}
+
+export class SessionRunManagerError extends Error {
+  readonly name = "SessionRunManagerError";
+}
diff --git a/apps/webapp/app/services/realtime/sessions.server.ts b/apps/webapp/app/services/realtime/sessions.server.ts
new file mode 100644
index 00000000000..594d417292c
--- /dev/null
+++ b/apps/webapp/app/services/realtime/sessions.server.ts
@@ -0,0 +1,127 @@
+import type { PrismaClient, Session } from "@trigger.dev/database";
+import type { SessionItem } from "@trigger.dev/core/v3";
+import { $replica } from "~/db.server";
+
+/**
+ * Prefix that {@link SessionId.generate} attaches to every Session friendlyId.
+ * Used to distinguish friendlyId lookups (`session_abc...`) from externalId
+ * lookups on the public `GET /api/v1/sessions/:session` route.
+ */
+const SESSION_FRIENDLY_ID_PREFIX = "session_";
+
+/**
+ * Resolve a session from a URL path parameter that may contain either a
+ * friendlyId (`session_abc...`) or a user-supplied externalId.
+ *
+ * Disambiguated by prefix: values starting with `session_` are treated as
+ * friendlyIds, anything else is looked up against `externalId` scoped to
+ * the caller's environment.
+ */
+export async function resolveSessionByIdOrExternalId(
+  prisma: Pick<PrismaClient, "session">,
+  runtimeEnvironmentId: string,
+  idOrExternalId: string
+): Promise<Session | null> {
+  if (isSessionFriendlyIdForm(idOrExternalId)) {
+    return prisma.session.findFirst({
+      where: { friendlyId: idOrExternalId, runtimeEnvironmentId },
+    });
+  }
+
+  // `findFirst` rather than `findUnique` per the repo rule — `findUnique`'s
+  // implicit DataLoader has open correctness bugs in Prisma 6.x that bite
+  // hot-path lookups exactly like this one.
+  return prisma.session.findFirst({
+    where: { runtimeEnvironmentId, externalId: idOrExternalId },
+  });
+}
+
+/** True for `session_*` friendlyId form, false for everything else. */
+export function isSessionFriendlyIdForm(value: string): boolean {
+  return value.startsWith(SESSION_FRIENDLY_ID_PREFIX);
+}
+
+/**
+ * Canonicalise the addressing key used for everything stream-level: the
+ * S2 stream path and the run-engine waitpoint cache key. `chat.agent`
+ * and the rest of the operational surface always pass `externalId`, but
+ * a public-API caller may legitimately address by `friendlyId` — and a
+ * session created without an `externalId` only has a friendlyId at all.
+ *
+ * Rule:
+ *  - If we have a Session row, the canonical key is `externalId` if
+ *    set, else `friendlyId`. This way two callers addressing the same
+ *    row via different forms always converge to the same S2 stream.
+ *  - If we have no row (yet — chat.agent's transport may subscribe
+ *    before the agent's bind-time upsert lands), the canonical key is
+ *    whatever the URL had. Operationally that's always an externalId.
+ *    FriendlyId-form callers without a matching row are rejected by
+ *    the route handler before this is reached.
+ */
+export function canonicalSessionAddressingKey(
+  row: Session | null,
+  paramSession: string
+): string {
+  if (row) {
+    return row.externalId ??
row.friendlyId; + } + return paramSession; +} + +/** + * Convert a Prisma `Session` row to the public {@link SessionItem} wire format. + * Strips internal columns (project/environment/organization ids) and narrows + * the `metadata` JSON to a record. + * + * Note: `currentRunId` is left as-is — Prisma stores the internal run id + * (cuid), but `SessionItem.currentRunId` is the *friendly* form. Routes + * that emit a single `SessionItem` should use + * {@link serializeSessionWithFriendlyRunId} instead, which resolves the + * friendlyId via a TaskRun lookup. List endpoints stay on this raw form + * to avoid N+1 lookups when paginating. + */ +export function serializeSession(session: Session): SessionItem { + return { + id: session.friendlyId, + externalId: session.externalId, + type: session.type, + taskIdentifier: session.taskIdentifier, + triggerConfig: session.triggerConfig as SessionItem["triggerConfig"], + currentRunId: session.currentRunId, + tags: session.tags, + metadata: (session.metadata ?? null) as SessionItem["metadata"], + closedAt: session.closedAt, + closedReason: session.closedReason, + expiresAt: session.expiresAt, + createdAt: session.createdAt, + updatedAt: session.updatedAt, + }; +} + +/** + * Same as {@link serializeSession} but resolves `currentRunId` from the + * internal cuid to the public `run_*` friendlyId via a TaskRun lookup. + * Single-row endpoints (`POST/GET/PATCH/close /api/v1/sessions/:s`) use + * this so the wire-side `currentRunId` is consistent with the rest of + * the public API (which only accepts friendlyIds for run lookups). + * + * Skips the lookup when `currentRunId` is null. The read goes through + * `$replica` — a TaskRun's `friendlyId` is immutable so replica lag is + * harmless, and serializing on the writer would just add hot-path load. 
+ */ +export async function serializeSessionWithFriendlyRunId( + session: Session +): Promise<SessionItem> { + const base = serializeSession(session); + if (!session.currentRunId) return base; + + const run = await $replica.taskRun.findFirst({ + where: { id: session.currentRunId }, + select: { friendlyId: true }, + }); + + return { + ...base, + currentRunId: run?.friendlyId ?? null, + }; +} diff --git a/apps/webapp/app/services/realtime/types.ts b/apps/webapp/app/services/realtime/types.ts index 64433a716f4..7161f158a48 100644 --- a/apps/webapp/app/services/realtime/types.ts +++ b/apps/webapp/app/services/realtime/types.ts @@ -33,6 +33,17 @@ export interface StreamIngestor { export type StreamResponseOptions = { timeoutInSeconds?: number; lastEventId?: string; + /** + * Session-stream-only. When `true`, the responder MAY peek the tail + * of `.out` and short-circuit to `wait=0` + `X-Session-Settled: true` + * if the last chunk is a terminal marker (e.g. `trigger:turn-complete`). + * Used by `TriggerChatTransport.reconnectToStream` on page reload. + * + * When absent/false, the responder keeps the unconditional long-poll + * behavior — required on the active send-a-message path where the + * peek would race the newly-triggered turn's first chunk. + */ + peekSettled?: boolean; }; // Interface for stream response diff --git a/apps/webapp/app/services/routeBuilders/apiBuilder.server.ts b/apps/webapp/app/services/routeBuilders/apiBuilder.server.ts index e1f248927ae..aae3c7ff54e 100644 --- a/apps/webapp/app/services/routeBuilders/apiBuilder.server.ts +++ b/apps/webapp/app/services/routeBuilders/apiBuilder.server.ts @@ -469,7 +469,13 @@ type ApiKeyActionRouteBuilderOptions< : undefined, body: TBodySchema extends z.ZodFirstPartySchemaTypes | z.ZodDiscriminatedUnion ? z.infer<TBodySchema> - : undefined + : undefined, + // The resolved resource from `findResource`. `undefined` when the route + // doesn't declare `findResource`. 
Routes that need to expand the auth + // scope to alternate identifiers of the same row (e.g. friendlyId + + // externalId for sessions) read it here so a JWT minted for either form + // authorizes both URL forms. + resource: TResource | undefined ) => AuthorizationResources; superScopes?: string[]; }; @@ -667,9 +673,33 @@ export function createActionApiRoute< parsedBody = body.data; } + // Resolve the resource before authorization so the auth scope check + // can expand to alternate identifiers of the same row (e.g. a Session + // is addressable by both `friendlyId` and `externalId` and a JWT minted + // for either form should authorize both URL forms). Mirrors the + // ordering in `createLoaderApiRoute`. + const resource = options.findResource + ? await options.findResource(parsedParams, authenticationResult, parsedSearchParams) + : undefined; + + // Run authorization first — but with the resolved resource available + // as the 5th arg so the auth scope check can expand to alternate + // identifiers of the same row (e.g. a Session is addressable by both + // `friendlyId` and `externalId`). Resource-null is checked AFTER auth + // so: + // - underscoped JWT + missing resource → 403 (no info leak) + // - underscoped JWT + existing resource → 403 (existing behavior) + // - PRIVATE key + missing resource → auth passes → 404 (correct) + // - PRIVATE key + existing resource → auth passes → handler runs if (authorization) { - const { action, resource, superScopes } = authorization; - const $resource = resource(parsedParams, parsedSearchParams, parsedHeaders, parsedBody); + const { action, resource: authResource, superScopes } = authorization; + const $resource = authResource( + parsedParams, + parsedSearchParams, + parsedHeaders, + parsedBody, + resource + ); logger.debug("Checking authorization", { action, @@ -702,10 +732,6 @@ export function createActionApiRoute< } } - const resource = options.findResource - ? 
await options.findResource(parsedParams, authenticationResult, parsedSearchParams) - : undefined; - if (options.findResource && !resource) { return await wrapResponse( request, diff --git a/apps/webapp/app/services/sessionStreamWaitpointCache.server.ts b/apps/webapp/app/services/sessionStreamWaitpointCache.server.ts new file mode 100644 index 00000000000..050ebddeac3 --- /dev/null +++ b/apps/webapp/app/services/sessionStreamWaitpointCache.server.ts @@ -0,0 +1,147 @@ +import { Redis } from "ioredis"; +import { env } from "~/env.server"; +import { singleton } from "~/utils/singleton"; +import { logger } from "./logger.server"; + +// "ssw" — session-stream-waitpoint. Parallel to the input-stream variant +// (`isw:{runFriendlyId}:{streamId}`). Keyed purely on `{sessionId, io}` so +// a send() lands on the channel regardless of which run is waiting, and +// multiple concurrent waiters (e.g. two agents on one chat) all wake. +const KEY_PREFIX = "ssw:"; +const DEFAULT_TTL_MS = 7 * 24 * 60 * 60 * 1000; // 7 days + +function buildKey(sessionFriendlyId: string, io: "out" | "in"): string { + return `${KEY_PREFIX}${sessionFriendlyId}:${io}`; +} + +function initializeRedis(): Redis | undefined { + const host = env.CACHE_REDIS_HOST; + if (!host) { + return undefined; + } + + return new Redis({ + connectionName: "sessionStreamWaitpointCache", + host, + port: env.CACHE_REDIS_PORT, + username: env.CACHE_REDIS_USERNAME, + password: env.CACHE_REDIS_PASSWORD, + keyPrefix: "tr:", + enableAutoPipelining: true, + ...(env.CACHE_REDIS_TLS_DISABLED === "true" ? {} : { tls: {} }), + }); +} + +const redis = singleton("sessionStreamWaitpointCache", initializeRedis); + +// Atomic SADD + PEXPIRE that only ever extends the key's TTL. +// +// Two concerns rolled into one script: +// 1. SADD + PEXPIRE as separate commands can leave the key with no TTL +// if the second call fails (or the process crashes in between). +// 2. 
Each waitpoint registers with its own `ttlMs` (derived from the +// waitpoint's timeout). Calling PEXPIRE unconditionally would let a +// short-TTL registration shrink the key's TTL below a longer-TTL +// sibling — evicting the sibling early and degrading the append-path +// fast drain to engine-timeout-only. +// +// The script: SADD the member, then set PEXPIRE only if the new TTL is +// greater than the current PTTL (or the key has no TTL yet). Engine- +// level timeouts still fire per-waitpoint; this keeps the Redis key +// alive for the longest-lived member. +const ADD_WAITPOINT_SCRIPT = ` + redis.call("SADD", KEYS[1], ARGV[1]) + local newTtl = tonumber(ARGV[2]) + local currentTtl = redis.call("PTTL", KEYS[1]) + if currentTtl < 0 or newTtl > currentTtl then + redis.call("PEXPIRE", KEYS[1], newTtl) + end + return 1 +`; + +/** + * Register a waitpoint as pending on the given session channel. Called + * from the `.wait()` create-waitpoint route. Multiple waiters on the same + * channel are allowed (stored as a Redis set). + */ +export async function addSessionStreamWaitpoint( + sessionFriendlyId: string, + io: "out" | "in", + waitpointId: string, + ttlMs?: number +): Promise<void> { + if (!redis) return; + + try { + const key = buildKey(sessionFriendlyId, io); + await redis.eval( + ADD_WAITPOINT_SCRIPT, + 1, + key, + waitpointId, + String(ttlMs ?? DEFAULT_TTL_MS) + ); + } catch (error) { + logger.error("Failed to set session stream waitpoint cache", { + sessionFriendlyId, + io, + error, + }); + } +} + +/** + * Atomically read + clear all waitpoints registered on the given session + * channel. Called from the append handler so the next append sees an + * empty set even if two appends race. 
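The extend-only TTL rule enforced by `ADD_WAITPOINT_SCRIPT` can be stated as a pure decision function. This is an illustrative sketch only — `decidePexpire` is a hypothetical name, not part of this patch; it relies on the documented Redis behavior that `PTTL` returns -2 for a missing key and -1 for a key without a TTL (both negative):

```typescript
// Mirrors the Lua branch: `if currentTtl < 0 or newTtl > currentTtl then PEXPIRE`.
// Returns the TTL to set via PEXPIRE, or null to leave the key's TTL untouched.
function decidePexpire(currentPttlMs: number, newTtlMs: number): number | null {
  // PTTL < 0 covers both "key missing" (-2) and "key has no TTL" (-1):
  // in either case the key must be given a TTL so it cannot live forever.
  if (currentPttlMs < 0 || newTtlMs > currentPttlMs) {
    return newTtlMs;
  }
  // A shorter registration never shrinks the TTL below a longer-lived sibling's.
  return null;
}
```

A short-TTL waitpoint registering after a long-TTL one therefore leaves the key's expiry alone, while the reverse order extends it — the invariant the Lua script enforces atomically on the server side.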
+ */ +export async function drainSessionStreamWaitpoints( + sessionFriendlyId: string, + io: "out" | "in" +): Promise<string[]> { + if (!redis) return []; + + try { + const key = buildKey(sessionFriendlyId, io); + const pipeline = redis.multi(); + pipeline.smembers(key); + pipeline.del(key); + const results = await pipeline.exec(); + if (!results) return []; + const [smembersResult] = results; + if (!smembersResult) return []; + const [err, members] = smembersResult; + if (err) return []; + return Array.isArray(members) ? (members as string[]) : []; + } catch (error) { + logger.error("Failed to drain session stream waitpoint cache", { + sessionFriendlyId, + io, + error, + }); + return []; + } +} + +/** + * Remove a single waitpoint from the pending set. Called after a race + * where `.wait()` completed the waitpoint from pre-arrived data. + */ +export async function removeSessionStreamWaitpoint( + sessionFriendlyId: string, + io: "out" | "in", + waitpointId: string +): Promise<void> { + if (!redis) return; + + try { + const key = buildKey(sessionFriendlyId, io); + await redis.srem(key, waitpointId); + } catch (error) { + logger.error("Failed to remove session stream waitpoint cache entry", { + sessionFriendlyId, + io, + error, + }); + } +} diff --git a/apps/webapp/app/services/sessionsReplicationInstance.server.ts b/apps/webapp/app/services/sessionsReplicationInstance.server.ts new file mode 100644 index 00000000000..c6ed1b6b088 --- /dev/null +++ b/apps/webapp/app/services/sessionsReplicationInstance.server.ts @@ -0,0 +1,72 @@ +import { ClickHouse } from "@internal/clickhouse"; +import invariant from "tiny-invariant"; +import { env } from "~/env.server"; +import { singleton } from "~/utils/singleton"; +import { meter, provider } from "~/v3/tracer.server"; +import { SessionsReplicationService } from "./sessionsReplicationService.server"; + +export const sessionsReplicationInstance = singleton( + "sessionsReplicationInstance", + initializeSessionsReplicationInstance +); + +function 
initializeSessionsReplicationInstance() { + const { DATABASE_URL } = process.env; + invariant(typeof DATABASE_URL === "string", "DATABASE_URL env var not set"); + + if (!env.SESSION_REPLICATION_CLICKHOUSE_URL) { + console.log("🗃️ Sessions replication service not enabled"); + return; + } + + console.log("🗃️ Sessions replication service enabled"); + + const clickhouse = new ClickHouse({ + url: env.SESSION_REPLICATION_CLICKHOUSE_URL, + name: "sessions-replication", + keepAlive: { + enabled: env.SESSION_REPLICATION_KEEP_ALIVE_ENABLED === "1", + idleSocketTtl: env.SESSION_REPLICATION_KEEP_ALIVE_IDLE_SOCKET_TTL_MS, + }, + logLevel: env.SESSION_REPLICATION_CLICKHOUSE_LOG_LEVEL, + compression: { + request: true, + }, + maxOpenConnections: env.SESSION_REPLICATION_MAX_OPEN_CONNECTIONS, + }); + + const service = new SessionsReplicationService({ + clickhouse: clickhouse, + pgConnectionUrl: DATABASE_URL, + serviceName: "sessions-replication", + slotName: env.SESSION_REPLICATION_SLOT_NAME, + publicationName: env.SESSION_REPLICATION_PUBLICATION_NAME, + redisOptions: { + keyPrefix: "sessions-replication:", + port: env.RUN_REPLICATION_REDIS_PORT ?? undefined, + host: env.RUN_REPLICATION_REDIS_HOST ?? undefined, + username: env.RUN_REPLICATION_REDIS_USERNAME ?? undefined, + password: env.RUN_REPLICATION_REDIS_PASSWORD ?? undefined, + enableAutoPipelining: true, + ...(env.RUN_REPLICATION_REDIS_TLS_DISABLED === "true" ? 
{} : { tls: {} }), + }, + maxFlushConcurrency: env.SESSION_REPLICATION_MAX_FLUSH_CONCURRENCY, + flushIntervalMs: env.SESSION_REPLICATION_FLUSH_INTERVAL_MS, + flushBatchSize: env.SESSION_REPLICATION_FLUSH_BATCH_SIZE, + leaderLockTimeoutMs: env.SESSION_REPLICATION_LEADER_LOCK_TIMEOUT_MS, + leaderLockExtendIntervalMs: env.SESSION_REPLICATION_LEADER_LOCK_EXTEND_INTERVAL_MS, + leaderLockAcquireAdditionalTimeMs: env.SESSION_REPLICATION_LEADER_LOCK_ADDITIONAL_TIME_MS, + leaderLockRetryIntervalMs: env.SESSION_REPLICATION_LEADER_LOCK_RETRY_INTERVAL_MS, + ackIntervalSeconds: env.SESSION_REPLICATION_ACK_INTERVAL_SECONDS, + logLevel: env.SESSION_REPLICATION_LOG_LEVEL, + waitForAsyncInsert: env.SESSION_REPLICATION_WAIT_FOR_ASYNC_INSERT === "1", + tracer: provider.getTracer("sessions-replication-service"), + meter, + insertMaxRetries: env.SESSION_REPLICATION_INSERT_MAX_RETRIES, + insertBaseDelayMs: env.SESSION_REPLICATION_INSERT_BASE_DELAY_MS, + insertMaxDelayMs: env.SESSION_REPLICATION_INSERT_MAX_DELAY_MS, + insertStrategy: env.SESSION_REPLICATION_INSERT_STRATEGY, + }); + + return service; +} diff --git a/apps/webapp/app/services/sessionsReplicationService.server.ts b/apps/webapp/app/services/sessionsReplicationService.server.ts new file mode 100644 index 00000000000..f7f384faffc --- /dev/null +++ b/apps/webapp/app/services/sessionsReplicationService.server.ts @@ -0,0 +1,763 @@ +import type { ClickHouse, SessionInsertArray } from "@internal/clickhouse"; +import { getSessionField } from "@internal/clickhouse"; +import { type RedisOptions } from "@internal/redis"; +import { + LogicalReplicationClient, + type MessageDelete, + type MessageInsert, + type MessageUpdate, + type PgoutputMessage, +} from "@internal/replication"; +import { + getMeter, + recordSpanError, + startSpan, + trace, + type Counter, + type Histogram, + type Meter, + type Tracer, +} from "@internal/tracing"; +import { Logger, type LogLevel } from "@trigger.dev/core/logger"; +import { tryCatch } from 
"@trigger.dev/core/utils"; +import { type Session } from "@trigger.dev/database"; +import EventEmitter from "node:events"; +import { ConcurrentFlushScheduler } from "./runsReplicationService.server"; + +interface TransactionEvent { + tag: "insert" | "update" | "delete"; + data: T; + raw: MessageInsert | MessageUpdate | MessageDelete; +} + +interface Transaction { + beginStartTimestamp: number; + commitLsn: string | null; + commitEndLsn: string | null; + xid: number; + events: TransactionEvent[]; + replicationLagMs: number; +} + +export type SessionsReplicationServiceOptions = { + clickhouse: ClickHouse; + pgConnectionUrl: string; + serviceName: string; + slotName: string; + publicationName: string; + redisOptions: RedisOptions; + maxFlushConcurrency?: number; + flushIntervalMs?: number; + flushBatchSize?: number; + leaderLockTimeoutMs?: number; + leaderLockExtendIntervalMs?: number; + leaderLockAcquireAdditionalTimeMs?: number; + leaderLockRetryIntervalMs?: number; + ackIntervalSeconds?: number; + acknowledgeTimeoutMs?: number; + logger?: Logger; + logLevel?: LogLevel; + tracer?: Tracer; + meter?: Meter; + waitForAsyncInsert?: boolean; + insertStrategy?: "insert" | "insert_async"; + // Retry configuration for insert operations + insertMaxRetries?: number; + insertBaseDelayMs?: number; + insertMaxDelayMs?: number; +}; + +type SessionInsert = { + _version: bigint; + session: Session; + event: "insert" | "update" | "delete"; +}; + +export type SessionsReplicationServiceEvents = { + message: [{ lsn: string; message: PgoutputMessage; service: SessionsReplicationService }]; + batchFlushed: [{ flushId: string; sessionInserts: SessionInsertArray[] }]; +}; + +export class SessionsReplicationService { + private _isSubscribed = false; + private _currentTransaction: + | (Omit, "commitEndLsn" | "replicationLagMs"> & { + commitEndLsn?: string | null; + replicationLagMs?: number; + }) + | null = null; + + private _replicationClient: LogicalReplicationClient; + private 
_concurrentFlushScheduler: ConcurrentFlushScheduler<SessionInsert>; + private logger: Logger; + private _isShuttingDown = false; + private _isShutDownComplete = false; + private _tracer: Tracer; + private _meter: Meter; + private _currentParseDurationMs: number | null = null; + private _lastAcknowledgedAt: number | null = null; + private _acknowledgeTimeoutMs: number; + private _latestCommitEndLsn: string | null = null; + private _lastAcknowledgedLsn: string | null = null; + private _acknowledgeInterval: NodeJS.Timeout | null = null; + // Retry configuration + private _insertMaxRetries: number; + private _insertBaseDelayMs: number; + private _insertMaxDelayMs: number; + private _insertStrategy: "insert" | "insert_async"; + + // Metrics + private _replicationLagHistogram: Histogram; + private _batchesFlushedCounter: Counter; + private _batchSizeHistogram: Histogram; + private _sessionsInsertedCounter: Counter; + private _insertRetriesCounter: Counter; + private _eventsProcessedCounter: Counter; + private _flushDurationHistogram: Histogram; + + public readonly events: EventEmitter<SessionsReplicationServiceEvents>; + + constructor(private readonly options: SessionsReplicationServiceOptions) { + this.logger = + options.logger ?? new Logger("SessionsReplicationService", options.logLevel ?? "info"); + this.events = new EventEmitter(); + this._tracer = options.tracer ?? trace.getTracer("sessions-replication-service"); + this._meter = options.meter ?? 
getMeter("sessions-replication"); + + // Initialize metrics + this._replicationLagHistogram = this._meter.createHistogram( + "sessions_replication.replication_lag_ms", + { + description: "Replication lag from Postgres commit to processing", + unit: "ms", + } + ); + + this._batchesFlushedCounter = this._meter.createCounter( + "sessions_replication.batches_flushed", + { + description: "Total batches flushed to ClickHouse", + } + ); + + this._batchSizeHistogram = this._meter.createHistogram("sessions_replication.batch_size", { + description: "Number of items per batch flush", + unit: "items", + }); + + this._sessionsInsertedCounter = this._meter.createCounter( + "sessions_replication.sessions_inserted", + { + description: "Session inserts to ClickHouse", + unit: "inserts", + } + ); + + this._insertRetriesCounter = this._meter.createCounter("sessions_replication.insert_retries", { + description: "Insert retry attempts", + }); + + this._eventsProcessedCounter = this._meter.createCounter( + "sessions_replication.events_processed", + { + description: "Replication events processed (inserts, updates, deletes)", + } + ); + + this._flushDurationHistogram = this._meter.createHistogram( + "sessions_replication.flush_duration_ms", + { + description: "Duration of batch flush operations", + unit: "ms", + } + ); + + this._acknowledgeTimeoutMs = options.acknowledgeTimeoutMs ?? 1_000; + + this._insertStrategy = options.insertStrategy ?? "insert"; + + this._replicationClient = new LogicalReplicationClient({ + pgConfig: { + connectionString: options.pgConnectionUrl, + }, + name: options.serviceName, + slotName: options.slotName, + publicationName: options.publicationName, + table: "Session", + redisOptions: options.redisOptions, + autoAcknowledge: false, + publicationActions: ["insert", "update", "delete"], + logger: options.logger ?? new Logger("LogicalReplicationClient", options.logLevel ?? "info"), + leaderLockTimeoutMs: options.leaderLockTimeoutMs ?? 
30_000, + leaderLockExtendIntervalMs: options.leaderLockExtendIntervalMs ?? 10_000, + ackIntervalSeconds: options.ackIntervalSeconds ?? 10, + leaderLockAcquireAdditionalTimeMs: options.leaderLockAcquireAdditionalTimeMs ?? 10_000, + leaderLockRetryIntervalMs: options.leaderLockRetryIntervalMs ?? 500, + tracer: options.tracer, + }); + + this._concurrentFlushScheduler = new ConcurrentFlushScheduler<SessionInsert>({ + batchSize: options.flushBatchSize ?? 50, + flushInterval: options.flushIntervalMs ?? 100, + maxConcurrency: options.maxFlushConcurrency ?? 100, + callback: this.#flushBatch.bind(this), + // Key-based deduplication to reduce duplicates sent to ClickHouse + getKey: (item) => { + if (!item?.session?.id) { + this.logger.warn("Skipping replication event with null session", { event: item }); + return null; + } + return `${item.event}_${item.session.id}`; + }, + // Keep the session with the higher version (latest) + // and take the last occurrence for that version. + // Items originating from the same DB transaction have the same version. + shouldReplace: (existing, incoming) => incoming._version >= existing._version, + logger: new Logger("ConcurrentFlushScheduler", options.logLevel ?? 
"info"), + tracer: options.tracer, + }); + + this._replicationClient.events.on("data", async ({ lsn, log, parseDuration }) => { + this.#handleData(lsn, log, parseDuration); + }); + + this._replicationClient.events.on("heartbeat", async ({ lsn, shouldRespond }) => { + if (this._isShuttingDown) return; + if (this._isShutDownComplete) return; + + if (shouldRespond) { + this._lastAcknowledgedLsn = lsn; + await this._replicationClient.acknowledge(lsn); + } + }); + + this._replicationClient.events.on("error", (error) => { + this.logger.error("Replication client error", { + error, + }); + }); + + this._replicationClient.events.on("start", () => { + this.logger.info("Replication client started"); + }); + + this._replicationClient.events.on("acknowledge", ({ lsn }) => { + this.logger.debug("Acknowledged", { lsn }); + }); + + this._replicationClient.events.on("leaderElection", (isLeader) => { + this.logger.info("Leader election", { isLeader }); + }); + + // Initialize retry configuration + this._insertMaxRetries = options.insertMaxRetries ?? 3; + this._insertBaseDelayMs = options.insertBaseDelayMs ?? 100; + this._insertMaxDelayMs = options.insertMaxDelayMs ?? 2000; + } + + public async shutdown() { + if (this._isShuttingDown) return; + + this._isShuttingDown = true; + + this.logger.info("Initiating shutdown of sessions replication service"); + + if (!this._currentTransaction) { + this.logger.info("No transaction to commit, shutting down immediately"); + await this._replicationClient.stop(); + this._isSubscribed = false; + this._isShutDownComplete = true; + return; + } + + this._concurrentFlushScheduler.shutdown(); + } + + async start() { + if (this._isSubscribed) { + this.logger.debug("Replication client already started, skipping start"); + return; + } + + this.logger.info("Starting replication client", { + lastLsn: this._latestCommitEndLsn, + }); + + await this._replicationClient.subscribe(this._latestCommitEndLsn ?? 
undefined); + + this._acknowledgeInterval = setInterval(this.#acknowledgeLatestTransaction.bind(this), 1000); + this._concurrentFlushScheduler.start(); + this._isSubscribed = true; + } + + async stop() { + this.logger.info("Stopping replication client"); + + await this._replicationClient.stop(); + + if (this._acknowledgeInterval) { + clearInterval(this._acknowledgeInterval); + this._acknowledgeInterval = null; + } + + this._isSubscribed = false; + } + + async teardown() { + this.logger.info("Teardown replication client"); + + await this._replicationClient.teardown(); + + if (this._acknowledgeInterval) { + clearInterval(this._acknowledgeInterval); + this._acknowledgeInterval = null; + } + + this._isSubscribed = false; + } + + #handleData(lsn: string, message: PgoutputMessage, parseDuration: bigint) { + this.logger.debug("Handling data", { + lsn, + tag: message.tag, + parseDuration, + }); + + this.events.emit("message", { lsn, message, service: this }); + + switch (message.tag) { + case "begin": { + if (this._isShuttingDown || this._isShutDownComplete) { + return; + } + + this._currentTransaction = { + beginStartTimestamp: Date.now(), + commitLsn: message.commitLsn, + xid: message.xid, + events: [], + }; + + this._currentParseDurationMs = Number(parseDuration) / 1_000_000; + + break; + } + case "insert": { + if (!this._currentTransaction) { + return; + } + + if (this._currentParseDurationMs) { + this._currentParseDurationMs = + this._currentParseDurationMs + Number(parseDuration) / 1_000_000; + } + + this._currentTransaction.events.push({ + tag: message.tag, + data: message.new as Session, + raw: message, + }); + break; + } + case "update": { + if (!this._currentTransaction) { + return; + } + + if (this._currentParseDurationMs) { + this._currentParseDurationMs = + this._currentParseDurationMs + Number(parseDuration) / 1_000_000; + } + + this._currentTransaction.events.push({ + tag: message.tag, + data: message.new as Session, + raw: message, + }); + break; + } + case 
"delete": { + if (!this._currentTransaction) { + return; + } + + if (this._currentParseDurationMs) { + this._currentParseDurationMs = + this._currentParseDurationMs + Number(parseDuration) / 1_000_000; + } + + this._currentTransaction.events.push({ + tag: message.tag, + data: message.old as Session, + raw: message, + }); + + break; + } + case "commit": { + if (!this._currentTransaction) { + return; + } + + if (this._currentParseDurationMs) { + this._currentParseDurationMs = + this._currentParseDurationMs + Number(parseDuration) / 1_000_000; + } + + const replicationLagMs = Date.now() - Number(message.commitTime / 1000n); + this._currentTransaction.commitEndLsn = message.commitEndLsn; + this._currentTransaction.replicationLagMs = replicationLagMs; + const transaction = this._currentTransaction as Transaction; + this._currentTransaction = null; + + if (transaction.commitEndLsn) { + this._latestCommitEndLsn = transaction.commitEndLsn; + } + + this.#handleTransaction(transaction); + break; + } + default: { + this.logger.debug("Unknown message tag", { + pgMessage: message, + }); + } + } + } + + #handleTransaction(transaction: Transaction) { + if (this._isShutDownComplete) return; + + if (this._isShuttingDown) { + this._replicationClient.stop().finally(() => { + this._isSubscribed = false; + this._isShutDownComplete = true; + }); + } + + // If there are no events, do nothing + if (transaction.events.length === 0) { + return; + } + + if (!transaction.commitEndLsn) { + this.logger.error("Transaction has no commit end lsn", { + transaction, + }); + + return; + } + + const lsnToUInt64Start = process.hrtime.bigint(); + + // If there are events, we need to handle them + const _version = lsnToUInt64(transaction.commitEndLsn); + + const lsnToUInt64DurationMs = Number(process.hrtime.bigint() - lsnToUInt64Start) / 1_000_000; + + this._concurrentFlushScheduler.addToBatch( + transaction.events.map((event) => ({ + _version, + session: event.data, + event: event.tag, + })) + ); + + // 
Record metrics + this._replicationLagHistogram.record(transaction.replicationLagMs); + + // Count events by type + for (const event of transaction.events) { + this._eventsProcessedCounter.add(1, { event_type: event.tag }); + } + + this.logger.debug("handle_transaction", { + transaction: { + xid: transaction.xid, + commitLsn: transaction.commitLsn, + commitEndLsn: transaction.commitEndLsn, + events: transaction.events.length, + parseDurationMs: this._currentParseDurationMs, + lsnToUInt64DurationMs, + version: _version.toString(), + }, + }); + } + + async #acknowledgeLatestTransaction() { + if (!this._latestCommitEndLsn) { + return; + } + + if (this._lastAcknowledgedLsn === this._latestCommitEndLsn) { + return; + } + + const now = Date.now(); + + if (this._lastAcknowledgedAt) { + const timeSinceLastAcknowledged = now - this._lastAcknowledgedAt; + // If we've already acknowledged within the last second, don't acknowledge again + if (timeSinceLastAcknowledged < this._acknowledgeTimeoutMs) { + return; + } + } + + this._lastAcknowledgedAt = now; + this._lastAcknowledgedLsn = this._latestCommitEndLsn; + + this.logger.debug("acknowledge_latest_transaction", { + commitEndLsn: this._latestCommitEndLsn, + lastAcknowledgedAt: this._lastAcknowledgedAt, + }); + + const [ackError] = await tryCatch( + this._replicationClient.acknowledge(this._latestCommitEndLsn) + ); + + if (ackError) { + this.logger.error("Error acknowledging transaction", { ackError }); + } + + if (this._isShutDownComplete && this._acknowledgeInterval) { + clearInterval(this._acknowledgeInterval); + } + } + + async #flushBatch(flushId: string, batch: Array<SessionInsert>) { + if (batch.length === 0) { + return; + } + + this.logger.debug("Flushing batch", { + flushId, + batchSize: batch.length, + }); + + const flushStartTime = performance.now(); + + await startSpan(this._tracer, "flushBatch", async (span) => { + const sessionInserts = batch + .map((item) => toSessionInsertArray(item.session, item._version, item.event === 
"delete")) + // batch inserts in clickhouse are more performant if the items + // are pre-sorted by the primary key + .sort((a, b) => { + const aOrgId = getSessionField(a, "organization_id"); + const bOrgId = getSessionField(b, "organization_id"); + if (aOrgId !== bOrgId) { + return aOrgId < bOrgId ? -1 : 1; + } + const aProjId = getSessionField(a, "project_id"); + const bProjId = getSessionField(b, "project_id"); + if (aProjId !== bProjId) { + return aProjId < bProjId ? -1 : 1; + } + const aEnvId = getSessionField(a, "environment_id"); + const bEnvId = getSessionField(b, "environment_id"); + if (aEnvId !== bEnvId) { + return aEnvId < bEnvId ? -1 : 1; + } + const aCreatedAt = getSessionField(a, "created_at"); + const bCreatedAt = getSessionField(b, "created_at"); + if (aCreatedAt !== bCreatedAt) { + return aCreatedAt - bCreatedAt; + } + const aSessionId = getSessionField(a, "session_id"); + const bSessionId = getSessionField(b, "session_id"); + if (aSessionId === bSessionId) return 0; + return aSessionId < bSessionId ? 
-1 : 1; + }); + + span.setAttribute("session_inserts", sessionInserts.length); + + this.logger.debug("Flushing inserts", { + flushId, + sessionInserts: sessionInserts.length, + }); + + const [sessionError, sessionResult] = await this.#insertWithRetry( + (attempt) => this.#insertSessionInserts(sessionInserts, attempt), + "session inserts", + flushId + ); + + if (sessionError) { + this.logger.error("Error inserting session inserts", { + error: sessionError, + flushId, + }); + recordSpanError(span, sessionError); + } + + this.logger.debug("Flushed inserts", { + flushId, + sessionInserts: sessionInserts.length, + }); + + this.events.emit("batchFlushed", { flushId, sessionInserts }); + + // Record metrics + const flushDurationMs = performance.now() - flushStartTime; + const hasErrors = sessionError !== null; + + this._batchSizeHistogram.record(batch.length); + this._flushDurationHistogram.record(flushDurationMs); + this._batchesFlushedCounter.add(1, { success: !hasErrors }); + + if (!sessionError) { + this._sessionsInsertedCounter.add(sessionInserts.length); + } + }); + } + + // New method to handle inserts with retry logic for connection errors + async #insertWithRetry<T>( + insertFn: (attempt: number) => Promise<T>, + operationName: string, + flushId: string + ): Promise<[Error | null, T | null]> { + let lastError: Error | null = null; + + for (let attempt = 1; attempt <= this._insertMaxRetries; attempt++) { + try { + const result = await insertFn(attempt); + return [null, result]; + } catch (error) { + lastError = error instanceof Error ? 
error : new Error(String(error)); + + // Check if this is a retryable error + if (this.#isRetryableError(lastError)) { + const delay = this.#calculateRetryDelay(attempt); + + this.logger.warn(`Retrying SessionsReplication insert due to error`, { + operationName, + flushId, + attempt, + maxRetries: this._insertMaxRetries, + error: lastError.message, + delay, + }); + + // Record retry metric + this._insertRetriesCounter.add(1, { operation: "sessions" }); + + await new Promise((resolve) => setTimeout(resolve, delay)); + continue; + } + break; + } + } + + return [lastError, null]; + } + + // Retry all errors except known permanent ones + #isRetryableError(error: Error): boolean { + const errorMessage = error.message.toLowerCase(); + + // Permanent errors that should NOT be retried + const permanentErrorPatterns = [ + "authentication failed", + "permission denied", + "invalid credentials", + "table not found", + "database not found", + "column not found", + "schema mismatch", + "invalid query", + "syntax error", + "type error", + "constraint violation", + "duplicate key", + "foreign key violation", + ]; + + // If it's a known permanent error, don't retry + if (permanentErrorPatterns.some((pattern) => errorMessage.includes(pattern))) { + return false; + } + + // Retry everything else + return true; + } + + #calculateRetryDelay(attempt: number): number { + // Exponential backoff: baseDelay, baseDelay*2, baseDelay*4, etc. + const delay = Math.min( + this._insertBaseDelayMs * Math.pow(2, attempt - 1), + this._insertMaxDelayMs + ); + + // Add some jitter to prevent thundering herd + const jitter = Math.random() * 100; + return delay + jitter; + } + + #getClickhouseInsertSettings() { + if (this._insertStrategy === "insert") { + return {}; + } + + return { + async_insert: 1 as const, + async_insert_max_data_size: "1000000", + async_insert_busy_timeout_ms: 1000, + wait_for_async_insert: this.options.waitForAsyncInsert ? 
(1 as const) : (0 as const), + }; + } + + async #insertSessionInserts(sessionInserts: SessionInsertArray[], attempt: number) { + return await startSpan(this._tracer, "insertSessionInserts", async (span) => { + const [insertError, insertResult] = + await this.options.clickhouse.sessions.insertCompactArrays(sessionInserts, { + params: { + clickhouse_settings: this.#getClickhouseInsertSettings(), + }, + }); + + if (insertError) { + this.logger.error("Error inserting session inserts attempt", { + error: insertError, + attempt, + }); + + recordSpanError(span, insertError); + throw insertError; + } + + return insertResult; + }); + } +} + +function toSessionInsertArray( + session: Session, + version: bigint, + isDeleted: boolean +): SessionInsertArray { + return [ + session.runtimeEnvironmentId, + session.organizationId, + session.projectId, + session.id, + session.environmentType, + session.friendlyId, + session.externalId ?? "", + session.type, + session.taskIdentifier ?? "", + session.tags ?? [], + { data: session.metadata ?? null }, + session.closedAt ? session.closedAt.getTime() : null, + session.closedReason ?? "", + session.expiresAt ? session.expiresAt.getTime() : null, + session.createdAt.getTime(), + session.updatedAt.getTime(), + version.toString(), + isDeleted ? 
1 : 0, + ]; +} + +function lsnToUInt64(lsn: string): bigint { + const [seg, off] = lsn.split("/"); + return (BigInt("0x" + seg) << 32n) | BigInt("0x" + off); +} diff --git a/apps/webapp/app/services/sessionsRepository/clickhouseSessionsRepository.server.ts b/apps/webapp/app/services/sessionsRepository/clickhouseSessionsRepository.server.ts new file mode 100644 index 00000000000..c810a0dfa1e --- /dev/null +++ b/apps/webapp/app/services/sessionsRepository/clickhouseSessionsRepository.server.ts @@ -0,0 +1,254 @@ +import { type ClickhouseQueryBuilder } from "@internal/clickhouse"; +import parseDuration from "parse-duration"; +import { + convertSessionListInputOptionsToFilterOptions, + type FilterSessionsOptions, + type ISessionsRepository, + type ListSessionsOptions, + type SessionListInputOptions, + type SessionTagListOptions, + type SessionsRepositoryOptions, +} from "./sessionsRepository.server"; + +export class ClickHouseSessionsRepository implements ISessionsRepository { + constructor(private readonly options: SessionsRepositoryOptions) {} + + get name() { + return "clickhouse"; + } + + async listSessionIds(options: ListSessionsOptions): Promise<string[]> { + const queryBuilder = this.options.clickhouse.sessions.queryBuilder(); + applySessionFiltersToQueryBuilder( + queryBuilder, + convertSessionListInputOptionsToFilterOptions(options) + ); + + if (options.page.cursor) { + if (options.page.direction === "forward" || !options.page.direction) { + queryBuilder + .where("session_id < {sessionId: String}", { sessionId: options.page.cursor }) + .orderBy("created_at DESC, session_id DESC") + .limit(options.page.size + 1); + } else { + queryBuilder + .where("session_id > {sessionId: String}", { sessionId: options.page.cursor }) + .orderBy("created_at ASC, session_id ASC") + .limit(options.page.size + 1); + } + } else { + queryBuilder.orderBy("created_at DESC, session_id DESC").limit(options.page.size + 1); + } + + const [queryError, result] = await queryBuilder.execute(); + if
(queryError) throw queryError; + + return result.map((row) => row.session_id); + } + + async listSessions(options: ListSessionsOptions) { + const sessionIds = await this.listSessionIds(options); + const hasMore = sessionIds.length > options.page.size; + + let nextCursor: string | null = null; + let previousCursor: string | null = null; + + const direction = options.page.direction ?? "forward"; + switch (direction) { + case "forward": { + previousCursor = options.page.cursor ? sessionIds.at(0) ?? null : null; + if (hasMore) { + nextCursor = sessionIds[options.page.size - 1]; + } + break; + } + case "backward": { + const reversed = [...sessionIds].reverse(); + if (hasMore) { + previousCursor = reversed.at(1) ?? null; + nextCursor = reversed.at(options.page.size) ?? null; + } else { + nextCursor = reversed.at(options.page.size - 1) ?? null; + } + break; + } + } + + // Both directions slice the first `size` IDs: the `size+1`th item is + // the sentinel proving another page exists (hasMore), not part of the + // page content. Backward queries sort ASC (items closest to the cursor + // first), so `[0..size)` is still the legitimate window and the last + // element is the sentinel — identical to the forward case. + const idsToReturn = sessionIds.slice(0, options.page.size); + + let sessions = await this.options.prisma.session.findMany({ + where: { + id: { in: idsToReturn }, + runtimeEnvironmentId: options.environmentId, + }, + orderBy: { createdAt: "desc" }, + select: { + id: true, + friendlyId: true, + externalId: true, + type: true, + taskIdentifier: true, + tags: true, + metadata: true, + closedAt: true, + closedReason: true, + expiresAt: true, + createdAt: true, + updatedAt: true, + runtimeEnvironmentId: true, + }, + }); + + // ClickHouse is slightly delayed; narrow by derived status in-memory to + // catch recent Postgres writes that haven't replicated yet. 
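The `size + 1` sentinel described in the comment above can be sketched as a standalone helper (illustrative name, not code from this PR): fetch one row more than the page size, use the extra row only as proof that another page exists, and return just the first `size` rows.

```typescript
// Minimal sketch of size+1 sentinel pagination. The caller queries
// pageSize + 1 rows; the extra row is never returned — it only tells
// us whether a next page exists.
function paginateWithSentinel<T>(rows: T[], pageSize: number): { page: T[]; hasMore: boolean } {
  const hasMore = rows.length > pageSize;
  // The sentinel (index pageSize) is dropped; [0..pageSize) is the page.
  return { page: rows.slice(0, pageSize), hasMore };
}
```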
+ if (options.statuses && options.statuses.length > 0) { + const wanted = new Set(options.statuses); + const now = Date.now(); + sessions = sessions.filter((s) => { + const status = + s.closedAt != null + ? "CLOSED" + : s.expiresAt != null && s.expiresAt.getTime() < now + ? "EXPIRED" + : "ACTIVE"; + return wanted.has(status); + }); + } + + return { + sessions, + pagination: { nextCursor, previousCursor }, + }; + } + + async countSessions(options: SessionListInputOptions): Promise<number> { + const queryBuilder = this.options.clickhouse.sessions.countQueryBuilder(); + applySessionFiltersToQueryBuilder( + queryBuilder, + convertSessionListInputOptionsToFilterOptions(options) + ); + + const [queryError, result] = await queryBuilder.execute(); + if (queryError) throw queryError; + + if (result.length === 0) { + throw new Error("No count rows returned"); + } + return result[0].count; + } + + async listTags(options: SessionTagListOptions) { + const queryBuilder = this.options.clickhouse.sessions + .tagQueryBuilder() + .where("organization_id = {organizationId: String}", { + organizationId: options.organizationId, + }) + .where("project_id = {projectId: String}", { projectId: options.projectId }) + .where("environment_id = {environmentId: String}", { + environmentId: options.environmentId, + }); + + const periodMs = options.period ? parseDuration(options.period) ??
undefined : undefined; + if (periodMs) { + queryBuilder.where("created_at >= fromUnixTimestamp64Milli({period: Int64})", { + period: new Date(Date.now() - periodMs).getTime(), + }); + } + + if (options.from) { + queryBuilder.where("created_at >= fromUnixTimestamp64Milli({from: Int64})", { + from: options.from, + }); + } + + if (options.to) { + queryBuilder.where("created_at <= fromUnixTimestamp64Milli({to: Int64})", { + to: options.to, + }); + } + + if (options.query && options.query.trim().length > 0) { + queryBuilder.where("positionCaseInsensitiveUTF8(tag, {query: String}) > 0", { + query: options.query, + }); + } + + queryBuilder.orderBy("tag ASC").limit(options.limit); + + const [queryError, result] = await queryBuilder.execute(); + if (queryError) throw queryError; + + return { tags: result.map((row) => row.tag) }; + } +} + +function applySessionFiltersToQueryBuilder( + queryBuilder: ClickhouseQueryBuilder, + options: FilterSessionsOptions +) { + queryBuilder + .where("organization_id = {organizationId: String}", { + organizationId: options.organizationId, + }) + .where("project_id = {projectId: String}", { projectId: options.projectId }) + .where("environment_id = {environmentId: String}", { environmentId: options.environmentId }); + + if (options.types && options.types.length > 0) { + queryBuilder.where("type IN {types: Array(String)}", { types: options.types }); + } + + if (options.tags && options.tags.length > 0) { + queryBuilder.where("hasAny(tags, {tags: Array(String)})", { tags: options.tags }); + } + + if (options.taskIdentifiers && options.taskIdentifiers.length > 0) { + queryBuilder.where("task_identifier IN {taskIdentifiers: Array(String)}", { + taskIdentifiers: options.taskIdentifiers, + }); + } + + if (options.externalId) { + queryBuilder.where("external_id = {externalId: String}", { externalId: options.externalId }); + } + + if (options.statuses && options.statuses.length > 0) { + const conditions: string[] = []; + if 
(options.statuses.includes("ACTIVE")) { + conditions.push( + "(closed_at IS NULL AND (expires_at IS NULL OR expires_at > now64(3)))" + ); + } + if (options.statuses.includes("CLOSED")) { + conditions.push("closed_at IS NOT NULL"); + } + if (options.statuses.includes("EXPIRED")) { + conditions.push("(closed_at IS NULL AND expires_at IS NOT NULL AND expires_at <= now64(3))"); + } + if (conditions.length > 0) { + queryBuilder.where(`(${conditions.join(" OR ")})`); + } + } + + if (options.period) { + queryBuilder.where("created_at >= fromUnixTimestamp64Milli({period: Int64})", { + period: new Date(Date.now() - options.period).getTime(), + }); + } + + if (options.from) { + queryBuilder.where("created_at >= fromUnixTimestamp64Milli({from: Int64})", { + from: options.from, + }); + } + + if (options.to) { + queryBuilder.where("created_at <= fromUnixTimestamp64Milli({to: Int64})", { + to: options.to, + }); + } +} diff --git a/apps/webapp/app/services/sessionsRepository/sessionsRepository.server.ts b/apps/webapp/app/services/sessionsRepository/sessionsRepository.server.ts new file mode 100644 index 00000000000..15566295e33 --- /dev/null +++ b/apps/webapp/app/services/sessionsRepository/sessionsRepository.server.ts @@ -0,0 +1,198 @@ +import { type ClickHouse } from "@internal/clickhouse"; +import { type Tracer } from "@internal/tracing"; +import { type Logger, type LogLevel } from "@trigger.dev/core/logger"; +import { type Prisma } from "@trigger.dev/database"; +import parseDuration from "parse-duration"; +import { z } from "zod"; +import { type PrismaClientOrTransaction } from "~/db.server"; +import { startActiveSpan } from "~/v3/tracer.server"; +import { ClickHouseSessionsRepository } from "./clickhouseSessionsRepository.server"; + +export type SessionsRepositoryOptions = { + clickhouse: ClickHouse; + prisma: PrismaClientOrTransaction; + logger?: Logger; + logLevel?: LogLevel; + tracer?: Tracer; +}; + +/** + * Derived status values — `Session` rows don't have a stored 
status column. + * `ACTIVE` is the base state; `CLOSED` means `closedAt` is set; `EXPIRED` + * means `expiresAt` has passed. + */ +export const SessionStatus = z.enum(["ACTIVE", "CLOSED", "EXPIRED"]); +export type SessionStatus = z.infer<typeof SessionStatus>; + +const SessionListInputOptionsSchema = z.object({ + organizationId: z.string(), + projectId: z.string(), + environmentId: z.string(), + // filters + types: z.array(z.string()).optional(), + tags: z.array(z.string()).optional(), + taskIdentifiers: z.array(z.string()).optional(), + externalId: z.string().optional(), + statuses: z.array(SessionStatus).optional(), + period: z.string().optional(), + from: z.number().optional(), + to: z.number().optional(), +}); + +export type SessionListInputOptions = z.infer<typeof SessionListInputOptionsSchema>; +export type SessionListInputFilters = Omit< + SessionListInputOptions, + "organizationId" | "projectId" | "environmentId" +>; + +export type FilterSessionsOptions = Omit<SessionListInputOptions, "period"> & { + /** period converted to milliseconds duration */ + period: number | undefined; +}; + +type Pagination = { + page: { + size: number; + cursor?: string; + direction?: "forward" | "backward"; + }; +}; + +export type ListSessionsOptions = SessionListInputOptions & Pagination; + +type OffsetPagination = { + offset: number; + limit: number; +}; + +export type SessionTagListOptions = { + organizationId: string; + projectId: string; + environmentId: string; + period?: string; + from?: number; + to?: number; + /** Case-insensitive substring match on the tag name */ + query?: string; +} & OffsetPagination; + +export type SessionTagList = { + tags: string[]; +}; + +export type ListedSession = Prisma.SessionGetPayload<{ + select: { + id: true; + friendlyId: true; + externalId: true; + type: true; + taskIdentifier: true; + tags: true; + metadata: true; + closedAt: true; + closedReason: true; + expiresAt: true; + createdAt: true; + updatedAt: true; + runtimeEnvironmentId: true; + }; +}>; + +export type ISessionsRepository = { + name: string; + listSessionIds(options:
ListSessionsOptions): Promise<string[]>; + listSessions(options: ListSessionsOptions): Promise<{ + sessions: ListedSession[]; + pagination: { + nextCursor: string | null; + previousCursor: string | null; + }; + }>; + countSessions(options: SessionListInputOptions): Promise<number>; + listTags(options: SessionTagListOptions): Promise<SessionTagList>; +}; + +export class SessionsRepository implements ISessionsRepository { + private readonly clickHouseSessionsRepository: ClickHouseSessionsRepository; + + constructor(private readonly options: SessionsRepositoryOptions) { + this.clickHouseSessionsRepository = new ClickHouseSessionsRepository(options); + } + + get name() { + return "sessionsRepository"; + } + + async listSessionIds(options: ListSessionsOptions): Promise<string[]> { + return startActiveSpan( + "sessionsRepository.listSessionIds", + async () => this.clickHouseSessionsRepository.listSessionIds(options), + { + attributes: { + "repository.name": "clickhouse", + organizationId: options.organizationId, + projectId: options.projectId, + environmentId: options.environmentId, + }, + } + ); + } + + async listSessions(options: ListSessionsOptions) { + return startActiveSpan( + "sessionsRepository.listSessions", + async () => this.clickHouseSessionsRepository.listSessions(options), + { + attributes: { + "repository.name": "clickhouse", + organizationId: options.organizationId, + projectId: options.projectId, + environmentId: options.environmentId, + }, + } + ); + } + + async countSessions(options: SessionListInputOptions) { + return startActiveSpan( + "sessionsRepository.countSessions", + async () => this.clickHouseSessionsRepository.countSessions(options), + { + attributes: { + "repository.name": "clickhouse", + organizationId: options.organizationId, + projectId: options.projectId, + environmentId: options.environmentId, + }, + } + ); + } + + async listTags(options: SessionTagListOptions) { + return startActiveSpan( + "sessionsRepository.listTags", + async () =>
this.clickHouseSessionsRepository.listTags(options), + { + attributes: { + "repository.name": "clickhouse", + organizationId: options.organizationId, + projectId: options.projectId, + environmentId: options.environmentId, + }, + } + ); + } +} + +export function parseSessionListInputOptions(data: unknown): SessionListInputOptions { + return SessionListInputOptionsSchema.parse(data); +} + +export function convertSessionListInputOptionsToFilterOptions( + options: SessionListInputOptions +): FilterSessionsOptions { + return { + ...options, + period: options.period ? parseDuration(options.period) ?? undefined : undefined, + }; +} diff --git a/apps/webapp/app/v3/services/adminWorker.server.ts b/apps/webapp/app/v3/services/adminWorker.server.ts index 97c94b954f0..2e4d1b066cb 100644 --- a/apps/webapp/app/v3/services/adminWorker.server.ts +++ b/apps/webapp/app/v3/services/adminWorker.server.ts @@ -4,6 +4,12 @@ import { z } from "zod"; import { env } from "~/env.server"; import { logger } from "~/services/logger.server"; import { runsReplicationInstance } from "~/services/runsReplicationInstance.server"; +// Reference-hold the sessions-replication singleton so module evaluation runs +// its initializer (creates the ClickHouse client, subscribes to the logical +// replication slot, wires signal handlers) when the webapp boots. A bare +// side-effect import gets tree-shaken by the bundler. 
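The reference-hold comment above depends on bundlers keeping any module whose exports are actually referenced, even via a no-op `void`. A self-contained sketch of the idea (the initializer here just flips a flag; all names are illustrative, not this PR's modules):

```typescript
// Simulates a module whose evaluation has a side effect (in the real
// code: creating a ClickHouse client and subscribing to a logical
// replication slot). Tree-shaking bundlers may drop a bare side-effect
// import; holding a reference to the export — even `void`-ed — keeps
// the module, and therefore its initializer, in the bundle graph.
let initialized = false;

function createSingleton() {
  initialized = true; // stands in for the real initializer's side effects
  return { name: "sessionsReplicationInstance" };
}

const sessionsReplicationInstance = createSingleton();
void sessionsReplicationInstance; // reference-hold; value intentionally unused
```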
+import { sessionsReplicationInstance } from "~/services/sessionsReplicationInstance.server"; +void sessionsReplicationInstance; import { singleton } from "~/utils/singleton"; import { tracer } from "../tracer.server"; import { $replica } from "~/db.server"; diff --git a/apps/webapp/test/sessionsReplicationService.test.ts b/apps/webapp/test/sessionsReplicationService.test.ts new file mode 100644 index 00000000000..3a16ce4471a --- /dev/null +++ b/apps/webapp/test/sessionsReplicationService.test.ts @@ -0,0 +1,212 @@ +import { ClickHouse } from "@internal/clickhouse"; +import { containerTest } from "@internal/testcontainers"; +import { setTimeout } from "node:timers/promises"; +import { z } from "zod"; +import { SessionsReplicationService } from "~/services/sessionsReplicationService.server"; + +vi.setConfig({ testTimeout: 60_000 }); + +describe("SessionsReplicationService", () => { + containerTest( + "replicates an insert from Postgres Session → ClickHouse sessions_v1", + async ({ clickhouseContainer, redisOptions, postgresContainer, prisma }) => { + // Logical replication needs full-row images for DELETE events. 
+ await prisma.$executeRawUnsafe(`ALTER TABLE public."Session" REPLICA IDENTITY FULL;`); + + const clickhouse = new ClickHouse({ + url: clickhouseContainer.getConnectionUrl(), + name: "sessions-replication", + compression: { request: true }, + logLevel: "warn", + }); + + const service = new SessionsReplicationService({ + clickhouse, + pgConnectionUrl: postgresContainer.getConnectionUri(), + serviceName: "sessions-replication", + slotName: "sessions_to_clickhouse_v1", + publicationName: "sessions_to_clickhouse_v1_publication", + redisOptions, + maxFlushConcurrency: 1, + flushIntervalMs: 100, + flushBatchSize: 1, + leaderLockTimeoutMs: 5000, + leaderLockExtendIntervalMs: 1000, + ackIntervalSeconds: 5, + logLevel: "warn", + }); + + await service.start(); + + const organization = await prisma.organization.create({ + data: { title: "test", slug: "test" }, + }); + + const project = await prisma.project.create({ + data: { + name: "test", + slug: "test", + organizationId: organization.id, + externalRef: "test", + }, + }); + + const environment = await prisma.runtimeEnvironment.create({ + data: { + slug: "test", + type: "DEVELOPMENT", + projectId: project.id, + organizationId: organization.id, + apiKey: "test", + pkApiKey: "test", + shortcode: "test", + }, + }); + + const session = await prisma.session.create({ + data: { + id: "session_test_insert_1", + friendlyId: "session_abc123", + externalId: "my-test-session", + type: "chat.agent", + projectId: project.id, + runtimeEnvironmentId: environment.id, + environmentType: "DEVELOPMENT", + organizationId: organization.id, + taskIdentifier: "my-agent", + triggerConfig: { + basePayload: { messages: [], trigger: "preload" }, + }, + tags: ["user:42", "plan:pro"], + metadata: { plan: "pro", seats: 3 }, + }, + }); + + // Allow the replication pipeline to flush + await setTimeout(2000); + + const querySessions = clickhouse.reader.query({ + name: "read-sessions", + query: "SELECT * FROM trigger_dev.sessions_v1 FINAL", + schema: 
z.any(), + }); + + const [queryError, result] = await querySessions({}); + + expect(queryError).toBeNull(); + expect(result?.length).toBe(1); + expect(result?.[0]).toEqual( + expect.objectContaining({ + session_id: session.id, + friendly_id: session.friendlyId, + external_id: "my-test-session", + type: "chat.agent", + project_id: project.id, + environment_id: environment.id, + organization_id: organization.id, + environment_type: "DEVELOPMENT", + task_identifier: "my-agent", + tags: ["user:42", "plan:pro"], + _is_deleted: 0, + }) + ); + + await service.stop(); + } + ); + + containerTest( + "replicates an update (close) from Postgres → ClickHouse", + async ({ clickhouseContainer, redisOptions, postgresContainer, prisma }) => { + await prisma.$executeRawUnsafe(`ALTER TABLE public."Session" REPLICA IDENTITY FULL;`); + + const clickhouse = new ClickHouse({ + url: clickhouseContainer.getConnectionUrl(), + name: "sessions-replication", + compression: { request: true }, + logLevel: "warn", + }); + + const service = new SessionsReplicationService({ + clickhouse, + pgConnectionUrl: postgresContainer.getConnectionUri(), + serviceName: "sessions-replication", + slotName: "sessions_to_clickhouse_v1", + publicationName: "sessions_to_clickhouse_v1_publication", + redisOptions, + maxFlushConcurrency: 1, + flushIntervalMs: 100, + flushBatchSize: 1, + leaderLockTimeoutMs: 5000, + leaderLockExtendIntervalMs: 1000, + ackIntervalSeconds: 5, + logLevel: "warn", + }); + + await service.start(); + + const organization = await prisma.organization.create({ + data: { title: "test", slug: "test" }, + }); + const project = await prisma.project.create({ + data: { + name: "test", + slug: "test", + organizationId: organization.id, + externalRef: "test", + }, + }); + const environment = await prisma.runtimeEnvironment.create({ + data: { + slug: "test", + type: "DEVELOPMENT", + projectId: project.id, + organizationId: organization.id, + apiKey: "test", + pkApiKey: "test", + shortcode: "test", + }, 
+ }); + + const created = await prisma.session.create({ + data: { + id: "session_test_update_1", + friendlyId: "session_update1", + type: "chat.agent", + projectId: project.id, + runtimeEnvironmentId: environment.id, + environmentType: "DEVELOPMENT", + organizationId: organization.id, + taskIdentifier: "my-agent", + triggerConfig: { + basePayload: { messages: [], trigger: "preload" }, + }, + }, + }); + + await setTimeout(1000); + + await prisma.session.update({ + where: { id: created.id }, + data: { closedAt: new Date(), closedReason: "test-close" }, + }); + + await setTimeout(2000); + + const querySessions = clickhouse.reader.query({ + name: "read-sessions-closed", + query: "SELECT closed_reason, closed_at FROM trigger_dev.sessions_v1 FINAL", + schema: z.any(), + }); + + const [queryError, result] = await querySessions({}); + + expect(queryError).toBeNull(); + expect(result?.length).toBe(1); + expect(result?.[0].closed_reason).toBe("test-close"); + expect(result?.[0].closed_at).toBeDefined(); + + await service.stop(); + } + ); +}); diff --git a/internal-packages/clickhouse/schema/030_create_sessions_v1.sql b/internal-packages/clickhouse/schema/030_create_sessions_v1.sql new file mode 100644 index 00000000000..f575953ea80 --- /dev/null +++ b/internal-packages/clickhouse/schema/030_create_sessions_v1.sql @@ -0,0 +1,42 @@ +-- +goose Up + +CREATE TABLE trigger_dev.sessions_v1 +( + /* ─── identity ─────────────────────────────────────────────── */ + environment_id String, + organization_id String, + project_id String, + session_id String, + + environment_type LowCardinality(String), + friendly_id String, + external_id String DEFAULT '', + + /* ─── type discriminator ──────────────────────────────────── */ + type LowCardinality(String), + task_identifier String DEFAULT '', + + /* ─── filtering / free-form ──────────────────────────────── */ + tags Array(String) CODEC(ZSTD(1)), + metadata JSON(max_dynamic_paths = 256), + + /* ─── terminal markers 
────────────────────────────────────── */ + closed_at Nullable(DateTime64(3)), + closed_reason String DEFAULT '', + expires_at Nullable(DateTime64(3)), + + /* ─── timing ─────────────────────────────────────────────── */ + created_at DateTime64(3), + updated_at DateTime64(3), + + /* ─── commit lsn ────────────────────────────────────────── */ + _version UInt64, + _is_deleted UInt8 DEFAULT 0 +) +ENGINE = ReplacingMergeTree(_version, _is_deleted) +PARTITION BY toYYYYMM(created_at) +ORDER BY (organization_id, project_id, environment_id, created_at, session_id) +SETTINGS enable_json_type = 1; + +-- +goose Down +DROP TABLE IF EXISTS trigger_dev.sessions_v1; diff --git a/internal-packages/clickhouse/src/index.ts b/internal-packages/clickhouse/src/index.ts index c6b8858fa9c..45f0fa485a7 100644 --- a/internal-packages/clickhouse/src/index.ts +++ b/internal-packages/clickhouse/src/index.ts @@ -28,6 +28,12 @@ import { } from "./taskEvents.js"; import { insertMetrics } from "./metrics.js"; import { insertLlmMetrics } from "./llmMetrics.js"; +import { + getSessionTagsQueryBuilder, + getSessionsCountQueryBuilder, + getSessionsQueryBuilder, + insertSessionsCompactArrays, +} from "./sessions.js"; import { getGlobalModelMetrics, getGlobalModelComparison, @@ -57,6 +63,7 @@ export type * from "./metrics.js"; export type * from "./llmMetrics.js"; export type * from "./llmModelAggregates.js"; export type * from "./errors.js"; +export type * from "./sessions.js"; export type * from "./client/queryBuilder.js"; // Re-export column constants, indices, and type-safe accessors @@ -69,6 +76,8 @@ export { getPayloadField, } from "./taskRuns.js"; +export { SESSION_COLUMNS, SESSION_INDEX, getSessionField } from "./sessions.js"; + // TSQL query execution export { executeTSQL, @@ -251,6 +260,15 @@ export class ClickHouse { }; } + get sessions() { + return { + insertCompactArrays: insertSessionsCompactArrays(this.writer), + queryBuilder: getSessionsQueryBuilder(this.reader), + countQueryBuilder: 
getSessionsCountQueryBuilder(this.reader), + tagQueryBuilder: getSessionTagsQueryBuilder(this.reader), + }; + } + get taskEventsV2() { return { insert: insertTaskEventsV2(this.writer), diff --git a/internal-packages/clickhouse/src/sessions.ts b/internal-packages/clickhouse/src/sessions.ts new file mode 100644 index 00000000000..567fe65511e --- /dev/null +++ b/internal-packages/clickhouse/src/sessions.ts @@ -0,0 +1,184 @@ +import { ClickHouseSettings } from "@clickhouse/client"; +import { z } from "zod"; +import { ClickhouseReader, ClickhouseWriter } from "./client/types.js"; + +export const SessionV1 = z.object({ + environment_id: z.string(), + organization_id: z.string(), + project_id: z.string(), + session_id: z.string(), + environment_type: z.string(), + friendly_id: z.string(), + external_id: z.string().default(""), + type: z.string(), + task_identifier: z.string().default(""), + tags: z.array(z.string()).default([]), + metadata: z.unknown(), + closed_at: z.number().int().nullish(), + closed_reason: z.string().default(""), + expires_at: z.number().int().nullish(), + created_at: z.number().int(), + updated_at: z.number().int(), + _version: z.string(), + _is_deleted: z.number().int().default(0), +}); + +export type SessionV1 = z.input<typeof SessionV1>; + +// Column order for compact format - must match ClickHouse table schema +export const SESSION_COLUMNS = [ + "environment_id", + "organization_id", + "project_id", + "session_id", + "environment_type", + "friendly_id", + "external_id", + "type", + "task_identifier", + "tags", + "metadata", + "closed_at", + "closed_reason", + "expires_at", + "created_at", + "updated_at", + "_version", + "_is_deleted", +] as const; + +export type SessionColumnName = (typeof SESSION_COLUMNS)[number]; + +export const SESSION_INDEX = Object.fromEntries(SESSION_COLUMNS.map((col, idx) => [col, idx])) as { + readonly [K in SessionColumnName]: number; +}; + +export type SessionFieldTypes = { + environment_id: string; + organization_id: string; +
project_id: string; + session_id: string; + environment_type: string; + friendly_id: string; + external_id: string; + type: string; + task_identifier: string; + tags: string[]; + metadata: { data: unknown }; + closed_at: number | null; + closed_reason: string; + expires_at: number | null; + created_at: number; + updated_at: number; + _version: string; + _is_deleted: number; +}; + +/** + * Type-safe tuple representing a Session insert array. + * Order matches {@link SESSION_COLUMNS} exactly. + */ +export type SessionInsertArray = [ + environment_id: string, + organization_id: string, + project_id: string, + session_id: string, + environment_type: string, + friendly_id: string, + external_id: string, + type: string, + task_identifier: string, + tags: string[], + metadata: { data: unknown }, + closed_at: number | null, + closed_reason: string, + expires_at: number | null, + created_at: number, + updated_at: number, + _version: string, + _is_deleted: number, +]; + +export function getSessionField<K extends SessionColumnName>( + session: SessionInsertArray, + field: K +): SessionFieldTypes[K] { + return session[SESSION_INDEX[field]] as SessionFieldTypes[K]; +} + +export function insertSessionsCompactArrays(ch: ClickhouseWriter, settings?: ClickHouseSettings) { + return ch.insertCompactRaw({ + name: "insertSessionsCompactArrays", + table: "trigger_dev.sessions_v1", + columns: SESSION_COLUMNS, + settings: { + enable_json_type: 1, + type_json_skip_duplicated_paths: 1, + ...settings, + }, + }); +} + +export function insertSessions(ch: ClickhouseWriter, settings?: ClickHouseSettings) { + return ch.insert({ + name: "insertSessions", + table: "trigger_dev.sessions_v1", + schema: SessionV1, + settings: { + enable_json_type: 1, + type_json_skip_duplicated_paths: 1, + ...settings, + }, + }); +} + +// ─── read path ─────────────────────────────────────────────────── + +export const SessionV1QueryResult = z.object({ + session_id: z.string(), +}); + +export type SessionV1QueryResult = z.infer<typeof SessionV1QueryResult>; + +/** + * Base
query builder for listing Sessions. Filters + pagination are composed + * on top of this; callers can chain `.where(...).orderBy(...).limit(...)`. + */ +export function getSessionsQueryBuilder(ch: ClickhouseReader, settings?: ClickHouseSettings) { + return ch.queryBuilder({ + name: "getSessions", + baseQuery: "SELECT session_id FROM trigger_dev.sessions_v1 FINAL", + schema: SessionV1QueryResult, + settings, + }); +} + +export function getSessionsCountQueryBuilder( + ch: ClickhouseReader, + settings?: ClickHouseSettings +) { + return ch.queryBuilder({ + name: "getSessionsCount", + baseQuery: "SELECT count() as count FROM trigger_dev.sessions_v1 FINAL", + schema: z.object({ count: z.number().int() }), + settings, + }); +} + +export const SessionTagsQueryResult = z.object({ + tag: z.string(), +}); + +export type SessionTagsQueryResult = z.infer<typeof SessionTagsQueryResult>; + +export function getSessionTagsQueryBuilder( + ch: ClickhouseReader, + settings?: ClickHouseSettings +) { + return ch.queryBuilder({ + name: "getSessionTags", + baseQuery: "SELECT DISTINCT arrayJoin(tags) as tag FROM trigger_dev.sessions_v1", + schema: SessionTagsQueryResult, + settings, + }); +} diff --git a/internal-packages/database/prisma/migrations/20260419000000_add_sessions_table/migration.sql b/internal-packages/database/prisma/migrations/20260419000000_add_sessions_table/migration.sql new file mode 100644 index 00000000000..4cd7e543223 --- /dev/null +++ b/internal-packages/database/prisma/migrations/20260419000000_add_sessions_table/migration.sql @@ -0,0 +1,33 @@ +-- CreateTable +CREATE TABLE "Session" ( + "id" TEXT NOT NULL, + "friendlyId" TEXT NOT NULL, + "externalId" TEXT, + "type" TEXT NOT NULL, + "projectId" TEXT NOT NULL, + "runtimeEnvironmentId" TEXT NOT NULL, + "environmentType" "RuntimeEnvironmentType" NOT NULL, + "organizationId" TEXT NOT NULL, + "taskIdentifier" TEXT, + "tags" TEXT[] NOT NULL DEFAULT ARRAY[]::TEXT[], + "metadata" JSONB, + "closedAt" TIMESTAMP(3), + "closedReason" TEXT, + "expiresAt"
TIMESTAMP(3), + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "updatedAt" TIMESTAMP(3) NOT NULL, + + CONSTRAINT "Session_pkey" PRIMARY KEY ("id") +); + +-- CreateIndex +CREATE UNIQUE INDEX "Session_friendlyId_key" + ON "Session"("friendlyId"); + +-- CreateIndex +CREATE UNIQUE INDEX "Session_runtimeEnvironmentId_externalId_key" + ON "Session"("runtimeEnvironmentId", "externalId"); + +-- CreateIndex +CREATE INDEX "Session_expiresAt_idx" + ON "Session"("expiresAt"); diff --git a/internal-packages/database/prisma/migrations/20260426190818_sessions_as_run_manager/migration.sql b/internal-packages/database/prisma/migrations/20260426190818_sessions_as_run_manager/migration.sql new file mode 100644 index 00000000000..a0f12496781 --- /dev/null +++ b/internal-packages/database/prisma/migrations/20260426190818_sessions_as_run_manager/migration.sql @@ -0,0 +1,31 @@ +-- AlterTable +ALTER TABLE "Session" + ADD COLUMN "currentRunId" TEXT, + ADD COLUMN "currentRunVersion" INTEGER NOT NULL DEFAULT 0, + ADD COLUMN "triggerConfig" JSONB NOT NULL, + ALTER COLUMN "taskIdentifier" SET NOT NULL; + +-- CreateTable +CREATE TABLE "SessionRun" ( + "id" TEXT NOT NULL, + "sessionId" TEXT NOT NULL, + "runId" TEXT NOT NULL, + "reason" TEXT NOT NULL, + "triggeredAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "SessionRun_pkey" PRIMARY KEY ("id") +); + +-- CreateIndex +CREATE UNIQUE INDEX "SessionRun_runId_key" + ON "SessionRun"("runId"); + +-- CreateIndex +CREATE INDEX "SessionRun_sessionId_idx" + ON "SessionRun"("sessionId"); + +-- AddForeignKey +ALTER TABLE "SessionRun" + ADD CONSTRAINT "SessionRun_sessionId_fkey" + FOREIGN KEY ("sessionId") REFERENCES "Session"("id") + ON DELETE CASCADE ON UPDATE CASCADE; diff --git a/internal-packages/database/prisma/migrations/20260426190819_session_current_run_id_index/migration.sql b/internal-packages/database/prisma/migrations/20260426190819_session_current_run_id_index/migration.sql new file mode 100644 index 
00000000000..479353a3e04 --- /dev/null +++ b/internal-packages/database/prisma/migrations/20260426190819_session_current_run_id_index/migration.sql @@ -0,0 +1,3 @@ +-- CreateIndex +CREATE INDEX CONCURRENTLY IF NOT EXISTS "Session_currentRunId_idx" + ON "Session"("currentRunId"); diff --git a/internal-packages/database/prisma/schema.prisma b/internal-packages/database/prisma/schema.prisma index 9ccf2495d3a..ee75ce82b5f 100644 --- a/internal-packages/database/prisma/schema.prisma +++ b/internal-packages/database/prisma/schema.prisma @@ -686,6 +686,92 @@ enum TaskTriggerSource { SCHEDULED } +/// Durable, typed, bidirectional I/O primitive. Owns two S2 streams (.out / .in). +/// The row is essentially static — no status, no counters, no pointers. No +/// foreign keys: project/runtimeEnvironment/organization ids are plain +/// scalar columns (matches TaskRun pattern). List-style queries are served +/// from ClickHouse, not Postgres, so only point-lookup indexes live here. +model Session { + id String @id @default(cuid()) + friendlyId String @unique + /// User-supplied identifier scoped to the environment. Used for + /// idempotent upsert and for resolving sessions via the public API. + externalId String? + + /// Plain string — intentionally not an enum. + type String + + /// Denormalized scoping columns — no FK relations. + projectId String + runtimeEnvironmentId String + environmentType RuntimeEnvironmentType + organizationId String + + /// Task this session triggers runs against. Required — Sessions are + /// task-bound: creating a session also triggers its first run, and + /// every subsequent re-trigger uses this same identifier. + taskIdentifier String + + /// Trigger config used for every run this session schedules. 
Shape + /// (validated at the route layer, opaque to the DB): + /// { basePayload: object, machine?: string, queue?: string, + /// tags?: string[], maxAttempts?: number, + /// idleTimeoutInSeconds?: number } + /// `basePayload` carries the customer's client-data; runtime fields + /// (chatId, messages, trigger) are merged at trigger time. + triggerConfig Json + + tags String[] @default([]) + metadata Json? + + /// Live run pointer — non-FK so run deletion never cascades. Can lag + /// reality; the `.in/append` handler re-checks the snapshot status + /// before reusing it. + currentRunId String? + /// Monotonic counter used for optimistic locking on `currentRunId` + /// swaps. Bumped atomically alongside any update that changes + /// `currentRunId`. + currentRunVersion Int @default(0) + + /// Terminal markers — written once, never flipped back. + closedAt DateTime? + closedReason String? + expiresAt DateTime? + + createdAt DateTime @default(now()) + updatedAt DateTime @updatedAt + + runs SessionRun[] + + /// Idempotency: `(env, externalId)` uniquely identifies a session. + /// PostgreSQL treats NULLs as distinct, so `externalId=NULL` rows never collide. + @@unique([runtimeEnvironmentId, externalId]) + @@index([expiresAt]) + @@index([currentRunId]) +} + +/// Historical record of every run a Session has owned. Append-only — +/// rows are inserted on each `ensureRunForSession` claim, never updated. +/// Lets us reconstruct the run timeline of a chat for debugging / +/// dashboard surfaces. The relation cascades on Session delete (tied to +/// the session lifecycle) but `runId` is a plain string column with no +/// FK to TaskRun so run pruning is independent. +model SessionRun { + id String @id @default(cuid()) + + sessionId String + /// TaskRun.id (no FK — runs may be archived independently of session history) + runId String @unique + /// One of: "initial" | "continuation" | "upgrade" | "manual". + /// Plain string for forward-compat with future trigger reasons. 
+ reason String + triggeredAt DateTime @default(now()) + + session Session @relation(fields: [sessionId], references: [id], onDelete: Cascade) + + @@index([sessionId]) +} + model TaskRun { id String @id @default(cuid()) diff --git a/packages/core/src/v3/isomorphic/friendlyId.ts b/packages/core/src/v3/isomorphic/friendlyId.ts index a230f8c7450..66575c7c178 100644 --- a/packages/core/src/v3/isomorphic/friendlyId.ts +++ b/packages/core/src/v3/isomorphic/friendlyId.ts @@ -97,6 +97,7 @@ export const BatchId = new IdUtil("batch"); export const BulkActionId = new IdUtil("bulk"); export const AttemptId = new IdUtil("attempt"); export const ErrorId = new IdUtil("error"); +export const SessionId = new IdUtil("session"); export class IdGenerator { private alphabet: string; diff --git a/packages/core/src/v3/schemas/api.ts b/packages/core/src/v3/schemas/api.ts index 6d324a10d11..0db92a67c64 100644 --- a/packages/core/src/v3/schemas/api.ts +++ b/packages/core/src/v3/schemas/api.ts @@ -1411,6 +1411,38 @@ export type CreateInputStreamWaitpointResponseBody = z.infer< typeof CreateInputStreamWaitpointResponseBody >; +/** + * Create a run-scoped waitpoint that completes when the next record lands on + * a Session channel (`.in` or `.out`). Mirrors `CreateInputStreamWaitpointRequestBody` + * but keyed by `{sessionId, io}` instead of `{runId, streamId}`. The run is + * still the thing being suspended — Session only supplies the trigger source. + */ +export const CreateSessionStreamWaitpointRequestBody = z.object({ + /** Session friendlyId (`session_*`) or user-supplied externalId. */ + session: z.string(), + io: z.enum(["out", "in"]), + timeout: z.string().optional(), + idempotencyKey: z.string().optional(), + idempotencyKeyTTL: z.string().optional(), + tags: z.union([z.string(), z.array(z.string())]).optional(), + /** + * Last S2 sequence number the client has seen on this session channel. + * Used to catch data that arrived before `.wait()` was called. 
+ */ + lastSeqNum: z.number().optional(), +}); +export type CreateSessionStreamWaitpointRequestBody = z.infer< + typeof CreateSessionStreamWaitpointRequestBody +>; + +export const CreateSessionStreamWaitpointResponseBody = z.object({ + waitpointId: z.string(), + isCached: z.boolean(), +}); +export type CreateSessionStreamWaitpointResponseBody = z.infer< + typeof CreateSessionStreamWaitpointResponseBody +>; + export const waitpointTokenStatuses = ["WAITING", "COMPLETED", "TIMED_OUT"] as const; export const WaitpointTokenStatus = z.enum(waitpointTokenStatuses); export type WaitpointTokenStatus = z.infer<typeof WaitpointTokenStatus>; @@ -1449,6 +1481,219 @@ export const CompleteWaitpointTokenRequestBody = z.object({ }); export type CompleteWaitpointTokenRequestBody = z.infer<typeof CompleteWaitpointTokenRequestBody>; +/** + * Trigger config persisted on a Session. Drives every run the session + * schedules — `basePayload` is the customer's wire payload (for + * chat.agent: `{ chatId, ...clientData }`), runtime fields like + * `trigger: "preload" | "trigger"` are merged on top per-call by the + * server's trigger machinery. + */ +export const SessionTriggerConfig = z.object({ + basePayload: z.record(z.unknown()), + machine: MachinePresetName.optional(), + queue: z.string().max(128).optional(), + tags: z.array(z.string().max(128)).max(5).optional(), + maxAttempts: z.number().int().positive().max(10).optional(), + /** Convenience field surfaced to chat.agent via the wire payload. */ + idleTimeoutInSeconds: z.number().int().positive().max(3600).optional(), +}); +export type SessionTriggerConfig = z.infer<typeof SessionTriggerConfig>; + +/** + * Request body for `POST /api/v1/sessions`. Creates a Session and + * triggers its first run. Sessions are task-bound: `taskIdentifier` and + * `triggerConfig` are required, and re-runs scheduled by the server + * (after run termination, after `end-and-continue`) reuse the same + * config. + */ +export const CreateSessionRequestBody = z.object({ + /** Plain string discriminator — e.g. `"chat.agent"`.
Not validated against an enum on the server. */ + type: z.string().min(1).max(64), + /** User-supplied idempotency key. Unique per environment. Empty strings are rejected. */ + externalId: z + .string() + .trim() + .min(1) + .max(256) + .refine((v) => !v.startsWith("session_"), { + message: "externalId cannot start with 'session_' (reserved prefix for internal friendlyIds)", + }) + .optional(), + /** Task this session triggers runs against. Required. */ + taskIdentifier: z.string().min(1).max(128), + /** Trigger config used for every run scheduled by this session. */ + triggerConfig: SessionTriggerConfig, + /** Up to 10 tags for dashboard filtering. */ + tags: z.array(z.string().max(128)).max(10).optional(), + /** Arbitrary JSON metadata. */ + metadata: z.record(z.unknown()).optional(), + /** Absolute expiry timestamp for retention. */ + expiresAt: z.coerce.date().optional(), +}); +export type CreateSessionRequestBody = z.infer<typeof CreateSessionRequestBody>; + +export const SessionItem = z.object({ + id: z.string(), + externalId: z.string().nullable(), + type: z.string(), + taskIdentifier: z.string(), + /** + * Optional on the wire because some surfaces (the list endpoint backed + * by ClickHouse, list-page rendering) don't carry triggerConfig. + * Always populated on `POST /sessions` and `GET /sessions/:id`. + */ + triggerConfig: SessionTriggerConfig.optional(), + /** + * Friendly id of the live run for this session, if any. Optional on + * the wire — list surfaces may not include it. Routes that emit + * `SessionItem` are responsible for resolving the friendly form + * from the underlying cuid before returning.
+ */ + currentRunId: z.string().nullable().optional(), + tags: z.array(z.string()), + metadata: z.record(z.unknown()).nullable(), + closedAt: z.coerce.date().nullable(), + closedReason: z.string().nullable(), + expiresAt: z.coerce.date().nullable(), + createdAt: z.coerce.date(), + updatedAt: z.coerce.date(), +}); +export type SessionItem = z.infer<typeof SessionItem>; + +export const CreatedSessionResponseBody = SessionItem.extend({ + /** Friendly id of the first run triggered alongside session create. */ + runId: z.string(), + /** Session-scoped public access token: `read:sessions:{ext} + write:sessions:{ext}`. */ + publicAccessToken: z.string(), + /** True if the session existed already (idempotent upsert), false if newly created. */ + isCached: z.boolean(), +}); +export type CreatedSessionResponseBody = z.infer<typeof CreatedSessionResponseBody>; + +export const RetrieveSessionResponseBody = SessionItem; +export type RetrieveSessionResponseBody = z.infer<typeof RetrieveSessionResponseBody>; + +/** + * Body for `POST /api/v1/sessions/:session/end-and-continue`. Used by the + * running agent to request a clean handoff to a fresh run on the latest + * deployed version (typical use case: `chat.requestUpgrade`). The + * server triggers a new run, atomically swaps `currentRunId`, and the + * caller exits. + */ +export const EndAndContinueSessionRequestBody = z.object({ + /** The friendlyId of the run requesting the handoff. */ + callingRunId: z.string(), + /** Free-form label for the SessionRun audit row. e.g. `"upgrade"`. */ + reason: z.string().max(64), +}); +export type EndAndContinueSessionRequestBody = z.infer<typeof EndAndContinueSessionRequestBody>; + +export const EndAndContinueSessionResponseBody = z.object({ + /** friendlyId of the run that has taken over the session. */ + runId: z.string(), + /** + * False when the swap was preempted (a different run was already + * running by the time we tried to claim). The caller should treat + * this as "someone else moved on" — exit cleanly without expecting + * to drive the next run.
+ */ + swapped: z.boolean(), +}); +export type EndAndContinueSessionResponseBody = z.infer< + typeof EndAndContinueSessionResponseBody +>; + +export const UpdateSessionRequestBody = z.object({ + tags: z.array(z.string().max(128)).max(10).optional(), + metadata: z.record(z.unknown()).nullable().optional(), + // Null explicitly clears the externalId; non-null values must be non-empty. + externalId: z + .union([ + z.literal(null), + z + .string() + .trim() + .min(1) + .max(256) + .refine((v) => !v.startsWith("session_"), { + message: + "externalId cannot start with 'session_' (reserved prefix for internal friendlyIds)", + }), + ]) + .optional(), +}); +export type UpdateSessionRequestBody = z.infer<typeof UpdateSessionRequestBody>; + +export const CloseSessionRequestBody = z.object({ + reason: z.string().max(256).optional(), +}); +export type CloseSessionRequestBody = z.infer<typeof CloseSessionRequestBody>; + +export const SessionStatus = z.enum(["ACTIVE", "CLOSED", "EXPIRED"]); +export type SessionStatus = z.infer<typeof SessionStatus>; + +/** + * Server-side validation schema for `GET /api/v1/sessions`. Follows the same + * cursor-pagination convention as runs/waitpoints (`page[size]`, + * `page[after]`, `page[before]`) and uses the `filter[*]` prefix for + * narrowing fields — both produced automatically by `zodfetchCursorPage` + * and the matching client-side search-query helper.
+ */ +export const ListSessionsQueryParams = z + .object({ + "page[size]": z.coerce.number().int().min(1).max(100).default(20), + "page[after]": z.string().optional(), + "page[before]": z.string().optional(), + "filter[type]": z.union([z.string(), z.array(z.string())]).optional(), + "filter[tags]": z.union([z.string(), z.array(z.string())]).optional(), + "filter[taskIdentifier]": z.union([z.string(), z.array(z.string())]).optional(), + "filter[externalId]": z.string().optional(), + "filter[status]": z.union([SessionStatus, z.array(SessionStatus)]).optional(), + "filter[createdAt][period]": z.string().optional(), + "filter[createdAt][from]": z.coerce.number().int().optional(), + "filter[createdAt][to]": z.coerce.number().int().optional(), + }) + .refine( + (value) => !(value["page[after]"] && value["page[before]"]), + { + message: "Cannot pass both page[after] and page[before] on the same request", + path: ["page[before]"], + } + ); +export type ListSessionsQueryParams = z.infer<typeof ListSessionsQueryParams>; + +/** + * Client-facing list options — flattened shape that + * {@link ApiClient.listSessions} converts into the `filter[*]` / `page[*]` + * query string before sending.
+ */ +export const ListSessionsOptions = z.object({ + limit: z.number().int().min(1).max(100).optional(), + after: z.string().optional(), + before: z.string().optional(), + type: z.union([z.string(), z.array(z.string())]).optional(), + tag: z.union([z.string(), z.array(z.string())]).optional(), + taskIdentifier: z.union([z.string(), z.array(z.string())]).optional(), + externalId: z.string().optional(), + status: z.union([SessionStatus, z.array(SessionStatus)]).optional(), + period: z.string().optional(), + from: z.union([z.number(), z.date()]).optional(), + to: z.union([z.number(), z.date()]).optional(), +}); +export type ListSessionsOptions = z.infer<typeof ListSessionsOptions>; + +export const ListedSessionItem = SessionItem; +export type ListedSessionItem = z.infer<typeof ListedSessionItem>; + +export const ListSessionsResponseBody = z.object({ + data: z.array(ListedSessionItem), + pagination: z.object({ + next: z.string().optional(), + previous: z.string().optional(), + }), +}); +export type ListSessionsResponseBody = z.infer<typeof ListSessionsResponseBody>; + export const CompleteWaitpointTokenResponseBody = z.object({ success: z.literal(true), }); From fefe61f0065937fcf4cc89d7e3f39b9702a5a18c Mon Sep 17 00:00:00 2001 From: nicktrn <55853254+nicktrn@users.noreply.github.com> Date: Tue, 28 Apr 2026 15:42:12 +0100 Subject: [PATCH 8/8] ci(helm): roll prereleases on main pushes + manual trigger (#3461) Today the helm prerelease workflow only fires on PRs that touch `hosting/k8s/helm/**`. Two consequences we ran into: 1. The `changeset-release/main` PR's prerelease comment goes stale once the release branch gets force-pushed without a helm-touching commit (the bot's `Chart.yaml` bump alone doesn't seem to refire the trigger reliably). 2. The release PR's chart references an `appVersion` (e.g. `v4.4.5`) whose Docker images don't exist until *after* merge + tag. So that prerelease chart can't actually be installed end-to-end.
Renames the workflow to `helm-prerelease.yml` and adds two new triggers:

- **`push: main`** with `paths: hosting/k8s/helm/**` -> rolling prereleases versioned `-main.<sha>`. `appVersion` stays at whatever `Chart.yaml` has (i.e. last released), so installs pull real images. Tests that the chart structure is deployable, even if the app code is one release behind.
- **`workflow_dispatch`** with an optional `app_version` input -> manually trigger a prerelease and optionally override `appVersion` (e.g. pin to `main` or a specific tag). Useful for testing chart + app-version combinations on demand.

PR behavior is unchanged: same `-pr<number>.<sha>` versioning, same posted/updated comment.

Why not also bypass `paths` for `changeset-release/main`? The release PR's chart references not-yet-built `v4.4.5` images, so those prereleases aren't actually installable. The rolling main prerelease covers the testable case better.

Why not SHA-pin `appVersion` to a built image like `main-<sha>`? Bigger change - the docker publish workflows currently only push `:main` (no SHA-suffixed tag). Worth doing later if we want first-class "install one chart, get exactly that commit's app code" testing, but out of scope here.

Diff is mostly a rename.
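The per-event version strings described above can be sketched in isolation (a condensed bash sketch of the workflow's version step; the `EVENT`/`SHA`/`REF` variables stand in for the `github.event_name`/`github.sha`/`github.ref_name` contexts, and the sample values are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the per-event prerelease version logic.
BASE_VERSION="4.0.0"   # from Chart.yaml
EVENT="push"           # pull_request | push | workflow_dispatch
SHA="fefe61f0065937fcf4cc89d7e3f39b9702a5a18c"
PR_NUMBER="3461"
REF="main"

SHORT_SHA="${SHA:0:7}"
case "$EVENT" in
  pull_request)      VERSION="${BASE_VERSION}-pr${PR_NUMBER}.${SHORT_SHA}" ;;
  push)              VERSION="${BASE_VERSION}-main.${SHORT_SHA}" ;;
  workflow_dispatch)
    # Slugify the ref so it forms a valid semver prerelease identifier;
    # fall back to "manual" if nothing survives the filter.
    REF_SLUG=$(echo "$REF" | tr '/' '-' | tr -cd 'a-zA-Z0-9-')
    VERSION="${BASE_VERSION}-${REF_SLUG:-manual}.${SHORT_SHA}" ;;
esac
echo "$VERSION"   # push on main -> 4.0.0-main.fefe61f
```

All three branches share the `Chart.yaml` base version, so a prerelease always sorts directly after the last real release.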
Substantive changes: - new `push` and `workflow_dispatch` triggers - `prerelease` job `if:` extended for the new event types - version logic branches per event - new "Override appVersion" step (workflow_dispatch only) - new "Write run summary" step so non-PR runs surface the install instructions - PR comment steps gated on `github.event_name == 'pull_request'` - concurrency group falls back to `github.ref` for non-PR runs --- ...-pr-prerelease.yml => helm-prerelease.yml} | 63 +++++++++++++++++-- .github/workflows/pr_checks.yml | 1 + hosting/k8s/helm/README.md | 2 +- 3 files changed, 59 insertions(+), 7 deletions(-) rename .github/workflows/{helm-pr-prerelease.yml => helm-prerelease.yml} (62%) diff --git a/.github/workflows/helm-pr-prerelease.yml b/.github/workflows/helm-prerelease.yml similarity index 62% rename from .github/workflows/helm-pr-prerelease.yml rename to .github/workflows/helm-prerelease.yml index f5bbfebde8d..98335192075 100644 --- a/.github/workflows/helm-pr-prerelease.yml +++ b/.github/workflows/helm-prerelease.yml @@ -1,13 +1,25 @@ -name: 🧭 Helm Chart PR Prerelease +name: 🧭 Helm Chart Prerelease on: pull_request: types: [opened, synchronize, reopened] paths: - "hosting/k8s/helm/**" + push: + branches: + - main + paths: + - "hosting/k8s/helm/**" + workflow_dispatch: + inputs: + app_version: + description: "Override appVersion (e.g. 'main', 'v4.4.4'). Leave empty to keep Chart.yaml value." 
+ required: false + type: string + default: "" concurrency: - group: helm-prerelease-${{ github.event.pull_request.number }} + group: helm-prerelease-${{ github.event.pull_request.number || github.ref }} cancel-in-progress: true env: @@ -54,7 +66,10 @@ jobs: prerelease: needs: lint-and-test - if: github.event.pull_request.head.repo.full_name == github.repository + if: | + (github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository) || + github.event_name == 'push' || + github.event_name == 'workflow_dispatch' runs-on: ubuntu-latest permissions: contents: read @@ -88,9 +103,21 @@ jobs: id: version run: | BASE_VERSION=$(grep '^version:' ./hosting/k8s/helm/Chart.yaml | awk '{print $2}') - PR_NUMBER=${{ github.event.pull_request.number }} - SHORT_SHA=$(echo "${{ github.event.pull_request.head.sha }}" | cut -c1-7) - PRERELEASE_VERSION="${BASE_VERSION}-pr${PR_NUMBER}.${SHORT_SHA}" + if [[ "${{ github.event_name }}" == "pull_request" ]]; then + PR_NUMBER=${{ github.event.pull_request.number }} + SHORT_SHA=$(echo "${{ github.event.pull_request.head.sha }}" | cut -c1-7) + PRERELEASE_VERSION="${BASE_VERSION}-pr${PR_NUMBER}.${SHORT_SHA}" + elif [[ "${{ github.event_name }}" == "push" ]]; then + SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7) + PRERELEASE_VERSION="${BASE_VERSION}-main.${SHORT_SHA}" + else + SHORT_SHA=$(echo "${{ github.sha }}" | cut -c1-7) + REF_SLUG=$(echo "${{ github.ref_name }}" | tr '/' '-' | tr -cd 'a-zA-Z0-9-') + if [[ -z "$REF_SLUG" ]]; then + REF_SLUG="manual" + fi + PRERELEASE_VERSION="${BASE_VERSION}-${REF_SLUG}.${SHORT_SHA}" + fi echo "version=$PRERELEASE_VERSION" >> $GITHUB_OUTPUT echo "Prerelease version: $PRERELEASE_VERSION" @@ -98,6 +125,13 @@ jobs: run: | sed -i "s/^version:.*/version: ${{ steps.version.outputs.version }}/" ./hosting/k8s/helm/Chart.yaml + - name: Override appVersion + if: github.event_name == 'workflow_dispatch' && inputs.app_version != '' + env: + APP_VERSION: ${{ 
inputs.app_version }} + run: | + yq -i '.appVersion = strenv(APP_VERSION)' ./hosting/k8s/helm/Chart.yaml + - name: Package Helm Chart run: | helm package ./hosting/k8s/helm/ --destination /tmp/ @@ -110,7 +144,23 @@ jobs: # Push to GHCR OCI registry helm push "$CHART_PACKAGE" "oci://${{ env.REGISTRY }}/${{ github.repository_owner }}/charts" + - name: Write run summary + run: | + { + echo "### 🧭 Helm Chart Prerelease Published" + echo "" + echo "**Version:** \`${{ steps.version.outputs.version }}\`" + echo "" + echo "**Install:**" + echo '```bash' + echo "helm upgrade --install trigger \\" + echo " oci://${{ env.REGISTRY }}/${{ github.repository_owner }}/charts/${{ env.CHART_NAME }} \\" + echo " --version \"${{ steps.version.outputs.version }}\"" + echo '```' + } >> "$GITHUB_STEP_SUMMARY" + - name: Find existing comment + if: github.event_name == 'pull_request' uses: peter-evans/find-comment@v3 id: find-comment with: @@ -119,6 +169,7 @@ jobs: body-includes: "Helm Chart Prerelease Published" - name: Create or update PR comment + if: github.event_name == 'pull_request' uses: peter-evans/create-or-update-comment@v4 with: comment-id: ${{ steps.find-comment.outputs.comment-id }} diff --git a/.github/workflows/pr_checks.yml b/.github/workflows/pr_checks.yml index 12da89db3b2..be9009ae96a 100644 --- a/.github/workflows/pr_checks.yml +++ b/.github/workflows/pr_checks.yml @@ -7,6 +7,7 @@ on: - "docs/**" - ".changeset/**" - "hosting/**" + - ".github/workflows/helm-prerelease.yml" concurrency: group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }} diff --git a/hosting/k8s/helm/README.md b/hosting/k8s/helm/README.md index 4b54b52af76..33e64a93a23 100644 --- a/hosting/k8s/helm/README.md +++ b/hosting/k8s/helm/README.md @@ -736,4 +736,4 @@ helm upgrade --install trigger . 
\ - Documentation: https://trigger.dev/docs/self-hosting - GitHub Issues: https://github.com/triggerdotdev/trigger.dev/issues -- Discord: https://discord.gg/untWVke9aH \ No newline at end of file +- Discord: https://discord.gg/untWVke9aH
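Back on the first patch's list API: the `ListSessionsOptions` doc comment says the flattened client shape is converted into the `filter[*]` / `page[*]` params that `ListSessionsQueryParams` validates. A hypothetical standalone sketch of that mapping (the function name and the trimmed-down option type are illustrative, not the actual `ApiClient.listSessions` code):

```typescript
// Hypothetical sketch — not the real client helper. Shows how flattened
// options map onto the server's bracketed query params.
type ListSessionsOptionsSketch = {
  limit?: number;
  after?: string;
  before?: string;
  type?: string | string[];
  tag?: string | string[];
  status?: string | string[];
  externalId?: string;
};

function toListSessionsQuery(opts: ListSessionsOptionsSketch): URLSearchParams {
  const params = new URLSearchParams();
  // page[*] pagination params
  if (opts.limit !== undefined) params.set("page[size]", String(opts.limit));
  if (opts.after) params.set("page[after]", opts.after);
  if (opts.before) params.set("page[before]", opts.before);
  // filter[*] narrowing params; string-or-array fields repeat the key
  const multi = (key: string, value?: string | string[]) => {
    const items = value === undefined ? [] : Array.isArray(value) ? value : [value];
    for (const item of items) params.append(key, item);
  };
  multi("filter[type]", opts.type);
  multi("filter[tags]", opts.tag);
  multi("filter[status]", opts.status);
  if (opts.externalId) params.set("filter[externalId]", opts.externalId);
  return params;
}

const example = toListSessionsQuery({ limit: 20, type: "chat.agent", tag: ["a", "b"] });
console.log(example.toString());
```

Repeating the key for array-valued filters matches the `z.union([z.string(), z.array(z.string())])` shape on the server schema, which accepts either a single value or an array for the same param.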