Conversation
📝 Walkthrough

Adds a full External DB Sync feature: DB schema + migrations, sync library and queue, cron runner, internal APIs (sequencer, poller, sync-engine, fusebox, status), dashboard UI, extensive E2E tests and utilities, CI/workflow updates, and integration points in CRUD flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Cron as Cron Runner (run-cron-jobs)
    participant Seq as Sequencer API
    participant DB as Internal DB
    participant Queue as OutgoingRequest Queue
    participant Poll as Poller API
    participant Ext as External DB / Upstash
    Cron->>Seq: GET /api/latest/internal/external-db-sync/sequencer (Bearer)
    Seq->>DB: SELECT/UPDATE ProjectUser/ContactChannel (assign sequenceIds, mark shouldUpdateSequenceId)
    Seq->>Queue: INSERT batch OutgoingRequest rows
    Seq-->>Cron: { ok, iterations }
    Cron->>Poll: GET /api/latest/internal/external-db-sync/poller (Bearer)
    Poll->>Queue: CLAIM OutgoingRequest rows (FOR UPDATE SKIP LOCKED)
    alt Direct mode (STACK_EXTERNAL_DB_SYNC_DIRECT=true)
        Poll->>Ext: Apply INSERT/UPDATE/DELETE to external Postgres
    else Upstash/Publish mode
        Poll->>Ext: Publish batch payload to Upstash
    end
    Poll->>DB: DELETE/UPDATE OutgoingRequest (finished)
    Poll-->>Cron: { ok, requests_processed }
```
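The claim step in the diagram above (`FOR UPDATE SKIP LOCKED`) can be pictured with a small in-memory stand-in. This is a hedged sketch only: the `OutgoingRequest` shape and `claimBatch` helper below are illustrative, not the actual backend schema or code, and a boolean flag stands in for the row lock.

```typescript
// Illustrative in-memory sketch of the poller's claim-then-fulfill pattern.
// In Postgres this is `SELECT ... FOR UPDATE SKIP LOCKED LIMIT n`; here a
// `claimed` flag plays the role of the row lock.
type OutgoingRequest = { id: number; tenancyId: string; claimed: boolean; fulfilled: boolean };

// Claim up to `limit` unclaimed, unfulfilled requests; rows already claimed
// by another worker are skipped, like SKIP LOCKED.
function claimBatch(queue: OutgoingRequest[], limit: number): OutgoingRequest[] {
  const claimed: OutgoingRequest[] = [];
  for (const req of queue) {
    if (claimed.length >= limit) break;
    if (!req.claimed && !req.fulfilled) {
      req.claimed = true;
      claimed.push(req);
    }
  }
  return claimed;
}

const queue: OutgoingRequest[] = [
  { id: 1, tenancyId: "t1", claimed: false, fulfilled: false },
  { id: 2, tenancyId: "t2", claimed: false, fulfilled: false },
  { id: 3, tenancyId: "t3", claimed: false, fulfilled: false },
];

const workerA = claimBatch(queue, 2); // workerA claims ids 1 and 2
const workerB = claimBatch(queue, 2); // workerB only gets id 3; no overlap
```

The key property the real queue gets from `SKIP LOCKED` is the same one shown here: two concurrent claimers never process the same row.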
```mermaid
sequenceDiagram
    participant API as App API (CRUD)
    participant SyncLib as external-db-sync lib
    participant DB as Internal DB
    participant Queue as OutgoingRequest Queue
    API->>DB: Update ProjectUser / ContactChannel
    API->>SyncLib: withExternalDbSyncUpdate(wrapper)
    SyncLib->>DB: Set shouldUpdateSequenceId = true / update sequence metadata
    API->>SyncLib: markProjectUserForExternalDbSync(tenancyId, projectUserId)
    SyncLib->>Queue: enqueueExternalDbSync(tenancyId) -> INSERT OutgoingRequest (dedup)
    SyncLib-->>API: return updated entity
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
This reverts commit 937db90.
Pull request overview
This PR implements a comprehensive external database synchronization system to replicate data from Stack's internal database to user-configured external databases in near real-time.
Key changes:
- Adds sequence ID tracking for ProjectUser, ContactChannel, and DeletedRow tables
- Implements a sequencer that assigns sequence IDs to track change order
- Implements a poller that queues sync requests via QStash
- Implements a sync engine that pulls changes and pushes them to external databases
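The ordering guarantee these components rely on can be pictured with a small sketch: an upsert is applied only when the incoming sequence ID is strictly newer than the one already stored, so replayed or out-of-order deliveries cannot overwrite newer data. The types and helper below are illustrative, not the actual sync-engine implementation.

```typescript
// Hypothetical in-memory model of the sequence-guarded upsert semantics.
type SyncedRow = { id: string; sequenceId: number; data: string };

// Apply `incoming` only if it is newer than the stored row; returns whether
// the write was applied. Stale or duplicate deliveries are skipped.
function applyUpsert(table: Map<string, SyncedRow>, incoming: SyncedRow): boolean {
  const existing = table.get(incoming.id);
  if (existing && existing.sequenceId >= incoming.sequenceId) {
    return false; // stale or replayed delivery: skip
  }
  table.set(incoming.id, incoming);
  return true;
}

const table = new Map<string, SyncedRow>();
applyUpsert(table, { id: "u1", sequenceId: 5, data: "v1" }); // applied
applyUpsert(table, { id: "u1", sequenceId: 3, data: "old" }); // skipped: 3 < 5
applyUpsert(table, { id: "u1", sequenceId: 9, data: "v2" }); // applied
```

After the three calls, `"u1"` holds `data: "v2"` at sequence 9 regardless of delivery order, which is the invariant the sync engine needs.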
Reviewed changes
Copilot reviewed 27 out of 28 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| vercel.json | Adds cron jobs for sequencer and poller endpoints (runs every minute) |
| pnpm-lock.yaml | Updates package dependencies including pg client library |
| package.json | Adds @types/pg dependency for TypeScript support |
| apps/e2e/package.json | Adds pg and @types/pg for E2E testing |
| docker/dependencies/docker.compose.yaml | Adds external-db-test PostgreSQL service for testing |
| apps/backend/prisma/schema.prisma | Adds sequenceId fields, OutgoingRequest and DeletedRow tables |
| apps/backend/prisma/migrations/* | Database migrations for sequence IDs, triggers, and sync tables |
| packages/stack-shared/src/config/schema.ts | Adds dbSync configuration schema for external databases |
| packages/stack-shared/src/config/db-sync-mappings.ts | Defines default SQL mappings for PartialUsers table |
| apps/backend/src/lib/external-db-sync.ts | Core sync logic for pushing data to external databases |
| apps/backend/src/app/api/latest/internal/external-db-sync/* | API endpoints for sequencer, poller, and sync engine |
| apps/backend/scripts/run-cron-jobs.ts | Local development script to simulate cron jobs |
| apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-*.test.ts | Comprehensive E2E tests for basic, advanced, and race condition scenarios |
| apps/backend/src/route-handlers/smart-route-handler.tsx | Removes extensive logging from API requests |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
Greptile Overview

Greptile Summary

This PR implements a comprehensive external database synchronization system that continuously replicates changes from Stack's internal database to user-configured external databases. The implementation uses a sequence-based approach with four main components: a sequencer that assigns monotonically increasing IDs to changed rows, a poller that queues sync requests, a QStash-based queue for reliable delivery, and a sync engine that executes the actual data transfer using user-defined SQL mappings.
Confidence Score: 3/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User Action
    participant Internal as Internal DB
    participant Trigger as DB Triggers
    participant Sequencer as Sequencer<br/>(Cron: Every 1min)
    participant Poller as Poller<br/>(Cron: Every 1min)
    participant Queue as OutgoingRequest<br/>Table
    participant QStash as QStash Queue
    participant SyncEngine as Sync Engine
    participant External as External DB
    User->>Internal: INSERT/UPDATE/DELETE User/Contact
    Internal->>Trigger: Fire trigger
    Trigger->>Internal: Set shouldUpdateSequenceId=TRUE
    Note over Sequencer: Runs every 50ms for 2min
    Sequencer->>Internal: Call backfill_null_sequence_ids()
    Internal->>Internal: Assign sequenceId to changed rows
    Internal->>Internal: Set shouldUpdateSequenceId=FALSE
    Internal->>Queue: INSERT into OutgoingRequest<br/>(via enqueue_tenant_sync)
    Note over Poller: Runs every 50ms for 2min
    Poller->>Queue: SELECT pending requests<br/>(FOR UPDATE SKIP LOCKED)
    Queue->>Poller: Return up to 100 requests
    Poller->>Queue: UPDATE fulfilledAt=NOW()
    Poller->>QStash: publishJSON(sync-engine endpoint)
    QStash->>SyncEngine: POST /sync-engine
    SyncEngine->>SyncEngine: Verify Upstash signature
    SyncEngine->>Internal: Execute internalDbFetchQuery<br/>(with tenancyId parameter)
    Internal->>SyncEngine: Return changed rows (ordered by sequenceId)
    SyncEngine->>External: Connect to external DB
    SyncEngine->>External: Create table if needed<br/>(targetTableSchema)
    loop For each row
        SyncEngine->>External: Execute externalDbUpdateQuery<br/>(upsert with sequence check)
    end
    External->>External: Update only if newSeq > oldSeq
    SyncEngine->>QStash: Return success
```
Actionable comments posted: 10
🧹 Nitpick comments (20)
apps/backend/prisma/migrations/20251125030551_add_sequence_id/migration.sql (1)

1-6: Consider documenting the sequence increment strategy. The `INCREMENT BY 11` in the global sequence is unusual and likely implements a per-instance ID distribution strategy for multi-instance deployments. Add an inline SQL comment explaining this choice for future maintainers.

Ref: Past review feedback suggested consolidating all sequence-related migrations and documenting each step. Consider grouping this with the subsequent `shouldUpdateSequenceId` and index migrations into a single, well-documented migration file.

```sql
-- Add this comment to explain the strategy
CREATE SEQUENCE global_seq_id
  AS BIGINT
  START 1
  INCREMENT BY 11 -- Increment by 11 to enable per-instance ID ranges (e.g., instance 0: 1,12,23...; instance 1: 2,13,24...) in multi-instance deployments
  NO MINVALUE
  NO MAXVALUE;
```

apps/backend/prisma/migrations/20251128210001_add_should_update_sequence_id_indexes/migration.sql (1)
1-7: LGTM! The partial indexes on `shouldUpdateSequenceId = TRUE` are an efficient choice for queries filtering unsynced records, reducing index size by excluding already-synced rows. The naming and SPLIT_STATEMENT_SENTINEL structure are consistent with related migrations.

Consider adding an inline SQL comment explaining the partial index strategy:

```sql
-- Partial index for efficient querying of unsynced records
CREATE INDEX "ProjectUser_shouldUpdateSequenceId_idx" ON "ProjectUser"("shouldUpdateSequenceId") WHERE "shouldUpdateSequenceId" = TRUE;
```

apps/backend/prisma/migrations/20251128210000_add_should_update_sequence_id_columns/migration.sql (1)
1-7: LGTM! The addition of `shouldUpdateSequenceId` with default `TRUE` across ProjectUser, ContactChannel, and DeletedRow ensures all existing records are marked for synchronization to external databases. This flag enables efficient querying of unsynced records paired with the partial indexes in the follow-up migration.

Add inline SQL comments to clarify the column's purpose:

```sql
-- Track which records need to be synced to external databases (reset to FALSE after sync)
ALTER TABLE "ProjectUser" ADD COLUMN "shouldUpdateSequenceId" BOOLEAN NOT NULL DEFAULT TRUE;
```

apps/backend/prisma/migrations/20251125180000_add_deleted_row_table/migration.sql (1)
8-9: Consider a data retention strategy for fulfilled deletions. The `fulfilledAt` timestamp tracks when deletions have been synced, but there's no documented retention policy or cleanup process for old rows. Over time, the `DeletedRow` table could accumulate millions of fulfilled records, impacting query performance and storage.

Recommend documenting a data retention policy (e.g., "delete rows where fulfilledAt < NOW() - INTERVAL '30 days'") and implementing a cleanup job.
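A cleanup job along the lines suggested above could select expired rows as sketched here. The row shape and helper name are hypothetical, and the 30-day window is just the example value from the suggestion:

```typescript
// Hypothetical retention sweep for fulfilled DeletedRow records.
type DeletedRowRecord = { id: number; fulfilledAt: Date | null };

// Returns the ids a cleanup job would delete: rows fulfilled more than
// `retentionDays` ago. Unfulfilled rows (fulfilledAt === null) are always kept.
function expiredRowIds(rows: DeletedRowRecord[], retentionDays: number, now: Date): number[] {
  const cutoff = now.getTime() - retentionDays * 24 * 60 * 60 * 1000;
  return rows
    .filter(r => r.fulfilledAt !== null && r.fulfilledAt.getTime() < cutoff)
    .map(r => r.id);
}

const now = new Date("2025-12-01T00:00:00Z");
const rows: DeletedRowRecord[] = [
  { id: 1, fulfilledAt: new Date("2025-10-01T00:00:00Z") }, // 61 days old: expired
  { id: 2, fulfilledAt: new Date("2025-11-20T00:00:00Z") }, // 11 days old: kept
  { id: 3, fulfilledAt: null },                              // not yet synced: kept
];
const expired = expiredRowIds(rows, 30, now); // only id 1
```

The same predicate translates directly to a `DELETE ... WHERE "fulfilledAt" < NOW() - INTERVAL '30 days'` batch job.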
apps/backend/prisma/migrations/20251125172224_add_outgoing_request/migration.sql (1)
2-12: Consider adding an index on `createdAt` for query performance. The poller route orders by `createdAt` when claiming pending requests (`ORDER BY "createdAt"`). As the table grows, this ordering operation on unfulfilled requests could benefit from a composite index.

```diff
 CREATE INDEX "OutgoingRequest_fulfilledAt_idx" ON "OutgoingRequest"("fulfilledAt");
+
+CREATE INDEX "OutgoingRequest_createdAt_idx" ON "OutgoingRequest"("createdAt");
```

Alternatively, a composite index `(fulfilledAt, createdAt)` could further optimize the query pattern used in the poller.

apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts (2)
81-87: Fragile environment detection pattern. Using `.includes("development")` on the environment string is unusual and could match unintended values. The standard approach is direct equality comparison.

```diff
-if (getNodeEnvironment().includes("development") || getNodeEnvironment().includes("test")) {
+const env = getNodeEnvironment();
+if (env === "development" || env === "test") {
```
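The difference between the two checks can be shown with a minimal sketch; `isDevOrTest` is a hypothetical helper, not part of the codebase:

```typescript
// Strict equality check, as suggested above.
function isDevOrTest(env: string): boolean {
  return env === "development" || env === "test";
}

// Unlike `.includes("development")`, this does not match unintended values:
const a = isDevOrTest("development");            // true
const b = isDevOrTest("test");                   // true
const c = isDevOrTest("predevelopment-staging"); // false, but includes() would match
```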
74-74: Consider defining a type for `qstashOptions`. Casting to `any` loses type safety. Define an interface for the expected shape of `qstashOptions` to catch errors at compile time.

```typescript
interface QstashOptions {
  url: string;
  body: unknown;
}
// ...
const options = request.qstashOptions as QstashOptions;
```

packages/stack-shared/src/config/schema.ts (1)
11-11: Unused import: `yupArray`. The import `yupArray` was added but doesn't appear to be used in this file. Based on the code, only `yupTuple` is used for array-like structures (Line 216).

```diff
-import { productSchema, userSpecifiedIdSchema, yupArray, yupBoolean, yupDate, yupMixed, yupNever, yupNumber, yupObject, yupRecord, yupString, yupTuple, yupUnion } from "../schema-fields";
+import { productSchema, userSpecifiedIdSchema, yupBoolean, yupDate, yupMixed, yupNever, yupNumber, yupObject, yupRecord, yupString, yupTuple, yupUnion } from "../schema-fields";
```

apps/backend/scripts/run-cron-jobs.ts (2)
25-29: Cron jobs don't run immediately on startup. The `setInterval` schedules jobs to run after the first 60-second delay. Consider whether the jobs should also run immediately when the script starts:

```diff
 for (const endpoint of endpoints) {
+  run(endpoint); // Run immediately on startup
   setInterval(() => {
     run(endpoint);
   }, 60000);
 }
```
16-23: Consider adding a timeout to the fetch request. The `fetch` call has no timeout. If the backend hangs, this could cause the cron job to stall indefinitely. Per coding guidelines, blocking calls without timeouts can cause cascading issues.

```diff
 const run = (endpoint: string) => runAsynchronously(async () => {
   console.log(`Running ${endpoint}...`);
-  const res = await fetch(`${baseUrl}${endpoint}`, {
-    headers: { 'Authorization': `Bearer ${cronSecret}` },
-  });
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 120000); // 2 minute timeout
+  try {
+    const res = await fetch(`${baseUrl}${endpoint}`, {
+      headers: { 'Authorization': `Bearer ${cronSecret}` },
+      signal: controller.signal,
+    });
+    if (!res.ok) throw new StackAssertionError(`Failed to call ${endpoint}: ${res.status} ${res.statusText}\n${await res.text()}`, { res });
+    console.log(`${endpoint} completed.`);
+  } finally {
+    clearTimeout(timeoutId);
+  }
-  if (!res.ok) throw new StackAssertionError(`Failed to call ${endpoint}: ${res.status} ${res.statusText}\n${await res.text()}`, { res });
-  console.log(`${endpoint} completed.`);
 });
```

apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts (1)
50-65: Consider adding a circuit breaker for repeated failures. The loop catches errors and continues (Lines 53-55), which could lead to repeatedly hitting the database with failing queries for the full 2-minute duration. Consider tracking consecutive failures and breaking early:

```diff
 let iterations = 0;
+let consecutiveFailures = 0;
+const maxConsecutiveFailures = 10;
 while (performance.now() - startTime < maxDurationMs) {
   try {
     await globalPrismaClient.$executeRaw`SELECT backfill_null_sequence_ids()`;
+    consecutiveFailures = 0; // Reset on success
   } catch (error) {
     console.warn('[sequencer] Failed to run backfill_null_sequence_ids:', error);
+    consecutiveFailures++;
+    if (consecutiveFailures >= maxConsecutiveFailures) {
+      console.error('[sequencer] Too many consecutive failures, breaking loop');
+      break;
+    }
   }
   iterations++;
   // ...
 }
```

apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts (1)
298-349: Consider parameterizing the long sleep duration. The 70-second sleep makes this test slow for CI. If the poller interval is configurable in tests, you could reduce the wait time proportionally. Alternatively, mark this as a "slow test" that runs only in nightly builds.
apps/backend/prisma/schema.prisma (1)
895-911: Consider adding an index on `sequenceId` for `DeletedRow` if sync performance becomes a concern. The model has a partial index on `shouldUpdateSequenceId` (migration 20251128210001) that helps the backfill function. The sync engine's query filters by `tenancyId` and `tableName`, then orders by `sequenceId`. While the `@unique` constraint on `sequenceId` provides an index, a composite index on `(tenancyId, tableName, sequenceId)` could optimize the sync fetch query if `DeletedRow` volumes grow significantly. This is an optional optimization rather than a requirement.

apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts (3)
2-2: Unused import: `generateSecureRandomString`. This import is not used anywhere in the file.

```diff
-import { generateSecureRandomString } from '@stackframe/stack-shared/dist/utils/crypto';
```
390-390: Missing test timeout. This test lacks an explicit timeout, unlike other tests in this file. Consider adding `TEST_TIMEOUT` for consistency and to prevent hanging tests.

```diff
-  });
+  }, TEST_TIMEOUT);
```
1011-1011: Prefer `slice()` over deprecated `substr()`. `substr()` is deprecated in favor of `slice()` or `substring()`.

```diff
-  const testRunId = `${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
+  const testRunId = `${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
```

apps/backend/src/lib/external-db-sync.ts (3)
48-69: Row-by-row processing may cause a performance bottleneck. Each row is processed with a separate `await externalClient.query()` call. For high-volume syncs (e.g., the 200-user test), this results in 200+ sequential round-trips to the external database.

Consider batching multiple rows into a single transaction or using multi-value inserts for better throughput.

```diff
+// Batch rows within a transaction for better performance
+await externalClient.query('BEGIN');
+try {
   for (const row of newRows) {
     const { tenancyId, ...rest } = row;
     await externalClient.query(upsertQuery, Object.values(rest));
   }
+  await externalClient.query('COMMIT');
+} catch (err) {
+  await externalClient.query('ROLLBACK');
+  throw err;
+}
```
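The multi-value insert alternative mentioned above can be sketched as a placeholder builder; table and column names are illustrative, and real code would also need to flatten the batched row values to match the placeholders:

```typescript
// Build a single parameterized INSERT covering `rowCount` rows, instead of
// one round-trip per row. Placeholders are numbered $1, $2, ... row by row.
function buildMultiValueInsert(table: string, columns: string[], rowCount: number): string {
  const valueGroups: string[] = [];
  for (let row = 0; row < rowCount; row++) {
    const placeholders = columns.map((_, col) => `$${row * columns.length + col + 1}`);
    valueGroups.push(`(${placeholders.join(", ")})`);
  }
  return `INSERT INTO ${table} (${columns.join(", ")}) VALUES ${valueGroups.join(", ")}`;
}

const sql = buildMultiValueInsert("users", ["id", "name"], 2);
// INSERT INTO users (id, name) VALUES ($1, $2), ($3, $4)
```

Note that Postgres caps placeholders per statement, so very large batches would still need chunking.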
82-90: Type safety: Excessive `as any` casts. Using `as any` to access `sourceTables` and `targetTablePrimaryKey` bypasses TypeScript's type checking. If the `CompleteConfig` type is correct, consider updating it to include these properties properly, or use type guards.
192-203: Sequential database syncing limits throughput. Each external database is synced sequentially. For tenants with multiple external DBs configured, this could be parallelized with `Promise.all()` to reduce total sync time.

```diff
-for (const [dbId, dbConfig] of Object.entries(externalDatabases)) {
-  await syncDatabase(dbId, dbConfig, internalPrisma, tenancy.id);
-}
+await Promise.all(
+  Object.entries(externalDatabases).map(([dbId, dbConfig]) =>
+    syncDatabase(dbId, dbConfig, internalPrisma, tenancy.id)
+  )
+);
```

apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (1)
209-215: Consider adding a type for the `externalDatabases` parameter. The `externalDatabases` parameter is typed as `any`. Consider using the proper type from the config schema for better type safety.

```diff
-export async function createProjectWithExternalDb(externalDatabases: any, projectOptions?: { display_name?: string, description?: string }) {
+export async function createProjectWithExternalDb(
+  externalDatabases: Record<string, { type: string; connectionString: string; mappings?: unknown }>,
+  projectOptions?: { display_name?: string; description?: string }
+) {
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (27)
- apps/backend/package.json (2 hunks)
- apps/backend/prisma/migrations/20251125030551_add_sequence_id/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251125172224_add_outgoing_request/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251125180000_add_deleted_row_table/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251128210000_add_should_update_sequence_id_columns/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251128210001_add_should_update_sequence_id_indexes/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251128210002_add_reset_sequence_id_function_and_triggers/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251128210003_log_deleted_row_function_and_triggers/migration.sql (1 hunks)
- apps/backend/prisma/migrations/20251128210004_add_sync_functions/migration.sql (1 hunks)
- apps/backend/prisma/schema.prisma (4 hunks)
- apps/backend/scripts/run-cron-jobs.ts (1 hunks)
- apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts (1 hunks)
- apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts (1 hunks)
- apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx (1 hunks)
- apps/backend/src/lib/external-db-sync.ts (1 hunks)
- apps/backend/src/route-handlers/smart-route-handler.tsx (1 hunks)
- apps/e2e/package.json (1 hunks)
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts (1 hunks)
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts (1 hunks)
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts (1 hunks)
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (1 hunks)
- docker/dependencies/docker.compose.yaml (2 hunks)
- package.json (1 hunks)
- packages/stack-shared/src/config/db-sync-mappings.ts (1 hunks)
- packages/stack-shared/src/config/schema-fuzzer.test.ts (1 hunks)
- packages/stack-shared/src/config/schema.ts (4 hunks)
- vercel.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: For blocking alerts and errors, never use `toast`, as they are easily missed by the user. Instead, use alerts
Keep hover/click transitions snappy and fast. Don't delay the action with a pre-transition; apply transitions after the action instead
NEVER try-catch-all, NEVER void a promise, and NEVER `.catch(console.error)`. Use `runAsynchronously` or `runAsynchronouslyWithAlert` instead for asynchronous error handling
When creating hover transitions, avoid hover-enter transitions and just use hover-exit transitions (e.g., `transition-colors hover:transition-none`)
Use ES6 maps instead of records wherever possible
Files:
- packages/stack-shared/src/config/schema-fuzzer.test.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts
- packages/stack-shared/src/config/db-sync-mappings.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
- apps/backend/src/route-handlers/smart-route-handler.tsx
- apps/backend/scripts/run-cron-jobs.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
- apps/backend/src/lib/external-db-sync.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts
- packages/stack-shared/src/config/schema.ts
**/*.{test,spec}.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
When writing tests, prefer `.toMatchInlineSnapshot` over other selectors, if possible
Files:
- packages/stack-shared/src/config/schema-fuzzer.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
- apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts
apps/backend/src/app/api/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
The project uses a custom route handler system in the backend for consistent API responses
Files:
- apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx
- apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
apps/{backend,dashboard}/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
NEVER use Next.js dynamic functions if you can avoid them. Instead, prefer using a client component and prefer `usePathname` instead of `await params`
Files:
- apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx
- apps/backend/src/route-handlers/smart-route-handler.tsx
- apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
- apps/backend/src/lib/external-db-sync.ts
packages/stack-shared/src/config/schema.ts
📄 CodeRabbit inference engine (AGENTS.md)
Whenever making backwards-incompatible changes to the config schema, update the migration functions in
packages/stack-shared/src/config/schema.ts
Files:
packages/stack-shared/src/config/schema.ts
🧠 Learnings (4)
📚 Learning: 2025-11-28T21:21:39.123Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-28T21:21:39.123Z
Learning: Applies to packages/stack-shared/src/config/schema.ts : Whenever making backwards-incompatible changes to the config schema, update the migration functions in `packages/stack-shared/src/config/schema.ts`
Applied to files:
- packages/stack-shared/src/config/schema-fuzzer.test.ts
- packages/stack-shared/src/config/db-sync-mappings.ts
- apps/backend/prisma/migrations/20251128210004_add_sync_functions/migration.sql
- apps/backend/package.json
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/external-db-sync.ts
- packages/stack-shared/src/config/schema.ts
📚 Learning: 2025-11-28T21:21:39.123Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-28T21:21:39.123Z
Learning: Applies to apps/backend/src/app/api/**/*.{ts,tsx} : The project uses a custom route handler system in the backend for consistent API responses
Applied to files:
- apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts
- apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx
- apps/backend/src/route-handlers/smart-route-handler.tsx
- apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts
📚 Learning: 2025-11-28T21:21:39.123Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-28T21:21:39.123Z
Learning: Always add new E2E tests when changing the API or SDK interface
Applied to files:
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
📚 Learning: 2025-11-28T21:21:39.123Z
Learnt from: CR
Repo: stack-auth/stack-auth PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-28T21:21:39.123Z
Learning: Applies to apps-{frontend,config}.ts{,x} : To update the list of apps available, edit `apps-frontend.tsx` and `apps-config.ts`
Applied to files:
packages/stack-shared/src/config/schema.ts
🧬 Code graph analysis (6)
apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx (5)
apps/backend/src/route-handlers/smart-route-handler.tsx (1)
- createSmartRouteHandler (185-270)

packages/stack-shared/src/schema-fields.ts (5)
- yupObject (247-251)
- yupTuple (217-221)
- yupString (187-190)
- yupNumber (191-194)
- yupBoolean (195-198)

apps/backend/src/lib/upstash.tsx (1)
- ensureUpstashSignature (16-35)

apps/backend/src/lib/tenancies.tsx (1)
- getTenancy (82-91)

apps/backend/src/lib/external-db-sync.ts (1)
- syncExternalDatabases (192-203)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts (2)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (6)
- TestDbManager (16-67)
- createProjectWithExternalDb (209-215)
- waitForTable (150-169)
- waitForCondition (73-88)
- TEST_TIMEOUT (10-10)
- waitForSyncedDeletion (125-145)

apps/e2e/tests/backend/backend-helpers.ts (1)
- niceBackendFetch (109-173)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts (2)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (9)
- TestDbManager (16-67)
- createProjectWithExternalDb (209-215)
- waitForSyncedData (93-120)
- verifyInExternalDb (182-189)
- TEST_TIMEOUT (10-10)
- waitForSyncedDeletion (125-145)
- verifyNotInExternalDb (174-177)
- waitForTable (150-169)
- waitForCondition (73-88)

apps/e2e/tests/backend/backend-helpers.ts (1)
- niceBackendFetch (109-173)
apps/backend/scripts/run-cron-jobs.ts (3)
packages/stack-shared/src/utils/env.tsx (1)
- getEnvVariable (16-58)

packages/stack-shared/src/utils/promises.tsx (1)
- runAsynchronously (343-366)

packages/stack-shared/src/utils/errors.tsx (1)
- StackAssertionError (69-85)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts (2)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (9)
- TestDbManager (16-67)
- createProjectWithExternalDb (209-215)
- waitForCondition (73-88)
- TEST_TIMEOUT (10-10)
- waitForTable (150-169)
- waitForSyncedData (93-120)
- HIGH_VOLUME_TIMEOUT (11-11)
- waitForSyncedDeletion (125-145)
- verifyNotInExternalDb (174-177)

packages/stack-shared/src/config/db-sync-mappings.ts (1)
- DEFAULT_DB_SYNC_MAPPINGS (1-166)
packages/stack-shared/src/config/schema.ts (2)
packages/stack-shared/src/schema-fields.ts (5)
- yupObject (247-251)
- yupRecord (283-322)
- userSpecifiedIdSchema (442-442)
- yupString (187-190)
- yupTuple (217-221)

packages/stack-shared/src/config/db-sync-mappings.ts (1)
- DEFAULT_DB_SYNC_MAPPINGS (1-166)
🪛 GitHub Check: restart-dev-and-test
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts
- [warning] 334-334: Trailing spaces not allowed

apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts
- [failure] 223-223: Expected indentation of 2 spaces but found 4
- [failure] 222-222: Expected indentation of 4 spaces but found 6
- [failure] 221-221: Expected indentation of 2 spaces but found 4
- [warning] 31-31: Trailing spaces not allowed
🪛 GitHub Check: restart-dev-and-test-with-custom-base-port

Same findings as restart-dev-and-test above.

🪛 GitHub Check: setup-tests

Same findings as restart-dev-and-test above.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: CodeQL analysis (javascript-typescript)
- GitHub Check: Vercel Agent Review
- GitHub Check: Cursor Bugbot
- GitHub Check: claude-review
🔇 Additional comments (38)
package.json (1)
68-68: LGTM! The addition of `@types/pg` to the monorepo root devDependencies provides TypeScript type support for PostgreSQL interactions across backend migrations, utilities, and end-to-end tests introduced in this PR.

apps/e2e/package.json (1)
21-23: LGTM! The additions of `pg` and `@types/pg` to e2e devDependencies enable end-to-end tests to interact with external PostgreSQL databases provisioned by the new external-db-test service. Version pinning is consistent with the monorepo root.

docker/dependencies/docker.compose.yaml (1)
207-231: LGTM! The new `external-db-test` PostgreSQL service follows the established patterns used by other test databases in the compose file (port mapping with `NEXT_PUBLIC_STACK_PORT_PREFIX`, volume persistence, hardcoded dev credentials). This isolates external DB sync tests from the main database and enables reproducible e2e testing.

apps/backend/package.json (2)
10-10: LGTM! The addition of `run-cron-jobs` to the dev concurrency labels integrates the background cron scheduler into local development, enabling testing of the sequencer and poller workflows that drive external DB synchronization.

Please verify that `scripts/run-cron-jobs.ts` properly handles process signals (SIGTERM, SIGINT) so that `pnpm run dev` terminates cleanly without leaving orphaned cron processes.
38-39: LGTM! The `run-cron-jobs` script follows the established pattern: wrapped with `with-env` for environment setup and uses `tsx` for TypeScript execution. Dependencies (`pg` and `tsx`) are already present in the file.

apps/backend/prisma/migrations/20251125180000_add_deleted_row_table/migration.sql (1)
15-19: LGTM! The indexes on `tenancyId`, `tableName`, and `sequenceId` are well-chosen for the expected query patterns: filtering by tenant, scanning deleted rows from specific tables, and correlating with sequence IDs for sync ordering.

vercel.json (1)
2-11: LGTM! The cron configuration correctly schedules both the poller and sequencer endpoints to run every minute. This aligns with the internal route implementations that handle authentication via `CRON_SECRET`.

packages/stack-shared/src/config/schema-fuzzer.test.ts (1)
52-72: LGTM! The `dbSync` fuzzer configuration properly mirrors the new schema structure with representative test values for all fields, including `externalDatabases`, `mappings`, and their nested properties.

apps/backend/src/route-handlers/smart-route-handler.tsx (1)

101-102: Verify the logging removal is intentional. Based on the AI summary, this change removes development-time logging, including request/response details and timing measurements. While this reduces log noise, ensure you have alternative observability (e.g., via Sentry spans at lines 71-80 and 232-248) to debug production issues.
apps/backend/prisma/migrations/20251128210003_log_deleted_row_function_and_triggers/migration.sql (2)

44-54: Triggers are correctly attached. Both triggers are appropriately configured as `BEFORE DELETE FOR EACH ROW`, which ensures the `OLD` record is available for logging before the actual deletion occurs.

2-42: Well-structured trigger function for audit logging. The implementation correctly uses `BEFORE DELETE` to capture `OLD` row data and dynamically extracts primary key columns via `pg_index`/`pg_attribute`. Both ProjectUser and ContactChannel tables have the required `tenancyId` column, so the function will work as implemented. However, a couple of observations remain:

- If additional tables are added to this trigger function in the future without a `tenancyId` column, the function will fail at runtime. Consider documenting this requirement or adding a check.
- The dynamic PK extraction loop runs on every delete. For high-volume deletes, consider whether this overhead is acceptable or whether caching the PK structure would be beneficial.
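The caching idea can be sketched as a simple per-table memo (names are hypothetical; the real lookup happens in PL/pgSQL against `pg_index`/`pg_attribute`, so an application-side cache like this is only an analogue of the optimization):

```typescript
// Illustrative cache of primary-key columns per table, so the expensive
// catalog lookup runs once per table instead of once per deleted row.
const pkColumnCache = new Map<string, string[]>();

function getPkColumns(
  tableName: string,
  lookup: (table: string) => string[], // stand-in for the pg_index/pg_attribute query
): string[] {
  const cached = pkColumnCache.get(tableName);
  if (cached !== undefined) return cached;
  const columns = lookup(tableName);
  pkColumnCache.set(tableName, columns);
  return columns;
}
```

In Postgres itself, a comparable effect could be achieved by keying the PK metadata on `TG_RELID` in a session-local structure, at the cost of extra complexity; whether the per-delete lookup is actually a bottleneck should be measured first.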
packages/stack-shared/src/config/schema.ts (2)

204-223: New `dbSync` schema added to branch config. The schema structure looks reasonable. A few observations:

- `connectionString` is marked as `defined()` (required), but the defaults (lines 594-595) set `type` and `connectionString` to `undefined`. This could cause validation issues when a database entry exists but isn't fully configured.
- The `targetTablePrimaryKey` uses `yupTuple([yupString().defined()])`, which only allows exactly one string element. If composite primary keys with multiple columns are needed, consider using `yupArray(yupString().defined())` instead.
- Per the coding guidelines for this file, backwards-incompatible changes to the config schema require migration functions in `migrateConfigOverride`. Since this is a new field with defaults, existing configs should remain valid, but please confirm this is intentional.

Based on learnings, whenever making backwards-incompatible changes to the config schema, update the migration functions. Please verify that adding `dbSync` doesn't require a migration for existing projects.

591-598: Defaults for `dbSync.externalDatabases` look consistent with the schema. The default factory function returns `undefined` for `type` and `connectionString`, with `DEFAULT_DB_SYNC_MAPPINGS` for mappings. This aligns with the schema, where these fields are optional until explicitly configured.

apps/backend/scripts/run-cron-jobs.ts (1)
32-36: Error handling for `main()` follows project conventions. The `runAsynchronously` usage for the `run` function and the `.catch()` on `main()` with `process.exit(1)` are appropriate for a standalone script. The eslint disable comment acknowledges the intentional deviation.

apps/backend/src/app/api/latest/internal/external-db-sync/sync-engine/route.tsx (1)

7-31: Route handler structure follows project conventions. The use of `createSmartRouteHandler` with metadata and request/response schemas aligns with the project's custom route handler system. The Upstash signature validation via `ensureUpstashSignature` is appropriate for webhook security.

apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts (2)
38-42: Auth check is straightforward and correct. The Bearer token comparison against `CRON_SECRET` is appropriate for internal cron-triggered endpoints. The 401 StatusError on mismatch is the correct response.

14-37: Route handler metadata and schema are well-defined. The `hidden: true` flag appropriately hides this internal endpoint from public documentation. The response schema correctly reflects the returned structure.

apps/backend/prisma/migrations/20251128210002_add_reset_sequence_id_function_and_triggers/migration.sql (1)

1-29: LGTM! The trigger logic correctly re-flags rows for sequence ID updates: when a processed row (`shouldUpdateSequenceId = FALSE`) gets modified, the trigger resets it to `TRUE` so the sequencer picks it up again. The `BEFORE UPDATE` timing ensures the flag is set before the row is committed.

apps/backend/prisma/migrations/20251128210004_add_sync_functions/migration.sql (1)
24-99: LGTM! Batched processing with concurrency-safe locking. The `FOR UPDATE SKIP LOCKED` pattern correctly handles concurrent sequencer invocations without deadlocks. Each table is limited to 1000 rows per invocation, so ensure the sequencer cron runs frequently enough (50ms as documented) to drain backlogs during high-write periods.

apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-race.test.ts (1)

222-750: Well-structured race condition test suite. The tests comprehensively cover critical scenarios: uncommitted visibility, commit ordering, multi-row transactions, and sequence ID precedence. The `finally` blocks ensure proper client cleanup even on test failures.

packages/stack-shared/src/config/db-sync-mappings.ts (2)
77-164: Correct sequence-based conflict resolution. The asymmetric comparison (`>=` for deletes, `>` for upserts) ensures delete events at the same sequence ID take precedence, preventing stale data resurrection. The `ON CONFLICT` clause's final guard (`EXCLUDED."sequenceId" > "PartialUsers"."sequenceId"`) adds a second layer of protection against out-of-order processing.
1-76: LGTM! The mapping correctly handles the `BooleanTrue` enum conversion (line 34) and uses `GREATEST()` to track the maximum sequence ID across the joined tables. The `LIMIT 1000 ORDER BY sequenceId` enables paginated incremental fetching.

apps/backend/prisma/schema.prisma (1)

176-178: LGTM! The schema additions are well-designed: `@unique` on `sequenceId` prevents duplicates, `@default(true)` on `shouldUpdateSequenceId` ensures new rows are automatically queued for sequencing, and the `OutgoingRequest` index on `fulfilledAt` supports efficient pending-request lookups.

Also applies to: 258-260, 884-893
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-advanced.test.ts (6)

19-29: LGTM! Test setup and teardown are properly structured with `describe.sequential` to avoid concurrency issues, and the `TestDbManager` lifecycle is correctly managed.

39-157: LGTM! The multi-tenant isolation test is thorough: it validates that User A only appears in Project A's databases and User B only in Project B's, with explicit negative assertions to confirm no cross-tenant leakage.

167-237: LGTM! The sequenceId tracking test correctly validates ordering guarantees using `BigInt` for comparison, which is appropriate for potentially large sequence values.

434-505: LGTM! The high-volume batching test is well-designed, with retry logic, unique email generation, and appropriate timeout handling for the 200-user stress test.

625-692: LGTM! The write protection test properly validates that readonly PostgreSQL credentials prevent unauthorized modifications while still allowing reads; this is a valuable security verification.

883-980: LGTM! The complex lifecycle test thoroughly exercises the full create → update → update → delete → recreate → update sequence with proper sequenceId tracking and ID verification at each step.

apps/backend/src/lib/external-db-sync.ts (1)

6-8: LGTM! Simple and clean configuration accessor function.
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts (5)

1-17: LGTM! Imports and test suite setup are correctly structured with `describe.sequential` to avoid shared-state issues.

37-60: LGTM! The insert test correctly validates that new users appear in the external DB after sync completes.

113-150: LGTM! The delete test properly validates the full deletion flow, including verifying the 404 response from the internal API before checking external DB sync.

304-333: LGTM! The resilience test is valuable: it confirms that one bad database configuration doesn't block synchronization to healthy databases.

520-565: LGTM! The race condition protection test cleverly simulates an out-of-order delete by directly issuing a conditional DELETE against the external DB, validating that sequence-based guards work correctly.
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts (3)

16-67: LGTM! The `TestDbManager` class is well-designed with proper resource management: unique database names prevent collisions, and cleanup safely handles errors during database drops.

73-88: LGTM! Generic polling helper with configurable timeout, interval, and descriptive error messages; clean and reusable.

93-120: LGTM! `waitForSyncedData` properly handles the case where the table doesn't exist yet (error code `42P01`) and supports optional name verification.
N2D4
left a comment
this looks awesome! i have a lot of comments but overall this is very exciting :]
- i really like how much of this PR is tests (almost two-thirds of all lines of code), made it very pleasant to review :]
- important: every time something is different than what we expect (eg. some assertion is not true, etc) it should always be loud and annoying. the worst thing that could happen is that an error is quiet, bothering users but not appearing in our alerts on sentry. throw new StackAssertionError is the preferred way to do this (or the throwErr function which does the same), with captureError for the cases where that doesn't work.
- good job on separating things into files! this folder structure makes a lot of sense, i think
- make sure to really understand the security sensitive parts of the PR. in particular i think the "grant access to the Users table to everyone" in the table schema query and the custom configs are really dangerous, a less thorough reviewer may have missed them
Co-authored-by: Konsti Wohlwend <n2d4xc@gmail.com>
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/backend/scripts/run-cron-jobs.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}: For blocking alerts and errors, never use `toast`, as they are easily missed by the user. Instead, use alerts.
Keep hover/click transitions snappy and fast. Don't delay the action with a pre-transition; apply transitions after the action instead.
NEVER try-catch-all, NEVER void a promise, and NEVER `.catch(console.error)`. Use `runAsynchronously` or `runAsynchronouslyWithAlert` instead for asynchronous error handling.
When creating hover transitions, avoid hover-enter transitions and just use hover-exit transitions (e.g., `transition-colors hover:transition-none`).
Use ES6 maps instead of records wherever possible.
Files:
apps/backend/scripts/run-cron-jobs.ts
🧬 Code graph analysis (1)
apps/backend/scripts/run-cron-jobs.ts (4)

- packages/stack-shared/src/utils/env.tsx (1): `getEnvVariable` (16-58)
- apps/dashboard/src/components/vibe-coding/chat-adapters.ts (1): `run` (22-59)
- packages/stack-shared/src/utils/promises.tsx (2): `runAsynchronously` (343-366), `wait` (260-268)
- packages/stack-shared/src/utils/errors.tsx (1): `StackAssertionError` (69-85)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: Vercel Agent Review
- GitHub Check: build (22.x)
- GitHub Check: build (22.x)
- GitHub Check: build (22.x)
- GitHub Check: check_prisma_migrations (22.x)
- GitHub Check: setup-tests
- GitHub Check: all-good
- GitHub Check: lint_and_build (latest)
- GitHub Check: docker
- GitHub Check: restart-dev-and-test-with-custom-base-port
- GitHub Check: restart-dev-and-test
…er/route.ts Co-authored-by: Konsti Wohlwend <n2d4xc@gmail.com>
…er/route.ts Co-authored-by: Konsti Wohlwend <n2d4xc@gmail.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts (3)
46-49: Consider renaming `busySleepMs` and revisiting the 2-minute window. Minor naming and tuning nit:

- `busySleepMs` is a bit misleading since this is not a busy-spin; something like `pollSleepMs` or `loopSleepMs` would be clearer.
- Previous feedback suggested a `maxDurationMs` of ~3 minutes to tolerate worst-case Vercel cron skew (e.g. invocations at `:05` and `:06` still overlapping enough to drain backlogs). If that concern still applies, consider bumping from 2 to 3 minutes.
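The window check being tuned here can be sketched as a pure budget predicate (names and numbers are assumptions, not the route's actual variables): only start another iteration if the sleep plus the elapsed time still fits inside the window.

```typescript
// Illustrative loop-budget check for a cron-bounded poller.
function canRunAnotherIteration(
  startedAtMs: number,   // when this invocation began
  nowMs: number,         // current time
  maxDurationMs: number, // e.g. 3 minutes to tolerate cron skew
  pollSleepMs: number,   // sleep between iterations
): boolean {
  return (nowMs - startedAtMs) + pollSleepMs < maxDurationMs;
}
```

With a 3-minute budget, two overlapping invocations started a minute apart still cover the gap between cron ticks without either one running unbounded.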
79-85: Document the `host.docker.internal` rewrite for dev/test. The dev/test rewrite of `localhost`/`127.0.0.1` to `host.docker.internal` is subtle and environment-specific. Adding a short comment here explaining that this is required for Dockerized e2e / local QStash to reach the backend from inside the container would help future readers understand why this block exists and avoid accidentally "simplifying" it away.

You might also cache `getNodeEnvironment()` in a local variable to avoid calling it twice, though that's purely cosmetic.
51-68: Avoid marking `OutgoingRequest` as fulfilled before the QStash publish succeeds.

Right now `claimPendingRequests` sets `fulfilledAt = NOW()` in a transaction (lines 54-64), before `processRequests` actually calls `upstash.publishJSON` (lines 88-91). If `publishJSON` fails, the request stays marked as fulfilled and is never retried, so the external DB can silently fall out of sync. This matches prior automated feedback and should be treated as at-least-once delivery instead of maybe-once.

A safer pattern here is:

- In `claimPendingRequests`, only select pending rows with `FOR UPDATE SKIP LOCKED` and do not touch `fulfilledAt`.
- After a successful `publishJSON` call for a given request, set `fulfilledAt = NOW()` in a separate transaction.
- Leave failed publishes with `fulfilledAt IS NULL` so they get picked up again on the next poll.

Concretely:

```diff
-async function claimPendingRequests(): Promise<OutgoingRequest[]> {
-  return await retryTransaction(globalPrismaClient, async (tx) => {
-    const rows = await tx.$queryRaw<OutgoingRequest[]>`
-      UPDATE "OutgoingRequest"
-      SET "fulfilledAt" = NOW()
-      WHERE "id" IN (
-        SELECT id
-        FROM "OutgoingRequest"
-        WHERE "fulfilledAt" IS NULL
-        ORDER BY "createdAt"
-        LIMIT 100
-        FOR UPDATE SKIP LOCKED
-      )
-      RETURNING *;
-    `;
-    return rows;
-  });
-}
+async function claimPendingRequests(): Promise<OutgoingRequest[]> {
+  return await retryTransaction(globalPrismaClient, async (tx) => {
+    const rows = await tx.$queryRaw<OutgoingRequest[]>`
+      SELECT "id", "createdAt", "qstashOptions", "fulfilledAt"
+      FROM "OutgoingRequest"
+      WHERE "fulfilledAt" IS NULL
+      ORDER BY "createdAt"
+      LIMIT 100
+      FOR UPDATE SKIP LOCKED;
+    `;
+    return rows;
+  });
+}
```

and after a successful publish:

```diff
-await upstash.publishJSON({
-  url: fullUrl,
-  body: options.body,
-});
-
-processed++;
+await upstash.publishJSON({
+  url: fullUrl,
+  body: options.body,
+});
+
+// Only mark as fulfilled after successful QStash publish
+await retryTransaction(globalPrismaClient, async (tx) => {
+  await tx.$queryRaw`
+    UPDATE "OutgoingRequest"
+    SET "fulfilledAt" = NOW()
+    WHERE "id" = ${request.id};
+  `;
+});
+
+processed++;
```

This removes the permanent-loss failure mode; downstream code should already be idempotent given the sequencing/sync use case, but if not, please double-check that repeated publishes are acceptable.

Also applies to: 88-93
📒 Files selected for processing (1)
apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts (1 hunks)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docker/dev-postgres-replica/entrypoint.sh (1)
59-61: ⚠️ Potential issue | 🟡 Minor: Add a defensive check for pg_stat_statements configuration.

pg_stat_statements is only configured during the initial base backup (fresh PGDATA). While the configuration should persist in the named volume across restarts, it's worth adding a defensive warning if the config is missing in the else branch; this helps catch edge cases like accidental file deletion or partial volume resets.

Suggested warning check

```diff
 else
   echo "PGDATA already initialized, starting replica..."
+  if ! grep -q "pg_stat_statements" "${PGDATA}/postgresql.auto.conf"; then
+    echo "WARNING: pg_stat_statements is not configured. Recreate volume or update postgresql.auto.conf."
+  fi
 fi
```
🤖 Fix all issues with AI agents
In `@apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts`:
- Around line 245-274: The loop never breaks on iterationResult.stopReason so
the poller keeps running until maxDurationMs; after each iteration (where
iterationResult is returned from traceSpan) check if iterationResult.stopReason
is non-null and if so break the while loop (or return) to stop further polling;
update the loop around the traceSpan call in route.ts (the while loop that uses
iterationCount/totalRequestsProcessed, traceSpan, getExternalDbSyncFusebox,
claimPendingRequests, processRequests, pollIntervalMs, and maxDurationMs) to
break when iterationResult.stopReason is truthy before awaiting
wait(pollIntervalMs).
In
`@apps/backend/src/app/api/latest/internal/external-db-sync/sequencer/route.ts`:
- Around line 207-239: The loop never checks iterationResult.stopReason so it
keeps polling until maxDurationMs; after each traceSpan call (where
iterationResult is returned from traceSpan in the while loop), inspect
iterationResult.stopReason and break the loop when it's non-null (e.g.,
"disabled" or "idle"); use the existing variables/symbols
getExternalDbSyncFusebox, backfillSequenceIds, iterationResult, iterations,
pollIntervalMs and preserve the await wait(pollIntervalMs) behavior only when
continuing the loop.
🧹 Nitpick comments (1)
apps/backend/src/app/api/latest/internal/external-db-sync/poller/route.ts (1)
56-59: `getLocalApiBaseUrl` is defined but never used. This function is not called anywhere in the file; the request processing uses `getEnvVariable("NEXT_PUBLIC_STACK_API_URL")` instead.

♻️ Remove unused function

```diff
-function getLocalApiBaseUrl(): string {
-  const prefix = getEnvVariable("NEXT_PUBLIC_STACK_PORT_PREFIX", "81");
-  return `http://localhost:${prefix}02`;
-}
-
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @.github/workflows/e2e-api-tests.yaml:
- Around line 162-171: The workflow steps named "Run tests", "Run tests again
(attempt 1)", and "Run tests again (attempt 2)" are calling the test script with
an extra literal "run" (e.g., `pnpm test run ...`) which results in `vitest run
run`; update each step to call `pnpm test` instead and keep the conditional
worker args intact (i.e., replace `pnpm test run ${{ matrix.freestyle-mode ==
'prod' && '--min-workers=1 --max-workers=1' || '' }}` with `pnpm test ${{
matrix.freestyle-mode == 'prod' && '--min-workers=1 --max-workers=1' || '' }}`
for the three steps).
🧹 Nitpick comments (1)
.github/workflows/e2e-source-of-truth-api-tests.yaml (1)
18-26: Annotate placeholder DB credentials to avoid secret-scanner noise. The connection strings in this env block include password-looking tokens, and Checkov already flagged CKV_SECRET_4. If these are intentionally dummy values for local containers, add a Checkov skip comment or clarify with a safer placeholder to prevent false positives and accidental reuse as real secrets.

✅ Minimal suppression option

```diff
 env:
   NODE_ENV: test
   STACK_ENABLE_HARDCODED_PASSKEY_CHALLENGE_FOR_TESTING: yes
   STACK_ACCESS_TOKEN_EXPIRATION_TIME: 30m
+  # checkov:skip=CKV_SECRET_4: local test DB credentials only (non-production)
   STACK_OVERRIDE_SOURCE_OF_TRUTH: '{"type": "postgres", "connectionString": "postgres://postgres:PASSWORD-PLACEHOLDER--uqfEC1hmmv@localhost:8128/source-of-truth-db?schema=sot-schema"}'
   STACK_TEST_SOURCE_OF_TRUTH: true
+  # checkov:skip=CKV_SECRET_4: local test DB credentials only (non-production)
   STACK_DATABASE_CONNECTION_STRING: "postgres://postgres:PASSWORD-PLACEHOLDER--uqfEC1hmmv@localhost:8128/stackframe"
   STACK_FORCE_EXTERNAL_DB_SYNC: "true"
   STACK_EXTERNAL_DB_SYNC_MAX_DURATION_MS: "20000"
   STACK_EXTERNAL_DB_SYNC_DIRECT: "false"
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-utils.ts`:
- Around line 208-319: Replace usages of catch (err: any) in waitForSyncedData,
waitForSyncedDeletion, and countUsersInExternalDb with catch (err: unknown) and
guard access to err.code via a narrow type check (e.g., ensure typeof err ===
'object' && err !== null && 'code' in err and then check (err as { code?:
unknown }).code === '42P01'); alternatively extract that logic into a small
helper like isPgUndefinedTableError(err: unknown): boolean and call it from each
catch branch so you safely detect the '42P01' case and preserve existing
behavior.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apps/backend/prisma/schema.prisma`:
- Around line 1098-1117: The DeletedRow model currently retains full deleted-row
PII in the data field with no retention policy; add an optional DateTime field
(e.g., syncCompletedAt) to the DeletedRow model and set it when all external DB
syncs are confirmed (use the same code paths that reference
sequenceId/shouldUpdateSequenceId to mark completion), then implement a
scheduled cleanup job that either nullifies DeletedRow.data or deletes the
DeletedRow entirely for records where syncCompletedAt is older than your
retention window (e.g., 30 days); also ensure the sync completion code updates
syncCompletedAt and that any partial-index logic (WHERE shouldUpdateSequenceId =
TRUE) continues to work.
In `@apps/e2e/tests/backend/endpoints/api/v1/users.test.ts`:
- Line 12: The test currently uses it.only ("it.only(\"should not be able to
read own user\", ...)" ) which causes all other tests in the file to be skipped;
remove the ".only" so the test reads it("should not be able to read own user",
...) (or use test(...) if your suite uses that alias) to restore normal test
running and ensure the full suite executes.
🧹 Nitpick comments (2)
apps/e2e/tests/backend/endpoints/api/v1/external-db-sync-basics.test.ts (1)
19-28: Consider adding proper typing for the `externalDatabases` parameter. The `externalDatabases` parameter uses the `any` type. Per coding guidelines, `any` should be avoided; consider defining a proper type or importing the expected shape from the sync configuration types.

Proposed type definition

```diff
+type ExternalDbConfig = {
+  [key: string]: {
+    type: 'postgres',
+    connectionString: string,
+  },
+};
+
 const createProjectWithExternalDb = (
-  externalDatabases: any,
+  externalDatabases: ExternalDbConfig,
   projectOptions?: { display_name?: string, description?: string }
 ) => {
```

apps/dashboard/src/app/(main)/(protected)/projects/[projectId]/external-db-sync/page-client.tsx (1)
153-156: Edge case: `formatMillis(0)` returns "—" instead of a formatted date. The falsy check `if (!value)` will treat `0` as a missing value. While timestamp `0` (epoch) is unlikely in practice, this could mask issues during debugging.

Proposed fix

```diff
 function formatMillis(value: number | null) {
-  if (!value) return "—";
+  if (value === null || value === undefined) return "—";
   return new Date(value).toLocaleString();
 }
```

This PR revolves around the following components
What has been done
How to review this PR:
Note
Introduces a cron-driven external DB sync pipeline with global sequencing, internal poller and webhook sync engine, new DB tables/functions, config schema/mappings, and comprehensive e2e tests.
- Global sequence (`global_seq_id`) and `sequenceId`/`shouldUpdateSequenceId` added to `ProjectUser`, `ContactChannel`, `DeletedRow` with partial indexes.
- New `DeletedRow` (capture deletes) and `OutgoingRequest` (queue) tables, with unique constraints and indexes.
- New functions/triggers: `log_deleted_row`, `reset_sequence_id_on_update`, `backfill_null_sequence_ids`, `enqueue_tenant_sync`.
- Internal routes `GET /api/latest/internal/external-db-sync/sequencer`, `GET /poller`, and `POST /sync-engine` (Upstash-verified) for sync orchestration.
- `vercel.json` schedules and a local `scripts/run-cron-jobs.ts`, started in dev via the `dev` script.
- `src/lib/external-db-sync.ts` to read tenant mappings and upsert to external Postgres (schema bootstrap, param checks, sequencing).
- `DEFAULT_DB_SYNC_MAPPINGS` and config schema `dbSync.externalDatabases` in shared config.
- `external-db-test` Postgres service for tests; e2e deps for `pg` types.

Written by Cursor Bugbot for commit 3f2a8ef. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
New Features
Admin UI
Tests
Chores