Conversation

📝 Walkthrough

Replaces repository CODEOWNERS, tightens CI/release branch policies, adds local CI tooling and many docs, introduces a full Bot Runner (browser/session/captions/attribution/buffer/meet/auth/utils) and Chrome extension, and implements a substantial AI Backend: DB schemas, repositories, services (OpenAI/RAG/auth), pipelines, routes, and tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant BR as Bot Runner (Playwright)
    participant CP as CaptionParser (in-page observer)
    participant TB as TranscriptBuffer
    participant API as AI Backend API
    participant DB as Database

    BR->>CP: inject observer & __onCaptionUpdate callback
    CP->>BR: emit RawCaption (speaker, text, timestamp)
    CP->>TB: onCaption(RawCaption)
    TB->>TB: attribute speaker, buffer events, form batch
    TB->>API: POST /api/v1/meetings/:id/transcripts/batch (TranscriptBatch)
    API->>DB: persist batch (transcripts table)
    API->>API: increment counters, ack
    API-->>TB: 200 OK (ack)
    API->>API: optionally POST /api/v1/meetings/:id/generate-mom (trigger pipeline)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In @.github/workflows/ci.yml:
- Around line 20-24: The workflow is vulnerable to shell injection by
interpolating github.head_ref and github.base_ref directly into a shell command;
update the CI step that sets SOURCE and TARGET (the variables currently assigned
from github.head_ref and github.base_ref) to treat these values as untrusted by
exporting them as environment variables or passing them safely to the shell and
printing with a safe formatter (e.g., use environment variable assignment in the
job or printf '%s' with single-parameter expansion) instead of direct unquoted
expansion; locate the block that defines SOURCE and TARGET in
.github/workflows/ci.yml and replace the unsafe assignment/echo with a secure
pattern that does not allow command substitution or globbing for values derived
from github.head_ref/github.base_ref.
In @.github/workflows/release.yml:
- Around line 12-23: The validate-source job currently only prints messages;
update the Validate merge source step to actually check the workflow trigger and
branch sources and fail when they don't match the expected flow: inspect
GITHUB_EVENT_NAME and the event payload (e.g.,
github.event.pull_request.base.ref and github.event.pull_request.head.ref or
GITHUB_REF/GITHUB_BASE_REF/GITHUB_HEAD_REF depending on trigger) and exit
non‑zero if the release on main was not produced by a merge from dev (i.e., base
is main and head is not dev). Modify the Validate merge source step within
the validate-source job to perform this conditional check and fail the job when
validation fails.
- Around line 57-58: The current release steps use blunt fallbacks ("git tag -a
... || echo" and "git push ... || echo") that hide all failures; instead,
explicitly check for the tag and remote presence and only suppress the specific
"already exists/pushed" cases: replace the line creating the tag with a
conditional that uses git rev-parse --verify "refs/tags/v$VERSION" (or git
show-ref) to detect an existing tag and only skip creation when it exists,
otherwise run git tag -a "v$VERSION" -m "Release v$VERSION" and fail on other
errors; similarly for the push step, run git push origin "v$VERSION" and on
error use git ls-remote --exit-code --tags origin "refs/tags/v$VERSION" to
determine if the tag is already on the remote (echo and continue only in that
case), otherwise surface the error and exit non‑zero.
In `@docs/CONTRIBUTING.md`:
- Around line 5-8: Add a language specifier `text` to the fenced code blocks
that contain ASCII diagrams and branch examples so they pass markdownlint MD040;
update the two diagram/code-fence occurrences shown (the single-line diagram
block with "feature/* ──→ dev ──→ main" and the multi-line branch examples block
containing "feature/[package]-[description]" etc.) and likewise apply the same
change to the other affected fenced blocks noted (lines 18–24) by changing each
opening ``` to ```text so the blocks are treated as plaintext.
```yaml
run: |
  SOURCE="${{ github.head_ref }}"
  TARGET="${{ github.base_ref }}"

  echo "Source: $SOURCE → Target: $TARGET"
```
Critical: Script injection vulnerability via untrusted input.
github.head_ref is user-controlled (attackers can name branches with shell metacharacters). Using it directly in a shell script enables command injection. For example, a branch named feature/test$(whoami) would execute arbitrary commands.
Pass untrusted values through environment variables instead:
🔒 Proposed fix to prevent script injection

```diff
 - name: Check branch naming
   if: github.event_name == 'pull_request'
+  env:
+    SOURCE: ${{ github.head_ref }}
+    TARGET: ${{ github.base_ref }}
   run: |
-    SOURCE="${{ github.head_ref }}"
-    TARGET="${{ github.base_ref }}"
-
     echo "Source: $SOURCE → Target: $TARGET"

     # Block direct PRs to main (must come from dev)
     if [[ "$TARGET" == "main" && "$SOURCE" != "dev" ]]; then
```
🧰 Tools
🪛 actionlint (1.7.10)
[error] 20-20: "github.head_ref" is potentially untrusted. avoid using it directly in inline scripts. instead, pass it through an environment variable. see https://docs.github.com/en/actions/reference/security/secure-use#good-practices-for-mitigating-script-injection-attacks for more details
(expression)
```yaml
# Only main can trigger releases, and only from dev merges
validate-source:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 2

    - name: Validate merge source
      run: |
        echo "Release triggered on main branch"
        echo "This should only come from dev branch merges"
```
validate-source job doesn't actually validate anything.
The job comments claim it validates that releases come from dev branch merges, but the implementation only echoes informational messages without performing any actual validation. The merge source is never checked.
If this validation is important, implement an actual check:
🔧 Proposed fix to add actual validation

```diff
 validate-source:
   runs-on: ubuntu-latest
   steps:
     - uses: actions/checkout@v4
       with:
         fetch-depth: 2

     - name: Validate merge source
       run: |
-        echo "Release triggered on main branch"
-        echo "This should only come from dev branch merges"
+        # Get the parent commits of the merge
+        PARENT_BRANCHES=$(git log -1 --pretty=%P HEAD)
+        echo "Merge parents: $PARENT_BRANCHES"
+
+        # For a merge commit, check if dev was involved
+        if git branch -r --contains HEAD~1 | grep -q 'origin/dev'; then
+          echo "✅ Release validated: merge includes dev branch"
+        else
+          echo "⚠️ Warning: Could not confirm this release came from dev branch"
+        fi
```
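To sanity-check a merge-ancestry probe of this kind, here is a throwaway-repo dry run. Assumptions: a reasonably recent git (for `init -b`), and local branches instead of `origin/*`, so `-r` is dropped from `git branch`.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
# Wrapper that supplies a committer identity for the sandbox.
g() { git -c user.email=ci@example.com -c user.name=ci "$@"; }

g commit -q --allow-empty -m "base"
git checkout -q -b dev
g commit -q --allow-empty -m "feature work"
git checkout -q main
g merge -q --no-ff -m "merge dev into main" dev

# HEAD^2 is the merge's second parent, i.e. the tip that was merged in.
# If the dev branch contains that commit, the release came through dev.
if git branch --contains HEAD^2 | grep -q 'dev'; then
  echo "release validated: merge includes dev"
else
  echo "could not confirm dev as merge source"
  exit 1
fi
```

Note this sketch probes the second parent via `HEAD^2` rather than `HEAD~1`: the first parent of a merge into main is the previous main tip, which dev need not contain.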
```bash
git tag -a "v$VERSION" -m "Release v$VERSION" || echo "Tag already exists"
git push origin "v$VERSION" || echo "Tag already pushed"
```
Idempotent tag handling may mask real git errors.
Using || echo "Tag already exists" will suppress all errors, not just the "tag exists" case. Legitimate failures (permissions, network issues, corrupted state) will be silently ignored.
Consider checking the specific exit condition:
🔧 Proposed fix for safer error handling

```diff
-git tag -a "v$VERSION" -m "Release v$VERSION" || echo "Tag already exists"
-git push origin "v$VERSION" || echo "Tag already pushed"
+if git rev-parse "v$VERSION" >/dev/null 2>&1; then
+  echo "Tag v$VERSION already exists"
+else
+  git tag -a "v$VERSION" -m "Release v$VERSION"
+fi
+
+if git ls-remote --tags origin | grep -q "refs/tags/v$VERSION"; then
+  echo "Tag v$VERSION already pushed"
+else
+  git push origin "v$VERSION"
+fi
```
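The guard is easy to exercise in a scratch repository. The sketch below covers the local-tag half only (no remote is involved; `VERSION` is a stand-in value): the first call creates the tag, the second detects it and skips without hiding other failures.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
# Wrapper that supplies a committer/tagger identity for the sandbox.
g() { git -c user.email=ci@example.com -c user.name=ci "$@"; }
g commit -q --allow-empty -m "init"
VERSION=1.0.0

make_tag() {
  if git rev-parse -q --verify "refs/tags/v$VERSION" >/dev/null; then
    echo "Tag v$VERSION already exists"
  else
    g tag -a "v$VERSION" -m "Release v$VERSION"
    echo "Created tag v$VERSION"
  fi
}

make_tag   # first run: creates the annotated tag
make_tag   # second run: detects it and skips, without swallowing other errors
```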
```
feature/* ──→ dev ──→ main
fix/*    ──┘
```
🧹 Nitpick | 🔵 Trivial
Add language specifier to fenced code blocks.
Per markdownlint MD040, fenced code blocks should have a language specified. For ASCII diagrams, use text or plaintext:
📝 Proposed fix

````diff
-```
+```text
 feature/* ──→ dev ──→ main
 fix/* ──┘
 ```
````

````diff
-```
+```text
 feature/[package]-[description]
 fix/[package]-[description]
 hotfix/[description]
 docs/[description]
 chore/[description]
 ```
````
Also applies to: 18-24
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 5-5: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
<!-- This is an auto-generated comment by CodeRabbit -->
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)

.context/API.md (1)

44-52: 🧹 Nitpick | 🔵 Trivial
Add explicit HTTP error guidance for validation.
Consider documenting the expected 400 response semantics when `safeParse` fails so consumers know the contract.

✍️ Suggested doc tweak

````diff
 Always validate incoming data:

 ```typescript
 import { TranscriptBatchSchema } from '@meeting-ai/shared/schemas';

 const result = TranscriptBatchSchema.safeParse(data);
 if (!result.success) {
-  // Handle validation error
+  // Return 400 with validation details
 }
 ```
````

.agent/workflows/dev-setup.md (1)

18-60: ⚠️ Potential issue | 🟡 Minor
Fix ordered list numbering to satisfy markdownlint (MD029).
The current numbering (2., 3., …) violates the configured style.

🔧 Suggested fix

```diff
-2. Build the shared package first:
+1. Build the shared package first:
@@
-3. Build all packages:
+1. Build all packages:
@@
-4. Set up Husky git hooks:
+1. Set up Husky git hooks:
@@
-5. Copy environment files:
+1. Copy environment files:
@@
-6. Start development servers:
+1. Start development servers:
```

docs/API_CONTRACTS.md (1)

16-48: ⚠️ Potential issue | 🟡 Minor
Response example `processedCount` doesn't match the request.
The request contains a single event but the response shows `processedCount: 5`, which is inconsistent.

✅ Suggested correction

```diff
 {
   "acknowledged": true,
-  "processedCount": 5,
+  "processedCount": 1,
   "errors": []
 }
```
🤖 Fix all issues with AI agents
In @.agent/workflows/ci-local.md:
- Line 9: Remove the stray literal line "// turbo-all" from the Markdown file so
it no longer renders as text; locate the exact string "// turbo-all" in
.agent/workflows/ci-local.md and delete that line (or replace it with an
appropriate Markdown comment or remove entirely) to clean up the rendered
document.
- Around line 11-27: The ordered list uses explicit increasing numbers and
should use auto-numbering to satisfy MD029; change the three list entries so
each line starts with "1." instead of "1.", "2.", "3." — specifically update the
list items that introduce the code blocks for the commands "pnpm run pre-push",
"./scripts/ci-local.sh", and "./scripts/ci-local.sh --test" so all three use
"1." as the marker (preserving the code fences and command text).
In @.agent/workflows/run-tests.md:
- Line 11: Remove the stray JavaScript-style comments "// turbo" found in the
markdown (they appear in multiple places) — either delete them or replace each
with a meaningful markdown note or an HTML comment (<!-- turbo -->) so the
intent is clear; update the sections that currently show "-// turbo" to use
proper markdown text or remove the lines entirely to keep the run-tests.md
content clean and well-formed.
In @.context/SPRINT.md:
- Line 12: Update the assignee label in the sprint header "Bot Runner (Friend)"
to a clear identifier: replace "Friend" with the actual team member's full name
or a role-based identifier (e.g., "Bot Runner — Alice Smith" or "Bot Runner —
Backend Team") so ownership is explicit; edit the header string "Bot Runner
(Friend)" in the .context/SPRINT.md file accordingly.
In `@docs/API_CONTRACTS.md`:
- Around line 51-67: The API contract uses inconsistent path parameter names;
update the three endpoints shown (POST /api/v1/meetings/:meetingId/end and POST
/api/v1/meetings/:meetingId/participant, and any /api/v1/meetings/:meetingId
occurrences) to use the standardized :id parameter (e.g., POST
/api/v1/meetings/:id/end and POST /api/v1/meetings/:id/participant) so they
match .context/API.md and packages/ai-backend/README.md; search for and replace
:meetingId with :id in this document to keep naming consistent across docs.
In `@packages/shared/package.json`:
- Around line 8-29: The exports map is inconsistent: the root export "."
includes a "require" condition but subpath exports ("./types", "./schemas",
"./contracts", "./constants") only have "types" and "import", so either remove
the "require" entry from the root to make the package ESM-only, or add matching
"require" entries to each subpath export so CommonJS consumers can require
subpaths; update the package.json exports block accordingly by editing the
"exports" object to either drop the "require" key under "." or add "require":
"./dist/... .js" entries to each of "./types", "./schemas", "./contracts", and
"./constants" to mirror the root entry.
- Around line 39-43: package.json currently pins "zod" (^3.22.0) and "vitest"
(^1.2.0); update these dependency entries to the newer major releases (e.g., zod
v4.x and vitest v4.x) by changing the version strings for "zod" and "vitest" in
the dependencies/devDependencies section, then run a full test pass and address
any breaking changes surfaced (refer to Zod and Vitest migration guides for API
changes impacting code that imports/uses zod or tests that use vitest APIs).
In `@scripts/ci-local.sh`:
- Around line 20-66: The CI script currently suppresses stderr by appending
"2>/dev/null" to all check invocations (the conditional commands invoking "pnpm
run typecheck", "pnpm run lint", "pnpm run format:check", "pnpm run build" and
the optional "pnpm run test"), which hides useful error output; remove the
"2>/dev/null" redirections from those conditionals so failures print their
stderr to the console (leave the rest of the logic and the FAILED variable
handling unchanged) and run the same checks using the existing command strings
to preserve behavior apart from allowing error diagnostics to surface.
Run this before pushing to validate locally:

// turbo-all
Remove stray // turbo-all line.
It renders as literal text in Markdown and looks like a leftover note.
🧹 Suggested fix
```diff
-// turbo-all
```
1. Quick check (lint + format + build):

   ```bash
   pnpm run pre-push
   ```

2. Full CI with colorized output:

   ```bash
   ./scripts/ci-local.sh
   ```

3. Full CI including tests:

   ```bash
   ./scripts/ci-local.sh --test
   ```
Fix ordered list numbering to satisfy markdownlint (MD029).
Use 1. for all items to allow auto-numbering.
🔧 Suggested fix

```diff
-2. Full CI with colorized output:
+1. Full CI with colorized output:
@@
-3. Full CI including tests:
+1. Full CI including tests:
```
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 17-17: Ordered list item prefix
Expected: 1; Actual: 2; Style: 1/1/1
(MD029, ol-prefix)
[warning] 23-23: Ordered list item prefix
Expected: 1; Actual: 3; Style: 1/1/1
(MD029, ol-prefix)
```diff
@@ -9,26 +9,33 @@ Execute tests for all packages or specific packages.
 ## All Tests

 // turbo
```
🧹 Nitpick | 🔵 Trivial
Remove or clarify stray // turbo comments.
These JavaScript-style comments (// turbo) appear throughout the markdown file but serve no apparent purpose. They may be leftover placeholders or incomplete annotations. Consider removing them or replacing with meaningful markdown comments/context if they were intended to indicate something about Turborepo.
📝 Proposed fix to remove stray comments

````diff
 ## All Tests

-// turbo
 ```bash
 pnpm test
 ```

 ## Package-Specific Tests

 ### Shared Package

-// turbo
 ```bash
 pnpm --filter @meeting-ai/shared test
 ```

 ### Bot Runner

-// turbo
 ```bash
 pnpm --filter @meeting-ai/bot-runner test
 ```

 ### AI Backend

-// turbo
 ```bash
 pnpm --filter @meeting-ai/ai-backend test
 ```
````

Also applies to: 21-21, 29-29, 37-37
```json
"exports": {
  ".": {
    "types": "./dist/index.d.ts",
    "import": "./dist/index.js",
    "require": "./dist/index.js"
  },
  "./types": {
    "types": "./dist/types/index.d.ts",
    "import": "./dist/types/index.js"
  },
  "./schemas": {
    "types": "./dist/schemas/index.d.ts",
    "import": "./dist/schemas/index.js"
  },
  "./contracts": {
    "types": "./dist/contracts/index.d.ts",
    "import": "./dist/contracts/index.js"
  },
  "./constants": {
    "types": "./dist/constants/index.d.ts",
    "import": "./dist/constants/index.js"
  }
}
```
🧹 Nitpick | 🔵 Trivial
Inconsistent CommonJS support in exports map.
The root export (.) includes a "require" condition for CommonJS consumers, but the subpath exports (./types, ./schemas, ./contracts, ./constants) only define "types" and "import". This inconsistency means CommonJS consumers can require('@meeting-ai/shared') but cannot require('@meeting-ai/shared/types').
If the package is ESM-only (which is fine for a private internal package), consider removing the "require" from the root export for consistency. If CommonJS support is needed, add "require" conditions to all subpath exports.
♻️ Option A: Make ESM-only (recommended for modern monorepos)

```diff
 {
   "name": "@meeting-ai/shared",
   "version": "0.0.1",
   "private": true,
   "description": "Shared types, schemas, and contracts for Meeting AI system",
+  "type": "module",
   "main": "./dist/index.js",
   "types": "./dist/index.d.ts",
   "exports": {
     ".": {
       "types": "./dist/index.d.ts",
-      "import": "./dist/index.js",
-      "require": "./dist/index.js"
+      "import": "./dist/index.js"
     },
```

♻️ Option B: Add CommonJS support to all subpaths

```diff
   "./types": {
     "types": "./dist/types/index.d.ts",
-    "import": "./dist/types/index.js"
+    "import": "./dist/types/index.js",
+    "require": "./dist/types/index.js"
   },
   "./schemas": {
     "types": "./dist/schemas/index.d.ts",
-    "import": "./dist/schemas/index.js"
+    "import": "./dist/schemas/index.js",
+    "require": "./dist/schemas/index.js"
   },
   "./contracts": {
     "types": "./dist/contracts/index.d.ts",
-    "import": "./dist/contracts/index.js"
+    "import": "./dist/contracts/index.js",
+    "require": "./dist/contracts/index.js"
   },
   "./constants": {
     "types": "./dist/constants/index.d.ts",
-    "import": "./dist/constants/index.js"
+    "import": "./dist/constants/index.js",
+    "require": "./dist/constants/index.js"
   }
```
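The asymmetry can be observed directly with Node's resolver. A sketch (assumes `node` is on PATH; `demo-shared` is a made-up package mimicking the exports shape above): the root resolves under the `require` condition, while the `./types` subpath, which only declares `import`, rejects CommonJS consumers.

```shell
set -eu
dir=$(mktemp -d)
cd "$dir"
mkdir -p node_modules/demo-shared/dist/types

# Root export carries "require"; the ./types subpath only carries "import".
cat > node_modules/demo-shared/package.json <<'EOF'
{
  "name": "demo-shared",
  "version": "0.0.1",
  "exports": {
    ".": { "import": "./dist/index.js", "require": "./dist/index.js" },
    "./types": { "import": "./dist/types/index.js" }
  }
}
EOF
echo "module.exports = {};" > node_modules/demo-shared/dist/index.js
echo "module.exports = {};" > node_modules/demo-shared/dist/types/index.js

node -e "require('demo-shared'); console.log('root require: ok')"
node -e "try { require('demo-shared/types') } catch (e) { console.log('subpath require:', e.code) }"
```

The second command reports `ERR_PACKAGE_PATH_NOT_EXPORTED` because no `require` (or `default`) condition matches the subpath.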
```json
"dependencies": {
  "zod": "^3.22.0"
},
"devDependencies": {
  "vitest": "^1.2.0"
}
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check latest versions of zod and vitest on npm
echo "=== Checking zod ==="
npm view zod version

echo -e "\n=== Checking vitest ==="
npm view vitest version

echo -e "\n=== Checking zod security advisories ==="
npm audit --json 2>/dev/null | jq '.vulnerabilities.zod // "No vulnerabilities found"' 2>/dev/null || echo "npm audit check completed"

echo -e "\n=== Checking vitest security advisories ==="
npm audit --json 2>/dev/null | jq '.vulnerabilities.vitest // "No vulnerabilities found"' 2>/dev/null || echo "npm audit check completed"
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 274
Consider updating to newer major versions of zod and vitest.
The dependencies are free from known security vulnerabilities, but are significantly outdated: zod is at 3.22.0 with 4.3.6 available, and vitest is at 1.2.0 with 4.0.18 available. While the caret constraints allow for patch updates within the current major versions, updating to newer major versions would align the project with current standards and potentially include performance improvements and new features. However, major version upgrades should be tested thoroughly for breaking changes.
```bash
# 1. TypeScript Check
echo -e "\n${YELLOW}[1/4] TypeScript Check...${NC}"
if pnpm run typecheck 2>/dev/null; then
  echo -e "${GREEN}✓ TypeScript check passed${NC}"
else
  echo -e "${RED}✗ TypeScript check failed${NC}"
  FAILED=1
fi

# 2. Lint Check
echo -e "\n${YELLOW}[2/4] ESLint Check...${NC}"
if pnpm run lint 2>/dev/null; then
  echo -e "${GREEN}✓ Lint check passed${NC}"
else
  echo -e "${RED}✗ Lint check failed${NC}"
  echo -e "${YELLOW}  Run 'pnpm run lint:fix' to auto-fix${NC}"
  FAILED=1
fi

# 3. Format Check
echo -e "\n${YELLOW}[3/4] Prettier Format Check...${NC}"
if pnpm run format:check 2>/dev/null; then
  echo -e "${GREEN}✓ Format check passed${NC}"
else
  echo -e "${RED}✗ Format check failed${NC}"
  echo -e "${YELLOW}  Run 'pnpm run format' to auto-fix${NC}"
  FAILED=1
fi

# 4. Build Check
echo -e "\n${YELLOW}[4/4] Build Check...${NC}"
if pnpm run build 2>/dev/null; then
  echo -e "${GREEN}✓ Build check passed${NC}"
else
  echo -e "${RED}✗ Build check failed${NC}"
  FAILED=1
fi

# Optional: Run tests if --test flag is passed
if [[ "$1" == "--test" ]]; then
  echo -e "\n${YELLOW}[5/5] Running Tests...${NC}"
  if pnpm run test 2>/dev/null; then
    echo -e "${GREEN}✓ Tests passed${NC}"
  else
    echo -e "${RED}✗ Tests failed${NC}"
    FAILED=1
  fi
```
🧹 Nitpick | 🔵 Trivial
Don’t suppress stderr for CI diagnostics.
Redirecting stderr to /dev/null hides actionable errors when a check fails.
🧰 Suggested change

```diff
-if pnpm run typecheck 2>/dev/null; then
+if pnpm run typecheck; then
@@
-if pnpm run lint 2>/dev/null; then
+if pnpm run lint; then
@@
-if pnpm run format:check 2>/dev/null; then
+if pnpm run format:check; then
@@
-if pnpm run build 2>/dev/null; then
+if pnpm run build; then
@@
-  if pnpm run test 2>/dev/null; then
+  if pnpm run test; then
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
# 1. TypeScript Check
echo -e "\n${YELLOW}[1/4] TypeScript Check...${NC}"
if pnpm run typecheck; then
  echo -e "${GREEN}✓ TypeScript check passed${NC}"
else
  echo -e "${RED}✗ TypeScript check failed${NC}"
  FAILED=1
fi

# 2. Lint Check
echo -e "\n${YELLOW}[2/4] ESLint Check...${NC}"
if pnpm run lint; then
  echo -e "${GREEN}✓ Lint check passed${NC}"
else
  echo -e "${RED}✗ Lint check failed${NC}"
  echo -e "${YELLOW}  Run 'pnpm run lint:fix' to auto-fix${NC}"
  FAILED=1
fi

# 3. Format Check
echo -e "\n${YELLOW}[3/4] Prettier Format Check...${NC}"
if pnpm run format:check; then
  echo -e "${GREEN}✓ Format check passed${NC}"
else
  echo -e "${RED}✗ Format check failed${NC}"
  echo -e "${YELLOW}  Run 'pnpm run format' to auto-fix${NC}"
  FAILED=1
fi

# 4. Build Check
echo -e "\n${YELLOW}[4/4] Build Check...${NC}"
if pnpm run build; then
  echo -e "${GREEN}✓ Build check passed${NC}"
else
  echo -e "${RED}✗ Build check failed${NC}"
  FAILED=1
fi

# Optional: Run tests if --test flag is passed
if [[ "$1" == "--test" ]]; then
  echo -e "\n${YELLOW}[5/5] Running Tests...${NC}"
  if pnpm run test; then
    echo -e "${GREEN}✓ Tests passed${NC}"
  else
    echo -e "${RED}✗ Tests failed${NC}"
    FAILED=1
  fi
```
🤖 Prompt for AI Agents
In `@scripts/ci-local.sh` around lines 20 - 66, The CI script currently suppresses
stderr by appending "2>/dev/null" to all check invocations (the conditional
commands invoking "pnpm run typecheck", "pnpm run lint", "pnpm run
format:check", "pnpm run build" and the optional "pnpm run test"), which hides
useful error output; remove the "2>/dev/null" redirections from those
conditionals so failures print their stderr to the console (leave the rest of
the logic and the FAILED variable handling unchanged) and run the same checks
using the existing command strings to preserve behavior apart from allowing
error diagnostics to surface.
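The aggregation pattern the script relies on (run every check, record failures, derive a single exit code at the end) can be modeled as a pure function. The names below are illustrative, not part of `ci-local.sh`:

```typescript
// Mirrors ci-local.sh's FAILED-flag logic: all checks are evaluated
// (no early exit), failures are collected by name, and one non-zero
// exit code is derived at the end.
type CheckResult = { name: string; ok: boolean };

function summarize(results: CheckResult[]): { failed: string[]; exitCode: number } {
  const failed = results.filter((r) => !r.ok).map((r) => r.name);
  return { failed, exitCode: failed.length > 0 ? 1 : 0 };
}

const summary = summarize([
  { name: "typecheck", ok: true },
  { name: "lint", ok: false },
  { name: "format:check", ok: true },
  { name: "build", ok: true },
]);
console.log(summary.failed);   // ["lint"]
console.log(summary.exitCode); // 1
```

Dropping the `2>/dev/null` redirections changes nothing in this control flow; it only lets each failing check print its own diagnostics before the flag is set.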
… and session management.
…ntrol, and participant tracking.
- OpenAI service with Zod structured outputs - MoM generation pipeline (transcript → summary → highlights → items) - Action items extraction (14 meeting item types) - Embeddings schema for semantic search (pgvector-ready) - RAG service with cosine similarity search - AI routes: /generate-mom, /extract-items, /search, /context
Tests (140 total): - OpenAI service schema validation and token utilities - MoM pipeline with mocked dependencies - Action items pipeline extraction and stats - AI routes integration tests Documentation: - AI_PIPELINE.md - Architecture, components, API reference - ENVIRONMENT.md - Configuration guide - Updated .env.example with clear sections
- API.md: Added AI pipeline endpoints (MoM, action items, search) - SPRINT.md: Updated with AI backend completion status - BOT_RUNNER_INTEGRATION.md: Comprehensive integration guide - README.md: Added integration guide to index
… new bot-runner package.
…ch configuration for improved stability.
…eting join reliability with improved retry logic, window management, and error handling. its working on mac currently
…typing, clicking, and delays in the meeting joiner.
…alth, and expanded architecture for the bot runner.
…in the web app, supported by a new backend auth service and schema updates.
…nauthenticated meet restrictions
…to extract live captions and speaker information.
…havior - Added ghost-cursor-playwright for Bezier curve mouse movements - Human-like typing with random delays and occasional typos - Random delays between actions to simulate reading - Browser console log capture for debugging - Increased retry attempts and improved denial detection
…k/AI-Product-Manager into feat/bot_runner/exp2
Feat/bot runner/exp2
- Fix exactOptionalPropertyTypes in auth.service.ts - Add omitPassword helper to avoid unused destructure - Fix bcrypt/jwt imports (use * as namespace) - Remove unused vars in web package - All 140 tests pass, lint clean, build succeeds
feat(ai-backend): Complete AI backend core — database, pipelines, API, auth & web app
…nd integrate backend API routes
Actionable comments posted: 179
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docker-compose.yml (1)

14-16: 🧹 Nitpick | 🔵 Trivial

Consider using a healthcheck condition in `depends_on`.

The postgres healthcheck is configured, but `ai-backend` uses basic `depends_on`, which only waits for container start, not readiness. This can cause startup race conditions.

Proposed fix

```diff
   depends_on:
-    - postgres
-    - redis
+    postgres:
+      condition: service_healthy
+    redis:
+      condition: service_started
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 14 - 16, update the ai-backend service's depends_on so it waits for postgres readiness (not just start): replace the simple list under depends_on with a mapping for postgres that sets condition: service_healthy (e.g., depends_on: { postgres: { condition: service_healthy }, redis: { condition: service_started } }); if you need redis readiness, add a healthcheck for redis and change its condition to service_healthy, or use an init/wait-for script for services without healthchecks.

docs/ARCHITECTURE.md (1)
5-5: ⚠️ Potential issue | 🟡 Minor

Fix overview count to match the updated architecture diagram.
Line 5 still says “two main services,” but the diagram now includes the Web Dashboard as an additional system component.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ARCHITECTURE.md` at line 5, Update the sentence that currently reads "The Meeting AI system consists of two main services" to reflect the added Web Dashboard component by changing the count to three and/or explicitly listing the components (e.g., "three main components: [Service A], [Service B], and the Web Dashboard") so the text matches the updated architecture diagram; locate the exact phrase "The Meeting AI system consists of two main services" in ARCHITECTURE.md and adjust it to mention the Web Dashboard.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4708683d-51a3-4bcd-b765-a9c30447ae2c
⛔ Files ignored due to path filters (11)
- `package-lock.json` is excluded by `!**/package-lock.json`
- `packages/chrome-extension/icons/icon-128.png` is excluded by `!**/*.png`
- `packages/chrome-extension/icons/icon-16.png` is excluded by `!**/*.png`
- `packages/chrome-extension/icons/icon-48.png` is excluded by `!**/*.png`
- `packages/web/app/favicon.ico` is excluded by `!**/*.ico`
- `packages/web/public/file.svg` is excluded by `!**/*.svg`
- `packages/web/public/globe.svg` is excluded by `!**/*.svg`
- `packages/web/public/next.svg` is excluded by `!**/*.svg`
- `packages/web/public/vercel.svg` is excluded by `!**/*.svg`
- `packages/web/public/window.svg` is excluded by `!**/*.svg`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (138)
`.context/API.md`, `.context/BOT_RUNNER_INTEGRATION.md`, `.context/README.md`, `.context/SPRINT.md`, `.git_log.txt`, `.git_status.txt`, `.github/workflows/ci.yml`, `.gitignore`, `.husky/pre-push`, `.prettierignore`, `PROGRESS_SLIDE.md`, `PROGRESS_SUMMARY.md`, `README.md`, `docker-compose.yml`, `docs/AI_PIPELINE.md`, `docs/API_ENDPOINTS.md`, `docs/ARCHITECTURE.md`, `docs/ENVIRONMENT.md`, `docs/README.md`, `docs/TESTING.md`, `docs/TOOLING.md`, `docs/database/MIGRATIONS.md`, `docs/database/OVERVIEW.md`, `docs/database/REPOSITORIES.md`, `docs/database/RUNBOOK.md`, `docs/database/SCHEMA.md`, `docs/database/TROUBLESHOOTING.md`, `log-utf8.txt`, `package.json`, `packages/ai-backend/.env.example`, `packages/ai-backend/drizzle.config.ts`, `packages/ai-backend/package.json`, `packages/ai-backend/scripts/test-openai.ts`, `packages/ai-backend/src/db/index.ts`, `packages/ai-backend/src/db/repositories/index.ts`, `packages/ai-backend/src/db/repositories/meeting.repository.ts`, `packages/ai-backend/src/db/repositories/meetingItems.repository.ts`, `packages/ai-backend/src/db/repositories/mom.repository.ts`, `packages/ai-backend/src/db/repositories/transcript.repository.ts`, `packages/ai-backend/src/db/schema/embeddings.ts`, `packages/ai-backend/src/db/schema/enums.test.ts`, `packages/ai-backend/src/db/schema/enums.ts`, `packages/ai-backend/src/db/schema/index.ts`, `packages/ai-backend/src/db/schema/meetingItems.test.ts`, `packages/ai-backend/src/db/schema/meetingItems.ts`, `packages/ai-backend/src/db/schema/meetings.test.ts`, `packages/ai-backend/src/db/schema/meetings.ts`, `packages/ai-backend/src/db/schema/mom.test.ts`, `packages/ai-backend/src/db/schema/mom.ts`, `packages/ai-backend/src/db/schema/organizations.test.ts`, `packages/ai-backend/src/db/schema/organizations.ts`, `packages/ai-backend/src/db/schema/transcripts.test.ts`, `packages/ai-backend/src/db/schema/transcripts.ts`, `packages/ai-backend/src/db/schema/users.ts`, `packages/ai-backend/src/index.ts`, `packages/ai-backend/src/middleware/auth.middleware.ts`, `packages/ai-backend/src/pipelines/actionItems.pipeline.test.ts`, `packages/ai-backend/src/pipelines/actionItems.pipeline.ts`, `packages/ai-backend/src/pipelines/index.ts`, `packages/ai-backend/src/pipelines/mom.pipeline.test.ts`, `packages/ai-backend/src/pipelines/mom.pipeline.ts`, `packages/ai-backend/src/routes/ai.test.ts`, `packages/ai-backend/src/routes/ai.ts`, `packages/ai-backend/src/routes/auth.ts`, `packages/ai-backend/src/routes/bot.ts`, `packages/ai-backend/src/routes/extension.ts`, `packages/ai-backend/src/routes/index.ts`, `packages/ai-backend/src/routes/meetingItems.test.ts`, `packages/ai-backend/src/routes/meetingItems.ts`, `packages/ai-backend/src/routes/meetings.test.ts`, `packages/ai-backend/src/routes/meetings.ts`, `packages/ai-backend/src/routes/mom.test.ts`, `packages/ai-backend/src/routes/mom.ts`, `packages/ai-backend/src/routes/projects.ts`, `packages/ai-backend/src/routes/transcripts.test.ts`, `packages/ai-backend/src/routes/transcripts.ts`, `packages/ai-backend/src/routes/upload.test.ts`, `packages/ai-backend/src/routes/upload.ts`, `packages/ai-backend/src/services/auth.service.ts`, `packages/ai-backend/src/services/index.ts`, `packages/ai-backend/src/services/openai.service.test.ts`, `packages/ai-backend/src/services/openai.service.ts`, `packages/ai-backend/src/services/rag.service.ts`, `packages/ai-backend/tsconfig.json`, `packages/ai-backend/vitest.config.ts`, `packages/bot-runner/README.md`, `packages/bot-runner/package.json`, `packages/bot-runner/src/api/client.ts`, `packages/bot-runner/src/api/index.ts`, `packages/bot-runner/src/auth/index.ts`, `packages/bot-runner/src/auth/manager.ts`, `packages/bot-runner/src/browser/index.ts`, `packages/bot-runner/src/browser/launcher.ts`, `packages/bot-runner/src/browser/session.ts`, `packages/bot-runner/src/captions/attribution.ts`, `packages/bot-runner/src/captions/buffer.ts`, `packages/bot-runner/src/captions/index.ts`, `packages/bot-runner/src/captions/parser.ts`, `packages/bot-runner/src/index.ts`, `packages/bot-runner/src/meet/captions.ts`, `packages/bot-runner/src/meet/index.ts`, `packages/bot-runner/src/meet/joiner.ts`, `packages/bot-runner/src/meet/participants.ts`, `packages/bot-runner/src/types/ghost-cursor-playwright.d.ts`, `packages/bot-runner/src/utils/human.ts`, `packages/bot-runner/transcripts/transcript-meet-1773078118868.txt`, `packages/bot-runner/transcripts/transcript-meet-1773079153795.txt`, `packages/bot-runner/tsconfig.json`, `packages/chrome-extension/README.md`, `packages/chrome-extension/manifest.json`, `packages/chrome-extension/package.json`, `packages/chrome-extension/src/background/service-worker.js`, `packages/chrome-extension/src/content/caption-observer.js`, `packages/chrome-extension/src/popup/popup.css`, `packages/chrome-extension/src/popup/popup.html`, `packages/chrome-extension/src/popup/popup.js`, `packages/shared/src/constants/index.ts`, `packages/shared/src/types/meeting.ts`, `packages/web/.gitignore`, `packages/web/README.md`, `packages/web/app/(auth)/signin/page.tsx`, `packages/web/app/(auth)/signup/page.tsx`, `packages/web/app/(dashboard)/layout.tsx`, `packages/web/app/(dashboard)/projects/[id]/page.tsx`, `packages/web/app/(dashboard)/projects/page.tsx`, `packages/web/app/globals.css`, `packages/web/app/layout.tsx`, `packages/web/app/page.tsx`, `packages/web/components/auth/AuthForm.tsx`, `packages/web/components/chat/ChatInterface.tsx`, `packages/web/components/projects/TaskList.tsx`, `packages/web/eslint.config.mjs`, `packages/web/lib/api.ts`, `packages/web/next.config.ts`, `packages/web/package.json`, `packages/web/postcss.config.mjs`, `packages/web/tsconfig.json`, `status-utf8.txt`
```gitignore
# Web package (not ready yet)
packages/web/
```
Do not ignore an entire package directory in version control.
Ignoring packages/web/ defeats the purpose of a monorepo—any code there becomes invisible to Git, CI, and collaborators. If the package isn't ready:
- Keep it untracked by not committing it yet, or
- Commit it as work-in-progress with a clear README, or
- Use a feature branch.
Hiding a whole package via .gitignore risks accidental data loss and confusion.
Proposed fix

```diff
-
-# Web package (not ready yet)
-packages/web/
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore around lines 44 - 46, Remove the blanket ignore for the monorepo
package entry "packages/web/" from .gitignore so the web package is tracked by
Git; instead, if you need to ignore only build or local artifacts, replace that
line with targeted patterns (e.g., build/dist, node_modules, .env.local) or keep
the package uncommitted until ready, but do not hide the entire package
directory. Ensure the "packages/web/" entry is deleted and add any specific
artifact patterns you actually want excluded.
```text
# Generated
*.d.ts
```
🧹 Nitpick | 🔵 Trivial
Consider narrowing the *.d.ts ignore pattern.
Ignoring all *.d.ts files is overly broad. Generated declarations in dist/ are already covered by the dist/ rule. Manually authored type declarations (ambient modules, type augmentations) should still be formatted.
Consider removing this rule or narrowing it to specific paths if there are generated .d.ts files outside dist/.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.prettierignore around lines 13 - 14, The *.d.ts ignore pattern in
.prettierignore is too broad and skips manually authored declaration files;
replace or narrow it to only the generated declaration locations (e.g., use a
pattern like dist/**/*.d.ts or build/**/*.d.ts) or remove the rule entirely so
manual ambient modules/type augmentations are still formatted; update the *.d.ts
entry in .prettierignore accordingly.
| Date       | Change                  | Author        |
| ---------- | ----------------------- | ------------- |
| 2024-02-06 | Initial runbook created | @KumarSashank |
Update runbook change-log date to current revision date.
The initial entry date appears stale for this PR context and may reduce trust in operational freshness.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/database/RUNBOOK.md` around lines 128 - 130, Update the runbook
changelog row currently showing "2024-02-06 | Initial runbook created |
`@KumarSashank`" to the current revision date for this PR; locate that table entry
in docs/database/RUNBOOK.md and replace the date string "2024-02-06" with the
appropriate current date so the top changelog row reflects the latest revision.
```text
@@ -0,0 +1,263 @@
# Database Schema Reference

> Complete reference for all database tables and relationships.
```
This reference is incomplete.
The diagram names team_members, recurring_series, progress_updates, and tags, but the table reference never documents them. Either add those sections or soften the “Complete reference” wording.
Also applies to: 30-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/database/SCHEMA.md` at line 3, The document title claims a "Complete
reference" but the tables team_members, recurring_series, progress_updates, and
tags referenced in the diagram are missing; either add documented sections for
each of these tables (schema, columns, PK/FK relationships, example rows) and
link them from the diagram, or change the heading and any copy that says
"Complete reference" to a softer phrase like "Reference" or "Partial reference"
and remove the implication of completeness; ensure you update all occurrences
that assert completeness (including the spots that reference the diagram) so the
doc and diagram stay consistent.
```text
recurring_series ||--o{ meetings : contains
meetings ||--o{ participants : attended_by
meetings ||--o{ transcript_events : captured
meetings ||--|| moms : generates
```
Document meetings → moms as optional.
moms.meeting_id being unique means one MoM belongs to one meeting; it does not guarantee every meeting already has one. The ||--|| edge overstates the actual constraint and will mislead readers writing queries or migrations.
Also applies to: 143-143
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/database/SCHEMA.md` at line 18, Update the ER diagram and schema text to
mark the relationship from meetings to moms as optional: change the relationship
line using the existing symbols (e.g., replace the current "meetings ||--|| moms
: generates" with the notation for an optional one-to-one such as "meetings
||--o| moms : generates"), and add/document that moms.meeting_id is unique but
nullable (or nullable with a unique constraint) so readers know a meeting may
not have a MoM; ensure references to moms.meeting_id and the relationship in the
surrounding schema text are updated to match this optional constraint.
```tsx
const handleLogout = async () => {
  await authApi.logout();
  router.push('/signin');
};
```
Handle potential logout failure.
If authApi.logout() throws (e.g., network error), the user is redirected to /signin without server-side session being invalidated. This could leave a stale session. Consider adding error handling or showing feedback on failure.
🛡️ Proposed fix

```diff
 const handleLogout = async () => {
-  await authApi.logout();
-  router.push('/signin');
+  try {
+    await authApi.logout();
+  } catch (error) {
+    console.error('Logout failed:', error);
+  } finally {
+    router.push('/signin');
+  }
 };
```

📝 Committable suggestion
```tsx
const handleLogout = async () => {
  try {
    await authApi.logout();
  } catch (error) {
    console.error('Logout failed:', error);
  } finally {
    router.push('/signin');
  }
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/(dashboard)/layout.tsx` around lines 22 - 25, the
handleLogout function currently calls authApi.logout() and immediately redirects
via router.push('/signin'), which will redirect the user even if logout fails;
wrap the logout call in a try/catch inside handleLogout, only call
router.push('/signin') on successful await of authApi.logout(), and on error
surface feedback (e.g., set an error state or call your toast/error handler) and
log the error (include the caught error) so the session isn't silently left
stale; reference handleLogout, authApi.logout, and router.push when applying
this change.
```css
body {
  background: var(--background);
  color: var(--foreground);
  font-family: Arial, Helvetica, sans-serif;
}
```
🧹 Nitpick | 🔵 Trivial
Font-family doesn't use the theme variable.
The body's font-family uses a hardcoded fallback stack (Arial, Helvetica, sans-serif) instead of the --font-sans variable defined in @theme inline. This creates an inconsistency between the theme configuration and actual usage.
♻️ Suggested fix to use theme variable

```diff
 body {
   background: var(--background);
   color: var(--foreground);
-  font-family: Arial, Helvetica, sans-serif;
+  font-family: var(--font-sans), Arial, Helvetica, sans-serif;
 }
```

📝 Committable suggestion
```css
body {
  background: var(--background);
  color: var(--foreground);
  font-family: var(--font-sans), Arial, Helvetica, sans-serif;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/globals.css` around lines 22 - 26, The body rule uses a
hardcoded font stack instead of the theme variable; update the body selector's
font-family to reference the CSS custom property --font-sans (e.g., font-family:
var(--font-sans, Arial, Helvetica, sans-serif)) so the theme-provided font is
used while keeping a safe fallback; change the font-family in the body block and
remove the hardcoded stack.
```text
@@ -0,0 +1,36 @@
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
```
🧹 Nitpick | 🔵 Trivial
Add a top-level heading to comply with markdown standards.
Per static analysis (MD041), the first line should be a top-level heading. This improves accessibility and document structure.
📝 Proposed fix

```diff
+# `@meeting-ai/web`
+
 This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
```

📝 Committable suggestion
```markdown
# `@meeting-ai/web`

This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
```
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 1-1: First line in a file should be a top-level heading
(MD041, first-line-heading, first-line-h1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/README.md` at line 1, Add a top-level Markdown heading at the
very start of the README.md (e.g., "# packages/web" or another short project
title) so the first line is a level‑1 heading; insert this heading above the
existing sentence "This is a [Next.js]..." and keep the current content
unchanged after the new heading.
```jsonc
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "react-jsx",
```
🧩 Analysis chain
🌐 Web query:
Next.js tsconfig.json recommended jsx setting preserve vs react-jsx
💡 Result:
Next.js recommends "jsx": "preserve" in tsconfig.json for TypeScript projects. This is the default setting automatically configured when adding TypeScript to a Next.js project (via create-next-app or running next dev after renaming files to .ts/.tsx). "preserve" instructs TypeScript to leave JSX syntax unchanged in output files (emitting .jsx files), while Next.js handles the JSX transformation using its own optimized compiler (SWC by default since v12). This avoids double transformation and enables Next.js-specific optimizations. In contrast, "react-jsx" (new JSX Transform from React 17+) emits calls to jsx/jsxs from react/jsx-runtime, requiring no React import for JSX but performing the transform during TypeScript compilation. Next.js overrides this to "preserve" because it manages the transform itself for better performance and features. Changing to "react-jsx" resolves some VS Code IntelliSense issues (e.g., unnecessary React imports) but gets reset by Next.js on next dev/build, as shown in terminal messages like "jsx was set to preserve (next.js implements its own optimized jsx transform)". Workarounds include TypeScript version adjustments or path mappings, but "preserve" is the official recommendation. This holds as of Next.js 15 (2026), with no changes noted in recent docs or issues.
Citations:
- 1: https://stackoverflow.com/questions/79046311/next-js-automatically-resets-tsconfig-json-jsx-option-causing-react-import-wa
- 2: jsx was set to preserve (next.js implements its own optimized jsx transform) vercel/next.js#19155
- 3: https://nextjs.org/docs/pages/api-reference/config/typescript
- 4: Next.js sets invalid and irrelevant tsconfig options. vercel/next.js#45617
- 5: https://www.typescriptlang.org/docs/handbook/jsx.html
- 6: https://nextjs.org/docs/14/app/building-your-application/configuring/typescript
- 7: https://nextjs.org/docs/app/getting-started/installation
- 8: Update tsconfig of example to be consistent with default output of next.js vercel/next.js#25581
- 9: https://nextjs.org/docs/basic-features/typescript
Change jsx to "preserve".
Next.js requires "jsx": "preserve" in tsconfig.json—the framework handles JSX transformation internally. Using "react-jsx" will be overridden by Next.js during next dev/next build, resulting in a configuration mismatch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/tsconfig.json` at line 14, Update the tsconfig JSON to set the
TypeScript JSX emit to "preserve" instead of "react-jsx": locate the "jsx"
compilerOption (currently "jsx": "react-jsx") in tsconfig.json and change its
value to "preserve" so Next.js can handle JSX transformation internally and
avoid config mismatches during next dev/build.
```
Backend (AI)          ████████████████████░ 90% ✅
Frontend (Web)        ███████████████░░░░░ 70% ✅
Bot Runner            █░░░░░░░░░░░░░░░░░░░  5% ⚠️
Documentation         ████████████████████░ 95% ✅
```
Add a language label to fenced code blocks to satisfy markdownlint (MD040).
Use a language like text for non-code visual blocks.
📝 Proposed fix

````diff
-```
+```text
 Backend (AI)          ████████████████████░ 90% ✅
 Frontend (Web)        ███████████████░░░░░ 70% ✅
 Bot Runner            █░░░░░░░░░░░░░░░░░░░  5% ⚠️
 Documentation         ████████████████████░ 95% ✅
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PROGRESS_SLIDE.md` around lines 11 - 16, The fenced code block containing the
progress bars (the block beginning with the line "Backend (AI)
████████████████████░ 90% ✅") needs a language label to satisfy markdownlint
MD040; update the opening triple-backtick to include a language token such as
"text" (e.g., change ``` to ```text) so the block becomes a labeled code fence
for non-code visual content.
```ts
events: transcriptBuffer.map((t, i) => ({
  speaker: t.speakerName,
  content: t.text,
  startTime: t.timestamp,
  sequence: sequenceCounter + i,
  confidence: t.confidence ?? 0.9,
```
Transcript field names are inconsistent with backend contract.
This guide uses startTime/sequence, but the backend contract in this PR uses capturedAt/sequenceNumber. Keeping these mismatched will break Bot Runner ingestion.
Suggested patch

```diff
- startTime: t.timestamp,
- sequence: sequenceCounter + i,
+ capturedAt: t.timestamp,
+ sequenceNumber: sequenceCounter + i,
@@
- -d '{"events":[{"speaker":"John","content":"Hello team","sequence":1}]}'
+ -d '{"events":[{"speaker":"John","content":"Hello team","sequenceNumber":1,"capturedAt":"2026-02-06T19:30:55Z"}]}'
```

Also applies to: 142-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.context/BOT_RUNNER_INTEGRATION.md around lines 16 - 21, The event mapping
uses the wrong field names for the backend contract: update the mapping inside
the events: transcriptBuffer.map(...) so that startTime becomes capturedAt and
sequence becomes sequenceNumber (e.g., replace startTime: t.timestamp with
capturedAt: t.timestamp and sequence: sequenceCounter + i with sequenceNumber:
sequenceCounter + i); also apply the same renaming to the other similar mapping
block that constructs transcript events (the one that sets confidence and
speaker/content) so all emitted events use capturedAt and sequenceNumber
consistently.
````markdown
## Meeting Lifecycle Flow

```
````
Add a language identifier to the fenced flow diagram block.
Line 72 should specify a language (e.g., text) to satisfy markdownlint MD040.
Suggested patch

````diff
-```
+```text
````

🧰 Tools

🪛 markdownlint-cli2 (0.22.0)

[warning] 72-72: Fenced code blocks should have a language specified (MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.context/BOT_RUNNER_INTEGRATION.md at line 72, The fenced flow diagram block
uses a bare triple-backtick fence (```) and should include a language identifier
to satisfy markdownlint MD040; update the block by changing the opening fence
from ``` to ```text (or another appropriate language) so the flow diagram block
is fenced as ```text and the closing fence remains ```.
````markdown
## Base URL

```
````
Add a language to the Base URL fenced block (MD040).
Use text for the Base URL snippet.
Suggested patch

````diff
-```
+```text
 http://localhost:3000/api/v1
````
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 7-7: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/API_ENDPOINTS.md` at line 7, The Base URL fenced code block is missing a
language tag which triggers MD040; edit the fenced block around the Base URL
snippet (the triple-backtick code fence containing
"http://localhost:3000/api/v1") and add the language identifier text immediately
after the opening backticks so the block becomes a ```text fenced block.
````markdown
## Minutes of Meeting (MoM)

### POST /meetings/:id/mom

Create or update MoM.

**Request:**

```json
{
  "executiveSummary": "Team discussed Q1 priorities...",
  "detailedSummary": "Full summary here...",
  "aiModelVersion": "gpt-4-turbo",
  "overallConfidence": 0.92,
  "processingTimeMs": 1500
}
```

---

### GET /meetings/:id/mom

Get MoM for a meeting (includes highlights).

---

### POST /meetings/:id/highlights

Add a highlight.

**Request:**

```json
{
  "highlightType": "key_point",
  "content": "Decided to launch feature X by March 1",
  "importance": 8,
  "keywords": ["launch", "feature X", "deadline"]
}
```

**Valid types:** `executive_summary`, `key_point`, `notable_quote`, `outcome`

---

### POST /meetings/:id/highlights/batch

Batch add highlights.

---

### GET /meetings/:id/highlights

Get highlights (optionally filter by type).

**Query params:** `?type=key_point`

---

### GET /mom/recent

Get recent MoMs across all meetings.

**Query params:** `?limit=20`

---
````
MoM section is missing AI trigger/status endpoints used by integration flow.
The reference documents MoM retrieval/creation but does not document POST /meetings/:id/generate-mom and GET /meetings/:id/ai-status, which are required by the bot integration lifecycle in this PR context. This leaves the public contract incomplete for clients.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/API_ENDPOINTS.md` around lines 400 - 466, Add documentation entries for
the missing AI trigger/status endpoints used by the integration: document POST
/meetings/:id/generate-mom (trigger AI MoM generation, request body shape,
optional params like model/version/async flag, expected 202/200 responses and
webhook/callback or job id behavior) and GET /meetings/:id/ai-status (poll job
status, response schema including status values like
pending/processing/succeeded/failed, progress/error/message and resulting momId
or highlights), and mention authentication and relevant query params; reference
the endpoint paths POST /meetings/:id/generate-mom and GET
/meetings/:id/ai-status so clients know how to trigger and poll AI MoM jobs in
the integration lifecycle.
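Until the reference is updated, a rough sketch of how a client would address these two endpoints (only the paths come from this review; the builder helpers and shapes are illustrative assumptions, not the implemented contract):

```typescript
// Hypothetical request builders for the undocumented AI endpoints.
interface ApiRequest {
  method: 'GET' | 'POST';
  url: string;
}

function buildGenerateMomRequest(meetingId: string): ApiRequest {
  // Trigger AI MoM generation for a finished meeting.
  return { method: 'POST', url: `/api/v1/meetings/${meetingId}/generate-mom` };
}

function buildAiStatusRequest(meetingId: string): ApiRequest {
  // Poll the AI job status until it reports succeeded/failed.
  return { method: 'GET', url: `/api/v1/meetings/${meetingId}/ai-status` };
}

console.log(buildGenerateMomRequest('abc-123').url); // /api/v1/meetings/abc-123/generate-mom
console.log(buildAiStatusRequest('abc-123').url);    // /api/v1/meetings/abc-123/ai-status
```

Documenting even this much (method + path) would let Bot Runner clients wire up the trigger/poll loop while the full request/response schemas are written.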
```markdown
> **Owner**: @KumarSashank
> **Package**: `@meeting-ai/ai-backend`
> **Last Updated**: 2024-02-06
```
Refresh Last Updated metadata.
The current date stamp appears out-of-sync with this PR’s timeline; please update to the actual revision date.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/database/OVERVIEW.md` around lines 3 - 6, Update the "Last Updated"
metadata in the document by replacing the outdated date value in the "Last
Updated" field (the metadata line containing **Last Updated**) with the actual
revision date for this PR; locate the metadata block that includes **Owner**,
**Package**, and **Last Updated** and change the date string to the
current/accurate revision timestamp.
```tsx
<label className="block text-sm font-medium text-gray-300 mb-2">Full Name</label>
<input
```
Improve form accessibility semantics (label/input association and alert role).
Inputs are not explicitly bound to labels, and the error container should be announced as an alert.
Suggested patch

```diff
-<label className="block text-sm font-medium text-gray-300 mb-2">Email</label>
+<label htmlFor="email" className="block text-sm font-medium text-gray-300 mb-2">Email</label>
 <input
+  id="email"
   type="email"
@@
-{error && (
-  <div className="p-3 rounded-lg bg-red-500/20 border border-red-500/30 text-red-300 text-sm">
+{error && (
+  <div role="alert" className="p-3 rounded-lg bg-red-500/20 border border-red-500/30 text-red-300 text-sm">
     {error}
   </div>
 )}
```

Also applies to: 84-85, 96-98, 109-111
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/components/auth/AuthForm.tsx` around lines 71 - 72, The form
labels in AuthForm.tsx are not associated with their inputs and the error
container lacks an accessible alert role; update each label/input pair (e.g.,
the "Full Name" label and its corresponding input, and the other pairs around
lines shown) by adding matching id attributes on inputs and htmlFor attributes
on labels so they are explicitly bound, and mark the error/message container
with role="alert" (and consider aria-live="assertive") so screen readers
announce validation messages; ensure the unique JSX elements (the label elements
and their paired input elements in AuthForm.tsx) are updated consistently across
the other occurrences referenced (lines ~84-85, 96-98, 109-111).
```tsx
<div className="flex-1 overflow-y-auto p-4 space-y-4">
  {messages.length === 0 ? (
```
Make the chat controls and updates accessible.
The messages region starting on Line 99 is not announced when new assistant replies arrive, the text box on Line 176 relies on placeholder text instead of an accessible label, and the icon-only button on Line 183 has no accessible name. Screen-reader users will have trouble using this surface reliably.
♿ Minimal accessibility fix

```diff
- <div className="flex-1 overflow-y-auto p-4 space-y-4">
+ <div
+   className="flex-1 overflow-y-auto p-4 space-y-4"
+   role="log"
+   aria-live="polite"
+   aria-relevant="additions text"
+ >
@@
- <div className="flex gap-2">
+ <div className="flex gap-2">
+   <label htmlFor="project-chat-input" className="sr-only">
+     Ask a question about this project
+   </label>
   <input
+     id="project-chat-input"
     type="text"
     value={input}
     onChange={(e) => setInput(e.target.value)}
     placeholder="Ask a question..."
@@
   <button
     type="submit"
+     aria-label="Send message"
     disabled={loading || !input.trim()}
     className="px-4 py-3 rounded-xl bg-purple-500 text-white hover:bg-purple-600 disabled:opacity-50 disabled:cursor-not-allowed transition"
   >
```

Also applies to: 174-196
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/components/chat/ChatInterface.tsx` around lines 99 - 100, The
messages container in ChatInterface (the <div className="flex-1 overflow-y-auto
p-4 space-y-4"> that renders {messages}) must be made an accessible live region
(e.g., add aria-live="polite" and role="log" or aria-atomic="true") so new
assistant replies are announced; the text input (the message textarea/input near
Line ~176) must get a persistent accessible label (aria-label or a
visually-hidden <label> tied by id) instead of relying only on placeholder text;
and the icon-only button (the send/toolbar icon button near Line ~183) must have
an accessible name (aria-label or aria-labelledby) to describe its action.
Update the JSX in ChatInterface to add those attributes to the messages region,
add a label/id or aria-label to the message input element, and add an
appropriate aria-label to the icon-only button.
```
AI-Product-Manager/
├── packages/
│   ├── shared/          # Shared types, schemas, contracts
│   ├── web/             # Next.js 16 dashboard (Port 3001)
│   ├── bot-runner/      # Playwright bot (Placeholder)
│   └── ai-backend/      # Fastify API server (Port 3000)
├── docs/                # Comprehensive documentation
└── docker-compose.yml   # PostgreSQL setup
```
Resolve markdownlint warnings in code fence and ordered lists.
Line 43 needs a fenced code language, and list numbering style warnings appear at Lines 345, 350, 355, and 362.
📝 Suggested markdownlint-compliant patch

````diff
-```
+```text
 AI-Product-Manager/
 ├── packages/
 │   ├── shared/ # Shared types, schemas, contracts
@@
-4. **Longitudinal Analysis** 🟢
+1. **Longitudinal Analysis** 🟢
@@
-5. **Notifications** 🟢
+2. **Notifications** 🟢
@@
-6. **Export Features** 🟢
+3. **Export Features** 🟢
@@
-7. **Advanced Features** 🔵
+1. **Advanced Features** 🔵
````

Also applies to: 345-362
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 43-43: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PROGRESS_SUMMARY.md` around lines 43 - 52, The fenced code block showing the
project tree in PROGRESS_SUMMARY.md is missing a language tag; add a language
identifier (for example `text`) to the opening ``` so markdownlint stops
complaining, and fix the ordered list numbering inconsistencies for the sections
titled "Longitudinal Analysis", "Notifications", "Export Features", and
"Advanced Features" so that they follow a proper sequential numbering (e.g., 1.,
2., 3., 4.) instead of the mixed values shown; locate the tree block and the
headings by their exact text "AI-Product-Manager/" and the section titles
"Longitudinal Analysis", "Notifications", "Export Features", and "Advanced
Features" to make the edits.
```markdown
### 4. **Bot Runner** (`@meeting-ai/bot-runner`) - **~5% Complete**

- ⚠️ Placeholder implementation only
- ⚠️ TODO: Playwright bot logic
- ⚠️ TODO: Google Meet integration
- ⚠️ TODO: Caption capture
- ⚠️ TODO: Transcript streaming
```
Bot Runner status appears materially outdated.
This section reports placeholder-only (~5%), which conflicts with the current PR scope describing substantial Bot Runner implementation. Please align this status to avoid misleading roadmap/readiness decisions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PROGRESS_SUMMARY.md` around lines 227 - 234, Update the "Bot Runner" section
in PROGRESS_SUMMARY.md (the "### 4. **Bot Runner** (`@meeting-ai/bot-runner`) -
**~5% Complete**" block) to reflect the current PR's actual implementation
status instead of the placeholder ~5% note: change the completion percentage,
remove the "Placeholder implementation only" flag, and replace the TODO list
with a concise, accurate list of implemented features (e.g., Playwright bot
logic, Google Meet integration, caption capture, transcript streaming) and any
remaining work; ensure the wording matches the PR scope so the roadmap/readiness
status is not misleading.
````markdown
## Architecture

```
````
Specify a language for the architecture fenced block (MD040).
Line 24 should include a language tag (use text for ASCII diagrams).
Suggested patch

````diff
-```
+```text
````

🧰 Tools

🪛 markdownlint-cli2 (0.22.0)

[warning] 24-24: Fenced code blocks should have a language specified (MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 24, Change the architecture fenced code block's opening
fence from ``` to ```text so the diagram has a language tag (e.g., update the
README's architecture fenced block opening fence to use ```text to satisfy
MD040).
```
POST /api/v1/stream/transcript
POST /api/v1/meetings/:id/transcripts/batch
Content-Type: application/json

{
  "events": [
    {
      "speaker": "John Doe",
      "content": "Let's discuss the roadmap",
      "startTime": "2024-02-06T10:00:00Z",
      "endTime": "2024-02-06T10:00:05Z",
      "sequence": 1,
      "confidence": 0.95
    }
  ]
}
```
Add language tags to these fenced blocks.
markdownlint is already flagging MD040 on these fences. Label them (http, json, or text) so the docs lint job passes cleanly.
Also applies to: 29-33, 41-45, 49-61, 65-78, 103-109
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 9-9: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.context/API.md around lines 9 - 25, Add explicit language tags to the
fenced code blocks for the API examples: mark the HTTP request fence containing
"POST /api/v1/meetings/:id/transcripts/batch" as `http` and mark the JSON body
fence (the object with "events") as `json`; do the same for the other examples
called out (the blocks at ranges 29-33, 41-45, 49-61, 65-78, 103-109) so each
fence is labeled `http`, `json`, or `text` appropriately to satisfy markdownlint
MD040.
```json
{
  "events": [
    {
      "speaker": "John Doe",
      "content": "Let's discuss the roadmap",
      "startTime": "2024-02-06T10:00:00Z",
      "endTime": "2024-02-06T10:00:05Z",
      "sequence": 1,
      "confidence": 0.95
    }
  ]
}
```
The transcript batch example no longer matches the implemented API.
packages/ai-backend/src/routes/transcripts.ts expects sequenceNumber and capturedAt. The documented startTime, endTime, and sequence fields are not the current contract, so this payload is no longer copy/paste-safe.
Suggested fix

```diff
 {
   "events": [
     {
       "speaker": "John Doe",
       "content": "Let's discuss the roadmap",
-      "startTime": "2024-02-06T10:00:00Z",
-      "endTime": "2024-02-06T10:00:05Z",
-      "sequence": 1,
+      "sequenceNumber": 1,
+      "capturedAt": "2024-02-06T10:00:05Z",
       "confidence": 0.95
     }
   ]
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.context/API.md around lines 13 - 24, The transcript batch example in
.context/API.md is out of sync with the actual contract used by the transcripts
route; update the example payload to use the fields expected by
packages/ai-backend/src/routes/transcripts.ts (replace "sequence" with
"sequenceNumber" and replace "startTime"/"endTime" with a single "capturedAt"
timestamp per event, and ensure other keys like "speaker", "content", and
"confidence" remain). Edit the JSON example so each event object includes
"speaker", "content", "capturedAt", "sequenceNumber", and "confidence" to match
the names parsed by the route handlers.
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Bot Runner    │────▶│   AI Backend    │────▶│   PostgreSQL    │
│ (Transcription) │     │  (Processing)   │     │   (Storage)     │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                               │
                               ▼
                        ┌─────────────────┐
                        │   OpenAI API    │
                        │    (GPT-4o)     │
                        └─────────────────┘
```
Add language specification to fenced code blocks.
Markdownlint flags code blocks without language identifiers. For ASCII diagrams, use text or plaintext as the language.
🔧 Proposed fix

````diff
-```
+```text
 ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
````

Apply the same fix to the second diagram block at line 46.

Also applies to: 46-57
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 9-9: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/AI_PIPELINE.md` around lines 9 - 20, The fenced ASCII diagrams using
box-drawing characters (e.g., lines containing "┌─────────────────┐", "│ Bot
Runner │────▶│ AI Backend", and the second similar block with "OpenAI API
(GPT-4o)") should include a language identifier to satisfy markdownlint; change
both code fences from ``` to ```text (or ```plaintext) so the blocks start with
```text and end with ```, ensuring both diagram blocks (the one showing Bot
Runner → AI Backend → PostgreSQL and the one including OpenAI API / GPT-4o) are
updated.
### Option 2: Migration Files

For production-ready changes:

```bash
# 1. Edit schema files

# 2. Generate migration SQL
pnpm --filter @meeting-ai/ai-backend db:generate

# 3. Review generated SQL in drizzle/ folder

# 4. Apply migration
pnpm --filter @meeting-ai/ai-backend db:push
```
🧩 Analysis chain
🌐 Web query:
In Drizzle Kit, does drizzle-kit push apply previously generated migration files, or does it diff the current schema directly against the database?
💡 Result:
In Drizzle Kit, drizzle-kit push diffs the current schema directly against the database. It does not apply previously generated migration files. It works by: 1) Reading the Drizzle schema file(s) to compose a JSON snapshot, 2) Introspecting (pulling) the current database schema, 3) Generating SQL migrations based on the differences (diff) between the schema snapshot and the database, and 4) Applying those generated SQL migrations directly to the database. This omits SQL file generation and is designed for a code-first approach, ideal for rapid prototyping. This contrasts with drizzle-kit generate (which creates migration files) followed by drizzle-kit migrate or the migrate function (which applies those files).
Citations:
- 1: https://orm.drizzle.team/docs/drizzle-kit-push
- 2: https://orm.drizzle.team/docs/kit-overview
- 3: https://www.answeroverflow.com/m/1220264364666519572
Change Step 4 from db:push to db:migrate.
The workflow generates migration files but then applies them with db:push, which is incorrect. Drizzle's push command performs direct schema-to-database diffing and skips generated migration files entirely. To apply the generated migrations shown in Step 3, use db:migrate (or equivalent migrate function) instead of db:push.
If you prefer the direct-diff approach, remove Steps 2 and 3 and use only db:push on its own.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/database/MIGRATIONS.md` around lines 31 - 45, Update the migration
workflow in the docs: replace the final step that runs "pnpm --filter
`@meeting-ai/ai-backend` db:push" with "pnpm --filter `@meeting-ai/ai-backend`
db:migrate" so the generated migration files from Step 2/Step 3 are applied (or
alternatively remove Steps 2 and 3 and keep db:push if you intend direct schema
diffing); adjust the text under "Option 2: Migration Files" to use db:migrate
instead of db:push and ensure the sequence references the generated migration
files in the drizzle/ folder.
```typescript
export const teams = pgTable('teams', {
  id: uuid('id').primaryKey().defaultRandom(),
  organizationId: uuid('organization_id')
    .notNull()
    .references(() => organizations.id),
  name: text('name').notNull(),
  description: text('description'),
  isActive: boolean('is_active').default(true).notNull(),
  createdAt: timestamp('created_at').defaultNow().notNull(),
  updatedAt: timestamp('updated_at').defaultNow().notNull(),
});
```
🛠️ Refactor suggestion | 🟠 Major
Missing onDelete cascade behavior on foreign keys.
Foreign key references for teams.organizationId, teamMembers.teamId, and projects.organizationId/teamId don't specify deletion behavior. Deleting a parent organization or team could leave orphaned records or fail with FK constraint errors.
🛠️ Example fix for teams table

```diff
 export const teams = pgTable('teams', {
   id: uuid('id').primaryKey().defaultRandom(),
   organizationId: uuid('organization_id')
     .notNull()
-    .references(() => organizations.id),
+    .references(() => organizations.id, { onDelete: 'cascade' }),
   name: text('name').notNull(),
   ...
 });
```

Apply similar changes to teamMembers.teamId, projects.organizationId, and projects.teamId based on your desired deletion semantics (cascade, set null, or restrict).
Also applies to: 46-62
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/schema/organizations.ts` around lines 21 - 31, The
foreign key references in the schema (teams.organizationId, teamMembers.teamId,
projects.organizationId, projects.teamId) lack explicit onDelete behavior and
should be updated to include the desired deletion semantics (e.g.,
.references(() => organizations.id).onDelete('cascade') or .onDelete('set null')
as appropriate); locate the pgTable definitions for teams, teamMembers, and
projects and append the appropriate .onDelete('<behavior>') to the references()
calls for organizationId and teamId to prevent orphaned rows or FK errors when
parent rows are deleted.
```css
width: 8px;
height: 8px;
border-radius: 50%;
background: currentColor;
```
Fix stylelint errors for consistency.
Multiple stylelint violations detected:

- Line 137: `currentColor` should be `currentcolor` (value-keyword-case)
- Line 166: `fadeIn` should be `fade-in` (keyframes-name-pattern)
- Line 441: `slideDown` should be `slide-down` (keyframes-name-pattern)

🔧 Proposed fixes

```diff
 .status-dot {
   width: 8px;
   height: 8px;
   border-radius: 50%;
-  background: currentColor;
+  background: currentcolor;
   flex-shrink: 0;
 }
```

```diff
-@keyframes fadeIn {
+@keyframes fade-in {
   from {
```

Update animation references:

```diff
 .state-panel {
-  animation: fadeIn 0.3s ease;
+  animation: fade-in 0.3s ease;
 }
```

```diff
-@keyframes slideDown {
+@keyframes slide-down {
   from {
```

```diff
 .settings-panel {
-  animation: slideDown 0.3s ease;
+  animation: slide-down 0.3s ease;
 }
```

Also applies to: 166-166, 441-441
🧰 Tools
🪛 Stylelint (17.5.0)
[error] 137-137: Expected "currentColor" to be "currentcolor" (value-keyword-case)
(value-keyword-case)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chrome-extension/src/popup/popup.css` at line 137, Replace the CSS
value and keyframe names to match stylelint rules: change the property value
background: currentColor to background: currentcolor (value-keyword-case),
rename `@keyframes` fadeIn to `@keyframes` fade-in and `@keyframes` slideDown to
`@keyframes` slide-down (keyframes-name-pattern), and update all
animation/animation-name references (and vendor-prefixed variants) that use
fadeIn or slideDown to use fade-in and slide-down respectively so names stay
consistent across declarations and keyframes.
```tsx
import { useEffect, useState } from 'react';

import { ChatInterface } from '@/components/chat/ChatInterface';
import { MoMDisplay } from '@/components/mom/MoMDisplay';
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for MoMDisplay component
fd -t f "MoMDisplay" packages/web/
# Check if components/mom directory exists
ls -la packages/web/components/mom/ 2>/dev/null || echo "Directory not found"
# Alternative location check
fd -t f -e tsx -e ts "MoMDisplay" packages/web/
```

Repository: KumarSashank/AI-Product-Manager

Length of output: 94
Remove or replace the import for non-existent component @/components/mom/MoMDisplay.
The component file and directory packages/web/components/mom/ do not exist in the codebase. This import will cause a build or runtime failure.
🧰 Tools
🪛 ESLint
[error] 8-8: There should be at least one empty line between import groups
(import/order)
🪛 GitHub Check: lint-and-format
[failure] 8-8:
Unable to resolve path to module '@/components/mom/MoMDisplay'
[failure] 8-8:
There should be at least one empty line between import groups
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/`(dashboard)/projects/[id]/page.tsx at line 8, The import of
the non-existent component MoMDisplay in page.tsx will break builds; remove the
line importing '@/components/mom/MoMDisplay' or replace it with an existing
component (or create the missing component) and update any usages of MoMDisplay
in the file to use the replacement; locate the import statement for MoMDisplay
and either delete it and its JSX usages or point it to the correct existing
module name.
```tsx
onClick={() => {
  alert('To use the extension:\n1. Open the meeting link\n2. Click the Meeting AI extension icon in Chrome\n3. Select this project from the dropdown\n4. Click "Start Capture"');
}}
className="px-4 py-2 rounded-lg bg-cyan-500 text-white text-sm font-medium hover:bg-cyan-600 transition"
>
How to use
```
🧹 Nitpick | 🔵 Trivial
Avoid alert() for user guidance.
Using alert() blocks the UI and provides a poor user experience. Consider a tooltip, modal, or inline expandable section instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/`(dashboard)/projects/[id]/page.tsx around lines 296 - 301,
Replace the blocking alert used in the onClick handler for the "How to use"
button with a non-blocking UI element (modal, tooltip, or expandable inline
section): add a showHowTo boolean state (e.g., useState in the page component)
and change the onClick for the "How to use" button to set showHowTo true;
implement a HowToModal (or Tooltip/Expandable) component that displays the
instructions (steps 1–4) and includes an accessible close control that sets
showHowTo false; ensure the rendering of HowToModal is conditional on showHowTo
and that it is keyboard-accessible and styled consistently with existing classes
instead of using alert().
```tsx
const typeInfo = itemTypeConfig[item.itemType] || itemTypeConfig.action_item;
const statusInfo = statusConfig[item.status] || statusConfig.pending;
const priorityInfo = priorityConfig[item.priority || 'medium'];

return (
  <div
    key={item.id}
    className={`bg-white/5 hover:bg-white/10 border-l-4 ${priorityInfo.color} rounded-lg p-4 transition-all group`}
```
Guard the priority lookup before reading priorityInfo.color.
Line 71 only falls back when item.priority is missing. If the API ever sends an unexpected non-empty value, priorityInfo becomes undefined and Line 76 throws during render. Mirror the fallback pattern already used for itemType and status.
🛡️ Minimal fix

```diff
- const priorityInfo = priorityConfig[item.priority || 'medium'];
+ const priorityInfo =
+   priorityConfig[item.priority || 'medium'] ?? priorityConfig.medium;
```

📝 Committable suggestion

> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
const typeInfo = itemTypeConfig[item.itemType] || itemTypeConfig.action_item;
const statusInfo = statusConfig[item.status] || statusConfig.pending;
const priorityInfo =
  priorityConfig[item.priority || 'medium'] ?? priorityConfig.medium;

return (
  <div
    key={item.id}
    className={`bg-white/5 hover:bg-white/10 border-l-4 ${priorityInfo.color} rounded-lg p-4 transition-all group`}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/components/projects/TaskList.tsx` around lines 69 - 76, The
priority lookup can return undefined for unexpected values which causes reading
priorityInfo.color to throw; mirror the safe fallback used for
typeInfo/statusInfo by computing priorityInfo from priorityConfig with a
fallback (e.g., priorityConfig[item.priority] || priorityConfig['medium'])
before using it in the className; update the assignment of priorityInfo (and
then the className that references priorityInfo.color) so it always has a
defined object.
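The double-fallback pattern generalizes beyond this component; a minimal standalone sketch (the config keys mirror the component, but the color values here are made up for illustration):

```typescript
// Illustrative lookup table; values are placeholders, not the real tokens.
const priorityConfig: Record<string, { color: string }> = {
  low: { color: 'border-gray-400' },
  medium: { color: 'border-yellow-400' },
  high: { color: 'border-red-400' },
};

// Safe lookup: a missing priority falls back to 'medium' via ||, and an
// unexpected non-empty value (e.g. 'urgent') falls back via ?? instead of
// leaving the result undefined and crashing on `.color`.
function getPriorityInfo(priority?: string): { color: string } {
  return priorityConfig[priority || 'medium'] ?? priorityConfig.medium;
}

console.log(getPriorityInfo('high').color);    // border-red-400
console.log(getPriorityInfo(undefined).color); // border-yellow-400
console.log(getPriorityInfo('urgent').color);  // border-yellow-400 (fallback)
```

The `||` alone only covers falsy inputs; the trailing `??` is what protects against unknown non-empty strings from the API.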
| {item.dueDate && ( | ||
| <div className="flex items-center gap-1"> | ||
| <svg | ||
| className="w-3.5 h-3.5" | ||
| fill="none" | ||
| stroke="currentColor" | ||
| viewBox="0 0 24 24" | ||
| > | ||
| <path | ||
| strokeLinecap="round" | ||
| strokeLinejoin="round" | ||
| strokeWidth={2} | ||
| d="M8 7V3m8 4V3m-9 8h10M5 21h14a2 2 0 002-2V7a2 2 0 00-2-2H5a2 2 0 00-2 2v12a2 2 0 002 2z" | ||
| /> | ||
| </svg> | ||
| <span>{new Date(item.dueDate).toLocaleDateString()}</span> |
🧩 Analysis chain
🌐 Web query:
How does JavaScript parse new Date('2024-02-09'), and why can toLocaleDateString() show the previous day in U.S. time zones?
💡 Result:
JavaScript parses new Date('2024-02-09') as February 9, 2024, at 00:00:00 UTC. This is because date-only strings in YYYY-MM-DD format (using hyphens) are treated as UTC by the Date constructor, which uses Date.parse. MDN confirms that such strings without timezone info are interpreted as UTC. When toLocaleDateString is called in a U.S. time zone (e.g., EST, UTC-5), it converts this UTC timestamp to local time: 00:00:00 UTC becomes 19:00:00 the previous day (February 8) local time. Since toLocaleDateString formats only the date portion in the local timezone, it shows the previous day. Example in EST (UTC-5): - UTC: 2024-02-09 00:00:00 - Local: 2024-02-08 19:00:00 EST - toLocaleDateString('en-US'): "2/8/2024" This behavior is consistent across major browsers and explained in ECMAScript specs via MDN Date.parse and Date constructor docs. Note: ES6 aimed to change this to local time, but current implementations (as of 2026) retain UTC parsing for YYYY-MM-DD.
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/Date
- 2: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse
- 3: https://medium.com/@drevets/exploring-unexpected-behavior-with-javascript-date-objects-edaec2b527f9
- 4: https://stackoverflow.com/questions/21152934/strange-behavior-in-javascript-new-date-function
- 5: https://stackoverflow.com/questions/39113293/javascript-date-issues-returning-1-day-less-than-set-date-timezone
- 6: https://stackoverflow.com/questions/29174810/javascript-date-timezone-issue
- 7: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleDateString
- 8: https://stackoverflow.com/questions/32877278/tolocaledatestring-is-subtracting-a-day
🏁 Script executed:
# First, check if the file exists and get its size
wc -l packages/web/components/projects/TaskList.tsx
Repository: KumarSashank/AI-Product-Manager
🏁 Script executed:
# Read the specific lines in question (106-121)
sed -n '106,121p' packages/web/components/projects/TaskList.tsx
🏁 Script executed:
# Get more context around these lines to understand the data structure
sed -n '90,130p' packages/web/components/projects/TaskList.tsx
🏁 Script executed:
# Search for the type definition of 'item' to understand the dueDate format
rg "item\.dueDate|dueDate.*:" packages/web/components/projects/ -A 2 -B 2
🏁 Script executed:
# Read the beginning of TaskList.tsx to see the component definition and props
head -50 packages/web/components/projects/TaskList.tsx
🏁 Script executed:
# Search for type definitions related to Task or Item
rg "interface Task|type Task|export.*Task" packages/web -A 5 -B 1
🏁 Script executed:
# Find where TaskList component is imported/used
rg "TaskList" packages/web --type tsx --type ts -l
🏁 Script executed:
# Search for MeetingItem type definition (without filetype filter)
rg "interface MeetingItem|type MeetingItem|export.*MeetingItem" packages/ -A 10 -B 1
🏁 Script executed:
# Look in lib/api for the MeetingItem definition
find packages/web -path "*lib/api*" -type f
🏁 Script executed:
# Search more broadly for MeetingItem
rg "MeetingItem" packages/ -A 5 -B 1 --max-count=20
🏁 Script executed:
# Check the database schema or backend to see how dueDate is stored/formatted
rg "dueDate" packages/ai-backend -B 2 -A 2
🏁 Script executed:
# Look for any date parsing or formatting in the backend
rg "toISOString|toLocaleDateString|new Date.*dueDate" packages/ -B 2 -A 2
🏁 Script executed:
# Check the database schema definition for meetingItems table
rg "meetingItems|dueDate" packages/ai-backend/src/db -B 3 -A 3
Fix the date parsing to handle date-only strings correctly.
Line 121 passes item.dueDate (a YYYY-MM-DD string) directly to new Date(), which parses it as UTC midnight. When toLocaleDateString() converts to U.S. time zones (EST, CST, etc.), midnight UTC becomes the previous day in local time, showing the wrong date (e.g., 2/8/2024 instead of 2/9/2024).
Use a date library (date-fns, day.js) or manually parse as a local date instead. Alternatively, format the raw string without passing through the Date constructor:
// Option 1: Format raw string
<span>{item.dueDate}</span>
// Option 2: Parse as local date
const [year, month, day] = item.dueDate.split('-').map(Number);
const localDate = new Date(year, month - 1, day);
<span>{localDate.toLocaleDateString()}</span>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/components/projects/TaskList.tsx` around lines 106 - 121, The
dueDate string (item.dueDate) is being passed directly to new Date() in
TaskList.tsx which treats YYYY-MM-DD as UTC and can shift the displayed day in
some timezones; update the rendering to parse the YYYY-MM-DD as a local date
instead (or use a date library) before calling toLocaleDateString — e.g. split
item.dueDate into year, month, day and construct new Date(year, month-1, day) or
use date-fns/dayjs to parse as local, then render that formatted value instead
of passing item.dueDate directly to new Date().
| async searchByKeywords(_organizationId: string, keywords: string[]) { | ||
| // Note: This is a simple implementation. For production, use full-text search | ||
| // or pgvector for semantic search | ||
| const allHighlights = await db.query.meetingHighlights.findMany({ | ||
| with: { | ||
| meeting: true, | ||
| }, | ||
| }); | ||
| // Filter by keywords (simple string matching) | ||
| return allHighlights.filter((h) => { | ||
| if (!h.keywords) return false; | ||
| return keywords.some((kw) => | ||
| h.keywords?.some((hkw) => hkw.toLowerCase().includes(kw.toLowerCase())) | ||
| ); | ||
| }); |
Apply organization scoping before searching highlights.
_organizationId is unused here, so this loads every highlight and can return keyword matches from other organizations. That's a cross-tenant data leak on top of an O(total_highlights) scan.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/mom.repository.ts` around lines 112 -
127, The searchByKeywords function ignores the _organizationId and fetches all
meetingHighlights, causing cross-tenant leaks; update the
db.query.meetingHighlights.findMany call in searchByKeywords to scope results by
organization (e.g., add a where clause that filters highlights to
meeting.organizationId === _organizationId or equivalent relation filter like
where: { meeting: { organizationId: _organizationId } }) so only highlights for
that organization are loaded, then keep the existing keyword filtering on the
returned set.
| async findRecent(limit = 20) { | ||
| return db.query.moms.findMany({ | ||
| orderBy: [desc(moms.generatedAt)], | ||
| limit, | ||
| with: { | ||
| meeting: true, | ||
| highlights: true, | ||
| }, | ||
| }); | ||
| } |
findRecent is unscoped despite its contract.
The method comment says "for an organization", but there is no organization parameter or filter in the query. Any multi-tenant caller will get every organization's MoMs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/mom.repository.ts` around lines 133 -
142, The findRecent method (async findRecent) is unscoped to an organization;
update its signature to accept an organizationId (e.g.,
findRecent(organizationId: string, limit = 20)) and add a where filter to the
db.query.moms.findMany call to restrict results to the provided organizationId
(use moms.organizationId or the correct column symbol) while keeping the
existing orderBy (desc(moms.generatedAt)), limit, and with { meeting, highlights
}; also update any call sites to pass the organizationId.
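As above, the real change is a `where` on `moms.organizationId` inside `findMany`; this in-memory sketch (with an assumed, simplified `Mom` shape) just nails down the scoped contract — tenant-only, newest first, capped at `limit`:

```typescript
interface Mom {
  organizationId: string;
  generatedAt: Date;
}

// findRecent, scoped: only this organization's MoMs, newest first, at most `limit`.
function findRecent(all: Mom[], organizationId: string, limit = 20): Mom[] {
  return all
    .filter((m) => m.organizationId === organizationId)
    .sort((a, b) => b.generatedAt.getTime() - a.generatedAt.getTime())
    .slice(0, limit);
}
```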
| async countByMeetingId(meetingId: string): Promise<number> { | ||
| const result = await db | ||
| .select() | ||
| .from(transcriptEvents) | ||
| .where(eq(transcriptEvents.meetingId, meetingId)); | ||
| return result.length; |
🛠️ Refactor suggestion | 🟠 Major
Use COUNT(*) instead of materializing the whole transcript.
This pulls every row for the meeting into memory just to return result.length. A database count aggregate keeps this cheap for long meetings.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/transcript.repository.ts` around
lines 75 - 80, The current countByMeetingId implementation queries all rows from
transcriptEvents and returns result.length, which materializes every transcript;
replace it with a COUNT(*) aggregate query using the DB's aggregate/select API
(e.g., use db.select({ count: sql`count(*)`
}).from(transcriptEvents).where(eq(transcriptEvents.meetingId, meetingId))) in
the countByMeetingId function so the database returns a single numeric count,
then convert/parse that returned value to a number and return it; also handle
possible bigint/string result types from the driver when converting the
aggregate to a JS number.
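The driver-type caveat at the end of the prompt is easy to get wrong: Postgres drivers commonly return `count(*)` as a string or bigint. A small normalizer, independent of any ORM (the Drizzle query itself would wrap a `count(*)` expression in the `sql` template inside `db.select`, which is not shown here):

```typescript
// Normalize a COUNT(*) aggregate that a driver may hand back as
// number, bigint, or string into a plain non-negative JS number.
function toCount(raw: number | bigint | string): number {
  const n =
    typeof raw === 'bigint'
      ? Number(raw)
      : typeof raw === 'string'
        ? Number.parseInt(raw, 10)
        : raw;
  if (!Number.isSafeInteger(n) || n < 0) {
    throw new Error(`invalid count value: ${String(raw)}`);
  }
  return n;
}
```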
| async getTranscriptBySpeaker(meetingId: string) { | ||
| const events = await this.findByMeetingId(meetingId); | ||
| const bySpeaker: Record<string, TranscriptEvent[]> = {}; | ||
| for (const event of events) { | ||
| if (!bySpeaker[event.speaker]) { | ||
| bySpeaker[event.speaker] = []; | ||
| } | ||
| bySpeaker[event.speaker]!.push(event); | ||
| } | ||
| return bySpeaker; |
Don't group by speaker into a plain object keyed by user input.
Speaker names come from external input. Values like __proto__ or constructor can corrupt this accumulator or throw before push(). Use Object.create(null) or a Map instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/transcript.repository.ts` around
lines 94 - 105, The accumulator in getTranscriptBySpeaker uses a plain object
keyed by external speaker names which can be polluted by prototype properties;
change bySpeaker to a safe map-like container (e.g., initialize bySpeaker with
Object.create(null) or use a Map<string, TranscriptEvent[]>) and update the loop
to read/update entries via bySpeaker[speaker] or bySpeaker.get/set accordingly,
then return the safe structure (or convert the Map to a plain object if callers
expect one). Ensure references to bySpeaker in getTranscriptBySpeaker and any
consumers are updated to match the chosen type.
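A pollution-safe sketch of the grouping step using a `Map`, as the prompt suggests. The `{ speaker: string }` constraint is an assumption standing in for the real `TranscriptEvent` type:

```typescript
// Group events by speaker without touching Object.prototype, so external
// speaker names like "__proto__" or "constructor" are just ordinary keys.
function groupBySpeaker<T extends { speaker: string }>(events: T[]): Map<string, T[]> {
  const bySpeaker = new Map<string, T[]>();
  for (const event of events) {
    const bucket = bySpeaker.get(event.speaker);
    if (bucket) {
      bucket.push(event);
    } else {
      bySpeaker.set(event.speaker, [event]);
    }
  }
  return bySpeaker;
}
```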
| */ | ||
| import { relations } from 'drizzle-orm'; | ||
| import { pgTable, uuid, text, timestamp, date, integer, real, jsonb } from 'drizzle-orm/pg-core'; |
Prevent duplicate tag links in meeting_item_tags.
Right now repeated add-tag calls can create multiple identical (meetingItemId, tagId) rows, which will duplicate tags in joins and make the endpoint non-idempotent.
🔐 Suggested composite key
-import { pgTable, uuid, text, timestamp, date, integer, real, jsonb } from 'drizzle-orm/pg-core';
+import {
+ pgTable,
+ uuid,
+ text,
+ timestamp,
+ date,
+ integer,
+ real,
+ jsonb,
+ primaryKey,
+} from 'drizzle-orm/pg-core';
...
-export const meetingItemTags = pgTable('meeting_item_tags', {
- meetingItemId: uuid('meeting_item_id')
- .notNull()
- .references(() => meetingItems.id),
- tagId: uuid('tag_id')
- .notNull()
- .references(() => tags.id),
-});
+export const meetingItemTags = pgTable(
+ 'meeting_item_tags',
+ {
+ meetingItemId: uuid('meeting_item_id')
+ .notNull()
+ .references(() => meetingItems.id),
+ tagId: uuid('tag_id')
+ .notNull()
+ .references(() => tags.id),
+ },
+ (table) => ({
+ pk: primaryKey({ columns: [table.meetingItemId, table.tagId] }),
+ })
+);
Also applies to: 75-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/schema/meetingItems.ts` at line 7, The
meeting_item_tags join table allows duplicate (meetingItemId, tagId) pairs;
update its schema definition (the pgTable declaration for meeting_item_tags) to
enforce a composite unique constraint or primary key on (meetingItemId, tagId)
so repeated add-tag calls are idempotent; modify the table definition where
meeting_item_tags is declared (and similarly for the other join table around
lines 75-82) to add the composite key constraint using the table options
provided by drizzle-orm (e.g., uniqueKey/primaryKey on meetingItemId and tagId)
so inserts of the same pair will be prevented at the DB level.
| // Tell content script to stop | ||
| if (tabId) { | ||
| chrome.tabs.sendMessage(tabId, { type: 'STOP_CAPTURE' }); | ||
| } | ||
| // Complete meeting in backend | ||
| if (currentMeetingId) { | ||
| await completeMeeting(currentMeetingId); | ||
| console.log(`[Meeting AI BG] Meeting completed: ${currentMeetingId}`); | ||
| // Trigger AI processing | ||
| try { | ||
| await triggerMoM(currentMeetingId); | ||
| console.log('[Meeting AI BG] MoM generation triggered'); | ||
| } catch (e) { | ||
| console.warn('[Meeting AI BG] MoM generation failed:', e.message); | ||
| } | ||
| try { | ||
| await triggerExtractItems(currentMeetingId); | ||
| console.log('[Meeting AI BG] Item extraction triggered'); | ||
| } catch (e) { | ||
| console.warn('[Meeting AI BG] Item extraction failed:', e.message); | ||
| } | ||
| } |
Flush the final transcript batch before completing the meeting.
STOP_CAPTURE is fire-and-forget, but the meeting is completed and AI processing starts immediately afterward. Any buffered captions still in the tab at that point are lost from the transcript and downstream MoM generation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chrome-extension/src/background/service-worker.js` around lines 142
- 166, The final caption batch is lost because STOP_CAPTURE is fire-and-forget;
before calling completeMeeting(currentMeetingId) and triggering
triggerMoM/triggerExtractItems, send a synchronous flush/ack message to the
content script (e.g., new message type 'FLUSH_CAPTIONS' or make STOP_CAPTURE
return an acknowledgement) and await the response so the content script persists
any buffered captions to the background (or backend) before proceeding; update
the logic around tabId, STOP_CAPTURE, completeMeeting, triggerMoM and
triggerExtractItems to wait for that flush acknowledgement.
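One hedged sketch of the flush-then-complete sequence: promisify the callback-style send with a timeout so the stop path can await the content script's ack without hanging forever. The helper takes the send function as a parameter so it is testable outside an extension; the `FLUSH_CAPTIONS`/ack behavior in the content script is hypothetical:

```typescript
// Await a callback-style message send (e.g. chrome.tabs.sendMessage) with a
// timeout, so a final STOP_CAPTURE / FLUSH_CAPTIONS round-trip can complete
// before the meeting is marked complete. Resolves null on timeout instead of
// blocking the stop sequence forever.
function requestWithTimeout<T>(
  send: (msg: unknown, cb: (res: T) => void) => void,
  msg: unknown,
  timeoutMs = 2000
): Promise<T | null> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(null), timeoutMs);
    send(msg, (res) => {
      clearTimeout(timer);
      resolve(res);
    });
  });
}

// Sketch of the stop sequence (identifiers from the service worker assumed):
//   const ack = await requestWithTimeout(
//     (m, cb) => chrome.tabs.sendMessage(tabId, m, cb),
//     { type: 'STOP_CAPTURE' } // content script flushes buffered captions, then acks
//   );
//   await completeMeeting(currentMeetingId); // only after the flush (or timeout)
```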
| stopDurationTimer(); | ||
| stopStatsPolling(); | ||
| const result = await sendToBackground('POPUP_STOP_CAPTURE', { | ||
| tabId: currentTabId, | ||
| }); | ||
| if (result?.success) { | ||
| showState('ready'); | ||
| } | ||
| els.stopBtn.disabled = false; | ||
| els.stopBtn.innerHTML = ` | ||
| <svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"> | ||
| <rect x="6" y="6" width="12" height="12" rx="2" fill="currentColor"/> | ||
| </svg> | ||
| Stop Capture | ||
| `; |
Don't tear down the capturing UI until stop succeeds.
stopDurationTimer() and stopStatsPolling() run before the background confirms the capture stopped. If POPUP_STOP_CAPTURE fails, the popup stays on the capturing screen but its timer/stats freeze.
🧰 Suggested fix
- stopDurationTimer();
- stopStatsPolling();
-
const result = await sendToBackground('POPUP_STOP_CAPTURE', {
tabId: currentTabId,
});
if (result?.success) {
+ stopDurationTimer();
+ stopStatsPolling();
showState('ready');
+ } else {
+ setConnection('disconnected', result?.error || 'Failed to stop capture');
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| stopDurationTimer(); | |
| stopStatsPolling(); | |
| const result = await sendToBackground('POPUP_STOP_CAPTURE', { | |
| tabId: currentTabId, | |
| }); | |
| if (result?.success) { | |
| showState('ready'); | |
| } | |
| els.stopBtn.disabled = false; | |
| els.stopBtn.innerHTML = ` | |
| <svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"> | |
| <rect x="6" y="6" width="12" height="12" rx="2" fill="currentColor"/> | |
| </svg> | |
| Stop Capture | |
| `; | |
| const result = await sendToBackground('POPUP_STOP_CAPTURE', { | |
| tabId: currentTabId, | |
| }); | |
| if (result?.success) { | |
| stopDurationTimer(); | |
| stopStatsPolling(); | |
| showState('ready'); | |
| } else { | |
| setConnection('disconnected', result?.error || 'Failed to stop capture'); | |
| } | |
| els.stopBtn.disabled = false; | |
| els.stopBtn.innerHTML = ` | |
| <svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"> | |
| <rect x="6" y="6" width="12" height="12" rx="2" fill="currentColor"/> | |
| </svg> | |
| Stop Capture | |
| `; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/chrome-extension/src/popup/popup.js` around lines 344 - 361, The
current flow calls stopDurationTimer() and stopStatsPolling() before awaiting
sendToBackground('POPUP_STOP_CAPTURE'), causing the popup UI to tear down even
if the background stop fails; change the logic so
sendToBackground('POPUP_STOP_CAPTURE', { tabId: currentTabId }) is awaited first
and only when result?.success is true then call stopDurationTimer(),
stopStatsPolling(), update showState('ready'), and update els.stopBtn
(disabled/innerHTML); if result?.success is false, do not stop the
timers/polling and instead surface an error (e.g., log or show a toast) so the
capturing UI remains consistent.
| import { ChatInterface } from '@/components/chat/ChatInterface'; | ||
| import { MoMDisplay } from '@/components/mom/MoMDisplay'; | ||
| import { TaskList } from '@/components/projects/TaskList'; | ||
| import { TranscriptUpload } from '@/components/transcript/TranscriptUpload'; | ||
| import { projectsApi, botApi, Project, Meeting, MeetingItem, MoM, ProjectStats } from '@/lib/api'; |
Fix import ordering and grouping per ESLint rules.
Static analysis flags multiple import order violations. Group imports correctly: external packages first, then internal aliases, with empty lines between groups.
🔧 Proposed fix
'use client';
import Link from 'next/link';
import { useParams } from 'next/navigation';
import { useEffect, useState } from 'react';
import { ChatInterface } from '@/components/chat/ChatInterface';
import { MoMDisplay } from '@/components/mom/MoMDisplay';
import { TaskList } from '@/components/projects/TaskList';
import { TranscriptUpload } from '@/components/transcript/TranscriptUpload';
+
+import { projectsApi, botApi, Project, Meeting, MeetingItem, MoM, ProjectStats } from '@/lib/api';
Note: ESLint also indicates TaskList should be imported before MoMDisplay. Verify your ESLint import/order configuration and sort accordingly.
🧰 Tools
🪛 ESLint
[error] 7-7: There should be at least one empty line between import groups
(import/order)
[error] 8-8: There should be at least one empty line between import groups
(import/order)
[error] 9-9: There should be at least one empty line between import groups
(import/order)
[error] 9-9: @/components/projects/TaskList import should occur before import of @/components/mom/MoMDisplay
(import/order)
[error] 10-10: There should be at least one empty line between import groups
(import/order)
[error] 11-11: @/lib/api import should occur before import of @/components/mom/MoMDisplay
(import/order)
🪛 GitHub Check: lint-and-format
[failure] 9-9:
@/components/projects/TaskList import should occur before import of @/components/mom/MoMDisplay
[failure] 9-9:
There should be at least one empty line between import groups
[failure] 8-8:
Unable to resolve path to module '@/components/mom/MoMDisplay'
[failure] 8-8:
There should be at least one empty line between import groups
[failure] 7-7:
There should be at least one empty line between import groups
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/(dashboard)/projects/[id]/page.tsx` around lines 7 - 11,
Reorder and regroup the imports to satisfy ESLint import/order: place external
package imports first (none present here), then internal/alias imports grouped
together with a single blank line separating distinct groups; specifically
ensure TaskList is imported before MoMDisplay and keep ChatInterface and
TranscriptUpload with the other component imports, and leave the api/type
imports (projectsApi, botApi, Project, Meeting, MeetingItem, MoM, ProjectStats)
as a separate grouped import; update the import statements for ChatInterface,
TaskList, MoMDisplay, TranscriptUpload and the projectsApi/... import to reflect
this ordering.
| useEffect(() => { | ||
| loadProject(); | ||
| }, [projectId]); |
Missing dependency in useEffect hook.
loadProject is referenced inside the effect but not listed in the dependency array. This causes a stale closure warning and may lead to bugs if loadProject changes.
🔧 Proposed fix using useCallback
+import { useCallback, useEffect, useState } from 'react';
-import { useEffect, useState } from 'react';
...
- const loadProject = async () => {
+ const loadProject = useCallback(async () => {
try {
const res = await projectsApi.get(projectId);
setProject(res.project);
setMeetings(res.meetings);
setItems(res.items);
setMoms(res.moms || {});
setStats(res.stats);
} catch (err) {
console.error('Failed to load project:', err);
} finally {
setLoading(false);
}
- };
+ }, [projectId]);
useEffect(() => {
loadProject();
- }, [projectId]);
+ }, [loadProject]);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| useEffect(() => { | |
| loadProject(); | |
| }, [projectId]); | |
| import { useCallback, useEffect, useState } from 'react'; | |
| ... | |
| const loadProject = useCallback(async () => { | |
| try { | |
| const res = await projectsApi.get(projectId); | |
| setProject(res.project); | |
| setMeetings(res.meetings); | |
| setItems(res.items); | |
| setMoms(res.moms || {}); | |
| setStats(res.stats); | |
| } catch (err) { | |
| console.error('Failed to load project:', err); | |
| } finally { | |
| setLoading(false); | |
| } | |
| }, [projectId]); | |
| useEffect(() => { | |
| loadProject(); | |
| }, [loadProject]); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/(dashboard)/projects/[id]/page.tsx` around lines 34 - 36,
The useEffect references loadProject but doesn't include it in the dependency
array, causing a stale-closure warning; wrap the loadProject function in
useCallback (e.g., const loadProject = useCallback(..., [projectId, /* other
deps */])) so its identity is stable, then add loadProject to the useEffect
dependency array (useEffect(() => { loadProject(); }, [projectId, loadProject]))
— or alternatively include loadProject directly in the dependency array if you
don't want to use useCallback; ensure any values used inside loadProject are
also listed in its dependency array.
| useEffect(() => { | ||
| if (!botSessionId || ['stopped', 'error', 'idle'].includes(botStatus)) return; | ||
| const interval = setInterval(async () => { | ||
| try { | ||
| const res = await botApi.status(botSessionId); | ||
| setBotStatus(res.status); | ||
| if (['stopped', 'error'].includes(res.status)) { | ||
| clearInterval(interval); | ||
| } | ||
| } catch { | ||
| clearInterval(interval); | ||
| } | ||
| }, 3000); | ||
| return () => clearInterval(interval); | ||
| }, [botSessionId, botStatus]); |
🧹 Nitpick | 🔵 Trivial
Stale botStatus reference in polling interval closure.
The interval callback captures botStatus at creation time. While the dependency array includes botStatus, creating and clearing intervals on every status change is inefficient. Consider using a ref for the status check.
♻️ Proposed refactor using ref
+ const botStatusRef = useRef(botStatus);
+ useEffect(() => {
+ botStatusRef.current = botStatus;
+ }, [botStatus]);
useEffect(() => {
- if (!botSessionId || ['stopped', 'error', 'idle'].includes(botStatus)) return;
+ if (!botSessionId || ['stopped', 'error', 'idle'].includes(botStatusRef.current)) return;
const interval = setInterval(async () => {
try {
const res = await botApi.status(botSessionId);
setBotStatus(res.status);
if (['stopped', 'error'].includes(res.status)) {
clearInterval(interval);
}
} catch {
clearInterval(interval);
}
}, 3000);
return () => clearInterval(interval);
- }, [botSessionId, botStatus]);
+ }, [botSessionId]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/app/(dashboard)/projects/[id]/page.tsx` around lines 92 - 108,
The useEffect polling closure captures a stale botStatus and recreates the
interval on every status change; fix by introducing a mutable ref (e.g.,
botStatusRef) that you update whenever botStatus changes and have the interval
callback read from botStatusRef instead of the captured botStatus, make the
effect that sets up setInterval only depend on botSessionId (not botStatus),
call botApi.status(botSessionId) inside the interval, call setBotStatus with
response and update botStatusRef.current, and clearInterval when the response
status (res.status) or botStatusRef.current becomes 'stopped' or 'error' and on
component cleanup; keep try/catch to clear the interval on errors.
| /** | ||
| * Complete a meeting (set end time and calculate duration) | ||
| */ | ||
| async complete(id: string, endTime = new Date()) { | ||
| const meeting = await this.findById(id); | ||
| if (!meeting || !meeting.startTime) return null; | ||
| const durationMinutes = Math.round((endTime.getTime() - meeting.startTime.getTime()) / 60000); | ||
| const result = await db | ||
| .update(meetings) | ||
| .set({ | ||
| status: 'completed', | ||
| endTime, | ||
| durationMinutes, | ||
| updatedAt: new Date(), | ||
| }) | ||
| .where(eq(meetings.id, id)) | ||
| .returning(); | ||
| return result[0]; | ||
| } |
No validation for endTime > startTime.
If endTime is before startTime, durationMinutes will be negative. Consider adding validation or using Math.max(0, ...) to prevent storing invalid data.
🛡️ Proposed defensive fix
const durationMinutes = Math.round((endTime.getTime() - meeting.startTime.getTime()) / 60000);
+ if (durationMinutes < 0) {
+ throw new Error('End time cannot be before start time');
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| /** | |
| * Complete a meeting (set end time and calculate duration) | |
| */ | |
| async complete(id: string, endTime = new Date()) { | |
| const meeting = await this.findById(id); | |
| if (!meeting || !meeting.startTime) return null; | |
| const durationMinutes = Math.round((endTime.getTime() - meeting.startTime.getTime()) / 60000); | |
| const result = await db | |
| .update(meetings) | |
| .set({ | |
| status: 'completed', | |
| endTime, | |
| durationMinutes, | |
| updatedAt: new Date(), | |
| }) | |
| .where(eq(meetings.id, id)) | |
| .returning(); | |
| return result[0]; | |
| } | |
| /** | |
| * Complete a meeting (set end time and calculate duration) | |
| */ | |
| async complete(id: string, endTime = new Date()) { | |
| const meeting = await this.findById(id); | |
| if (!meeting || !meeting.startTime) return null; | |
| const durationMinutes = Math.round((endTime.getTime() - meeting.startTime.getTime()) / 60000); | |
| if (durationMinutes < 0) { | |
| throw new Error('End time cannot be before start time'); | |
| } | |
| const result = await db | |
| .update(meetings) | |
| .set({ | |
| status: 'completed', | |
| endTime, | |
| durationMinutes, | |
| updatedAt: new Date(), | |
| }) | |
| .where(eq(meetings.id, id)) | |
| .returning(); | |
| return result[0]; | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/meeting.repository.ts` around lines
63 - 83, The complete method currently calculates durationMinutes from endTime -
meeting.startTime and can produce negative values if endTime is before
startTime; update the meeting.repository.ts complete(id: string, endTime = new
Date()) implementation to validate and clamp the duration: after fetching
meeting (in the complete function and before computing durationMinutes) compare
endTime and meeting.startTime, set durationMinutes = Math.max(0,
Math.round((endTime.getTime() - meeting.startTime.getTime()) / 60000)) or return
null/throw if you prefer rejecting invalid endTime, and ensure the update
payload uses the clamped durationMinutes and a normalized endTime; reference
symbols: complete, meeting.startTime, endTime, durationMinutes, and the update
on meetings.
| /** | ||
| * Increment transcript event count | ||
| */ | ||
| async incrementTranscriptCount(meetingId: string, count = 1) { | ||
| const meeting = await this.findById(meetingId); | ||
| if (!meeting) return null; | ||
| const result = await db | ||
| .update(meetings) | ||
| .set({ | ||
| totalTranscriptEvents: (meeting.totalTranscriptEvents || 0) + count, | ||
| }) | ||
| .where(eq(meetings.id, meetingId)) | ||
| .returning(); | ||
| return result[0]; | ||
| } |
Race condition in transcript count increment.
The read-modify-write pattern is susceptible to lost updates under concurrent requests. Consider using SQL arithmetic directly or a transaction with row-level locking.
⚡ Proposed fix using SQL increment
```diff
+import { eq, desc, and, gte, lte, sql } from 'drizzle-orm';

   async incrementTranscriptCount(meetingId: string, count = 1) {
-    const meeting = await this.findById(meetingId);
-    if (!meeting) return null;
-
     const result = await db
       .update(meetings)
       .set({
-        totalTranscriptEvents: (meeting.totalTranscriptEvents || 0) + count,
+        totalTranscriptEvents: sql`COALESCE(${meetings.totalTranscriptEvents}, 0) + ${count}`,
       })
       .where(eq(meetings.id, meetingId))
       .returning();
-    return result[0];
+    return result[0] ?? null;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/meeting.repository.ts` around lines
140 - 155, The current incrementTranscriptCount uses a read-modify-write
(findById then update) which causes race conditions; change it to perform the
increment atomically in the UPDATE statement itself (avoid the initial
this.findById). In the incrementTranscriptCount function use a single
db.update(meetings).set(...) with a SQL arithmetic expression that adds the
incoming count to the existing meetings.totalTranscriptEvents (e.g.,
totalTranscriptEvents = totalTranscriptEvents + count) and keep the same
where(eq(meetings.id, meetingId)) and returning() logic; if the update affects
no rows return null. This removes the race window and ensures concurrent
increments are applied correctly.
```ts
  /**
   * Delete a meeting
   */
  async delete(id: string) {
    return db.delete(meetings).where(eq(meetings.id, id)).returning();
  }
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if participants table has ON DELETE CASCADE or if manual cleanup is needed
ast-grep --pattern 'references(() => meetings.id)'
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 57

🏁 Script executed:

```bash
# First, let's find the schema definition files
fd -e ts -e js | grep -i schema | head -20
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 835

🏁 Script executed:

```bash
# Look for the database schema or models directory
find . -type d -name "*schema*" -o -type d -name "*models*" -o -type d -name "*db*" | grep -v node_modules | head -20
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 169

🏁 Script executed:

```bash
# Search for the participants table definition
rg "participants" -t ts --max-count 20
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 5624

🏁 Script executed:

```bash
# Read the meetings schema to see the participants table FK definition
cat -n packages/ai-backend/src/db/schema/meetings.ts
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 4043

🏁 Script executed:

```bash
# Read the meetingItems schema to see how it's structured
cat -n packages/ai-backend/src/db/schema/meetingItems.ts
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 5182

🏁 Script executed:

```bash
# Look at the meetingItems repository delete method
cat -n packages/ai-backend/src/db/repositories/meetingItems.repository.ts
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 6861

🏁 Script executed:

```bash
# Find all references to meetings.id in the schema
rg "references\(\(\) => meetings" packages/ai-backend/src/db/schema/ -A 2
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 1937

🏁 Script executed:

```bash
# Check the exact meeting.repository.ts delete method
sed -n '157,162p' packages/ai-backend/src/db/repositories/meeting.repository.ts
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 212

🏁 Script executed:

```bash
# Check all dependent tables more clearly - transcripts, moms, embeddings
rg "export const (transcripts|moms|meetingHighlights|embeddings)" packages/ai-backend/src/db/schema/ -A 15 | head -100
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 3861

🏁 Script executed:

```bash
# Check transcripts schema to see if it has cascade
cat -n packages/ai-backend/src/db/schema/transcripts.ts | head -30
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 1367

🏁 Script executed:

```bash
# Check embeddings to see the cascade example
cat -n packages/ai-backend/src/db/schema/embeddings.ts | grep -A 3 -B 3 "onDelete"
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 415
Delete will fail with FK constraint violations.
Deleting a meeting without first removing dependent records will fail on multiple foreign key constraints. At least 6 tables reference meetings.id without ON DELETE CASCADE:
- `participants`
- `transcriptEvents`
- `moms`
- `meetingHighlights`
- `meetingItems`
- `progressUpdates`
The meetingItems.delete() method already demonstrates the correct pattern by manually deleting dependent records (progressUpdates and meetingItemTags) before deleting the item itself. Follow the same approach: delete in reverse dependency order before deleting the meeting record.
Note: embeddings.ts shows that Drizzle ORM supports { onDelete: 'cascade' }, so alternatively you could configure cascade at the schema level for consistency.
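The reverse-dependency-order delete can be sketched with a toy in-memory model (table names mirror the schema above, but the data and the `deleteMeeting` helper are purely illustrative):

```typescript
// Illustrative in-memory tables keyed by id; rows reference meeting "m1".
type Row = { meetingId?: string };
const tables: Record<string, Map<string, Row>> = {
  progressUpdates: new Map([["p1", { meetingId: "m1" }]]),
  meetingItems: new Map([["i1", { meetingId: "m1" }]]),
  meetingHighlights: new Map([["h1", { meetingId: "m1" }]]),
  moms: new Map([["mo1", { meetingId: "m1" }]]),
  transcriptEvents: new Map([["t1", { meetingId: "m1" }]]),
  participants: new Map([["pa1", { meetingId: "m1" }]]),
  meetings: new Map([["m1", {}]]),
};

// Children first, parent last.
const deleteOrder = [
  "progressUpdates", "meetingItems", "meetingHighlights",
  "moms", "transcriptEvents", "participants", "meetings",
];

function deleteMeeting(id: string): void {
  for (const name of deleteOrder) {
    const table = tables[name];
    if (!table) continue;
    for (const [key, row] of table) {
      if (name === "meetings" ? key === id : row.meetingId === id) {
        table.delete(key);
      }
    }
  }
}

deleteMeeting("m1");
console.log(Object.values(tables).every((t) => t.size === 0)); // true
```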
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/meeting.repository.ts` around lines
157 - 162, The delete(id: string) method in meeting.repository.ts currently
attempts to delete a meeting directly and will fail due to FK constraints;
update MeetingRepository.delete to first remove dependent rows in reverse
dependency order (delete meetingItems and their dependent
progressUpdates/meetingItemTags using the existing meetingItems.delete pattern
or directly delete progressUpdates and meetingItemTags where meetingId = id,
then delete meetingHighlights, moms, transcriptEvents, participants, and any
other tables referencing meetings.id) using db.delete(...).where(eq(...)) for
each table, and only after those deletes remove the meetings row; alternatively
consider adding { onDelete: 'cascade' } in the schema if you prefer declarative
cascade behavior.
```ts
  async findByType(
    meetingId: string,
    type:
      | 'action_item'
      | 'decision'
      | 'blocker'
      | 'risk'
      | 'announcement'
      | 'project_update'
      | 'idea'
      | 'question'
      | 'commitment'
      | 'deadline'
      | 'dependency'
      | 'parking_lot'
      | 'key_takeaway'
      | 'reference'
  ) {
```
🧹 Nitpick | 🔵 Trivial
Consider extracting the item type union.
The inline union type for type parameter is lengthy. Consider defining a shared type (e.g., MeetingItemType) that can be reused across the codebase and kept in sync with the schema enum.
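One way to sketch the extraction, assuming a `MEETING_ITEM_TYPES` const tuple (a hypothetical name) that could also feed the schema enum:

```typescript
// Single source of truth for the item kinds; `MEETING_ITEM_TYPES` is a
// hypothetical name — align it with the schema enum in practice.
const MEETING_ITEM_TYPES = [
  "action_item", "decision", "blocker", "risk", "announcement",
  "project_update", "idea", "question", "commitment", "deadline",
  "dependency", "parking_lot", "key_takeaway", "reference",
] as const;

type MeetingItemType = (typeof MEETING_ITEM_TYPES)[number];

// The repository signature then shrinks to:
//   async findByType(meetingId: string, type: MeetingItemType) { ... }
const example: MeetingItemType = "decision";
console.log(MEETING_ITEM_TYPES.includes(example)); // true
```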
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/meetingItems.repository.ts` around
lines 61 - 78, Extract the long inline union used for the `type` parameter in
the `findByType` method into a named type (e.g., `MeetingItemType`) and replace
the union in `findByType(meetingId: string, type: ...)` with that new type;
create and export `MeetingItemType` in a shared types module (or the repository
file if local) and, if possible, align or import it from the schema/enum that
defines meeting item kinds so it stays in sync across the codebase.
```ts
  /**
   * Find pending action items for a person
   */
  async findPendingByAssignee(assigneeEmail: string) {
    return db
      .select()
      .from(meetingItems)
      .where(
        and(
          eq(meetingItems.assigneeEmail, assigneeEmail),
          eq(meetingItems.itemType, 'action_item'),
          not(eq(meetingItems.status, 'completed')),
          not(eq(meetingItems.status, 'cancelled'))
        )
      )
      .orderBy(desc(meetingItems.priority), desc(meetingItems.dueDate));
  }
```
Incorrect sort order for pending action items.
The dueDate is sorted in descending order, which means items due furthest in the future appear first. For pending action items, users typically need items due soonest to appear first so they don't miss deadlines.
🐛 Proposed fix
```diff
-    .orderBy(desc(meetingItems.priority), desc(meetingItems.dueDate));
+    .orderBy(desc(meetingItems.priority), meetingItems.dueDate);
```

Note: Drizzle defaults to ASC when no direction is specified, or you can use `asc(meetingItems.dueDate)` explicitly.
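The intended ordering can be sanity-checked with a plain in-memory comparator (the sample items are made up):

```typescript
// Made-up sample items; the comparator mirrors the suggested ordering.
interface Item { title: string; priority: number; dueDate: string }

const pending: Item[] = [
  { title: "far-future", priority: 1, dueDate: "2025-12-01" },
  { title: "due-soon", priority: 1, dueDate: "2025-01-15" },
  { title: "urgent", priority: 3, dueDate: "2025-06-01" },
];

pending.sort(
  (a, b) =>
    b.priority - a.priority || // priority DESC
    a.dueDate.localeCompare(b.dueDate) // dueDate ASC — soonest deadline first
);

console.log(pending.map((i) => i.title)); // [ 'urgent', 'due-soon', 'far-future' ]
```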
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  /**
   * Find pending action items for a person
   */
  async findPendingByAssignee(assigneeEmail: string) {
    return db
      .select()
      .from(meetingItems)
      .where(
        and(
          eq(meetingItems.assigneeEmail, assigneeEmail),
          eq(meetingItems.itemType, 'action_item'),
          not(eq(meetingItems.status, 'completed')),
          not(eq(meetingItems.status, 'cancelled'))
        )
      )
      .orderBy(desc(meetingItems.priority), meetingItems.dueDate);
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/db/repositories/meetingItems.repository.ts` around
lines 85 - 101, The query in findPendingByAssignee sorts dueDate descending so
far-future items show first; update the orderBy call on the db query that
references meetingItems (the orderBy currently using
desc(meetingItems.priority), desc(meetingItems.dueDate)) to sort priority
descending but dueDate ascending (e.g., desc(meetingItems.priority) and
asc(meetingItems.dueDate) or omit desc for dueDate) so items due soonest appear
first.
```ts
  async generateEmbedding(text: string): Promise<number[]> {
    const response = await openai.embeddings.create({
      model: this.embeddingModel,
      input: text,
    });

    return response.data[0]?.embedding ?? [];
  }
```
Silent failure on missing embedding.
Returning an empty array when embedding is missing (?? []) silently fails. This could cause downstream issues with vector storage/search. Consider throwing an error if the embedding is unexpectedly absent.
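A short demonstration of why an empty vector is worse than an exception: cosine similarity against it silently yields `NaN` (the `cosine` helper here is illustrative, not the service's code):

```typescript
// Illustrative cosine similarity; not the service's code.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0;
    const y = b[i] ?? 0;
    dot += x * y;
    normA += x * x;
    normB += y * y;
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosine([], [1, 0])); // NaN — 0/0, no exception raised
console.log(cosine([1, 0], [1, 0])); // 1
```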
🛡️ Proposed fix
```diff
-    return response.data[0]?.embedding ?? [];
+    const embedding = response.data[0]?.embedding;
+    if (!embedding) {
+      throw new Error('No embedding returned from OpenAI');
+    }
+    return embedding;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
  async generateEmbedding(text: string): Promise<number[]> {
    const response = await openai.embeddings.create({
      model: this.embeddingModel,
      input: text,
    });

    const embedding = response.data[0]?.embedding;
    if (!embedding) {
      throw new Error('No embedding returned from OpenAI');
    }
    return embedding;
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/services/openai.service.ts` around lines 266 - 273,
The generateEmbedding function currently returns an empty array when
response.data[0]?.embedding is missing, which can silently break downstream
vector logic; change generateEmbedding to validate the response and throw a
descriptive error (e.g., "Missing embedding in OpenAI response" including
response or request context) if embedding is undefined or null instead of
returning [], so callers receive a failure and can handle/retry; update any
callers or add JSDoc/comments to indicate this method throws on missing
embeddings.
```ts
  /**
   * Estimate token count (rough approximation)
   */
  estimateTokens(text: string): number {
    // Rough estimate: ~4 chars per token for English
    return Math.ceil(text.length / 4);
  }

  /**
   * Check if text fits within token limit
   */
  fitsInContext(text: string, maxTokens: number = 128000): boolean {
    return this.estimateTokens(text) < maxTokens;
  }
}
```
🧩 Analysis chain
🌐 Web query:
How to count tokens for OpenAI API in JavaScript?
💡 Result:
To count tokens for OpenAI API requests in JavaScript, use these two main approaches:

1. Local estimation with tiktoken (client-side, no API call needed, works for text):
   - Install: `npm install tiktoken` (WASM) or `js-tiktoken` (pure JS)
   - Example for simple text:

   ```js
   import { encoding_for_model } from 'tiktoken';
   const encoder = encoding_for_model('gpt-4o');
   const tokens = encoder.encode('Your text here');
   console.log(tokens.length); // token count
   encoder.free();
   ```

   - For chat messages, account for overhead (3-4 tokens per message depending on model):

   ```js
   function countTokens(messages, model) {
     const enc = encoding_for_model(model);
     let tokens = 0;
     // Add logic for system/user/assistant overhead per OpenAI cookbook patterns
     for (const msg of messages) {
       tokens += enc.encode(msg.content).length;
       // Add 3 tokens per message + role-specific overhead
     }
     // Model-specific adjustments (e.g., +3 for gpt-3.5-turbo messages)
     enc.free();
     return tokens;
   }
   ```

2. Official API method (server-side, handles images/files/tools exactly):
   - Use the OpenAI SDK: `npm install openai`
   - Call the input tokens endpoint:

   ```js
   import OpenAI from 'openai';
   const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
   const response = await openai.responses.inputTokens.count({
     model: 'gpt-4o',
     messages: [{ role: 'user', content: 'Your prompt' }],
     // Add tools, images, etc. as in actual request
   });
   console.log(response.input_tokens);
   ```

   This matches exactly what the model will consume, including complex inputs.

Prefer local tiktoken for speed/cost on plain text; use the API for accuracy with multimodal/complex inputs. tiktoken matches the API for text within ~1-3 tokens.
Citations:
- 1: https://developers.openai.com/api/docs/guides/token-counting/
- 2: https://stackoverflow.com/questions/76216113/how-can-i-count-tokens-before-making-api-call
- 3: https://developers.openai.com/api/reference/typescript/resources/responses/subresources/input_tokens/methods/count
- 4: https://www.npmjs.com/package/tiktoken
- 5: https://registry.npmjs.org/js-tiktoken
- 6: https://stephenwalther.com/fun-with-tiktoken-and-nodejs/
- 7: https://developers.openai.com/cookbook/examples/how_to_count_tokens_with_tiktoken
🏁 Script executed:

```bash
# Search for usages of fitsInContext and estimateTokens
rg "fitsInContext|estimateTokens" --type ts --type js -B 2 -A 2
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 5135

🏁 Script executed:

```bash
# Check the full openai.service.ts file to understand context better
wc -l packages/ai-backend/src/services/openai.service.ts
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 129

🏁 Script executed:

```bash
# Look at the broader context of how token limits are handled
rg "maxTokens|token.*limit|context.*window" packages/ai-backend/src/services/ --type ts -B 1 -A 1
```

Repository: KumarSashank/AI-Product-Manager
Length of output: 1654
Token estimation should use proper tokenizer for RAG context budgeting.
The ~4 chars per token heuristic is used in rag.service.ts for token budgeting when accumulating context. A 20-50% variance (common with code, numbers, or non-English text) can cause actual token counts to exceed limits, breaking context chunking. Replace with tiktoken (available via npm install tiktoken) which matches OpenAI API within 1-3 tokens and is the OpenAI-recommended approach.
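A hedged sketch of the dual-path estimator: `tokenizer` stands in for tiktoken's `encoding_for_model(model).encode` (wire in the real library where available), with the char heuristic retained as a fallback. The whitespace "tokenizer" below exists only for illustration and is not tiktoken.

```typescript
// `tokenizer` stands in for tiktoken's `encoding_for_model(model).encode`;
// wire the real library in where it is available.
type Tokenizer = (text: string) => number[];

function estimateTokens(text: string, tokenizer?: Tokenizer): number {
  if (tokenizer) {
    return tokenizer(text).length; // exact count from a real tokenizer
  }
  return Math.ceil(text.length / 4); // rough heuristic, can drift 20-50%
}

function fitsInContext(text: string, maxTokens = 128000, tokenizer?: Tokenizer): boolean {
  return estimateTokens(text, tokenizer) < maxTokens;
}

// Toy whitespace "tokenizer" for illustration only — NOT tiktoken.
const toyTokenizer: Tokenizer = (t) => t.split(/\s+/).filter(Boolean).map((_, i) => i);

console.log(estimateTokens("hello world")); // 3 (heuristic)
console.log(estimateTokens("hello world", toyTokenizer)); // 2 (tokenizer path)
console.log(fitsInContext("hello world", 10, toyTokenizer)); // true
```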
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/ai-backend/src/services/openai.service.ts` around lines 289 - 303,
Replace the simplistic char-based heuristic in estimateTokens and fitsInContext
with an actual tokenizer: import and initialize the tiktoken tokenizer that
matches the model you use (e.g., encoding for "gpt-4o" or appropriate model) and
use it in estimateTokens(text: string) to return the token count (e.g.,
encoding.encode(text).length) and in fitsInContext(text: string,
maxTokens:number) to compare that exact count against maxTokens; ensure you
close/free the encoding if required by tiktoken and add a fallback (still keep a
safe-char-heuristic) for environments where tiktoken fails to initialize.
```ts
    for (let i = 0; i < speakers.length; i++) {
      for (let j = i + 1; j < speakers.length; j++) {
        const s1 = speakers[i];
        const s2 = speakers[j];

        if (!s1 || !s2) continue;

        if (this.areSimilarNames(s1.displayName, s2.displayName, similarity)) {
          // Merge s2 into s1
          s1.nameVariations = new Set([...s1.nameVariations, ...s2.nameVariations]);
          s1.captionCount += s2.captionCount;
          if (s2.firstSeen < s1.firstSeen) {
            s1.firstSeen = s2.firstSeen;
          }
          if (s2.lastActive > s1.lastActive) {
            s1.lastActive = s2.lastActive;
          }

          // Update name mappings
          for (const name of s2.nameVariations) {
            const normalized = this.normalizeName(name);
            this.nameToSpeakerId.set(normalized, s1.id);
          }

          // Remove s2
          this.speakers.delete(s2.id);

          logger.info({ merged: s2.displayName, into: s1.displayName }, 'Merged similar speakers');
        }
```
Skip speakers that were already removed earlier in the merge pass.
mergeSimilarSpeakers() iterates over the snapshot from Line 136. After deleting s2, later iterations can still select that stale object as s1 and remap names to an ID that no longer exists when three similar names are present.
Proposed fix
```diff
     for (let j = i + 1; j < speakers.length; j++) {
       const s1 = speakers[i];
       const s2 = speakers[j];

       if (!s1 || !s2) continue;
+      if (!this.speakers.has(s1.id) || !this.speakers.has(s2.id)) continue;

       if (this.areSimilarNames(s1.displayName, s2.displayName, similarity)) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/bot-runner/src/captions/attribution.ts` around lines 138 - 166,
mergeSimilarSpeakers is merging s2 into s1 from a snapshot, but after deleting
s2 later iterations can still pick a stale s1 or s2; before operating on s1 or
s2 (inside the nested loops) check that both still exist in the live
this.speakers collection (e.g., if (!this.speakers.has(s1.id) ||
!this.speakers.has(s2.id)) continue;) and only perform the merge,
nameToSpeakerId remapping, and this.speakers.delete(s2.id) when both are present
so you never remap names to a removed ID; update the loop in
mergeSimilarSpeakers to skip removed speakers accordingly.
```ts
    const isVisible = await Promise.race([
      container.isVisible({ timeout: 2000 }).catch(() => false),
      containerAlt.isVisible({ timeout: 2000 }).catch(() => false),
      containerAlt2.isVisible({ timeout: 2000 }).catch(() => false),
      this.page
        .locator(SELECTORS.captionContainerAlt3)
        .isVisible({ timeout: 2000 })
        .catch(() => false),
    ]);
```
Promise.race() makes caption verification return false too early.
The first selector check to settle wins here, even if it resolves false while another selector is already visible. That creates false negatives and unnecessary retries.
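The difference is easy to show with two timed promises (a toy stand-in for the Playwright visibility checks):

```typescript
// Two timed checks: the fastest resolves false, a slower one resolves true.
const fastFalse = new Promise<boolean>((resolve) => setTimeout(() => resolve(false), 10));
const slowTrue = new Promise<boolean>((resolve) => setTimeout(() => resolve(true), 50));

async function compare(): Promise<[boolean, boolean]> {
  const raced = await Promise.race([fastFalse, slowTrue]); // fastest settle wins
  const anyVisible = (await Promise.all([fastFalse, slowTrue])).some(Boolean);
  return [raced, anyVisible];
}

const comparison = compare();
comparison.then(([raced, anyVisible]) => console.log(raced, anyVisible)); // false true
```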
Proposed fix
```diff
-    const isVisible = await Promise.race([
+    const checks = await Promise.all([
       container.isVisible({ timeout: 2000 }).catch(() => false),
       containerAlt.isVisible({ timeout: 2000 }).catch(() => false),
       containerAlt2.isVisible({ timeout: 2000 }).catch(() => false),
       this.page
         .locator(SELECTORS.captionContainerAlt3)
         .isVisible({ timeout: 2000 })
         .catch(() => false),
     ]);
-    return isVisible;
+    return checks.some(Boolean);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
    const checks = await Promise.all([
      container.isVisible({ timeout: 2000 }).catch(() => false),
      containerAlt.isVisible({ timeout: 2000 }).catch(() => false),
      containerAlt2.isVisible({ timeout: 2000 }).catch(() => false),
      this.page
        .locator(SELECTORS.captionContainerAlt3)
        .isVisible({ timeout: 2000 })
        .catch(() => false),
    ]);
    return checks.some(Boolean);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/bot-runner/src/meet/captions.ts` around lines 140 - 148,
Promise.race causes a false result to win if the fastest check returns false;
replace it with aggregating all visibility checks and returning true if any
succeed. Change the Promise.race([...]) around the calls to container.isVisible,
containerAlt.isVisible, containerAlt2.isVisible and
this.page.locator(SELECTORS.captionContainerAlt3).isVisible to
Promise.all([...]).then(results => results.some(Boolean)) (keep the existing
.catch(() => false) on each visibility promise) so the function only returns
false if all selectors are not visible.
```ts
async function apiFetch<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
  const url = `${API_BASE_URL}${endpoint}`;

  const response = await fetch(url, {
    ...options,
    credentials: 'include', // Include cookies for auth
    headers: {
      'Content-Type': 'application/json',
      ...options.headers,
    },
  });

  const data = await response.json();

  if (!response.ok) {
    throw new Error(data.error || 'Something went wrong');
  }

  return data;
}
```
Unconditional response.json() may throw on non-JSON responses.
If the server returns a non-JSON response (e.g., 204 No Content, 502 HTML error page, network timeout), response.json() will throw a parsing error, obscuring the actual issue.
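A sketch of tolerant body handling using Node 18+'s built-in `Response` (the `readBody` helper is illustrative, not the proposed patch verbatim):

```typescript
// Illustrative helper: guard on status and content-type before parsing JSON.
async function readBody(response: Response): Promise<unknown> {
  if (response.status === 204 || response.status === 205) return undefined;
  const isJson = response.headers.get("content-type")?.includes("application/json");
  if (!isJson) {
    const text = await response.text();
    throw new Error(`HTTP ${response.status}: ${text.slice(0, 80)}`);
  }
  return response.json();
}

async function demoBodies(): Promise<[unknown, unknown, boolean]> {
  const noContent = await readBody(new Response(null, { status: 204 }));
  const ok = await readBody(
    new Response(JSON.stringify({ id: 1 }), {
      status: 200,
      headers: { "content-type": "application/json" },
    })
  );
  const htmlRejected = await readBody(
    new Response("<html>Bad Gateway</html>", { status: 502 })
  ).then(() => false, () => true);
  return [noContent, ok, htmlRejected];
}

demoBodies().then(([noContent, ok, htmlRejected]) =>
  console.log(noContent, ok, htmlRejected) // undefined { id: 1 } true
);
```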
🛡️ Proposed fix with safer JSON parsing
```diff
   const response = await fetch(url, {
     ...options,
     credentials: 'include',
     headers: {
       'Content-Type': 'application/json',
       ...options.headers,
     },
   });

-  const data = await response.json();
+  let data: unknown;
+  try {
+    data = await response.json();
+  } catch {
+    if (!response.ok) {
+      throw new Error(`Request failed with status ${response.status}`);
+    }
+    return undefined as T; // For 204 No Content
+  }

   if (!response.ok) {
-    throw new Error(data.error || 'Something went wrong');
+    throw new Error((data as { error?: string })?.error || 'Something went wrong');
   }

-  return data;
+  return data as T;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
async function apiFetch<T>(endpoint: string, options: RequestInit = {}): Promise<T> {
  const url = `${API_BASE_URL}${endpoint}`;

  const response = await fetch(url, {
    ...options,
    credentials: 'include', // Include cookies for auth
    headers: {
      'Content-Type': 'application/json',
      ...options.headers,
    },
  });

  let data: unknown;
  try {
    data = await response.json();
  } catch {
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    return undefined as T; // For 204 No Content
  }

  if (!response.ok) {
    throw new Error((data as { error?: string })?.error || 'Something went wrong');
  }

  return data as T;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/web/lib/api.ts` around lines 14 - 33, The apiFetch function
unconditionally calls response.json(), which can throw for non-JSON responses
(e.g., 204, HTML error pages); update apiFetch to first check response.status
(handle 204/205 as empty) and response.headers.get('content-type') to see if it
includes 'application/json' before calling response.json(), otherwise read
response.text() (or set data = null) and wrap parsing in try/catch so parsing
errors are handled; when throwing on !response.ok include the HTTP status and
the parsed fallback body (text or null) in the Error to preserve useful debug
info while keeping API_BASE_URL and the apiFetch signature unchanged.
Description
Type of Change
Package(s) Affected
- `@meeting-ai/shared`
- `@meeting-ai/bot-runner`
- `@meeting-ai/ai-backend`

Checklist
For Shared Package Changes
`bot-runner` and `ai-backend` tests pass with these changes
Additional Notes
Summary by CodeRabbit
New Features
Documentation
Chores
Tests