14 changes: 14 additions & 0 deletions README.md
@@ -80,6 +80,20 @@ docker run --name sqlchat --platform linux/amd64 --env NEXTAUTH_SECRET="$(openss

- `NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY`: Set to `true` to allow users to bring their own OpenAI API key.

### Using MiniMax

SQL Chat supports [MiniMax](https://www.minimax.io/) models (M2.7 and M2.7 Highspeed) via their OpenAI-compatible API. To use MiniMax:

1. Get an API key from [MiniMax Platform](https://platform.minimax.io/).
2. Set the environment variables:

```bash
OPENAI_API_KEY=your-minimax-api-key
OPENAI_API_ENDPOINT=https://api.minimax.io
```

Or, if `NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY=true`, enter the API key and endpoint (`https://api.minimax.io`) in the Settings UI, then select a MiniMax model.
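For a containerized deployment, the same variables can be passed on the command line, mirroring the `docker run` invocation shown earlier in this README. This is a sketch: the image name `sqlchat/sqlchat` and the port mapping are assumptions, not taken from this PR.

```shell
# Hypothetical invocation; image name and port mapping are illustrative.
docker run --name sqlchat --platform linux/amd64 \
  --env OPENAI_API_KEY="your-minimax-api-key" \
  --env OPENAI_API_ENDPOINT="https://api.minimax.io" \
  -p 3000:3000 sqlchat/sqlchat
```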


### Database related

Expand Down
14 changes: 14 additions & 0 deletions README.zh-CN.md
@@ -75,6 +75,20 @@ docker run --name sqlchat --platform linux/amd64 --env NEXTAUTH_SECRET="$(openss

- `NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY`: 置为 `true` 以允许 SQL Chat 服务的用户使用自己的 key。

### 使用 MiniMax

SQL Chat 支持通过 OpenAI 兼容 API 使用 [MiniMax](https://www.minimax.io/) 模型(M2.7 和 M2.7 Highspeed)。使用方法:

1. 从 [MiniMax 开放平台](https://platform.minimax.io/) 获取 API Key。
2. 设置环境变量:

```bash
OPENAI_API_KEY=your-minimax-api-key
OPENAI_API_ENDPOINT=https://api.minimax.io
```

或者,如果设置了 `NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY=true`,可以在设置界面中输入 API Key 和 Endpoint(`https://api.minimax.io`),然后选择 MiniMax 模型。

### 数据库相关

- `NEXT_PUBLIC_USE_DATABASE`: 置为 `true` 使得 SQL Chat 启动时使用数据库,这会开启如下功能:
Expand Down
8 changes: 6 additions & 2 deletions package.json
@@ -10,7 +10,10 @@
"export": "next export",
"start": "next start",
"lint": "next lint",
"stripe": "stripe listen --forward-to localhost:3000/api/stripe/webhook"
"stripe": "stripe listen --forward-to localhost:3000/api/stripe/webhook",
"test": "vitest run",
"test:unit": "vitest run tests/unit",
"test:integration": "vitest run tests/integration"
},
"dependencies": {
"@emotion/react": "^11.10.6",
@@ -92,7 +95,8 @@
"react-syntax-highlighter": "^15.5.0",
"tailwindcss": "^3.2.4",
"ts-node": "^10.9.1",
"typescript": "^4.9.5"
"typescript": "^4.9.5",
"vitest": "^4.1.0"
},
Comment on lines 97 to 100
Copilot AI Mar 20, 2026

package.json adds a new dev dependency (vitest), but pnpm-lock.yaml is not updated. Since this repo uses pnpm and commits the lockfile, installs/CI won't pick up the new dependency deterministically until the lockfile is regenerated and committed.
"prisma": {
"seed": "ts-node --compiler-options {\"module\":\"CommonJS\"} prisma/seed.ts"
Expand Down
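The new scripts can be invoked as below (a sketch assuming pnpm, per the lockfile review comment; `your-key` is a placeholder, and `MINIMAX_API_KEY` gates the integration suite added in this PR).

```shell
pnpm install          # pick up vitest once pnpm-lock.yaml is regenerated
pnpm test             # run the full vitest suite
pnpm test:unit        # unit tests only (tests/unit)
MINIMAX_API_KEY=your-key pnpm test:integration   # live MiniMax tests
```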
14 changes: 14 additions & 0 deletions src/components/OpenAIApiConfigView.tsx
@@ -50,6 +50,20 @@ const OpenAIApiConfigView = () => {
disabled: false,
tooltip: "",
},
{
id: "MiniMax-M2.7",
title: `MiniMax M2.7`,
cost: 1,
disabled: false,
tooltip: "",
},
{
id: "MiniMax-M2.7-highspeed",
title: `MiniMax M2.7 Highspeed`,
cost: 1,
disabled: false,
tooltip: "",
},
];

const maskedKey = (str: string) => {
Expand Down
20 changes: 19 additions & 1 deletion src/utils/model.ts
@@ -43,7 +43,25 @@ const deepseekChat = {
cost_per_call: 1,
};

export const models = [gpt35turbo, gpt4, gpt4turbo, gpt4ho, deepseekChat];
const minimaxM27 = {
name: "MiniMax-M2.7",
temperature: 0.01,
frequency_penalty: 0.0,
presence_penalty: 0.0,
max_token: 204800,
cost_per_call: 1,
};

const minimaxM27Highspeed = {
name: "MiniMax-M2.7-highspeed",
temperature: 0.01,
frequency_penalty: 0.0,
presence_penalty: 0.0,
max_token: 204800,
cost_per_call: 1,
};

export const models = [gpt35turbo, gpt4, gpt4turbo, gpt4ho, deepseekChat, minimaxM27, minimaxM27Highspeed];

export const getModel = (name: string) => {
for (const model of models) {
Expand Down
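The body of `getModel` is collapsed in this diff, but the unit tests below pin its behavior: lookup by name, with unknown or empty names falling back to `gpt-3.5-turbo`. A minimal sketch consistent with that behavior follows; the `Model` shape and MiniMax values match this PR, while the `gpt-3.5-turbo` field values are placeholders.

```typescript
// Illustrative sketch of a name-based model lookup with a default
// fallback; the real getModel body is collapsed in this diff.
interface Model {
  name: string;
  temperature: number;
  max_token: number;
  cost_per_call: number;
}

// gpt-3.5-turbo field values here are placeholders; the MiniMax entry
// uses the values added in this PR.
const gpt35turbo: Model = { name: "gpt-3.5-turbo", temperature: 0, max_token: 4096, cost_per_call: 1 };
const minimaxM27: Model = { name: "MiniMax-M2.7", temperature: 0.01, max_token: 204800, cost_per_call: 1 };

const models: Model[] = [gpt35turbo, minimaxM27];

function getModel(name: string): Model {
  // Unknown (or empty) names fall back to the first entry, gpt-3.5-turbo,
  // matching the unit tests in tests/unit/model.test.ts.
  return models.find((m) => m.name === name) ?? models[0];
}
```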
113 changes: 113 additions & 0 deletions tests/integration/minimax.test.ts
@@ -0,0 +1,113 @@
import { describe, it, expect } from "vitest";

const API_KEY = process.env.MINIMAX_API_KEY;
const BASE_URL = "https://api.minimax.io/v1";

Comment on lines +3 to +5
Copilot AI Mar 20, 2026

This integration test uses process.env.MINIMAX_API_KEY and hard-codes BASE_URL to include /v1, while the app/docs use OPENAI_API_KEY + a base endpoint host (and then force the request path to /v1/chat/completions). For consistency and to avoid tests being unexpectedly skipped, consider accepting OPENAI_API_KEY (or both) and constructing the URL the same way as production (base host + /v1/...).
describe.skipIf(!API_KEY)("MiniMax Integration", () => {
it(
"completes a basic chat request with MiniMax-M2.7",
async () => {
const response = await fetch(`${BASE_URL}/chat/completions`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify({
model: "MiniMax-M2.7",
messages: [
{
role: "user",
content: 'Respond with exactly: "MiniMax works"',
},
],
temperature: 0.01,
max_tokens: 20,
}),
});

expect(response.ok).toBe(true);
const data = await response.json();
expect(data.choices).toBeDefined();
expect(data.choices.length).toBeGreaterThan(0);
expect(data.choices[0].message.content).toBeTruthy();
},
30000
);

it(
"handles streaming response",
async () => {
const response = await fetch(`${BASE_URL}/chat/completions`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify({
model: "MiniMax-M2.7",
messages: [{ role: "user", content: "Count 1 to 3" }],
temperature: 0.01,
max_tokens: 50,
stream: true,
}),
});

expect(response.ok).toBe(true);

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let chunks = 0;
let fullText = "";

while (true) {
const { done, value } = await reader.read();
if (done) break;
const text = decoder.decode(value, { stream: true });
const lines = text.split("\n").filter((l) => l.startsWith("data: "));
for (const line of lines) {
const data = line.slice(6);
if (data === "[DONE]") continue;
try {
const json = JSON.parse(data);
const content = json.choices?.[0]?.delta?.content;
if (content) {
fullText += content;
chunks++;
}
} catch {
// skip incomplete JSON
}
}
}
Comment on lines +63 to +82
Copilot AI Mar 20, 2026

The streaming test parses each reader.read() chunk by splitting on newlines and calling JSON.parse on each data: line, but SSE frames and JSON payloads can be split across chunk boundaries. Silently skipping JSON.parse errors can drop content and make this test flaky; buffer incomplete lines between reads (or reuse an SSE parser such as eventsource-parser) so that only complete events are parsed.
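One way to address this, as a sketch (the helper name is illustrative, not from the SQL Chat codebase): keep the trailing partial line in a buffer between reads, so JSON.parse only ever sees complete `data:` lines.

```typescript
// Line-buffered SSE accumulation: a JSON payload split across
// reader.read() chunks is completed on the next push instead of
// being dropped by a failed JSON.parse.
function createSSEAccumulator() {
  let buffer = "";
  const parts: string[] = [];
  return {
    push(chunkText: string) {
      buffer += chunkText;
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // retain the incomplete trailing line
      for (const line of lines) {
        if (!line.startsWith("data: ")) continue;
        const data = line.slice(6);
        if (data === "[DONE]") continue;
        const json = JSON.parse(data); // retained lines are complete
        const content = json.choices?.[0]?.delta?.content;
        if (content) parts.push(content);
      }
    },
    text: () => parts.join(""),
  };
}
```

In the test's read loop, each decoded chunk would be fed to `push`, with the final content checked via `text()` after the stream ends.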

expect(chunks).toBeGreaterThan(0);
expect(fullText.length).toBeGreaterThan(0);
},
30000
);

it(
"works with MiniMax-M2.7-highspeed model",
async () => {
const response = await fetch(`${BASE_URL}/chat/completions`, {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify({
model: "MiniMax-M2.7-highspeed",
messages: [{ role: "user", content: "Say hello" }],
temperature: 0.01,
max_tokens: 20,
}),
});

expect(response.ok).toBe(true);
const data = await response.json();
expect(data.choices[0].message.content).toBeTruthy();
},
30000
);
});
63 changes: 63 additions & 0 deletions tests/unit/model.test.ts
@@ -0,0 +1,63 @@
import { describe, it, expect } from "vitest";
import { models, getModel } from "@/utils/model";

describe("Model configuration", () => {
describe("models array", () => {
it("contains MiniMax-M2.7", () => {
const minimax = models.find((m) => m.name === "MiniMax-M2.7");
expect(minimax).toBeDefined();
});

it("contains MiniMax-M2.7-highspeed", () => {
const minimax = models.find((m) => m.name === "MiniMax-M2.7-highspeed");
expect(minimax).toBeDefined();
});

it("MiniMax-M2.7 has correct config", () => {
const minimax = models.find((m) => m.name === "MiniMax-M2.7")!;
expect(minimax.temperature).toBeGreaterThan(0);
expect(minimax.temperature).toBeLessThanOrEqual(1);
expect(minimax.max_token).toBe(204800);
expect(minimax.cost_per_call).toBe(1);
});

it("MiniMax-M2.7-highspeed has correct config", () => {
const minimax = models.find((m) => m.name === "MiniMax-M2.7-highspeed")!;
expect(minimax.temperature).toBeGreaterThan(0);
expect(minimax.temperature).toBeLessThanOrEqual(1);
expect(minimax.max_token).toBe(204800);
expect(minimax.cost_per_call).toBe(1);
});

it("MiniMax temperature is within valid range (0, 1]", () => {
const minimaxModels = models.filter((m) => m.name.startsWith("MiniMax"));
expect(minimaxModels.length).toBe(2);
for (const model of minimaxModels) {
expect(model.temperature).toBeGreaterThan(0);
expect(model.temperature).toBeLessThanOrEqual(1);
}
});
});

describe("getModel", () => {
it("returns MiniMax-M2.7 by name", () => {
const model = getModel("MiniMax-M2.7");
expect(model.name).toBe("MiniMax-M2.7");
});

it("returns MiniMax-M2.7-highspeed by name", () => {
const model = getModel("MiniMax-M2.7-highspeed");
expect(model.name).toBe("MiniMax-M2.7-highspeed");
});

it("returns default model for unknown name", () => {
const model = getModel("unknown-model");
expect(model.name).toBe("gpt-3.5-turbo");
});

it("returns correct model for empty string", () => {
const model = getModel("");
expect(model.name).toBe("gpt-3.5-turbo");
});
});
});
13 changes: 13 additions & 0 deletions vitest.config.ts
@@ -0,0 +1,13 @@
import { defineConfig } from "vitest/config";
import path from "path";

export default defineConfig({
test: {
include: ["tests/**/*.test.ts"],
},
resolve: {
alias: {
"@": path.resolve(__dirname, "src"),
},
},
});