Merged
improvement(models): derive provider colors/resellers from definitions, reorient FAQs to agent builder

Dynamic data:
- Add `color` and `isReseller` fields to ProviderDefinition interface
- Move brand colors for all 10 providers into their definitions
- Mark 6 reseller providers (Azure OpenAI, Azure Anthropic, Bedrock, Vertex, OpenRouter, Fireworks)
- consts.ts now derives color map from MODEL_CATALOG_PROVIDERS
- model-comparison-charts derives RESELLER_PROVIDERS from catalog
- Fix deepseek name: Deepseek → DeepSeek; remove now-redundant
  PROVIDER_NAME_OVERRIDES and getProviderDisplayName from utils
- Add color/isReseller fields to CatalogProvider; clean up duplicate
  providerDisplayName in searchText array

FAQs:
- Replace all 4 main-page FAQs with 5 agent-builder-oriented ones
  covering model selection, context windows, pricing, tool use, and
  how to use models in a Sim agent workflow
- buildProviderFaqs: add conditional tool use FAQ per provider
- buildModelFaqs: add bestFor FAQ (conditional on field presence);
  improve context window answer to explain agent implications;
  tighten capabilities answer wording
waleedlatif1 committed Apr 11, 2026
commit a2f3145b2a7d59f47c6e5354f2704897be0f2225
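The derive-from-definitions pattern this commit describes can be sketched in isolation. This is an illustrative reduction, not the real Sim catalog: the `ProviderDefinition` shape and the sample entries below are simplified stand-ins, but the derivation of the color map and reseller set mirrors what the diff introduces.

```typescript
// Minimal sketch of the pattern introduced here: brand colors and
// reseller flags live on each provider definition, and downstream
// modules derive their lookups instead of hard-coding parallel maps.
// The definitions below are illustrative, not the real catalog.
interface ProviderDefinition {
  id: string
  name: string
  color?: string
  isReseller?: boolean
}

const PROVIDER_DEFINITIONS: Record<string, ProviderDefinition> = {
  anthropic: { id: 'anthropic', name: 'Anthropic', color: '#D97757' },
  openrouter: { id: 'openrouter', name: 'OpenRouter', isReseller: true },
}

const providers = Object.values(PROVIDER_DEFINITIONS)

// Derived color map: only providers that declare a color are included.
const colorMap = new Map(
  providers
    .filter((p) => p.color)
    .map((p) => [p.id, p.color as string] as [string, string])
)

// Derived reseller set: replaces a hand-maintained Set literal, so new
// resellers only need the flag on their definition.
const RESELLER_PROVIDERS = new Set(
  providers.filter((p) => p.isReseller).map((p) => p.id)
)

function getProviderColor(providerId: string): string {
  return colorMap.get(providerId) ?? '#888888'
}

console.log(getProviderColor('anthropic')) // '#D97757'
console.log(getProviderColor('unknown')) // '#888888' fallback
console.log(RESELLER_PROVIDERS.has('openrouter')) // true
```

The payoff is the one shown in the diff below: adding a provider touches a single definition object instead of three files.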
19 changes: 5 additions & 14 deletions apps/sim/app/(landing)/models/components/consts.ts
@@ -1,18 +1,9 @@
-export const PROVIDER_COLORS: Record<string, string> = {
-  anthropic: '#D97757',
-  openai: '#E8E8E8',
-  google: '#4285F4',
-  xai: '#555555',
-  mistral: '#F7D046',
-  groq: '#F55036',
-  cerebras: '#6D5BF7',
-  deepseek: '#4D6BFE',
-  fireworks: '#FF6D3A',
-  bedrock: '#FF9900',
-}
+import { MODEL_CATALOG_PROVIDERS } from '@/app/(landing)/models/utils'
 
-const DEFAULT_COLOR = '#888888'
+const colorMap = new Map(
+  MODEL_CATALOG_PROVIDERS.filter((p) => p.color).map((p) => [p.id, p.color as string])
+)
 
 export function getProviderColor(providerId: string): string {
-  return PROVIDER_COLORS[providerId] ?? DEFAULT_COLOR
+  return colorMap.get(providerId) ?? '#888888'
 }
@@ -12,14 +12,9 @@ import {
} from '@/app/(landing)/models/utils'

/** Providers that host other providers' models — deprioritized to avoid duplicates. */
-const RESELLER_PROVIDERS = new Set([
-  'azure-openai',
-  'azure-anthropic',
-  'bedrock',
-  'vertex',
-  'openrouter',
-  'fireworks',
-])
+const RESELLER_PROVIDERS = new Set(
+  MODEL_CATALOG_PROVIDERS.filter((p) => p.isReseller).map((p) => p.id)
+)

const PROVIDER_ICON_MAP: Record<string, ComponentType<{ className?: string }>> = (() => {
const map: Record<string, ComponentType<{ className?: string }>> = {}
21 changes: 13 additions & 8 deletions apps/sim/app/(landing)/models/page.tsx
@@ -22,24 +22,29 @@ const baseUrl = getBaseUrl()

const faqItems = [
{
-    question: 'What is the Sim AI models directory?',
+    question: 'Which AI models are best for building agents and automated workflows?',
     answer:
-      'The Sim AI models directory is a public catalog of the language models and providers tracked inside Sim. It shows provider coverage, model IDs, pricing per one million tokens, context windows, and supported capabilities such as reasoning controls, structured outputs, and deep research.',
+      'The most important factors for agent tasks are reliable tool use (function calling), a large enough context window to track conversation history and tool outputs, and consistent instruction following. In Sim, OpenAI GPT-4.1, Anthropic Claude Sonnet, and Google Gemini 2.5 Pro are popular choices — each supports tool use, structured outputs, and context windows of 128K tokens or more. For cost-sensitive or high-throughput agents, Groq and Cerebras offer significantly faster inference at lower cost.',
   },
   {
-    question: 'Can I compare models from multiple providers in one place?',
+    question: 'What does context window size mean when running an AI agent?',
     answer:
-      'Yes. This page organizes every tracked model by provider and lets you search across providers, model names, and capabilities. You can quickly compare OpenAI, Anthropic, Google, xAI, Mistral, Groq, Cerebras, Fireworks, Bedrock, and more from a single directory.',
+      'The context window is the total number of tokens a model can process in a single call, including your system prompt, conversation history, tool call results, and any documents you pass in. For agents running multi-step tasks, context fills up quickly — each tool result and each retrieved document adds tokens. A 128K-token context window fits roughly 300 pages of text; models like Gemini 2.5 Pro support up to 1M tokens, enough to hold an entire codebase in a single pass.',
   },
   {
-    question: 'Are these model prices shown per million tokens?',
+    question: 'Are model prices shown per million tokens?',
     answer:
-      'Yes. Input, cached input, and output prices on this page are shown per one million tokens based on the provider metadata tracked in Sim.',
+      'Yes. Input, cached input, and output prices are all listed per one million tokens, matching how providers bill through their APIs. For agents that chain multiple calls, costs compound quickly — an agent completing 100 turns at 10K tokens each consumes roughly 1M tokens per session. Cached input pricing applies when a provider supports prompt caching, where a repeated prefix like a system prompt is billed at a reduced rate.',
   },
   {
-    question: 'Does Sim support providers with dynamic model catalogs too?',
+    question: 'Which AI models support tool use and function calling?',
     answer:
-      'Yes. Some providers such as OpenRouter, Fireworks, Ollama, and vLLM load their model lists dynamically at runtime. Those providers are still shown here even when their full public model list is not hard-coded into the catalog.',
+      'Tool use — also called function calling — lets an agent invoke external APIs, query databases, run code, or take any action you define. In Sim, all first-party models from OpenAI, Anthropic, Google, Mistral, Groq, Cerebras, and xAI support tool use. Look for the Tool Use capability tag on any model card in this directory to confirm support.',
   },
+  {
+    question: 'How do I add a model to a Sim agent workflow?',
+    answer:
+      'Open any workflow in Sim, add an Agent block, and select your provider and model from the model picker inside that block. Every model listed in this directory is available in the Agent block. Swapping models takes one click and does not affect the rest of your workflow, making it straightforward to test different models on the same task without rebuilding anything.',
+  },
]

59 changes: 42 additions & 17 deletions apps/sim/app/(landing)/models/utils.ts
@@ -13,12 +13,6 @@ const PROVIDER_PREFIXES: Record<string, string[]> = {
vllm: ['vllm/'],
}

-const PROVIDER_NAME_OVERRIDES: Record<string, string> = {
-  deepseek: 'DeepSeek',
-  vllm: 'vLLM',
-  xai: 'xAI',
-}

const TOKEN_REPLACEMENTS: Record<string, string> = {
ai: 'AI',
aws: 'AWS',
@@ -127,6 +121,8 @@ export interface CatalogProvider {
   defaultModel: string
   defaultModelDisplayName: string
   icon?: ComponentType<{ className?: string }>
+  color?: string
+  isReseller: boolean
   contextInformationAvailable: boolean
   providerCapabilityTags: string[]
   modelCount: number
@@ -419,10 +415,6 @@ function buildModelSummary(
return parts.filter(Boolean).join(' ')
}

-function getProviderDisplayName(providerId: string, providerName: string): string {
-  return PROVIDER_NAME_OVERRIDES[providerId] ?? providerName
-}

function computeModelRelevanceScore(model: CatalogModel): number {
return (
(model.capabilities.reasoningEffort ? 10 : 0) +
@@ -439,7 +431,7 @@ function compareModelsByRelevance(a: CatalogModel, b: CatalogModel): number {

const rawProviders = Object.values(PROVIDER_DEFINITIONS).map((provider) => {
const providerSlug = slugify(provider.id)
-  const providerDisplayName = getProviderDisplayName(provider.id, provider.name)
+  const providerDisplayName = provider.name
const providerCapabilityTags = buildCapabilityTags(provider.capabilities ?? {})

const models: CatalogModel[] = provider.models.map((model) => {
@@ -509,14 +501,15 @@ const rawProviders = Object.values(PROVIDER_DEFINITIONS).map((provider) => {
     defaultModel: provider.defaultModel,
     defaultModelDisplayName,
     icon: provider.icon,
+    color: provider.color,
+    isReseller: provider.isReseller ?? false,
     contextInformationAvailable: provider.contextInformationAvailable !== false,
     providerCapabilityTags,
     modelCount: models.length,
     models,
     featuredModels,
     searchText: [
       provider.name,
-      providerDisplayName,
       provider.id,
       provider.description,
       provider.defaultModel,
@@ -633,7 +626,13 @@ export function buildProviderFaqs(provider: CatalogProvider): CatalogFaq[] {
   const cheapestModel = getCheapestProviderModel(provider)
   const largestContextModel = getLargestContextProviderModel(provider)
 
-  return [
+  const toolUseModels = provider.models.filter(
+    (m) =>
+      m.capabilities.toolUsageControl !== undefined ||
+      provider.providerCapabilityTags.includes('Tool Use')
+  )
+
+  const faqs: CatalogFaq[] = [
{
question: `What ${provider.name} models are available in Sim?`,
answer: `Sim currently tracks ${provider.modelCount} ${provider.name} model${provider.modelCount === 1 ? '' : 's'} including ${provider.models
@@ -664,10 +663,27 @@
: `Context window details are not fully available for every ${provider.name} model in the public catalog.`,
},
]

+  if (toolUseModels.length > 0) {
+    faqs.push({
+      question: `Which ${provider.name} models support tool use and function calling in Sim?`,
+      answer:
+        toolUseModels.length === provider.modelCount
+          ? `All ${provider.name} models in Sim support tool use and function calling, allowing agents to invoke external APIs, query databases, and run custom actions.`
+          : `${toolUseModels
+              .slice(0, 5)
+              .map((m) => m.displayName)
+              .join(', ')}${toolUseModels.length > 5 ? ', and others' : ''} support tool use and function calling in Sim, enabling agents to invoke external APIs and run custom actions.`,
+    })
+  }
+
+  return faqs
}

export function buildModelFaqs(provider: CatalogProvider, model: CatalogModel): CatalogFaq[] {
-  return [
+  const faqs: CatalogFaq[] = [
{
question: `What is ${model.displayName}?`,
answer: `${model.displayName} is a ${provider.name} model available in Sim. ${model.summary}`,
@@ -679,17 +695,26 @@ export function buildModelFaqs(provider: CatalogProvider, model: CatalogModel):
{
question: `What is the context window for ${model.displayName}?`,
answer: model.contextWindow
-      ? `${model.displayName} supports a listed context window of ${formatTokenCount(model.contextWindow)} tokens in Sim.`
+      ? `${model.displayName} supports a context window of ${formatTokenCount(model.contextWindow)} tokens in Sim. In an agent workflow, this determines how much conversation history, tool outputs, and retrieved documents the model can hold in a single call.`
: `A public context window value is not currently tracked for ${model.displayName}.`,
},
{
question: `What capabilities does ${model.displayName} support?`,
answer:
model.capabilityTags.length > 0
-        ? `${model.displayName} supports ${model.capabilityTags.join(', ')}.`
-        : `${model.displayName} is available in Sim, but no extra public capability flags are currently tracked for this model.`,
+        ? `${model.displayName} supports the following capabilities in Sim: ${model.capabilityTags.join(', ')}.`
+        : `${model.displayName} supports standard text generation in Sim. No additional capability flags such as tool use or structured outputs are currently tracked for this model.`,
},
]

+  if (model.bestFor) {
+    faqs.push({
+      question: `What is ${model.displayName} best used for?`,
+      answer: `${model.bestFor} When used in a Sim workflow, it can be selected in any Agent block from the model picker.`,
+    })
+  }
+
+  return faqs
}

export function buildModelCapabilityFacts(model: CatalogModel): CapabilityFact[] {