Conversation
Add 10 SKILL.md files covering all major TanStack AI capabilities:

- chat-experience: server endpoint + client UI + streaming
- tool-calling: toolDefinition, server/client execution, approval
- media-generation: image, video, TTS, transcription
- code-mode: sandbox drivers, skills system, client integration
- structured-outputs: outputSchema with Zod/ArkType/Valibot
- adapter-configuration: 7 provider adapters with reference files
- ag-ui-protocol: server-side streaming protocol
- middleware: lifecycle hooks for analytics and caching
- custom-backend-integration: custom connection adapters

Skills guide AI agents to generate correct TanStack AI patterns and avoid common mistakes (Vercel AI SDK confusion, wrong imports, deprecated APIs, silent failures). Includes domain_map.yaml, skill_spec.md, and skill_tree.yaml artifacts from the domain discovery process.
📝 Walkthrough

This pull request introduces the "@tanstack/intent agent skills" taxonomy for TanStack AI. It adds changeset documentation, comprehensive skill specifications through artifact files, and extensive skill documentation covering core chat functionality, tool calling, media generation, adapter configuration, protocol streaming, middleware, custom backend integration, and code-mode execution.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
🚀 Changeset Version Preview: 2 package(s) bumped directly, 24 bumped as dependents. 🟩 Patch bumps:
View your CI Pipeline Execution ↗ for commit fbfc00e
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 18
🧹 Nitpick comments (2)
_artifacts/skill_spec.md (1)
172-175: Reduce repeated sentence openings for readability. The “Key Rules” list repeats “Always” in consecutive items; rewording one or two lines will satisfy the style warning and read cleaner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_artifacts/skill_spec.md` around lines 172 - 175, The four-item "Key Rules" list currently repeats "Always" at the start of consecutive entries; edit the items (e.g., the lines beginning "Always use outputSchema on chat()", "Always ask the user which adapter and model", and "Always prompt the user about Code Mode") to vary phrasing for readability—swap one or two "Always" to alternatives like "Use", "Ask", or "Prompt" (or merge implied verbs) while preserving the original meaning and examples (e.g., `@tanstack/ai-react`, `outputSchema`, Code Mode, adapter/model suggestion).

packages/typescript/ai/skills/ai-core/structured-outputs/SKILL.md (1)
27-47: Rename `stream` to reflect non-streaming structured output. In this example, `chat()` returns a typed `Promise` (with `outputSchema`), so `stream` can mislead readers. Rename to `result`/`structuredOutput` for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai/skills/ai-core/structured-outputs/SKILL.md` around lines 27 - 47, The example uses a misleading variable name `stream` even though `chat()` with `outputSchema` returns a typed Promise; rename the variable (e.g., to `result` or `structuredOutput`) and update any accompanying text to match—locate the declaration `const stream = chat({...})`, change the identifier, and adjust the explanatory sentence that mentions return type (currently referencing Promise<InferSchemaType<TSchema>> and AsyncIterable<StreamChunk>) so it correctly refers to the non-streaming structured output variable name.
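The rename can be illustrated with a self-contained sketch. The `chat` function below is a hypothetical stub standing in for `@tanstack/ai`'s real `chat()`, but it mirrors the documented behavior this comment describes: with `outputSchema`, the call resolves to a typed value rather than yielding a stream, so `structuredOutput` is a more honest name than `stream`.

```typescript
// Hypothetical stand-in for @tanstack/ai's chat(): with outputSchema the
// documented return type is a typed Promise, not an AsyncIterable of chunks.
type Recipe = { name: string; steps: Array<string> }

async function chat(_opts: { outputSchema: unknown }): Promise<Recipe> {
  // A real call would hit the provider; this stub returns a fixed value.
  return { name: 'Pancakes', steps: ['mix batter', 'fry'] }
}

async function main(): Promise<number> {
  // Good: the name says "typed result", not "stream of chunks".
  const structuredOutput = await chat({ outputSchema: {} })
  return structuredOutput.steps.length
}

void main()
```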
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@_artifacts/skill_tree.yaml`:
- Line 65: The generated Tool Calling skill text in _artifacts/skill_tree.yaml
still references "@standard-schema/spec"; remove that stale requirement string
from the Tool Calling skill description so no "@standard-schema/spec" tokens
remain in the artifact, update the Tool Calling skill's description field (the
skill text/description entry) to omit or replace that phrase, and run the
generation/test that produced this artifact to ensure the reference is not
reintroduced.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/anthropic-adapter.md`:
- Around line 5-7: The markdown file has fenced code blocks (for example the
block containing "@tanstack/ai-anthropic" and the env-var blocks referenced)
missing language identifiers; update each of those fenced blocks to specify the
text language (use ```text) so the package string and env-var blocks are
annotated correctly and satisfy MD040; search for the "@tanstack/ai-anthropic"
block and the env-var examples and add ```text at their opening fences.
- Around line 45-53: The modelOptions example defines the "thinking" key twice
so the second "thinking: { type: 'adaptive' }" overwrites the first; update the
anthropic-adapter example to show these as mutually exclusive by removing or
commenting out one of the "thinking" blocks (either the enabled block with
budget_tokens or the adaptive block) and leave the remaining "thinking" plus
"effort" and "budget_tokens" fields as the single, clear example; ensure you
keep the keys "thinking", "type", "budget_tokens" and "effort" intact so readers
see valid option names.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/gemini-adapter.md`:
- Around line 87-90: The fenced code block that lists the env vars
("GOOGLE_API_KEY (preferred)" and "GEMINI_API_KEY (also accepted)") should
include a language tag to satisfy markdownlint MD040; edit the block delimiter
from ``` to something like ```text or ```bash immediately after the opening
backticks in the gemini-adapter.md snippet so the env var lines remain unchanged
but the fence specifies the language.
- Around line 5-7: Add a language tag to the fenced code block containing the
package snippet "@tanstack/ai-gemini" in gemini-adapter.md so markdownlint MD040
is satisfied; update the triple-backtick fence to include a language like "text"
(e.g., ```text) for the existing fenced block so the snippet is explicitly
marked as plain text.
- Around line 46-53: The example defines thinkingConfig twice in the
modelOptions object causing one to overwrite the other; update the doc snippet
so only one thinkingConfig is active at a time by presenting the two variants as
alternatives (e.g., keep one and comment out the other or show them as separate
example blocks), and clearly label each variant (e.g., "level-based" vs
"budget-based") so readers won’t accidentally copy both; refer to the
modelOptions and thinkingConfig keys when making this change.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/grok-adapter.md`:
- Around line 5-7: Two fenced code blocks are missing language annotations;
update the code fences containing the literal strings "@tanstack/ai-grok" and
"XAI_API_KEY" to include languages (e.g., use ```text for the block with
"@tanstack/ai-grok" and ```bash for the block with "XAI_API_KEY") so
markdownlint MD040 is satisfied; apply the same change to the other occurrence
around the "XAI_API_KEY" block (the one noted at lines 61-63).
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/groq-adapter.md`:
- Around line 5-7: The unlabeled fenced code blocks in groq-adapter.md (e.g.,
the block containing "@tanstack/ai-groq" and the env-var blocks around lines
referenced) need language tags to satisfy MD040; update those triple-backtick
fences to use ```text so the package name and environment variable examples are
explicitly annotated, ensuring consistency for the blocks including the block
with "@tanstack/ai-groq" and the env-var examples at the other referenced
location.
- Around line 47-48: The example sets two mutually exclusive
fields—reasoning_format and include_reasoning—so remove one of them (either
delete reasoning_format: 'parsed' or delete include_reasoning: true) from the
GROQ adapter example to ensure only a single reasoning configuration is present;
update the example that contains the reasoning_format and include_reasoning
entries so callers won't produce invalid API requests.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/ollama-adapter.md`:
- Around line 39-43: The section title claims provider-specific options go under
modelOptions but the example places temperature at the top-level chat() config;
either update the example to nest provider options under modelOptions (move
properties like temperature into modelOptions in the chat() call) or change the
section title and text to state that provider options may be provided at the top
level; make edits referencing modelOptions, chat(), and temperature so the
wording and example consistently match.
- Around line 5-7: The Markdown fenced code blocks in ollama-adapter.md (the
package block containing "@tanstack/ai-ollama" and the environment-variable
block referenced at lines 67-69) are missing language identifiers and trigger
MD040; update those fenced code blocks to include the language tag `text` (e.g.,
```text) so both the package snippet and the env-var snippet are properly
marked.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/openai-adapter.md`:
- Around line 5-7: The fenced code blocks containing package names and
environment variable literals (for example the block with "@tanstack/ai-openai"
and the other package/env var fences around lines referenced) are missing a
language tag; update each triple-backtick fenced literal to include the language
label "text" (e.g., change ``` to ```text) so the package and environment
variable fences are consistently marked as text (apply the same fix to the other
occurrences noted, such as the blocks at the later occurrence containing
package/env var literals).
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/openrouter-adapter.md`:
- Around line 5-7: The two fenced code blocks that contain literal values (e.g.
the package string "@tanstack/ai-openrouter" and the env/package block later
around the other occurrence) are missing a language tag and trigger markdownlint
MD040; update both fenced blocks to include the language tag `text` (replace ```
with ```text) so the package/env literal blocks are correctly marked—make this
change for the initial block and the other occurrence referenced (around lines
84-86).
- Line 48: Clarify the naming convention guidance in the OpenRouter adapter
docs: update the guidance near the current camelCase statement to say that
top-level adapter options use camelCase (e.g., topP, frequencyPenalty) while
nested OpenRouter-specific option objects (e.g., reasoning.max_tokens,
webSearchOptions.search_context_size) follow OpenRouter's snake_case naming;
then ensure the examples reflect this convention (either convert nested keys to
snake_case like max_tokens/search_context_size or change the guidance to require
normalizing them to camelCase consistently).
In `@packages/typescript/ai/skills/ai-core/adapter-configuration/SKILL.md`:
- Around line 25-32: The blockquote in SKILL.md is broken by a blank line which
triggers markdownlint MD028; remove the blank line so all lines in the
blockquote (starting at the "**Before implementing:**" paragraph and the
subsequent guidance about fetching provider models and model-meta.ts) are
continuous and each line begins with '>' so the entire guidance remains a single
uninterrupted blockquote.
In `@packages/typescript/ai/skills/ai-core/ag-ui-protocol/SKILL.md`:
- Around line 164-166: Add the missing Markdown language tag "text" to the
fenced code blocks that show the event sequences so they comply with
markdownlint MD040; specifically update the two fenced blocks containing the
sequences "RUN_STARTED -> TEXT_MESSAGE_START -> TEXT_MESSAGE_CONTENT (repeated)
-> TEXT_MESSAGE_END -> RUN_FINISHED" (and the similar block at the later
occurrence) by changing the opening triple backticks to "```text" so both
snippets are fenced as text.
In `@packages/typescript/ai/skills/ai-core/chat-experience/SKILL.md`:
- Around line 166-168: Update the documentation to say that custom `body` fields
are merged at the top level of the POST payload (i.e., the server receives {
messages, data, provider, model, ... }) rather than nested under `data`; replace
the incorrect phrasing that reads `data.provider`/`data.model` with wording that
`provider` and `model` (and any custom fields from `body`) are top-level
alongside `messages` and `data`, matching the guidance in
custom-backend-integration/SKILL.md.
In `@packages/typescript/ai/skills/ai-core/middleware/SKILL.md`:
- Around line 34-39: The example setup uses hook args incorrectly: update the
onFinish and onError handlers to accept the documented second parameter
(FinishInfo/ErrorInfo) instead of reading properties directly from ctx; change
the signatures to onFinish: (ctx, info) => { trackAnalytics({ model: ctx.model,
tokens: info?.usage?.totalTokens }) } and onError: (ctx, info) => {
reportError(info?.error) } to match the documented API and the "Pattern 1" usage
of FinishInfo and ErrorInfo.
---
Nitpick comments:
In `@_artifacts/skill_spec.md`:
- Around line 172-175: The four-item "Key Rules" list currently repeats "Always"
at the start of consecutive entries; edit the items (e.g., the lines beginning
"Always use outputSchema on chat()", "Always ask the user which adapter and
model", and "Always prompt the user about Code Mode") to vary phrasing for
readability—swap one or two "Always" to alternatives like "Use", "Ask", or
"Prompt" (or merge implied verbs) while preserving the original meaning and
examples (e.g., `@tanstack/ai-react`, `outputSchema`, Code Mode, adapter/model
suggestion).
In `@packages/typescript/ai/skills/ai-core/structured-outputs/SKILL.md`:
- Around line 27-47: The example uses a misleading variable name `stream` even
though `chat()` with `outputSchema` returns a typed Promise; rename the variable
(e.g., to `result` or `structuredOutput`) and update any accompanying text to
match—locate the declaration `const stream = chat({...})`, change the
identifier, and adjust the explanatory sentence that mentions return type
(currently referencing Promise<InferSchemaType<TSchema>> and
AsyncIterable<StreamChunk>) so it correctly refers to the non-streaming
structured output variable name.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 07358c8a-0b28-49d0-ac3a-7a5315fbcfbc
📒 Files selected for processing (23)
- .changeset/intent-skills.md
- _artifacts/domain_map.yaml
- _artifacts/skill_spec.md
- _artifacts/skill_tree.yaml
- packages/typescript/ai-code-mode/package.json
- packages/typescript/ai-code-mode/skills/ai-code-mode/SKILL.md
- packages/typescript/ai/package.json
- packages/typescript/ai/skills/ai-core/SKILL.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/SKILL.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/anthropic-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/gemini-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/grok-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/groq-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/ollama-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/openai-adapter.md
- packages/typescript/ai/skills/ai-core/adapter-configuration/references/openrouter-adapter.md
- packages/typescript/ai/skills/ai-core/ag-ui-protocol/SKILL.md
- packages/typescript/ai/skills/ai-core/chat-experience/SKILL.md
- packages/typescript/ai/skills/ai-core/custom-backend-integration/SKILL.md
- packages/typescript/ai/skills/ai-core/media-generation/SKILL.md
- packages/typescript/ai/skills/ai-core/middleware/SKILL.md
- packages/typescript/ai/skills/ai-core/structured-outputs/SKILL.md
- packages/typescript/ai/skills/ai-core/tool-calling/SKILL.md
```text
chat() on server and useChat/clientTools on client, tool approval
flows with needsApproval and addToolApprovalResponse(), lazy tool
discovery with lazy:true, rendering ToolCallPart and ToolResultPart
in UI. Requires @standard-schema/spec for type inference.
```
Remove stale @standard-schema/spec requirement from Tool Calling skill text.
This line contradicts the PR objective that no @standard-schema/spec references remain. Keeping it in generated artifacts will reintroduce the exact confusion this PR is trying to prevent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_artifacts/skill_tree.yaml` at line 65, The generated Tool Calling skill text
in _artifacts/skill_tree.yaml still references "@standard-schema/spec"; remove
that stale requirement string from the Tool Calling skill description so no
"@standard-schema/spec" tokens remain in the artifact, update the Tool Calling
skill's description field (the skill text/description entry) to omit or replace
that phrase, and run the generation/test that produced this artifact to ensure
the reference is not reintroduced.
```text
@tanstack/ai-anthropic
```
Fence language is missing in two blocks.
Please add language identifiers (text) to the package and env-var fenced blocks (MD040).
Also applies to: 86-88
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 5-5: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/anthropic-adapter.md`
around lines 5 - 7, The markdown file has fenced code blocks (for example the
block containing "@tanstack/ai-anthropic" and the env-var blocks referenced)
missing language identifiers; update each of those fenced blocks to specify the
text language (use ```text) so the package string and env-var blocks are
annotated correctly and satisfy MD040; search for the "@tanstack/ai-anthropic"
block and the env-var examples and add ```text at their opening fences.
```text
thinking: {
  type: 'enabled',
  budget_tokens: 8000, // must be >= 1024 and < maxTokens
},
// Adaptive thinking (claude-sonnet-4-6, claude-opus-4-6+)
thinking: {
  type: 'adaptive',
},
effort: 'high', // 'max' | 'high' | 'medium' | 'low'
```
🧩 Analysis chain
🏁 Script executed:
# First, check if the file exists and view the area around lines 45-53
cat -n packages/typescript/ai/skills/ai-core/adapter-configuration/references/anthropic-adapter.md | sed -n '40,60p'

Repository: TanStack/ai
Length of output: 818
Remove duplicate thinking key in the modelOptions example.
The thinking property appears twice in the same object (lines 45–52), so the second definition overwrites the first. Readers would assume both configurations apply, but only the adaptive thinking would actually be used. Comment out one of the examples instead to show that these are mutually exclusive options.
Proposed doc fix
- // Extended thinking (budget-based)
- thinking: {
- type: 'enabled',
- budget_tokens: 8000, // must be >= 1024 and < maxTokens
- },
- // Adaptive thinking (claude-sonnet-4-6, claude-opus-4-6+)
- thinking: {
- type: 'adaptive',
- },
+ // Choose one thinking mode:
+ // 1) Extended thinking (budget-based)
+ // thinking: { type: 'enabled', budget_tokens: 8000 },
+ // 2) Adaptive thinking (model-dependent)
+ // thinking: { type: 'adaptive' },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/anthropic-adapter.md`
around lines 45 - 53, The modelOptions example defines the "thinking" key twice
so the second "thinking: { type: 'adaptive' }" overwrites the first; update the
anthropic-adapter example to show these as mutually exclusive by removing or
commenting out one of the "thinking" blocks (either the enabled block with
budget_tokens or the adaptive block) and leave the remaining "thinking" plus
"effort" and "budget_tokens" fields as the single, clear example; ensure you
keep the keys "thinking", "type", "budget_tokens" and "effort" intact so readers
see valid option names.
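The underlying hazard is plain JavaScript object semantics: when the same key appears twice, the later value silently wins. A minimal sketch (using spread, since TypeScript rejects a duplicate key in a single object literal):

```typescript
// Merging two objects with the same key is equivalent to writing the key
// twice in one literal: the later value overwrites the earlier one, which
// is why the doc example would silently drop the budget-based config.
const budgetBased = { thinking: { type: 'enabled', budget_tokens: 8000 } }
const adaptive = { thinking: { type: 'adaptive' } }

const modelOptions = { ...budgetBased, ...adaptive }
console.log(modelOptions.thinking) // only { type: 'adaptive' } survives
```

This is why the review asks for the two `thinking` variants to be shown as mutually exclusive alternatives rather than side by side.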
```text
@tanstack/ai-gemini
```
Add fenced code block language for the package snippet.
This triggers markdownlint MD040. Use a language tag (for example text).
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 5-5: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/gemini-adapter.md`
around lines 5 - 7, Add a language tag to the fenced code block containing the
package snippet "@tanstack/ai-gemini" in gemini-adapter.md so markdownlint MD040
is satisfied; update the triple-backtick fence to include a language like "text"
(e.g., ```text) for the existing fenced block so the snippet is explicitly
marked as plain text.
```text
thinkingConfig: {
  includeThoughts: true,
  thinkingBudget: 4096,
},
// Thinking (level-based, advanced models)
thinkingConfig: {
  thinkingLevel: 'THINKING_LEVEL_HIGH',
},
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify duplicate thinkingConfig keys in this markdown code sample
rg -n -C3 "thinkingConfig\\s*:" packages/typescript/ai/skills/ai-core/adapter-configuration/references/gemini-adapter.md
Fix duplicate thinkingConfig keys in the code example.
The modelOptions object at lines 46-53 defines thinkingConfig twice; the second definition overwrites the first. Show these as alternative configurations (one at a time or clearly commented out) to prevent incorrect copy-paste usage.
Suggested fix
modelOptions: {
- // Thinking (budget-based)
- thinkingConfig: {
- includeThoughts: true,
- thinkingBudget: 4096,
- },
- // Thinking (level-based, advanced models)
- thinkingConfig: {
- thinkingLevel: 'THINKING_LEVEL_HIGH',
- },
+ // Thinking (choose one variant based on model support):
+ // Variant A: budget-based
+ thinkingConfig: {
+ includeThoughts: true,
+ thinkingBudget: 4096,
+ },
+ // Variant B (alternative): level-based
+ // thinkingConfig: {
+ // thinkingLevel: 'THINKING_LEVEL_HIGH',
+ // },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/gemini-adapter.md`
around lines 46 - 53, The example defines thinkingConfig twice in the
modelOptions object causing one to overwrite the other; update the doc snippet
so only one thinkingConfig is active at a time by presenting the two variants as
alternatives (e.g., keep one and comment out the other or show them as separate
example blocks), and clearly label each variant (e.g., "level-based" vs
"budget-based") so readers won’t accidentally copy both; refer to the
modelOptions and thinkingConfig keys when making this change.
```text
// Reasoning
reasoning: {
  effort: 'high', // 'none' | 'minimal' | 'low' | 'medium' | 'high'
  max_tokens: 4096,
```
🧩 Analysis chain
🏁 Script executed:
cat -n "packages/typescript/ai/skills/ai-core/adapter-configuration/references/openrouter-adapter.md" | head -120

🏁 Script executed:
fd "openrouter" -type f -name "*.ts" -o -name "*.js" | head -20

🏁 Script executed:
fd "openrouter" --type f | head -20

🏁 Script executed:
find packages/typescript/ai-openrouter -type f -name "*.ts" | grep -v test | grep -v node_modules

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/text/text-provider-options.ts"

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/adapters/text.ts" | head -100

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/adapters/text.ts" | sed -n '100,250p'

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/adapters/text.ts" | grep -A 150 "mapTextOptionsToSDK"

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts" | head -150

🏁 Script executed:
rg "max_tokens|search_context_size" packages/typescript/ai-openrouter --type ts -B 2 -A 2

🏁 Script executed:
rg "reasoning|webSearchOptions" packages/typescript/ai-openrouter/tests --type ts -B 2 -A 5

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/adapters/text.ts" | sed -n '502,533p'

🏁 Script executed:
rg "camelCase|snake_case" packages/typescript/ai-openrouter/src --type ts -B 2 -A 2

🏁 Script executed:
cat -n "packages/typescript/ai-openrouter/src/utils/client.ts"

🏁 Script executed:
cat -n docs/adapters/openrouter.md | head -150

🏁 Script executed:
rg "max_tokens|search_context_size" packages/typescript/ai-openrouter --type ts -C 3
Clarify naming convention for nested OpenRouter options.
The documentation states that OpenRouter options use camelCase, but the examples show snake_case keys in nested objects (max_tokens in reasoning, search_context_size in webSearchOptions). While most top-level options are camelCase (topP, frequencyPenalty), these nested properties follow OpenRouter API's snake_case convention. Update the guidance on lines 97-98 to clarify that nested OpenRouter-specific options use snake_case, or normalize the example to match the stated convention.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/typescript/ai/skills/ai-core/adapter-configuration/references/openrouter-adapter.md`
at line 48, Clarify the naming convention guidance in the OpenRouter adapter
docs: update the guidance near the current camelCase statement to say that
top-level adapter options use camelCase (e.g., topP, frequencyPenalty) while
nested OpenRouter-specific option objects (e.g., reasoning.max_tokens,
webSearchOptions.search_context_size) follow OpenRouter's snake_case naming;
then ensure the examples reflect this convention (either convert nested keys to
snake_case like max_tokens/search_context_size or change the guidance to require
normalizing them to camelCase consistently).
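To make the mixed convention concrete, here is a hypothetical `modelOptions` object assembled from the snippets quoted in this review (not verified against the adapter's current type definitions): top-level adapter options in camelCase, nested OpenRouter-specific objects in the API's snake_case.

```typescript
// Hypothetical shape based on the review's snippets: camelCase at the
// adapter's top level, snake_case inside OpenRouter-specific objects.
const modelOptions = {
  topP: 0.9, // top-level: camelCase
  frequencyPenalty: 0.2, // top-level: camelCase
  reasoning: {
    effort: 'high',
    max_tokens: 4096, // nested: OpenRouter's snake_case
  },
  webSearchOptions: {
    search_context_size: 'medium', // nested: snake_case
  },
}

console.log(Object.keys(modelOptions.reasoning))
```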
> **Dependency:** This skill builds on ai-core. Read it first for critical rules.

> **Before implementing:** Ask the user which provider and model they want.
> Then fetch the latest available models from the provider's source code
> (check the adapter's model metadata file, e.g. `packages/typescript/ai-openai/src/model-meta.ts`)
> or from the provider's API/docs to recommend the most current model.
> The model lists in this skill and its reference files may be outdated.
> Always verify against the source before recommending a specific model.
Fix blockquote formatting to satisfy markdownlint MD028.
There is a blank line inside the blockquote section; keep the blockquote continuous.
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 26-26: Blank line inside blockquote
(MD028, no-blanks-blockquote)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/skills/ai-core/adapter-configuration/SKILL.md` around
lines 25 - 32, The blockquote in SKILL.md is broken by a blank line which
triggers markdownlint MD028; remove the blank line so all lines in the
blockquote (starting at the "**Before implementing:**" paragraph and the
subsequent guidance about fetching provider models and model-meta.ts) are
continuous and each line begins with '>' so the entire guidance remains a single
uninterrupted blockquote.
```text
RUN_STARTED -> TEXT_MESSAGE_START -> TEXT_MESSAGE_CONTENT (repeated) -> TEXT_MESSAGE_END -> RUN_FINISHED
```
Add language identifiers to the sequence code fences.
Both fenced blocks are missing a language tag (markdownlint MD040). Use text for these event-sequence snippets.
Also applies to: 170-174
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 164-164: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/skills/ai-core/ag-ui-protocol/SKILL.md` around lines
164 - 166, Add the missing Markdown language tag "text" to the fenced code
blocks that show the event sequences so they comply with markdownlint MD040;
specifically update the two fenced blocks containing the sequences "RUN_STARTED
-> TEXT_MESSAGE_START -> TEXT_MESSAGE_CONTENT (repeated) -> TEXT_MESSAGE_END ->
RUN_FINISHED" (and the similar block at the later occurrence) by changing the
opening triple backticks to "```text" so both snippets are fenced as text.
> The `body` field is merged into the POST request body alongside `messages`,
> letting the server read `data.provider`, `data.model`, etc.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Compare body-shape guidance across skills to confirm inconsistency
rg -n -C2 "body field is merged|data\\.provider|provider, model|\\{ messages, data" packages/typescript/ai/skills/ai-core/chat-experience/SKILL.md packages/typescript/ai/skills/ai-core/custom-backend-integration/SKILL.md

🏁 Script executed:
# Search for useChat implementation and how body field is merged
rg -n "body.*merge|messages.*data" packages/typescript --type=ts --type=js -A 3 -B 1 | head -100

🏁 Script executed:
# Look for actual payload structure in implementation files
fd -e ts -e js packages/typescript/ai | xargs grep -l "messages" | grep -E "(use|chat)" | head -10

🏁 Script executed:
# Check type definitions for the request shape
rg -n "body\s*:" packages/typescript --type=ts -A 2 | head -80

🏁 Script executed:
# Look at HTTP fetch adapter implementation
cat -n packages/typescript/ai-client/src/connection-adapters.ts | sed -n '260,320p'

🏁 Script executed:
# Search for how body is merged with messages/data in HTTP requests
rg -n "JSON.stringify|messages.*data|data.*messages" packages/typescript/ai-client/src/connection-adapters.ts -A 3 -B 1
Fix incorrect request body shape documentation — custom fields are top-level, not nested under `data`.

Lines 166–167 incorrectly state that custom fields can be read as `data.provider`, `data.model`, etc. However, the actual implementation spreads custom body fields at the top level alongside `messages` and `data`. Align this with the correct guidance in custom-backend-integration/SKILL.md (line 106), which states the server receives `{ messages, data, provider, model }`.

Suggested fix:

```diff
-The `body` field is merged into the POST request body alongside `messages`,
-letting the server read `data.provider`, `data.model`, etc.
+The `body` field is merged into the POST request body alongside `messages` and `data` as top-level fields.
+The server receives `{ messages, data, ...body }`, so read custom fields directly (e.g., `provider`, `model`).
```
+The server receives `{ messages, data, ...body }`, so read custom fields directly (e.g., `provider`, `model`).📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/skills/ai-core/chat-experience/SKILL.md` around lines
166 - 168, Update the documentation to say that custom `body` fields are merged
at the top level of the POST payload (i.e., the server receives { messages,
data, provider, model, ... }) rather than nested under `data`; replace the
incorrect phrasing that reads `data.provider`/`data.model` with wording that
`provider` and `model` (and any custom fields from `body`) are top-level
alongside `messages` and `data`, matching the guidance in
custom-backend-integration/SKILL.md.
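The top-level merge described in this finding can be sketched as a pure function. This is an illustrative model of the payload construction, not the actual `@tanstack/ai-client` adapter code; the function name `buildRequestBody` and the sample field values are assumptions.

```typescript
// Illustrative sketch: custom `body` fields are spread at the TOP level
// alongside `messages` and `data`, so the server reads `provider` and
// `model` directly rather than under `data`.
type Message = { role: 'user' | 'assistant'; content: string }

function buildRequestBody(
  messages: Array<Message>,
  data?: Record<string, unknown>,
  body?: Record<string, unknown>,
): Record<string, unknown> {
  // `...body` spreads last, landing its keys beside `messages` and `data`.
  return { messages, data, ...body }
}

const payload = buildRequestBody(
  [{ role: 'user', content: 'hi' }],
  { sessionId: 'abc' },
  { provider: 'openai', model: 'example-model' },
)

console.log(payload.provider) // 'openai' (top-level)
console.log((payload.data as Record<string, unknown>).provider) // undefined
```

A server handler under this shape would therefore destructure `const { messages, data, provider, model } = await request.json()` rather than reaching into `data`.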
The snippet under review:

```typescript
onFinish: (ctx) => {
  trackAnalytics({ model: ctx.model, tokens: ctx.usage })
},
onError: (ctx) => {
  reportError(ctx.error)
},
```
🧩 Analysis chain

🏁 Script executed:

```bash
cat -n packages/typescript/ai/skills/ai-core/middleware/SKILL.md | head -100
```

Length of output: 5026

🏁 Script executed:

```bash
cat -n packages/typescript/ai/skills/ai-core/middleware/SKILL.md | sed -n '100,150p'
```

Length of output: 1758
Setup example uses hook args inconsistently with the documented API.

In the first setup snippet (lines 34–39), `onFinish` and `onError` attempt to read `ctx.usage` and `ctx.error`, but the hook table (lines 53–65) documents that `onFinish` receives `FinishInfo` and `onError` receives `ErrorInfo` as a second parameter. The correct pattern is shown in "Pattern 1" (lines 93–118), where these hooks receive `(ctx, info)` and access `info.usage?.totalTokens` and `info.error`.

Proposed doc fix:

```diff
-  onFinish: (ctx) => {
-    trackAnalytics({ model: ctx.model, tokens: ctx.usage })
+  onFinish: (ctx, info) => {
+    trackAnalytics({ model: ctx.model, tokens: info.usage?.totalTokens })
   },
-  onError: (ctx) => {
-    reportError(ctx.error)
+  onError: (ctx, info) => {
+    reportError(info.error)
   },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai/skills/ai-core/middleware/SKILL.md` around lines 34 -
39, The example setup uses hook args incorrectly: update the onFinish and
onError handlers to accept the documented second parameter
(FinishInfo/ErrorInfo) instead of reading properties directly from ctx; change
the signatures to onFinish: (ctx, info) => { trackAnalytics({ model: ctx.model,
tokens: info?.usage?.totalTokens }) } and onError: (ctx, info) => {
reportError(info?.error) } to match the documented API and the "Pattern 1" usage
of FinishInfo and ErrorInfo.
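The corrected two-argument hook shape can be exercised in isolation. The type names (`FinishInfo`, `ErrorInfo`) and their fields below mirror the hook table cited in this finding and are assumptions for illustration, not the real package exports:

```typescript
// Hedged sketch of the documented (ctx, info) hook shape: lifecycle
// details arrive as a SECOND argument, not as properties on ctx.
type Ctx = { model: string }
type FinishInfo = { usage?: { totalTokens: number } }
type ErrorInfo = { error: Error }

const events: Array<string> = []
const trackAnalytics = (e: { model: string; tokens?: number }) =>
  events.push(`finish:${e.model}:${e.tokens ?? 'n/a'}`)
const reportError = (err: Error) => events.push(`error:${err.message}`)

const middleware = {
  onFinish: (ctx: Ctx, info: FinishInfo) =>
    trackAnalytics({ model: ctx.model, tokens: info.usage?.totalTokens }),
  onError: (ctx: Ctx, info: ErrorInfo) => reportError(info.error),
}

// Simulate the runtime invoking the hooks:
middleware.onFinish({ model: 'gpt-4o' }, { usage: { totalTokens: 42 } })
middleware.onError({ model: 'gpt-4o' }, { error: new Error('boom') })

console.log(events) // ['finish:gpt-4o:42', 'error:boom']
```

Reading `ctx.usage` in the original snippet would type-check against `any` in a loosely typed example yet always yield `undefined` at runtime, which is exactly the silent-failure class these skills aim to prevent.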
Summary

- Adds `SKILL.md` files generated with `@tanstack/intent`, covering all major TanStack AI capabilities (chat, tools, media generation, code mode, structured outputs, adapters, AG-UI protocol, middleware, custom backends)
- Includes the domain discovery artifacts (`domain_map.yaml`, `skill_spec.md`, `skill_tree.yaml`)
- Updates `@tanstack/ai` and `@tanstack/ai-code-mode` `package.json` to ship skills with the npm packages

Skills guide AI coding agents (Claude Code, Cursor, Copilot, Codex) to generate correct TanStack AI patterns and avoid common mistakes like Vercel AI SDK confusion, wrong imports, deprecated APIs, and silent failures.
Test plan

- `npx @tanstack/intent validate` passes for both packages (10/10 skills)
- No `as any` casts, no `@standard-schema/spec` references, no open issue references
- Client code imports from the framework package (`@tanstack/ai-react`), not `@tanstack/ai-client`
- Skills are included in the published tarball (verified with `pnpm pack` in each package)

Bugs discovered during domain discovery (separate PRs)

- `@standard-schema/spec` silent type degradation (tool definition does not infer types in `.server` and `.client` #235; when the `@standard-schema/spec` dependency is missing, the `chat` function returns `any` #191)
- `null` tool input when the model produces an empty `tool_use` block, causing the agent loop to stall (#265)
Documentation
Chores