Merged
10 changes: 5 additions & 5 deletions .claude/commands/generate-guide.md
@@ -49,16 +49,16 @@ Identify agent vs client roles from the discovered source files:
## 3. Read reference files

- Find and read an existing guide to use as a structural template. Try in order:
-1. Same pattern, different provider: glob `src/pages/docs/guides/ai-transport/*-{pattern}.mdx`
-2. Same provider, any pattern: glob `src/pages/docs/guides/ai-transport/{provider-slug}-*.mdx`
-3. Any existing guide: glob `src/pages/docs/guides/ai-transport/*.mdx` and pick one
+1. Same pattern, different provider: glob `src/pages/docs/ai-transport/guides/*-{pattern}.mdx`
+2. Same provider, any pattern: glob `src/pages/docs/ai-transport/guides/{provider-slug}-*.mdx`
+3. Any existing guide: glob `src/pages/docs/ai-transport/guides/*.mdx` and pick one
- Read `writing-style-guide.md` for tone rules.
- Read `src/data/nav/aitransport.ts` for the current nav structure.
- Read `src/data/languages/languageData.ts` to confirm supported languages for aiTransport.

## 4. Generate the guide

-Write to `src/pages/docs/guides/ai-transport/{provider-slug}-{pattern}.mdx`.
+Write to `src/pages/docs/ai-transport/guides/{provider-slug}-{pattern}.mdx`.

If the file already exists, warn and ask before overwriting.

@@ -155,7 +155,7 @@ Edit `src/data/nav/aitransport.ts`. The Guides section uses nested provider groups:
{
name: '{Provider display name}',
pages: [
-{ name: '{Pattern display name}', link: '/docs/guides/ai-transport/{provider-slug}-{pattern}' },
+{ name: '{Pattern display name}', link: '/docs/ai-transport/guides/{provider-slug}-{pattern}' },
],
},
```
1 change: 1 addition & 0 deletions .gitignore
@@ -11,6 +11,7 @@ node_modules/
.cache/
public
!examples/*/*/public
+.env
.env.*
!.env.example
graphql-types.ts
24 changes: 12 additions & 12 deletions src/data/nav/aitransport.ts
@@ -98,19 +98,19 @@ export default {
pages: [
{
name: 'Message per response',
-link: '/docs/guides/ai-transport/anthropic/anthropic-message-per-response',
+link: '/docs/ai-transport/guides/anthropic/anthropic-message-per-response',
},
{
name: 'Message per token',
-link: '/docs/guides/ai-transport/anthropic/anthropic-message-per-token',
+link: '/docs/ai-transport/guides/anthropic/anthropic-message-per-token',
},
{
name: 'Human-in-the-loop',
-link: '/docs/guides/ai-transport/anthropic/anthropic-human-in-the-loop',
+link: '/docs/ai-transport/guides/anthropic/anthropic-human-in-the-loop',
},
{
name: 'Citations',
-link: '/docs/guides/ai-transport/anthropic/anthropic-citations',
+link: '/docs/ai-transport/guides/anthropic/anthropic-citations',
},
],
},
@@ -119,19 +119,19 @@ export default {
pages: [
{
name: 'Message per response',
-link: '/docs/guides/ai-transport/openai/openai-message-per-response',
+link: '/docs/ai-transport/guides/openai/openai-message-per-response',
},
{
name: 'Message per token',
-link: '/docs/guides/ai-transport/openai/openai-message-per-token',
+link: '/docs/ai-transport/guides/openai/openai-message-per-token',
},
{
name: 'Human-in-the-loop',
-link: '/docs/guides/ai-transport/openai/openai-human-in-the-loop',
+link: '/docs/ai-transport/guides/openai/openai-human-in-the-loop',
},
{
name: 'Citations',
-link: '/docs/guides/ai-transport/openai/openai-citations',
+link: '/docs/ai-transport/guides/openai/openai-citations',
},
],
},
@@ -140,11 +140,11 @@ export default {
pages: [
{
name: 'Message per response',
-link: '/docs/guides/ai-transport/langgraph/lang-graph-message-per-response',
+link: '/docs/ai-transport/guides/langgraph/lang-graph-message-per-response',
},
{
name: 'Message per token',
-link: '/docs/guides/ai-transport/langgraph/lang-graph-message-per-token',
+link: '/docs/ai-transport/guides/langgraph/lang-graph-message-per-token',
},
],
},
@@ -153,11 +153,11 @@ export default {
pages: [
{
name: 'Message per response',
-link: '/docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-response',
+link: '/docs/ai-transport/guides/vercel-ai-sdk/vercel-message-per-response',
},
{
name: 'Message per token',
-link: '/docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-token',
+link: '/docs/ai-transport/guides/vercel-ai-sdk/vercel-message-per-token',
},
],
},
4 changes: 2 additions & 2 deletions src/data/nav/chat.ts
@@ -228,11 +228,11 @@ export default {
pages: [
{
name: 'Livestream chat',
-link: '/docs/guides/chat/build-livestream',
+link: '/docs/chat/guides/build-livestream',
},
{
name: 'Handling discontinuity',
-link: '/docs/guides/chat/handling-discontinuity',
+link: '/docs/chat/guides/handling-discontinuity',
},
],
},
6 changes: 3 additions & 3 deletions src/data/nav/pubsub.ts
@@ -361,15 +361,15 @@ export default {
pages: [
{
name: 'Data streaming',
-link: '/docs/guides/pub-sub/data-streaming',
+link: '/docs/pub-sub/guides/data-streaming',
},
{
name: 'Dashboards and visualizations',
-link: '/docs/guides/pub-sub/dashboards-and-visualizations',
+link: '/docs/pub-sub/guides/dashboards-and-visualizations',
},
{
name: 'Handling discontinuity',
-link: '/docs/guides/pub-sub/handling-discontinuity',
+link: '/docs/pub-sub/guides/handling-discontinuity',
},
],
},
@@ -2,6 +2,8 @@
title: "Guide: Attach citations to Anthropic responses using message annotations"
meta_description: "Attach source citations to AI responses from the Anthropic Messages API using Ably message annotations."
meta_keywords: "AI, citations, Anthropic, Claude, Messages API, AI transport, Ably, realtime, message annotations, source attribution"
+redirect_from:
+- /docs/guides/ai-transport/anthropic/anthropic-citations
---

This guide shows you how to attach source citations to AI responses from Anthropic's [Messages API](https://docs.anthropic.com/en/api/messages) using Ably [message annotations](/docs/messages/annotations). When Anthropic provides citations from documents or search results, you can publish them as annotations on Ably messages, enabling clients to display source references alongside AI responses in realtime.
@@ -2,6 +2,8 @@
title: "Guide: Human-in-the-loop approval with Anthropic"
meta_description: "Implement human approval workflows for AI agent tool calls using Anthropic and Ably with role-based access control."
meta_keywords: "AI, human in the loop, HITL, Anthropic, Claude, tool use, approval workflow, AI transport, Ably, realtime, RBAC"
+redirect_from:
+- /docs/guides/ai-transport/anthropic/anthropic-human-in-the-loop
---

This guide shows you how to implement a human-in-the-loop (HITL) approval workflow for AI agent tool calls using Anthropic and Ably. The agent requests human approval before executing sensitive operations, with role-based access control to verify approvers have sufficient permissions.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the Anthropic Messages API over Ably in re
meta_keywords: "AI, token streaming, Anthropic, Claude, Messages API, AI transport, Ably, realtime, message appends"
redirect_from:
- /docs/guides/ai-transport/anthropic-message-per-response
+- /docs/guides/ai-transport/anthropic/anthropic-message-per-response
---

This guide shows you how to stream AI responses from Anthropic's [Messages API](https://docs.anthropic.com/en/api/messages) over Ably using the [message-per-response pattern](/docs/ai-transport/token-streaming/message-per-response). Specifically, it appends each response token to a single Ably message, creating a complete AI response that grows incrementally while delivering tokens in realtime.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the Anthropic Messages API over Ably in re
meta_keywords: "AI, token streaming, Anthropic, Claude, Messages API, AI transport, Ably, realtime"
redirect_from:
- /docs/guides/ai-transport/anthropic-message-per-token
+- /docs/guides/ai-transport/anthropic/anthropic-message-per-token
---

This guide shows you how to stream AI responses from Anthropic's [Messages API](https://docs.anthropic.com/en/api/messages) over Ably using the [message-per-token pattern](/docs/ai-transport/token-streaming/message-per-token). Specifically, it implements the [explicit start/stop events approach](/docs/ai-transport/token-streaming/message-per-token#explicit-events), which publishes each response token as an individual message, along with explicit lifecycle events to signal when responses begin and end.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from LangGraph over Ably in realtime using mess
meta_keywords: "AI, token streaming, LangGraph, LangChain, Anthropic, AI transport, Ably, realtime, message appends"
redirect_from:
- /docs/guides/ai-transport/lang-graph-message-per-response
+- /docs/guides/ai-transport/langgraph/lang-graph-message-per-response
---

This guide shows you how to stream AI responses from [LangGraph](https://docs.langchain.com/oss/javascript/langgraph/overview) over Ably using the [message-per-response pattern](/docs/ai-transport/token-streaming/message-per-response). Specifically, it appends each response token to a single Ably message, creating a complete AI response that grows incrementally while delivering tokens in realtime.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from LangGraph over Ably in realtime."
meta_keywords: "AI, token streaming, LangGraph, LangChain, Anthropic, AI transport, Ably, realtime"
redirect_from:
- /docs/guides/ai-transport/lang-graph-message-per-token
+- /docs/guides/ai-transport/langgraph/lang-graph-message-per-token
---

This guide shows you how to stream AI responses from [LangGraph](https://docs.langchain.com/oss/javascript/langgraph/overview) over Ably using the [message-per-token pattern](/docs/ai-transport/token-streaming/message-per-token). Specifically, it implements the [explicit start/stop events approach](/docs/ai-transport/token-streaming/message-per-token#explicit-events), which publishes each response token as an individual message, along with explicit lifecycle events to signal when responses begin and end.
@@ -2,6 +2,8 @@
title: "Guide: Attach citations to OpenAI responses using message annotations"
meta_description: "Attach source citations to AI responses from the OpenAI Responses API using Ably message annotations."
meta_keywords: "AI, citations, OpenAI, Responses API, AI transport, Ably, realtime, message annotations, source attribution, web search"
+redirect_from:
+- /docs/guides/ai-transport/openai/openai-citations
---

This guide shows you how to attach source citations to AI responses from OpenAI's [Responses API](https://platform.openai.com/docs/api-reference/responses) using Ably [message annotations](/docs/messages/annotations). When OpenAI provides citations from web search results, you can publish them as annotations on Ably messages, enabling clients to display source references alongside AI responses in realtime.
@@ -2,6 +2,8 @@
title: "Guide: Human-in-the-loop approval with OpenAI"
meta_description: "Implement human approval workflows for AI agent tool calls using OpenAI and Ably with role-based access control."
meta_keywords: "AI, human in the loop, HITL, OpenAI, tool calls, approval workflow, AI transport, Ably, realtime, RBAC"
+redirect_from:
+- /docs/guides/ai-transport/openai/openai-human-in-the-loop
---

This guide shows you how to implement a human-in-the-loop (HITL) approval workflow for AI agent tool calls using OpenAI and Ably. The agent requests human approval before executing sensitive operations, with role-based access control to verify approvers have sufficient permissions.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the OpenAI Responses API over Ably in real
meta_keywords: "AI, token streaming, OpenAI, Responses API, AI transport, Ably, realtime, message appends"
redirect_from:
- /docs/guides/ai-transport/openai-message-per-response
+- /docs/guides/ai-transport/openai/openai-message-per-response
---

This guide shows you how to stream AI responses from OpenAI's [Responses API](https://platform.openai.com/docs/api-reference/responses) over Ably using the [message-per-response pattern](/docs/ai-transport/token-streaming/message-per-response). Specifically, it appends each response token to a single Ably message, creating a complete AI response that grows incrementally while delivering tokens in realtime.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the OpenAI Responses API over Ably in real
meta_keywords: "AI, token streaming, OpenAI, Responses API, AI transport, Ably, realtime"
redirect_from:
- /docs/guides/ai-transport/openai-message-per-token
+- /docs/guides/ai-transport/openai/openai-message-per-token
---

This guide shows you how to stream AI responses from OpenAI's [Responses API](https://platform.openai.com/docs/api-reference/responses) over Ably using the [message-per-token pattern](/docs/ai-transport/token-streaming/message-per-token). Specifically, it implements the [explicit start/stop events approach](/docs/ai-transport/token-streaming/message-per-token#explicit-events), which publishes each response token as an individual message, along with explicit lifecycle events to signal when responses begin and end.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the Vercel AI SDK over Ably in realtime us
meta_keywords: "AI, token streaming, Vercel, AI SDK, AI transport, Ably, realtime, message appends"
redirect_from:
- /docs/guides/ai-transport/vercel-message-per-response
+- /docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-response
---

This guide shows you how to stream AI responses from the [Vercel AI SDK](https://ai-sdk.dev/docs/ai-sdk-core/generating-text) over Ably using the [message-per-response pattern](/docs/ai-transport/token-streaming/message-per-response). Specifically, it appends each response token to a single Ably message, creating a complete AI response that grows incrementally while delivering tokens in realtime.
@@ -4,6 +4,7 @@ meta_description: "Stream tokens from the Vercel AI SDK over Ably in realtime."
meta_keywords: "AI, token streaming, Vercel, AI SDK, AI transport, Ably, realtime"
redirect_from:
- /docs/guides/ai-transport/vercel-message-per-token
+- /docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-token
---

This guide shows you how to stream AI responses from the [Vercel AI SDK](https://ai-sdk.dev/docs/ai-sdk-core/generating-text) over Ably using the [message-per-token pattern](/docs/ai-transport/token-streaming/message-per-token). Specifically, it implements the [explicit start/stop events approach](/docs/ai-transport/token-streaming/message-per-token#explicit-events), which publishes each response token as an individual message, along with explicit lifecycle events to signal when responses begin and end.
20 changes: 10 additions & 10 deletions src/pages/docs/ai-transport/index.mdx
@@ -26,19 +26,19 @@ Use the following guides to get started with OpenAI:
title: 'Message-per-response',
description: 'Stream OpenAI responses using message appends',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/openai/openai-message-per-response',
+link: '/docs/ai-transport/guides/openai/openai-message-per-response',
},
{
title: 'Message-per-token',
description: 'Stream OpenAI responses using individual token messages',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/openai/openai-message-per-token',
+link: '/docs/ai-transport/guides/openai/openai-message-per-token',
},
{
title: 'Human-in-the-loop',
description: 'Implement human-in-the-loop approval workflows with OpenAI',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/openai/openai-human-in-the-loop',
+link: '/docs/ai-transport/guides/openai/openai-human-in-the-loop',
},
]}
</Tiles>
@@ -53,19 +53,19 @@ Use the following guides to get started with Anthropic:
title: 'Message-per-response',
description: 'Stream Anthropic responses using message appends',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/anthropic/anthropic-message-per-response',
+link: '/docs/ai-transport/guides/anthropic/anthropic-message-per-response',
},
{
title: 'Message-per-token',
description: 'Stream Anthropic responses using individual token messages',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/anthropic/anthropic-message-per-token',
+link: '/docs/ai-transport/guides/anthropic/anthropic-message-per-token',
},
{
title: 'Human-in-the-loop',
description: 'Implement human-in-the-loop approval workflows with Anthropic',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/anthropic/anthropic-human-in-the-loop',
+link: '/docs/ai-transport/guides/anthropic/anthropic-human-in-the-loop',
},
]}
</Tiles>
@@ -80,13 +80,13 @@ Use the following guides to get started with the Vercel AI SDK:
title: 'Message-per-response',
description: 'Stream Vercel AI SDK responses using message appends',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-response',
+link: '/docs/ai-transport/guides/vercel-ai-sdk/vercel-message-per-response',
},
{
title: 'Message-per-token',
description: 'Stream Vercel AI SDK responses using individual token messages',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/vercel-ai-sdk/vercel-message-per-token',
+link: '/docs/ai-transport/guides/vercel-ai-sdk/vercel-message-per-token',
},
]}
</Tiles>
@@ -101,13 +101,13 @@ Use the following guides to get started with LangGraph:
title: 'Message-per-response',
description: 'Stream LangGraph responses using message appends',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/langgraph/lang-graph-message-per-response',
+link: '/docs/ai-transport/guides/langgraph/lang-graph-message-per-response',
},
{
title: 'Message-per-token',
description: 'Stream LangGraph responses using individual token messages',
image: 'icon-tech-javascript',
-link: '/docs/guides/ai-transport/langgraph/lang-graph-message-per-token',
+link: '/docs/ai-transport/guides/langgraph/lang-graph-message-per-token',
},
]}
</Tiles>
2 changes: 1 addition & 1 deletion src/pages/docs/ai-transport/token-streaming/index.mdx
@@ -89,5 +89,5 @@ Different models and frameworks use different events to signal streaming state,

- Implement token streaming with [message-per-response](/docs/ai-transport/token-streaming/message-per-response) (recommended for most applications)
- Implement token streaming with [message-per-token](/docs/ai-transport/token-streaming/message-per-token) for sliding-window use cases
-- Explore the [guides](/docs/guides/ai-transport/openai/openai-message-per-response) for integration with specific models and frameworks
+- Explore the [guides](/docs/ai-transport/guides/openai/openai-message-per-response) for integration with specific models and frameworks
- Learn about [sessions and identity](/docs/ai-transport/sessions-identity) in AI Transport applications