Conversation
Commits:
- …age-lock.json for stackone-typescript-agent
- …management. Introduced a new formatError function for better error messages and updated conversationHistory type to enhance type safety.
Pull request overview
This pull request introduces a new TypeScript example application that demonstrates how to build an interactive AI agent using the Vercel AI SDK and StackOne Toolset SDK. The application provides a CLI-based conversational interface with support for tool loading, conversation history management, and both interactive and non-interactive modes.
Changes:
- Added complete example application structure with TypeScript configuration, dependencies, and environment setup
- Implemented an interactive CLI agent with conversation management, error handling, and user-friendly terminal interface
- Provided configuration examples for API keys, tool filtering, and model selection
Reviewed changes
Copilot reviewed 4 out of 6 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| apps/stackone-typescript-agent/tsconfig.json | TypeScript configuration with strict settings and ES module support |
| apps/stackone-typescript-agent/package.json | Package definition with dependencies for Vercel AI SDK, Anthropic, and StackOne |
| apps/stackone-typescript-agent/package-lock.json | Dependency lock file with version resolutions |
| apps/stackone-typescript-agent/.env.example | Environment variable template for required and optional configuration |
| apps/stackone-typescript-agent/src/agent.ts | Main agent implementation with interactive CLI, tool loading, and conversation management |
| .gitignore | Minor formatting fix (trailing newline) |
Files not reviewed (1)
- apps/stackone-typescript-agent/package-lock.json: Language not supported
| "node_modules/ai": { | ||
| "version": "6.0.69", | ||
| "resolved": "https://registry.npmjs.org/ai/-/ai-6.0.69.tgz", | ||
| "integrity": "sha512-zIURMSnNroaVvu47Bm3XhC2y3LRsm8jmkwBgupxF+N7q/s6MpIiv04w1ltlnWqC8+T2PT2rN+f0sUhF+vArkwg==", | ||
| "license": "Apache-2.0", | ||
| "peer": true, | ||
| "dependencies": { | ||
| "@ai-sdk/gateway": "3.0.32", | ||
| "@ai-sdk/provider": "3.0.7", | ||
| "@ai-sdk/provider-utils": "4.0.13", | ||
| "@opentelemetry/api": "1.9.0" | ||
| }, | ||
| "engines": { | ||
| "node": ">=18" | ||
| }, | ||
| "peerDependencies": { | ||
| "zod": "^3.25.76 || ^4.1.8" | ||
| } | ||
| }, |
Both "ai" (line 736) and "zod" (line 2247) are marked with "peer: true" in the package-lock.json, indicating they're only being installed to satisfy peer dependencies rather than as direct dependencies despite being declared in package.json (lines 20 and 22). This is a critical issue because it means these packages may not be installed at all if the peer dependency resolution changes, or there could be version conflicts. The package-lock.json needs to be regenerated with a fresh npm install to ensure these dependencies are properly installed as direct dependencies. This likely happened due to the specific order of dependency resolution during the initial install.
```diff
@@ -0,0 +1,29 @@
+{
+  "name": "stackone-typescript-agent",
```
Missing README.md file for this example application. The other example app (oauth-redirect-proxy) has comprehensive documentation including README.md, DEPLOYMENT.md, and STACKONE_INTEGRATION.md. This app should have at minimum a README.md that explains what it is, how to set it up, how to run it, and provides usage examples. This would help users understand and use this example application.
```typescript
const tools = await toolset.fetchTools({ actions });
// Type assertion needed due to SDK schema type variance
return tools.toAISDK() as unknown as ToolSet;
```
The loadTools function may fail if the StackOne API is unreachable or returns an error, but there's no error handling. If fetchTools throws an error, it will propagate to main() and be caught by the top-level catch, but it would be more user-friendly to catch and handle it here with a more specific error message about why tool loading failed. Consider wrapping the fetchTools call in a try-catch block to provide better error context.
Suggested change:
```diff
-const tools = await toolset.fetchTools({ actions });
-// Type assertion needed due to SDK schema type variance
-return tools.toAISDK() as unknown as ToolSet;
+try {
+  const tools = await toolset.fetchTools({ actions });
+  // Type assertion needed due to SDK schema type variance
+  return tools.toAISDK() as unknown as ToolSet;
+} catch (error) {
+  const message = formatError(error);
+  throw new Error(`Failed to load tools from StackOne API: ${message}`, { cause: error as Error });
+}
```
```typescript
for await (const line of rl) {
  const trimmed = line.trim();
  if (trimmed && !["quit", "exit", "q"].includes(trimmed.toLowerCase())) {
    try {
      const response = await runAgent(trimmed, tools);
      console.log(response);
    } catch (error) {
      console.error(`Error: ${formatError(error)}`);
    }
  }
}
rl.close();
```
In non-interactive mode (lines 136-148), rl.close() is only reached after the `for await` loop completes normally. If the loop exits early, for example because an error escapes the loop body or the input stream errors, the readline interface is never closed. Consider moving rl.close() into a finally block so it runs on every exit path.
Suggested change:
```diff
-for await (const line of rl) {
-  const trimmed = line.trim();
-  if (trimmed && !["quit", "exit", "q"].includes(trimmed.toLowerCase())) {
-    try {
-      const response = await runAgent(trimmed, tools);
-      console.log(response);
-    } catch (error) {
-      console.error(`Error: ${formatError(error)}`);
-    }
-  }
-}
-rl.close();
+try {
+  for await (const line of rl) {
+    const trimmed = line.trim();
+    if (trimmed && !["quit", "exit", "q"].includes(trimmed.toLowerCase())) {
+      try {
+        const response = await runAgent(trimmed, tools);
+        console.log(response);
+      } catch (error) {
+        console.error(`Error: ${formatError(error)}`);
+      }
+    }
+  }
+} finally {
+  rl.close();
+}
```
```typescript
  system: SYSTEM_PROMPT,
  messages: [...conversationHistory, { role: "user" as const, content: userMessage }],
  tools,
  stopWhen: stepCountIs(15),
});

const { text } = result;
```
The messages are being pushed to conversationHistory after successfully getting a response from generateText. However, the user message is included in the messages array passed to generateText (line 87), which means it's part of the request but not yet in the persistent history. If generateText succeeds, both messages are then added. This is correct but could be clearer. Consider adding a comment explaining why the message is included in the request but not added to history until after the successful response.
Suggested change:
```diff
   system: SYSTEM_PROMPT,
+  // Include the latest user message in the request, but only persist it to
+  // conversationHistory after a successful response to avoid storing failed turns.
   messages: [...conversationHistory, { role: "user" as const, content: userMessage }],
   tools,
   stopWhen: stepCountIs(15),
 });

 const { text } = result;
+// Persist both sides of the successful turn into the conversation history.
```
```typescript
const askQuestion = (): void => {
  rl.question(`${green}${bright} You ${reset}${dim}>${reset} `, async (input) => {
    const trimmed = input.trim();

    if (["quit", "exit", "q"].includes(trimmed.toLowerCase())) {
      console.log(`\n${dim} Goodbye!${reset}\n`);
      rl.close();
      return;
    }

    if (trimmed.toLowerCase() === "/clear") {
      conversationHistory.length = 0;
      console.log(`${dim} History cleared.${reset}\n`);
      askQuestion();
      return;
    }

    if (!trimmed) {
      askQuestion();
      return;
    }

    try {
      console.log();
      const response = await runAgent(trimmed, tools);
      console.log(`${magenta}${bright} Assistant ${reset}${dim}>${reset} ${response}\n`);
    } catch (error) {
      console.error(`${yellow} Error: ${formatError(error)}${reset}\n`);
    }

    askQuestion();
  });
};
```
The askQuestion function is defined with recursion (calling itself at lines 164, 169, 181) which could potentially lead to a stack overflow with very long interactive sessions. While Node.js has a large stack size, a better pattern would be to use a loop instead of recursion. Consider refactoring to use a while loop or the readline's 'line' event pattern instead of recursive calls.
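For illustration, a minimal sketch of the loop-based variant using `node:readline/promises`; `runAgent`, `tools`, `formatError`, `conversationHistory` and the colour constants are assumed to be the ones already defined in agent.ts and are not redefined here:

```typescript
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

// Sketch of a loop-based prompt replacing the recursive askQuestion().
// runAgent, tools, formatError, conversationHistory and the colour variables
// are assumed from agent.ts.
async function chatLoop(): Promise<void> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  try {
    while (true) {
      const answer = (await rl.question(`${green}${bright} You ${reset}${dim}>${reset} `)).trim();

      if (["quit", "exit", "q"].includes(answer.toLowerCase())) {
        console.log(`\n${dim} Goodbye!${reset}\n`);
        return;
      }
      if (answer.toLowerCase() === "/clear") {
        conversationHistory.length = 0;
        console.log(`${dim} History cleared.${reset}\n`);
        continue;
      }
      if (!answer) continue;

      try {
        console.log();
        const response = await runAgent(answer, tools);
        console.log(`${magenta}${bright} Assistant ${reset}${dim}>${reset} ${response}\n`);
      } catch (error) {
        console.error(`${yellow} Error: ${formatError(error)}${reset}\n`);
      }
    }
  } finally {
    // Runs on every exit path, including the quit command and thrown errors.
    rl.close();
  }
}
```

Because each turn is awaited inside the loop, the call stack stays flat regardless of session length, and rl.close() is guaranteed by the finally block.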
```typescript
}

main().catch((error) => {
  console.error(error);
```
The error handler in the main catch block (line 222-224) logs the full error object which may contain sensitive information like API keys from request headers or other internal details. Consider using formatError(error) here as well for consistency with the rest of the application, or at minimum ensure that only error.message is logged in production environments.
Suggested change:
```diff
-console.error(error);
+if (process.env.NODE_ENV === "production") {
+  const message = error instanceof Error ? error.message : String(error);
+  console.error(message);
+} else {
+  console.error(error);
+}
```
```typescript
import { StackOneToolSet } from "@stackone/ai";
import * as readline from "readline";

const STACKONE_ACCOUNT_ID = process.env.STACKONE_ACCOUNT_ID ?? "";
```
Using empty string as the default value for STACKONE_ACCOUNT_ID when the environment variable is not set will pass the validation check at line 201, but then fail later when attempting to use it with the StackOne API. It would be clearer and more maintainable to use undefined or check for empty string in the validation. Consider changing to const STACKONE_ACCOUNT_ID = process.env.STACKONE_ACCOUNT_ID; and checking for both undefined and empty string.
Suggested change:
```diff
-const STACKONE_ACCOUNT_ID = process.env.STACKONE_ACCOUNT_ID ?? "";
+const STACKONE_ACCOUNT_ID = process.env.STACKONE_ACCOUNT_ID;
```
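For context, a minimal sketch of the stricter startup check; the actual validation in agent.ts is not shown in this diff, so the message text and exit behaviour here are assumptions:

```typescript
// Hypothetical startup validation: treat both an unset and an empty
// STACKONE_ACCOUNT_ID as missing configuration.
const STACKONE_ACCOUNT_ID = process.env.STACKONE_ACCOUNT_ID;

if (!STACKONE_ACCOUNT_ID) {
  console.error("Missing STACKONE_ACCOUNT_ID. Copy .env.example and set the required variables.");
  process.exit(1);
}
```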
```typescript
}

function trimHistory(messages: Message[], maxTurns: number): void {
  if (messages.length <= 0 || maxTurns <= 0) return;
```
The function checks messages.length <= 0 which includes the case where messages.length is negative (impossible for arrays) and exactly 0. The check should be messages.length === 0 or simply !messages.length for clarity. Similarly, maxTurns <= 0 could be maxTurns < 1 for better semantic meaning since we're counting turns.
Suggested change:
```diff
-if (messages.length <= 0 || maxTurns <= 0) return;
+if (!messages.length || maxTurns < 1) return;
```
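For illustration, a sketch of how the revised guard might read in context; the body of trimHistory is not visible in this diff, so the Message shape and the two-messages-per-turn trimming below are assumptions:

```typescript
// Hypothetical message shape; the real type is defined in agent.ts.
type Message = { role: "user" | "assistant"; content: string };

function trimHistory(messages: Message[], maxTurns: number): void {
  if (!messages.length || maxTurns < 1) return;
  // Assumes each turn persists one user and one assistant message.
  const maxMessages = maxTurns * 2;
  if (messages.length > maxMessages) {
    messages.splice(0, messages.length - maxMessages);
  }
}
```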
This pull request introduces a new example TypeScript agent application for StackOne, demonstrating how to build an interactive AI assistant using the Vercel AI SDK and StackOne Toolset SDK. The changes include configuration files, dependencies, and a fully implemented `agent.ts` script that handles user interaction, tool loading, conversation management, and error handling.

Key additions and changes:

New agent application setup:
- `package.json` with all necessary dependencies, scripts, and engine requirements for the agent application.
- `tsconfig.json` with strict TypeScript settings and ES module support.

Configuration and environment:
- `.env.example` file with required and optional environment variables for API keys, account ID, and configuration overrides.

Agent implementation:
- `src/agent.ts`, a complete interactive CLI agent covering tool loading, conversation history management, error handling, and both interactive and non-interactive modes.

Summary by cubic
Added a new TypeScript example agent for StackOne: an interactive CLI assistant built with Vercel AI SDK and StackOne Toolset to quickly test tools and actions locally. Includes configuration, env setup, and scripts to run the agent.
New Features
Migration
Written for commit b525059. Summary will update on new commits.