Migrate from OpenResponses to Chat Completions API #2

Open

xamdel wants to merge 1 commit into client from chat-completions-migration

Conversation


xamdel (Collaborator) commented Feb 22, 2026

Summary

  • Replace the OpenResponses flat-item format with the standard Chat Completions message-based format (a request sketch follows this list)
  • Rewrite inference-handler.ts from 725 → 298 lines by eliminating the bidirectional format conversion (no longer needed since the plugin calls pi-ai directly)
  • Update the consumer client, proxy, protocol schemas, and tests to match
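For concreteness, here is a minimal sketch of the request shape before and after the migration. The new shape follows the standard Chat Completions wire format named in this PR; the old OpenResponses shape is an illustrative reconstruction (the deleted schema isn't reproduced in this description), so treat its exact field names as assumptions.

```ts
// Old (OpenResponses, flat-item style) -- illustrative reconstruction only:
const oldRequest = {
  model: "some-model", // placeholder
  input: [{ type: "message", role: "user", content: "Hello" }],
};

// New (Chat Completions, message-based), as described in this PR:
const newRequest = {
  model: "some-model", // placeholder
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello" },
  ],
  max_tokens: 256, // renamed field, see the sixerr-client.ts row below
};
```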

What changed

| File | Change |
| --- | --- |
| `schemas/chatcompletions.ts` | New schema: `MessageSchema`, `ChatCompletionRequestSchema`, `ChatCompletionSchema`, `ChatCompletionChunkSchema` |
| `schemas/openresponses.ts` | Deleted (343 lines) |
| `inference-handler.ts` | `buildContext` takes `messages[]` directly; streaming emits `chat.completion.chunk`; non-streaming returns `choices[].message` |
| `sixerr-client.ts` + `types.ts` | `messages` instead of `input`; `/v1/chat/completions` path; `max_tokens` field |
| `protocol.ts` | Usage fields: `prompt_tokens`/`completion_tokens` |
| `http-proxy.ts` | Route path → `/v1/chat/completions` |
| `schemas.test.ts` | 25 tests for the new schema (tools, tool results, marketplace extensions, old-format rejection) |
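A sketch of what the new `schemas/chatcompletions.ts` exports might look like, assuming the project validates with Zod (the schema library isn't shown in this description). The export names come from the table above; every field definition here is illustrative, not the actual schema.

```ts
import { z } from "zod";

// Tool-call shape carried by assistant messages (standard Chat Completions form).
const ToolCallSchema = z.object({
  id: z.string(),
  type: z.literal("function"),
  function: z.object({ name: z.string(), arguments: z.string() }),
});

export const MessageSchema = z.object({
  role: z.enum(["system", "user", "assistant", "tool"]),
  content: z.string().nullable(),
  tool_calls: z.array(ToolCallSchema).optional(), // assistant messages only
  tool_call_id: z.string().optional(),            // tool-result messages only
});

export const ChatCompletionRequestSchema = z.object({
  model: z.string(),
  messages: z.array(MessageSchema).min(1),
  max_tokens: z.number().int().positive().optional(),
  stream: z.boolean().optional(),
});

export const ChatCompletionSchema = z.object({
  id: z.string(),
  object: z.literal("chat.completion"),
  choices: z.array(
    z.object({
      index: z.number(),
      message: MessageSchema,
      finish_reason: z.string().nullable(),
    }),
  ),
  // Usage field names match the protocol.ts change above.
  usage: z
    .object({
      prompt_tokens: z.number(),
      completion_tokens: z.number(),
      total_tokens: z.number(),
    })
    .optional(),
});
```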

Server-side migration guide: sixerr-server/CHAT-COMPLETIONS-MIGRATION.md

Test plan

  • pnpm tsc --noEmit — clean
  • pnpm test — 41/41 pass
  • Review: non-streaming response has `choices[0].message` with optional `tool_calls`
  • Review: streaming emits `chat.completion.chunk` objects with a `delta` field
  • Review: the server-side scanner can still parse the response (both shapes are sketched after this list)
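The two response shapes those review items check for, as minimal literals. The field names follow the standard Chat Completions wire format this PR adopts; the ids and values are placeholders.

```ts
// Non-streaming: choices[0].message, with tool_calls present only when the
// model requested a tool.
const completion = {
  id: "chatcmpl-abc123", // placeholder
  object: "chat.completion",
  choices: [
    {
      index: 0,
      message: {
        role: "assistant",
        content: null,
        tool_calls: [
          {
            id: "call_1",
            type: "function",
            function: { name: "lookup", arguments: '{"q":"weather"}' },
          },
        ],
      },
      finish_reason: "tool_calls",
    },
  ],
  usage: { prompt_tokens: 12, completion_tokens: 7, total_tokens: 19 },
};

// Streaming: each event carries a chat.completion.chunk with a delta.
const chunk = {
  id: "chatcmpl-abc123",
  object: "chat.completion.chunk",
  choices: [{ index: 0, delta: { content: "Hel" }, finish_reason: null }],
};
```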

🤖 Generated with Claude Code

Commit message

Replace the OpenResponses flat-item format with the standard Chat Completions
message-based format. This eliminates the complex bidirectional conversion
between OpenResponses items and pi-ai messages now that the plugin calls
pi-ai directly.

- Replace openresponses.ts schema (343 lines) with chatcompletions.ts
- Rewrite inference-handler.ts (725 → 298 lines): buildContext takes
  messages[] directly, streaming emits chat.completion.chunk objects,
  non-streaming returns choices[].message format
- Update consumer client: messages instead of input, /v1/chat/completions
  (see the call sketch below)
- Update protocol usage fields: prompt_tokens/completion_tokens
- Update http-proxy route to /v1/chat/completions
- Expand test coverage for new schema (25 tests)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
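To illustrate the consumer-side change, a hypothetical call against the new route. The base URL, model name, and use of plain `fetch` are assumptions for the sketch, not the actual `sixerr-client.ts` implementation; only the path, the `messages` field, and the `max_tokens`/usage field names come from this PR.

```ts
// Hypothetical consumer request after the migration.
const res = await fetch("http://localhost:3000/v1/chat/completions", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "some-model",                            // placeholder
    messages: [{ role: "user", content: "Hello" }], // was `input` pre-migration
    max_tokens: 256,
  }),
});

const body = await res.json();
console.log(body.choices[0].message.content);
console.log(body.usage.prompt_tokens, body.usage.completion_tokens);
```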