Add Brainiall provider — OpenAI-compatible gateway to AWS Bedrock#1009

Closed
fasuizu-br wants to merge 3 commits into anomalyco:dev from fasuizu-br:add-brainiall-provider
Conversation

@fasuizu-br

Summary

Adds Brainiall as a new provider: an OpenAI-compatible API gateway to AWS Bedrock exposing 30 models from 12 upstream model providers.

Provider Details

  • API: https://apim-ai-apis.azure-api.net/v1 (OpenAI-compatible)
  • Auth: Bearer token (BRAINIALL_API_KEY)
  • SDK: @ai-sdk/openai-compatible
  • Streaming: Supported for all models
  • Tool calling: Supported for 27/30 models
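
Since the gateway is OpenAI-compatible, a request to it has the standard `/chat/completions` shape. The following is a minimal sketch (not code from this PR) of how such a request could be assembled; the model id used in the usage example is a hypothetical placeholder, and real integrations would go through `@ai-sdk/openai-compatible` instead of raw HTTP:

```typescript
// Sketch: shape of an OpenAI-compatible chat request against the
// Brainiall gateway. Only builds the request; sending it (via fetch,
// undici, etc.) is left to the caller.
const BRAINIALL_BASE_URL = "https://apim-ai-apis.azure-api.net/v1";

interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: {
    model: string;
    messages: { role: "system" | "user" | "assistant"; content: string }[];
    stream: boolean;
  };
}

function buildChatRequest(
  model: string,
  prompt: string,
  apiKey: string, // value of BRAINIALL_API_KEY
  stream = false,
): ChatRequest {
  return {
    url: `${BRAINIALL_BASE_URL}/chat/completions`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // Bearer-token auth per the PR
    },
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      stream,
    },
  };
}

// Hypothetical model id, for illustration only:
const req = buildChatRequest("claude-haiku-4-5", "Hello", "test-key", true);
```

With the Vercel AI SDK, the equivalent setup would presumably be `createOpenAICompatible({ name: "brainiall", baseURL: BRAINIALL_BASE_URL, apiKey })` from `@ai-sdk/openai-compatible`, which handles this request construction internally.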

Models (30)

| Family | Models | Price Range ($/MTok in/out) |
| --- | --- | --- |
| Claude (Anthropic) | Opus 4.6, Sonnet 4.6, Haiku 4.5, Opus 4.5, 3.5 Haiku, 3 Haiku | $0.25-$75.00 |
| DeepSeek | R1, V3.2 | $0.27-$5.40 |
| Llama (Meta) | 3.3 70B, 4 Scout, 3.1 8B, 3.2 3B, 3.2 1B | $0.10-$0.72 |
| Nova (Amazon) | Pro, Lite, Micro, Premier | $0.04-$12.50 |
| Mistral | Large 3 675B, Devstral 2, Small 24B | $0.10-$6.00 |
| Qwen | 3 80B MoE, 3 32B | $0.35-$0.50 |
| Other | MiniMax M2, Kimi K2.5, Nemotron 30B, GPT OSS 120B, Gemma 27B, Command R+, Palmyra X5, GLM 4.7 | $0.10-$15.00 |

Features

  • Prompt caching with cost breakdown for Claude models (cache_read / cache_write)
  • Context windows up to 1M tokens (MiniMax M2)
  • Reasoning support (Claude Opus 4.6/4.5, Sonnet 4.6, DeepSeek R1, Nova Premier)
  • Image input for Claude, Nova, Mistral Large 3
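
The cache_read / cache_write cost breakdown means spend is computed from four token buckets rather than two. A minimal sketch of that arithmetic, using hypothetical placeholder rates (the real per-model rates live in the TOML files, not here):

```typescript
// Sketch of cache-aware cost accounting for Claude models.
// All rates below are hypothetical, not Brainiall's actual pricing.
interface Usage {
  input: number;      // uncached input tokens
  output: number;     // output tokens
  cacheRead: number;  // input tokens served from the prompt cache
  cacheWrite: number; // input tokens written into the prompt cache
}

interface Rates {
  input: number;      // $ per million tokens
  output: number;
  cacheRead: number;  // typically a fraction of the input rate
  cacheWrite: number; // typically a premium over the input rate
}

function costUSD(u: Usage, r: Rates): number {
  const perMTok = (tokens: number, rate: number) =>
    (tokens / 1_000_000) * rate;
  return (
    perMTok(u.input, r.input) +
    perMTok(u.output, r.output) +
    perMTok(u.cacheRead, r.cacheRead) +
    perMTok(u.cacheWrite, r.cacheWrite)
  );
}
```

For example, with a hypothetical $3/MTok input rate, one million uncached input tokens and nothing else costs exactly $3.00; the same tokens served as cache reads at a discounted rate would cost a fraction of that.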

Validation

  • bun run validate passes with exit code 0
  • All TOML files conform to the Zod schema
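
For a rough sense of what schema conformance checks here, the sketch below hand-rolls the kind of invariants a model-registry schema typically enforces. The real repo uses a Zod schema; the field names used here (`id`, `cost`, `limit`, `tool_call`) are assumptions modeled on common registry layouts, not the repo's actual schema:

```typescript
// Approximation of the checks `bun run validate` might perform on one
// parsed TOML model entry. Field names are assumed, not taken from the repo.
interface ModelEntry {
  id: string;
  cost: { input: number; output: number }; // $/MTok
  limit: { context: number; output: number }; // token limits
  tool_call: boolean;
}

function validateModel(m: ModelEntry): string[] {
  const errors: string[] = [];
  if (!m.id) errors.push("id must be non-empty");
  if (m.cost.input < 0 || m.cost.output < 0)
    errors.push("costs must be >= 0");
  if (m.limit.output > m.limit.context)
    errors.push("output limit cannot exceed context window");
  return errors; // empty array means the entry passes
}
```

A validator structured this way returns all failures at once rather than stopping at the first, which is also how Zod's `safeParse` reports issues.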

@fasuizu-br
Author

Hi maintainers! Just checking in on this PR. We've added Brainiall as a new provider with 30 model definitions and an SVG logo. Everything follows the existing TOML format. Would love to get this reviewed when you have a chance. This will enable KiloCode and Vercel AI SDK users to access our 113+ models via Bedrock. Thank you!

@fasuizu-br
Author

Hi! Just checking in — is there anything I should update in the provider config or model definitions? Happy to adjust. Thanks!

@fasuizu-br force-pushed the add-brainiall-provider branch from 4a53ff9 to d22d6e3 on February 28, 2026 at 12:58
…_weights

- Claude Opus 4.6: output 32K → 128K (matches all other providers)
- Claude Haiku 4.5: output 8K → 64K (matches all other providers)
- Nova Lite/Pro/Micro: output 5120 → 8192 (matches Vercel/Bedrock)
- Qwen3-80B: open_weights false → true (model is open source)
Comment on lines +12 to +22
async function promptForApiKey(): Promise<string | null> {
  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  return new Promise((resolve) => {
    rl.question(
      "Enter Venice API key to include alpha models (or press Enter to skip): ",
      (answer) => {
        rl.close();
        // Resolve to null when the user skips (completion assumed;
        // the diff view truncates the function at line +22)
        resolve(answer.trim() || null);
      },
    );
  });
}
Contributor

Why are you updating other, unrelated provider code?

@rekram1-node
Contributor

This PR changes far too much... it deletes a bunch of stuff seemingly at random.

@fasuizu-br
Author

fasuizu-br commented Mar 10, 2026

Hi @rekram1-node — you're absolutely right. Our AI coding agent ran a git add -A that captured unrelated changes from the working tree, resulting in deletions and modifications to other providers' files. That was entirely our fault for not reviewing the diff before submitting.

I've closed this PR and opened a clean one (#1125) that only adds files under providers/brainiall/ — no modifications to any existing providers. Sorry for the noise.
