Now in public beta — Canada region live

One gateway to route your AI

The managed agentgateway platform. Route to 100+ LLM providers, federate MCP tools, and orchestrate A2A agents — with guardrails, observability, and per-tenant isolation built in.

No credit card required · Free tier forever

terminal
$ curl -X POST \
  https://your-app.ca.agw.maniak.io/v1/chat/completions \
  -H "Authorization: Bearer agw_sk_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

// Response
{
  "id": "chatcmpl-abc123",
  "model": "gpt-4o-2024-08-06",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you today?"
    }
  }],
  "usage": { "total_tokens": 28 }
}

Built on agentgateway — the Linux Foundation's open-source agentic proxy

7,000+ GitHub Stars · Rust Performance · Apache 2.0 · Linux Foundation

Drop in. Zero code changes.

Maverick is OpenAI-compatible. Swap your base URL, keep your existing SDK code. Works with Python, Node.js, Go, or any HTTP client.

  • OpenAI SDK compatible — just change the base URL
  • Automatic retries, fallback, and load balancing
  • Every request is screened by guardrails before it reaches the provider
  • Full trace visibility in your dashboard
from openai import OpenAI

client = OpenAI(
    api_key="agw_sk_live_xxxxx",
    base_url="https://your-app.ca.agw.maniak.io/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
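Retries and fallback happen on the gateway side, so your client stays a plain SDK call. As a rough local illustration of the fallback ordering the gateway applies (a sketch only — the target names and error handling here are hypothetical, and the real gateway adds retries and load balancing per route):

```python
# Local sketch of gateway-side fallback: try each target in order and
# fall through to the next on failure. Target names are illustrative.
def route_with_fallback(targets, send):
    last_error = None
    for target in targets:
        try:
            return send(target)
        except RuntimeError as err:
            last_error = err
    raise last_error

def flaky_send(target):
    # Simulate the primary provider timing out.
    if target == "openai/gpt-4o":
        raise RuntimeError("provider timeout")
    return f"ok from {target}"

result = route_with_fallback(["openai/gpt-4o", "anthropic/claude-sonnet"], flaky_send)
print(result)  # ok from anthropic/claude-sonnet
```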

Everything you need to ship AI, faster

Stop wiring up provider SDKs, handling retries, and building guardrails from scratch. Maverick handles the infrastructure so you can focus on your product.

Unified LLM Gateway
Route to 100+ providers — OpenAI, Anthropic, Gemini, Bedrock, Azure — through one OpenAI-compatible API. Hot-swap models without changing a single line of code.
MCP Federation
Connect AI agents to any tool via Model Context Protocol. Built-in tool discovery, session management, and automatic capability negotiation across your entire tool ecosystem.
A2A Agent Routing
Enable secure agent-to-agent communication using Google's A2A protocol. Capability discovery, task delegation, and collaborative workflows — natively supported.
Per-Tenant Isolation
Every customer gets a dedicated proxy pod with network-level isolation, separate config, and independent scaling. True multi-tenancy, not just namespacing.
5 Guardrail Engines
Regex filters, OpenAI moderation, AWS Bedrock Guardrails, Google Model Armor, and custom webhook validators — layer them freely, per-route or globally.
Native Observability
OpenTelemetry traces, metrics, and structured logs shipped to your stack. Per-tenant dashboards, latency percentiles, and cost tracking — out of the box.
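As a rough client-side illustration of what the regex guardrail engine does (a sketch only — the real engine runs inside the gateway, and these patterns are hypothetical examples, not the shipped rule set):

```python
import re

# Hypothetical patterns in the spirit of the regex guardrail engine.
# The real filtering happens inside the gateway, per-route or globally.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-shaped strings
    re.compile(r"\bagw_sk_live_[A-Za-z0-9]+\b"),  # leaked gateway API keys
]

def screen_prompt(text: str) -> bool:
    """Return True if the prompt passes, False if any pattern matches."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

print(screen_prompt("Hello!"))                 # True
print(screen_prompt("my ssn is 123-45-6789"))  # False
```

The other four engines (OpenAI moderation, Bedrock Guardrails, Model Armor, custom webhooks) layer on top of the same pass/block decision.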

Up and running in three steps

Get your AI gateway endpoint in under a minute. No infrastructure to manage.

01

Sign up & add your keys

Create an account and add your LLM provider API keys. OpenAI, Anthropic, Gemini, Bedrock — all supported.

02

Get your gateway endpoint

Instantly get a unique, dedicated gateway URL with full network isolation and built-in guardrails.

03

Point your SDK at it

Replace your provider base URL with your Maverick endpoint. Zero code changes — it's OpenAI-compatible.
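The three steps above reduce to a one-field change in your client configuration. A minimal sketch (using plain dicts rather than any particular SDK; the gateway URL is the placeholder from the examples above):

```python
# Before: the SDK talks to the provider directly.
PROVIDER_BASE_URL = "https://api.openai.com/v1"

# After: point the same SDK at your Maverick endpoint (placeholder URL).
GATEWAY_BASE_URL = "https://your-app.ca.agw.maniak.io/v1"

def client_config(base_url: str, api_key: str) -> dict:
    # Only base_url (and the key) change; request code stays untouched.
    return {"base_url": base_url, "api_key": api_key}

before = client_config(PROVIDER_BASE_URL, "sk-...")
after = client_config(GATEWAY_BASE_URL, "agw_sk_live_xxxxx")

# Same shape either way — that's the whole migration.
assert before.keys() == after.keys()
```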

100+
LLM Providers
Supported
<1ms
Proxy Overhead
Per Request
5
Guardrail Engines
Built-in
3
Regions
Available

Simple, transparent pricing

Start free. Scale when you're ready. No hidden fees.

Free

For hobby projects and experimentation.

$0 forever
  • 1 LLM provider
  • 1,000 requests / day
  • Community support
  • Basic analytics dashboard
  • Shared infrastructure
Most Popular
Pro

For teams shipping production AI apps.

$49 / month
  • Unlimited providers
  • 100,000 requests / day
  • Priority email support
  • Advanced analytics & traces
  • Custom guardrail configs
  • MCP + A2A federation
Business

For enterprises with compliance needs.

$199 / month
  • Everything in Pro
  • Unlimited requests
  • Dedicated Slack support
  • SSO & RBAC
  • 99.9% SLA guarantee
  • Custom deployment regions
Enterprise

For organizations that need full control.

Custom
  • Everything in Business
  • Dedicated infrastructure
  • On-prem / VPC deployment
  • Custom SLA & contracts
  • 24/7 premium support
  • Solution architecture team

Ready to ship AI faster?

Get your gateway endpoint in 30 seconds. No credit card required.