GPT-5.4 Is Here: OpenAI's Most Capable Model, Available Now on Smart AIPI

GPT-5.4 is live on Smart AIPI. New state-of-the-art benchmarks across coding, reasoning, math, and agentic tasks — at 75% less than OpenAI direct pricing.

Smart AIPI Team
5 min read

TL;DR: GPT-5.4 is live on Smart AIPI. Use model gpt-5.4 in any API call. It beats GPT-5.3 Codex across all major benchmarks — coding, reasoning, math, and agentic tasks — and it's available right now at 75% off OpenAI pricing.

OpenAI just released GPT-5.4, and it's a significant step up from GPT-5.3 Codex. We've already tested it, added it to our model list, and it's live on Smart AIPI right now.

Here's what you need to know.

The Benchmarks

GPT-5.4 sets new state-of-the-art scores across virtually every major benchmark. Here's how it compares to GPT-5.3 Codex, Claude Opus 4.6, and Gemini 3.1 Pro:

Benchmark (task)                       GPT-5.4   GPT-5.3 Codex   Claude Opus 4.6   Gemini 3.1 Pro
OSWorld (computer use)                 75.0%     74.0%           72.7%             —
WebArena (web browsing)                67.3%     66.4%           —                 —
GDPval (knowledge work)                83.0%     70.9%           78.0%             —
BrowseComp (agentic browsing)          82.7%     77.3%           84.0%             85.9%
SWE-Bench Pro (software engineering)   57.7%     56.8%           54.2%             —
GPQA Diamond (expert science)          92.8%     92.6%           91.3%             94.3%
FrontierMath (advanced mathematics)    47.6%     40.7%           36.9%             —
Toolathlon (agentic tool use)          54.6%     51.9%           44.8%             —

(— = score not reported in this comparison)

The standout numbers: 83% on GDPval (knowledge work, up from 70.9%), 57.7% on SWE-Bench Pro (the gold standard for real-world coding), and 47.6% on FrontierMath (advanced math that was considered nearly impossible a year ago).

What About GPT-5.4 Pro?

OpenAI also announced GPT-5.4 Pro, an even more powerful variant designed for extended thinking. The Pro benchmarks are impressive:

  • BrowseComp: 89.3% (highest of any model)
  • GPQA Diamond: 94.4%
  • FrontierMath: 50.0% (first model to hit half)
  • FrontierMath Tier 4: 38.0% (nearly double Claude Opus 4.6's 22.9%)

GPT-5.4 Pro is not yet available through the API. We'll add support the moment it goes live.

How to Use GPT-5.4 on Smart AIPI

It's already live. Just change your model parameter:

Chat Completions API

curl https://api.smartaipi.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Responses API

curl https://api.smartaipi.com/v1/responses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "input": "Explain quantum computing in simple terms"
  }'

With Reasoning

curl https://api.smartaipi.com/v1/responses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "input": "Solve this step by step: what is the integral of x^2 * e^x?",
    "reasoning": {"effort": "high", "summary": "auto"}
  }'

Fast Mode (Priority Processing)

GPT-5.4 supports a fast mode via the service_tier: "priority" parameter. This gives your requests priority processing for lower latency — without changing reasoning depth or output quality.

curl https://api.smartaipi.com/v1/responses \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "input": "Refactor this function",
    "service_tier": "priority"
  }'

This is separate from reasoning.effort — you can use priority processing with any reasoning level. In Codex CLI, use the /fast slash command to toggle priority processing during a session.
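Since the two settings are independent, they compose in a single payload. A minimal sketch (Python, standard library only; field names are taken from the curl examples above):

```python
import json

# Build a Responses API payload that combines priority processing
# with high reasoning effort -- the two settings are independent.
payload = {
    "model": "gpt-5.4",
    "input": "Refactor this function",
    "service_tier": "priority",       # lower latency, billed at 1.5x
    "reasoning": {"effort": "high"},  # reasoning depth is unaffected
}

body = json.dumps(payload)
```

Send `body` as the POST data to /v1/responses exactly as in the curl examples.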

Pricing note: Priority processing is billed at 1.5x the standard rate. For GPT-5.4, that means $0.9375/1M input and $5.625/1M output tokens (vs $0.625 and $3.75 at standard rate).
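The surcharge is a flat multiplier on the standard rates, so the arithmetic is simple (rates are the Smart AIPI prices quoted in this post):

```python
STANDARD_INPUT = 0.625    # $ per 1M input tokens on Smart AIPI
STANDARD_OUTPUT = 3.75    # $ per 1M output tokens on Smart AIPI
PRIORITY_MULTIPLIER = 1.5

priority_input = STANDARD_INPUT * PRIORITY_MULTIPLIER    # $0.9375 / 1M
priority_output = STANDARD_OUTPUT * PRIORITY_MULTIPLIER  # $5.625 / 1M
```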

Codex CLI

Update your ~/.codex/config.toml:

model = "gpt-5.4"

OpenCode

Update your config to use gpt-5.4 as the model ID.

Important: The store Parameter

With GPT-5.4 (and all models going forward), the upstream API now requires store to be set to false. If you send "store": true explicitly, or your older SDK sends it for you by default, you'll get a "Store must be set to false" error.

Smart AIPI handles this automatically — we default store to false on both the Responses API and WebSocket endpoints. You don't need to change anything. But if your code explicitly sets "store": true, you'll need to update it:

// Before (breaks with GPT-5.4)
{"model": "gpt-5.4", "input": "Hello", "store": true}

// After (works)
{"model": "gpt-5.4", "input": "Hello", "store": false}

// Or just omit it — Smart AIPI defaults to false
{"model": "gpt-5.4", "input": "Hello"}

If you're using previous_response_id for multi-turn conversations, note that store: false means the server won't persist responses. Smart AIPI's WebSocket endpoint handles conversation state internally, so multi-turn still works over WebSockets without storing.
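If you want a belt-and-braces guard in your own client code, a tiny helper can enforce the compliant value before every request (a sketch, not part of any SDK; the store requirement is as described above):

```python
def prepare_payload(payload: dict) -> dict:
    """Return a copy of a Responses API payload with store forced to
    False, matching the upstream requirement for GPT-5.4."""
    fixed = dict(payload)   # shallow copy; caller's dict is untouched
    fixed["store"] = False
    return fixed

# "store": true would be rejected upstream; the helper rewrites it.
safe = prepare_payload({"model": "gpt-5.4", "input": "Hello", "store": True})
```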

What Changed From GPT-5.3 Codex

For developers, the most meaningful improvements are:

  • Better coding. SWE-Bench Pro went from 56.8% to 57.7% — incremental but meaningful at the frontier. More importantly, the model handles complex multi-file refactors and debugging with noticeably better accuracy in practice.
  • Stronger reasoning. GPQA Diamond improved to 92.8%, and FrontierMath jumped to 47.6%. The model reasons through complex problems more reliably.
  • Better agentic performance. OSWorld (75.0%), Toolathlon (54.6%), and WebArena (67.3%) all show the model is better at autonomous computer use and tool interaction — exactly what you need for coding agents.
  • Knowledge work leap. GDPval went from 70.9% to 83.0%, a 12-point jump. The model is significantly better at real-world knowledge tasks.

Pricing

75% off OpenAI direct pricing. GPT-5.4 on OpenAI costs $2.50/1M input and $15/1M output. Through Smart AIPI, you pay $0.625/1M input and $3.75/1M output.

GPT-5.4 also supports an experimental 1M-token context window (up from the standard 272K). Requests that exceed 272K tokens are billed at 2x the normal rate — this is an upstream OpenAI policy that applies regardless of provider.
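A rough input-cost estimate under that rule (a sketch; it assumes the 2x multiplier applies to the entire request once it crosses 272K tokens, which is how the policy reads here):

```python
INPUT_RATE = 0.625 / 1_000_000   # $ per input token on Smart AIPI
LONG_CONTEXT_THRESHOLD = 272_000  # tokens; above this, 2x billing

def input_cost(tokens: int) -> float:
    # Assumption: requests over the threshold are billed at 2x end to end.
    multiplier = 2 if tokens > LONG_CONTEXT_THRESHOLD else 1
    return tokens * INPUT_RATE * multiplier

standard = input_cost(100_000)      # normal-rate request
long_ctx = input_cost(500_000)      # long-context request at 2x
```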

Free credits included. Every new account gets free credits. Sign up at smartaipi.com/signup, create an API key, and start using GPT-5.4 immediately. No credit card required.

Getting Started

  1. Sign up at smartaipi.com/signup (free credits, no credit card)
  2. Create an API key in the dashboard
  3. Set your base URL to https://api.smartaipi.com/v1
  4. Use model gpt-5.4

That's it. Same API, same tools, better model, 75% cheaper.
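The four steps above collapse into one standard-library Python snippet (it only builds the request; swap in a real key and uncomment the last line to actually send it):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # step 2: create a key in the dashboard

# Step 3 + 4: Smart AIPI base URL, model set to gpt-5.4
req = urllib.request.Request(
    "https://api.smartaipi.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": "Hello!"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to send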

Tags: GPT-5.4 · New Model · Benchmarks · OpenAI