Documentation
Smart AIPI provides a fully OpenAI-compatible API. Use our service with any existing OpenAI SDK or tool by simply changing the base URL. This documentation covers all endpoints, parameters, and features.
Base URL
https://api.smartaipi.com/v1
Authentication
All API requests require an API key passed in the Authorization header using Bearer token authentication:
Authorization: Bearer YOUR_API_KEY
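For clients that don't use the OpenAI SDK, the header can be attached to a raw HTTP request. A minimal sketch with Python's standard library (the API key is a placeholder; the `/models` call is commented out so nothing is sent):

```python
import urllib.request

API_KEY = "YOUR_API_KEY"

# Build a request carrying the Bearer token in the Authorization header
req = urllib.request.Request(
    "https://api.smartaipi.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# urllib.request.urlopen(req)  # would perform the authenticated call
print(req.get_header("Authorization"))  # -> Bearer YOUR_API_KEY
```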
Migration Guide
Migrating from OpenAI takes less than a minute. Our API is 100% compatible with OpenAI's endpoints.
Get your Smart AIPI API key
Sign up and create an API key from your dashboard.
Change the base URL
Replace https://api.openai.com/v1 with https://api.smartaipi.com/v1
Update your API key
Use your Smart AIPI key instead of your OpenAI key. That's it!
Chat Completions
Create a chat completion for the provided messages and model. This is the main endpoint for interacting with language models.
Request Body Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID to use (e.g., "gpt-5.3", "gpt-5.3-codex") |
| messages | array | Yes | Array of message objects with role and content |
| temperature | number | No | Sampling temperature (0-2). Higher = more random. Default: 1 |
| max_tokens | integer | No | Maximum tokens to generate in the response |
| top_p | number | No | Nucleus sampling: only tokens in the top_p probability mass are considered. Default: 1 |
| frequency_penalty | number | No | Penalize tokens based on frequency (-2 to 2). Default: 0 |
| presence_penalty | number | No | Penalize tokens based on presence (-2 to 2). Default: 0 |
| stop | string/array | No | Stop sequences. Up to 4 sequences where generation stops. |
| stream | boolean | No | Enable streaming responses via SSE. Default: false |
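Combined, the parameters above form a request body like the following (values are illustrative):

```json
{
  "model": "gpt-5.3",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this article."}
  ],
  "temperature": 0.7,
  "max_tokens": 500,
  "stop": ["\n\n"],
  "stream": false
}
```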
Message Roles
- system - Sets the behavior/persona of the assistant
- user - Messages from the user
- assistant - Previous responses from the assistant
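A multi-turn conversation interleaves these roles: earlier assistant replies are sent back as context so the model sees the full history. The contents below are illustrative:

```python
# Messages array for a follow-up turn; the prior assistant reply is included
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"},
]
print([m["role"] for m in messages])  # -> ['system', 'user', 'assistant', 'user']
```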
Code Examples
from openai import OpenAI
client = OpenAI(
    base_url="https://api.smartaipi.com/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="gpt-5.3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    temperature=0.7,
    max_tokens=1000
)

print(response.choices[0].message.content)
Streaming
Enable streaming to receive tokens as they're generated via Server-Sent Events (SSE). This provides a better user experience for long responses.
Set "stream": true in your request to enable streaming.
Streaming Example
from openai import OpenAI
client = OpenAI(
    base_url="https://api.smartaipi.com/v1",
    api_key="your-api-key"
)

# Enable streaming
stream = client.chat.completions.create(
    model="gpt-5.3",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)

# Process chunks as they arrive
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
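To keep the full reply in addition to printing it live, append each delta to a buffer. The joining logic is shown here on a simulated sequence of delta values (None mimics the final chunk, whose delta carries no content):

```python
def join_deltas(deltas):
    """Join streamed content deltas into the full reply, skipping None."""
    return "".join(d for d in deltas if d)

# Simulated chunk.choices[0].delta.content values (illustrative only)
deltas = ["Roses ", "are ", "red", None]
print(join_deltas(deltas))  # -> Roses are red
```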
Reasoning Effort
Control the depth of reasoning with the reasoning_effort parameter for GPT-5 models.
Supported Models
GPT-5 Series
gpt-5.3, gpt-5.2, gpt-5.1, gpt-5, and all Codex variants
Reasoning Effort Levels
- low - Faster responses, less thorough reasoning. Best for simple tasks.
- medium - Balanced speed and depth (default).
- high - Deep analysis for complex problems.
- xhigh - Extra high. Maximum reasoning depth for the hardest problems.
Example
response = client.chat.completions.create(
    model="gpt-5.3",
    messages=[{"role": "user", "content": "Analyze this code for bugs..."}],
    reasoning_effort="high"  # or "xhigh" for maximum depth
)
Image Generation
Generate images from text prompts using our image generation endpoint.
Image edits (/v1/images/edits) and variations (/v1/images/variations) are not yet supported.
Request Body Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | Text description of the image to generate (required) |
| model | string | "gpt-image-1.5" (frontier), "gpt-image-1", or "gpt-image-1-mini". Default: "gpt-image-1.5" |
| n | integer | Number of images to generate (1-10). Default: 1 |
| size | string | "1024x1024", "1024x1792", or "1792x1024". Default: "1024x1024" |
| quality | string | "standard" or "hd". Default: "standard" |
| style | string | "vivid" or "natural". Default: "vivid" |
Available Models
- gpt-image-1.5 - Frontier model, highest quality (default)
- gpt-image-1 - Full-quality image generation
- gpt-image-1-mini - Faster, smaller images
Example
import base64

response = client.images.generate(
    model="gpt-image-1.5",
    prompt="A futuristic city at sunset, cyberpunk style",
    size="1024x1024",
    n=1
)

# Response contains a base64-encoded image
image_b64 = response.data[0].b64_json

# Save to file
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
List Models
Retrieve a list of all available models. Use this endpoint to dynamically discover which models are available for your account.
Response Format
{
  "object": "list",
  "data": [
    {
      "id": "gpt-5.3",
      "object": "model",
      "created": 1700000000,
      "owned_by": "smart-aipi"
    },
    {
      "id": "gpt-5.3-codex",
      "object": "model",
      "created": 1700000000,
      "owned_by": "smart-aipi"
    }
    // ... more models
  ]
}
Example
from openai import OpenAI
client = OpenAI(
    base_url="https://api.smartaipi.com/v1",
    api_key="your-api-key"
)

# List all available models
models = client.models.list()
for model in models.data:
    print(model.id)
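The returned ids can be filtered client-side, for example to pick out the Codex variants. Sample ids are hardcoded here for illustration; at runtime, take them from models.data:

```python
# Sample ids as the list endpoint might return them (illustrative)
model_ids = ["gpt-5.3", "gpt-5.3-codex", "gpt-5-codex-mini", "gpt-5.2"]

codex_models = [mid for mid in model_ids if "codex" in mid]
print(codex_models)  # -> ['gpt-5.3-codex', 'gpt-5-codex-mini']
```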
Available Models
GPT-5 Series
- gpt-5.3
- gpt-5.2
- gpt-5.1
- gpt-5
Codex Series
- gpt-5.3-codex
- gpt-5.2-codex
- gpt-5.1-codex
- gpt-5-codex
- gpt-5.3-codex-max
- gpt-5.1-codex-max
- gpt-5.3-codex-mini
- gpt-5.2-codex-mini
- gpt-5.1-codex-mini
- gpt-5-codex-mini
O-Series Coming Soon
- o3, o3-mini, o4-mini
OpenCode
Switch OpenCode to use Smart AIPI in seconds.
Quick Setup (copy & paste)
{
  "providers": {
    "smart-aipi": {
      "apiKey": "YOUR_API_KEY",
      "baseUrl": "https://api.smartaipi.com/v1"
    }
  },
  "models": {
    "big": { "provider": "smart-aipi", "model": "gpt-5.3-codex" },
    "small": { "provider": "smart-aipi", "model": "gpt-5-codex-mini" }
  }
}
Or use environment variables
# Smart AIPI for OpenCode
export OPENAI_API_KEY="YOUR_API_KEY"
export OPENAI_BASE_URL="https://api.smartaipi.com/v1"
Codex CLI
Use Smart AIPI with OpenAI's Codex CLI tool.
Quick Setup (copy & paste)
# Set Smart AIPI as your Codex backend
export OPENAI_API_KEY="YOUR_API_KEY"
export OPENAI_BASE_URL="https://api.smartaipi.com/v1"
# Now use codex normally
codex "fix this bug"
Make it permanent
Add these lines to your shell profile (e.g., ~/.bashrc or ~/.zshrc):
# Smart AIPI for Codex CLI
export OPENAI_API_KEY="YOUR_API_KEY"
export OPENAI_BASE_URL="https://api.smartaipi.com/v1"
Cursor & Cline
Use Smart AIPI with Cursor IDE or Cline VS Code extension.
Cursor
- Open Cursor Settings
- Go to the Models tab
- Click + Add Model
- Set Base URL: https://api.smartaipi.com/v1
- Enter your API key
- Model: gpt-5.3-codex
Cline (VS Code)
- Open Cline settings in VS Code
- Select OpenAI Compatible as the provider
- Base URL: https://api.smartaipi.com/v1
- Enter your API key
- Model: gpt-5.3-codex
API Tester
Test the API directly from your browser.