One API call. Web-grounded answers with inline citations, sources, and confidence scores. OpenAI-compatible. Built for developers.
Get a grounded AI answer in one API call.
```bash
curl -X POST https://api.miapi.uk/v1/answer \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "question": "What is quantum computing?",
    "citations": true,
    "include_sources": true
  }'
```
```python
import requests

response = requests.post(
    "https://api.miapi.uk/v1/answer",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "question": "What is quantum computing?",
        "citations": True
    }
)
data = response.json()
print(data["answer"])   # "Quantum computing uses qubits to perform calculations [1]..."
print(data["sources"])  # [{"title": "...", "url": "...", "snippet": "..."}]
```
```javascript
const response = await fetch("https://api.miapi.uk/v1/answer", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
  },
  body: JSON.stringify({
    question: "What is quantum computing?",
    citations: true
  })
});
const data = await response.json();
console.log(data.answer);   // Grounded answer with [1][2] citations
console.log(data.sources);  // Array of source objects ({title, url, snippet})
```
```python
# Just change base_url: your existing OpenAI code works instantly
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MIAPI_KEY",
    base_url="https://api.miapi.uk/v1"  # ← just this line
)

response = client.chat.completions.create(
    model="miapi-grounded",
    messages=[{"role": "user", "content": "What is quantum computing?"}]
)
print(response.choices[0].message.content)
# Real-time web-grounded answer with citations
```
Everything you need in one API. Nothing you don't.
Every answer is backed by real-time web search. No hallucinations. Sources included with every response.
Get answers with [1][2] citation markers linked to sources. Professional, verifiable, trustworthy.
Drop-in replacement for /v1/chat/completions. Change one URL and your existing code works instantly.
Pass your own documents. Get answers from your data, web search, or both combined. Perfect for RAG.
Real-time Server-Sent Events streaming. Show answers as they generate, just like ChatGPT.
Get raw search results without LLM processing. Bring your own model. Built for LangChain and RAG pipelines.
Dedicated news endpoint with article dates, sources, and snippets. Real-time news intelligence.
Find images from across the web with titles, thumbnails, dimensions, and source URLs.
~1.5s average response. Built-in caching with X-Cache headers. Repeat queries return in 1ms.
Compare us to every grounded AI API. We include everything they charge extra for.
| Provider | AI Answer | Citations | Knowledge Mode | News Search | OpenAI Compat | Streaming | Per 1,000 queries |
|---|---|---|---|---|---|---|---|
| MIAPI ⚡ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | $2.50–3.60 |
| Perplexity Search | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ~$5–12 |
| Brave AI Grounding | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ~$5–8 |
| Perplexity Sonar Pro | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ~$8–15 |
| Exa.ai Answer API | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ~$5–25 |
| Tavily Search | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | $8.00 |
| Google Grounding API | ✓ | ✓ | ✗ | ✗ | ✗ | ✓ | $14–35 |
MIAPI is the only provider in this comparison with all six features, and at the lowest price.
Try MIAPI right now. No signup required for the demo.
Demo: 5 queries per minute limit. Sign up for full access.
Complete documentation for every endpoint.
The core endpoint. Searches the web, synthesizes an AI answer with sources and confidence scores.
| Name | Type | Required | Description |
|---|---|---|---|
| question | string | required | The question to answer (2-1000 chars) |
| mode | string | optional | answer (default), search (raw results), knowledge (from your text) |
| citations | boolean | optional | Include [1][2] citation markers. Default: false |
| knowledge | string | optional | Custom context (up to 10K chars). Used alongside or instead of web search. |
| system_prompt | string | optional | Custom system prompt for the AI (max 2000 chars) |
| response_format | string | optional | text, short, json, markdown |
| temperature | float | optional | 0.0 to 1.0 (default 0.3) |
| max_tokens | integer | optional | 50-2000 (auto if not set) |
| include_sources | boolean | optional | Include source URLs. Default: true |
| search_domains | array | optional | Restrict search to specific domains, e.g. ["wikipedia.org"] |
| exclude_domains | array | optional | Block specific domains from results, e.g. ["reddit.com", "wikipedia.org"] |
| language | string | optional | Force response language, e.g. "Spanish", "French" |
| context | array | optional | Conversation history for follow-up questions |
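The `context` parameter carries prior turns so the API can resolve pronouns in follow-up questions. A minimal sketch of building such a request body in Python, assuming `context` takes OpenAI-style `{role, content}` entries (the exact shape is not specified in the table above):

```python
import json

# Hypothetical follow-up request. The {role, content} shape of `context`
# entries is an assumption, not confirmed by the parameter table.
history = [
    {"role": "user", "content": "What is quantum computing?"},
    {"role": "assistant", "content": "Quantum computing uses qubits... [1]"},
]

payload = {
    "question": "How is it different from classical computing?",
    "citations": True,
    "context": history,                   # prior turns so the API can resolve "it"
    "search_domains": ["wikipedia.org"],  # optional: restrict sources
}

body = json.dumps(payload)  # POST this to https://api.miapi.uk/v1/answer
print(body[:80])
```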
```json
{
  "answer": "Quantum computing uses qubits... [1][2]",
  "sources": [
    {"title": "...", "url": "...", "snippet": "..."}
  ],
  "confidence": 0.9,
  "cached": false,
  "query_time_ms": 1200,
  "mode": "answer",
  "request_id": "req_abc123..."
}
```
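A small helper, as an illustrative sketch, that resolves the `[1][2]` markers in `answer` against the `sources` array. It assumes citation indices are 1-based and follow the order of `sources`; the sample payload mirrors the response shape above with placeholder URLs:

```python
import re

def resolve_citations(data):
    """Map each [n] marker in the answer to the n-th source URL (1-based)."""
    sources = data.get("sources", [])
    refs = {}
    for n in re.findall(r"\[(\d+)\]", data.get("answer", "")):
        i = int(n)
        if 1 <= i <= len(sources):
            refs[i] = sources[i - 1]["url"]
    return refs

sample = {
    "answer": "Quantum computing uses qubits... [1][2]",
    "sources": [
        {"title": "Intro", "url": "https://example.com/a", "snippet": "..."},
        {"title": "Qubits", "url": "https://example.com/b", "snippet": "..."},
    ],
    "confidence": 0.9,
}
print(resolve_citations(sample))
# {1: 'https://example.com/a', 2: 'https://example.com/b'}
```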
| Header | Description |
|---|---|
| X-Request-ID | Unique request identifier for debugging |
| X-Cache | HIT or MISS, whether the response was served from cache |
| X-Response-Time | Server processing time (e.g. "1200ms") |
| X-MIAPI-Version | API version (e.g. "1.6.0") |
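These headers are handy for logging. A quick sketch that turns them into a one-line log entry; the header names come from the table above, while the values are sample data (real HTTP clients usually expose headers case-insensitively):

```python
def summarize_response(headers):
    """Build a one-line log entry from MIAPI diagnostic headers."""
    cached = headers.get("X-Cache") == "HIT"
    return "{} {} in {}".format(
        headers.get("X-Request-ID", "unknown"),
        "cache hit" if cached else "cache miss",
        headers.get("X-Response-Time", "?"),
    )

sample = {
    "X-Request-ID": "req_abc123",
    "X-Cache": "HIT",
    "X-Response-Time": "1ms",
    "X-MIAPI-Version": "1.6.0",
}
print(summarize_response(sample))  # req_abc123 cache hit in 1ms
```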
Drop-in replacement for OpenAI's chat completions. All answers are web-grounded with citations.
| Name | Type | Required | Description |
|---|---|---|---|
| messages | array | required | Array of {role, content} messages |
| model | string | optional | "miapi-grounded" (default) |
| stream | boolean | optional | Enable SSE streaming |
| temperature | float | optional | 0.0-1.0 |
| max_tokens | integer | optional | 50-2000 |
Real-time streaming via Server-Sent Events. Sends sources first, then streams the answer token by token.
Same parameters as /v1/answer. Events: sources → answer (multiple) → done
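The exact wire format is not shown above, so here is a generic Server-Sent Events parser as a sketch: it assumes standard `event:` / `data:` lines with JSON payloads, event names matching the sequence above, and a hypothetical `delta` field carrying each answer token:

```python
import json

def parse_sse(raw):
    """Parse an SSE text stream into (event_name, payload) pairs.
    Event names (sources, answer, done) follow the documented sequence;
    the payload shapes below are illustrative assumptions."""
    events = []
    for block in raw.strip().split("\n\n"):
        name, data = "message", ""
        for line in block.splitlines():
            if line.startswith("event:"):
                name = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data += line[len("data:"):].strip()
        events.append((name, json.loads(data) if data else None))
    return events

sample = (
    'event: sources\ndata: [{"title": "Intro", "url": "https://example.com/a"}]\n\n'
    'event: answer\ndata: {"delta": "Quantum computing"}\n\n'
    'event: answer\ndata: {"delta": " uses qubits [1]"}\n\n'
    'event: done\ndata: {}\n\n'
)
answer = "".join(d["delta"] for name, d in parse_sse(sample) if name == "answer")
print(answer)  # Quantum computing uses qubits [1]
```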
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | required | News search query |
| num_results | integer | optional | 1-20 (default 5) |
Returns: articles with title, url, snippet, date, source name.
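A common post-processing step is filtering articles by publication date. A sketch using the fields listed above, assuming `date` is an ISO "YYYY-MM-DD" string (the documented fields do not specify the format); article values are sample data:

```python
from datetime import date

def recent_articles(articles, since):
    """Keep articles published on or after `since`.
    Assumes `date` is an ISO "YYYY-MM-DD" string."""
    return [a for a in articles if date.fromisoformat(a["date"]) >= since]

sample = [
    {"title": "Chip launch", "url": "https://example.com/n1",
     "snippet": "...", "date": "2025-01-10", "source": "Example Wire"},
    {"title": "Old story", "url": "https://example.com/n2",
     "snippet": "...", "date": "2024-06-01", "source": "Example Daily"},
]
fresh = recent_articles(sample, date(2025, 1, 1))
print([a["title"] for a in fresh])  # ['Chip launch']
```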
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | required | Image search query |
| num_results | integer | optional | 1-20 (default 5) |
Returns: images with title, url, thumbnail, source, width, height.
Returns raw search results without LLM processing. Ideal for LangChain, RAG pipelines, or bring-your-own-model setups.
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | required | Search query |
| num_results | integer | optional | 1-20 (default 7) |
| search_domains | array | optional | Restrict to specific domains |
| exclude_domains | array | optional | Block specific domains from results |
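In a bring-your-own-model pipeline, the raw results are typically packed into a prompt context. A sketch of that step, assuming results carry the same `title`/`url`/`snippet` fields as `/v1/answer` sources; the helper itself is illustrative, not part of the API:

```python
def build_rag_context(results, max_chars=2000):
    """Concatenate numbered search snippets into a context block
    suitable for pasting into an LLM prompt."""
    parts = []
    total = 0
    for i, r in enumerate(results, start=1):
        chunk = "[{}] {} ({})\n{}".format(i, r["title"], r["url"], r["snippet"])
        if total + len(chunk) > max_chars:
            break  # stay under the prompt budget
        parts.append(chunk)
        total += len(chunk)
    return "\n\n".join(parts)

sample = [
    {"title": "Qubits", "url": "https://example.com/a", "snippet": "A qubit is..."},
    {"title": "Gates", "url": "https://example.com/b", "snippet": "Quantum gates..."},
]
context = build_rag_context(sample)
print(context.splitlines()[0])  # [1] Qubits (https://example.com/a)
```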
Public demo endpoint, no API key required. Rate limited to 20 queries/min per IP. Max 500 char questions, 200 token responses.
| Name | Type | Required | Description |
|---|---|---|---|
| question | string | required | Question (2-500 chars) |
| include_sources | boolean | optional | Include source URLs. Default: true |
| response_format | string | optional | text, short, markdown, json |
Returns all available model identifiers. OpenAI-compatible format. No request body needed.
List, create, and revoke API keys on your account. Returns key metadata including creation date, usage count, and rate limit.
Returns queries today, this month, rate limit, tier, and monthly limit. No request body needed.
Buy query packs. Use them anytime. No subscriptions, no expiry.
Query packs are one-time purchases: no subscriptions, no expiry, and you can buy more anytime. Prices range from $2.50 to $3.60 per 1,000 queries.
Install and start querying in 30 seconds.
Full-featured Python client with sync and async support, streaming, and all endpoints.
```python
from miapi import MIAPI

client = MIAPI("YOUR_API_KEY")

# Grounded answer with citations
result = client.answer("What is CRISPR?", citations=True)
print(result.answer)
print(result.sources)

# Knowledge mode: answer from your data
result = client.answer(
    "What is the return policy?",
    mode="knowledge",
    knowledge="Returns accepted within 30 days..."
)

# Search only: raw results for your pipeline
results = client.search("latest AI research papers")
for source in results:
    print(source.url)

# News search
news = client.news("technology", num_results=5)

# Image search
images = client.images("golden retriever")
```
Plug MIAPI into any MCP-compatible AI assistant — Cursor, Claude Desktop, Windsurf, and more.
Download the server file, set your API key, and your AI assistant gets web search with citations built in.
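For Claude Desktop, registration goes in `claude_desktop_config.json` under `mcpServers`. A sketch of that entry; the script path (`/path/to/miapi_mcp.py`) and the `MIAPI_API_KEY` variable name are placeholders, not confirmed by these docs:

```json
{
  "mcpServers": {
    "miapi": {
      "command": "python",
      "args": ["/path/to/miapi_mcp.py"],
      "env": { "MIAPI_API_KEY": "YOUR_API_KEY" }
    }
  }
}
```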
Questions, feedback, or partnership inquiries — we'd love to hear from you.
Or email us directly at support@miapi.uk