02
Change one line of code
Point your existing OpenAI or Anthropic SDK at Safyan. Every prompt gets optimized automatically.
Python
Node.js
cURL
from openai import OpenAI

client = OpenAI(
    base_url="https://api.safyan.ai/v1",
    api_key="sfk_your_key_here"
)

response = client.chat.completions.create(
    model="safyan-auto",
    messages=[{"role": "user", "content": "your prompt here"}]
)

print(response.choices[0].message.content)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.safyan.ai/v1",
  apiKey: "sfk_your_key_here"
});

const response = await client.chat.completions.create({
  model: "safyan-auto",
  messages: [{ role: "user", content: "your prompt here" }]
});

console.log(response.choices[0].message.content);
curl https://api.safyan.ai/v1/chat/completions \
  -H "Authorization: Bearer sfk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "safyan-auto",
    "messages": [{"role": "user", "content": "your prompt here"}]
  }'
03
Every prompt is now optimized
Safyan intercepts every prompt, applies the 7 Cures, routes it to the best model, and returns the response in the same format your SDK expects. No code changes beyond the base URL and model name.
safyan-auto — full optimization + routing
safyan-fast — compile only, half cost
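If you call both tiers from the same codebase, a tiny helper keeps the choice in one place. A minimal sketch; the model names come from the list above, but the idea that safyan-fast suits high-volume calls where routing isn't needed is an assumption based on its "compile only, half cost" description:

```python
def pick_model(needs_routing: bool) -> str:
    """Choose a Safyan model tier.

    safyan-auto: full optimization + routing.
    safyan-fast: compile only, half cost (assumed suitable when
    you don't need model routing, e.g. high-volume batch calls).
    """
    return "safyan-auto" if needs_routing else "safyan-fast"

# Pass the result as the `model` argument in any of the snippets above.
```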
04
Monitor everything here
The console shows every request with its token count, cost, model used, and Safyan score delta, so you can see exactly how much better your prompts perform.
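Because Safyan returns the standard OpenAI-compatible response shape, the token counts shown in the console can also be read client-side from the response's `usage` object. A sketch with an illustrative payload; the field values below are made up, and per-request cost itself is only shown in the console, not assumed to be in the response:

```python
# Illustrative OpenAI-format response payload (values are made up).
sample = {
    "model": "safyan-auto",
    "usage": {"prompt_tokens": 12, "completion_tokens": 48, "total_tokens": 60},
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
}

def token_summary(resp: dict) -> str:
    """Summarize per-request token usage from an OpenAI-format response."""
    u = resp["usage"]
    return (
        f"{resp['model']}: {u['total_tokens']} tokens "
        f"({u['prompt_tokens']} in / {u['completion_tokens']} out)"
    )

print(token_summary(sample))  # safyan-auto: 60 tokens (12 in / 48 out)
```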