What are MVP Examples?
These are minimal, complete, and runnable code examples that you can copy and use immediately. Each example is self-contained and shows the essential code needed to make API calls to Model Router.
Before running these examples, make sure you have:
- Created an API key (see Create API Key)
- Checked which models are available (see List Available Models)
- Replaced `sk-your-api-key-here` with your actual API key
- Replaced the model name with one available to your API key
Python Examples
Using OpenAI Official SDK (Recommended)
Recommended: The OpenAI official SDK is the easiest way to use Model Router. You only need to change the `base_url` parameter; everything else works exactly the same as with OpenAI’s API.
Installation
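```bash
pip install openai
```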
Complete Example
```python
from openai import OpenAI

# Initialize client with Model Router base URL
client = OpenAI(
    base_url="https://app.memorylake.ai/v1",  # Only change needed!
    api_key="sk-your-api-key-here"  # Your Model Router API key
)

# Replace with a model available to your API key
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Hello! How are you?"}
    ]
)

print(response.choices[0].message.content)
```
Running the Example
- Save the code to a file (e.g., `chat.py`)
- Install dependencies: `pip install openai`
- Replace `api_key` and `model` with your values
- Run: `python chat.py`
Key Point: The only difference from using OpenAI’s API directly is setting `base_url="https://app.memorylake.ai/v1"`. All other code remains exactly the same!
Using Environment Variables
You can also use environment variables for better security:
```bash
export OPENAI_API_KEY="sk-your-api-key-here"
```
```python
from openai import OpenAI

# Client automatically reads OPENAI_API_KEY from the environment
client = OpenAI(
    base_url="https://app.memorylake.ai/v1"  # Only change needed!
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Hello! How are you?"}
    ]
)

print(response.choices[0].message.content)
```
Using requests Library (Alternative)
If you prefer using the `requests` library directly:
Installation
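```bash
pip install requests
```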
Complete Example
```python
import requests

# Replace with your API key
API_KEY = "sk-your-api-key-here"
BASE_URL = "https://app.memorylake.ai/v1"

# Replace with a model available to your API key
MODEL = "gpt-4o-mini"

def chat_completion(messages):
    """Send a chat completion request."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    data = {
        "model": MODEL,
        "messages": messages,
        "stream": False
    }
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()
    return response.json()

# Example usage
if __name__ == "__main__":
    messages = [
        {"role": "user", "content": "Hello! How are you?"}
    ]
    result = chat_completion(messages)
    print(result["choices"][0]["message"]["content"])
```
Running the Example
- Save the code to a file (e.g., `chat.py`)
- Install dependencies: `pip install requests`
- Replace `API_KEY` and `MODEL` with your values
- Run: `python chat.py`
TypeScript/JavaScript Examples
Using OpenAI Official SDK (Recommended)
Recommended: The OpenAI official SDK works seamlessly with Model Router. You only need to change the `baseURL` parameter; everything else works exactly the same as with OpenAI’s API.
Installation
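```bash
npm install openai
```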
For TypeScript projects, you may also want to install type definitions:
```bash
npm install --save-dev typescript @types/node
```
Complete Example (TypeScript)
```typescript
import OpenAI from "openai";

// Initialize client with Model Router base URL
const client = new OpenAI({
  baseURL: "https://app.memorylake.ai/v1", // Only change needed!
  apiKey: "sk-your-api-key-here" // Your Model Router API key
});

// Replace with a model available to your API key
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Hello! How are you?" }
  ]
});

console.log(response.choices[0].message.content);
```
Complete Example (JavaScript)
```javascript
import OpenAI from "openai";

// Initialize client with Model Router base URL
const client = new OpenAI({
  baseURL: "https://app.memorylake.ai/v1", // Only change needed!
  apiKey: "sk-your-api-key-here" // Your Model Router API key
});

// Replace with a model available to your API key
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Hello! How are you?" }
  ]
});

console.log(response.choices[0].message.content);
```
Running the Example
- Save the code to a file (e.g., `chat.ts` or `chat.mjs`)
- Install dependencies: `npm install openai`
- Replace `apiKey` and `model` with your values
- Run:
  - TypeScript: `npx ts-node chat.ts` (or compile first with `tsc`)
  - JavaScript: `node chat.mjs` (or `node chat.js` if using CommonJS)
Key Point: The only difference from using OpenAI’s API directly is setting `baseURL: "https://app.memorylake.ai/v1"`. All other code remains exactly the same!
Using Environment Variables
You can also use environment variables for better security:
```bash
export OPENAI_API_KEY="sk-your-api-key-here"
```
```typescript
import OpenAI from "openai";

// Client automatically reads OPENAI_API_KEY from the environment
const client = new OpenAI({
  baseURL: "https://app.memorylake.ai/v1" // Only change needed!
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Hello! How are you?" }
  ]
});

console.log(response.choices[0].message.content);
```
Streaming Examples
Python - Using OpenAI Official SDK
For streaming responses with the OpenAI SDK:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://app.memorylake.ai/v1",
    api_key="sk-your-api-key-here"
)

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Tell me a short story"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end='', flush=True)

print()  # New line at the end
```
Python - Using requests Library
For streaming responses with the `requests` library:
```python
import requests
import json

API_KEY = "sk-your-api-key-here"
BASE_URL = "https://app.memorylake.ai/v1"
MODEL = "gpt-4o-mini"

def chat_completion_stream(messages):
    """Send a streaming chat completion request."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json"
    }
    data = {
        "model": MODEL,
        "messages": messages,
        "stream": True
    }
    response = requests.post(url, headers=headers, json=data, stream=True)
    response.raise_for_status()

    # The response is a Server-Sent Events stream: each event line starts
    # with 'data: ' and carries a JSON chunk, ending with a [DONE] sentinel
    for line in response.iter_lines():
        if line:
            line_text = line.decode('utf-8')
            if line_text.startswith('data: '):
                json_str = line_text[6:]  # Remove 'data: ' prefix
                if json_str == '[DONE]':
                    break
                try:
                    chunk = json.loads(json_str)
                    if 'choices' in chunk and len(chunk['choices']) > 0:
                        delta = chunk['choices'][0].get('delta', {})
                        if 'content' in delta:
                            print(delta['content'], end='', flush=True)
                except json.JSONDecodeError:
                    pass

# Example usage
if __name__ == "__main__":
    messages = [
        {"role": "user", "content": "Tell me a short story"}
    ]
    chat_completion_stream(messages)
    print()  # New line at the end
```
TypeScript/JavaScript - Using OpenAI Official SDK
For streaming responses with the OpenAI SDK:
```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://app.memorylake.ai/v1",
  apiKey: "sk-your-api-key-here"
});

const stream = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Tell me a short story" }
  ],
  stream: true
});

for await (const chunk of stream) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
console.log(); // New line at the end
```
Other Official SDKs
Model Router is compatible with many official SDKs from different AI platforms; most only require configuring the API key and base URL.
Key Point: For most official SDKs, you only need to:

- Set the `baseURL` or `base_url` to `https://app.memorylake.ai/v1` (or the corresponding endpoint)
- Use your Model Router API key as the `apiKey` or `api_key`
- Everything else works exactly the same as with the original platform’s API
Anthropic Claude SDK
The official Anthropic Claude SDK supports Python, TypeScript/JavaScript, Java, Go, C#, Ruby, and PHP.
Example (Python):
```python
from anthropic import Anthropic

client = Anthropic(
    api_key="sk-your-api-key-here",
    base_url="https://app.memorylake.ai/claude"  # Claude native endpoint
)

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=200,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(message.content[0].text)
```
Example (TypeScript):
```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: 'sk-your-api-key-here',
  baseURL: 'https://app.memorylake.ai/claude' // Claude native endpoint
});

const message = await client.messages.create({
  model: 'claude-3-opus-20240229',
  max_tokens: 200,
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
});

console.log(message.content);
```
Documentation: Anthropic Client SDKs
Vercel AI SDK
The Vercel AI SDK provides a unified interface for multiple AI providers, including OpenAI, Anthropic, Google, and more.
Example:
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Provider instance pointing at Model Router
const openai = createOpenAI({
  baseURL: 'https://app.memorylake.ai/v1',
  apiKey: 'sk-your-api-key-here'
});

const { text } = await generateText({ model: openai('gpt-4o-mini'), prompt: 'Hello!' });
console.log(text);
```
Documentation: Vercel AI SDK Providers
Google Gemini SDK
The official Google Gemini SDK supports Python, Node.js, and other languages.
Example (Python):
```python
import google.generativeai as genai

genai.configure(
    api_key="sk-your-api-key-here",
    transport="rest",
    client_options={
        "api_endpoint": "https://app.memorylake.ai/gemini"
    }
)

model = genai.GenerativeModel('gemini-1.5-pro')
response = model.generate_content("Hello!")
print(response.text)
```
Documentation: Google Gemini API Migration Guide
General Pattern
Most official SDKs follow a similar pattern:
- Install the SDK: Use the official package manager (pip, npm, etc.)
- Configure the client: Set `baseURL`/`base_url` to Model Router’s endpoint
- Set API key: Use your Model Router API key
- Use normally: All other code remains the same
Common Endpoints:

- OpenAI-compatible: `https://app.memorylake.ai/v1`
- Claude native: `https://app.memorylake.ai/claude`
- Gemini native: `https://app.memorylake.ai/gemini`
Before using any SDK, make sure to check which models are available to your API key by calling the model list endpoint.
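For example, here is a minimal sketch using the `requests` library, assuming the endpoint follows the standard OpenAI-compatible `/v1/models` response shape (a `data` array of model objects with `id` fields):

```python
import requests

# List the models available to your API key
# (assumes the standard OpenAI-compatible /v1/models response shape)
response = requests.get(
    "https://app.memorylake.ai/v1/models",
    headers={"Authorization": "Bearer sk-your-api-key-here"},
)
response.raise_for_status()
for model in response.json()["data"]:
    print(model["id"])
```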
Next Steps
- Test Your Code: Run one of the examples above with your API key
- Check Your Usage: See View Usage and Billing to monitor your API calls
- Handle Errors: Learn about Error Handling if you encounter issues
- Explore More: Check out Direct API Requests for more advanced usage