Getting Started
Prerequisites
- Python 3.7+
- An OpenAI account
- An API key from the OpenAI platform
Obtaining an API Key
- Visit the OpenAI platform (platform.openai.com)
- Sign up or log in to your account
- Navigate to the API section
- Create a new API key
- Store your API key securely
Installation and Setup
First, install the OpenAI package using pip:
pip install openai
Set up your API key in your Python script:
from openai import OpenAI
# Initialize the client with your API key
client = OpenAI(api_key='your-api-key-here')
# Alternatively, set it as an environment variable
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
client = OpenAI() # Will automatically use the environment variable
Basic Usage
Making Your First API Call
# Basic chat completion
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello, how can you help me today?"}
    ]
)
# Print the response
print(response.choices[0].message.content)
Different Types of Requests
# Chat completion with system message
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like?"}
    ],
    temperature=0.7,
    max_tokens=150
)
# Image generation
image_response = client.images.generate(
    model="dall-e-3",
    prompt="A sunset over mountains",
    size="1024x1024",
    quality="standard",
    n=1
)
Advanced Features
Managing Conversations
# Maintaining conversation context
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
    {"role": "user", "content": "Can you help me with Python?"}
]
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=conversation
)
# Add the new response to the conversation
conversation.append({
    "role": "assistant",
    "content": response.choices[0].message.content
})
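Appending each reply keeps the full history available for the next request. A minimal sketch of extending this into a multi-turn loop (the follow-up questions are just illustrative placeholders):
follow_up_questions = ["What are list comprehensions?", "Show me an example."]

for question in follow_up_questions:
    # Add the user's next message, then request a reply with the full history
    conversation.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=conversation
    )
    reply = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": reply})
    print(reply)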
Controlling Output
# Adjusting temperature and other parameters
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a story about a dragon"}],
    temperature=0.9,        # Higher temperature for more creative outputs
    max_tokens=500,         # Control response length
    top_p=0.9,              # Nucleus sampling
    frequency_penalty=0.6,  # Reduce repetition
    presence_penalty=0.6    # Encourage new topics
)
Best Practices
Rate Limiting and Batching
import time
from typing import List
def process_batch(prompts: List[str], batch_size: int = 5):
    results = []
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i:i + batch_size]
        for prompt in batch:
            try:
                response = client.chat.completions.create(
                    model="gpt-3.5-turbo",
                    messages=[{"role": "user", "content": prompt}]
                )
                results.append(response.choices[0].message.content)
            except Exception as e:
                print(f"Error processing prompt: {e}")
        time.sleep(1)  # Rate limiting: pause between batches
    return results
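A quick usage sketch with a few illustrative prompts:
prompts = [
    "Summarize the benefits of unit testing.",
    "Explain what a Python decorator is.",
    "Give one tip for writing readable code."
]

answers = process_batch(prompts, batch_size=2)
for answer in answers:
    print(answer)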
Error Handling
import time

from openai import OpenAIError

def safe_api_call(prompt: str):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except OpenAIError as e:
        if "rate limit" in str(e).lower():
            time.sleep(20)  # Wait and retry for rate limits
            return safe_api_call(prompt)
        else:
            print(f"An error occurred: {e}")
            return None
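A short usage sketch:
result = safe_api_call("Explain the difference between a list and a tuple in Python.")
if result is not None:
    print(result)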
Common Error Messages and Solutions
Rate Limit Exceeded
- Solution: Implement exponential backoff (see the sketch below)
- Use batch processing
- Consider upgrading your API plan
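A minimal sketch of exponential backoff, assuming the SDK's RateLimitError exception; the retry count and base delay are arbitrary illustrative values:
import time

from openai import RateLimitError

def call_with_backoff(prompt: str, max_retries: int = 5, base_delay: float = 1.0):
    delay = base_delay
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}]
            )
            return response.choices[0].message.content
        except RateLimitError:
            # Wait, then double the delay before the next attempt
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Rate limit retries exhausted")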
Invalid API Key
- Solution: Double-check your API key
- Ensure environment variables are set correctly
- Verify API key permissions
Context Length Exceeded
- Solution: Reduce input length
- Split long inputs into chunks (a token-based chunking sketch follows)
- Use a model with a larger context window
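A short sketch of token-based chunking with tiktoken; the 1,000-token chunk size is an arbitrary illustrative value and should stay well below the model's context limit:
import tiktoken

def split_into_chunks(text: str, max_tokens: int = 1000, model: str = "gpt-3.5-turbo"):
    # Encode the text, slice the token list, and decode each slice back to text
    encoding = tiktoken.encoding_for_model(model)
    tokens = encoding.encode(text)
    return [
        encoding.decode(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]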
Cost Management Tips
- Use token counting to estimate costs:
import tiktoken

def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))
- Monitor usage:
# Track token usage
def track_usage(response):
    usage = response.usage
    print(f"Prompt tokens: {usage.prompt_tokens}")
    print(f"Completion tokens: {usage.completion_tokens}")
    print(f"Total tokens: {usage.total_tokens}")
Remember to always handle your API key securely and never expose it in your code or version control systems. Use environment variables or secure configuration management for production applications.