# OpenAI Chat Model Compatibility Error

## Problem Statement
Developers integrating OpenAI models frequently encounter errors when using chat models with incompatible API endpoints. The core issue occurs when:

- Using chat models such as `gpt-3.5-turbo` with the legacy `/v1/completions` endpoint
- Not matching API method names to the installed SDK version
- Using incorrect authentication methods for API keys
Common error messages:

```text
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
```

```text
openai.error.AuthenticationError: No API key provided.
```
## Recommended Solutions

### 1. Match Endpoints to Models
Chat models require the chat completions endpoint. For the current SDK (`openai>=1.0`), update your API calls:

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're a helpful assistant"},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```
For the legacy SDK (`openai<1.0`), the equivalent call is:

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message['content'])
```
### 2. Use Compatible Models (Legacy Approach)

If maintaining legacy completion endpoints, switch to compatible models:
```python
response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Hello!",
    max_tokens=100
)
```
> **NOT RECOMMENDED**
> Using `text-davinci-003` is deprecated. New applications should use `gpt-3.5-turbo-instruct` or newer models.
### 3. LangChain Implementation

For LangChain users, import the correct chat module:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)

# Avoid this approach:
# from langchain.llms import OpenAI
# llm = OpenAI()  # Doesn't support chat models
```
## API Endpoint Compatibility

Use this reference table to match models with endpoints:

| API Endpoint | Supported Models |
|---|---|
| `/v1/chat/completions` | `gpt-3.5-turbo`, `gpt-4`, fine-tuned chat models |
| `/v1/completions` | `gpt-3.5-turbo-instruct`, `davinci-002`, `babbage-002` |
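As a pre-flight sanity check, the table above can be encoded in a small helper. This is an illustrative sketch, not part of the OpenAI SDK, and the model lists are only the ones from the table; they would need updating as models change:

```python
def endpoint_for(model: str) -> str:
    """Map a model name to its compatible endpoint (illustrative, not exhaustive)."""
    # Instruct and base completion models use the legacy endpoint. Check these
    # first: "gpt-3.5-turbo-instruct" also starts with the chat prefix
    # "gpt-3.5-turbo", so order matters here.
    if model.endswith("-instruct") or model in ("davinci-002", "babbage-002"):
        return "/v1/completions"
    if model.startswith(("gpt-3.5-turbo", "gpt-4")):
        return "/v1/chat/completions"
    raise ValueError(f"Unknown model: {model}")

print(endpoint_for("gpt-3.5-turbo"))           # /v1/chat/completions
print(endpoint_for("gpt-3.5-turbo-instruct"))  # /v1/completions
```

Checking the model name before building the request turns the server-side `InvalidRequestError` into an immediate local failure.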
## Authentication Best Practices

Resolve "No API key provided" errors:

```python
import os

# Recommended method
openai_api_key = os.getenv("OPENAI_API_KEY")  # Using environment variables

# Alternative approaches (less secure)
# openai_api_key = "sk-..."  # Hardcoded key (not recommended)
```
> **SECURITY NOTE**
> Use environment variables or secret managers instead of hardcoding API keys.
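One way to fail fast on a missing key, rather than hitting `AuthenticationError` mid-request, is a small startup guard. A minimal sketch; the function name is ours, not from any SDK:

```python
import os


def require_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or fail with an actionable message."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"No API key provided. Set the {env_var} environment variable."
        )
    return key
```

Calling this once at startup (e.g. `client = OpenAI(api_key=require_api_key())`) surfaces a clear configuration error before any network request is made.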
## Version Compatibility Tips

- Avoid downgrading packages unless required for legacy integration
- Update to the latest OpenAI SDK: `pip install openai --upgrade`
- Verify supported models: `print(client.models.list())`
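Because the pre-1.0 `openai.ChatCompletion.create` call and the 1.0+ `client.chat.completions.create` call are mutually incompatible, code that must run against either SDK can branch on the installed major version. A minimal sketch (the helper name is ours):

```python
def is_v1_sdk(version_string: str) -> bool:
    """True if the version string belongs to the 1.x+ OpenAI SDK."""
    return int(version_string.split(".")[0]) >= 1


# Typically fed from the installed package, e.g.:
#   from importlib.metadata import version
#   is_v1_sdk(version("openai"))
print(is_v1_sdk("0.28.1"))  # False -> legacy openai.ChatCompletion.create
print(is_v1_sdk("1.3.5"))   # True  -> client.chat.completions.create
```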
For new applications, always:

- Use chat models with `/v1/chat/completions`
- Configure the `messages` parameter with role/content structure
- Retrieve responses from `choices[0].message.content`
> **AVOID SOLUTION BYPASSES**
> Downgrading packages (`openai==0.28.1`) creates technical debt and security risks. Prefer endpoint corrections instead.