
OpenAI Chat Model Compatibility Error

Problem Statement

Developers integrating OpenAI models frequently encounter errors when using chat models with incompatible API endpoints. The core issue occurs when:

  1. Using chat models like gpt-3.5-turbo with the legacy /v1/completions endpoint
  2. Not matching API method names to the SDK version
  3. Using incorrect authentication methods for API keys

Common error messages:

```
openai.error.InvalidRequestError: This is a chat model and not supported in the
v1/completions endpoint. Did you mean to use v1/chat/completions?

openai.error.AuthenticationError: No API key provided.
```

1. Match Endpoints to Models

Chat models require the chat completions endpoint. Update your API calls:

With the current SDK (v1.x):

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You're a helpful assistant"},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)
```

With the legacy SDK (0.x):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message["content"])
```

2. Use Compatible Models (Legacy Approach)

If you must stay on the legacy completions endpoint, switch to a model it supports (0.x SDK syntax shown):

```python
import os

import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Hello!",
    max_tokens=100
)
```

NOT RECOMMENDED

Using text-davinci-003 is deprecated. New applications should use gpt-3.5-turbo-instruct or newer models.

3. LangChain Implementation

For LangChain users, import the correct chat module:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
```

```python
# Avoid this approach:
from langchain.llms import OpenAI

llm = OpenAI()  # Wraps the completions endpoint; doesn't support chat models
```

API Endpoint Compatibility

Use this reference table to match models with endpoints:

| API Endpoint | Supported Models |
| --- | --- |
| `/v1/chat/completions` | `gpt-3.5-turbo`, `gpt-4`, fine-tuned chat models |
| `/v1/completions` | `gpt-3.5-turbo-instruct`, `davinci-002`, `babbage-002` |
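As a quick sanity check before sending a request, the table can be encoded as a small helper. This is an illustrative sketch, not part of the OpenAI SDK; `endpoint_for_model` and its model lists are assumptions that mirror the table above and are not exhaustive:

```python
# Hypothetical helper: pick the endpoint a model name expects.
CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}
LEGACY_COMPLETION_MODELS = {"gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"}

def endpoint_for_model(model: str) -> str:
    """Return the API endpoint for a model, per the table above."""
    # Check legacy models first: "gpt-3.5-turbo-instruct" also starts with "gpt-".
    if model in LEGACY_COMPLETION_MODELS:
        return "/v1/completions"
    # Chat models, including fine-tuned variants like "ft:gpt-3.5-turbo:org::id".
    if model in CHAT_MODELS or model.startswith(("gpt-", "ft:gpt-")):
        return "/v1/chat/completions"
    raise ValueError(f"Unknown model: {model}")
```

Routing through a check like this turns the runtime `InvalidRequestError` into an explicit, local failure.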

Authentication Best Practices

Resolve "No API key provided" errors:

```python
import os

# Recommended: read the key from an environment variable
openai_api_key = os.getenv("OPENAI_API_KEY")

# Less secure alternative
# openai_api_key = "sk-..."  # Hardcoded key (not recommended)
```

SECURITY NOTE

Use environment variables or secret managers instead of hardcoding API keys.
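One way to make a missing key fail fast with an actionable message, rather than surface later as an `AuthenticationError`, is a small guard like the following (a sketch; `require_api_key` is a hypothetical helper, not an SDK function):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment, failing fast if it is missing."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it or use a secret manager "
            "instead of hardcoding the key."
        )
    return key
```

Call it once at startup and pass the result to the client constructor, e.g. `OpenAI(api_key=require_api_key())`.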

Version Compatibility Tips

  1. Avoid downgrading packages unless required for legacy integration
  2. Update to latest OpenAI SDK: pip install openai --upgrade
  3. Verify supported models: print(client.models.list())

For new applications, always:

  1. Use chat models with /v1/chat/completions
  2. Configure the messages parameter with role/content structure
  3. Retrieve responses from choices[0].message.content
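The three rules above can be sketched as a single request-building step. `build_chat_request` is an illustrative helper, not an SDK function; its output is the keyword-argument dict you would unpack into `client.chat.completions.create(**req)`:

```python
def build_chat_request(
    user_text: str,
    system_text: str = "You are a helpful assistant.",
) -> dict:
    """Assemble keyword arguments for a chat completions call."""
    return {
        "model": "gpt-3.5-turbo",  # a chat model, for /v1/chat/completions
        "messages": [              # role/content structure
            {"role": "system", "content": system_text},
            {"role": "user", "content": user_text},
        ],
    }

# The response text then lives at response.choices[0].message.content
```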

AVOID SOLUTION BYPASSES

Downgrading packages (openai==0.28.1) creates technical debt and security risks. Prefer endpoint corrections instead.