Switching Between OpenAI and Azure OpenAI Endpoints with Python

While both OpenAI and Azure OpenAI Service rely on a common Python client library, there are small changes you need to make to your code when switching back and forth between endpoints. In this article, we’ll walk through the common changes and differences you’ll experience when working across OpenAI and Azure OpenAI.

You’ll encounter minor code adjustments when transitioning between OpenAI and Azure OpenAI. These adjustments primarily revolve around specifying the model or deployment names:


Model Naming:

  • In OpenAI, you use the model keyword argument to specify the desired language model (e.g., “gpt-3.5-turbo-instruct”).

  • In Azure OpenAI, you refer to deployment names instead of model names. Even though the parameter is still called model, the value you pass is the name of your deployment.


Authentication:

  • Both services require proper authentication. You can use API keys or Microsoft Entra ID.

  • Storing API keys as environment variables enhances security and portability.


How to Switch Between OpenAI and Azure OpenAI Endpoints with Python

Ensure you have the following prerequisites:

  1. API Keys or Microsoft Entra ID: You can authenticate using API keys or Microsoft Entra ID. We recommend using environment variables for secure authentication.

  2. Python Environment: You have Python installed and set up in your development environment.


Authentication

API Key Authentication

For both OpenAI and Azure OpenAI, you’ll need to set up API key authentication. Here’s how you can do it:


OpenAI

import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Azure OpenAI

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-12-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

Microsoft Entra ID Authentication

If you prefer using Microsoft Entra ID, follow these steps:

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
api_version = "2023-12-01-preview"
endpoint = "https://my-resource.openai.azure.com"

client = AzureOpenAI(
    api_version=api_version,
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider
)

Importance of Environment Variables:

  • Security: Storing API keys directly in your code can be risky. Environment variables provide a more secure way to manage sensitive information.

  • Portability: With environment variables, your code remains consistent across different environments (development, staging, production).

  • Ease of Maintenance: If you need to update your API keys, you can do so without modifying your code. Just update the environment variables.
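
The points above can be wrapped in a small helper. As a minimal sketch (the helper name require_env is our own, not part of any library), this reads a key from the environment and fails fast with a clear message when it is missing:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"{name} environment variable is not set")
    return value

# e.g. client = OpenAI(api_key=require_env("OPENAI_API_KEY"))
```

Failing at startup with the variable's name is easier to debug than an authentication error surfacing later, deep inside a request.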


Keyword Argument for Model

Let's explore how OpenAI and Azure OpenAI handle model specification and deployment names:


Model Keyword Argument:

When using OpenAI, you can specify the desired language model using the model keyword argument in your Python code.


For example:

completion = client.completions.create(model="gpt-3.5-turbo-instruct", prompt="<prompt>")

Here, "gpt-3.5-turbo-instruct" represents the specific model you want to use for text generation or other tasks.


Azure OpenAI Deployment Names:

In Azure OpenAI, the focus shifts from model names to deployment names: even though the parameter is still called model, the value you pass is your deployment name.


For instance:

chat_completion = client.chat.completions.create(
    model="my-gpt-35-deployment",  # a deployment name you created, not a model name
    messages=[{"role": "user", "content": "<prompt>"}]
)

Here, "my-gpt-35-deployment" stands in for whatever name you gave your deployment; the request targets that specific deployment within your Azure OpenAI resource.


Always Specify Deployment Names:

  • Azure OpenAI always resolves requests by deployment name, never by raw model name.

  • Pass the deployment name in the model parameter so the request targets the correct deployment.


Conclusion

OpenAI and Azure OpenAI rely on a common Python client library, making development more consistent and straightforward. When transitioning between endpoints:

  • OpenAI uses the model keyword argument to specify the language model.

  • Azure OpenAI requires deployment names instead of model names, even when using the model parameter.



