Connect to OpenAI



This page describes how consuming apps connect to an OpenAI model resource that’s already modeled in your AppHost. For the AppHost API surface — adding an OpenAI parent resource, model resources, API key parameters, and endpoint overrides — see OpenAI hosting integration.

When you reference an OpenAI model resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Aspire OpenAI client integration for automatic dependency injection, health checks, and telemetry.

Aspire exposes each property as an environment variable named [RESOURCE]_[PROPERTY]. For instance, the Endpoint property of a resource called chat becomes CHAT_ENDPOINT.

The OpenAI parent resource exposes the following connection properties:

| Property Name | Description |
|---------------|-------------|
| `Endpoint` | The base endpoint URI for the OpenAI API, in the format `https://api.openai.com/v1` |
| `Uri` | The endpoint URI (same as `Endpoint`), in the format `https://api.openai.com/v1` |
| `Key` | The API key for authentication |

Example connection string:

Endpoint=https://api.openai.com/v1;Key=sk-proj-abc123...

The OpenAI model resource inherits all properties from its parent resource and adds:

| Property Name | Description |
|---------------|-------------|
| `ModelName` | The model identifier for inference requests, for instance `gpt-4o-mini` |

Example connection string:

Endpoint=https://api.openai.com/v1;Key=sk-proj-abc123...;Model=gpt-4o-mini

Each example below assumes your AppHost adds an OpenAI model resource named chat and references it from the consuming app.

For C# apps, the recommended approach is the Aspire OpenAI client integration. It registers an OpenAIClient through dependency injection and, optionally, registers an IChatClient or IEmbeddingGenerator via Microsoft.Extensions.AI. If you'd rather read environment variables directly, see the Read environment variables section at the end of this article.

Install the 📦 Aspire.OpenAI NuGet package in the consuming client project:

.NET CLI — Add Aspire.OpenAI package
dotnet add package Aspire.OpenAI

In Program.cs, call AddOpenAIClient on your IHostApplicationBuilder to register an OpenAIClient:

C# — Program.cs
builder.AddOpenAIClient(connectionName: "chat");

Resolve the client through dependency injection:

C# — ExampleService.cs
public class ExampleService(OpenAIClient client)
{
    // Use client...
}

Call AddChatClient after AddOpenAIClient to also register an IChatClient from Microsoft.Extensions.AI. The model name is inferred from the connection string’s Model property:

C# — Program.cs
builder.AddOpenAIClient("chat")
    .AddChatClient();

If only a parent resource was defined (no child model resource), provide the model name explicitly:

C# — Program.cs
builder.AddOpenAIClient("openai")
    .AddChatClient("gpt-4o-mini");

Resolve the IChatClient through dependency injection:

C# — ExampleService.cs
public class ExampleService(IChatClient chatClient)
{
    // Use chatClient...
}
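With the IChatClient registered, sending a prompt is a short call. A minimal sketch using the Microsoft.Extensions.AI abstractions (the service name, prompt text, and response handling are illustrative; API names reflect recent Microsoft.Extensions.AI releases):

```csharp
using Microsoft.Extensions.AI;

public class ChatService(IChatClient chatClient)
{
    public async Task<string> AskAsync(string prompt)
    {
        // GetResponseAsync sends the prompt to the model configured on the connection.
        ChatResponse response = await chatClient.GetResponseAsync(prompt);
        return response.Text;
    }
}
```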

To register multiple OpenAIClient instances with different connection names, use AddKeyedOpenAIClient:

C# — Program.cs
builder.AddKeyedOpenAIClient(name: "chat");
builder.AddKeyedOpenAIClient(name: "embeddings");

Then resolve each instance by key:

C# — ExampleService.cs
public class ExampleService(
    [FromKeyedServices("chat")] OpenAIClient chatClient,
    [FromKeyedServices("embeddings")] OpenAIClient embeddingsClient)
{
    // Use clients...
}
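Keyed clients can also be resolved imperatively from an IServiceProvider when constructor injection isn't convenient. A minimal sketch using the keyed-services API from Microsoft.Extensions.DependencyInjection:

```csharp
using Microsoft.Extensions.DependencyInjection;
using OpenAI;

// GetRequiredKeyedService resolves the instance registered under the given key
// and throws if no matching registration exists.
var chatClient = serviceProvider.GetRequiredKeyedService<OpenAIClient>("chat");
var embeddingsClient = serviceProvider.GetRequiredKeyedService<OpenAIClient>("embeddings");
```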

The Aspire OpenAI client integration offers multiple ways to provide configuration.

Connection strings. When using a connection string from the ConnectionStrings configuration section, pass the connection name to AddOpenAIClient:

C# — Program.cs
builder.AddOpenAIClient("chat");

The connection string is resolved from the ConnectionStrings section:

JSON — appsettings.json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://api.openai.com/v1;Key=${OPENAI_API_KEY};Model=gpt-4o-mini"
  }
}

Configuration providers. The client integration supports Microsoft.Extensions.Configuration. It loads OpenAISettings from appsettings.json (or any other configuration source) by using the Aspire:OpenAI key (global) or Aspire:OpenAI:{connectionName} (per named client):

JSON — appsettings.json
{
  "Aspire": {
    "OpenAI": {
      "DisableTracing": false,
      "DisableMetrics": false,
      "ClientOptions": {
        "UserAgentApplicationId": "myapp",
        "NetworkTimeout": "00:00:30"
      }
    }
  }
}

Inline delegates. Pass an Action<OpenAISettings> to configure settings inline:

C# — Program.cs
builder.AddOpenAIClient("chat", settings => settings.DisableTracing = true);
builder.AddOpenAIClient("chat", configureOptions: o => o.NetworkTimeout = TimeSpan.FromSeconds(30));

Unlike most Aspire client integrations, which enable health checks by default, the OpenAI client integration doesn't register a runtime health check on its own; health checks are opt-in per model at the hosting level (see Add health check per model).

The Aspire OpenAI client integration automatically configures logging, tracing, and metrics through OpenTelemetry.

Logging categories:

  • OpenAI.*

Tracing activities:

  • OpenAI.* (when OpenTelemetry is enabled)

Metrics:

  • OpenAI.* meter (when OpenTelemetry is enabled)
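If you wire up OpenTelemetry yourself rather than through the Aspire service defaults, the OpenAI sources can be subscribed to explicitly. A hedged sketch (source and meter names follow the list above; wildcard subscriptions require a recent OpenTelemetry .NET SDK):

```csharp
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing.AddSource("OpenAI.*"))   // OpenAI activity sources
    .WithMetrics(metrics => metrics.AddMeter("OpenAI.*"));   // OpenAI meters
```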

If you prefer not to use the Aspire client integration, you can read the Aspire-injected connection properties from the environment and construct an OpenAIClient directly:

C# — Program.cs
using System.ClientModel;
using OpenAI;

var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_KEY");
var modelName = Environment.GetEnvironmentVariable("CHAT_MODELNAME");

// ApiKeyCredential comes from the System.ClientModel namespace.
var client = new OpenAIClient(new ApiKeyCredential(apiKey!), new OpenAIClientOptions
{
    Endpoint = new Uri(endpoint!)
});
var chatClient = client.GetChatClient(modelName);
// Use chatClient...
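From there, the OpenAI SDK's ChatClient can be used directly. A minimal sketch of a completion call (the prompt and output handling are illustrative; ChatCompletion requires `using OpenAI.Chat;`):

```csharp
using OpenAI.Chat;

// CompleteChatAsync sends the messages to the model and returns the completion;
// a string implicitly converts to a user chat message.
ChatCompletion completion = await chatClient.CompleteChatAsync("Say hello in one sentence.");
Console.WriteLine(completion.Content[0].Text);
```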