# Connect to OpenAI
This page describes how consuming apps connect to an OpenAI model resource that’s already modeled in your AppHost. For the AppHost API surface — adding an OpenAI parent resource, model resources, API key parameters, and endpoint overrides — see OpenAI hosting integration.
When you reference an OpenAI model resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Aspire OpenAI client integration for automatic dependency injection, health checks, and telemetry.
## Connection properties

Aspire exposes each connection property as an environment variable named `[RESOURCE]_[PROPERTY]`. For instance, the `Endpoint` property of a resource called `chat` becomes `CHAT_ENDPOINT`.
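Applying that naming rule, a model resource named `chat` would surface variables like the following (the values here are illustrative placeholders; each variable corresponds to one of the connection properties described in the next sections):

```
CHAT_ENDPOINT=https://api.openai.com/v1
CHAT_URI=https://api.openai.com/v1
CHAT_KEY=sk-proj-...
CHAT_MODELNAME=gpt-4o-mini
```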
## OpenAI parent resource

The OpenAI parent resource exposes the following connection properties:

| Property Name | Description |
|---|---|
| `Endpoint` | The base endpoint URI for the OpenAI API, in the format `https://api.openai.com/v1` |
| `Uri` | The endpoint URI (same as `Endpoint`), in the format `https://api.openai.com/v1` |
| `Key` | The API key for authentication |

Example connection string:

```
Endpoint=https://api.openai.com/v1;Key=sk-proj-abc123...
```

## OpenAI model resource

The OpenAI model resource inherits all properties from its parent resource and adds:

| Property Name | Description |
|---|---|
| `ModelName` | The model identifier for inference requests, for instance `gpt-4o-mini` |

Example connection string:

```
Endpoint=https://api.openai.com/v1;Key=sk-proj-abc123...;Model=gpt-4o-mini
```

## Connect from your app

Pick the language your consuming app is written in. Each example assumes your AppHost adds an OpenAI model resource named `chat` and references it from the consuming app.
For C# apps, the recommended approach is the Aspire OpenAI client integration. It registers an OpenAIClient through dependency injection and, optionally, registers an IChatClient or IEmbeddingGenerator via Microsoft.Extensions.AI. If you’d rather read environment variables directly, see the Read environment variables section at the end of this tab.
### Install the client integration

Install the 📦 Aspire.OpenAI NuGet package in the consuming project:

```shell
dotnet add package Aspire.OpenAI
```

In a file-based app, use a package directive instead:

```csharp
#:package Aspire.OpenAI@*
```

Or reference the package directly in the project file:

```xml
<PackageReference Include="Aspire.OpenAI" Version="*" />
```

### Add an OpenAI client

In Program.cs, call `AddOpenAIClient` on your `IHostApplicationBuilder` to register an `OpenAIClient`:
```csharp
builder.AddOpenAIClient(connectionName: "chat");
```

Resolve the client through dependency injection:

```csharp
public class ExampleService(OpenAIClient client)
{
    // Use client...
}
```

### Add a chat client
Call `AddChatClient` after `AddOpenAIClient` to also register an `IChatClient` from Microsoft.Extensions.AI. The model name is inferred from the connection string's `Model` property:

```csharp
builder.AddOpenAIClient("chat")
    .AddChatClient();
```

If only a parent resource was defined (no child model resource), provide the model name explicitly:

```csharp
builder.AddOpenAIClient("openai")
    .AddChatClient("gpt-4o-mini");
```

Resolve the `IChatClient` through dependency injection:

```csharp
public class ExampleService(IChatClient chatClient)
{
    // Use chatClient...
}
```

### Add keyed OpenAI clients
To register multiple `OpenAIClient` instances with different connection names, use `AddKeyedOpenAIClient`:

```csharp
builder.AddKeyedOpenAIClient(name: "chat");
builder.AddKeyedOpenAIClient(name: "embeddings");
```

Then resolve each instance by key:

```csharp
public class ExampleService(
    [FromKeyedServices("chat")] OpenAIClient chatClient,
    [FromKeyedServices("embeddings")] OpenAIClient embeddingsClient)
{
    // Use clients...
}
```

### Configuration
The Aspire OpenAI client integration offers multiple ways to provide configuration.

**Connection strings.** When using a connection string from the `ConnectionStrings` configuration section, pass the connection name to `AddOpenAIClient`:

```csharp
builder.AddOpenAIClient("chat");
```

The connection string is resolved from the `ConnectionStrings` section:

```json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://api.openai.com/v1;Key=${OPENAI_API_KEY};Model=gpt-4o-mini"
  }
}
```

**Configuration providers.** The client integration supports Microsoft.Extensions.Configuration. It loads `OpenAISettings` from appsettings.json (or any other configuration source) using the `Aspire:OpenAI` key (global) or `Aspire:OpenAI:{connectionName}` (per named client):

```json
{
  "Aspire": {
    "OpenAI": {
      "DisableTracing": false,
      "DisableMetrics": false,
      "ClientOptions": {
        "UserAgentApplicationId": "myapp",
        "NetworkTimeout": "00:00:30"
      }
    }
  }
}
```

**Inline delegates.** Pass an `Action<OpenAISettings>` to configure settings inline, or use the `configureOptions` delegate to adjust the underlying client options:

```csharp
builder.AddOpenAIClient("chat", settings => settings.DisableTracing = true);
builder.AddOpenAIClient(
    "chat",
    configureOptions: options => options.NetworkTimeout = TimeSpan.FromSeconds(30));
```

### Client integration health checks
Many Aspire client integrations enable health checks by default, but the OpenAI client integration does not register a runtime health check on its own; health checks are opt-in per model at the hosting level (see Add health check per model).
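If you want a basic readiness signal in the consuming app anyway, one option is a delegate-based check registered with the standard health-check APIs. This sketch is illustrative and not part of the integration; the check name and probe logic are assumptions:

```csharp
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Illustrative only: report unhealthy when the Aspire-injected endpoint
// variable is missing. This does not call the OpenAI API, so it won't
// catch an invalid or revoked key.
builder.Services.AddHealthChecks()
    .AddCheck("openai-config", () =>
        string.IsNullOrEmpty(Environment.GetEnvironmentVariable("CHAT_ENDPOINT"))
            ? HealthCheckResult.Unhealthy("CHAT_ENDPOINT is not set")
            : HealthCheckResult.Healthy());
```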
### Observability and telemetry

The Aspire OpenAI client integration automatically configures logging, tracing, and metrics through OpenTelemetry.
- Logging categories: `OpenAI.*`
- Tracing activities: `OpenAI.*` (when OpenTelemetry is enabled)
- Metrics: the `OpenAI.*` meter (when OpenTelemetry is enabled)
### Read environment variables in C#

If you prefer not to use the Aspire client integration, you can read the Aspire-injected connection properties from the environment and construct an `OpenAIClient` directly:
```csharp
using System.ClientModel;
using OpenAI;

var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_KEY");
var modelName = Environment.GetEnvironmentVariable("CHAT_MODELNAME");

var client = new OpenAIClient(new ApiKeyCredential(apiKey!), new OpenAIClientOptions
{
    Endpoint = new Uri(endpoint!)
});

var chatClient = client.GetChatClient(modelName);
// Use chatClient...
```

Use go-openai, a widely used OpenAI client for Go:

```shell
go get github.com/sashabaranov/go-openai
```

Read the injected environment variables and connect:
```go
package main

import (
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Read the Aspire-injected connection properties
	apiKey := os.Getenv("CHAT_KEY")
	endpoint := os.Getenv("CHAT_ENDPOINT")
	modelName := os.Getenv("CHAT_MODELNAME")

	config := openai.DefaultConfig(apiKey)
	config.BaseURL = endpoint

	client := openai.NewClientWithConfig(config)

	resp, err := client.CreateChatCompletion(
		context.Background(),
		openai.ChatCompletionRequest{
			Model: modelName,
			Messages: []openai.ChatCompletionMessage{
				{Role: openai.ChatMessageRoleUser, Content: "Hello!"},
			},
		},
	)
	if err != nil {
		panic(err)
	}

	fmt.Println(resp.Choices[0].Message.Content)
}
```

Install the official OpenAI Python library:
```shell
pip install openai
```

Read the injected environment variables and connect:

```python
import os

from openai import OpenAI

# Read the Aspire-injected connection properties
client = OpenAI(
    api_key=os.environ["CHAT_KEY"],
    base_url=os.environ["CHAT_ENDPOINT"],
)

model_name = os.environ["CHAT_MODELNAME"]

response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```

Install the official OpenAI Node.js library:
npm install openaiRead the injected environment variables and connect:
import OpenAI from 'openai';
// Read Aspire-injected connection propertiesconst client = new OpenAI({ apiKey: process.env.CHAT_KEY, baseURL: process.env.CHAT_ENDPOINT,});
const modelName = process.env.CHAT_MODELNAME ?? 'gpt-4o-mini';
const response = await client.chat.completions.create({ model: modelName, messages: [{ role: 'user', content: 'Hello!' }],});
console.log(response.choices[0].message.content);