# OpenAI integration
OpenAI provides access to chat/completions, embeddings, image, and audio models via a REST API. The OpenAI integration lets you:
- Model an OpenAI account (endpoint + API key) once in the AppHost.
- Add one or more model resources that compose their connection strings from the parent.
- Reference those model resources from projects to get strongly-named connection strings.
- Consume those connection strings with the `Aspire.OpenAI` component to obtain an `OpenAIClient` and (optionally) an `IChatClient`.
## Hosting integration

The hosting integration models OpenAI with two resource types:
- `OpenAIResource`: Parent that holds the shared API key and base endpoint (defaults to `https://api.openai.com/v1`).
- `OpenAIModelResource`: Child that represents a specific model and composes its connection string from the parent (`Endpoint` + `Key` + `Model`).
To access these types and APIs, install the 📦 Aspire.Hosting.OpenAI NuGet package in your AppHost project:
```shell
aspire add openai
```

The Aspire CLI is interactive; select the correct result when prompted:

```plaintext
Select an integration to add:

> openai (Aspire.Hosting.OpenAI)
> Other results listed as selectable options...
```

Alternatively, in a file-based app, add the package directive:

```csharp
#:package Aspire.Hosting.OpenAI@*
```

Or add a package reference to your project file:

```xml
<PackageReference Include="Aspire.Hosting.OpenAI" Version="*" />
```

### Add an OpenAI parent resource
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var openai = builder.AddOpenAI("openai");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(openai);

// After adding all resources, run the app...
```

### Add OpenAI model resources
Add one or more model children beneath the parent and reference them from projects:
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var openai = builder.AddOpenAI("openai");

var chat = openai.AddModel("chat", "gpt-4o-mini");
var embeddings = openai.AddModel("embeddings", "text-embedding-3-small");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);

// After adding all resources, run the app...
```

Referencing `chat` passes a connection string named `chat` to the project. Multiple models can share the single API key and endpoint via the parent resource.
### Use default API key parameter

Calling `AddOpenAI("openai")` creates a secret parameter named `openai-openai-apikey`. Aspire resolves its value in this order:
1. The `Parameters:openai-openai-apikey` configuration key (user secrets, `appsettings.*`, or environment variables).
2. The `OPENAI_API_KEY` environment variable.
If neither source provides a value, startup throws an exception. Provide the key via user-secrets:
```shell
dotnet user-secrets set Parameters:openai-openai-apikey sk-your-api-key
```

### Use custom API key parameter
Replace the default parameter by creating your own secret parameter and calling `WithApiKey` on the parent:
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var apiKey = builder.AddParameter("my-api-key", secret: true);

var openai = builder.AddOpenAI("openai")
                    .WithApiKey(apiKey);

var chat = openai.AddModel("chat", "gpt-4o-mini");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);
```

### Add a custom endpoint
Override the default endpoint (for example, to use a proxy or a compatible gateway):
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var openai = builder.AddOpenAI("openai")
                    .WithEndpoint("https://my-gateway.example.com/v1");

var chat = openai.AddModel("chat", "gpt-4o-mini");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);
```

### Health checks
Add an optional single-run health check per model when diagnosing issues:
```csharp
var chat = builder.AddOpenAI("openai")
                  .AddModel("chat", "gpt-4o-mini")
                  .WithHealthCheck();
```

The model health check validates endpoint reachability, API key validity (a 401 response), and model existence (a 404 response). It executes only once per application instance to limit rate-limit impact. A status-page check against `https://status.openai.com/api/v2/status.json` is registered automatically for each parent resource.
### Available models

Common identifiers:
- `gpt-5`
- `gpt-4o-mini`
- `gpt-4o`
- `gpt-4-turbo`
- `gpt-realtime`
- `text-embedding-3-small`
- `text-embedding-3-large`
- `dall-e-3`
- `whisper-1`
For more information, see the OpenAI models documentation.
## Connection properties

When you reference an OpenAI resource using `WithReference`, the following connection properties are made available to the consuming project:
### OpenAI

The OpenAI resource exposes the following connection properties:
| Property Name | Description |
|---|---|
| `Endpoint` | The base endpoint URI for the OpenAI API, for example `https://api.openai.com/v1` |
| `Uri` | The endpoint URI (same as `Endpoint`), for example `https://api.openai.com/v1` |
| `Key` | The API key for authentication |
Example properties:
```plaintext
Uri: https://api.openai.com/v1
Key: sk-proj-abc123...
```

### OpenAI model
The OpenAI model resource combines the parent properties above and adds the following connection property:
| Property Name | Description |
|---|---|
| `ModelName` | The model identifier for inference requests, for example `gpt-4o-mini` |
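In the consuming project, a referenced resource surfaces as a standard .NET connection string, so it can also be read directly from configuration. A minimal sketch, assuming a project that referenced the `chat` model resource (the `WebApplication` setup is abbreviated):

```csharp
var builder = WebApplication.CreateBuilder(args);

// WithReference(chat) makes the composed connection string available to this
// project under the name "chat", so the standard configuration API can read it.
var connectionString = builder.Configuration.GetConnectionString("chat");
// e.g. "Endpoint=https://api.openai.com/v1;Key=...;Model=gpt-4o-mini"
```

In most cases you would use the `Aspire.OpenAI` client integration below instead of parsing this string yourself.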
## Client integration

To get started with the Aspire OpenAI client integration, install the 📦 Aspire.OpenAI NuGet package:
```shell
dotnet add package Aspire.OpenAI
```

Or, in a file-based app, add the package directive:

```csharp
#:package Aspire.OpenAI@*
```

Or add a package reference to your project file:

```xml
<PackageReference Include="Aspire.OpenAI" Version="*" />
```

### Add an OpenAI client
In the `Program.cs` file of your client-consuming project, use `AddOpenAIClient` to register an `OpenAIClient` for dependency injection. The method requires a connection name parameter:
```csharp
builder.AddOpenAIClient(connectionName: "chat");
```

After adding the `OpenAIClient`, you can retrieve the client instance using dependency injection:
```csharp
public class ExampleService(OpenAIClient client)
{
    // Use client...
}
```

### Add OpenAI client with registered IChatClient
```csharp
builder.AddOpenAIClient("chat")
       .AddChatClient(); // Model inferred from connection string (Model=...)
```

If only a parent resource was defined (no child model), provide the model name explicitly:
```csharp
builder.AddOpenAIClient("openai")
       .AddChatClient("gpt-4o-mini");
```

`AddChatClient` optionally accepts a model/deployment name; if omitted, the model comes from the connection string's `Model` entry. Inject `OpenAIClient` or `IChatClient` as needed.
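As a sketch of consuming the registered client, assuming the `Microsoft.Extensions.AI` abstractions that `AddChatClient` builds on (the `ChatService` name is illustrative, not part of the integration):

```csharp
using Microsoft.Extensions.AI;

// Illustrative service: the IChatClient registered by AddChatClient
// is supplied through dependency injection.
public class ChatService(IChatClient chatClient)
{
    public async Task<string> AskAsync(string question)
    {
        // GetResponseAsync sends the prompt to the configured model.
        ChatResponse response = await chatClient.GetResponseAsync(question);
        return response.Text;
    }
}
```

Because the service depends only on `IChatClient`, it stays decoupled from the concrete OpenAI SDK types.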
## Configuration

The OpenAI library provides multiple options to configure the OpenAI connection. Either an `Endpoint` or a `ConnectionString` is required.
### Use a connection string

Resolved connection string shapes:
Parent (no model):

```plaintext
Endpoint={endpoint};Key={api_key}
```

Model child:

```plaintext
Endpoint={endpoint};Key={api_key};Model={model_name}
```

### Use configuration providers
Configure via `Aspire:OpenAI` keys (global) and `Aspire:OpenAI:{connectionName}` (per named client). Example `appsettings.json`:
```json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://api.openai.com/v1;Key=${OPENAI_API_KEY};Model=gpt-4o-mini"
  },
  "Aspire": {
    "OpenAI": {
      "DisableTracing": false,
      "DisableMetrics": false,
      "ClientOptions": {
        "UserAgentApplicationId": "myapp",
        "NetworkTimeout": "00:00:30"
      }
    }
  }
}
```

Inline configuration:
```csharp
builder.AddOpenAIClient("chat", settings => settings.DisableTracing = true);

builder.AddOpenAIClient("chat", configureOptions: o => o.NetworkTimeout = TimeSpan.FromSeconds(30));
```

## Observability and telemetry
### Logging

- Log categories: `OpenAI.*`

### Tracing

- Activity sources: `OpenAI.*` (when telemetry is enabled and not disabled)

### Metrics

- Meter: `OpenAI.*` (when telemetry is enabled and not disabled)
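As a sketch of subscribing to these sources in the consuming project, assuming the standard OpenTelemetry .NET hosting APIs (Aspire service defaults typically wire this up for you, so treat this as illustrative rather than required):

```csharp
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

// Register the OpenAI activity sources and meters with OpenTelemetry.
// Both AddSource and AddMeter accept wildcard patterns in recent
// OpenTelemetry .NET releases.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing.AddSource("OpenAI.*"))
    .WithMetrics(metrics => metrics.AddMeter("OpenAI.*"));
```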