
Connect to Azure AI Foundry



This page describes how consuming apps connect to an Azure AI Foundry resource that’s already modeled in your AppHost. For the AppHost API surface — adding a Foundry account, model deployments, Foundry projects, hosted agents, and more — see Azure AI Foundry hosting integration.

When you reference an Azure AI Foundry deployment resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Azure AI Foundry client integration for automatic dependency injection, health checks, and telemetry via Microsoft.Extensions.AI.

Aspire exposes each property as an environment variable named [RESOURCE]_[PROPERTY]. For instance, the Endpoint property of a resource called chat becomes CHAT_ENDPOINT.
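The naming convention can be sketched as a small helper (a sketch of the convention shown above, not an Aspire API; it assumes a plain uppercase transform, matching the chat → CHAT_ENDPOINT example):

```csharp
using System;

// Builds the environment-variable name Aspire uses for a resource property,
// e.g. ("chat", "Endpoint") -> "CHAT_ENDPOINT".
static string AspireVariableName(string resource, string property) =>
    $"{resource.ToUpperInvariant()}_{property.ToUpperInvariant()}";

Console.WriteLine(AspireVariableName("chat", "Endpoint")); // CHAT_ENDPOINT
```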

The Foundry account resource (FoundryResource) exposes the following connection properties:

| Property Name | Description |
| --- | --- |
| Endpoint | The base endpoint URI for the Azure AI Foundry account |
| ApiKey | The API key for authentication (when local auth is enabled) |

Example connection string:

Endpoint=https://my-foundry.services.ai.azure.com/;ApiKey=abc123...

The Foundry deployment resource inherits all properties from its parent account resource and adds:

| Property Name | Description |
| --- | --- |
| Deployment | The deployment name as configured in Azure AI Foundry |
| Model | The model identifier for inference requests, for instance gpt-5-mini |

Example connection string:

Endpoint=https://my-foundry.services.ai.azure.com/;Deployment=chat;Model=gpt-5-mini
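Because these connection strings use the standard semicolon-delimited Key=Value format, any generic connection-string parser can pick them apart; for example, .NET's built-in DbConnectionStringBuilder (a sketch for manual parsing, not part of the Aspire integration):

```csharp
using System.Data.Common;

var parser = new DbConnectionStringBuilder
{
    ConnectionString =
        "Endpoint=https://my-foundry.services.ai.azure.com/;Deployment=chat;Model=gpt-5-mini"
};

// Individual properties are available by (case-insensitive) key.
var endpoint   = (string)parser["Endpoint"];   // "https://my-foundry.services.ai.azure.com/"
var deployment = (string)parser["Deployment"]; // "chat"
var model      = (string)parser["Model"];      // "gpt-5-mini"
```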

The Foundry project resource (FoundryProjectResource) exposes:

| Property Name | Description |
| --- | --- |
| Endpoint | The project-scoped endpoint URI |
| ApiKey | The API key for authentication (when local auth is enabled) |
| Project | The project name within the Foundry account |

Example connection string:

Endpoint=https://my-foundry.services.ai.azure.com/;Project=my-project

Pick the language your consuming app is written in. Each example assumes your AppHost adds a Foundry deployment resource named chat and references it from the consuming app.

For C# apps, the recommended approach is the Aspire Azure AI Foundry client integration via the 📦 Aspire.Azure.AI.Inference NuGet package. It registers a ChatCompletionsClient through dependency injection and, optionally, registers an IChatClient via Microsoft.Extensions.AI. If you’d rather read environment variables directly, see the Read environment variables section at the end of this tab.

Install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:

.NET CLI — Add Aspire.Azure.AI.Inference package
dotnet add package Aspire.Azure.AI.Inference

In Program.cs, call AddAzureAIInferenceChatClient on your IHostApplicationBuilder to register a ChatCompletionsClient:

C# — Program.cs
builder.AddAzureAIInferenceChatClient(connectionName: "chat");

Resolve the client through dependency injection:

C# — ExampleService.cs
public class ExampleService(ChatCompletionsClient client)
{
    // Use client for chat completions...
}

Add an IChatClient via Microsoft.Extensions.AI


To also register an IChatClient from Microsoft.Extensions.AI, chain AsIChatClient():

C# — Program.cs
builder.AddAzureAIInferenceChatClient("chat")
    .AsIChatClient();

Resolve the abstraction through dependency injection:

C# — ExampleService.cs
public class ExampleService(IChatClient chatClient)
{
    public async Task<string> CompleteAsync(string prompt)
    {
        var response = await chatClient.CompleteAsync(prompt);
        return response.Message.Text ?? string.Empty;
    }
}

To register multiple ChatCompletionsClient instances with different connection names, use AddKeyedAzureAIInferenceChatClient:

C# — Program.cs
builder.AddKeyedAzureAIInferenceChatClient(name: "chat");
builder.AddKeyedAzureAIInferenceChatClient(name: "embeddings");

Then resolve each instance by key:

C# — ExampleService.cs
public class ExampleService(
    [FromKeyedServices("chat")] ChatCompletionsClient chatClient,
    [FromKeyedServices("embeddings")] ChatCompletionsClient embeddingsClient)
{
    // Use clients...
}

The Aspire Azure AI Inference client integration supports multiple configuration approaches.

Connection strings. When using a connection string from the ConnectionStrings configuration section, pass the connection name to AddAzureAIInferenceChatClient:

C# — Program.cs
builder.AddAzureAIInferenceChatClient("chat");

The connection string is resolved from the ConnectionStrings section:

JSON — appsettings.json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://my-foundry.services.ai.azure.com/;Deployment=chat;Model=gpt-5-mini"
  }
}

Configuration providers. The integration supports Microsoft.Extensions.Configuration using the Aspire:Azure:AI:Inference key:

JSON — appsettings.json
{
  "Aspire": {
    "Azure": {
      "AI": {
        "Inference": {
          "DisableTracing": false,
          "DisableMetrics": false
        }
      }
    }
  }
}

The Aspire Azure AI Inference client integration enables health checks by default, verifying that the endpoint is reachable. Integration with the /health HTTP endpoint ensures that all registered health checks pass before the app is considered ready to accept traffic.

The Aspire Azure AI Inference client integration automatically configures logging, tracing, and metrics through OpenTelemetry.

Logging categories:

  • Azure.AI.Inference

Tracing activities:

  • Azure.AI.Inference.*

Metrics:

  • Azure.AI.Inference.*

If you prefer not to use the Aspire client integration, you can read the Aspire-injected connection properties from the environment and construct a ChatCompletionsClient directly using the 📦 Azure.AI.Inference NuGet package:

C# — Program.cs
using Azure;
using Azure.AI.Inference;

// Connection properties injected by Aspire for a resource named "chat".
var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_APIKEY");
var deployment = Environment.GetEnvironmentVariable("CHAT_DEPLOYMENT");

var client = new ChatCompletionsClient(
    new Uri(endpoint!),
    new AzureKeyCredential(apiKey!));

var response = await client.CompleteAsync(new ChatCompletionsOptions
{
    Model = deployment,
    Messages =
    {
        new ChatRequestUserMessage("Hello from Aspire!")
    }
});

Console.WriteLine(response.Value.Choices[0].Message.Content);
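When reading the variables directly, it's worth failing fast at startup if one is missing rather than failing on the first request. A minimal guard (RequireEnv is a local helper name chosen here, not an Aspire API; the SetEnvironmentVariable call only stands in for the injection Aspire performs at run time):

```csharp
using System;

// Throws a descriptive error if a referenced resource's variable is absent.
static string RequireEnv(string name) =>
    Environment.GetEnvironmentVariable(name)
    ?? throw new InvalidOperationException(
        $"Environment variable '{name}' is not set; " +
        "check that the AppHost references this resource from the consuming app.");

// In a consuming app this value is injected by Aspire at run time.
Environment.SetEnvironmentVariable("CHAT_ENDPOINT", "https://my-foundry.services.ai.azure.com/");

var endpoint = RequireEnv("CHAT_ENDPOINT");
Console.WriteLine(endpoint); // https://my-foundry.services.ai.azure.com/
```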