Connect to Azure AI Foundry
This page describes how consuming apps connect to an Azure AI Foundry resource that’s already modeled in your AppHost. For the AppHost API surface — adding a Foundry account, model deployments, Foundry projects, hosted agents, and more — see Azure AI Foundry hosting integration.
When you reference an Azure AI Foundry deployment resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Azure AI Foundry client integration for automatic dependency injection, health checks, and telemetry via Microsoft.Extensions.AI.
Connection properties
Aspire exposes each property as an environment variable named [RESOURCE]_[PROPERTY]. For instance, the Endpoint property of a resource called chat becomes CHAT_ENDPOINT.
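The naming rule can be sketched as a small helper. This is an illustration only — Aspire computes these names itself, and the `aspire_env_var` helper here is hypothetical:

```python
import os


def aspire_env_var(resource: str, prop: str) -> str:
    """Build the [RESOURCE]_[PROPERTY] environment-variable name that
    Aspire uses when injecting a connection property."""
    return f"{resource.upper()}_{prop.upper()}"


# The Endpoint property of a resource named "chat":
name = aspire_env_var("chat", "Endpoint")
print(name)  # CHAT_ENDPOINT

# Reading it in a consuming app (None if the variable isn't set):
endpoint = os.environ.get(name)
```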
Foundry account resource
The Foundry account resource (FoundryResource) exposes the following connection properties:
| Property Name | Description |
|---|---|
| Endpoint | The base endpoint URI for the Azure AI Foundry account |
| ApiKey | The API key for authentication (when local auth is enabled) |
Example connection string:
```
Endpoint=https://my-foundry.services.ai.azure.com/;ApiKey=abc123...
```

Foundry deployment resource
The Foundry deployment resource inherits all properties from its parent account resource and adds:
| Property Name | Description |
|---|---|
| Deployment | The deployment name as configured in Azure AI Foundry |
| Model | The model identifier for inference requests, for instance gpt-5-mini |
Example connection string:
```
Endpoint=https://my-foundry.services.ai.azure.com/;Deployment=chat;Model=gpt-5-mini
```

Foundry project resource
The Foundry project resource (FoundryProjectResource) exposes:
| Property Name | Description |
|---|---|
| Endpoint | The project-scoped endpoint URI |
| ApiKey | The API key for authentication (when local auth is enabled) |
| Project | The project name within the Foundry account |
Example connection string:
```
Endpoint=https://my-foundry.services.ai.azure.com/;Project=my-project
```

Connect from your app
Pick the language your consuming app is written in. Each example assumes your AppHost adds a Foundry deployment resource named chat and references it from the consuming app.
For C# apps, the recommended approach is the Aspire Azure AI Foundry client integration via the 📦 Aspire.Azure.AI.Inference NuGet package. It registers a ChatCompletionsClient through dependency injection and, optionally, registers an IChatClient via Microsoft.Extensions.AI. If you'd rather read environment variables directly, see the Read environment variables in C# section below.
Install the client integration
Install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:
```shell
dotnet add package Aspire.Azure.AI.Inference
```

For file-based apps, use the package directive instead:

```csharp
#:package Aspire.Azure.AI.Inference@*
```

Or add the reference directly to the project file:

```xml
<PackageReference Include="Aspire.Azure.AI.Inference" Version="*" />
```

Add a chat completions client
In Program.cs, call AddAzureAIInferenceChatClient on your IHostApplicationBuilder to register a ChatCompletionsClient:
```csharp
builder.AddAzureAIInferenceChatClient(connectionName: "chat");
```

Resolve the client through dependency injection:
```csharp
public class ExampleService(ChatCompletionsClient client)
{
    // Use client for chat completions...
}
```

Add an IChatClient via Microsoft.Extensions.AI
To also register an IChatClient from Microsoft.Extensions.AI, chain AsIChatClient():
builder.AddAzureAIInferenceChatClient("chat") .AsIChatClient();Resolve the abstraction through dependency injection:
```csharp
public class ExampleService(IChatClient chatClient)
{
    public async Task<string> CompleteAsync(string prompt)
    {
        var response = await chatClient.CompleteAsync(prompt);
        return response.Message.Text ?? string.Empty;
    }
}
```

Add keyed clients
To register multiple ChatCompletionsClient instances with different connection names, use AddKeyedAzureAIInferenceChatClient:
```csharp
builder.AddKeyedAzureAIInferenceChatClient(name: "chat");
builder.AddKeyedAzureAIInferenceChatClient(name: "embeddings");
```

Then resolve each instance by key:
```csharp
public class ExampleService(
    [FromKeyedServices("chat")] ChatCompletionsClient chatClient,
    [FromKeyedServices("embeddings")] ChatCompletionsClient embeddingsClient)
{
    // Use clients...
}
```

Configuration
The Aspire Azure AI Inference client integration supports multiple configuration approaches.
Connection strings. When using a connection string from the ConnectionStrings configuration section, pass the connection name to AddAzureAIInferenceChatClient:
builder.AddAzureAIInferenceChatClient("chat");The connection string is resolved from the ConnectionStrings section:
{ "ConnectionStrings": { "chat": "Endpoint=https://my-foundry.services.ai.azure.com/;Deployment=chat;Model=gpt-5-mini" }}Configuration providers. The integration supports Microsoft.Extensions.Configuration using the Aspire:Azure:AI:Inference key:
{ "Aspire": { "Azure": { "AI": { "Inference": { "DisableTracing": false, "DisableMetrics": false } } } }}Client integration health checks
The Aspire Azure AI Inference client integration enables health checks by default, verifying that the endpoint is reachable. Through the app's /health HTTP endpoint, all registered health checks must pass before the app is considered ready to accept traffic.
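An orchestrator's readiness probe can poll that endpoint directly. A minimal sketch in Python — the URL and port here are assumptions, and the `is_ready` helper is hypothetical:

```python
import urllib.request
from urllib.error import URLError


def is_ready(url: str = "http://localhost:8080/health", timeout: float = 2.0) -> bool:
    """Return True if the app's /health endpoint reports healthy (HTTP 200)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```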
Observability and telemetry
The Aspire Azure AI Inference client integration automatically configures logging, tracing, and metrics through OpenTelemetry.
Logging categories:
Azure.AI.Inference
Tracing activities:
Azure.AI.Inference.*
Metrics:
Azure.AI.Inference.*
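When wiring exporters or processors yourself, you can select this integration's telemetry by those names. A small sketch — the helper and the sample source name are illustrative, not part of the integration:

```python
def from_ai_inference(source_name: str) -> bool:
    """True for logging categories, activity sources, and meters that fall
    under the Azure.AI.Inference namespace listed above."""
    return (
        source_name == "Azure.AI.Inference"
        or source_name.startswith("Azure.AI.Inference.")
    )


print(from_ai_inference("Azure.AI.Inference"))             # True
print(from_ai_inference("Azure.AI.Inference.ChatClient"))  # True (example name)
print(from_ai_inference("System.Net.Http"))                # False
```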
Read environment variables in C#
If you prefer not to use the Aspire client integration, you can read the Aspire-injected connection properties from the environment and construct a ChatCompletionsClient directly using the 📦 Azure.AI.Inference NuGet package:
```csharp
using Azure;
using Azure.AI.Inference;

var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_APIKEY");
var deployment = Environment.GetEnvironmentVariable("CHAT_DEPLOYMENT");

var client = new ChatCompletionsClient(
    new Uri(endpoint!),
    new AzureKeyCredential(apiKey!));

var response = await client.CompleteAsync(new ChatCompletionsOptions
{
    Model = deployment,
    Messages = { new ChatRequestUserMessage("Hello from Aspire!") }
});

Console.WriteLine(response.Value.Choices[0].Message.Content);
```

Use the Azure SDK for Go azopenai package, which supports the Azure AI Inference REST API:
```shell
go get github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai
go get github.com/Azure/azure-sdk-for-go/sdk/azcore
```

Read the injected environment variables and connect:
```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)

func main() {
	// Read the Aspire-injected connection properties
	endpoint := os.Getenv("CHAT_ENDPOINT")
	apiKey := os.Getenv("CHAT_APIKEY")
	deployment := os.Getenv("CHAT_DEPLOYMENT")

	client, err := azopenai.NewClientWithKeyCredential(
		endpoint,
		azcore.NewKeyCredential(apiKey),
		nil,
	)
	if err != nil {
		panic(err)
	}

	resp, err := client.GetChatCompletions(
		context.Background(),
		azopenai.ChatCompletionsOptions{
			DeploymentName: &deployment,
			Messages: []azopenai.ChatRequestMessageClassification{
				&azopenai.ChatRequestUserMessage{
					Content: azopenai.NewChatRequestUserMessageContent("Hello!"),
				},
			},
		},
		nil,
	)
	if err != nil {
		panic(err)
	}

	fmt.Println(*resp.Choices[0].Message.Content)
}
```

Install the azure-ai-inference and azure-ai-projects packages:
```shell
pip install azure-ai-inference azure-ai-projects
```

Using azure-ai-inference for direct model calls:
```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Read the Aspire-injected connection properties
endpoint = os.environ["CHAT_ENDPOINT"]
api_key = os.environ["CHAT_APIKEY"]
deployment = os.environ["CHAT_DEPLOYMENT"]

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(api_key),
)

response = client.complete(
    model=deployment,
    messages=[{"role": "user", "content": "Hello from Aspire!"}],
)

print(response.choices[0].message.content)
```

Using azure-ai-projects for Foundry project-scoped access:
```python
import os

from azure.ai.projects import AIProjectClient
from azure.core.credentials import AzureKeyCredential

# Read the Aspire-injected project connection properties
project_endpoint = os.environ["MY_PROJECT_ENDPOINT"]
api_key = os.environ["MY_PROJECT_APIKEY"]

project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=AzureKeyCredential(api_key),
)

chat = project_client.inference.get_chat_completions_client()

response = chat.complete(
    model=os.environ.get("CHAT_DEPLOYMENT", "gpt-5-mini"),
    messages=[{"role": "user", "content": "Hello from Aspire!"}],
)

print(response.choices[0].message.content)
```

Install the official @azure/ai-inference and @azure/ai-projects packages:
```shell
npm install @azure/ai-inference @azure/ai-projects @azure/core-auth
```

Using @azure/ai-inference for direct model calls:
```typescript
import ModelClient from '@azure/ai-inference';
import { AzureKeyCredential } from '@azure/core-auth';

// Read Aspire-injected connection properties
const endpoint = process.env.CHAT_ENDPOINT!;
const apiKey = process.env.CHAT_APIKEY!;
const deployment = process.env.CHAT_DEPLOYMENT ?? 'chat';

// The default export is a client factory function, not a class
const client = ModelClient(endpoint, new AzureKeyCredential(apiKey));

const response = await client.path('/chat/completions').post({
  body: {
    model: deployment,
    messages: [{ role: 'user', content: 'Hello from Aspire!' }],
  },
});

if (response.status !== '200') {
  throw new Error(`Request failed: ${response.status}`);
}

console.log(response.body.choices[0].message.content);
```

Using @azure/ai-projects for Foundry project-scoped access:
```typescript
import { AIProjectClient } from '@azure/ai-projects';
import { AzureKeyCredential } from '@azure/core-auth';

// Read Aspire-injected project connection properties
const projectEndpoint = process.env.MY_PROJECT_ENDPOINT!;
const apiKey = process.env.MY_PROJECT_APIKEY!;

const projectClient = new AIProjectClient(
  projectEndpoint,
  new AzureKeyCredential(apiKey),
);

const chatClient = projectClient.inference.getChatCompletionsClient();

const response = await chatClient.path('/chat/completions').post({
  body: {
    model: process.env.CHAT_DEPLOYMENT ?? 'gpt-5-mini',
    messages: [{ role: 'user', content: 'Hello from Aspire!' }],
  },
});

if (response.status !== '200') {
  throw new Error(`Request failed: ${response.status}`);
}

console.log(response.body.choices[0].message.content);
```