Connect to GitHub Models

This page describes how consuming apps connect to a GitHub Model resource that’s already modeled in your AppHost. For the AppHost API surface — adding a model resource, API key parameters, organization configuration, and health checks — see GitHub Models hosting integration.

When you reference a GitHub Model resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Aspire client integrations for automatic dependency injection, health checks, and telemetry.

Aspire exposes each property as an environment variable named [RESOURCE]_[PROPERTY]. For instance, the Endpoint property of a resource called chat becomes CHAT_ENDPOINT.

The GitHub Model resource exposes the following connection properties:

Property Name | Description
Endpoint      | The GitHub Models inference endpoint URI, for example https://models.github.ai/inference
Key           | The API key (a GitHub PAT or GITHUB_TOKEN) used for authentication
Model         | The model identifier for inference requests, for instance openai/gpt-4o-mini

Example connection string:

Endpoint=https://models.github.ai/inference;Key=github_pat_abc123...;Model=openai/gpt-4o-mini

When an organization is configured, the endpoint includes the organization slug:

Endpoint=https://models.github.ai/orgs/my-org/inference;Key=github_pat_abc123...;Model=openai/gpt-4o-mini

Pick the language your consuming app is written in. Each example assumes your AppHost adds a GitHub Model resource named chat and references it from the consuming app.
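For context, the AppHost side of that assumption can be sketched as follows. This is a minimal sketch, not the hosting reference: it assumes the AddGitHubModel API from the GitHub Models hosting integration and a hypothetical consuming project named ExampleService; see the hosting integration docs linked above for the authoritative API surface.

```csharp
// AppHost Program.cs — a minimal sketch of the setup the examples below assume.
var builder = DistributedApplication.CreateBuilder(args);

// Model a GitHub Model resource named "chat" (AddGitHubModel comes from the
// GitHub Models hosting integration package).
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");

// Referencing the resource makes Aspire inject the connection information
// (CHAT_ENDPOINT, CHAT_KEY, CHAT_MODEL) into the consuming app.
builder.AddProject<Projects.ExampleService>("example")
       .WithReference(chat);

builder.Build().Run();
```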

For C# apps, the recommended approach is one of the Aspire client integrations. GitHub Models is OpenAI-compatible, so you can use either Aspire.Azure.AI.Inference (for the Azure AI Inference SDK) or Aspire.OpenAI (for the OpenAI SDK). Both integrations register the client through dependency injection and, optionally, register an IChatClient from Microsoft.Extensions.AI. If you’d rather read environment variables directly, see the Read environment variables section at the end of this tab.

Install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:

.NET CLI — Add Aspire.Azure.AI.Inference package
dotnet add package Aspire.Azure.AI.Inference

In Program.cs, call AddAzureChatCompletionsClient on your IHostApplicationBuilder to register a ChatCompletionsClient:

C# — Program.cs
builder.AddAzureChatCompletionsClient(connectionName: "chat");

Resolve the client through dependency injection:

C# — ExampleService.cs
public class ExampleService(ChatCompletionsClient client)
{
    // Use client...
}

Add a ChatCompletionsClient with IChatClient

Call AddChatClient after AddAzureChatCompletionsClient to also register an IChatClient from Microsoft.Extensions.AI:

C# — Program.cs
builder.AddAzureChatCompletionsClient("chat")
.AddChatClient();

Resolve the IChatClient through dependency injection:

C# — ExampleService.cs
public class ExampleService(IChatClient chatClient)
{
    public async Task<string> GenerateAsync(string prompt)
    {
        var response = await chatClient.GetResponseAsync(prompt);
        return response.Text;
    }
}

For models compatible with the OpenAI API (such as openai/gpt-4o-mini), you can use the OpenAI client.

Install the 📦 Aspire.OpenAI NuGet package in the client-consuming project:

.NET CLI — Add Aspire.OpenAI package
dotnet add package Aspire.OpenAI

In Program.cs, call AddOpenAIClient to register an OpenAIClient:

C# — Program.cs
builder.AddOpenAIClient(connectionName: "chat");

Resolve the client through dependency injection and use the model name from the connection string:

C# — ChatService.cs
public class ChatService(OpenAIClient client, IConfiguration config)
{
    public async Task<string> GetResponseAsync(string prompt)
    {
        var modelName = config["ConnectionStrings:chat:Model"] ?? "openai/gpt-4o-mini";
        var chatClient = client.GetChatClient(modelName);

        var response = await chatClient.CompleteChatAsync(
            [new UserChatMessage(prompt)]);

        return response.Value.Content[0].Text;
    }
}
As with the Azure AI Inference integration, chain AddChatClient to also register an IChatClient from Microsoft.Extensions.AI:

C# — Program.cs
builder.AddOpenAIClient("chat")
    .AddChatClient();

The Aspire OpenAI client integration supports configuration through connection strings, Microsoft.Extensions.Configuration, and inline delegates.

Connection strings. Provide a named connection string in appsettings.json:

JSON — appsettings.json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://models.github.ai/inference;Key=${GITHUB_TOKEN};Model=openai/gpt-4o-mini"
  }
}

Configuration providers. Use the Aspire:OpenAI key to load OpenAISettings:

JSON — appsettings.json
{
"Aspire": {
"OpenAI": {
"DisableTracing": false,
"DisableMetrics": false
}
}
}

Inline delegates.

C# — Program.cs
builder.AddOpenAIClient("chat", settings => settings.DisableTracing = true);

The Aspire OpenAI client integration automatically configures logging, tracing, and metrics through OpenTelemetry.

If you prefer not to use the Aspire client integration, you can read the Aspire-injected connection properties directly:

C# — Program.cs
using Azure;
using Azure.AI.Inference;

var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_KEY");
var modelName = Environment.GetEnvironmentVariable("CHAT_MODEL");

var client = new ChatCompletionsClient(
    new Uri(endpoint!),
    new AzureKeyCredential(apiKey!));

var response = await client.CompleteAsync(new ChatCompletionsOptions
{
    Model = modelName,
    Messages = { new ChatRequestUserMessage("Hello!") },
});

Console.WriteLine(response.Value.Choices[0].Message.Content);