# Connect to GitHub Models
This page describes how consuming apps connect to a GitHub Model resource that’s already modeled in your AppHost. For the AppHost API surface — adding a model resource, API key parameters, organization configuration, and health checks — see GitHub Models hosting integration.
When you reference a GitHub Model resource from your AppHost, Aspire injects the connection information into the consuming app as environment variables. Your app can either read those environment variables directly — the pattern works the same from any language — or, in C#, use the Aspire client integrations for automatic dependency injection, health checks, and telemetry.
## Connection properties
Aspire exposes each property as an environment variable named `[RESOURCE]_[PROPERTY]`. For instance, the `Endpoint` property of a resource called `chat` becomes `CHAT_ENDPOINT`.
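As a quick sketch of the naming convention, the snippet below simulates the variables Aspire would inject for a resource named `chat` and derives the variable names from the rule above (the `aspire_env_var` helper and the sample values are illustrative, not part of Aspire):

```python
import os

# Simulate the variables Aspire would inject for a resource named "chat"
# (the values here are illustrative).
os.environ["CHAT_ENDPOINT"] = "https://models.github.ai/inference"
os.environ["CHAT_MODEL"] = "openai/gpt-4o-mini"

def aspire_env_var(resource: str, prop: str) -> str:
    """Build the [RESOURCE]_[PROPERTY] environment variable name."""
    return f"{resource}_{prop}".upper()

print(os.environ[aspire_env_var("chat", "Endpoint")])
# → https://models.github.ai/inference
```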
### GitHub Model resource
The GitHub Model resource exposes the following connection properties:
| Property Name | Description |
|---|---|
| `Endpoint` | The GitHub Models inference endpoint URI, for example `https://models.github.ai/inference` |
| `Key` | The API key (GitHub PAT or `GITHUB_TOKEN`) for authentication |
| `Model` | The model identifier for inference requests, for instance `openai/gpt-4o-mini` |
Example connection string:
```text
Endpoint=https://models.github.ai/inference;Key=github_pat_abc123...;Model=openai/gpt-4o-mini
```

When an organization is configured, the endpoint includes the organization slug:
```text
Endpoint=https://models.github.ai/orgs/my-org/inference;Key=github_pat_abc123...;Model=openai/gpt-4o-mini
```
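The connection string is a semicolon-delimited list of `Key=Value` pairs, so it is straightforward to split into its properties in any language. A minimal Python sketch (the helper name and sample key are illustrative; each pair is split only once, since a value may itself contain `=`):

```python
def parse_connection_string(value: str) -> dict[str, str]:
    """Split 'Key=Value;Key=Value' into a dict.

    Each pair is split at the first '=' only, since a value
    (such as an API key) may itself contain '=' characters.
    """
    pairs = (part.split("=", 1) for part in value.split(";") if part)
    return {key: val for key, val in pairs}

props = parse_connection_string(
    "Endpoint=https://models.github.ai/inference;"
    "Key=github_pat_abc123;Model=openai/gpt-4o-mini"
)
print(props["Model"])  # → openai/gpt-4o-mini
```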
## Connect from your app

Pick the language your consuming app is written in. Each example assumes your AppHost adds a GitHub Model resource named `chat` and references it from the consuming app.
For C# apps, the recommended approach is one of the Aspire client integrations. GitHub Models is OpenAI-compatible, so you can use either `Aspire.Azure.AI.Inference` (for the Azure AI Inference SDK) or `Aspire.OpenAI` (for the OpenAI SDK). Both integrations register the client through dependency injection and, optionally, register an `IChatClient` from `Microsoft.Extensions.AI`. If you'd rather read environment variables directly, see the Read environment variables in C# section below.
### Using Azure AI Inference
#### Install the client integration
Install the 📦 `Aspire.Azure.AI.Inference` NuGet package in the client-consuming project:
```shell
dotnet add package Aspire.Azure.AI.Inference
```

Or, for file-based apps, use a package directive:

```csharp
#:package Aspire.Azure.AI.Inference@*
```

Or add a `PackageReference` entry to the project file:

```xml
<PackageReference Include="Aspire.Azure.AI.Inference" Version="*" />
```

#### Add a ChatCompletionsClient
In Program.cs, call `AddAzureChatCompletionsClient` on your `IHostApplicationBuilder` to register a `ChatCompletionsClient`:
```csharp
builder.AddAzureChatCompletionsClient(connectionName: "chat");
```

Resolve the client through dependency injection:
```csharp
public class ExampleService(ChatCompletionsClient client)
{
    // Use client...
}
```

#### Add a ChatCompletionsClient with IChatClient
Call `AddChatClient` after `AddAzureChatCompletionsClient` to also register an `IChatClient` from `Microsoft.Extensions.AI`:
```csharp
builder.AddAzureChatCompletionsClient("chat")
       .AddChatClient();
```

Resolve the `IChatClient` through dependency injection:
```csharp
public class ExampleService(IChatClient chatClient)
{
    public async Task<string> GenerateAsync(string prompt)
    {
        var response = await chatClient.GetResponseAsync(prompt);
        return response.Text;
    }
}
```

### Using OpenAI client
Section titled “Using OpenAI client”For models compatible with the OpenAI API (such as openai/gpt-4o-mini), you can use the OpenAI client.
#### Install the client integration
Install the 📦 `Aspire.OpenAI` NuGet package in the client-consuming project:
```shell
dotnet add package Aspire.OpenAI
```

Or, for file-based apps, use a package directive:

```csharp
#:package Aspire.OpenAI@*
```

Or add a `PackageReference` entry to the project file:

```xml
<PackageReference Include="Aspire.OpenAI" Version="*" />
```

#### Add an OpenAI client
In Program.cs, call `AddOpenAIClient` to register an `OpenAIClient`:
```csharp
builder.AddOpenAIClient(connectionName: "chat");
```

Resolve the client through dependency injection and use the model name from the connection string:
```csharp
public class ChatService(OpenAIClient client, IConfiguration config)
{
    public async Task<string> GetResponseAsync(string prompt)
    {
        var modelName = config["ConnectionStrings:chat:Model"] ?? "openai/gpt-4o-mini";
        var chatClient = client.GetChatClient(modelName);
        var response = await chatClient.CompleteChatAsync(
            [new UserChatMessage(prompt)]);
        return response.Value.Content[0].Text;
    }
}
```

#### Add an OpenAI client with IChatClient
```csharp
builder.AddOpenAIClient("chat")
       .AddChatClient();
```

#### Configuration
The Aspire OpenAI client integration supports configuration through connection strings, `Microsoft.Extensions.Configuration`, and inline delegates.
**Connection strings.** Provide a named connection string in `appsettings.json`:
```json
{
  "ConnectionStrings": {
    "chat": "Endpoint=https://models.github.ai/inference;Key=${GITHUB_TOKEN};Model=openai/gpt-4o-mini"
  }
}
```

**Configuration providers.** Use the `Aspire:OpenAI` key to load `OpenAISettings`:
```json
{
  "Aspire": {
    "OpenAI": {
      "DisableTracing": false,
      "DisableMetrics": false
    }
  }
}
```

**Inline delegates.**
```csharp
builder.AddOpenAIClient("chat", settings => settings.DisableTracing = true);
```

#### Observability and telemetry
The Aspire OpenAI client integration automatically configures logging, tracing, and metrics through OpenTelemetry.
### Read environment variables in C#
If you prefer not to use the Aspire client integrations, you can read the Aspire-injected connection properties directly:
```csharp
using Azure;
using Azure.AI.Inference;

var endpoint = Environment.GetEnvironmentVariable("CHAT_ENDPOINT");
var apiKey = Environment.GetEnvironmentVariable("CHAT_KEY");
var modelName = Environment.GetEnvironmentVariable("CHAT_MODEL");

var client = new ChatCompletionsClient(
    new Uri(endpoint!),
    new AzureKeyCredential(apiKey!));

var response = await client.CompleteAsync(new ChatCompletionsOptions
{
    Model = modelName,
    Messages = { new ChatRequestUserMessage("Hello!") },
});

Console.WriteLine(response.Value.Choices[0].Message.Content);
```

### Go

GitHub Models exposes an OpenAI-compatible API, so you can use go-openai with a custom base URL:
```shell
go get github.com/sashabaranov/go-openai
```

Read the injected environment variables and connect:
```go
package main

import (
    "context"
    "fmt"
    "os"

    openai "github.com/sashabaranov/go-openai"
)

func main() {
    // Read the Aspire-injected connection properties
    apiKey := os.Getenv("CHAT_KEY")
    endpoint := os.Getenv("CHAT_ENDPOINT")
    modelName := os.Getenv("CHAT_MODEL")

    config := openai.DefaultConfig(apiKey)
    config.BaseURL = endpoint

    client := openai.NewClientWithConfig(config)

    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: modelName,
            Messages: []openai.ChatCompletionMessage{
                {Role: openai.ChatMessageRoleUser, Content: "Hello!"},
            },
        },
    )
    if err != nil {
        panic(err)
    }

    fmt.Println(resp.Choices[0].Message.Content)
}
```

### Python

GitHub Models exposes an OpenAI-compatible API. Install either the `openai` package or `azure-ai-inference`:
Using the `openai` package:
```shell
pip install openai
```

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CHAT_KEY"],
    base_url=os.environ["CHAT_ENDPOINT"],
)

model_name = os.environ["CHAT_MODEL"]

response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response.choices[0].message.content)
```

Using `azure-ai-inference`:
```shell
pip install azure-ai-inference
```

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["CHAT_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CHAT_KEY"]),
)

model_name = os.environ["CHAT_MODEL"]

response = client.complete(
    messages=[UserMessage(content="Hello!")],
    model=model_name,
)

print(response.choices[0].message.content)
```

### JavaScript

GitHub Models exposes an OpenAI-compatible API. Use the `openai` npm package with a custom base URL:
```shell
npm install openai
```

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.CHAT_KEY,
  baseURL: process.env.CHAT_ENDPOINT,
});

const modelName = process.env.CHAT_MODEL ?? 'openai/gpt-4o-mini';

const response = await client.chat.completions.create({
  model: modelName,
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response.choices[0].message.content);
```