GitHub Models integration
GitHub Models provides access to various AI models including OpenAI’s GPT models, DeepSeek, Microsoft’s Phi models, and other leading AI models, all accessible through GitHub’s infrastructure. The Aspire GitHub Models integration enables you to connect to GitHub Models from your applications for prototyping and production scenarios.
Hosting integration
The Aspire GitHub Models hosting integration models GitHub Models resources as GitHubModelResource. To access these types and APIs, install the 📦 Aspire.Hosting.GitHub.Models NuGet package:
```shell
aspire add github-models
```

The Aspire CLI is interactive; choose the appropriate result when prompted:

```
Select an integration to add:

> github-models (Aspire.Hosting.GitHub.Models)
> Other results listed as selectable options...
```

For file-based apps, add a package directive:

```csharp
#:package Aspire.Hosting.GitHub.Models@*
```

Or add a package reference to your project file:

```xml
<PackageReference Include="Aspire.Hosting.GitHub.Models" Version="*" />
```

Add a GitHub Model resource
To add a GitHubModelResource to your AppHost project, call the AddGitHubModel method:
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model);

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);

builder.Build().Run();
```

The preceding code adds a GitHub Model resource named chat using the GitHubModel constant for OpenAI’s GPT-4o-mini model. The WithReference method passes the connection information to the ExampleProject project.
Specify an organization
For organization-specific requests, you can specify an organization parameter:
```csharp
var builder = DistributedApplication.CreateBuilder(args);

var organization = builder.AddParameter("github-org");
var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model, organization);

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);

builder.Build().Run();
```

When an organization is specified, the token must be attributed to that organization in GitHub.
Configure API key authentication
The GitHub Models integration supports multiple ways to configure authentication:
Default API key parameter
By default, the integration creates a parameter named {resource_name}-gh-apikey that automatically falls back to the GITHUB_TOKEN environment variable:
```csharp
var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model);
```

Then in user secrets:

```json
{
  "Parameters": {
    "chat-gh-apikey": "YOUR_GITHUB_TOKEN_HERE"
  }
}
```

Custom API key parameter
You can also specify a custom parameter for the API key:

```csharp
var apiKey = builder.AddParameter("my-api-key", secret: true);
var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model)
                  .WithApiKey(apiKey);
```

Then in user secrets:

```json
{
  "Parameters": {
    "my-api-key": "YOUR_GITHUB_TOKEN_HERE"
  }
}
```

Health checks
You can add health checks to verify the GitHub Models endpoint accessibility and API key validity:

```csharp
var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model)
                  .WithHealthCheck();
```

Available models
GitHub Models supports various AI models. Use the strongly-typed GitHubModel constants for the most up-to-date list of available models. Some popular options include:

- GitHubModel.OpenAI.OpenAIGpt4oMini
- GitHubModel.OpenAI.OpenAIGpt41Mini
- GitHubModel.DeepSeek.DeepSeekV30324
- GitHubModel.Microsoft.Phi4MiniInstruct
Check the GitHub Models documentation for more information about these models and their capabilities.
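The constants resolve to plain model identifiers in the {publisher}/{model-name} format. If a model you need has no constant yet, recent package versions also accept the identifier as a string; the following is a sketch under that assumption (confirm the overload exists in your package version):

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Any identifier from the GitHub Models catalog can be used here;
// "openai/gpt-4o" is shown purely as an example.
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o");

builder.Build().Run();
```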
Connection properties
When you reference a GitHub Model resource using WithReference, the following connection properties are made available to the consuming project:
GitHub Model
The GitHub Model resource exposes the following connection properties:
| Property Name | Description |
|---|---|
| Uri | The GitHub Models inference endpoint URI, with the format https://models.github.ai/inference |
| Key | The API key (PAT or GitHub App token) for authentication |
| ModelName | The model identifier for inference requests, for instance openai/gpt-4o-mini |
| Organization | The organization attributed to the request (available when configured) |
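In the consuming project, these properties surface as a connection string named after the resource. A minimal sketch of reading it manually, for illustration only (the client integrations below consume it for you):

```csharp
var builder = WebApplication.CreateBuilder(args);

// "chat" matches the resource name passed to AddGitHubModel. The value
// has the form:
// Endpoint=https://models.github.ai/inference;Key=...;Model=openai/gpt-4o-mini
var connectionString = builder.Configuration.GetConnectionString("chat");
```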
Example properties:
```
Uri: https://models.github.ai/inference
ModelName: openai/gpt-4o-mini
```

Client integration
To get started with the Aspire GitHub Models client integration, you can use either the Azure AI Inference client or the OpenAI client, depending on your needs and model compatibility.
Using Azure AI Inference client
Install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:

```shell
dotnet add package Aspire.Azure.AI.Inference
```

For file-based apps, add a package directive:

```csharp
#:package Aspire.Azure.AI.Inference@*
```

Or add a package reference to your project file:

```xml
<PackageReference Include="Aspire.Azure.AI.Inference" Version="*" />
```

Add a ChatCompletionsClient
In the Program.cs file of your client-consuming project, use the AddAzureChatCompletionsClient method to register a ChatCompletionsClient for dependency injection:

```csharp
builder.AddAzureChatCompletionsClient("chat");
```

You can then retrieve the ChatCompletionsClient instance using dependency injection:

```csharp
public class ExampleService(ChatCompletionsClient client)
{
    public async Task<string> GetResponseAsync(string prompt)
    {
        var response = await client.GetChatCompletionsAsync(
            new[] { new ChatMessage(ChatRole.User, prompt) });

        return response.Value.Choices[0].Message.Content;
    }
}
```

Add ChatCompletionsClient with registered IChatClient
If you’re using the Microsoft.Extensions.AI abstractions, you can register an IChatClient:

```csharp
builder.AddAzureChatCompletionsClient("chat")
       .AddChatClient();
```

Then use it in your services:

```csharp
public class StoryService(IChatClient chatClient)
{
    public async Task<string> GenerateStoryAsync(string prompt)
    {
        var response = await chatClient.GetResponseAsync(prompt);

        return response.Text;
    }
}
```

Using OpenAI client
For models compatible with the OpenAI API (such as openai/gpt-4o-mini), you can use the OpenAI client. Install the 📦 Aspire.OpenAI NuGet package:

```shell
dotnet add package Aspire.OpenAI
```

For file-based apps, add a package directive:

```csharp
#:package Aspire.OpenAI@*
```

Or add a package reference to your project file:

```xml
<PackageReference Include="Aspire.OpenAI" Version="*" />
```

Add an OpenAI client
```csharp
builder.AddOpenAIClient("chat");
```

You can then use the OpenAI client:

```csharp
public class ChatService(OpenAIClient client)
{
    public async Task<string> GetChatResponseAsync(string prompt)
    {
        var chatClient = client.GetChatClient(GitHubModel.OpenAI.OpenAIGpt4oMini);
        var response = await chatClient.CompleteChatAsync(
            new[] { new UserChatMessage(prompt) });

        return response.Value.Content[0].Text;
    }
}
```

Add OpenAI client with registered IChatClient
```csharp
builder.AddOpenAIClient("chat")
       .AddChatClient();
```

Configuration
The GitHub Models integration supports configuration through user secrets, environment variables, or app settings. The integration automatically uses the GITHUB_TOKEN environment variable if available, or you can specify a custom API key parameter.
Authentication
The GitHub Models integration requires a GitHub personal access token with the models: read permission. The token can be provided in several ways:
Environment variables in Codespaces and GitHub Actions
When running an app in GitHub Codespaces or GitHub Actions, the GITHUB_TOKEN environment variable is automatically available and can be used without additional configuration. This token has the necessary permissions to access GitHub Models for the repository context.
```csharp
// No additional configuration needed in Codespaces/GitHub Actions
var model = GitHubModel.OpenAI.OpenAIGpt4oMini;
var chat = builder.AddGitHubModel("chat", model);
```

Personal access tokens for local development
For local development, you need to create a fine-grained personal access token with the models: read scope and configure it in user secrets:
```json
{
  "Parameters": {
    "chat-gh-apikey": "github_pat_YOUR_TOKEN_HERE"
  }
}
```

Connection string format
The connection string follows this format:
```
Endpoint=https://models.github.ai/inference;Key={api_key};Model={model_name};DeploymentId={model_name}
```

For organization-specific requests:

```
Endpoint=https://models.github.ai/orgs/{organization}/inference;Key={api_key};Model={model_name};DeploymentId={model_name}
```

Rate limits and costs
Sample application
The dotnet/aspire repo contains an example application demonstrating the GitHub Models integration. You can find the sample in the Aspire GitHub repository.
Observability and telemetry
Section titled “Observability and telemetry”Logging
The GitHub Models integration uses standard HTTP client logging categories:

- System.Net.Http.HttpClient
- Microsoft.Extensions.Http
Tracing
HTTP requests to the GitHub Models API are automatically traced when using the Azure AI Inference or OpenAI clients.