Get started with the GitHub Models integration

GitHub Models provides access to various AI models including OpenAI’s GPT models, DeepSeek, Microsoft’s Phi models, and other leading AI models, all accessible through GitHub’s infrastructure. The Aspire GitHub Models integration enables you to connect to GitHub Models from your applications for prototyping and production scenarios.

In this introduction, you'll see how to install and use the Aspire GitHub Models integration in a simple configuration. If you already have this knowledge, see GitHub Models hosting integration for full reference details.

To begin, install the Aspire GitHub Models hosting integration in your Aspire AppHost project. This integration allows you to create and manage GitHub Models resources:

Aspire CLI — Add the Aspire.Hosting.GitHub.Models package
aspire add github-models

The Aspire CLI is interactive; be sure to select the appropriate result when prompted:

Aspire CLI — Example output
Select an integration to add:
> github-models (Aspire.Hosting.GitHub.Models)
> Other results listed as selectable options...

Next, in the AppHost project, create instances of GitHub Models resources and pass them to the consuming client projects:

C# — AppHost.cs
var builder = DistributedApplication.CreateBuilder(args);

var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");

builder.AddProject<Projects.ExampleProject>()
       .WithReference(chat);

builder.Build().Run();

The preceding code adds a GitHub Model resource named chat using the identifier string for OpenAI’s GPT-4o-mini model. The WithReference method passes the connection information to the ExampleProject project.
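Under the hood, WithReference surfaces the resource's connection information to ExampleProject as a standard .NET connection string named after the resource. As a rough sketch of what the consuming side sees (the exact contents of the connection string are an implementation detail of the integration):

```csharp
var builder = WebApplication.CreateBuilder(args);

// "chat" matches the resource name used in the AppHost.
// Aspire injects the value through configuration at run time,
// typically via the ConnectionStrings__chat environment variable.
string? connectionString = builder.Configuration.GetConnectionString("chat");
```

In practice you rarely read this value yourself; the client integrations described later consume it for you.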

The GitHub Models integration requires a GitHub personal access token with models: read permission.

Environment variables in Codespaces and GitHub Actions

Section titled “Environment variables in Codespaces and GitHub Actions”

When running an app in GitHub Codespaces or GitHub Actions, the GITHUB_TOKEN environment variable is automatically available and can be used without additional configuration.

Personal access tokens for local development

Section titled “Personal access tokens for local development”

For local development, create a fine-grained personal access token with the models: read scope and configure it in user secrets:

{
  "Parameters": {
    "chat-gh-apikey": "github_pat_YOUR_TOKEN_HERE"
  }
}
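Equivalently, you can set the secret from the command line with the dotnet user-secrets tool, run from the AppHost project directory (the key Parameters:chat-gh-apikey matches the JSON shown above):

```shell
dotnet user-secrets set "Parameters:chat-gh-apikey" "github_pat_YOUR_TOKEN_HERE"
```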

To get started with the Aspire GitHub Models client integration, you can use either the Azure AI Inference client or the OpenAI client, depending on your needs and model compatibility.

Install the Azure AI Inference client package in the client-consuming project:

.NET CLI — Add Aspire.Azure.AI.Inference package
dotnet add package Aspire.Azure.AI.Inference

In the Program.cs file of your client-consuming project, use the AddAzureChatCompletionsClient method to register a ChatCompletionsClient for dependency injection:

builder.AddAzureChatCompletionsClient("chat");
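Aspire client integrations conventionally also offer a keyed variant for apps that reference more than one model resource. Assuming this integration follows that pattern, the registration would look like:

```csharp
// Hypothetical keyed registration; a consumer then requests the client
// with [FromKeyedServices("chat")] on a constructor or method parameter.
builder.AddKeyedAzureChatCompletionsClient("chat");
```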

You can then retrieve the ChatCompletionsClient instance using dependency injection:

public class ExampleService(ChatCompletionsClient client)
{
    public async Task<string> GetResponseAsync(string prompt)
    {
        // Build the request with a single user message.
        var requestOptions = new ChatCompletionsOptions
        {
            Messages = { new ChatRequestUserMessage(prompt) }
        };

        var response = await client.CompleteAsync(requestOptions);
        return response.Value.Content;
    }
}

For models compatible with the OpenAI API (such as openai/gpt-4o-mini), you can use the OpenAI client:

.NET CLI — Add Aspire.OpenAI package
dotnet add package Aspire.OpenAI

Then, in the Program.cs file of the consuming project, register the client:

builder.AddOpenAIClient("chat");

You can then use the OpenAI client:

public class ChatService(OpenAIClient client)
{
    public async Task<string> GetChatResponseAsync(string prompt)
    {
        // Use the same model identifier that the AppHost registered.
        var chatClient = client.GetChatClient("openai/gpt-4o-mini");

        var response = await chatClient.CompleteChatAsync(
            new UserChatMessage(prompt));

        return response.Value.Content[0].Text;
    }
}
}