Aspire provides several AI hosting and client integrations that enable you to work with different AI services and platforms. This article provides a compatibility matrix showing which client integrations work with which hosting integrations, along with guidance on the recommended pairings.
For Azure AI Foundry resources, use the Aspire.Azure.AI.Inference client integration. This provides the best compatibility with the diverse range of models available through Azure AI Foundry. For more information, see Azure AI Foundry integration.
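A minimal sketch of this pairing is shown below. The resource, deployment, and project names (`foundry`, `chat`, `ExampleService`) are illustrative, the model deployment values are placeholders, and the `AddAzureAIFoundry`/`AddDeployment`/`AddAzureChatCompletionsClient` calls assume a recent Aspire version:

```csharp
// --- AppHost project (Aspire.Hosting.Azure.AIFoundry) ---
var builder = DistributedApplication.CreateBuilder(args);

// Provision an Azure AI Foundry account and a model deployment.
var foundry = builder.AddAzureAIFoundry("foundry");
var chat = foundry.AddDeployment("chat", "Phi-4", "1", "Microsoft");

builder.AddProject<Projects.ExampleService>("example")
       .WithReference(chat);

builder.Build().Run();

// --- Consuming service (Aspire.Azure.AI.Inference) ---
// Registers a ChatCompletionsClient bound to the "chat" connection above.
builder.AddAzureChatCompletionsClient("chat")
       .AddChatClient(); // optionally expose it as a Microsoft.Extensions.AI IChatClient
```

The two sections belong to separate projects (the AppHost and the consuming service); they're shown together here only to illustrate how the connection name links them.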
For Azure OpenAI resources, use the Aspire.Azure.AI.OpenAI client integration for full Azure-specific features and authentication support. For more information, see Azure OpenAI integration.
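A hedged sketch of this pairing, with illustrative names (`openai`, `chat`, `ExampleService`) and placeholder model values:

```csharp
// --- AppHost project (Aspire.Hosting.Azure.CognitiveServices) ---
var builder = DistributedApplication.CreateBuilder(args);

// Provision an Azure OpenAI resource and a model deployment.
var openai = builder.AddAzureOpenAI("openai");
var chat = openai.AddDeployment("chat", "gpt-4o-mini", "2024-07-18");

builder.AddProject<Projects.ExampleService>("example")
       .WithReference(chat);

builder.Build().Run();

// --- Consuming service (Aspire.Azure.AI.OpenAI) ---
// Registers an AzureOpenAIClient, including TokenCredential-based authentication.
builder.AddAzureOpenAIClient("chat");
```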
For GitHub Models, use the Aspire.Azure.AI.Inference client integration for the best compatibility with the GitHub Models API. For more information, see Aspire GitHub Models integration (Preview).
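A sketch of the GitHub Models pairing, under the assumption that the preview hosting API exposes `AddGitHubModel`; names and the model identifier are illustrative:

```csharp
// --- AppHost project (Aspire.Hosting.GitHub.Models, preview) ---
var builder = DistributedApplication.CreateBuilder(args);

// GitHub Models identifies models as "{publisher}/{model}".
var chat = builder.AddGitHubModel("chat", "openai/gpt-4o-mini");

builder.AddProject<Projects.ExampleService>("example")
       .WithReference(chat);

builder.Build().Run();

// --- Consuming service (Aspire.Azure.AI.Inference) ---
// The Inference client parses the Model value from the generated connection string.
builder.AddAzureChatCompletionsClient("chat");
```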
Understanding how hosting and client integrations communicate through connection strings can help you troubleshoot connectivity issues and understand the underlying mechanics.
Each hosting integration produces a connection string in its own format, which the client integrations then parse.
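For illustration, the connection strings generated by the hosting integrations look roughly like the following. The exact keys vary by integration and Aspire version, and all values here are placeholders:

```text
Endpoint=https://{account}.openai.azure.com/;Deployment=chat
Endpoint=https://api.openai.com/v1;Key={apiKey};Model=gpt-4o-mini
Endpoint=https://models.github.ai/inference;Key={token};Model=openai/gpt-4o-mini
```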
- Aspire.OpenAI: Uses either Deployment or Model (in that order). Deployment is set by Aspire.Hosting.Azure.CognitiveServices, while Model is set by Aspire.Hosting.OpenAI.
- Aspire.Azure.AI.OpenAI: Uses either Deployment or Model (in that order), the same as Aspire.OpenAI. This integration is a superset of Aspire.OpenAI and supports TokenCredential and Azure-specific features.
- Aspire.Azure.AI.Inference: Uses either Deployment or Model (in that order). Deployment is set by Aspire.Hosting.Azure.CognitiveServices, while Model is set by Aspire.Hosting.GitHub.Models. For the endpoint, it uses EndpointAIInference if available, otherwise Endpoint.
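The precedence rules above can be sketched as plain C#. This is a hypothetical helper for illustration only, not the actual Aspire implementation; it uses `DbConnectionStringBuilder` from the .NET base class library to split the key/value pairs:

```csharp
using System.Data.Common;

// Hypothetical helper illustrating the precedence rules:
// Deployment wins over Model, and EndpointAIInference wins over Endpoint.
static (string Endpoint, string? ModelOrDeployment) Parse(string connectionString)
{
    var csb = new DbConnectionStringBuilder { ConnectionString = connectionString };

    string? Get(string key) => csb.TryGetValue(key, out var value) ? (string?)value : null;

    var model = Get("Deployment") ?? Get("Model");
    var endpoint = Get("EndpointAIInference")
        ?? Get("Endpoint")
        ?? throw new InvalidOperationException("Endpoint is required.");

    return (endpoint, model);
}
```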