LobeChat
GitHub
@GitHub
26 models
With GitHub Models, developers can become AI engineers and leverage the industry's leading AI models.

Supported Models

GitHub

Maximum Context Length | Maximum Output Length | Input Price | Output Price
-----------------------|-----------------------|-------------|-------------
128K                   | 64K                   | $3.00       | $12.00
128K                   | 32K                   | $15.00      | $60.00
128K                   | 16K                   | $0.15       | $0.60
128K                   | --                    | $2.50       | $10.00

Using GitHub Models in LobeChat


GitHub Models is a new feature recently launched by GitHub, designed to provide developers with a free platform to access and experiment with various AI models. GitHub Models offers an interactive sandbox environment where users can test different model parameters and prompts, and observe the responses of the models. The platform supports advanced language models, including OpenAI's GPT-4o, Meta's Llama 3.1, and Mistral's Large 2, covering a wide range of applications from large-scale language models to task-specific models.

This article will guide you on how to use GitHub Models in LobeChat.

Rate Limits for GitHub Models

Currently, the usage of the Playground and free API is subject to limits on the number of requests per minute, the number of requests per day, the number of tokens per request, and the number of concurrent requests. If you hit the rate limit, you will need to wait for the limit to reset before making further requests. The rate limits vary for different models (low, high, and embedding models). For model type information, please refer to the GitHub Marketplace.

GitHub Models Rate Limits

These limits are subject to change at any time. For specific information, please refer to the GitHub Official Documentation.
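If a script you write against the free API hits one of these limits, a common pattern is to wait and retry with exponentially growing delays. Below is a minimal sketch of such a delay schedule; the base delay, growth factor, and retry count are illustrative assumptions, not values from GitHub's documentation:

```python
def backoff_delays(base=1.0, factor=2.0, retries=4):
    """Yield exponentially growing wait times (in seconds) to use
    between retries of a rate-limited (HTTP 429) request."""
    delay = base
    for _ in range(retries):
        yield delay
        delay *= factor

# Example schedule: wait 1s, 2s, 4s, then 8s between successive retries.
print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0]
```

In practice you would sleep for each yielded delay after a rejected request and give up once the schedule is exhausted.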


Configuration Guide for GitHub Models

Step 1: Obtain a GitHub Access Token

  • Log in to GitHub and open the Access Tokens page.
  • Create and configure a new access token.
Creating Access Token
  • Copy and save the generated token.
Saving Access Token
  • During the testing phase of GitHub Models, users must apply to join the waitlist in order to gain access.

  • Please store the access token securely, as it will only be displayed once. If you accidentally lose it, you will need to create a new token.
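If you also plan to call the models from your own scripts (outside LobeChat), the same advice applies: keep the token out of source code. A sketch that reads it from an environment variable and builds a bearer-token header; the variable name GITHUB_TOKEN and the placeholder value are assumptions for illustration:

```python
import os

def auth_header(env_var="GITHUB_TOKEN"):
    """Build an Authorization header from a token stored in the
    environment, so the secret never appears in source code."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"Set {env_var} to your GitHub access token")
    return {"Authorization": f"Bearer {token}"}

os.environ["GITHUB_TOKEN"] = "ghp_example"  # placeholder, not a real token
print(auth_header())  # {'Authorization': 'Bearer ghp_example'}
```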

Step 2: Configure GitHub Models in LobeChat

  • Navigate to the Settings interface in LobeChat.
  • Under Language Models, find the GitHub settings.
Entering Access Token
  • Enter the access token you obtained.
  • Select a GitHub model for your AI assistant to start the conversation.
Selecting GitHub Model and Starting Conversation

You are now ready to use the models provided by GitHub for conversations within LobeChat.
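LobeChat handles the request format for you, but if you later call the models directly, GitHub Models exposes an OpenAI-style chat-completions interface. The sketch below only assembles such a request body without sending it; the model id "gpt-4o-mini" is an illustrative assumption, so substitute whichever model you have access to:

```python
import json

def chat_payload(model, user_message, temperature=0.7):
    """Assemble an OpenAI-style chat-completions request body, as used
    by OpenAI-compatible endpoints such as GitHub Models."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }

# Illustrative model id; pick any model available to your account.
body = chat_payload("gpt-4o-mini", "Hello from LobeChat!")
print(json.dumps(body, indent=2))
```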

Related Providers

OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google
Gemini
@Google
14 models
Google's Gemini series, developed by Google DeepMind, comprises its most advanced and versatile AI models. Designed for multimodal use, they support seamless understanding and processing of text, code, images, audio, and video, and run in environments ranging from data centers to mobile devices, significantly broadening where capable AI models can be applied.