LobeChat
Baichuan

Baichuan 4

Baichuan4
The leading model in China, surpassing mainstream foreign models on Chinese-language tasks such as knowledge encyclopedias, long texts, and creative generation. It also offers industry-leading multimodal capabilities and excels across multiple authoritative evaluation benchmarks.
32K

Providers Supporting This Model

Baichuan
Baichuan4
Maximum Context Length
32K
Maximum Output Length
4K
Input Price
$14.01 per 1M tokens
Output Price
$14.01 per 1M tokens
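
As a quick illustration, assuming the listed prices are quoted per one million tokens, the cost of a single call can be estimated from the token usage reported by the API. The helper and the token counts below are illustrative, not part of the listing.

```python
# Rough cost estimate for a Baichuan4 call, assuming the listed prices
# ($14.01 input / $14.01 output) are per 1 million tokens (assumed unit).
INPUT_PRICE_PER_M = 14.01    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 14.01   # USD per 1M output tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + completion_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 20K-token prompt (well within the 32K context)
# and a 3K-token reply (under the 4K output cap).
print(f"${estimate_cost(20_000, 3_000):.4f}")  # ≈ $0.3222
```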

Model Parameters

Randomness
temperature

This setting affects the diversity of the model's responses. Lower values lead to more predictable and typical responses, while higher values encourage more diverse and less common responses. When set to 0, the model always gives the same response to a given input.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 2.00
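
For illustration, a minimal sketch of passing temperature per request through an OpenAI-compatible Python client; the endpoint URL, API key handling, and the assumption that Baichuan's chat API accepts this format should be verified against the provider's documentation.

```python
from openai import OpenAI

# Endpoint, key handling, and model ID below are assumptions to verify
# against your provider configuration.
client = OpenAI(
    api_key="YOUR_BAICHUAN_API_KEY",
    base_url="https://api.baichuan-ai.com/v1",
)

response = client.chat.completions.create(
    model="Baichuan4",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    temperature=0.7,  # within the documented 0.00 ~ 2.00 range; 0 is effectively deterministic
)
print(response.choices[0].message.content)
```
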
Nucleus Sampling
top_p

This setting restricts the model's choices to the smallest set of most likely tokens whose cumulative probability reaches P. Lower values make the model's responses more predictable, while the default setting allows the model to sample from the full vocabulary.

Type
FLOAT
Default Value
1.00
Range
0.00 ~ 1.00
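
To make the cumulative-probability rule concrete, here is a small conceptual sketch of nucleus (top-p) filtering over a toy token distribution; it illustrates the idea only and is not Baichuan's actual sampler.

```python
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches p, then renormalize (conceptual sketch)."""
    kept, cumulative = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(kept.values())
    return {tok: pr / total for tok, pr in kept.items()}

# With p=0.9 only the head of the distribution survives;
# with p=1.0 (the default) every token remains a candidate.
print(top_p_filter({"the": 0.5, "a": 0.3, "an": 0.15, "any": 0.05}, 0.9))
```
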
Topic Freshness
presence_penalty

This setting controls whether the model revisits vocabulary that has already appeared in the text so far. A positive value applies a flat penalty to any token that has appeared at least once, regardless of how often, nudging the model toward new words and topics. Negative values encourage vocabulary reuse.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
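
A conceptual sketch of how a presence penalty is commonly applied (OpenAI-style convention: a flat, one-time logit penalty for any token that has already appeared); Baichuan's exact implementation is not documented here and may differ.

```python
def apply_presence_penalty(logits: dict[str, float],
                           generated: list[str],
                           presence_penalty: float) -> dict[str, float]:
    """Subtract a flat penalty from every token that has already appeared,
    regardless of how many times (conceptual sketch, OpenAI-style)."""
    seen = set(generated)
    return {tok: logit - (presence_penalty if tok in seen else 0.0)
            for tok, logit in logits.items()}

# "cat" is penalized once even though it has already appeared twice.
print(apply_presence_penalty({"cat": 2.0, "dog": 1.5}, ["cat", "cat"], 0.5))
# {'cat': 1.5, 'dog': 1.5}
```
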
Frequency Penalty
frequency_penalty

This setting reduces the model's tendency to repeat vocabulary that has already appeared in the text so far. The penalty grows with how often a token has occurred, so higher values curb verbatim repetition more strongly. Negative values encourage vocabulary reuse.

Type
FLOAT
Default Value
0.00
Range
-2.00 ~ 2.00
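
For contrast, a conceptual sketch of a frequency penalty, which in the common OpenAI-style convention scales with how often a token has already occurred; again, this is an assumption about the mechanism, not Baichuan's confirmed implementation.

```python
from collections import Counter

def apply_frequency_penalty(logits: dict[str, float],
                            generated: list[str],
                            frequency_penalty: float) -> dict[str, float]:
    """Subtract a penalty proportional to each token's occurrence count
    so far (conceptual sketch, OpenAI-style)."""
    counts = Counter(generated)
    return {tok: logit - frequency_penalty * counts[tok]
            for tok, logit in logits.items()}

# "cat" appeared twice, so it is penalized twice as heavily as a single occurrence.
print(apply_frequency_penalty({"cat": 2.0, "dog": 1.5}, ["cat", "cat"], 0.5))
# {'cat': 1.0, 'dog': 1.5}
```
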
Single Response Limit
max_tokens

This setting defines the maximum length that the model can generate in a single response. Setting a higher value allows the model to produce longer replies, while a lower value restricts the length of the response, making it more concise. Adjusting this value appropriately based on different application scenarios can help achieve the desired response length and level of detail.

Type
INT
Default Value
--
Range
0 ~ 4K
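
A minimal sketch of capping output length and detecting truncation, again assuming an OpenAI-compatible client and endpoint (both unverified assumptions).

```python
from openai import OpenAI

# Endpoint and model ID are assumptions; see the temperature example above.
client = OpenAI(api_key="YOUR_BAICHUAN_API_KEY",
                base_url="https://api.baichuan-ai.com/v1")

response = client.chat.completions.create(
    model="Baichuan4",
    messages=[{"role": "user", "content": "Summarize the history of movable-type printing."}],
    max_tokens=512,  # must stay within the 4K output cap
)

choice = response.choices[0]
if choice.finish_reason == "length":
    # The reply was cut off at the max_tokens limit; raise it or request a shorter answer.
    print("[response truncated]")
print(choice.message.content)
```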

Related Models

Baichuan

Baichuan 4 Turbo

Baichuan4-Turbo
The leading model in China, surpassing mainstream foreign models on Chinese-language tasks such as knowledge encyclopedias, long texts, and creative generation. It also possesses industry-leading multimodal capabilities, excelling in multiple authoritative evaluation benchmarks.
32K
Baichuan

Baichuan 4 Air

Baichuan4-Air
The leading model in China, surpassing mainstream foreign models on Chinese-language tasks such as knowledge encyclopedias, long texts, and creative generation. It also possesses industry-leading multimodal capabilities, excelling in multiple authoritative evaluation benchmarks.
32K
Baichuan

Baichuan 3 Turbo

Baichuan3-Turbo
Optimized for high-frequency enterprise scenarios, significantly improving performance and cost-effectiveness. Compared to the Baichuan2 model, content creation improves by 20%, knowledge Q&A by 17%, and role-playing ability by 40%. Overall performance is superior to GPT-3.5.
32K
Baichuan

Baichuan 3 Turbo 128k

Baichuan3-Turbo-128k
Features a 128K ultra-long context window, optimized for high-frequency enterprise scenarios, significantly improving performance and cost-effectiveness. Compared to the Baichuan2 model, content creation improves by 20%, knowledge Q&A by 17%, and role-playing ability by 40%. Overall performance is superior to GPT-3.5.
128K
Baichuan

Baichuan 2 Turbo

Baichuan2-Turbo
Uses search-augmentation technology to comprehensively connect the large model with domain knowledge and knowledge from across the web. Supports uploads of various documents such as PDF and Word, as well as URL input, providing timely and comprehensive information retrieval with accurate and professional output.
32K