Mistral Large rivals GPT-4 and Claude 2: Setup, Testing, and Function Calling
9 min read · Mar 1, 2024
According to Mistral AI's official announcement, Mistral Large is the world's second-ranked model generally available through an API. Have a look at the performance comparison on MMLU (Massive Multitask Language Understanding):
However, if you include both Gemini Ultra 1.0 (83.7%) and Gemini Pro 1.5 (81.9%), they slightly outperform Mistral Large.
Let’s have a look at some of Mistral Large’s characteristics:
- Multilingual support for English, French, Spanish, German, and Italian.
- Extended 32K-token context window for working with long documents.
- Supports function calling and JSON mode to streamline interactions with internal code and databases, facilitating complex applications.
- 81.2% on MMLU, close behind GPT-4, Gemini Ultra, and Gemini Pro.
- Not open-weights; available only through Mistral's platform or Microsoft Azure.
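To give a quick taste of the function-calling and JSON-mode features listed above, here is a minimal sketch of the request payload shape, which follows the OpenAI-style schema Mistral's chat completions API uses. The model name, endpoint URL, and the example `get_order_status` tool are assumptions for illustration; verify field names against the current Mistral API reference.

```python
def build_chat_payload(prompt: str) -> dict:
    """Build a chat completions payload using JSON mode and one declared tool.

    Sketch only: field names follow the OpenAI-style schema Mistral's API
    uses; check the official API reference before relying on them.
    """
    return {
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": prompt}],
        # JSON mode: constrains the model to emit valid JSON output.
        "response_format": {"type": "json_object"},
        # Function calling: declare a tool the model may choose to invoke.
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_order_status",  # hypothetical example tool
                    "description": "Look up the status of an order by ID.",
                    "parameters": {
                        "type": "object",
                        "properties": {"order_id": {"type": "string"}},
                        "required": ["order_id"],
                    },
                },
            }
        ],
    }


payload = build_chat_payload("What is the status of order A-42? Answer in JSON.")
# POST this payload with an Authorization bearer header to the chat
# completions endpoint (e.g. https://api.mistral.ai/v1/chat/completions).
print(payload["response_format"]["type"])  # json_object
```

Both sections later in this article build on exactly these two request fields: `response_format` for JSON mode and `tools` for function calling.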
In this article, I will walk you through:
- Setting up a local environment with Mistral Large
- Mistral AI Model Pricing
- Running initial tests with Mistral Large
- Function calling with Mistral Large
- Further resources to dig deeper