Mistral Small 3.2 24B, released by Mistral on June 20, 2025, is a capable model that accepts text and image inputs, with a focus on instruction following and structured outputs. Its 131,072-token context window lets it handle long inputs efficiently, and this release specifically reduces repetition errors and runaway infinite generations, making it more reliable on complex tasks. It performs well on coding, STEM, and vision benchmarks, and its function and tool calling support makes it useful across a wide range of applications.
Use Cases
Here are a few ways teams apply Mistral Small 3.2 24B (free) in practice, from fast drafting to multimodal understanding. Adapt these ideas to your workflow.
Generate structured outputs from complex inputs
Tackle coding tasks, backed by strong HumanEval+ results
Enhance STEM learning with accurate responses
Process images for detailed analysis
Utilize tool calling for advanced applications
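As a concrete sketch of the last point, here is one way a tool-calling request to this model might be assembled. It uses the common OpenAI-compatible chat-completions format; the `get_weather` tool and its fields are hypothetical examples, not part of any official API.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible format that
# tool-calling endpoints for this model typically accept.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative example tool
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body for a chat-completions call; "tool_choice": "auto" lets
# the model decide whether to invoke the tool.
payload = {
    "model": "mistralai/mistral-small-3.2-24b-instruct:free",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",
}

print(json.dumps(payload, indent=2))
```

Sending this payload to a compatible endpoint would return either a plain text reply or a `tool_calls` entry naming `get_weather` with its arguments, which your code then executes and feeds back.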
Key Features
A quick look at the capabilities that make this model useful in real projects.
Optimized for instruction following
Large 131,072-token context window
Supports text and image inputs
Enhanced function and tool calling
Improved accuracy on WildBench and Arena Hard
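Since structured outputs are a headline feature, the sketch below shows one plausible way to request schema-constrained JSON, again using the widely adopted OpenAI-compatible `response_format` convention. The invoice schema and its field names are illustrative assumptions.

```python
import json

# Hypothetical schema for extracting an invoice summary; the field
# names here are illustrative only.
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["vendor", "total"],
}

payload = {
    "model": "mistralai/mistral-small-3.2-24b-instruct:free",
    "messages": [{"role": "user", "content": "Summarize this invoice: ..."}],
    # structured_outputs is listed among the supported parameters; the
    # json_schema response_format follows the common OpenAI-compatible
    # convention for constraining output to a schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "invoice",
            "schema": invoice_schema,
            "strict": True,
        },
    },
}

print(json.dumps(payload, indent=2))
```

With a schema attached, the model's reply is constrained to valid JSON matching it, which makes downstream parsing far more reliable than free-form prose.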
Specs
Overview
Vendor
mistralai
Model ID
mistralai/mistral-small-3.2-24b-instruct:free
Release
2025-06-20
Modalities & context
Input
image · text
Output
text
Context
131,072 tokens
Parameters & defaults
Supported parameters: frequency_penalty, max_tokens, presence_penalty, repetition_penalty, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
Defaults: temperature 0.3, top_p 0.95
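A small sketch of how these defaults interact with caller-supplied values: any sampling parameter set explicitly in the request overrides the listing defaults. The `build_params` helper is an illustrative utility, not part of any SDK.

```python
# Documented defaults for this listing; explicit values in the request
# body take precedence over them.
DEFAULTS = {"temperature": 0.3, "top_p": 0.95}

def build_params(**overrides):
    """Merge caller overrides onto the documented defaults."""
    params = dict(DEFAULTS)
    params.update(overrides)
    return params

print(build_params(temperature=0.7))  # -> {'temperature': 0.7, 'top_p': 0.95}
```

A low default temperature of 0.3 biases the model toward precise, instruction-faithful output; raising it (as above) trades some of that determinism for more varied phrasing.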
Benchmark tests: Mistral: Mistral Small 3.2 24B (free)
We ran this model against a few representative prompts to show its range. Review the outputs below and judge for yourself.
Text
Prompt:
Write 150 words on how AI might positively upend work, leisure and creativity
The Mistral Small 3.2 24B is a compact language model designed for a variety of natural language processing tasks. With 24 billion parameters, it offers a balance between performance and resource efficiency, making it suitable for applications such as text generation, summarization, and conversational agents. This model is particularly useful for developers and researchers looking to integrate advanced language capabilities into their projects without requiring extensive computational resources.
Notably, while the Mistral Small 3.2 24B can handle a wide range of topics and contexts, its performance may vary based on the complexity of the input and the specificity of the task. Users should be aware that, like many language models, it may generate outputs that require human oversight for accuracy and appropriateness. Additionally, the model is available for free, making it accessible for experimentation and development purposes.
Run this prompt on Upend.AI