
MiniMax M2.1

An update to MiniMax M2 with improved coding and reasoning performance

Model details

MiniMax M2.1 is a 230B-parameter mixture-of-experts (MoE) model with 10B active parameters. It scores competitively on coding and agentic tool-use benchmarks.

MiniMax M2.1 shows improved performance across coding benchmarks
Example usage

Input

from openai import OpenAI
import os

model_url = "" # Copy in from API pane in Baseten model dashboard

# Baseten deployments expose an OpenAI-compatible endpoint,
# so the standard OpenAI client works against model_url
client = OpenAI(
    api_key=os.environ['BASETEN_API_KEY'],
    base_url=model_url
)

# Chat completion
response_chat = client.chat.completions.create(
    model="",
    messages=[
        {"role": "user", "content": "Write FizzBuzz."}
    ],
    temperature=0.6,
    max_tokens=100,
)
print(response_chat)
JSON output
{
    "id": "143",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": null,
            "message": {
                "content": "[Model output here]",
                "role": "assistant",
                "audio": null,
                "function_call": null,
                "tool_calls": null
            }
        }
    ],
    "created": 1741224586,
    "model": "",
    "object": "chat.completion",
    "service_tier": null,
    "system_fingerprint": null,
    "usage": {
        "completion_tokens": 145,
        "prompt_tokens": 38,
        "total_tokens": 183,
        "completion_tokens_details": null,
        "prompt_tokens_details": null
    }
}
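Since M2.1 is benchmarked on agentic tool use, here is a minimal sketch of a tool-calling request through the same OpenAI-compatible endpoint, reusing the client from the example above. The get_weather tool and its schema are illustrative assumptions for this sketch, not something shipped with the model or Baseten; the empty model slug mirrors the pattern above.

# Tool calling (sketch): get_weather is an illustrative assumption,
# not a tool provided by the model or Baseten
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
    }
]

response_tools = client.chat.completions.create(
    model="",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
    temperature=0.6,
)

# If the model decides to call the tool, the call arrives as structured
# JSON in message.tool_calls rather than as free text
message = response_tools.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)

The response follows the standard OpenAI chat-completions shape shown in the JSON output above, where tool_calls is null when no tool is invoked and a populated list otherwise.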
