Instructions for using Mungert/SmallThinker-4BA0.6B-Instruct with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use Mungert/SmallThinker-4BA0.6B-Instruct with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Mungert/SmallThinker-4BA0.6B-Instruct",
    filename="SmallThinker-4BA0.6B-Instruct-bf16.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Mungert/SmallThinker-4BA0.6B-Instruct with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Use Docker
docker model run hf.co/Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
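Whichever install path you choose, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default), so the running model can also be queried from Python. A minimal sketch, assuming the server started above is listening locally and the `requests` package is installed:

```python
# pip install requests
import requests

# llama-server serves an OpenAI-compatible API on port 8080 by default.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    # For a single-model llama-server the model field is mostly informational.
    "model": "Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}

response = requests.post(url, json=payload, timeout=120)
print(response.json()["choices"][0]["message"]["content"])
```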
- LM Studio
- Jan
- vLLM
How to use Mungert/SmallThinker-4BA0.6B-Instruct with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Mungert/SmallThinker-4BA0.6B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Mungert/SmallThinker-4BA0.6B-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
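The same vLLM server can be called from Python instead of curl. A short sketch using the `openai` client against vLLM's default port 8000 (assumes `pip install openai` and the server started above):

```python
# pip install openai
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API on port 8000 by default;
# no real key is needed for a local server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Mungert/SmallThinker-4BA0.6B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```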
- Ollama
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Ollama:
ollama run hf.co/Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
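To call Ollama from code rather than the CLI, the official `ollama` Python client can be used. A minimal sketch, assuming `pip install ollama` and a running Ollama daemon:

```python
# pip install ollama
import ollama

# The model tag matches the `ollama run` command above.
response = ollama.chat(
    model="hf.co/Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])
```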
- Unsloth Studio
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Mungert/SmallThinker-4BA0.6B-Instruct to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Mungert/SmallThinker-4BA0.6B-Instruct to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Mungert/SmallThinker-4BA0.6B-Instruct to start chatting
- Pi
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory: pi
- Hermes Agent
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Docker Model Runner:
docker model run hf.co/Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
- Lemonade
How to use Mungert/SmallThinker-4BA0.6B-Instruct with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Mungert/SmallThinker-4BA0.6B-Instruct:Q4_K_M
Run and chat with the model
lemonade run user.SmallThinker-4BA0.6B-Instruct-Q4_K_M
List all available models
lemonade list
Introduction
Hugging Face | ModelScope | Technical Report
SmallThinker is a family of on-device native Mixture-of-Experts (MoE) language models specially designed for local deployment, co-developed by the IPADS and School of AI at Shanghai Jiao Tong University and Zenergize AI. Designed from the ground up for resource-constrained environments, SmallThinker brings powerful, private, and low-latency AI directly to your personal devices, without relying on the cloud.
Performance
Note: The model is trained mainly on English.
| Model | MMLU | GPQA-diamond | GSM8K | MATH-500 | IFEVAL | LIVEBENCH | HUMANEVAL | Average |
|---|---|---|---|---|---|---|---|---|
| SmallThinker-4BA0.6B-Instruct | 66.11 | 31.31 | 80.02 | 60.60 | 69.69 | 42.20 | 82.32 | 61.75 |
| Qwen3-0.6B | 43.31 | 26.77 | 62.85 | 45.6 | 58.41 | 23.1 | 31.71 | 41.67 |
| Qwen3-1.7B | 64.19 | 27.78 | 81.88 | 63.6 | 69.50 | 35.60 | 61.59 | 57.73 |
| Gemma3nE2b-it | 63.04 | 20.2 | 82.34 | 58.6 | 73.2 | 27.90 | 64.63 | 55.70 |
| Llama-3.2-3B-Instruct | 64.15 | 24.24 | 75.51 | 40 | 71.16 | 15.30 | 55.49 | 49.41 |
| Llama-3.2-1B-Instruct | 45.66 | 22.73 | 1.67 | 14.4 | 48.06 | 13.50 | 37.20 | 26.17 |
For the MMLU evaluation, we use a 0-shot CoT setting.
All models are evaluated in non-thinking mode.
Speed
| Model | Memory(GiB) | i9 14900 | 1+13 8gen4 | rk3588 (16G) | rk3576 | Raspberry PI 5 | RDK X5 | rk3566 |
|---|---|---|---|---|---|---|---|---|
| SmallThinker 4B+sparse ffn +sparse lm_head | 2.24 | 108.17 | 78.99 | 39.76 | 15.10 | 28.77 | 7.23 | 6.33 |
| SmallThinker 4B+sparse ffn +sparse lm_head+limited memory | limit 1G | 29.99 | 20.91 | 15.04 | 2.60 | 0.75 | 0.67 | 0.74 |
| Qwen3 0.6B | 0.6 | 148.56 | 94.91 | 45.93 | 15.29 | 27.44 | 13.32 | 9.76 |
| Qwen3 1.7B | 1.3 | 62.24 | 41.00 | 20.29 | 6.09 | 11.08 | 6.35 | 4.15 |
| Qwen3 1.7B+limited memory | limit 1G | 2.66 | 1.09 | 1.00 | 0.47 | - | - | 0.11 |
| Gemma3n E2B | 1G, theoretically | 36.88 | 27.06 | 12.50 | 3.80 | 6.66 | 3.46 | 2.45 |
Note: i9 14900 and 1+13 8gen4 use 4 threads; the others use the number of threads that achieves maximum speed. All models here have been quantized to q4_0.
You can deploy SmallThinker with offloading support using PowerInfer.
Model Card
| Architecture | Mixture-of-Experts (MoE) |
|---|---|
| Total Parameters | 4B |
| Activated Parameters | 0.6B |
| Number of Layers | 32 |
| Attention Hidden Dimension | 1536 |
| MoE Hidden Dimension (per Expert) | 768 |
| Number of Attention Heads | 12 |
| Number of Experts | 32 |
| Selected Experts per Token | 4 |
| Vocabulary Size | 151,936 |
| Context Length | 32K |
| Attention Mechanism | GQA |
| Activation Function | ReGLU |
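The gap between total and activated parameters follows from the expert routing: only 4 of the 32 experts run per token. A back-of-the-envelope sketch of the split, assuming (for illustration only) that every non-expert parameter is always active:

```python
# Figures taken from the model card table above.
total_params = 4.0e9        # all parameters
activated_params = 0.6e9    # parameters touched per token
num_experts = 32
experts_per_token = 4

# Simplifying assumption: params = shared (always active) + expert (routed),
# and a token activates experts_per_token / num_experts of the expert params.
active_fraction = experts_per_token / num_experts            # 0.125
expert_params = (total_params - activated_params) / (1 - active_fraction)
shared_params = total_params - expert_params

print(f"expert params ~ {expert_params / 1e9:.2f}B, shared params ~ {shared_params / 1e9:.2f}B")
# -> roughly 3.9B of routed expert weights and ~0.1B of always-active weights
```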
How to Run
Transformers
transformers==4.53.3 is required; we are actively working to support the latest version.
The following code snippet illustrates how to use the model to generate content from given inputs.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "PowerInfer/SmallThinker-4BA0.6B-Instruct"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
messages = [
{"role": "user", "content": "Give me a short introduction to large language model."},
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(device)
model_outputs = model.generate(
model_inputs,
do_sample=True,
max_new_tokens=1024
)
output_token_ids = [
model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs))
]
responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(responses)
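To print tokens as they are generated instead of waiting for the full response, transformers' TextStreamer can be passed to generate. A minimal sketch, reusing the model, tokenizer, and model_inputs from the snippet above:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as they are produced, skipping the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    model_inputs,
    do_sample=True,
    max_new_tokens=1024,
    streamer=streamer,
)
```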
ModelScope
ModelScope adopts a Python API similar to (though not entirely identical to) Transformers. For basic usage, simply modify the first line of the above code as follows:
from modelscope import AutoModelForCausalLM, AutoTokenizer
Statement
- Due to the constraints of its model size and the limitations of its training data, SmallThinker's responses may contain factual inaccuracies, biases, or outdated information.
- Users bear full responsibility for independently evaluating and verifying the accuracy and appropriateness of all generated content.
- SmallThinker does not possess genuine comprehension or consciousness and cannot express personal opinions or value judgments.