Instructions to use muverqqw/Noir-mini with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use muverqqw/Noir-mini with Transformers:
# Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="muverqqw/Noir-mini") messages = [ {"role": "user", "content": "Who are you?"}, ] pipe(messages)# Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("muverqqw/Noir-mini") model = AutoModelForCausalLM.from_pretrained("muverqqw/Noir-mini") messages = [ {"role": "user", "content": "Who are you?"}, ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt", ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=40) print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:])) - Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use muverqqw/Noir-mini with vLLM:
Install from pip and serve model
# Install vLLM from pip: pip install vllm # Start the vLLM server: vllm serve "muverqqw/Noir-mini" # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:8000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "muverqqw/Noir-mini", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker
docker model run hf.co/muverqqw/Noir-mini
- SGLang
How to use muverqqw/Noir-mini with SGLang:
Install from pip and serve model
# Install SGLang from pip: pip install sglang # Start the SGLang server: python3 -m sglang.launch_server \ --model-path "muverqqw/Noir-mini" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "muverqqw/Noir-mini", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }'Use Docker images
docker run --gpus all \ --shm-size 32g \ -p 30000:30000 \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HF_TOKEN=<secret>" \ --ipc=host \ lmsysorg/sglang:latest \ python3 -m sglang.launch_server \ --model-path "muverqqw/Noir-mini" \ --host 0.0.0.0 \ --port 30000 # Call the server using curl (OpenAI-compatible API): curl -X POST "http://localhost:30000/v1/chat/completions" \ -H "Content-Type: application/json" \ --data '{ "model": "muverqqw/Noir-mini", "messages": [ { "role": "user", "content": "What is the capital of France?" } ] }' - Unsloth Studio new
How to use muverqqw/Noir-mini with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh # Run unsloth studio unsloth studio -H 0.0.0.0 -p 8888 # Then open http://localhost:8888 in your browser # Search for muverqqw/Noir-mini to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex # Run unsloth studio unsloth studio -H 0.0.0.0 -p 8888 # Then open http://localhost:8888 in your browser # Search for muverqqw/Noir-mini to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required # Open https://huggingface.co/spaces/unsloth/studio in your browser # Search for muverqqw/Noir-mini to start chatting
Load model with FastModel
pip install unsloth from unsloth import FastModel model, tokenizer = FastModel.from_pretrained( model_name="muverqqw/Noir-mini", max_seq_length=2048, ) - Docker Model Runner
How to use muverqqw/Noir-mini with Docker Model Runner:
docker model run hf.co/muverqqw/Noir-mini
💎 Noir-Mini (1.5B)
Noir-Mini is the "Sweet Spot" of the Noir family. Built on the Qwen 2.5 (1.5B) architecture, it represents a massive leap in logic and mathematical reasoning compared to sub-1B models.
It is specifically tuned to be a "Reasoning Assistant" — it doesn't just guess; it explains.
🌟 Why Noir-Mini?
While 0.5B models are great for speed, Noir-Mini is built for tasks that require actual understanding:
- 🧮 Math Champion: With a 54.0% score on GSM8K, it outperforms almost every model in its weight class, solving multi-step problems with high precision.
- 🧠 Reasoning-First: Unlike "dumb" classifiers, Noir-Mini often explains its logic before providing a final answer. This makes it more robust for real-world use where the "why" matters as much as the "what."
- 🎨 High Creativity: A creativity score of 72.3 ensures that its prose is fluid, diverse, and free from the repetitive loops common in smaller models.
- 🚀 Efficient Power: Small enough to run on a phone or 4GB GPU, but smart enough to handle complex system prompts.
📊 Benchmark Results (Internal Test)
Tested using a custom high-precision evaluation suite (100-sample batches):
| Metric | Dataset | Score (%) | Commentary |
|---|---|---|---|
| Mathematics | GSM8K | 54.0% | 🏆 Phenomenal for 1.5B. Solves complex word problems. |
| Creativity | Diversity Eval | 72.3% | Very high vocabulary variety and natural flow. |
| General Knowledge | MMLU (STEM) | 16.0% | Solid grasp of college-level math and science. |
| Logic | ARC (Challenge) | 7.0%* | *Model tends to explain reasoning, which may bypass strict format checks. |
| Model | Parameters | Role | Key Strength |
|---|---|---|---|
| Noir-Lightning | 0.5B | The Pocket Assistant | Ultra-fast, runs on anything |
| Noir-Mini | 1.5B | The Balanced Thinker | High speed with solid grammar |
| Noir-Standard | 3B | The Versatile Workhorse | 65% GSM8K, perfect for 8GB VRAM |
| Noir-Ultra | 7B | The Reasoning Master | 91% SciQ & 84% Math |
| Noir-Starlight | 14B | The Galactic Intelligence | Deep logic & Expert-level STEM |
🛠 Quick Start
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "muverqqw/Noir-Mini"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")
messages = [
{"role": "system", "content": "You are Noir-Mini, a precise and creative AI."},
{"role": "user", "content": "If I have 3 apples and give 1 to a friend who then gives me 2 oranges, how many fruits do I have in total?"}
]
# Recommended for Noir-Mini: Temp 0.4-0.6 for logic, 0.7+ for stories
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda")
gen_tokens = model.generate(input_ids, max_new_tokens=256, temperature=0.5, do_sample=True)
print(tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0])
⚙️ Technical Specifications
Architecture: Qwen 2.5 (1.5B)
Training Context: 32k tokens.
Specialty: Logic-heavy instructions and bilingual (EN/RU) support.
👤 About the Developer
Creator: IceL1ghtning
Release Year: 2025
License: Apache 2.0
- Downloads last month
- 250
Model tree for muverqqw/Noir-mini
Base model
Qwen/Qwen2.5-1.5BCollection including muverqqw/Noir-mini
Evaluation results
- accuracy on GSM8Kself-reported54.000
- accuracy on MMLU Proself-reported16.000