Active filters: Distill
stepenZEN/DeepSeek-R1-Distill-Llama-70B-bitsandbytes-4bit • 72B • Updated • 6
prithivMLmods/QwQ-R1-Distill-1.5B-CoT • Text Generation • 2B • Updated • 18 • 4
mradermacher/QwQ-R1-Distill-1.5B-CoT-GGUF • 2B • Updated • 219 • 1
mradermacher/QwQ-R1-Distill-1.5B-CoT-i1-GGUF • 2B • Updated • 308
stepenZEN/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo • Text Generation • 2B • Updated • 11 • 3
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF • 2B • Updated • 340 • 5
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF • 2B • Updated • 522
adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF • Text Generation • 2B • Updated • 8
RDson/LIMO-R1-Distill-Qwen-7B • 8B • Updated • 4
mradermacher/LIMO-R1-Distill-Qwen-7B-GGUF • 8B • Updated • 193
prithivMLmods/Delta-Pavonis-Qwen-14B • Text Generation • 15B • Updated • 7 • 3
mradermacher/Delta-Pavonis-Qwen-14B-GGUF • 15B • Updated • 32 • 1
mradermacher/Delta-Pavonis-Qwen-14B-i1-GGUF • 15B • Updated • 134 • 1
prithivMLmods/Octantis-QwenR1-1.5B-Q8_0-GGUF • Text Generation • 2B • Updated • 3
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic • Text Generation • 4B • Updated • 6 • 4
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 130 • 1
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 160 • 2
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-i1-GGUF • 4B • Updated • 169 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic • 3B • Updated • 3 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 44
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 182 • 1
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-i1-GGUF • 3B • Updated • 712 • 2
DavidAU/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL • Text Generation • 8B • Updated • 162 • 7
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-GGUF • 8B • Updated • 1.32k • 5
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-i1-GGUF • 8B • Updated • 465 • 1
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-2Bit • Text Generation • 0.7B • Updated • 94
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-3Bit • Text Generation • 1.0B • Updated • 92
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-4Bit • Text Generation • 1B • Updated • 98
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-5Bit • Text Generation • 1B • Updated • 109
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-6Bit • Text Generation • 8B • Updated • 34
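A listing like this can also be regenerated programmatically rather than scraped from the search page. Below is a minimal sketch using the official `huggingface_hub` client's `list_models` with a `search` filter; it assumes the library is installed, and it assumes (not confirmed by the page) that the two unlabeled trailing counts per entry are downloads and likes.

```python
# Sketch: fetch a "Distill" model search from the Hugging Face Hub and
# print one line per model, using the same "•" separator as the listing.
# Assumption: the unlabeled counts in the listing are downloads and likes.

def format_row(model_id: str, downloads: int, likes: int) -> str:
    """Join card fields with the bullet separator used in the listing."""
    return " • ".join([model_id, str(downloads), str(likes)])

if __name__ == "__main__":
    # Network call; requires `pip install huggingface_hub`.
    from huggingface_hub import HfApi

    for m in HfApi().list_models(search="Distill", sort="downloads", limit=10):
        print(format_row(m.id, m.downloads or 0, m.likes or 0))
```

Sorting by downloads rather than relying on the page's default ordering makes the output deterministic across runs, at the cost of not matching the page's relevance ranking.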