Using TurboQuant as a quantization method
AI & ML interests
GGUF for MLX!
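The page does not document how TurboQuant (the "JANGTQ" suffix on the models below) actually works. As a purely generic illustration of the kind of block-wise low-bit weight quantization that GGUF-style formats use — not TurboQuant's real algorithm — a minimal NumPy sketch of 4-bit absmax quantization might look like this:

```python
import numpy as np

def quantize_q4(x, block=32):
    """Block-wise 4-bit absmax quantization.

    Generic GGUF-Q4-style sketch for illustration only; the actual
    TurboQuant/JANGTQ scheme is not described on this page."""
    x = x.reshape(-1, block)
    # One scale per block, mapping the block's absmax to the int range [-7, 7].
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q, scale):
    """Recover an approximation of the original weights."""
    return (q.astype(np.float32) * scale).reshape(-1)

# Round-trip a toy weight vector and measure the reconstruction error.
weights = np.random.default_rng(0).standard_normal(256).astype(np.float32)
q, s = quantize_q4(weights)
approx = dequantize_q4(q, s)
max_err = np.abs(weights - approx).max()
```

The per-element error is bounded by half the block's scale, which is why larger block sizes trade memory savings for reconstruction accuracy.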
Models (44)
JANGQ-AI/MiniMax-M2.7-JANGTQ_K
Text Generation • 20B • Updated
JANGQ-AI/MiniMax-M2.7-JANGTQ
Text Generation • 15B • Updated • 9.68k • 39
JANGQ-AI/DeepSeek-V4-Flash-JANGTQ
Text Generation • 20B • Updated • 8.15k • 1
JANGQ-AI/Ling-2.6-flash-JANGTQ
Text Generation • Updated
JANGQ-AI/Kimi-K2.6-Med-JANGTQ
Text Generation • 45B • Updated • 81
JANGQ-AI/Nemotron-3-Nano-Omni-30B-A3B-JANGTQ4
Image-Text-to-Text • Updated
JANGQ-AI/Nemotron-3-Nano-Omni-30B-A3B-JANGTQ2
Image-Text-to-Text • Updated
JANGQ-AI/Holo3-35B-A3B-JANGTQ
Image-Text-to-Text • 3B • Updated • 99 • 1
JANGQ-AI/Holo3-35B-A3B-JANGTQ4
Image-Text-to-Text • 5B • Updated • 124 • 1
JANGQ-AI/Qwen3.6-35B-A3B-JANGTQ4
Image-Text-to-Text • 5B • Updated • 1.05k