Paper: MANZANO: A Simple and Scalable Unified Multimodal Model with a Hybrid Vision Tokenizer (arXiv:2509.16197, published Sep 19, 2025)
Article: Saving Memory Using Padding-Free Transformer Layers during Finetuning (Jun 11, 2024)
Article: DS-MoE: Making MoE Models More Efficient and Less Memory-Intensive (Apr 9, 2024)