Lemma — Gemma 4 E4B — MLX 4-bit

The mid-sized member of the Lemma model family by Lethean. An EUPL-1.2 fork of Gemma 4 E4B with the Lethean Ethical Kernel (LEK) merged into the weights.

This repo hosts the MLX 4-bit build for native Apple Silicon inference via mlx-lm and mlx-vlm. For the GGUF playground (Ollama, llama.cpp) see lthn/lemma. For the unmodified Google base see LetheanNetwork/lemma.
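The card names mlx-lm as the intended runtime but shows no quick-start, so here is a minimal sketch of running this build with the mlx-lm Python API. It assumes an Apple Silicon Mac with `mlx-lm` installed (`pip install mlx-lm`); the prompt text is illustrative only.

```python
# Minimal generation sketch with mlx-lm (requires Apple Silicon).
from mlx_lm import load, generate

# Downloads the 4-bit weights from this repo on first use.
model, tokenizer = load("lthn/lemma-mlx")

# One-shot completion; max_tokens caps the response length.
text = generate(model, tokenizer, prompt="Hello, Lemma.", max_tokens=64)
print(text)
```

The same repo id works with the `mlx_lm.generate` command-line entry point (`--model lthn/lemma-mlx --prompt "..."`) if you prefer not to write Python.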

Family

| Repo | Format | Bits |
|---|---|---|
| lthn/lemma | GGUF | multi-quant, Q4_K_M → BF16 |
| lthn/lemma-mlx (this repo) | MLX | 4-bit |
| lthn/lemma-mlx-8bit | MLX | 8-bit |
| lthn/lemma-mlx-bf16 | MLX | bf16 |

License

EUPL-1.2. See the Gemma Terms of Use for the terms covering the upstream base model.

Weights

Safetensors, 2B params, tensor types BF16 and U32 (MLX).
