# metacortex-models

GGUF models used by metacortex-ai for on-device AI with receipt-based attestation.

These files are hosted here to provide authoritative SHA-256 reference hashes for the receipt model-verification feature.
## Models

| File | Parameters | Quantization | Size | Upstream Source |
|---|---|---|---|---|
| Qwen3.5-2B-Q4_K_M.gguf | 2B | Q4_K_M | 1.2 GB | unsloth/Qwen3.5-2B-GGUF |
| Qwen3.5-2B-Q8_0.gguf | 2B | Q8_0 | 2.0 GB | unsloth/Qwen3.5-2B-GGUF |
| Qwen3.5-4B-Q4_K_M.gguf | 4B | Q4_K_M | 2.6 GB | unsloth/Qwen3.5-4B-GGUF |
| Qwen3.5-4B-Q8_0.gguf | 4B | Q8_0 | 4.2 GB | unsloth/Qwen3.5-4B-GGUF |
| Qwen3.5-9B-Q4_K_M.gguf | 9B | Q4_K_M | 5.3 GB | unsloth/Qwen3.5-9B-GGUF |
| Qwen3.5-9B-Q8_0.gguf | 9B | Q8_0 | 8.9 GB | unsloth/Qwen3.5-9B-GGUF |
| Qwen3.5-27B-Q4_K_M.gguf | 27B | Q4_K_M | 16 GB | unsloth/Qwen3.5-27B-GGUF |
| Qwen3.5-27B-Q8_0.gguf | 27B | Q8_0 | 27 GB | unsloth/Qwen3.5-27B-GGUF |
| embeddinggemma-300m-qat-Q8_0.gguf | 300M | Q8_0 | 313 MB | ggml-org/embeddinggemma-300m-qat-q8_0-GGUF |
## SHA-256 Checksums

```
aaf42c8b7c3cab2bf3d69c355048d4a0ee9973d48f16c731c0520ee914699223  Qwen3.5-2B-Q4_K_M.gguf
1b04acba824817554f4ce23639bc8495ff70453b8fcb047900c731521021f2c1  Qwen3.5-2B-Q8_0.gguf
00fe7986ff5f6b463e62455821146049db6f9313603938a70800d1fb69ef11a4  Qwen3.5-4B-Q4_K_M.gguf
10cc391b403021dd11c614679d2fd92f611c3681d29e29651b717316965d61e1  Qwen3.5-4B-Q8_0.gguf
03b74727a860a56338e042c4420bb3f04b2fec5734175f4cb9fa853daf52b7e8  Qwen3.5-9B-Q4_K_M.gguf
809626574d0cb43d4becfa56169980da2bb448f2299270f7be443cb89d0a6ae4  Qwen3.5-9B-Q8_0.gguf
84b5f7f112156d63836a01a69dc3f11a6ba63b10a23b8ca7a7efaf52d5a2d806  Qwen3.5-27B-Q4_K_M.gguf
6b0a101b0a86697fe11eabcc1a7db72699a9f3d4b18b6a1ac75ea3fb2c26c450  Qwen3.5-27B-Q8_0.gguf
6fa0c02a9c302be6f977521d399b4de3a46310a4f2621ee0063747881b673f67  embeddinggemma-300m-qat-Q8_0.gguf
```
## Purpose
Each metacortex-ai receipt includes a `gguf_sha256` field (the SHA-256 of the local model file). Users can compare this value against the hashes published here to verify that the model file on disk is genuine and unmodified.

This does not prove that this specific model generated the response; it only shows that an unmodified copy of the model exists on the device. See the receipts spec for the full trust model.
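The comparison described above can be sketched as a short shell check. The file name and expected hash below are taken from the tables in this README; in practice you would substitute the model you actually run and, if comparing against a receipt, the `gguf_sha256` value from that receipt (the exact receipt format is not shown here).

```shell
# Sketch: verify the model file on disk against a published hash.
MODEL=Qwen3.5-9B-Q4_K_M.gguf
EXPECTED=03b74727a860a56338e042c4420bb3f04b2fec5734175f4cb9fa853daf52b7e8

# Compute the local hash (use `shasum -a 256` on macOS).
ACTUAL=$(sha256sum "$MODEL" | awk '{print $1}')

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "OK: $MODEL matches the published hash"
else
  echo "MISMATCH: got $ACTUAL" >&2
fi
```

Alternatively, save the checksum list above to a file named `SHA256SUMS` and run `sha256sum -c SHA256SUMS --ignore-missing` to verify every model you have downloaded in one step.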
## Usage with llama-server

```shell
# Chat model
llama-server --model Qwen3.5-9B-Q4_K_M.gguf --jinja --reasoning-format deepseek -ngl 99

# Embedding model
llama-server --model embeddinggemma-300m-qat-Q8_0.gguf --embedding --pooling mean -c 2048
```