Instructions to use Runink/blogen with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Runink/blogen with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it for local inference
llm = Llama.from_pretrained(
    repo_id="Runink/blogen",
    filename="gemma-3-12b-it-q4_0.gguf",
)

# Illustrative chat request (the prompt below is an example, not a required format)
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short blog post title about local AI."}
    ]
)
print(response["choices"][0]["message"]["content"])
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Runink/blogen with llama.cpp:
Install via Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Runink/blogen:Q4_0

# Run inference directly in the terminal:
llama-cli -hf Runink/blogen:Q4_0
```
Install via WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Runink/blogen:Q4_0

# Run inference directly in the terminal:
llama-cli -hf Runink/blogen:Q4_0
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Runink/blogen:Q4_0

# Run inference directly in the terminal:
./llama-cli -hf Runink/blogen:Q4_0
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Runink/blogen:Q4_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Runink/blogen:Q4_0
```
Use Docker
```bash
docker model run hf.co/Runink/blogen:Q4_0
```
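Whichever install path you pick, a running `llama-server` exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it, assuming the default port 8080 and an illustrative prompt:

```bash
# Query the running llama-server via its OpenAI-compatible endpoint
# (8080 is llama-server's default port; adjust if you passed --port)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Draft a short blog post outline about self-hosted AI."}
    ]
  }'
```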
- LM Studio
- Jan
- Ollama
How to use Runink/blogen with Ollama:
```bash
ollama run hf.co/Runink/blogen:Q4_0
```
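Once pulled, the model is also reachable through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and an illustrative prompt:

```bash
# Non-streaming generation against the local Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/Runink/blogen:Q4_0",
  "prompt": "Write a blog post title about privacy-focused AI.",
  "stream": false
}'
```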
- Unsloth Studio
How to use Runink/blogen with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Runink/blogen to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for Runink/blogen to start chatting
```
Use Hugging Face Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for Runink/blogen to start chatting.
- Docker Model Runner
How to use Runink/blogen with Docker Model Runner:
```bash
docker model run hf.co/Runink/blogen:Q4_0
```
- Lemonade
How to use Runink/blogen with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Runink/blogen:Q4_0
```
Run and chat with the model
```bash
lemonade run user.blogen-Q4_0
```
List all available models
```bash
lemonade list
```
---
license: apache-2.0
tags:
- ai
- blogging
- content-generation
- gemma
- stable-diffusion
- gguf
- blogen
---
# Blogen - Community Edition Models
This repository contains the quantized AI models used by **Blogen Community Edition**, a self-hosted, privacy-focused AI blogging assistant.
These models are optimized for local CPU inference using the [GGUF](https://github.com/ggerganov/ggml) format.
## Included Models
| Model Type | Model Name | Filename | Size | Description |
| :--- | :--- | :--- | :--- | :--- |
| **LLM** | **Google Gemma 3 12B IT** | `gemma-3-12b-it-q4_0.gguf` | ~4.7 GB | Instruction-tuned model for generating blog posts, titles, and SEO metadata. Quantized to 4-bit (Q4_0). |
| **Image Gen** | **Stable Diffusion v1.5** | `stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf` | ~2.0 GB | Text-to-Image model for generating blog cover images. Quantized to Q4_1 for `stable-diffusion.cpp`. |
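As an aside, the image model can be exercised outside of Blogen with `stable-diffusion.cpp` directly. A minimal sketch, assuming a local build of its `sd` binary and an illustrative prompt (the step count mirrors the 30-step pipeline mentioned below):

```bash
# Generate a test cover image with stable-diffusion.cpp
# (assumes the sd binary was built locally and the GGUF file is in the current directory)
./sd -m stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf \
  -p "a clean, minimalist illustration for a tech blog cover" \
  --steps 30 \
  -o cover.png
```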
## New Capabilities (v1.1)
These models power the latest version of Blogen, enabling:
* **Multilingual Blogging**: Native support for generating content in Spanish, French, German, and 50+ languages via Gemma 3 instructions.
* **High-Fidelity Images**: Optimized Stable Diffusion pipeline with 30-step generation for clearer, artifact-free cover images.
* **Enterprise Grade**: Ready for secure, air-gapped deployments with Ed25519 license verification.
## Usage
These models are designed to be automatically downloaded by the **Blogen** Docker container upon startup.
### Manual Download & Run
If you prefer to download them manually (e.g., to save bandwidth on re-deployments):
1. **Download the files** to a local folder (e.g., `./models`); a CLI sketch follows these steps.
2. **Run Blogen Community Edition**:
```bash
docker run -d \
-p 3000:3000 \
-v $(pwd)/models:/app/models \
-v $(pwd)/data:/app/data \
ghcr.io/org-runink/blogen/server:free
```
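If you'd rather script the download than click through the Hub UI, the Hugging Face CLI can fetch the exact files into `./models`. A minimal sketch, assuming the CLI is installed (`pip install -U "huggingface_hub[cli]"`):

```bash
# Fetch both GGUF files from this repository into ./models
huggingface-cli download Runink/blogen gemma-3-12b-it-q4_0.gguf --local-dir ./models
huggingface-cli download Runink/blogen stable-diffusion-v1-5-pruned-emaonly-Q4_1.gguf --local-dir ./models
```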
## License & Acknowledgments
* **Blogen CE**: Apache 2.0
* **Gemma 3**: [Gemma Terms of Use](https://ai.google.dev/gemma/terms) (Google)
* **Stable Diffusion**: [CreativeML Open RAIL-M](https://huggingface.co/runwayml/stable-diffusion-v1-5) (RunwayML / Stability AI)
*These files are quantized redistributions of the original models cited above.*