Instructions to use KoinicLabs/AXL-Chat-Pro with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use KoinicLabs/AXL-Chat-Pro with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="KoinicLabs/AXL-Chat-Pro")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("KoinicLabs/AXL-Chat-Pro", dtype="auto")
- llama-cpp-python
How to use KoinicLabs/AXL-Chat-Pro with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="KoinicLabs/AXL-Chat-Pro",
    filename="axl-chat-pro-f16.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use KoinicLabs/AXL-Chat-Pro with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL

# Run inference directly in the terminal:
llama-cli -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL

# Run inference directly in the terminal:
llama-cli -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL

# Run inference directly in the terminal:
./llama-cli -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL

# Run inference directly in the terminal:
./build/bin/llama-cli -hf KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
Use Docker
docker model run hf.co/KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
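However you install it, llama-server exposes an OpenAI-compatible API, so you can also drive it from Python. A minimal sketch using the openai client package; the port (8080 is llama-server's usual default) and the placeholder API key are assumptions about a local setup, not part of this card:

from openai import OpenAI

# Point an OpenAI-compatible client at the local llama-server
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="KoinicLabs/AXL-Chat-Pro",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)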
- LM Studio
- Jan
- vLLM
How to use KoinicLabs/AXL-Chat-Pro with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KoinicLabs/AXL-Chat-Pro"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KoinicLabs/AXL-Chat-Pro",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
Use Docker
docker model run hf.co/KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
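vLLM also has an offline Python API that skips the server entirely. A minimal sketch; whether this particular byte-level architecture is supported by vLLM is not stated in the card, so treat it as illustrative:

from vllm import LLM, SamplingParams

# Offline generation without starting a server
llm = LLM(model="KoinicLabs/AXL-Chat-Pro")
sampling = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["What is the capital of France?"], sampling)
print(outputs[0].outputs[0].text)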
- SGLang
How to use KoinicLabs/AXL-Chat-Pro with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "KoinicLabs/AXL-Chat-Pro" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KoinicLabs/AXL-Chat-Pro",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "KoinicLabs/AXL-Chat-Pro" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KoinicLabs/AXL-Chat-Pro",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
- Ollama
How to use KoinicLabs/AXL-Chat-Pro with Ollama:
ollama run hf.co/KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
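Once the model has been pulled, it can also be called from Python via the ollama package (pip install ollama). A minimal sketch; the dict-style response access assumes the package's usual chat-response shape:

import ollama

# Chat with the locally served GGUF model through the Ollama API
response = ollama.chat(
    model="hf.co/KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])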
- Unsloth Studio
How to use KoinicLabs/AXL-Chat-Pro with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for KoinicLabs/AXL-Chat-Pro to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for KoinicLabs/AXL-Chat-Pro to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for KoinicLabs/AXL-Chat-Pro to start chatting
- Docker Model Runner
How to use KoinicLabs/AXL-Chat-Pro with Docker Model Runner:
docker model run hf.co/KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
- Lemonade
How to use KoinicLabs/AXL-Chat-Pro with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull KoinicLabs/AXL-Chat-Pro:Q4_K_M_REAL
Run and chat with the model
lemonade run user.AXL-Chat-Pro-Q4_K_M_REAL
List all available models
lemonade list
AXL-Chat-Pro
Advanced conversational AI. 12.8M parameters. Byte-level perplexity 1.34. Context window of 256 bytes. Part of the AXL model family by KoinicLabs.
Model Details
| Property | Value |
|---|---|
| Developed by | KoinicLabs |
| Architecture | Multi-Scale Transformer |
| Parameters | 13M |
| Optimizer | Lion |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 256 bytes |
| d_model | 256 |
| Attention Heads | 4 |
| Layers per Scale | 3 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |
Sources
- Repository: GitHub
- Organization: KoinicLabs
Uses
Direct Use
Advanced conversational AI for code explanation.
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load the model configuration and checkpoint
config = load_config("config.json")
ckpt = torch.load("axl_chat_pro.pt", map_location="cpu")
model = MultiScaleTransformer(config)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Byte-level tokenizer (256 byte values + BOS + EOS)
tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)

with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
Out-of-Scope Use
Not intended for general code generation; this is a task-specific model. For integration with tools such as Continue.dev, LlamaIndex, or LangChain, use the Python API server, which provides OpenAI-compatible endpoints.
Bias, Risks, and Limitations
Byte-level perplexity is not comparable to BPE-level perplexity. The model is specialized for chat, with a maximum context of 256 bytes. IMPORTANT: GGUF files exported for Ollama/LM Studio use only the fine-scale encoder (1/3 of the AXL architecture); the reported PPL applies to the full multi-scale model. For full AXL quality, use the Python API server at http://localhost:8880/v1/completions.
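A minimal sketch of calling that API server from Python, assuming it follows the OpenAI completions schema implied by the /v1/completions path (field names may differ in the actual server):

import requests

# Query the local AXL API server (serves the full multi-scale model)
resp = requests.post(
    "http://localhost:8880/v1/completions",
    json={
        "model": "KoinicLabs/AXL-Chat-Pro",
        "prompt": "def fibonacci():",
        "max_tokens": 100,
        "temperature": 0.8,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])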
Recommendations
- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258 vocab) is not comparable to BPE-level perplexity (32K vocab); see the sketch after this list.
- For better results, use the Lion-optimized version if available.
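As a rough illustration of why the two scales differ (the 4-bytes-per-token figure below is an assumption about typical BPE tokenizers, not a measurement from this card): the same total cross-entropy is spread over more byte positions, so the per-byte perplexity comes out much smaller.

# Same model quality, different perplexity scales (illustrative numbers only)
ppl_token = 10.0        # hypothetical BPE-level perplexity
bytes_per_token = 4.0   # assumed average BPE token length in bytes

# Total nats are conserved: H_byte = H_token / bytes_per_token,
# so PPL_byte = PPL_token ** (1 / bytes_per_token)
ppl_byte = ppl_token ** (1 / bytes_per_token)
print(ppl_byte)         # ~1.78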
Training Details
Training Data
Training data consists of 10 MB of chat pairs. The training code was rewritten from NumPy to PyTorch, and the model was trained with the Lion optimizer for 208 steps in about 10 minutes.
Preprocessing
Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.
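For intuition, a minimal sketch of how such a byte-level tokenizer can be implemented (the repository's ByteTokenizer may assign its special IDs differently):

class MinimalByteTokenizer:
    """Byte-level tokenizer: 256 raw byte values plus BOS (256) and EOS (257)."""
    BOS, EOS = 256, 257
    vocab_size = 258

    def encode(self, text: str) -> list[int]:
        return [self.BOS] + list(text.encode("utf-8")) + [self.EOS]

    def decode(self, ids: list[int]) -> str:
        # Drop special tokens and decode the remaining raw bytes
        return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")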
Speeds, Sizes, Times
| Metric | Value |
|---|---|
| Training Steps | 208 |
| Training Time | 10 min |
| Final Loss | 0.3106 |
Evaluation
Metrics
Perplexity on held-out Python code using byte-level tokenization.
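Byte-level perplexity is the exponential of the mean per-byte cross-entropy. A minimal sketch of the computation, assuming the model returns next-token logits when called on an ID tensor (the repository's evaluation script may differ):

import math
import torch
import torch.nn.functional as F

def byte_perplexity(model, ids: torch.Tensor) -> float:
    # ids: (1, seq_len) byte-level token IDs; each byte is predicted from its prefix
    with torch.no_grad():
        logits = model(ids[:, :-1])                # (1, seq_len - 1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),   # flatten positions
            ids[:, 1:].reshape(-1),                # shifted targets
        )
    return math.exp(loss.item())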
Results
| Metric | Value |
|---|---|
| Perplexity (byte-level) | 1.34 |
| Final Loss | 0.3106 |
| Training Steps | 208 |
| Training Time | 10 min |
Summary: Better quality than AXL-Chat-Lion (PPL 1.34 vs 1.52).
Environmental Impact
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.167 |
| Carbon Emitted | 0.0070 kg CO2 |
| Cloud Provider | None (local CPU) |
Technical Specifications
Model Architecture
Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs. Adaptive gating fusion. SwiGLU feed-forward. RoPE positional encoding.
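As one concrete piece of that description, a minimal sketch of a SwiGLU feed-forward block in PyTorch (the hidden width and layer names are illustrative, not taken from the repository):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """SwiGLU FFN: silu(x W_gate) * (x W_up), projected back to d_model."""

    def __init__(self, d_model: int = 256, d_hidden: int = 1024):
        super().__init__()
        self.w_gate = nn.Linear(d_model, d_hidden, bias=False)
        self.w_up = nn.Linear(d_model, d_hidden, bias=False)
        self.w_down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))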
Compute Infrastructure
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |
Citation
@misc{axl_2026,
title={AXL: AXL-Chat-Pro - Multi-Scale Transformer for CPU Code Generation},
author={Koinic},
year={2026},
url={https://huggingface.co/KoinicLabs}
}
How to Get Started
With Ollama
ollama create axl-chat-pro -f Modelfile
ollama run axl-chat-pro "def fibonacci():"
With Python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load the configuration and the trained checkpoint
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_chat_pro.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Encode the prompt as raw bytes and generate a completion
tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)

with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
- Downloads last month: 10
- Available quantizations: 4-bit, 16-bit
Evaluation results
- Perplexity (byte-level): 1.340 (self-reported)