Instructions for using GestaltLabs/Ornstein-3.6-27B-RYS with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use GestaltLabs/Ornstein-3.6-27B-RYS with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="GestaltLabs/Ornstein-3.6-27B-RYS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GestaltLabs/Ornstein-3.6-27B-RYS")
model = AutoModelForCausalLM.from_pretrained("GestaltLabs/Ornstein-3.6-27B-RYS")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use GestaltLabs/Ornstein-3.6-27B-RYS with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "GestaltLabs/Ornstein-3.6-27B-RYS"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GestaltLabs/Ornstein-3.6-27B-RYS",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
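Because the server speaks the OpenAI API, the official openai Python client can be pointed at it as well; a minimal sketch equivalent to the curl call above (the api_key value is a placeholder, since vLLM does not require one by default):

from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="GestaltLabs/Ornstein-3.6-27B-RYS",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)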
Use Docker
docker model run hf.co/GestaltLabs/Ornstein-3.6-27B-RYS
- SGLang
How to use GestaltLabs/Ornstein-3.6-27B-RYS with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "GestaltLabs/Ornstein-3.6-27B-RYS" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GestaltLabs/Ornstein-3.6-27B-RYS",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "GestaltLabs/Ornstein-3.6-27B-RYS" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "GestaltLabs/Ornstein-3.6-27B-RYS",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
- Docker Model Runner
How to use GestaltLabs/Ornstein-3.6-27B-RYS with Docker Model Runner:
docker model run hf.co/GestaltLabs/Ornstein-3.6-27B-RYS
Ornstein-3.6-27B-RYS
RYS-enhanced variant of the Ornstein-3.6-27B dense model. Layer 33 is duplicated using the Repeat Your Self (RYS) method, improving reasoning and instruction-following performance without increasing active parameter count at inference time.
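Conceptually, an RYS self-merge just re-inserts a copy of an existing decoder block. A hedged sketch of the idea with transformers, assuming the base weights live at a hypothetical GestaltLabs/Ornstein-3.6-27B repo and that the decoder blocks sit in model.model.layers as in standard dense causal LMs:

import copy
import torch
from transformers import AutoModelForCausalLM

# Hypothetical base repo; the published model is the already-merged RYS variant.
model = AutoModelForCausalLM.from_pretrained(
    "GestaltLabs/Ornstein-3.6-27B", torch_dtype=torch.bfloat16
)
layers = model.model.layers
layers.insert(34, copy.deepcopy(layers[33]))  # repeat layer 33: 64 -> 65 layers

# Keep per-layer bookkeeping (KV-cache indices, config) consistent.
for i, layer in enumerate(layers):
    if hasattr(layer, "self_attn") and hasattr(layer.self_attn, "layer_idx"):
        layer.self_attn.layer_idx = i
model.config.num_hidden_layers = len(layers)
model.save_pretrained("Ornstein-3.6-27B-RYS")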
GGUF quantizations: GestaltLabs/Ornstein-3.6-27B-RYS-GGUF
About Gestalt Lab
We are a proudly Canadian research collective working to advance sovereign Canadian AI: open-weight models that Canadians (and everyone else) can run locally, study, and build on without dependence on closed foreign APIs. All training, fine-tuning, and quantization are done on local, self-funded compute. By supporting this work, you help keep frontier model development accessible, transparent, and under Canadian stewardship.
Important: requires a patched llama.cpp
RYS duplicates one of the middle layers, which breaks the hardcoded full_attention_interval = 4 assumption in stock llama.cpp's Qwen3.5 loader. This model is converted with per-layer head_count_kv metadata baked in, so you need a llama.cpp build that reads that per-layer metadata instead of falling back to the interval formula.
Patched fork: https://github.com/DJLougen/llama.cpp (default branch rys-qwen35, fully backward-compatible).
Stock llama.cpp, as well as Ollama, LM Studio, and any other inference runtime built on it, will currently fail to load this model with a check_tensor_dims error. This is expected until/unless the patch is upstreamed.
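To check whether a converted GGUF actually carries the per-layer metadata, the gguf Python package (pip install gguf) can list the header fields; a hedged sketch, since the exact key name depends on the architecture string the converter writes:

from gguf import GGUFReader

reader = GGUFReader("ornstein-3.6-27b-rys-q4_k_m.gguf")
for name, field in reader.fields.items():
    if "head_count_kv" in name:
        # On an RYS conversion this should be an array-typed field
        # (one value per layer) rather than a single scalar.
        print(name, field.types)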
Support This Work
Our training compute is entirely self-funded. If this model is useful to you, consider supporting the lab.
Model Details
- Architecture: Qwen3.5 dense
- Parameters: ~27B active
- Layers: 65 (64 original + 1 RYS-duplicated layer 33)
- Context length: 131,072 tokens
- License: Apache-2.0
Usage
Build the patched llama.cpp
git clone https://github.com/DJLougen/llama.cpp.git
cd llama.cpp
git checkout rys-qwen35
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
Drop -DGGML_CUDA=ON for a CPU-only build. The patch touches the GGUF loader; backend selection is independent.
Download + run
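One way to fetch a quant from the GGUF repo is huggingface_hub; the filename below matches the run command and should be checked against the repo's file list:

# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="GestaltLabs/Ornstein-3.6-27B-RYS-GGUF",
    filename="ornstein-3.6-27b-rys-q4_k_m.gguf",
)
print(path)  # pass this path to llama-server via -m

Then launch the server: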
./build/bin/llama-server \
-m ornstein-3.6-27b-rys-q4_k_m.gguf \
--host 0.0.0.0 --port 8080 \
--n-gpu-layers 99 --ctx-size 131072 \
--flash-attn on --jinja \
-ctk q4_0 -ctv q4_0
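llama-server exposes the same OpenAI-compatible API as the servers above, so it can be queried from Python once it is up; a minimal sketch using the requests package (the model field is effectively ignored when a single model is loaded):

import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "GestaltLabs/Ornstein-3.6-27B-RYS",
        "messages": [{"role": "user", "content": "Who are you?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])

The -ctk q4_0 / -ctv q4_0 flags quantize the KV cache, which substantially reduces memory use at the full 131,072-token context.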
License
Apache 2.0