## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DedeProGames/chennus")
```
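The prompt format Chennus was trained on is not documented here. As a sketch, one might feed the pipeline a numbered SAN move list; the `build_prompt` helper below is a hypothetical illustration, not part of the model's interface:

```python
# Hypothetical helper: format a game's move history as a numbered
# SAN move list, e.g. '1. e4 e5 2. Nf3'. The prompt format is an
# assumption; check the model's training data for the real one.
def build_prompt(moves):
    """Join SAN moves into a numbered move-pair list."""
    parts = []
    for i in range(0, len(moves), 2):
        number = i // 2 + 1
        pair = " ".join(moves[i:i + 2])
        parts.append(f"{number}. {pair}")
    return " ".join(parts)

prompt = build_prompt(["e4", "e5", "Nf3"])  # "1. e4 e5 2. Nf3"
# The pipeline call itself would then look like:
# reply = pipe(prompt, max_new_tokens=5)[0]["generated_text"]
```

The pipeline call is left commented out since it downloads the model weights on first use.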
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DedeProGames/chennus")
model = AutoModelForCausalLM.from_pretrained("DedeProGames/chennus")
```
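Whichever loading path is used, the generated text still has to be turned into a move. A minimal sketch, assuming the output contains standard SAN notation (the `extract_move` helper and its regex are illustrative, not a full SAN parser):

```python
import re

# Hypothetical helper: pull the first SAN-looking move out of the raw
# generated text. The regex covers common SAN forms (pawn and piece
# moves, captures, promotions, castling); it is an illustration only.
SAN_MOVE = re.compile(
    r"\b(O-O(?:-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](?:=[QRBN])?[+#]?)"
)

def extract_move(generated_text):
    """Return the first chess-move-shaped token, or None if absent."""
    match = SAN_MOVE.search(generated_text)
    return match.group(1) if match else None

print(extract_move("4. Nxe5 d6"))  # → Nxe5
```

For real play, the extracted move should also be checked for legality against the current board state, which this sketch does not do.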

# Model Card for Chennus

This is Chennus, my custom chess AI model, trained to play competitive chess on Chess LLM Arena.

Chennus was trained on a dataset of games at around a 1500 Elo rating, and despite this modest training base it achieved strong results on Chess LLM Arena.

Chennus is free for anyone to use for chess fine-tuning, as long as you clearly state in your model card or template that your work is based on Chennus.

**Model size:** 0.1B parameters · **Tensor type:** F32 · **Format:** Safetensors