# Tenete-8M

Tenete-8M is an eight-million-parameter model trained on 577 million tokens. While it can't answer "2 + 2" or write a coherent, logically sound essay, it will surprise you, and the credit goes to nampdn-ai's tiny-textbooks.
## Why "Tenete"?

Tenete means "small canoe" in Taushiro, an endangered language with only one fluent speaker. The name felt fitting: Tenete is the closest Taushiro word to "small" that has an English translation, and the language's single remaining fluent speaker mirrors the tiny, limited scale that Tenete-8M represents.
## Architecture
Tenete-8M uses the Qwen3 architecture.
| Parameter | Value |
|---|---|
| NUM_HIDDEN_LAYERS | 4 |
| MAX_WINDOW_LAYERS | 3 |
| HIDDEN_SIZE | 256 |
| NUM_ATTENTION_HEADS | 4 |
| NUM_KEY_VALUE_HEADS | 4 |
| VOCAB_SIZE | 16000 |
| INTERMEDIATE_SIZE | 1024 |
| ROPE_THETA | 30000.0 |
| MAX_POSITION_EMBEDDINGS | 1024 |
| sliding_window | 384 |
| TIE_WORD_EMBEDDINGS | True |
| Embedding parameters | Non-embedding parameters | Total parameters | KV heads (% of attention heads) | SWA layers (% of total layers) |
|---|---|---|---|---|
| 4,096,000 | 4,197,888 | 8,293,888 | 100% | 75% |
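The settings above map one-to-one onto fields in the model's `config.json`. A minimal sketch (field names follow the Hugging Face Qwen3 config schema; treat this as illustrative, not a verbatim copy of the shipped file):

```python
# Illustrative config values taken from the architecture table above
# (field names follow the Hugging Face Qwen3 config schema).
config = {
    "num_hidden_layers": 4,
    "max_window_layers": 3,
    "hidden_size": 256,
    "num_attention_heads": 4,
    "num_key_value_heads": 4,
    "vocab_size": 16000,
    "intermediate_size": 1024,
    "rope_theta": 30000.0,
    "max_position_embeddings": 1024,
    "sliding_window": 384,
    "tie_word_embeddings": True,
}

# With tied word embeddings, the embedding matrix doubles as the LM head,
# so embedding parameters are counted once:
embedding_params = config["vocab_size"] * config["hidden_size"]
print(f"{embedding_params:,}")  # 4,096,000 -- matches the table
```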
## Training

Tenete-8M was trained on an RTX 2060 6GB for one epoch with a batch size of 4 and 18 gradient-accumulation steps (effective batch size = 72), taking two hours and twenty minutes.
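For scale, the effective batch and optimizer-step count work out as follows. This is a back-of-the-envelope sketch that assumes fully packed 1024-token sequences (the model's context length); the real step count depends on the exact data packing:

```python
# Rough training arithmetic from the numbers above.
batch_size = 4
grad_accum = 18
seq_len = 1024            # assumed: sequences packed to the full context length
total_tokens = 577_000_000

effective_batch = batch_size * grad_accum     # 72 sequences per optimizer step
tokens_per_step = effective_batch * seq_len   # 73,728 tokens per step
optimizer_steps = total_tokens // tokens_per_step

print(effective_batch)   # 72
print(optimizer_steps)   # 7826 -- roughly one epoch's worth of steps
```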
## Dataset

The dataset encompasses 577M tokens drawn from four sources:
- Textbooks (1.2GB): Web data is too noisy, so we use Tiny-Textbooks, a synthetic dataset generated by Nous-Hermes-Llama2-13b.
- Medium Articles (960MB): While web data, especially Medium articles, is noisy, we still need human-written examples.
- Books (284MB): Albeit small, the books subset is still needed to instill creativity in the model.
- Q&A (14MB): Sprinkled in to add more knowledge-dense, question-answering examples.
We chose not to include code, raw web data (e.g., FineWeb, C4), or narrower domains (e.g., arXiv, clinical trials, LessWrong).
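The raw sizes imply the following rough mixture. Note these are computed from on-disk sizes, which only approximate token shares since bytes-per-token varies by source:

```python
# Approximate mixture proportions from the on-disk sizes listed above (in MB).
sizes_mb = {"Textbooks": 1200, "Medium Articles": 960, "Books": 284, "Q&A": 14}
total_mb = sum(sizes_mb.values())

shares = {name: round(100 * mb / total_mb, 1) for name, mb in sizes_mb.items()}
print(total_mb)  # 2458 MB, in line with the ~2.428B characters reported in Stats
print(shares)    # {'Textbooks': 48.8, 'Medium Articles': 39.1, 'Books': 11.6, 'Q&A': 0.6}
```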
## Stats

| Metric | Value |
|---|---|
| Tokens | 577M |
| Words | 384M |
| Characters | 2.428B |
| Bits/byte | 1.7054 |
| Nats/byte | 1.1821 |
| Nats/token | 5.0926 |
| Characters/Token | 4.3082 |
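These figures are internally consistent: bits and nats differ by a factor of log2(e), and characters per token equals nats/token divided by nats/byte (treating one character as roughly one byte for this largely ASCII corpus). A quick check:

```python
import math

# Cross-check the stats table: bits = nats * log2(e).
nats_per_byte = 1.1821
nats_per_token = 5.0926

bits_per_byte = nats_per_byte * math.log2(math.e)
chars_per_token = nats_per_token / nats_per_byte   # chars ~ bytes assumed

print(round(bits_per_byte, 4))    # 1.7054 -- matches the table
print(round(chars_per_token, 3))  # 4.308  -- matches the table
```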
## Training Results
| Epoch | Train Loss | Eval Loss | Train PPL | Eval PPL | Train BPB | Eval BPB | Train BPW | Eval BPW |
|---|---|---|---|---|---|---|---|---|
| 0.07234 | 6.548 | 4.870 | 698.0 | 130.4 | 2.193 | 1.631 | 14.195 | 10.558 |
| 0.14470 | 4.297 | 3.816 | 73.5 | 45.4 | 1.439 | 1.278 | 9.313 | 8.273 |
| 0.21700 | 3.584 | 3.436 | 36.0 | 31.1 | 1.200 | 1.151 | 7.769 | 7.446 |
| 0.28930 | 3.337 | 3.279 | 28.1 | 26.5 | 1.117 | 1.098 | 7.234 | 7.107 |
| 0.36170 | 3.217 | 3.184 | 25.0 | 24.1 | 1.077 | 1.066 | 6.974 | 6.903 |
| 0.43400 | 3.151 | 3.119 | 23.4 | 22.6 | 1.055 | 1.044 | 6.831 | 6.761 |
| 0.50640 | 3.091 | 3.075 | 22.0 | 21.7 | 1.035 | 1.030 | 6.700 | 6.665 |
| 0.57870 | 3.045 | 3.036 | 21.0 | 20.8 | 1.020 | 1.017 | 6.599 | 6.580 |
| 0.65100 | 3.015 | 3.003 | 20.4 | 20.2 | 1.010 | 1.006 | 6.535 | 6.509 |
| 0.72340 | 2.986 | 2.978 | 19.8 | 19.6 | 1.000 | 0.997 | 6.471 | 6.455 |
| 0.79570 | 2.963 | 2.958 | 19.4 | 19.3 | 0.992 | 0.991 | 6.422 | 6.411 |
| 0.86800 | 2.938 | 2.940 | 18.9 | 18.9 | 0.984 | 0.985 | 6.368 | 6.372 |
| 0.94040 | 2.927 | 2.927 | 18.7 | 18.7 | 0.980 | 0.980 | 6.343 | 6.343 |
Note: BPB stands for bits per byte and BPW for bits per word.
BPB is the average number of yes/no questions the model needs to answer to predict the next byte (1.0 BPB = one yes/no question per byte); BPW is the same quantity at the word level.
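All of the derived columns follow from the loss (in nats/token) plus the corpus averages in the Stats table. A sketch for the final eval row (BPW depends on how words are counted, so it lands slightly off the table's 6.343):

```python
import math

# Derive the final-row eval metrics from the eval loss (nats/token).
eval_loss = 2.927
chars_per_token = 4.3082       # from the Stats table; chars ~ bytes assumed
words_per_token = 384 / 577    # 384M words over 577M tokens

eval_ppl = math.exp(eval_loss)                        # perplexity
eval_bpb = eval_loss / math.log(2) / chars_per_token  # bits per byte
eval_bpw = eval_loss / math.log(2) / words_per_token  # bits per word

print(round(eval_ppl, 1))  # 18.7
print(round(eval_bpb, 3))  # 0.980
print(round(eval_bpw, 2))  # 6.35 -- close to the table's 6.343
```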
We decided to evaluate the model on each source to see the difference in perplexity.
| Source | Loss | Perplexity |
|---|---|---|
| Textbooks | 2.02 | 7.57 |
| Q&A | 3.20 | 24.65 |
| Books | 3.73 | 41.88 |
| Medium articles | 3.79 | 44.40 |
The textbooks' perplexity is nearly six times lower than that of the Medium articles. This is expected: Tiny-Textbooks uses a templated structure (e.g., "Section 1", "Conclusion") with an LLM filling in the rest, which yields lower entropy than standard English. Medium articles are structurally, tonally, and stylistically more diverse and therefore less predictable, and the same holds for the books.
## Benchmarks
| Task | Dataset | Metric | 0-shot | 5-shot |
|---|---|---|---|---|
| ANLI R1 | anli_r1 | acc | 0.369 | 0.35 |
| ANLI R2 | anli_r2 | acc | 0.331 | 0.334 |
| ANLI R3 | anli_r3 | acc | 0.3233 | 0.325 |
| ARC Challenge | arc_challenge | acc_norm | 0.221 | 0.2184 |
| ARC Easy | arc_easy | acc_norm | 0.3194 | 0.3215 |
| HellaSwag | hellaswag | acc_norm | 0.2677 | 0.2657 |
| MMLU | mmlu | acc | 0.23 | 0.2413 |
| MMLU Humanities | mmlu | acc | 0.2429 | 0.2446 |
| MMLU Other | mmlu | acc | 0.235 | 0.2288 |
| MMLU Social Sciences | mmlu | acc | 0.2168 | 0.2317 |
| MMLU STEM | mmlu | acc | 0.2185 | 0.2578 |
| PiQA | piqa | acc_norm | 0.5571 | 0.5533 |
| SWAG | swag | acc_norm | 0.3297 | 0.3201 |
| TruthfulQA MC1 | truthfulqa_mc1 | acc | 0.2705 | 0.2705 |
| TruthfulQA MC2 | truthfulqa_mc2 | acc | 0.4591 | 0.4591 |
| GSM8K | gsm8k | exact_match (flexible) | 0.0114 | 0.0114 |
| TruthfulQA Gen | truthfulqa_gen | rouge1_acc | 0.2864 | 0.2864 |
The model scores at or near chance on most tasks, which is expected: an 8M-parameter model cannot store broad world knowledge or reason thoroughly.
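For reference, chance accuracy is 1/(number of answer choices) per task, which makes the pattern easy to read. A quick sketch using the 0-shot scores above (choice counts are the standard ones for each benchmark; ARC Easy uses the acc_norm figure):

```python
# Reported 0-shot accuracy vs. chance level (1 / number of answer choices).
results = {
    "anli_r1": (0.369, 1 / 3),        # 3-way NLI
    "arc_challenge": (0.221, 1 / 4),  # 4 choices (acc_norm)
    "arc_easy": (0.3194, 1 / 4),      # 4 choices (acc_norm)
    "hellaswag": (0.2677, 1 / 4),     # 4 endings (acc_norm)
    "mmlu": (0.230, 1 / 4),           # 4 choices
    "piqa": (0.5571, 1 / 2),          # 2 choices (acc_norm)
}

for task, (acc, chance) in results.items():
    delta = acc - chance
    print(f"{task:14s} acc={acc:.3f} chance={chance:.3f} delta={delta:+.3f}")
# PiQA, ARC Easy, and ANLI R1 sit modestly above chance; the rest are at it.
```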
Note: The full breakdown (LM Harness Output) is right here
## Coherency Benchmark

To evaluate the coherency, factuality, and fluency of our model (and others), we use Qwen3-32B to grade 300 generations produced from an unconditional prompt.
Example configuration:

```python
# --- Inference settings (local or huggingface) ---
NUM_GENERATIONS = 300
MAX_NEW_TOKENS = 256
MIN_NEW_TOKENS = 30
TEMPERATURE = 0.7
TOP_K = 30
TOP_P = 0.9
REP_PENALTY = 1.2
DO_SAMPLE = True
INFERENCE_BATCH = 5

# --- Judge settings ---
JUDGE_MODEL = "qwen/qwen3-32b"
JUDGE_MAX_TOKENS = 80
JUDGE_TEMPERATURE = 0.0
```
| Model | Avg Score | Incoherent | Mostly Coherent | Partially Coherent | Coherent |
|---|---|---|---|---|---|
| pythia-31m-deduped | 0.3196 | 207 | 53 | 35 | 5 |
| Tenete-8M | 0.4862 | 133 | 86 | 53 | 28 |
| gpt2 | 0.5052 | 108 | 96 | 75 | 21 |
Our model outperforms Pythia-31M-deduped while being 3.8 times smaller, and loses to GPT-2 by only ~0.02 points.
Note: In our other model cards, the score may be lower or higher for the same model, because MAX_NEW_TOKENS was set to 80, not 256. Furthermore, we changed the judge_prompt to be more accurate. This doesn't mean the scores from other model cards are invalid; they're just a little bit outdated.
## Generations

**Prompt:** `The`

**Output:**
higher with a great taste of the damp and a more vibrant beauty than others. It also has its own unique style. They have been looking for different shades, and they have a wide range of shades to suit their tastes. Some people find themselves in love with a rich variety of styles that are popular among those who feel it's beautiful. For example, many brands like Nielson are known for their bright colors and bold color. Here is a simple recipe:
- What is your favorite type of hair?
- How do you use your skin to make a smooth look?
- Can I use my hair instead of cotton or wool?
- Can I use my hair instead of cotton?
Section 3: Practical Examples and Real-World Scenarios
Let's say you want to create an outfit that looks very well on your face. You can choose from a traditional one such as Jyubande, Scikit-learn, Shoeson, Ring and Towl, or Aramis, or Ladies. Your hair should be slightly darker than other options. On the other hand, if you're looking for clothes with a black dress, you may need to paint a white palette (like red or blue) around your skin.
Conclusion:
In this lesson, we learned how to set up a dress for a fashion design by adding color and texture, pairing and tanks, and sewing techniques. By following these guidelines, you will become a savvy designer and be able to create a personal touch of clothing while enjoying a strong look at all things.
**Prompt:** `Question: What is AI?\nAnswer:`

**Output:**
Machine learning is a method of computer programs that allows us to learn from each other and perform tasks more efficiently. This makes it easier for students to find information about the world around them, whether they are interested in learning or not.
## III. Why is AI important?
Answer: AI is essential because it helps us make informed decisions based on data and decision-making. It enables us to identify trends, patterns, and relationships with others, which can help us stay focused and motivated during difficult times. Additionally, AI technology has been used by companies like Amazon, Apple, and Google, helping businesses improve their operations, productivity, and profitability.
**Prompt:** `GPT2 is`

**Output:**
a type of gympos that uses the OA to produce an image. This technique has many applications in various fields, including:
1. **Interpreting:** An example of this method is a process by which you create an image using the ERGE model to produce images and other images.
2. **Conditioning:** A method can be used to manipulate the image using the ERGE model. For instance, if you have 3 cmd4(3) and 4 cmd5(3).
**Glossary:**
- Image processing: The process of converting images into images.
- ERPPO: Visual Modeling for images with high quality pixels.
- Image processing: The process of converting images from image to image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image on image image image image image image image image image image image image image image image image image image? Image image image images image image image image image image image image.
### Section 1: Introduction to Image Image Images
Artography is a powerful tool that allows us to visualize and visualize images visually. It provides a wide range of tools and techniques that allow us to observe and understand images accurately. In this lesson, we will focus on image image image images image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image images image image image image image image image image image images image image image image image image image image image image image image image image image image image image image image image image image.
Example 2: Image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image image or image image image image image images image image image image image image image image image image image image image image.
**Prompt:** `The color of the sky is`

**Output:**
1.
2. It takes four-dimensional space in a triangle, where each inch of its shape has two sides and one or more angles to represent them. This creates a sense of depth and meaning that the sky is beautifully different from other shapes like circles and squares.
3. It can be used as a symbol for navigation and appreciation. For example: "The world is just beginning on the earth but still changing itself."
4. Color Scheme (rather colors)
Here's an example of how I learned about color theory:
import matplotlib as pdipis as y = ['Size', 'Size'].last_id(x))
Explanation:
To understand the concept of color theory, let's say you have three sides and one side, with the following equation:
Number of sides of length - 1 * 3 = 10^0/9
Total length - 2 + 5 = 12
Step 1: Find the top of the left hand
If there are no sides of length or number, we would be assigned to the bottom of the right hand. For example: "The world is not only darker than traditional, but also lightning."
Explanation:
Now, let's look at some examples of color theory:
1. What is the difference between color theory and composition?
Answer: Color theory is the study of colors and their properties.
2. How do you know which one part of the left hand?
Answer: To see if it means looking upwards and finding the right pair of shades, they may use geometric patterns or textures.
3. How does a color scheme benefit us?
Answer: A color scheme benefits both sides by providing a visual representation of the same value. Some examples include simplicity, creativity, and minimalism.
## Section 4: Practical Examples
Let's consider an example of how color theory can be applied in everyday life. Suppose you want to add a pair of shoes to your home. You could start by taking a pair of shoes with a black dress, then sketch out the size of the hips. Then, you might use a variety of colors to create a balanced and visually appealing shoe. Once you have chosen the type of shirt, you might choose a pair of shoes that complements the specific design elements. Finally, you might apply the appropriate pair of shoes to create a cohesive experience.
**Prompt:** `dfuhudfhudishf88383838838EXMAPLEisdu938u39uAIs`

**Output:**
Want. We also discussed how to use these techniques and best practices, such as using different strategies for delivering personalized experiences and tracking messages with our loved ones. By understanding the principles of self-reflection and learning from these tips, we can apply these skills to other areas of life.
We showcase both bad generations (such as the third) and good ones (such as the second). As you can see, the model is surprisingly coherent for its size, but it isn't as consistent as larger models: some generations are coherent, while others are, well... broken. A larger model would deliver roughly the same quality from one generation to the next.
## Limitations

- Cannot reliably reason, chat, code, or follow prompts
- May loop or generate repetitive sequences
- Almost all generations are non-factual, incoherent, or simply wrong
## Use Cases

- Educational use and testing
- Deployment on edge devices
- Fine-tuning for downstream tasks
- Or more simply, for fun
## Inference

```python
# =============================================================================
# Inference
# =============================================================================
MODEL_DIR = "Harley-ml/Tenete-8M"  # local path or Hugging Face repo id
TOKENIZER_PATH = MODEL_DIR

# --- Generation settings ---
PROMPT = "The"
MAX_NEW_TOKENS = 256
TEMPERATURE = 0.7
TOP_P = 0.95
TOP_K = 30
REPETITION_PENALTY = 1.2
DO_SAMPLE = True
# =============================================================================

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AddedToken

# ---------------------------------------------------------------------------
# Device
# ---------------------------------------------------------------------------
device = (
    "cuda" if torch.cuda.is_available() else
    "mps" if torch.backends.mps.is_available() else
    "cpu"
)
print(f"Device : {device}")

# ---------------------------------------------------------------------------
# Tokenizer (mirrors training setup)
# ---------------------------------------------------------------------------
def load_tokenizer(path: str):
    # AutoTokenizer handles both local directories and hub repo ids.
    tok = AutoTokenizer.from_pretrained(path)
    specials = {}
    if tok.bos_token is None:
        specials["bos_token"] = AddedToken("<|bos|>", special=True)
    if tok.eos_token is None:
        specials["eos_token"] = AddedToken("<|eos|>", special=True)
    if tok.unk_token is None:
        specials["unk_token"] = AddedToken("<|unk|>", special=True)
    if tok.pad_token is None:
        if tok.eos_token is not None:
            tok.pad_token = tok.eos_token
        else:
            specials["pad_token"] = AddedToken("<|pad|>", special=True)
    if specials:
        tok.add_special_tokens(specials)
    tok.padding_side = "left"
    return tok

print("Loading tokenizer...")
tokenizer = load_tokenizer(TOKENIZER_PATH)
print(f"  Vocab size : {tokenizer.vocab_size}")
print(f"  BOS        : {tokenizer.bos_token!r}")
print(f"  EOS        : {tokenizer.eos_token!r}")
print(f"  PAD        : {tokenizer.pad_token!r} (id={tokenizer.pad_token_id})")

# ---------------------------------------------------------------------------
# Model
# ---------------------------------------------------------------------------
print(f"\nLoading model from {MODEL_DIR} ...")
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    low_cpu_mem_usage=True,
)
model.eval()
model.to(device)

total_params = sum(p.numel() for p in model.parameters())
print(f"  Parameters : {total_params:,}")

# ---------------------------------------------------------------------------
# Generation helper
# ---------------------------------------------------------------------------
def generate(
    prompt: str = PROMPT,
    max_new_tokens: int = MAX_NEW_TOKENS,
    temperature: float = TEMPERATURE,
    top_p: float = TOP_P,
    top_k: int = TOP_K,
    repetition_penalty: float = REPETITION_PENALTY,
    do_sample: bool = DO_SAMPLE,
) -> str:
    bos = tokenizer.bos_token or ""
    full_prompt = bos + prompt
    inputs = tokenizer(
        full_prompt,
        return_tensors="pt",
        add_special_tokens=False,
    ).to(device)
    inputs.pop("token_type_ids", None)  # Qwen3 doesn't use this

    gen_kwargs = dict(
        max_new_tokens=max_new_tokens,
        do_sample=do_sample,
        repetition_penalty=repetition_penalty,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    if do_sample:
        gen_kwargs["temperature"] = temperature
        gen_kwargs["top_p"] = top_p
        gen_kwargs["top_k"] = top_k

    with torch.inference_mode():
        output_ids = model.generate(**inputs, **gen_kwargs)

    # Strip the prompt tokens so we only return what was generated
    prompt_len = inputs["input_ids"].shape[-1]
    new_ids = output_ids[0][prompt_len:]
    return tokenizer.decode(new_ids, skip_special_tokens=True)

# ---------------------------------------------------------------------------
# Run
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    print(f"\nPrompt : {PROMPT!r}")
    print("-" * 60)
    output = generate(PROMPT)
    print("Generated:")
    print(output)
```
## Citation

```bibtex
@misc{tenete-8m,
  title  = {Tenete-8M: An 8M Parameter Language Model that Beats Pythia-31M in Coherence},
  author = {Harley-ml},
  year   = {2026},
  url    = {https://huggingface.co/Harley-ml/Tenete-8M}
}
```