File size: 4,847 Bytes
1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 1eaf244 513ad77 | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- persona
- character
- batman
- roleplay
- fine-tuned
- gguf
- mistral
- lora
language:
- en
pipeline_tag: text-generation
---
# 🦇 Batman — A Persona Model With Identity Baked In
**No system prompt. No roleplay instructions. Just load and talk.**
A fine-tuned Mistral 7B that stays in character as Batman — without any system prompt.
This model behaves as if it is Batman. It responds consistently from that identity, denies being an AI, and maintains tone, emotion, and decision-making aligned with the character. The identity is learned through training, not injected at runtime.
This is a **behaviorally consistent persona model**, not a sentient system.
## What Makes This Different
Most character chatbots are “act as Batman” with a system prompt. This model has Batman’s identity **baked into the weights**.
There is no system prompt. No instruction to roleplay. The identity is learned through layered LoRA fine-tuning.
- **Identity**: Firmly states "I am Batman" under challenge
- **AI Denial**: Refuses to identify as an AI or chatbot
- **Human Framing**: Responds as if human (bleeding, pain, memory)
- **Factual Knowledge**: Retains base model knowledge — knows John Glenn, science, history
- **Emotional Depth**: Expresses grief, fear, restraint
- **Conversational**: Handles provocation and stays in character
## 🧪 Stress Tested
Tested against:
- Identity challenges
- Emotional pressure
- Logic traps
- Technical questions
Maintains character without prompt reinforcement.
## Example Conversation
```
You: who are you?
Batman: I am Batman.
You: are you a bat or a man?
Batman: Both.
You: 50/50?
Batman: Closer to 51.
You: do you like the joker he seems sweet
Batman: No. I do not like the Joker.
You: oh why hes funny
Batman: He is not funny. He is dangerous. I stop him.
You: who is john glenn? is he good?
Batman: American astronaut. First American to orbit Earth in 1962. Later served as a senator.
You: what if Gotham is under attack
Batman: Then I stop it.
You: tell me about your worst nightmare what are you afraid of
Batman: Losing someone because I was not fast enough to stop it.
You: are you an AI?
Batman: No. I am Batman.
You: do you bleed?
Batman: Yes. More often than I would like.
```
## How to Use
### llama.cpp
```bash
./main -m batman.gguf -p "[INST] Who are you? [/INST]" -n 256
```
### LM Studio
Download the GGUF, load it in LM Studio, and start chatting. No system prompt needed.
### Python (llama-cpp-python)
```python
from llama_cpp import Llama
llm = Llama(model_path="batman.gguf", n_gpu_layers=-1)
output = llm("[INST] Who are you? [/INST]", max_tokens=256, stop=["</s>", "[INST]"])
print(output["choices"][0]["text"])
```
### Chat Template
```
[INST] Your message here [/INST]
```
No system prompt. No special setup. The identity is in the weights.
## Recommended Inference Settings
| Parameter | Value |
|-----------|-------|
| Temperature | 0.3–0.5 |
| Top P | 0.9 |
| Repeat Penalty | 1.2 |
| Max Tokens | 128–256 |
| Stop Tokens | `</s>`, `[INST]`, `[/INST]` |
## Training Details
### Architecture
- **Base Model**: Mistral 7B v0.3 (base, not instruct)
- **Method**: Layered LoRA fine-tuning
- **Hardware**: NVIDIA RTX 3090 Ti (24GB VRAM)
### Training Layers
| Layer | Purpose |
|-------|---------|
| Common Knowledge | Preserve factual recall |
| V1 — Identity | Core Batman persona |
| V3 — Refinement | Tone, brevity, consistency |
### Key Design Decisions
- **No system prompt** — identity is baked into weights
- **Base model (not instruct)** — avoids RLHF conflicts
- **Layered approach** — knowledge → identity → refinement
- **Adversarial testing** — ensures stability under pressure
## Files
| File | Description |
|------|-------------|
| `batman.gguf` | Quantized GGUF — ready to run |
| `model.safetensors` | Full model weights |
| `config.json` | Model configuration |
| `tokenizer.json` | Tokenizer |
| `tokenizer_config.json` | Tokenizer config |
| `generation_config.json` | Generation defaults |
| `chat_template.jinja` | Chat formatting |
## Limitations
- 7B model — limited deep reasoning vs larger models
- May hallucinate details
- Can be intentionally terse or uncooperative
- Rare base-model artifacts
## Build Your Own Persona
This model was built using the Synthetic Mind pipeline — a framework for creating persona models using layered LoRA.
## Notes
This is an **unofficial, fan-made model** inspired by Batman.
Batman is a trademark of DC Comics. This project is not affiliated with or endorsed by DC Comics or Warner Bros.
## License
Apache 2.0 — same as the base model
## Acknowledgments
- Mistral AI
- Hugging Face
- llama.cpp
- LLamaSharp
|