Instructions for using JusteLeo/emotion-text-classifier-LLM with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use JusteLeo/emotion-text-classifier-LLM with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="JusteLeo/emotion-text-classifier-LLM",
    filename="EmotionTextClassifierLLM.gguf",
)

# The chat template baked into the GGUF handles the system prompt;
# just pass the text to classify as the user message.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "le ciel est bleu"}]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use JusteLeo/emotion-text-classifier-LLM with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf JusteLeo/emotion-text-classifier-LLM

# Run inference directly in the terminal:
llama-cli -hf JusteLeo/emotion-text-classifier-LLM
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf JusteLeo/emotion-text-classifier-LLM

# Run inference directly in the terminal:
llama-cli -hf JusteLeo/emotion-text-classifier-LLM
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf JusteLeo/emotion-text-classifier-LLM

# Run inference directly in the terminal:
./llama-cli -hf JusteLeo/emotion-text-classifier-LLM
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf JusteLeo/emotion-text-classifier-LLM

# Run inference directly in the terminal:
./build/bin/llama-cli -hf JusteLeo/emotion-text-classifier-LLM
```
Use Docker
```shell
docker model run hf.co/JusteLeo/emotion-text-classifier-LLM
```
- LM Studio
- Jan
- Ollama
How to use JusteLeo/emotion-text-classifier-LLM with Ollama:
```shell
ollama run hf.co/JusteLeo/emotion-text-classifier-LLM
```
- Unsloth Studio
How to use JusteLeo/emotion-text-classifier-LLM with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for JusteLeo/emotion-text-classifier-LLM to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for JusteLeo/emotion-text-classifier-LLM to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for JusteLeo/emotion-text-classifier-LLM to start chatting
```
- Docker Model Runner
How to use JusteLeo/emotion-text-classifier-LLM with Docker Model Runner:
```shell
docker model run hf.co/JusteLeo/emotion-text-classifier-LLM
```
- Lemonade
How to use JusteLeo/emotion-text-classifier-LLM with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull JusteLeo/emotion-text-classifier-LLM
```
Run and chat with the model
```shell
lemonade run user.emotion-text-classifier-LLM-{{QUANT_TAG}}
```

List all available models

```shell
lemonade list
```
Emotion Classification GGUF
Model Description
This repository contains a GGUF version of gemma-3-1b-it-qat, specially configured for zero-shot emotion classification.
The goal is to offer a lightweight, fast, and universal alternative to traditional classifiers (like fine-tuned BERT models). Instead of relying on a model trained on a fixed dataset, this GGUF leverages the power of a foundational language model and a modified chat template to transform it into a specialized text analysis tool.
This approach makes emotion classification highly accessible, requiring no specialized training or complex setups.
✨ Key Features
- ⚡ Fast & Accessible: The GGUF format allows for very fast inference, even on a CPU, making emotion classification accessible without a powerful GPU.
- 🎯 Prompt-Specialized: The model is guided by a detailed, built-in system prompt that instructs it to classify text against a predefined list of 30+ emotions and provide an explanation in a structured JSON format.
- 🔄 Stateless (No Conversation Memory): Thanks to the custom template, the model only considers the user's current input. It has no conversational memory, making it perfect for API-like use cases (one input -> one output).
- 🌍 Multilingual: Based on the Gemma model, it is theoretically capable of classifying emotions in any language supported by the base model. Performance will vary depending on the base model's proficiency in a given language.
- 🔧 Easily Adaptable: While this model is ready for emotion classification, the underlying method can be easily adapted to other NLP tasks, such as sentiment analysis, intent detection, or topic modeling, simply by changing the system prompt.
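As an illustration of that adaptability, a hypothetical replacement system prompt for sentiment analysis (not shipped with this model, shown only as a sketch) might look like:

```text
You are a sentiment analysis assistant. Analyze the given sentence and classify
its sentiment as one of: Positive, Negative, Neutral. Reply in this JSON format:
{ "sentiment": "", "explanation": "" }
```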
🚀 How to Use
This model is designed to be used with any GGUF-compatible runner, such as llama.cpp, LM Studio, Ollama, and others.
The core logic is embedded directly into the chat template within the GGUF file. Most modern tools will automatically detect and use this template. All you need to do is provide your text as the user's prompt, and the model will perform the classification.
Expected Output
The model will return a response in the JSON format specified in the prompt:
Input:

```text
le ciel est bleu
```

Model Output:

```json
{
  "emotions": [ "Neutral" ],
  "explanation": "The sentence simply describes a visual observation of the sky; it's neutral in terms of expressing emotion."
}
```
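Because the reply is plain JSON, it can be consumed programmatically. A minimal sketch (the raw string below is the example reply above, not a live model call):

```python
import json

# Example reply, copied from the model output above (no live inference)
raw = '''{
  "emotions": [ "Neutral" ],
  "explanation": "The sentence simply describes a visual observation of the sky; it's neutral in terms of expressing emotion."
}'''

result = json.loads(raw)
print(result["emotions"])  # ['Neutral']
```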
Emotions List
- Contentment
- Joy
- Euphoria
- Excitement
- Disappointment
- Sadness
- Regret
- Irritation
- Frustration
- Anger
- Anxiety
- Fear
- Astonishment
- Disgust
- Hate
- Pleasure
- Desire
- Affection
- Trust
- Distrust
- Gratitude
- Compassion
- Admiration
- Contempt
- Guilt
- Shame
- Pride
- Jealousy
- Envy
- Hope
- Nostalgia
- Relief
- Curiosity
- Boredom
- Neutral
- Fatigue
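When post-processing results, it can help to verify that the returned labels actually belong to this list. A small helper, assuming the exact spellings above (the `keep_known` name is illustrative, not part of the model):

```python
# Allowed labels, transcribed from the list above
ALLOWED_EMOTIONS = {
    "Contentment", "Joy", "Euphoria", "Excitement", "Disappointment",
    "Sadness", "Regret", "Irritation", "Frustration", "Anger",
    "Anxiety", "Fear", "Astonishment", "Disgust", "Hate",
    "Pleasure", "Desire", "Affection", "Trust", "Distrust",
    "Gratitude", "Compassion", "Admiration", "Contempt", "Guilt",
    "Shame", "Pride", "Jealousy", "Envy", "Hope",
    "Nostalgia", "Relief", "Curiosity", "Boredom", "Neutral", "Fatigue",
}

def keep_known(labels):
    """Drop any label the model invented outside the documented list."""
    return [label for label in labels if label in ALLOWED_EMOTIONS]

print(keep_known(["Joy", "Serenity"]))  # ['Joy']
```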
🛠️ The Trick: The Custom Chat Template
This model's specialization comes from a custom Jinja2 chat template, not from fine-tuning. This template forces the model to adopt a specialized question-answering behavior.
Here's how it works:
- Hardcoded System Prompt: A detailed system prompt is embedded at the very beginning of every request, instructing the model on its role, the list of possible emotions, and the required JSON output format.
- Ignoring History: The template uses a `{% if loop.last %}` condition, which ensures that only the very last user message is processed. This makes the model stateless and perfect for single-shot tasks.
Here is the template baked into this GGUF file:

````jinja
{{ bos_token }}<start_of_turn>system
You are an emotion classification assistant. Your task is to analyze ALL given sentence and classify it emotions chosen from Contentment, Joy, Euphoria, Excitement, Disappointment, Sadness, Regret, Irritation, Frustration, Anger, Anxiety, Fear, Astonishment, Disgust, Hate, Pleasure, Desire, Affection, Trust, Distrust, Gratitude, Compassion, Admiration, Contempt, Guilt, Shame, Pride, Jealousy, Envy, Hope, Nostalgia, Relief, Curiosity, Boredom, Neutral, fatigue, Trust You can choose one or several emotions follow this format
```json
{
"emotions": [ " "
],
"explanation": "This is the explanation related to the listed emotions."
}
```
begin<end_of_turn>
{%- for message in messages %}
{%- if loop.last and message['role'] == 'user' -%}
{{ '<start_of_turn>user
' + message['content'] | trim + '<end_of_turn>
' }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{ '<start_of_turn>model
' }}
{%- endif -%}
````

(The template text above is reproduced verbatim from the GGUF file, including its typos; in the original card the triple backticks were written as `___` to avoid breaking the markdown rendering.)
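The history-ignoring behavior can be mimicked in plain Python. A sketch of the template's selection logic (not the Jinja engine itself):

```python
# Sketch of the template's {% if loop.last and role == 'user' %} logic
messages = [
    {"role": "user", "content": "earlier message"},
    {"role": "assistant", "content": "earlier reply"},
    {"role": "user", "content": "le ciel est bleu"},
]

prompt = ""
for i, message in enumerate(messages):
    # Only the final message is kept, and only if it comes from the user
    if i == len(messages) - 1 and message["role"] == "user":
        prompt += "<start_of_turn>user\n" + message["content"].strip() + "<end_of_turn>\n"

print(prompt)  # only the last user message survives
```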
⚠️ Limitations & Performance
It is important to note that this model has not been evaluated on academic emotion classification benchmarks. Its performance is based on qualitative testing and may vary.
- Accuracy: While results are often very good, they might be less precise than a specialized model fine-tuned on a domain-specific dataset.
- Base Model Dependency: The quality of the classification is entirely dependent on the intrinsic capabilities of the original base model.
- Format Robustness: For very complex, ambiguous, or adversarial inputs, the model might occasionally fail to adhere strictly to the JSON output format.
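Given that occasional formatting slip, downstream code may want a defensive parser. One possible sketch (the Neutral fallback is an arbitrary choice, not part of the model):

```python
import json

def parse_reply(raw):
    """Parse the model's JSON reply, falling back gracefully on malformed output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Hypothetical fallback when the model breaks the JSON contract
        return {"emotions": ["Neutral"], "explanation": raw}
    return data

print(parse_reply("not valid json")["emotions"])  # ['Neutral']
```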
Acknowledgements
- The Google team, for Gemma 3 1B
- LM Studio, used to carry out the tests
- The GGUF editor by Sigbjørn Skjæret