Instructions to use youngbongbong/empathymodel with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use youngbongbong/empathymodel with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it locally
llm = Llama.from_pretrained(
    repo_id="youngbongbong/empathymodel",
    filename="merged-empathy-8.0B-chat-Q4_K_M.gguf",
)

# messages follows the OpenAI chat format; this user turn is taken from the
# example dialogue in the model card below
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."},
    ]
)
```
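`create_chat_completion` returns a dictionary in the OpenAI chat-completion schema, so the reply text sits under `choices[0]["message"]["content"]`. A minimal usage sketch, reusing the `llm` object loaded above; the sampling parameters are illustrative and not values recommended by the model author:

```python
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."},
    ],
    max_tokens=256,    # illustrative settings, not taken from the model card
    temperature=0.7,
)

# The assistant's reply is in the first choice of the OpenAI-style response
print(response["choices"][0]["message"]["content"])
```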
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use youngbongbong/empathymodel with llama.cpp:
Install with Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf youngbongbong/empathymodel:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf youngbongbong/empathymodel:Q4_K_M
```
Install with WinGet (Windows)
```powershell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf youngbongbong/empathymodel:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf youngbongbong/empathymodel:Q4_K_M
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf youngbongbong/empathymodel:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf youngbongbong/empathymodel:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf youngbongbong/empathymodel:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf youngbongbong/empathymodel:Q4_K_M
```
Use Docker
docker model run hf.co/youngbongbong/empathymodel:Q4_K_M
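Whichever install route you choose, once `llama-server` is running it exposes an OpenAI-compatible API, by default on port 8080, so the standard `openai` Python client can be pointed at it. A minimal sketch; the port, the placeholder API key, and the model string are assumptions to adapt to your setup:

```python
# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default; the API key is ignored but required by the client
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="youngbongbong/empathymodel",  # llama-server hosts a single model, so the name is informational
    messages=[{"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."}],
)
print(completion.choices[0].message.content)
```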
- LM Studio
- Jan
- Ollama
How to use youngbongbong/empathymodel with Ollama:
ollama run hf.co/youngbongbong/empathymodel:Q4_K_M
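After Ollama has pulled the model, it can also be called programmatically. A minimal sketch using the `ollama` Python package (a separate `pip install ollama`; the local Ollama service must already be running):

```python
# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/youngbongbong/empathymodel:Q4_K_M",
    messages=[{"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."}],
)
print(response["message"]["content"])
```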
- Unsloth Studio
How to use youngbongbong/empathymodel with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for youngbongbong/empathymodel to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for youngbongbong/empathymodel to start chatting
```
Use Hugging Face Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for youngbongbong/empathymodel to start chatting.
- Docker Model Runner
How to use youngbongbong/empathymodel with Docker Model Runner:
docker model run hf.co/youngbongbong/empathymodel:Q4_K_M
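Docker Model Runner also exposes an OpenAI-compatible endpoint. The sketch below assumes host-side TCP access has been enabled (for example with `docker desktop enable model-runner --tcp 12434` on Docker Desktop); the base URL and port are assumptions, so check the Docker Model Runner documentation for your setup:

```python
# pip install openai
from openai import OpenAI

# Assumed Docker Model Runner base URL; verify the host, port, and path against the Docker docs
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="hf.co/youngbongbong/empathymodel:Q4_K_M",
    messages=[{"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."}],
)
print(completion.choices[0].message.content)
```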
- Lemonade
How to use youngbongbong/empathymodel with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull youngbongbong/empathymodel:Q4_K_M
```
Run and chat with the model
lemonade run user.empathymodel-Q4_K_M
List all available models
lemonade list
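Lemonade Server advertises an OpenAI-compatible API as well, so the same `openai` client pattern applies. The port, base path, and model name below are assumptions based on Lemonade's documented defaults and on the `lemonade list` output; adjust them to your installation:

```python
# pip install openai
from openai import OpenAI

# Assumed Lemonade Server defaults; adjust the port and path if your server is configured differently
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="user.empathymodel-Q4_K_M",  # name shown by `lemonade list` after pulling
    messages=[{"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."}],
)
print(completion.choices[0].message.content)
```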
EM-BLOSSOM: Empathy-Stage Korean Chatbot Model
Model Introduction
This model is a Korean chatbot specialized in empathy-based dialogue (empathic listening and emotion labeling) for the Pre-contemplation stage of the Transtheoretical Model (TTM).
- Generates empathic responses that accurately identify and reflect the user's emotions
- Dialogue scenarios generated with GPT, manually refined, then used to fine-tune the Bllossom LLM
- Distributed in GGUF format, compatible with llama.cpp
Intended Use
- Opening stage of a mental-health chatbot system, and design of responses grounded in emotional understanding
- Serves as the empathic-connection model within a cognitive behavioral therapy (CBT) flow, covering emotion labeling, non-judgmental responses, and reproduction of non-verbal reactions
Model Details
| Item | Description |
|---|---|
| Base model | llama-3-Korean-Bllossom-8B |
| Fine-tuning type | Instruction-tuned, GPT-gen dialogue |
| Format | GGUF (merged-empathy-8.0B-chat-Q4_K_M.gguf) |
| Turns Trained | Approx. 800 turns of empathy scenarios |
| Compatible with | llama.cpp, koboldcpp, web UIs, etc. |
Example Dialogue
[User] These days I just... don't feel like doing anything.
[Chatbot] When you say you don't feel like doing anything, it sounds like you've been feeling really drained.
Has anything been emotionally difficult for you recently?
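In a chat-completion API, a multi-turn exchange like this is replayed as an ordered list of messages. The sketch below uses llama-cpp-python; the Korean strings are the original utterances behind the translated dialogue above, and the final user turn is a hypothetical continuation added only for illustration:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="youngbongbong/empathymodel",
    filename="merged-empathy-8.0B-chat-Q4_K_M.gguf",
)

# Prior turns are replayed in order; the newest user turn comes last
follow_up = llm.create_chat_completion(
    messages=[
        # "These days I just... don't feel like doing anything."
        {"role": "user", "content": "요즘은 그냥... 아무것도 하기 싫어요."},
        # The chatbot's empathic reply from the example dialogue
        {"role": "assistant", "content": "아무것도 하기 싫다는 말씀, 정말 많이 지치고 계신 것 같아요. 혹시 최근에 감정적으로 힘들었던 일이 있었나요?"},
        # Hypothetical next user turn, invented for this illustration
        {"role": "user", "content": "네, 요즘 일 때문에 많이 지쳤어요."},
    ]
)
print(follow_up["choices"][0]["message"]["content"])
```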
Training Data
- Multi-turn emotional-expression scenarios generated with GPT-4
- Manually refined with a focus on emotion labeling, reflective questions, and attentive-listening responses
- About 800 turns in total, Korean-centered, structured for early-stage exploration of emotional expression
Caution
- This model does not replace professional psychological counseling.
- It is recommended for non-commercial research and for prototyping counseling systems.
- Because the model encourages emotional expression, situation-specific filtering may be needed.
Developer Information
- Name: SoYoung Yun
- Email: thdud041113@g.skku.edu
- GitHub: @yunsoyoung2004
This model is a starting point toward a chatbot that listens to your feelings first.
Even if the like count stays at zero, the feelings you share are received with sincerity.