Instructions to use llmware/bling-phi-2-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use llmware/bling-phi-2-gguf with Transformers:
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("llmware/bling-phi-2-gguf", dtype="auto")
- llama-cpp-python
How to use llmware/bling-phi-2-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmware/bling-phi-2-gguf",
    filename="bling-phi2-tool.gguf",
)
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
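Because BLING was fine-tuned with a "<human>:" / "<bot>:" prompt wrapper (see the model card below), RAG-style calls tend to work best in that form. A sketch reusing the llm object above; the passage and question are invented for illustration:
# Illustrative RAG-style call; the passage and question are hypothetical.
context = "The lease term is 24 months, beginning on January 1, 2024."
question = "What is the length of the lease term?"

prompt = "<human>: " + context + "\n" + question + "\n" + "<bot>:"
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"])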
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use llmware/bling-phi-2-gguf with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/bling-phi-2-gguf

# Run inference directly in the terminal:
llama-cli -hf llmware/bling-phi-2-gguf
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/bling-phi-2-gguf

# Run inference directly in the terminal:
llama-cli -hf llmware/bling-phi-2-gguf
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf llmware/bling-phi-2-gguf

# Run inference directly in the terminal:
./llama-cli -hf llmware/bling-phi-2-gguf
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf llmware/bling-phi-2-gguf

# Run inference directly in the terminal:
./build/bin/llama-cli -hf llmware/bling-phi-2-gguf
Use Docker
docker model run hf.co/llmware/bling-phi-2-gguf
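Once llama-server is running via any of the routes above, it exposes an OpenAI-compatible HTTP API. A minimal Python sketch, assuming the server is on its default port 8080 (the prompt text is illustrative):
# Query a local llama-server through its OpenAI-compatible endpoint.
# Assumes llama-server is listening on the default port 8080.
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "What does RAG stand for?"}],
    "max_tokens": 256,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply["choices"][0]["message"]["content"])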
- LM Studio
- Jan
- Ollama
How to use llmware/bling-phi-2-gguf with Ollama:
ollama run hf.co/llmware/bling-phi-2-gguf
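Beyond the interactive CLI, Ollama also serves a local REST API. A minimal sketch, assuming Ollama's default port 11434 and the model name exactly as pulled above (the prompt text is illustrative):
# Call the local Ollama REST API with the model pulled via `ollama run`.
# Assumes Ollama is listening on its default port 11434.
import json
import urllib.request

payload = {
    "model": "hf.co/llmware/bling-phi-2-gguf",
    "prompt": "<human>: What does RAG stand for?\n<bot>:",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])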
- Unsloth Studio
How to use llmware/bling-phi-2-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for llmware/bling-phi-2-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for llmware/bling-phi-2-gguf to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for llmware/bling-phi-2-gguf to start chatting
- Docker Model Runner
How to use llmware/bling-phi-2-gguf with Docker Model Runner:
docker model run hf.co/llmware/bling-phi-2-gguf
- Lemonade
How to use llmware/bling-phi-2-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull llmware/bling-phi-2-gguf
Run and chat with the model
lemonade run user.bling-phi-2-gguf-{{QUANT_TAG}}
List all available models
lemonade list
BLING-PHI-2-GGUF
bling-phi-2-gguf is part of the BLING model series, RAG-instruct trained on top of a Microsoft Phi-2B base model.
BLING models are fine-tuned with high-quality custom instruct datasets, designed for rapid prototyping in RAG scenarios.
For other similar models with comparable size and performance in RAG deployments, please see:
bling-phi-3-gguf
bling-stable-lm-3b-4e1t-v0
bling-sheared-llama-2.7b-0.1
bling-red-pajamas-3b-0.1
Benchmark Tests
Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester
Average of 2 test runs, scoring 1 point for a correct answer, 0.5 points for a partially correct answer or a blank / not-found (NF) response, 0.0 points for an incorrect answer, and -1 point for a hallucination (a toy sketch of this rubric follows the results below).
--Accuracy Score: 93.0 correct out of 100
--Not Found Classification: 95.0%
--Boolean: 85.0%
--Math/Logic: 82.5%
--Complex Questions (1-5): 3 (Above Average - multiple-choice, causal)
--Summarization Quality (1-5): 3 (Above Average)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
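For intuition, the scoring rubric above works out as follows. This is a toy sketch with hypothetical per-question labels, not the actual test harness:
# Toy illustration of the rubric: 1 point for correct, 0.5 for partial or
# not-found, 0 for incorrect, -1 for a hallucination. Labels are hypothetical.
SCORES = {"correct": 1.0, "partial": 0.5, "not_found": 0.5,
          "incorrect": 0.0, "hallucination": -1.0}

def score_run(labels):
    # one test run = 100 labeled answers
    return sum(SCORES[label] for label in labels)

run_1 = ["correct"] * 91 + ["partial"] * 4 + ["incorrect"] * 5  # hypothetical
run_2 = ["correct"] * 91 + ["partial"] * 4 + ["incorrect"] * 5  # hypothetical
average = (score_run(run_1) + score_run(run_2)) / 2
print(average)  # 93.0 under these made-up labels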
Model Description
- Developed by: llmware
- Model type: Phi-2B
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: Microsoft Phi-2B-Base
Uses
The intended use of BLING models is two-fold:
1. Provide high-quality RAG-Instruct models designed for fact-based, no-"hallucination" question-answering in connection with an enterprise RAG workflow.
2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.
Direct Use
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory services.
BLING models have been trained for common RAG scenarios - question-answering, key-value extraction, and basic summarization - as the core instruction types, without the need for complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.
How to Get Started with the Model
To pull the model via API:
from huggingface_hub import snapshot_download
snapshot_download("llmware/bling-phi-2-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
Load in your favorite GGUF inference engine, or try with llmware as follows:
from llmware.models import ModelCatalog

# load bling-phi-2-gguf from the llmware model catalog
model = ModelCatalog().load_model("bling-phi-2-gguf")

# 'query' is your question; 'text_sample' is the source passage used as context
response = model.inference(query, add_context=text_sample)
Note: please review config.json in the repository for prompt wrapping information, details on the model, and full test set.
The BLING model was fine-tuned with a simple "<human>:" and "<bot>:" wrapper, so to get the best results, wrap inference entries as:
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
- Text Passage Context, and
- Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
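Putting the two conventions together, an illustrative sketch with a made-up passage and question:
# Illustrative only: closed-context packaging plus the "<human>:" / "<bot>:"
# wrapper. The passage and question are hypothetical.
text_passage = "The services agreement renews automatically for successive one-year terms."
question = "What is the renewal term?"

my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
# full_prompt can now be sent to any of the GGUF engines shown above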
Model Card Contact
Darren Oberst & llmware team