Instructions for using kevin009/flyingllama-v2 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use kevin009/flyingllama-v2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="kevin009/flyingllama-v2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kevin009/flyingllama-v2")
model = AutoModelForCausalLM.from_pretrained("kevin009/flyingllama-v2")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use kevin009/flyingllama-v2 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "kevin009/flyingllama-v2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kevin009/flyingllama-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
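Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python with only the standard library. Below is a minimal sketch mirroring the curl call above; the helper names are illustrative, and it assumes the server is running on the default port 8000:

```python
import json
from urllib import request

# Default vLLM server endpoint (port 8000); adjust if you started it elsewhere.
API_URL = "http://localhost:8000/v1/completions"

def build_payload(prompt: str) -> dict:
    # Mirrors the fields in the curl example above.
    return {
        "model": "kevin009/flyingllama-v2",
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }

def complete(prompt: str) -> str:
    # POST the JSON payload and return the first completion's text.
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(API_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]

# With the server running:
#   print(complete("Once upon a time,"))
```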
- SGLang
How to use kevin009/flyingllama-v2 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "kevin009/flyingllama-v2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kevin009/flyingllama-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "kevin009/flyingllama-v2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "kevin009/flyingllama-v2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use kevin009/flyingllama-v2 with Docker Model Runner:
```shell
docker model run hf.co/kevin009/flyingllama-v2
```
Model Description

kevin009/flyingllama-v2 is a language model based on the Llama architecture, tailored for text generation and other natural language processing tasks. The model has a hidden size of 1024, 24 hidden layers, and 16 attention heads. It uses a vocabulary of 50304 tokens and the SiLU activation function.

Model Usage

This model is well-suited for text generation, language modeling, and other natural language processing applications that require understanding and generating human-like language.

Limitations

Like any model, kevin009/flyingllama-v2 may have limitations stemming from its architecture and training data. Users should assess its performance for their specific use cases.
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 30.19 |
| AI2 Reasoning Challenge (25-Shot) | 24.74 |
| HellaSwag (10-Shot) | 38.44 |
| MMLU (5-Shot) | 26.37 |
| TruthfulQA (0-shot) | 41.30 |
| Winogrande (5-shot) | 50.28 |
| GSM8k (5-shot) | 0.00 |
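As a quick sanity check, the reported average is simply the arithmetic mean of the six benchmark scores:

```python
# Benchmark scores from the table above
scores = {
    "ARC (25-shot)": 24.74,
    "HellaSwag (10-shot)": 38.44,
    "MMLU (5-shot)": 26.37,
    "TruthfulQA (0-shot)": 41.30,
    "Winogrande (5-shot)": 50.28,
    "GSM8k (5-shot)": 0.00,
}

# Mean over all six benchmarks, rounded to two decimals
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 30.19, matching the "Avg." row
```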
Evaluation results

- AI2 Reasoning Challenge (25-shot), test set: 24.74 (normalized accuracy)
- HellaSwag (10-shot), validation set: 38.44 (normalized accuracy)
- MMLU (5-shot), test set: 26.37 (accuracy)
- TruthfulQA (0-shot), validation set: 41.30 (mc2)
- Winogrande (5-shot), validation set: 50.28 (accuracy)
- GSM8k (5-shot), test set: 0.00 (accuracy)

All results reported by the Open LLM Leaderboard.