Instructions for using uw-ssec/OLMo-7B-Instruct-hf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use uw-ssec/OLMo-7B-Instruct-hf with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="uw-ssec/OLMo-7B-Instruct-hf")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("uw-ssec/OLMo-7B-Instruct-hf")
model = AutoModelForCausalLM.from_pretrained("uw-ssec/OLMo-7B-Instruct-hf")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use uw-ssec/OLMo-7B-Instruct-hf with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "uw-ssec/OLMo-7B-Instruct-hf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "uw-ssec/OLMo-7B-Instruct-hf",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/uw-ssec/OLMo-7B-Instruct-hf
```
  - SGLang
How to use uw-ssec/OLMo-7B-Instruct-hf with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "uw-ssec/OLMo-7B-Instruct-hf" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "uw-ssec/OLMo-7B-Instruct-hf",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "uw-ssec/OLMo-7B-Instruct-hf" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "uw-ssec/OLMo-7B-Instruct-hf",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

  - Docker Model Runner
How to use uw-ssec/OLMo-7B-Instruct-hf with Docker Model Runner:
```shell
docker model run hf.co/uw-ssec/OLMo-7B-Instruct-hf
```
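The curl calls to the vLLM and SGLang servers above translate directly to Python. A minimal sketch using only the standard library (the endpoint and port come from the vLLM example; use port 30000 for SGLang):

```python
import json
from urllib import request

# Build the same request body the curl examples above send.
payload = {
    "model": "uw-ssec/OLMo-7B-Instruct-hf",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}
req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

def ask(r: request.Request) -> str:
    """POST the request and return the assistant's reply text."""
    with request.urlopen(r) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# With a server running locally: print(ask(req))
```

The same endpoint also works with any OpenAI-compatible client library pointed at the local base URL.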
OLMo 7B-Instruct-hf
For more details on OLMo-7B-Instruct, refer to Allen AI's OLMo-7B-Instruct model card.
OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo base models are trained on the Dolma dataset. The Instruct version is trained on the cleaned version of the UltraFeedback dataset.
OLMo 7B Instruct is fine-tuned for better question answering and demonstrates the performance gains that OLMo base models can achieve with existing fine-tuning techniques.
This version is for direct use with Hugging Face Transformers v4.40 and later.
Run instructions are provided in the usage section above.
For faster inference with llama.cpp or other software supporting the GGUF format, a GGUF conversion of this model is available at ssec-uw/OLMo-7B-Instruct-GGUF.
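As a sketch of how the GGUF conversion might be used with llama.cpp (assuming llama.cpp is installed; the quantization filename below is illustrative, not confirmed — check the GGUF repo for the files actually provided):

```shell
# Download a quantized GGUF file from the companion repo
# (filename is an assumption; list the repo's files to see real names):
huggingface-cli download ssec-uw/OLMo-7B-Instruct-GGUF \
  OLMo-7B-Instruct-Q4_K_M.gguf --local-dir .

# Run interactive generation with llama.cpp's CLI:
llama-cli -m OLMo-7B-Instruct-Q4_K_M.gguf \
  -p "What is the capital of France?" -n 64
```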
Contact
For errors in this model card, contact Don or Anant, {landungs, anmittal} at uw dot edu.
Acknowledgement
We would like to thank the hardworking folks at Allen AI for providing the original model.
Additionally, the work to convert the model to the new Hugging Face (hf) format was done by the
University of Washington Scientific Software Engineering Center (SSEC),
as part of the Schmidt Futures Virtual Institute for Scientific Software (VISS).