Instructions for using Tesslate/OmniCoder-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Tesslate/OmniCoder-9B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Tesslate/OmniCoder-9B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Tesslate/OmniCoder-9B")
model = AutoModelForImageTextToText.from_pretrained("Tesslate/OmniCoder-9B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
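For running the Transformers example above on a free Colab or Kaggle GPU (~16 GB VRAM), a 9B model in full precision may not fit. A minimal sketch of 4-bit loading, assuming bitsandbytes is installed and works with this model:

```python
# Hedged sketch: load the model in 4-bit with bitsandbytes so a 9B model
# fits on a ~16 GB notebook GPU. Requires `pip install bitsandbytes`.
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True)
processor = AutoProcessor.from_pretrained("Tesslate/OmniCoder-9B")
model = AutoModelForImageTextToText.from_pretrained(
    "Tesslate/OmniCoder-9B",
    quantization_config=quant_config,
    device_map="auto",  # place layers on the GPU, spill to CPU if needed
)
```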
- Local Apps
- vLLM
How to use Tesslate/OmniCoder-9B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Tesslate/OmniCoder-9B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Tesslate/OmniCoder-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker

```shell
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model "Tesslate/OmniCoder-9B"
```
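Since the vLLM server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch using the openai client, assuming the server above is running on localhost:8000; the api_key is a dummy value, since vLLM does not require one by default:

```python
# Hedged sketch: call the OpenAI-compatible vLLM server started above.
# Requires `pip install openai`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Tesslate/OmniCoder-9B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```

The same snippet should work against the SGLang server below by changing base_url to http://localhost:30000/v1.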
- SGLang
How to use Tesslate/OmniCoder-9B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Tesslate/OmniCoder-9B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Tesslate/OmniCoder-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "Tesslate/OmniCoder-9B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Tesslate/OmniCoder-9B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use Tesslate/OmniCoder-9B with Docker Model Runner:
```shell
docker model run hf.co/Tesslate/OmniCoder-9B
```
35B variant?
Are there plans to release a 35B variant of this model?
I'd love to see a 35B too
Same here, I just tried this model on my toy project and it's really impressive! My favorite so far, by far! Thank you! Now I'm wondering how much better a 35B A3B would be. Will it have only 3B active though?
Working on writing the training pipeline for it right now. We were training on Modal, but since their GPUs are usually spot instances, it keeps crashing on us.
How does this model compare to the Qwen-Coder-Next-80B MoE model in terms of real-world usage?
Thank you smirki for the hard work, can't wait to see the end result of the 35B!
> How does this model compare to the Qwen-Coder-Next-80B MoE model in terms of real-world usage?
That would be a nice comparison to have!
9B is a dense model, right? Wouldn't it be easier to train the 27B?
Upvote also for the 35B variant, because CPU offloading makes it good for low VRAM. A 4B would also be huge for laptops for small edits.
> 9B is a dense model, right? Wouldn't it be easier to train the 27B?
But I can run a 35B with 12 GB of VRAM + 32 GB of RAM, not the 27B...
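For anyone wondering why that VRAM math works out: a rough back-of-the-envelope sketch, assuming Q4 quantization (~0.5 bytes per parameter) and that only the active experts of a hypothetical 35B-A3B MoE need to sit in VRAM while the rest is offloaded to system RAM. The numbers are illustrative, not measured:

```python
# Rough, illustrative memory estimates at Q4 quantization (~0.5 bytes/param).
# The 35B-A3B split is hypothetical; real layouts differ per model.
BYTES_PER_PARAM_Q4 = 0.5

def weight_gb(params_billions: float) -> float:
    """Approximate weight size in GB at Q4 quantization."""
    return params_billions * BYTES_PER_PARAM_Q4

dense_27b = weight_gb(27)   # ~13.5 GB: must all sit in VRAM for full speed
moe_total = weight_gb(35)   # ~17.5 GB of total weights
moe_active = weight_gb(3)   # ~1.5 GB of active experts per token

print(f"dense 27B weights:        ~{dense_27b:.1f} GB (over a 12 GB VRAM budget)")
print(f"MoE 35B total weights:    ~{moe_total:.1f} GB (fits in 32 GB system RAM)")
print(f"MoE 35B active per token: ~{moe_active:.1f} GB (fits easily in 12 GB VRAM)")
```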