Instructions for using tiny-random/devstral-2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use tiny-random/devstral-2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tiny-random/devstral-2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiny-random/devstral-2")
model = AutoModelForCausalLM.from_pretrained("tiny-random/devstral-2")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use tiny-random/devstral-2 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tiny-random/devstral-2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/devstral-2",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/tiny-random/devstral-2
```
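The curl call above simply POSTs a JSON body to the server's OpenAI-compatible endpoint. As a sketch, the same request can be prepared with only the Python standard library (the URL assumes the `vllm serve` command above is running on its default port 8000):

```python
import json
from urllib import request

# Build the same JSON body the curl example sends.
payload = {
    "model": "tiny-random/devstral-2",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumes the server above
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same body works against any OpenAI-compatible endpoint (e.g. the SGLang server below, after changing the port).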
- SGLang
How to use tiny-random/devstral-2 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "tiny-random/devstral-2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/devstral-2",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "tiny-random/devstral-2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/devstral-2",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use tiny-random/devstral-2 with Docker Model Runner:
```shell
docker model run hf.co/tiny-random/devstral-2
```
Chat template (Jinja) used by the tokenizer to build prompts:

```jinja
{#- Default system message if no system prompt is passed. #}
{%- set default_system_message = '' %}
{#- Begin of sequence token. #}
{{- bos_token }}
{#- Handle system prompt if it exists. #}
{#- System prompt supports text content or text chunks. #}
{%- if messages[0]['role'] == 'system' %}
    {{- '[SYSTEM_PROMPT]' -}}
    {%- if messages[0]['content'] is string %}
        {{- messages[0]['content'] -}}
    {%- else %}
        {%- for block in messages[0]['content'] %}
            {%- if block['type'] == 'text' %}
                {{- block['text'] }}
            {%- else %}
                {{- raise_exception('Only text chunks are supported in system message contents.') }}
            {%- endif %}
        {%- endfor %}
    {%- endif %}
    {{- '[/SYSTEM_PROMPT]' -}}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set loop_messages = messages %}
    {%- if default_system_message != '' %}
        {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
    {%- endif %}
{%- endif %}
{#- Tools definition. #}
{%- set tools_definition = '' %}
{%- set has_tools = false %}
{%- if tools is defined and tools is not none and tools|length > 0 %}
    {%- set has_tools = true %}
    {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools|tojson) + '[/AVAILABLE_TOOLS]' %}
    {{- tools_definition }}
{%- endif %}
{#- Check for alternating user/assistant messages. #}
{%- set ns = namespace() %}
{%- set ns.index = 0 %}
{%- for message in loop_messages %}
    {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls|length == 0)) %}
        {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
            {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
        {%- endif %}
        {%- set ns.index = ns.index + 1 %}
    {%- endif %}
{%- endfor %}
{#- Handle conversation messages. #}
{%- for message in loop_messages %}
    {#- User messages support text content or text chunks. #}
    {%- if message['role'] == 'user' %}
        {%- if message['content'] is string %}
            {{- '[INST]' + message['content'] + '[/INST]' }}
        {%- elif message['content']|length > 0 %}
            {{- '[INST]' }}
            {%- set sorted_blocks = message['content']|sort(attribute='type') %}
            {%- for block in sorted_blocks %}
                {%- if block['type'] == 'text' %}
                    {{- block['text'] }}
                {%- else %}
                    {{- raise_exception('Only text chunks are supported in user message content.') }}
                {%- endif %}
            {%- endfor %}
            {{- '[/INST]' }}
        {%- else %}
            {{- raise_exception('User message must have a string or a list of chunks in content.') }}
        {%- endif %}
    {#- Assistant messages support text content, text chunks, or tool calls. #}
    {%- elif message['role'] == 'assistant' %}
        {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
            {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
        {%- endif %}
        {%- if message['content'] is string and message['content'] != '' %}
            {{- message['content'] }}
        {%- elif message['content']|length > 0 %}
            {%- for block in message['content'] %}
                {%- if block['type'] == 'text' %}
                    {{- block['text'] }}
                {%- else %}
                    {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
                {%- endif %}
            {%- endfor %}
        {%- endif %}
        {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
            {%- for tool in message['tool_calls'] %}
                {{- '[TOOL_CALLS]' }}
                {%- set name = tool['function']['name'] %}
                {%- set arguments = tool['function']['arguments'] %}
                {%- if arguments is not string %}
                    {%- set arguments = arguments|tojson|safe %}
                {%- elif arguments == '' %}
                    {%- set arguments = '{}' %}
                {%- endif %}
                {{- name + '[ARGS]' + arguments }}
            {%- endfor %}
        {%- endif %}
        {{- eos_token }}
    {#- Tool messages only support text content. #}
    {%- elif message['role'] == 'tool' %}
        {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
    {#- Raise exception for unsupported roles. #}
    {%- else %}
        {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role'] + '.') }}
    {%- endif %}
{%- endfor %}
```
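To make the token layout concrete, here is a minimal pure-Python sketch (not the template engine itself) of what the template above produces for simple, text-only conversations without tools. The BOS/EOS token strings (`<s>`, `</s>`) are assumptions for illustration; the real values come from the tokenizer.

```python
# Sketch of the prompt layout for text-only conversations:
#   <s>[SYSTEM_PROMPT]...[/SYSTEM_PROMPT][INST]user[/INST]assistant</s>[INST]...[/INST]
# Assumed token strings; the actual bos/eos come from the tokenizer config.
BOS, EOS = "<s>", "</s>"

def render(messages):
    parts = [BOS]
    # Optional leading system message is wrapped in [SYSTEM_PROMPT] tags.
    if messages and messages[0]["role"] == "system":
        parts.append("[SYSTEM_PROMPT]" + messages[0]["content"] + "[/SYSTEM_PROMPT]")
        messages = messages[1:]
    for m in messages:
        if m["role"] == "user":
            parts.append("[INST]" + m["content"] + "[/INST]")
        elif m["role"] == "assistant":
            # Assistant turns are closed with the EOS token.
            parts.append(m["content"] + EOS)
        elif m["role"] == "tool":
            parts.append("[TOOL_RESULTS]" + str(m["content"]) + "[/TOOL_RESULTS]")
        else:
            raise ValueError("unsupported role: " + m["role"])
    return "".join(parts)

print(render([
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Who are you?"},
]))
# -> <s>[SYSTEM_PROMPT]You are helpful.[/SYSTEM_PROMPT][INST]Hi[/INST]Hello!</s>[INST]Who are you?[/INST]
```

Note that this sketch omits the template's tool-call serialization (`[TOOL_CALLS]name[ARGS]{...}`), the `[AVAILABLE_TOOLS]` block, and the role-alternation check.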