Tags: Text Generation · Transformers · Safetensors · qwen3_moe · reasoning · olympiad · mathematics · science · reinforcement-learning · test-time-scaling · long-context · conversational
Instructions for using Simplified-Reasoning/SU-01 with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use Simplified-Reasoning/SU-01 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Simplified-Reasoning/SU-01")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Simplified-Reasoning/SU-01")
model = AutoModelForCausalLM.from_pretrained("Simplified-Reasoning/SU-01")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
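Reasoning models tend to produce long chains of thought, so streaming tokens as they are generated makes interactive use more pleasant. A minimal sketch using Transformers' built-in TextStreamer, reusing the tokenizer, model, and inputs from the snippet above (the max_new_tokens budget here is an illustrative assumption, not a recommended setting):

from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt=True
# suppresses the echoed input so only new tokens are shown.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=512)  # illustrative budget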
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Simplified-Reasoning/SU-01 with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Simplified-Reasoning/SU-01"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Simplified-Reasoning/SU-01",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
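Because the vLLM server speaks the OpenAI chat-completions protocol, any OpenAI-compatible client works as well. A minimal sketch with the openai Python package (the api_key value is a placeholder; vLLM does not require a key by default):

from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Simplified-Reasoning/SU-01",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)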
Use Docker
docker model run hf.co/Simplified-Reasoning/SU-01
- SGLang
How to use Simplified-Reasoning/SU-01 with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Simplified-Reasoning/SU-01" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Simplified-Reasoning/SU-01",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
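The same OpenAI-compatible endpoint can also be called from Python without a dedicated client. A minimal sketch with the requests library, assuming the server settings above:

import requests

# Plain HTTP call to the SGLang server started above.
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "Simplified-Reasoning/SU-01",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])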
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Simplified-Reasoning/SU-01" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Simplified-Reasoning/SU-01",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Docker Model Runner
How to use Simplified-Reasoning/SU-01 with Docker Model Runner:
docker model run hf.co/Simplified-Reasoning/SU-01
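By default docker model run opens an interactive chat with the model. As a hedged sketch, a one-shot prompt can also be passed as a positional argument; this follows Docker Model Runner's documented CLI conventions and should be verified against your Docker version:

# Ask a single question and exit (prompt-as-argument is an assumption
# based on Docker Model Runner's documented CLI).
docker model run hf.co/Simplified-Reasoning/SU-01 "What is the capital of France?"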
Add pipeline tag and library name to SU-01 metadata
#1, opened by nielsr (HF Staff)
Hi there! I'm Niels from the community science team at Hugging Face.
This PR improves the model card for SU-01 by:
- Adding pipeline_tag: text-generation to the YAML metadata for better discoverability.
- Adding library_name: transformers, as the configuration files indicate compatibility with the Transformers library.
- Ensuring the associated research paper is linked in the Markdown content.
The rest of the comprehensive documentation provided by the authors, including the performance benchmarks and training details, has been preserved.
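For readers unfamiliar with model card metadata, the change amounts to two lines in the YAML front matter at the top of the card's README. A minimal sketch of the fields the PR adds (the card's full front matter is not reproduced here):

---
pipeline_tag: text-generation
library_name: transformers
---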
rzzhan changed pull request status to merged