Instructions for using stepfun-ai/Step-3.5-Flash with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use stepfun-ai/Step-3.5-Flash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="stepfun-ai/Step-3.5-Flash", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("stepfun-ai/Step-3.5-Flash", trust_remote_code=True, dtype="auto")
```
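For more control than the pipeline offers, here is a minimal generation sketch, assuming the checkpoint ships a chat template (`device_map="auto"` additionally requires the accelerate package):

```python
# Minimal generation sketch (assumes the checkpoint ships a chat template;
# device_map="auto" requires the accelerate package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stepfun-ai/Step-3.5-Flash"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```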
- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use stepfun-ai/Step-3.5-Flash with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "stepfun-ai/Step-3.5-Flash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stepfun-ai/Step-3.5-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```bash
docker model run hf.co/stepfun-ai/Step-3.5-Flash
```
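Once the pip-installed vLLM server is up on port 8000, you can also call it from Python with the `openai` client instead of curl. A minimal sketch; the "EMPTY" key is a placeholder for a local, unauthenticated server:

```python
# Query the local vLLM server through its OpenAI-compatible API
# (pip install openai; "EMPTY" is a placeholder key for a local server).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="stepfun-ai/Step-3.5-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)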
- SGLang
How to use stepfun-ai/Step-3.5-Flash with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "stepfun-ai/Step-3.5-Flash" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stepfun-ai/Step-3.5-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "stepfun-ai/Step-3.5-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "stepfun-ai/Step-3.5-Flash",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
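The same OpenAI-compatible endpoint on port 30000 can also be streamed from Python. A minimal sketch using the `openai` client; the "EMPTY" key is a placeholder for a local server:

```python
# Stream tokens from the local SGLang server (OpenAI-compatible API on port 30000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
stream = client.chat.completions.create(
    model="stepfun-ai/Step-3.5-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant's reply.
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```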
- Docker Model Runner
How to use stepfun-ai/Step-3.5-Flash with Docker Model Runner:
```bash
docker model run hf.co/stepfun-ai/Step-3.5-Flash
```
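Docker Model Runner also serves an OpenAI-compatible API. A hedged sketch from the host, assuming TCP access is enabled on Docker's documented default port 12434; the enable command and the `/engines/v1` path below come from Docker's documentation and may differ by setup:

```python
# Assumes Docker Model Runner's host TCP access is enabled, e.g.:
#   docker desktop enable model-runner --tcp 12434
# Port 12434 and the /engines/v1 path are Docker's documented defaults
# and may differ in your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="hf.co/stepfun-ai/Step-3.5-Flash",  # models are addressed by their pulled name
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```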
- Community Discussions
- #39: Will crash EVERY time when the context is >240.000 (1 comment; opened about 9 hours ago by Nerdsking)
- #38: 3.6 Soon? (🚀 1; opened about 1 month ago by deleted)
- #37: Is Step 3.5 Flash 2603 coming to huggingface? (👍 4, 1 comment; opened about 1 month ago by jukofyork)
- #36: Error when deploying the bf16 model with vllm:nightly (opened about 2 months ago by Ikkyu321)
- #35: This is the most underreported model when it comes to agentic coding and intelligence (🚀➕ 4, 3 comments; opened about 2 months ago by mayankiit04)
- #33: Install & run stepfun-ai/Step-3.5-Flash easily using llmpm (opened 2 months ago by sarthak-saxena)
- #32: Off-topic responses when running in 4 node SGLang cluster (2 comments; opened 2 months ago by apairmont)
- #30: Add MathArena evaluation result for hmmt/hmmt_feb_2026 (opened 3 months ago by JasperDekoninck)
- #29: Transformers v5 compatibility (opened 3 months ago by hmellor)
- #28: I have reasoning datasets for this (opened 3 months ago by Crownelius)
- #27: Context Management Reproducibility? (👍 1, 2 comments; opened 3 months ago by pandemo)
- #26: Add MathArena evaluation result for aime/aime_2026 (1 comment; opened 3 months ago by JasperDekoninck)
- #25: Hope to see a 4-bit AWQ version (👍➕ 10; opened 3 months ago by leflak)
- #24: Question: Real-world use cases for Step-3.5-Flash (8 comments; opened 3 months ago by Geodd)
- #23: Safe trading (1 comment; opened 3 months ago by Lyonblaze)
- #22: Disabling/Reducing model reasoning (5 comments; opened 3 months ago by Abdallah1997)
- #21: Question about Step 3.5 Flash Base model weights release (👍 1, 3 comments; opened 3 months ago by NodeLinker)
- #18: update-bmk-numbers (opened 3 months ago by mh3467)
- #17: Deployment of Step-3.5-Flash (fp16) via sglang failed (➕ 1, 8 comments; opened 3 months ago by zqzq71)
- #16: Using a prompt like this reduces the model's reasoning length in my testing (3 comments; opened 3 months ago by gopi87)
- #15: fp8 version? (1 comment; opened 3 months ago by CHNtentes)
- #14: NVFP4 (🚀👍 6, 7 comments; opened 3 months ago by reneho)
- #13: Open Network (1 comment; opened 3 months ago by Lawziet)
- #12: The Soul in the Machine (2 comments; opened 3 months ago by qxet)
- #11: Add missing task tag! (opened 3 months ago by MihaiPopa-1)
- #10: BrowseComp with Context Management (👀➕ 4; opened 3 months ago by yanghao1126)
- #8: llama.cpp (4 comments; opened 3 months ago by PotatoSniffer)
- #6: @cerebras please do your thing (🔥 1, 2 comments; opened 3 months ago by marksverdhei)
- #5: Does it support fine-tuning? (opened 3 months ago by taozi555)
- #3: Recommended sampling parameters? (👍 1, 2 comments; opened 3 months ago by sszymczyk)
- #2: How to use PaCoRe? (5 comments; opened 3 months ago by JJKiks)