Instructions to use zai-org/GLM-4.7-Flash with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use zai-org/GLM-4.7-Flash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="zai-org/GLM-4.7-Flash")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-4.7-Flash")
model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-4.7-Flash")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
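The snippet above decodes the full completion at once. If you would rather see tokens printed as they are generated, here is a minimal streaming sketch reusing the `tokenizer`, `model`, and `inputs` from the snippet above (`TextStreamer` is part of Transformers; the `max_new_tokens` value is just an example):

```python
# Stream tokens to stdout as they are generated, reusing the
# tokenizer/model/inputs loaded in the snippet above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=200, streamer=streamer)
```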
- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use zai-org/GLM-4.7-Flash with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "zai-org/GLM-4.7-Flash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "zai-org/GLM-4.7-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
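Because the server exposes an OpenAI-compatible API, you can also call it from Python with the official `openai` client instead of curl. A minimal sketch, assuming the server started above is listening on localhost:8000 (`pip install openai`; the API key is a placeholder unless you launched the server with one):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-4.7-Flash",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```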
Use Docker

```sh
docker model run hf.co/zai-org/GLM-4.7-Flash
```
- SGLang
How to use zai-org/GLM-4.7-Flash with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "zai-org/GLM-4.7-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "zai-org/GLM-4.7-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
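The same OpenAI-compatible endpoint can be hit from Python as well; a minimal sketch with the `requests` library, assuming the server started above is listening on port 30000:

```python
import requests

# POST the same chat-completions payload as the curl example above.
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "zai-org/GLM-4.7-Flash",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```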
Use Docker images

```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "zai-org/GLM-4.7-Flash" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "zai-org/GLM-4.7-Flash",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use zai-org/GLM-4.7-Flash with Docker Model Runner:
```sh
docker model run hf.co/zai-org/GLM-4.7-Flash
```
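Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch with the `openai` client, assuming you have enabled host TCP access on Model Runner's default port 12434; the port, the `/engines/v1` base path, and the model identifier below are assumptions, so check the Docker Model Runner documentation for your setup:

```python
from openai import OpenAI

# Assumed defaults: host TCP access enabled, port 12434, /engines/v1 base path.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="hf.co/zai-org/GLM-4.7-Flash",  # assumed to match the `docker model run` identifier
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```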
Excellent model - short feedback
Hi dear Z.AI team.
I want to congratulate you on the release of this model. It runs super fast and efficiently on most consumer PCs while also delivering fantastic quality for its size.
It is also groundbreaking in some ways, like being one of the first smaller local models to use MLA (Multi-head Latent Attention). It is good at roleplay and creative writing. Really good logic too; it can solve many of my trick questions.
Now for potential improvements:
I did notice that when setting the reasoning budget to 0 (basically turning off thinking), in rare cases the model would still sneak a thought process in or start with cut-off sentences. Maybe that can be improved by feeding it more non-thinking data? Not a big deal because it's rare, but it can happen.
Some people say the model likes to loop at higher context; I can neither confirm nor deny that, but it would be a good idea to investigate anyway. Repetition and looping are a general problem at this model size. Any improvements are very welcome.
In roleplaying it is very sensitive to my writing. If my writing is repetitive over multiple turns, the output suddenly becomes very deterministic (e.g. multiple regenerations lead to almost the same result, or the text is the same as or very similar to the previous assistant turn), even though my sampler settings are of course not deterministic at all.
Its German is decent but not always grammatically correct or natural; it is still far behind even older ChatGPT models like 3.5.
Missing multimodality: I'm aware a GLM 4.7V-Flash is probably coming, but as usual that will probably compromise text generation quality. What I would like to see is native multimodality (i.e. a model pretrained on text, video, audio, etc.) and for all GLM models to support vision at the very least, similar to Gemma 3 or Mistral Small 3.2, which are natively multimodal without compromising text-generation performance.
Thank you for reading my feedback, I wish you all a wonderful day!