PLaMo
How to use mlx-community/plamo-2-1b with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="mlx-community/plamo-2-1b", trust_remote_code=True)

# Load the model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mlx-community/plamo-2-1b", trust_remote_code=True, dtype="auto")

How to use mlx-community/plamo-2-1b with MLX:
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]
# Generate text with mlx-lm
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/plamo-2-1b")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)

How to use mlx-community/plamo-2-1b with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "mlx-community/plamo-2-1b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlx-community/plamo-2-1b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
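The curl request above can also be issued from Python with only the standard library. This is a minimal sketch that mirrors the payload shown; `build_completion_request` is a helper name introduced here, and it assumes the vLLM server from above is listening on localhost:8000:

```python
import json
import urllib.request

def build_completion_request(model: str, prompt: str,
                             base_url: str = "http://localhost:8000",
                             max_tokens: int = 512,
                             temperature: float = 0.5) -> urllib.request.Request:
    """Build a POST request for an OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("mlx-community/plamo-2-1b", "Once upon a time,")

# To actually send it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```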
How to use mlx-community/plamo-2-1b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "mlx-community/plamo-2-1b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "mlx-community/plamo-2-1b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "mlx-community/plamo-2-1b" \
--host 0.0.0.0 \
--port 30000
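Both curl calls return an OpenAI-style completions response. The sketch below shows the fields worth extracting; the sample body is illustrative (the text and token counts are made up), but the field names follow the OpenAI completions schema that vLLM and SGLang emit:

```python
import json

# Illustrative OpenAI-style /v1/completions response body (abridged).
sample = json.loads("""
{
  "object": "text_completion",
  "model": "mlx-community/plamo-2-1b",
  "choices": [{"index": 0, "text": " a land far away...", "finish_reason": "length"}],
  "usage": {"prompt_tokens": 6, "completion_tokens": 512, "total_tokens": 518}
}
""")

def extract_text(response: dict) -> str:
    """Concatenate the text of all returned choices (usually just one)."""
    return "".join(choice["text"] for choice in response["choices"])

text = extract_text(sample)
```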
How to use mlx-community/plamo-2-1b with MLX LM:
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/plamo-2-1b" --prompt "Once upon a time"
How to use mlx-community/plamo-2-1b with Docker Model Runner:
docker model run hf.co/mlx-community/plamo-2-1b
The model mlx-community/plamo-2-1b was converted to MLX format from pfnet/plamo-2-1b using mlx-lm version 0.22.0.
# numba is required for the new PLaMo tokenizer
pip install mlx numba 'mlx-lm>=0.22.0'
python -m mlx_lm.generate \
--model mlx-community/plamo-2-1b \
--prompt '美味しいカレーの作り方を紹介します' \
--ignore-chat-template \
--max-tokens 1024 \
--extra-eos-token '<|plamo:bos|>' \
--temp 0.7 \
--seed 0
==========
[Sample output (Japanese): a step-by-step guide to making delicious curry, covering preparing the ingredients and spices, sautéing onions, meat, and vegetables, adding water and curry roux, seasoning suggestions such as honey and garlic, recommended ingredients, and a closing summary.]
==========
Prompt: 6 tokens, 87.012 tokens-per-sec
Generation: 496 tokens, 52.861 tokens-per-sec
Peak memory: 5.317 GB
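The reported throughput numbers translate directly into wall-clock time; a quick back-of-the-envelope check in Python:

```python
# Figures reported by mlx_lm.generate above
prompt_tokens, prompt_tps = 6, 87.012
gen_tokens, gen_tps = 496, 52.861

prompt_seconds = prompt_tokens / prompt_tps   # time to process the prompt
gen_seconds = gen_tokens / gen_tps            # time spent generating
total_seconds = prompt_seconds + gen_seconds  # roughly 9.5 s end to end
```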
You can also write your code to use this model like this:
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/plamo-2-1b")
prompt = "美味しいカレーの作り方のレシピを紹介します"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
Quantized from base model: pfnet/plamo-2-1b