ProGen2: Exploring the Boundaries of Protein Language Models
Paper: arXiv:2206.13517
How to use hugohrban/progen2-BFD90 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="hugohrban/progen2-BFD90", trust_remote_code=True)
# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-BFD90", trust_remote_code=True, torch_dtype="auto")
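The repository ships a tokenizers-format tokenizer rather than a transformers one, so prompts are encoded manually. A minimal sampling sketch with the directly loaded model, assuming the remote ProGen2 class supports generate() (the prompt reuses the example from further down this card):
from tokenizers import Tokenizer
import torch
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-BFD90")
tokenizer.no_padding()
# encode a protein prompt and add a batch dimension
input_ids = torch.tensor(tokenizer.encode("1MEVVIVTGMSGAGK").ids).unsqueeze(0).to(model.device)
# sample a continuation (assumes the custom ProGen2 code supports generate())
output = model.generate(input_ids, max_new_tokens=64, do_sample=True, temperature=0.5, top_p=0.95)
print(tokenizer.decode(output[0].tolist()))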
How to use hugohrban/progen2-BFD90 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
# The custom ProGen2 architecture requires --trust-remote-code:
vllm serve "hugohrban/progen2-BFD90" --trust-remote-code
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "hugohrban/progen2-BFD90",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
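Since the server exposes an OpenAI-compatible API, the openai Python client can be used instead of curl; a minimal sketch (the protein prompt is illustrative):
from openai import OpenAI
# point the client at the local vLLM server; the API key is unused but required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="hugohrban/progen2-BFD90",
    prompt="1MEVVIVTGMSGAGK",
    max_tokens=128,
    temperature=0.5,
)
print(completion.choices[0].text)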
How to use hugohrban/progen2-BFD90 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "hugohrban/progen2-BFD90" \
--trust-remote-code \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "hugohrban/progen2-BFD90",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "hugohrban/progen2-BFD90" \
--trust-remote-code \
--host 0.0.0.0 \
--port 30000
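The same completions endpoint can be called from Python with requests instead of curl; a minimal sketch:
import requests
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "hugohrban/progen2-BFD90",
        "prompt": "1MEVVIVTGMSGAGK",
        "max_tokens": 128,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])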
How to use hugohrban/progen2-BFD90 with Docker Model Runner:
docker model run hf.co/hugohrban/progen2-BFD90
Mirror of the base ProGen2-BFD90 model (with a slightly modified configuration and forward pass) introduced by Nijkamp et al.
See my GitHub repo for examples of fine-tuning and sampling from this model.
Example usage:
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
import torch
import torch.nn.functional as F
# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-BFD90", trust_remote_code=True, torch_dtype="auto")
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-BFD90")
tokenizer.no_padding()
# prepare input
prompt = "1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)
# forward pass
logits = model(input_ids).logits
# print output probabilities
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")