Instructions to use silicobio/peleke-phi-4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use silicobio/peleke-phi-4 with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-4")
model = PeftModel.from_pretrained(base_model, "silicobio/peleke-phi-4")
- Transformers
How to use silicobio/peleke-phi-4 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="silicobio/peleke-phi-4")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("silicobio/peleke-phi-4", dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use silicobio/peleke-phi-4 with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "silicobio/peleke-phi-4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "silicobio/peleke-phi-4",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
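Once the server is running, it can also be called from Python instead of curl. A minimal sketch using only the standard library; the request body mirrors the curl example above, and the endpoint URL assumes the default vLLM port:

```python
import json
import urllib.request


def build_chat_request(model: str, user_message: str) -> dict:
    """Build the same JSON body as the curl example above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def chat(base_url: str, model: str, user_message: str) -> str:
    """POST a chat-completions request to an OpenAI-compatible server."""
    payload = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a running server, e.g. `vllm serve "silicobio/peleke-phi-4"`
    print(chat("http://localhost:8000", "silicobio/peleke-phi-4",
               "What is the capital of France?"))
```

The same sketch works against the SGLang server below by changing the base URL to port 30000.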
- SGLang
How to use silicobio/peleke-phi-4 with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "silicobio/peleke-phi-4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "silicobio/peleke-phi-4",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "silicobio/peleke-phi-4" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "silicobio/peleke-phi-4",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
- Docker Model Runner
How to use silicobio/peleke-phi-4 with Docker Model Runner:
docker model run hf.co/silicobio/peleke-phi-4
Model Card for peleke-phi-4
This model is a fine-tuned version of microsoft/phi-4 for antibody sequence generation. It takes an antigen sequence as input and returns the Fv portions of novel heavy- and light-chain antibody sequences.
Quick start
- Load in the Model
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'silicobio/peleke-phi-4'
config = PeftConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, model_name).cuda()
- Format your Input
This model uses <epi> and </epi> to annotate epitope residues of interest.
It may be easier to annotate with other characters, such as square brackets. For example: ...CSFS[S][F][V]L[N]WY....
Then, use the following function to properly format the input.
import re

def format_prompt(antigen_sequence):
    # Convert bracket-annotated residues, e.g. [S], into <epi>S</epi> tags
    epitope_seq = re.sub(r'\[([A-Z])\]', r'<epi>\1</epi>', antigen_sequence)
    formatted_str = f"Antigen: {epitope_seq}<|im_end|>\nAntibody:"
    return formatted_str
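For example, applying this formatting to a bracket-annotated fragment (the sequence is the illustrative one from above, not a real antigen):

```python
import re


def format_prompt(antigen_sequence):
    # Same transformation as above: [X] -> <epi>X</epi>
    epitope_seq = re.sub(r'\[([A-Z])\]', r'<epi>\1</epi>', antigen_sequence)
    return f"Antigen: {epitope_seq}<|im_end|>\nAntibody:"


print(format_prompt("CSFS[S][F][V]L[N]WY"))
# Antigen: CSFS<epi>S</epi><epi>F</epi><epi>V</epi>L<epi>N</epi>WY<|im_end|>
# Antibody:
```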
- Generate an Antibody Sequence
prompt = format_prompt(antigen)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.cuda() for k, v in inputs.items()}
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=1000,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
        use_cache=False,
    )
full_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
antibody_sequence = full_text.split('<|im_end|>')[1].replace('Antibody: ', '')
print(f"Antigen: {antigen}\nAntibody: {antibody_sequence}\n")
This generates a |-delimited output containing the Fv portions of a heavy and a light chain.
Antigen: NPPTFSPALL...
Antibody: QVQLVQSGGG...|DIQMTQSPSS...
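The two chains can be separated by splitting on the delimiter. A minimal sketch; the example string is shaped like the truncated output above and is illustrative, not real model output:

```python
def split_fv(antibody_sequence: str) -> tuple[str, str]:
    """Split the |-delimited Fv output into heavy- and light-chain sequences."""
    heavy, light = antibody_sequence.strip().split('|')
    return heavy, light


# Illustrative |-delimited output (not a real generation)
heavy, light = split_fv("QVQLVQSGGG|DIQMTQSPSS")
print(heavy)  # QVQLVQSGGG
print(light)  # DIQMTQSPSS
```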
Training procedure
This model was trained with supervised fine-tuning (SFT).
Framework versions
- PEFT: 0.17.0
- TRL: 0.19.1
- Transformers: 4.54.0
- PyTorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
Model tree for silicobio/peleke-phi-4
Base model
microsoft/phi-4