Instructions to use tiny-random/phi-moe with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use tiny-random/phi-moe with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="tiny-random/phi-moe", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiny-random/phi-moe", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("tiny-random/phi-moe", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
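Note that the weights are random (this checkpoint exists for debugging, see below), so generations are gibberish; the model is still handy for checking prompt formatting. A minimal sketch that renders the chat template without generating anything:

```python
# Sketch: inspect the rendered chat prompt without running the model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tiny-random/phi-moe", trust_remote_code=True)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    tokenize=False,  # return the formatted string instead of token ids
)
print(prompt)
```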
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use tiny-random/phi-moe with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tiny-random/phi-moe"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/phi-moe",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker:
```sh
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "tiny-random/phi-moe"
```
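Whether started via pip or Docker, the server exposes an OpenAI-compatible API, so you can also call it from Python with the `openai` client instead of curl (a minimal sketch; assumes the server above is running on localhost:8000):

```python
# Sketch: query the local vLLM server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM does not require a real key by default
response = client.chat.completions.create(
    model="tiny-random/phi-moe",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```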
- SGLang
How to use tiny-random/phi-moe with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "tiny-random/phi-moe" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/phi-moe",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images:
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "tiny-random/phi-moe" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tiny-random/phi-moe",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
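SGLang can also run the model in-process without an HTTP server, via its offline engine API (a sketch based on SGLang's documented `Engine` interface; argument names may vary by version):

```python
# Sketch: offline generation through SGLang's engine API (no HTTP server needed).
import sglang as sgl

llm = sgl.Engine(model_path="tiny-random/phi-moe")
outputs = llm.generate(
    ["What is the capital of France?"],
    {"temperature": 0.8, "max_new_tokens": 32},  # sampling params as a dict
)
print(outputs[0]["text"])
llm.shutdown()
```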
- Docker Model Runner

How to use tiny-random/phi-moe with Docker Model Runner:
```sh
docker model run hf.co/tiny-random/phi-moe
```
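Docker Model Runner also exposes an OpenAI-compatible endpoint. Assuming TCP host access is enabled on Docker's documented default port 12434, a request looks roughly like the sketch below (verify the port and path against your Docker version):

```sh
# Sketch: call Docker Model Runner's OpenAI-compatible API.
# Port 12434 and the /engines/v1 prefix are assumptions from Docker's docs; check your setup.
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/tiny-random/phi-moe",
    "messages": [{"role": "user", "content": "What is the capital of France?"}]
  }'
```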
Model card metadata:

```yaml
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
base_model:
- microsoft/Phi-tiny-MoE-instruct
```
This tiny model is for debugging. It is randomly initialized, with its config adapted from [microsoft/Phi-tiny-MoE-instruct](https://huggingface.co/microsoft/Phi-tiny-MoE-instruct).
### Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "tiny-random/phi-moe"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, trust_remote_code=True)
print(pipe('Write an article about Artificial Intelligence.'))
```
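Since the checkpoint targets debugging rather than output quality, a useful first check is simply how small it is (a sketch; roughly 2.5M parameters given the config used in the creation script below, most of them in the 32064 x 64 embedding):

```python
# Sketch: confirm the checkpoint is tiny before wiring it into tests.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/phi-moe", trust_remote_code=True)
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # roughly 2.5M; dominated by the embedding table
```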
### Code to create this repo:

```python
import json
from pathlib import Path

import torch
import accelerate
from huggingface_hub import file_exists, hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)

source_model_id = "microsoft/Phi-tiny-MoE-instruct"
save_folder = "/tmp/tiny-random/phi-moe"

# Reuse the source tokenizer unchanged.
processor = AutoTokenizer.from_pretrained(source_model_id)
processor.save_pretrained(save_folder)

with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
# Point auto_map at the source repo so the custom modeling code is loaded from there.
for k, v in config_json['auto_map'].items():
    config_json['auto_map'][k] = f'{source_model_id}--{v}'
# Shrink the architecture to a tiny, debug-friendly size.
config_json['head_dim'] = 32
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 128
config_json['num_attention_heads'] = 2
config_json['num_experts_per_tok'] = 2
config_json['num_hidden_layers'] = 2
config_json['num_key_value_heads'] = 1
config_json['num_local_experts'] = 8
config_json['tie_word_embeddings'] = True
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)
automap = config_json['auto_map']
torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
if file_exists(filename="generation_config.json", repo_id=source_model_id, repo_type='model'):
    model.generation_config = GenerationConfig.from_pretrained(
        source_model_id, trust_remote_code=True,
    )

set_seed(42)
model = model.cpu()  # cpu is more stable for random initialization across machines
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.2)
        print(name, p.shape)
model.save_pretrained(save_folder)
print(model)

# save_pretrained rewrites auto_map to point at local files; restore the remapped entries.
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
    config_json = json.load(f)
config_json['auto_map'] = automap
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)
# Drop the copied modeling .py files; they are fetched from the source repo via auto_map.
for python_file in Path(save_folder).glob('*.py'):
    python_file.unlink()
```
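After running the script, a quick way to confirm the new checkpoint is loadable is a single forward pass (a sketch; assumes the script above has written `/tmp/tiny-random/phi-moe`):

```python
# Sketch: smoke-test the freshly created checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

save_folder = "/tmp/tiny-random/phi-moe"
tokenizer = AutoTokenizer.from_pretrained(save_folder)
model = AutoModelForCausalLM.from_pretrained(save_folder, trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, sequence_length, 32064)
```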
### Printing the model:

```text
PhiMoEForCausalLM(
  (model): PhiMoEModel(
    (embed_tokens): Embedding(32064, 64)
    (layers): ModuleList(
      (0-1): 2 x PhiMoEDecoderLayer(
        (self_attn): PhiMoESdpaAttention(
          (q_proj): Linear(in_features=64, out_features=64, bias=True)
          (k_proj): Linear(in_features=64, out_features=32, bias=True)
          (v_proj): Linear(in_features=64, out_features=32, bias=True)
          (o_proj): Linear(in_features=64, out_features=64, bias=True)
          (rotary_emb): PhiMoERotaryEmbedding()
        )
        (block_sparse_moe): PhiMoESparseMoeBlock(
          (gate): Linear(in_features=64, out_features=8, bias=False)
          (experts): ModuleList(
            (0-7): 8 x PhiMoEBlockSparseTop2MLP(
              (w1): Linear(in_features=64, out_features=128, bias=False)
              (w2): Linear(in_features=128, out_features=64, bias=False)
              (w3): Linear(in_features=64, out_features=128, bias=False)
              (act_fn): SiLU()
            )
          )
        )
        (input_layernorm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
        (post_attention_layernorm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
      )
    )
    (norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
  )
  (lm_head): Linear(in_features=64, out_features=32064, bias=True)
)
```
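The printed shapes follow directly from the shrunken config (a sketch of the arithmetic; names match the config keys set above). In particular, `k_proj`/`v_proj` output 32 features because there is a single key/value head of dimension 32, i.e. grouped-query attention:

```python
# How the printed Linear shapes follow from the config values set above.
hidden_size = 64
head_dim = 32
num_attention_heads = 2
num_key_value_heads = 1

q_out = num_attention_heads * head_dim   # 64 -> q_proj/o_proj are 64 x 64
kv_out = num_key_value_heads * head_dim  # 32 -> k_proj/v_proj are 64 x 32
print(q_out, kv_out)  # 64 32
```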