Instructions for using sohv/nanokimi-mini with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use sohv/nanokimi-mini with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="sohv/nanokimi-mini", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("sohv/nanokimi-mini", trust_remote_code=True, dtype="auto")
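Once the pipeline is constructed, generation is a single call. A minimal sketch, assuming the repo ships a compatible tokenizer; the prompt and sampling settings are illustrative:

from transformers import pipeline

# Build the generation pipeline (the custom architecture requires trust_remote_code)
pipe = pipeline("text-generation", model="sohv/nanokimi-mini", trust_remote_code=True)

# Sample a short continuation; prompt and settings are illustrative
out = pipe("Once upon a time,", max_new_tokens=100, do_sample=True, temperature=0.5)
print(out[0]["generated_text"])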
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use sohv/nanokimi-mini with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "sohv/nanokimi-mini"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "sohv/nanokimi-mini",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
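Because the server exposes the OpenAI completions protocol, any OpenAI-compatible client can call it. A minimal sketch with the openai Python package, assuming the server above is running on localhost:8000; the api_key value is a placeholder, since the local server does not check it:

from openai import OpenAI

# Point the OpenAI client at the local vLLM server; the key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="sohv/nanokimi-mini",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)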
Use Docker
docker model run hf.co/sohv/nanokimi-mini
- SGLang
How to use sohv/nanokimi-mini with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "sohv/nanokimi-mini" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "sohv/nanokimi-mini",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
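The same endpoint can also be called from Python. A minimal sketch using the requests library that mirrors the curl call above, assuming the server is running on localhost:30000:

import requests

# POST to the SGLang server's OpenAI-compatible completions endpoint
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "sohv/nanokimi-mini",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])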
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "sohv/nanokimi-mini" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "sohv/nanokimi-mini",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
- Docker Model Runner
How to use sohv/nanokimi-mini with Docker Model Runner:
docker model run hf.co/sohv/nanokimi-mini
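Run with no arguments, the command opens an interactive chat. A sketch of a one-shot variant, assuming a Docker Model Runner version that accepts an optional prompt argument; the prompt text is illustrative:

docker model run hf.co/sohv/nanokimi-mini "Once upon a time,"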
nanokimi-mini
This repository contains the nanoKimi model pre-trained on the Shakespeare dataset. An upgraded version of nanoKimi trained on OpenWebText will be up on Hugging Face in a few days.
Model Details
- Architecture: 12 layers, 12 heads, 768 embedding dimension
- Training Data: Shakespeare dataset
- Features: Mixture of Experts (8 experts), Latent Attention
- Model Type: Kimi-K2
Files
- pytorch_model.bin - Model weights
- config.json - Model configuration
- src/ - Source code for model architecture
- modeling_kimik2.py - HuggingFace wrapper
Usage
import torch
import json
from huggingface_hub import hf_hub_download
# Download files
config_path = hf_hub_download(repo_id="sohv/nanokimi-mini", filename="config.json")
weights_path = hf_hub_download(repo_id="sohv/nanokimi-mini", filename="pytorch_model.bin")
# Load config and weights
with open(config_path) as f:
config = json.load(f)
weights = torch.load(weights_path, map_location="cpu")
print("Model downloaded successfully!")
License
MIT License
Contact
Raise an issue in the Files and versions tab or reach out to me for any feedback or enquiries.