Instructions for using mlx-community/CodeLlama-7b-Python-mlx with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use mlx-community/CodeLlama-7b-Python-mlx with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-mlx")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
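Because CodeLlama-7b-Python is tuned for Python code completion, a code-shaped prompt typically works better than free-form text. A minimal sketch along those lines (the prompt and the max_tokens value are illustrative, not from the model card):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-mlx")

# Give the model a signature and docstring, then let it complete the body
prompt = '''def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number."""
'''
completion = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(prompt + completion)
```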
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use mlx-community/CodeLlama-7b-Python-mlx with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/CodeLlama-7b-Python-mlx" --prompt "Once upon a time"
```
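MLX LM also bundles an OpenAI-compatible HTTP server that local apps can point at. A sketch, assuming a recent mlx-lm version; the port shown is the server's default, and /v1/completions is the plain text-completion route, which suits a base model like this one:

```sh
# Start an OpenAI-compatible server for this model
mlx_lm.server --model "mlx-community/CodeLlama-7b-Python-mlx" --port 8080

# From another shell, request a completion
curl http://localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def quicksort(arr):", "max_tokens": 128}'
```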