Instructions for using mlx-community/CodeLlama-7b-Python-mlx with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use mlx-community/CodeLlama-7b-Python-mlx with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# If on a CUDA device, also: pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-mlx")

prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
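Because CodeLlama-7b-Python is a code-completion model, prompts that look like the start of a Python file generally produce better results than story-style text. A minimal sketch along those lines, using the same `load` and `generate` calls as above; the prompt and the `max_tokens` value are illustrative choices, not part of the official snippet:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-mlx")

# Prompt the model with the start of a function and let it complete
# the body; max_tokens caps the length of the completion.
prompt = '''def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number."""
'''
completion = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(prompt + completion)
```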
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use mlx-community/CodeLlama-7b-Python-mlx with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/CodeLlama-7b-Python-mlx" --prompt "Once upon a time"
```
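For a programmatic equivalent of the CLI that streams tokens as they are produced, mlx-lm also exposes `stream_generate`. A minimal sketch, assuming a recent mlx-lm release in which the iterator yields response objects with a `.text` field (older versions yielded plain strings):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/CodeLlama-7b-Python-mlx")

# Print the completion token by token instead of waiting for the
# full generation to finish.
for response in stream_generate(
    model, tokenizer, prompt="Once upon a time", max_tokens=256
):
    print(response.text, end="", flush=True)
print()
```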
- Xet hash: 9063cc0e1c1983d125f052350dceb4913e6f66a5c7a8d9ec82177c1c306c99f4
- Size of remote file: 13.5 GB
- SHA256: 936ea09ef86a484954c0f2dc1447ec89d446c5097a4c65986f7432611e1c701d
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.
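The underlying idea is content-defined chunking: boundaries are derived from the bytes themselves via a rolling hash, so an edit early in a file only changes the chunks it touches instead of shifting every later boundary, and unchanged chunks deduplicate by their digest. The toy sketch below illustrates that principle only; Xet's actual chunker, hash function, and parameters differ, and every name and constant here is hypothetical:

```python
import hashlib
import os

# Toy content-defined chunker (NOT Xet's actual algorithm; all
# parameters are made up for illustration). A chunk boundary is
# declared whenever a cheap rolling hash matches a bit pattern,
# so boundaries follow the content rather than absolute offsets.
MIN_CHUNK = 64
MASK = (1 << 10) - 1  # controls the average chunk size

def chunk(data: bytes) -> list[bytes]:
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF  # old bytes shift out
        if i - start + 1 >= MIN_CHUNK and (h & MASK) == MASK:
            chunks.append(data[start : i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

data = os.urandom(1 << 16)
chunks = chunk(data)
print(f"{len(chunks)} chunks")
for c in chunks[:3]:
    # Deduplication would key on each chunk's digest.
    print(len(c), hashlib.sha256(c).hexdigest()[:12])
```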