# llcuda Models
Optimized GGUF models for llcuda: zero-config, CUDA-accelerated LLM inference.
## Models

### google_gemma-3-1b-it-Q4_K_M.gguf
- Model: Google Gemma 3 1B Instruct
- Quantization: Q4_K_M (4-bit)
- Size: 769 MB
- Use case: General-purpose chat, Q&A, code assistance
- Recommended for: 1GB+ VRAM GPUs
Performance:
- Tesla T4 (Colab/Kaggle): ~15 tok/s
- Tesla P100 (Colab): ~18 tok/s
- GeForce 940M (1GB): ~15 tok/s
- RTX 30xx/40xx: ~25+ tok/s
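Throughput varies with prompt length and sampling settings. To get a rough number on your own GPU, you can time a generation yourself. A minimal sketch using only the llcuda calls shown under Usage below; note the token count is a crude whitespace-split word count, not a real tokenizer count:

```python
import time

import llcuda

# Rough throughput check: time one generation and estimate tok/s.
# NOTE: the token count below is a whitespace-split word count, which
# undercounts real tokens; treat the result as a ballpark figure only.
engine = llcuda.InferenceEngine()
engine.load_model("gemma-3-1b-Q4_K_M")

start = time.perf_counter()
result = engine.infer("Explain quantization in two sentences.")
elapsed = time.perf_counter() - start

approx_tokens = len(result.text.split())
print(f"~{approx_tokens / elapsed:.1f} tok/s (word-based approximation)")
```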
## Usage

### With llcuda (Recommended)
```bash
pip install llcuda
```

```python
import llcuda

# Load the quantized Gemma 3 1B model and generate a completion.
engine = llcuda.InferenceEngine()
engine.load_model("gemma-3-1b-Q4_K_M")

result = engine.infer("What is AI?")
print(result.text)
```
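For interactive experimentation, the same two calls can be wrapped in a simple prompt loop. A minimal sketch, assuming `infer` is the generation entry point as in the example above:

```python
import llcuda

# Minimal interactive loop around the two llcuda calls shown above.
engine = llcuda.InferenceEngine()
engine.load_model("gemma-3-1b-Q4_K_M")

while True:
    prompt = input("you> ").strip()
    if prompt.lower() in {"exit", "quit"}:
        break
    result = engine.infer(prompt)
    print("model>", result.text)
```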
### With llama.cpp

```bash
# Download the model
huggingface-cli download waqasm86/llcuda-models google_gemma-3-1b-it-Q4_K_M.gguf --local-dir ./models

# Run with llama.cpp (-ngl 26 offloads 26 layers to the GPU)
./llama-server -m ./models/google_gemma-3-1b-it-Q4_K_M.gguf -ngl 26
```
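llama-server exposes an OpenAI-compatible HTTP API (by default on localhost, port 8080), so once the command above is running you can query the model from Python. A minimal sketch using `requests`; the host, port, and endpoint are llama-server defaults, not llcuda-specific:

```python
import requests

# Query a running llama-server instance through its OpenAI-compatible
# chat endpoint (defaults: localhost, port 8080).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```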
## Supported Platforms

- ✅ Google Colab (T4, P100, V100, A100)
- ✅ Kaggle (Tesla T4)
- ✅ Local GPUs (GeForce, RTX, Tesla)
- ✅ All NVIDIA GPUs with compute capability 5.0+
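To confirm a local GPU clears the compute capability 5.0 floor, you can query it before installing anything. A minimal sketch using PyTorch (on recent drivers, `nvidia-smi --query-gpu=compute_cap --format=csv` gives the same answer from a shell):

```python
import torch

# Check the compute capability of the first visible CUDA device
# against llcuda's stated 5.0 minimum.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    ok = (major, minor) >= (5, 0)
    print(f"{name}: compute capability {major}.{minor} "
          f"-> {'supported' if ok else 'not supported'}")
else:
    print("No CUDA device visible to PyTorch.")
```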
## Links

- PyPI: https://pypi.org/project/llcuda
- GitHub: https://github.com/waqasm86/llcuda
- Documentation: https://waqasm86.github.io
## License
Apache 2.0 - Models are provided as-is for educational and research purposes.
## Credits
- Model: Google Gemma 3 1B
- Quantization: llama.cpp GGUF format
- Package: llcuda by Waqas Muhammad
## With llama-cpp-python (4-bit)

The quantized GGUF file can also be used directly with llama-cpp-python, which downloads it from the Hub:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Fetch the 4-bit GGUF from the Hub and load it.
llm = Llama.from_pretrained(
    repo_id="waqasm86/llcuda-models",
    filename="google_gemma-3-1b-it-Q4_K_M.gguf",
)

# Run a chat completion against the loaded model.
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
```