Kernels

This is the repository card of kernels-community/quantization-eetq, pushed to the Hugging Face Hub. It was built to be used with the kernels library. This card was automatically generated.

How to use

# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

kernel_module = get_kernel("kernels-community/quantization-eetq")
w8_a16_gemm = kernel_module.w8_a16_gemm

w8_a16_gemm(...)

Available functions

  • w8_a16_gemm
  • w8_a16_gemm_
  • preprocess_weights
  • quant_weights

Benchmarks

No benchmark available yet.
