Instructions for using UCSC-VLAA/openvision-vit-base-patch16-384 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- OpenCLIP
How to use UCSC-VLAA/openvision-vit-base-patch16-384 with OpenCLIP:
```python
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:UCSC-VLAA/openvision-vit-base-patch16-384')
tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision-vit-base-patch16-384')
```
A fuller zero-shot usage sketch follows after the list below.
- Notebooks
- Google Colab
- Kaggle
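To illustrate what the OpenCLIP snippet above loads, here is a minimal zero-shot classification sketch. The image path `photo.jpg` and the caption strings are placeholder assumptions, not part of the model card; everything else follows standard OpenCLIP usage.

```python
import torch
import open_clip
from PIL import Image

# Load the model, preprocessing transforms, and tokenizer from the Hugging Face Hub.
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(
    'hf-hub:UCSC-VLAA/openvision-vit-base-patch16-384'
)
tokenizer = open_clip.get_tokenizer('hf-hub:UCSC-VLAA/openvision-vit-base-patch16-384')
model.eval()

# Placeholder inputs: swap in your own image and candidate captions.
image = preprocess_val(Image.open('photo.jpg')).unsqueeze(0)  # shape [1, 3, 384, 384]
text = tokenizer(['a photo of a cat', 'a photo of a dog'])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product is a cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # probability that the image matches each caption
```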