# RAME-VL
This repository hosts the released checkpoint for RAME-VL.

The checkpoint files expected by the accompanying codebase are:

- `config.yaml`
- `model.pt`
## Clone This Repository
To clone the full repository, including the large checkpoint files:

```shell
brew install git-xet
git xet install
git clone https://huggingface.co/Ritesh-hf/RAME-VL
```
To clone without downloading the checkpoint weights:

```shell
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/Ritesh-hf/RAME-VL
```
## Download Only the Checkpoint Files
If you only need the files required for evaluation:

```shell
huggingface-cli download Ritesh-hf/RAME-VL \
  config.yaml model.pt \
  --local-dir checkpoints/rame-vl
```
After download, the checkpoint directory should look like:

```
checkpoints/rame-vl/
  config.yaml
  model.pt
```
## Using the Checkpoint in the Codebase
The accompanying codebase expects a local checkpoint directory containing `config.yaml` and `model.pt`.
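Before launching evaluation, it can help to verify the checkpoint directory is complete. The sketch below is an illustration only and assumes the codebase reads `config.yaml` with PyYAML and loads `model.pt` via `torch.load`; the real loading logic lives in the codebase itself.

```python
from pathlib import Path

# Files this model card says the codebase expects.
EXPECTED = ("config.yaml", "model.pt")

def checkpoint_paths(ckpt_dir="checkpoints/rame-vl"):
    """Return paths to the expected checkpoint files, raising if any is missing."""
    paths = {name: Path(ckpt_dir) / name for name in EXPECTED}
    missing = [str(p) for p in paths.values() if not p.is_file()]
    if missing:
        raise FileNotFoundError(f"Missing checkpoint files: {missing}")
    return paths

def load_checkpoint(ckpt_dir="checkpoints/rame-vl"):
    """Hypothetical loader: parse the config and read the weights on CPU."""
    # torch and yaml are imported lazily so the path check above
    # still works in environments where they are not installed.
    import torch
    import yaml
    paths = checkpoint_paths(ckpt_dir)
    with open(paths["config.yaml"]) as f:
        config = yaml.safe_load(f)
    state = torch.load(paths["model.pt"], map_location="cpu")
    return config, state
```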
Example evaluation command:

```shell
torchrun --standalone --nnodes=1 --nproc_per_node=1 \
  launch_scripts/eval_downstream.py \
  checkpoints/rame-vl \
  test-low-res \
  --device_batch_size 1 \
  --save_dir outputs/test_low_res \
  --overwrite
```
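For scripted runs, the same invocation can be assembled in Python and launched with `subprocess`. This is a convenience sketch, not part of the codebase; the defaults mirror the example above and should be adjusted to your setup.

```python
import subprocess

def eval_command(ckpt_dir="checkpoints/rame-vl",
                 task="test-low-res",
                 save_dir="outputs/test_low_res",
                 batch_size=1):
    """Build the torchrun evaluation invocation as an argv list."""
    return [
        "torchrun", "--standalone", "--nnodes=1", "--nproc_per_node=1",
        "launch_scripts/eval_downstream.py",
        ckpt_dir,
        task,
        "--device_batch_size", str(batch_size),
        "--save_dir", save_dir,
        "--overwrite",
    ]

# To actually run it:
# subprocess.run(eval_command(), check=True)
```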
## Notes
- This repository stores the released model checkpoint only.
- Datasets are not bundled in this repository.
- Some evaluation tasks in the codebase expect local dataset paths to be set up separately.
## Model Tree

RAME-VL is built on the base model Qwen/Qwen2-7B.