Jialiang Kang
JLKang
AI & ML interests
Vision Language Models
Organizations
None yet
ViSpec
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct
  Image-Text-to-Text • 0.4B • Updated • 63
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct
  Image-Text-to-Text • 0.9B • Updated • 127
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf
  Image-Text-to-Text • 0.5B • Updated • 8
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf
  Image-Text-to-Text • 0.7B • Updated • 8
models 5
- JLKang/ViSpec-llava-1.5-7b-hf
  Image-Text-to-Text • 0.5B • Updated • 11
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf
  Image-Text-to-Text • 0.7B • Updated • 8
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf
  Image-Text-to-Text • 0.5B • Updated • 8
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct
  Image-Text-to-Text • 0.9B • Updated • 127
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct
  Image-Text-to-Text • 0.4B • Updated • 63
datasets 0
None public yet