Instructions to use BAAI/llm-embedder with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use BAAI/llm-embedder with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="BAAI/llm-embedder")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("BAAI/llm-embedder")
model = AutoModel.from_pretrained("BAAI/llm-embedder")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
This vs bge-large-en-v1.5?
#5
by chongcy - opened
Hello,
I'm currently using bge-large-en-v1.5 to embed documents into my vector database and to embed queries for retrieval.
Is there any performance/quality difference if I switch to this model, or can I just keep using my current one?
Also, is the embedding dimension still 1024, or is it 768?
llm-embedder is the same size as bge-base-en-v1.5, whose embedding dimension is 768. llm-embedder is fine-tuned from the bge-base model and improves retrieval quality for tasks such as example retrieval, tool retrieval, and conversation retrieval. You can select the model based on your scenario.
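To check the 768-dimensional output in practice, the model can be called directly with `AutoModel` as in the snippet above. The sketch below assumes CLS-token pooling with L2 normalization, which is typical for BGE-family models; llm-embedder also expects task-specific instruction prefixes on queries, so consult the model card before using this verbatim. The `embed` helper name is ours, not part of any library.

```python
import torch
from transformers import AutoModel, AutoTokenizer


def cls_pool(last_hidden_state: torch.Tensor) -> torch.Tensor:
    """Take the [CLS] (first) token embedding and L2-normalize it.

    Assumption: CLS pooling, as commonly used with BGE-family encoders.
    """
    emb = last_hidden_state[:, 0]  # (batch, hidden)
    return torch.nn.functional.normalize(emb, p=2, dim=1)


def embed(texts: list[str], model_name: str = "BAAI/llm-embedder") -> torch.Tensor:
    """Embed a batch of texts; returns a (len(texts), 768) tensor for llm-embedder."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return cls_pool(hidden)
```

Because the vectors are L2-normalized, a dot product between a query embedding and a document embedding gives cosine similarity directly, which is how most vector databases would consume them.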