How to use ai-forever/ruSciBERT with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="ai-forever/ruSciBERT")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruSciBERT")
model = AutoModelForMaskedLM.from_pretrained("ai-forever/ruSciBERT")
```
ruSciBERT
The model was trained by the Sber AI team and the MLSA Lab of the Institute for AI, MSU. If you use this model in your project, please tell us about it (nikgerasimenko@gmail.com).
Presentation at the AI Journey 2022
- Task: mask filling
- Type: encoder
- Tokenizer: BPE
- Dict size: 50,265
- Num parameters: 123 M
- Training data volume: 6.5 GB
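As a sanity check on the figures above, the stated parameter count is roughly what a BERT-base-style encoder with a 50,265-token vocabulary works out to. The hidden size, layer count, feed-forward size, and maximum sequence length below are assumptions based on a typical BERT-base configuration, not values taken from this model card:

```python
# Back-of-the-envelope parameter count for a BERT-base-style encoder
# with ruSciBERT's stated vocabulary size. Architecture dimensions
# are assumed (standard BERT-base), not confirmed by the model card.

VOCAB = 50265   # dict size from the card
HIDDEN = 768    # assumed hidden size
LAYERS = 12     # assumed number of encoder layers
FFN = 3072      # assumed feed-forward size
MAX_POS = 512   # assumed maximum position embeddings

# Embedding tables (token, position, token-type) plus one LayerNorm
embeddings = (VOCAB + MAX_POS + 2) * HIDDEN + 2 * HIDDEN

# One encoder layer: Q/K/V/output projections, FFN, two LayerNorms
attention = 4 * (HIDDEN * HIDDEN + HIDDEN)
ffn = (HIDDEN * FFN + FFN) + (FFN * HIDDEN + HIDDEN)
layer = attention + ffn + 2 * (2 * HIDDEN)

total = embeddings + LAYERS * layer
print(f"{total / 1e6:.1f} M parameters")  # ~124 M, close to the stated 123 M
```

The estimate lands near 124 M; the small gap from the stated 123 M plausibly comes from rounding or minor configuration differences.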