Instructions for using deepset/gbert-large with libraries, inference providers, notebooks, and local apps. The options below cover how to get started.
- Libraries
  - Transformers

How to use deepset/gbert-large with Transformers (a usage sketch follows the list below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="deepset/gbert-large")
```

```python
# Load model directly
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("deepset/gbert-large", dtype="auto")
```

- Inference Providers
- Notebooks
  - Google Colab
  - Kaggle
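Once the fill-mask pipeline is created, it can be called directly on German text containing the tokenizer's `[MASK]` token. A minimal usage sketch is below; the example sentence is illustrative, and any masked German sentence works.

```python
from transformers import pipeline

# Fill-mask pipeline for the German BERT large model
pipe = pipeline("fill-mask", model="deepset/gbert-large")

# Illustrative German input with one masked token
predictions = pipe("Die Hauptstadt von Deutschland ist [MASK].")

# Each prediction contains the filled-in sequence, the predicted token, and its score
for p in predictions:
    print(f"{p['sequence']}  (score: {p['score']:.3f})")
```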
Model configuration (config.json):

```json
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "max_position_embeddings": 512,
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "type_vocab_size": 2,
  "vocab_size": 31102
}
```
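To read these values programmatically instead of from the raw file, a minimal sketch using `AutoConfig` (attribute names match the fields shown above):

```python
from transformers import AutoConfig

# Downloads and parses the model's config.json
config = AutoConfig.from_pretrained("deepset/gbert-large")

# A few of the architecture hyperparameters listed above
print(config.hidden_size)          # 1024
print(config.num_hidden_layers)    # 24
print(config.num_attention_heads)  # 16
print(config.vocab_size)           # 31102
```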