Model Card for Model ID
Model Details
Model Description
How to use the model
import pandas as pd
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer
# Load model
model = AutoModelForSequenceClassification.from_pretrained("lkonle/EMO_Fear_gbert")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("lkonle/EMO_Fear_gbert")
# ensure a padding token is registered (a no-op if the tokenizer already defines "[PAD]")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
# define input text
myinput = ["Paul war sehr sehr glücklich über seinen Welpen.",
"Paul war sehr traurig über sein Frühstück.",
"Paul hatte große Langeweile."]
# tokenize, pad to the longest sequence in the batch, and return PyTorch tensors
inputs = tokenizer(myinput, truncation=True, padding=True, return_tensors="pt")
# predict
logits = model(**inputs).logits
# get the index of the highest-scoring class for each input
result = logits.detach().numpy()
prediction = np.argmax(result, axis=1)
# store result in pandas
output = pd.DataFrame()
output["inputs"] = myinput
output["prediction"] = prediction
print(output)
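The predictions above are raw class indices. To report them as probabilities and human-readable labels, apply a softmax over the logits and look the indices up in `model.config.id2label`. A minimal sketch using a placeholder logits array (the actual values and label names depend on the model and inputs):

```python
import numpy as np

# Placeholder logits of shape (batch_size, num_labels), standing in for
# the `result` array produced by the model above.
logits = np.array([[2.1, -1.3],
                   [-0.4, 1.7],
                   [0.2, 0.1]])

# Softmax converts each row of logits into class probabilities.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
prediction = probs.argmax(axis=1)
confidence = probs.max(axis=1)

# With the loaded model, the indices map to label names via:
# labels = [model.config.id2label[int(i)] for i in prediction]
print(prediction)   # predicted class index per input
print(confidence)   # probability of the predicted class
```

This keeps the post-processing independent of the model call, so the same snippet works for any sequence-classification head.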