How to use dslim/bert-base-NER with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="dslim/bert-base-NER")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
```
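The model predicts per-token BIO tags (B-PER, I-PER, B-ORG, I-ORG, B-LOC, I-LOC, B-MISC, I-MISC, plus O). As a minimal sketch of what that tagging scheme means, the `group_bio` helper below (illustrative, not part of transformers) merges tagged tokens into entity spans; `pipeline(..., aggregation_strategy="simple")` performs a similar grouping for you:

```python
# Illustrative helper (not part of transformers): merge BIO-tagged
# tokens into (entity_type, text) spans.
def group_bio(tokens, tags):
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new entity
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type continues the current entity
            current_tokens.append(token)
        else:
            # "O" or an inconsistent I- tag ends the current entity
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

# Tags in the style of dslim/bert-base-NER output
tokens = ["Wolfgang", "lives", "in", "Berlin"]
tags = ["B-PER", "O", "O", "B-LOC"]
print(group_bio(tokens, tags))  # [('PER', 'Wolfgang'), ('LOC', 'Berlin')]
```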
I suggest trying a different model and different data. We could also discuss this as a black-box problem: since BERT itself can reach strong results with only a small amount of data, the remaining angles of attack are the algorithm and the semantic relatedness of the data.