How to use `google/tapas-tiny-finetuned-wtq` with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-wtq")
```
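Once the pipeline is loaded, it can be called with a table and a natural-language query. A minimal sketch follows; the city/population table and the question are invented for illustration. The pipeline expects the table as a dict mapping column names to lists of cell values, with every cell given as a string:

```python
from transformers import pipeline

pipe = pipeline("table-question-answering", model="google/tapas-tiny-finetuned-wtq")

# Hypothetical example table; all cells must be strings.
table = {
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2141000", "8982000", "3645000"],
}

result = pipe(table=table, query="How many people live in Berlin?")
print(result)
```

The result is a dict containing the predicted answer string along with the cell coordinates, the selected cells, and the aggregation operator chosen by the model. Note that this is the *tiny* checkpoint, so predictions can be unreliable; the larger TAPAS variants fine-tuned on WTQ are more accurate.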
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/tapas-tiny-finetuned-wtq")
model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-tiny-finetuned-wtq")
```
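When loading the model directly, inference takes a few more steps than with the pipeline: the TAPAS tokenizer consumes a pandas DataFrame (all cells as strings), and the raw logits are converted back to cell coordinates with the tokenizer's `convert_logits_to_predictions` helper. A sketch, with an invented example table:

```python
import pandas as pd
import torch
from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("google/tapas-tiny-finetuned-wtq")
model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-tiny-finetuned-wtq")

# Hypothetical example table; TAPAS expects every cell to be a string.
table = pd.DataFrame({
    "City": ["Paris", "London", "Berlin"],
    "Population": ["2141000", "8982000", "3645000"],
})
queries = ["How many people live in Berlin?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map logits back to table cells and aggregation operators
# (the WTQ checkpoint predicts an aggregation head as well).
coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)

# Look up the predicted cells in the original table.
answers = [[table.iat[coord] for coord in coordinates] for coordinates in coords]
print(answers, agg_indices)
</imports>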