Instructions for using qubing/text2sql_retrieval with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use qubing/text2sql_retrieval with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="qubing/text2sql_retrieval")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("qubing/text2sql_retrieval")
model = AutoModelForSequenceClassification.from_pretrained("qubing/text2sql_retrieval")
```
- Notebooks
- Google Colab
- Kaggle
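Since the model loads as a sequence-classification head, one plausible use is scoring question–schema pairs and ranking candidate schemas for retrieval. The sketch below shows that pattern with the pipeline decoupled behind a generic `scorer` callable; the `rank_schemas` helper and the `" [SEP] "` pair format are assumptions for illustration, not documented behavior of this model.

```python
# Hypothetical retrieval sketch: rank candidate schemas for a question
# using any text-classification scorer (e.g. the transformers pipeline above).
from typing import Callable, Iterable, List


def rank_schemas(question: str,
                 schemas: Iterable[str],
                 scorer: Callable[[str], list]) -> List[str]:
    """Score each (question, schema) pair and return schemas sorted by score.

    `scorer` takes a string and returns [{"label": ..., "score": float}],
    matching the output shape of a transformers text-classification pipeline:
        scorer = pipeline("text-classification", model="qubing/text2sql_retrieval")
    The " [SEP] " pairing format is an assumption for this sketch.
    """
    scored = []
    for schema in schemas:
        result = scorer(f"{question} [SEP] {schema}")[0]
        scored.append((result["score"], schema))
    return [schema for _, schema in sorted(scored, reverse=True)]


# Stub scorer so the sketch runs without downloading the model:
# it just scores longer inputs higher.
def stub_scorer(text: str) -> list:
    return [{"label": "relevant", "score": float(len(text))}]


ranked = rank_schemas(
    "how many users signed up last week",
    ["CREATE TABLE t(a INT)", "CREATE TABLE users(id INT, signup_date DATE)"],
    stub_scorer,
)
```

Swapping `stub_scorer` for the real pipeline keeps the ranking logic unchanged; only the scoring callable differs.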