How to use valine/OpenPirate with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, SplitAttentionForCasualLM

tokenizer = AutoTokenizer.from_pretrained("valine/OpenPirate")
model = SplitAttentionForCasualLM.from_pretrained("valine/OpenPirate")
```
Hey, where can I learn more about how you fine-tune models without causing the behavior you describe in NeuralFlow?