Instructions for using BikoRiko/Gpt-2.1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use BikoRiko/Gpt-2.1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="BikoRiko/Gpt-2.1")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BikoRiko/Gpt-2.1")
model = AutoModelForCausalLM.from_pretrained("BikoRiko/Gpt-2.1")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use BikoRiko/Gpt-2.1 with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "BikoRiko/Gpt-2.1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BikoRiko/Gpt-2.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker:
```shell
docker model run hf.co/BikoRiko/Gpt-2.1
```
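Because vLLM exposes an OpenAI-compatible API, the curl call above can also be made from Python. Here is a minimal sketch using only the standard library; it assumes a vLLM server is already running on localhost:8000, and the `complete` helper and its default URL are illustrative, not part of vLLM itself:

```python
import json
import urllib.request

# The same OpenAI-compatible completion request the curl example sends.
payload = {
    "model": "BikoRiko/Gpt-2.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(url="http://localhost:8000/v1/completions"):
    """POST the payload to a running vLLM server and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses carry the generation in choices[0].text
    return body["choices"][0]["text"]
```

Calling `complete()` requires the server started in the previous step; the same payload shape works against the SGLang server below, changing only the port.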
- SGLang
How to use BikoRiko/Gpt-2.1 with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "BikoRiko/Gpt-2.1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BikoRiko/Gpt-2.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "BikoRiko/Gpt-2.1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BikoRiko/Gpt-2.1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use BikoRiko/Gpt-2.1 with Docker Model Runner:
```shell
docker model run hf.co/BikoRiko/Gpt-2.1
```
This model is fine-tuned from GPT-2 and trained on additional data; it should be roughly comparable to the GPT-2 Large model.

Development is ongoing: I am the sole developer and am continuously releasing new, more advanced versions, each with better results. Eventually a version should be able to generate images.

Gpt-2.2 is coming soon. I am also working on other models, including a Qwen mini model that should be better than this Gpt-2.1 model: a mini coder Qwen model that will actually be able to chat and has coding capabilities.