Instructions for using MathBite/llama1b_finetuned_json_creation with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MathBite/llama1b_finetuned_json_creation with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MathBite/llama1b_finetuned_json_creation")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly (AutoModelForCausalLM, since this is a text-generation model)
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("MathBite/llama1b_finetuned_json_creation", dtype="auto")
```
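Since this fine-tune targets structured output, a more representative prompt asks for JSON and parses the reply. This is a minimal sketch; the field names and prompt wording are illustrative assumptions, as the exact prompt format the model was trained on is not documented here:

```python
# Ask for a JSON object and parse the assistant's reply.
import json
from transformers import pipeline

pipe = pipeline("text-generation", model="MathBite/llama1b_finetuned_json_creation")
messages = [
    {
        "role": "user",
        "content": (
            "Generate a JSON object for a Recipe with fields 'name' (string), "
            "'servings' (integer), and 'ingredients' (list of strings)."
        ),
    },
]
output = pipe(messages, max_new_tokens=256)

# For chat input, generated_text holds the transcript; the last turn is the reply.
reply = output[0]["generated_text"][-1]["content"]
recipe = json.loads(reply)  # raises json.JSONDecodeError on invalid JSON
print(recipe)
```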
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MathBite/llama1b_finetuned_json_creation with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MathBite/llama1b_finetuned_json_creation"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MathBite/llama1b_finetuned_json_creation",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
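The same chat-completions call can be made from Python with the official `openai` client pointed at the local vLLM endpoint. A sketch, assuming the default port 8000 and that no API key is enforced (a placeholder key is still required by the client):

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MathBite/llama1b_finetuned_json_creation",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```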
Use Docker

```bash
docker model run hf.co/MathBite/llama1b_finetuned_json_creation
```
- SGLang
How to use MathBite/llama1b_finetuned_json_creation with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MathBite/llama1b_finetuned_json_creation" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MathBite/llama1b_finetuned_json_creation",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
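The same endpoint can also be hit with plain `requests`; this sketch mirrors the curl call above and assumes the server from this section is listening on port 30000:

```python
# POST the same chat-completions request as the curl example above.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "MathBite/llama1b_finetuned_json_creation",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```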
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MathBite/llama1b_finetuned_json_creation" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MathBite/llama1b_finetuned_json_creation",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Unsloth Studio
How to use MathBite/llama1b_finetuned_json_creation with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MathBite/llama1b_finetuned_json_creation to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MathBite/llama1b_finetuned_json_creation to start chatting
```
Using Hugging Face Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for MathBite/llama1b_finetuned_json_creation to start chatting
```
Load model with FastModel
```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="MathBite/llama1b_finetuned_json_creation",
    max_seq_length=2048,
)
```
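Once loaded, generation can follow the standard Hugging Face pattern. A sketch, assuming the tokenizer ships a chat template and that the Unsloth-wrapped model supports `generate` as usual:

```python
# Continuing from the FastModel load above: build a chat prompt and generate.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Generate a JSON object describing a GameIdea."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```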
- Docker Model Runner
How to use MathBite/llama1b_finetuned_json_creation with Docker Model Runner:
```bash
docker model run hf.co/MathBite/llama1b_finetuned_json_creation
```
Llama 3.2 1B JSON Extractor
A fine-tuned version of Llama 3.2 1B Instruct specialized for generating structured JSON outputs with high accuracy and schema compliance.
Model Description
This model has been fine-tuned to excel at generating valid, well-structured JSON objects based on Pydantic model schemas. It transforms natural language prompts into properly formatted JSON responses with remarkable consistency.
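The intended workflow, sketched below, pairs the model with a Pydantic schema: serialize the schema into the prompt, generate, then validate the reply. The exact prompt format the fine-tune was trained on is not documented here, so the wording is an assumption:

```python
# Prompt the fine-tune with a Pydantic JSON schema and validate its reply.
from pydantic import BaseModel
from transformers import pipeline

class BookReview(BaseModel):
    title: str
    author: str
    rating: int
    summary: str

pipe = pipeline("text-generation", model="MathBite/llama1b_finetuned_json_creation")
prompt = (
    f"Generate a JSON object matching this schema: {BookReview.model_json_schema()}\n"
    "Write a review of a classic science-fiction novel."
)
result = pipe([{"role": "user", "content": prompt}], max_new_tokens=256)
raw = result[0]["generated_text"][-1]["content"]

# Raises pydantic.ValidationError if the output is invalid or off-schema.
review = BookReview.model_validate_json(raw)
print(review)
```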
Performance
Dramatic Improvement in JSON Generation:
- JSON Validity Rate: 20% → 92% (a gain of more than 70 percentage points)
- Schema Compliance: Near-perfect adherence to small- to average-sized Pydantic model structures
- Generalization: Successfully handles completely new, unseen Pydantic model classes
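A validity rate like the one reported above can be measured by attempting to parse each output. This is an illustrative sketch only; the card does not publish its actual evaluation harness:

```python
# Compute the fraction of model outputs that parse as valid JSON.
import json

def json_validity_rate(outputs: list[str]) -> float:
    valid = 0
    for text in outputs:
        try:
            json.loads(text)
            valid += 1
        except json.JSONDecodeError:
            pass
    return valid / len(outputs) if outputs else 0.0

# Example: two valid outputs out of three -> about 0.67
print(json_validity_rate(['{"a": 1}', "not json", "[1, 2, 3]"]))
```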
Training Details
- Base Model: meta-llama/Llama-3.2-1B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation) with Unsloth
- Training Data: Synthetic dataset with 15+ diverse Pydantic model types
- Training Epochs: 15
- Batch Size: 16 (with gradient accumulation)
- Learning Rate: 1e-4
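These hyperparameters suggest a recipe along the following lines. This is a sketch only: the dataset contents, LoRA rank, and target modules below are assumptions not stated on the card, and trl/Unsloth APIs vary by version:

```python
# Hypothetical reconstruction of the LoRA fine-tune described above.
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
)
# Attach LoRA adapters; rank and target modules are assumed, not from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Stand-in for the synthetic Pydantic dataset (thousands of examples in reality).
train_dataset = Dataset.from_list([
    {"text": "Generate a JSON object for a Recipe... {\"name\": \"Pasta\", ...}"},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,  # 4 x 4 = effective batch size 16
        num_train_epochs=15,
        learning_rate=1e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```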
Supported Model Types
The model can generate JSON for 15+ different object types including:
- Educational: Course, Resume, Events
- Entertainment: FilmIdea, BookReview, GameIdea
- Business: TShirtOrder, Recipe, House
- Characters & Gaming: FictionalCharacter, GameArtifact
- Travel: Itinerary
- Science: SollarSystem, TextSummary
- And many more...
Key Features
- High JSON Validity: 92% success rate in generating valid JSON
- Schema Compliance: Follows Pydantic model structures precisely
- Strong Generalization: Works with new, unseen model classes
- Consistent Output: Reliable structured data generation
- Lightweight: Only 1B parameters for efficient deployment
Training Data
The model was fine-tuned on a synthetic dataset containing thousands of examples across diverse domains:
- Character creation and game development
- Business and e-commerce objects
- Educational and professional content
- Entertainment and media descriptions
- Scientific and technical data structures
Links
- GitHub Repository: LLM_FineTuning_4JsonCreation
- Base Model: meta-llama/Llama-3.2-1B-Instruct
License
This model is released under the Apache 2.0 license.
Acknowledgments
- Meta for the base Llama 3.2 model
- Unsloth for the efficient fine-tuning framework
- Hugging Face for model hosting and ecosystem