Instructions to use bartowski/llama-3-sqlcoder-8b-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use bartowski/llama-3-sqlcoder-8b-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="bartowski/llama-3-sqlcoder-8b-GGUF",
    filename="llama-3-sqlcoder-8b-IQ1_M.gguf",
)
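# Since this is a text-to-SQL fine-tune, a representative raw-completion call might
# look like the sketch below. The prompt template mirrors the one published in the
# defog/llama-3-sqlcoder-8b base repo; the question and DDL are illustrative
# placeholders. The generic chat call underneath works as well.
sql_prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
    "Generate a valid SQL query to answer this question: How many artists are there?\n"
    "DDL statements: CREATE TABLE artists (ArtistId INTEGER PRIMARY KEY, Name TEXT);"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)
output = llm.create_completion(sql_prompt, max_tokens=128, temperature=0.0)
print(output["choices"][0]["text"])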
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use bartowski/llama-3-sqlcoder-8b-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
Use Docker
docker model run hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
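Whichever install route you pick, llama-server exposes an OpenAI-compatible API once it is running. A minimal sketch, assuming the server's default port 8080 (adjust if you passed --port):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "messages": [
      { "role": "user", "content": "Generate a SQL query that counts the rows in a table named artists." }
    ]
  }'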
- LM Studio
- Jan
- vLLM
How to use bartowski/llama-3-sqlcoder-8b-GGUF with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bartowski/llama-3-sqlcoder-8b-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bartowski/llama-3-sqlcoder-8b-GGUF",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
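You can also call the same vLLM server with the official openai Python client. A minimal sketch, assuming the default port 8000; the api_key value is a placeholder, since vLLM only checks it if you start the server with --api-key:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="bartowski/llama-3-sqlcoder-8b-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)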
- Ollama
How to use bartowski/llama-3-sqlcoder-8b-GGUF with Ollama:
ollama run hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
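Once the model is pulled, Ollama also answers plain HTTP requests. A minimal sketch, assuming Ollama's default port 11434:

curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M",
  "prompt": "Generate a SQL query that counts the rows in a table named artists.",
  "stream": false
}'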
- Unsloth Studio
How to use bartowski/llama-3-sqlcoder-8b-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bartowski/llama-3-sqlcoder-8b-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for bartowski/llama-3-sqlcoder-8b-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for bartowski/llama-3-sqlcoder-8b-GGUF to start chatting
- Docker Model Runner
How to use bartowski/llama-3-sqlcoder-8b-GGUF with Docker Model Runner:
docker model run hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
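The runner can also be reached over an OpenAI-compatible API. The sketch below assumes you have enabled host TCP access in Docker Desktop and that 12434 is the port you chose; verify both against the Docker Model Runner docs for your version:

# Enable host TCP access (assumed port):
docker desktop enable model-runner --tcp 12434

# Then call the OpenAI-compatible endpoint:
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M",
    "messages": [{ "role": "user", "content": "What is the capital of France?" }]
  }'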
- Lemonade
How to use bartowski/llama-3-sqlcoder-8b-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull bartowski/llama-3-sqlcoder-8b-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.llama-3-sqlcoder-8b-GGUF-Q4_K_M
List all available models
lemonade list
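Lemonade's server also speaks the OpenAI API. A minimal sketch; the port (8000) and the /api/v1 prefix are assumptions based on Lemonade's defaults, so check your install's docs if the call fails:

curl http://localhost:8000/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "model": "user.llama-3-sqlcoder-8b-GGUF-Q4_K_M",
    "messages": [{ "role": "user", "content": "What is the capital of France?" }]
  }'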
Getting gibberish responses
Hello,
I've just started using Llama 3 models in my LangChain code, and so far I'm getting gibberish responses. I can run Llama 2 models via llama.cpp (Python) without trouble, but when I swap in Llama 3 I get garbage output. I followed the prompting instructions and I appear to be up to date on the llama.cpp version (I reinstalled llama-cpp-python to be sure). I'm using GGUF models only, and I was previously using a sqlcoder-7b-2 model without issue.
Simply swapping the GGUF file to this one is breaking my code. Any thoughts?
EDIT - here's a sample output from running llama-3-sqlcoder-8b-Q6_K.gguf:
(screenshot of garbled output; not captured in this transcript)
Hmm, no thoughts off the top of my head, that's weird. I may be able to try the full model to see if I get the same thing, or if it's a GGUF issue.
Hey @bartowski - here's the code I used to compare the llama2 and llama3 sqlcoder models; I get gibberish for the llama3 version. I copied the llama3 prompt from the base repo (https://huggingface.co/defog/llama-3-sqlcoder-8b), but there must still be something wrong with what I'm doing. Below is an example that compares the output of sqlcoder-7b-2.Q6_K and llama-3-sqlcoder-8b-Q6_K. It assumes you have the Chinook DB SQLite file in your environment and have set the paths to it and to your GGUF files (I'm calling the Llama 2 version "SQL Coder 2" and the Llama 3 version "SQL Coder 3"):
import json
from langchain_community.llms import LlamaCpp
from langchain_community.utilities import SQLDatabase
from langchain_core.messages import AIMessage, SystemMessage
from langchain_core.prompts import BasePromptTemplate, PromptTemplate, SystemMessagePromptTemplate
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder
)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
############################################## Init SQL DB ###################################################
SQLITE_DB = 'chinook.db'
db_string = f"sqlite:///{SQLITE_DB}"
db = SQLDatabase.from_uri(db_string, sample_rows_in_table_info=0)
def get_schema(_):
    return db.get_table_info()

def run_query(query):
    return db.run(query)
############################################## Init SQL Chains #################################################
N_GPU_LAYERS = 50 # Metal set to 1 is enough.
N_BATCH = 1028 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
SQL_MODEL2_PATH = "models/sqlcoder-7b-2.Q6_K.gguf"
SQL_MODEL3_PATH = "models/llama-3-sqlcoder-8b-Q6_K.gguf"
### SQL Coder 2
sql_coder2_llm = LlamaCpp(
    model_path=SQL_MODEL2_PATH,
    n_gpu_layers=N_GPU_LAYERS,
    n_batch=N_BATCH,
    n_ctx=2048,
    f16_kv=True,  # MUST set to True, otherwise you will run into problems after a couple of calls
    verbose=False,
    streaming=False,
    model_kwargs={'do_sample': False, 'num_beams': 5}
)
sql_coder2_prompt = '''### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]
### Instructions
- If you cannot answer the question with the available database schema, return 'I do not know'
### Database Schema
The query will run on a database with the following schema:
{schema}
### Answer
Given the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]
[SQL]
'''
### SQL Coder 3
sql_coder3_llm = LlamaCpp(
    model_path=SQL_MODEL3_PATH,
    n_gpu_layers=N_GPU_LAYERS,
    n_batch=N_BATCH,
    n_ctx=2048,
    f16_kv=True,  # MUST set to True, otherwise you will run into problems after a couple of calls
    verbose=False,
    streaming=False,
    model_kwargs={'do_sample': False, 'num_beams': 5}
)
sql_coder3_prompt = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Generate a valid SQL query to answer this question: {question}
DDL statements: {schema}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
The following SQL query best answers the question {question}
```sql
"""
def get_sql_coder_chain(llm, prompt):
    class InputType(BaseModel):
        question: str

    sql_chain = (
        RunnablePassthrough.assign(schema=get_schema).with_types(input_type=InputType)
        | ChatPromptTemplate.from_messages([("human", prompt)])
        | llm.bind(stop=["\nSQLResult:", ";", "\nAnswer", "\nHuman", "\nResults"])
        | StrOutputParser()
    )
    return sql_chain
####################################### Test Chains ####################################################
sql_coder2_chain = get_sql_coder_chain(sql_coder2_llm, sql_coder2_prompt)
sql_coder3_chain = get_sql_coder_chain(sql_coder3_llm, sql_coder3_prompt)
question = "How many artists are there?"
print("SQL Coder 2 response:\n------------------------------------")
print(sql_coder2_chain.invoke({'question': question}))
print("\n\n")
print("SQL Coder 3 response:\n------------------------------------")
print(sql_coder3_chain.invoke({'question': question}))
My output: (screenshot of the gibberish response; not captured in this transcript)
Do you see anything wrong with the prompt, or anything else? Does this give you good responses on your end?
Thanks again!
Jack
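A quick way to rule LangChain in or out is to drive the GGUF directly with llama-cpp-python using the same prompt. A minimal sketch (the model path mirrors the one above; the DDL is an illustrative stand-in for the Chinook schema):

from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-sqlcoder-8b-Q6_K.gguf",
    n_ctx=2048,
    n_gpu_layers=50,
    verbose=False,
)

prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
    "Generate a valid SQL query to answer this question: How many artists are there?\n"
    "DDL statements: CREATE TABLE artists (ArtistId INTEGER PRIMARY KEY, Name TEXT);"
    "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)
out = llm.create_completion(prompt, max_tokens=128, temperature=0.0, stop=["<|eot_id|>", ";"])
print(out["choices"][0]["text"])

If this also produces gibberish, the GGUF or the llama.cpp build is the likely culprit; if it behaves, the problem is somewhere in the chain's prompt handling.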

