A specialized smart contract security auditor built on Qwen2.5-Coder-0.5B-Instruct, fine-tuned using Group Relative Policy Optimization (GRPO) on real-world audit findings from top security firms.
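GRPO, introduced in the DeepSeekMath paper cited below, dispenses with a learned value critic: for each prompt it samples a group of completions, scores each with the reward functions, and uses the group-normalized reward as the advantage. A minimal sketch of that advantage computation (illustrative only, not the training code):

```python
# Group-relative advantage as in GRPO: normalize each completion's reward
# against the mean and std of its sampling group.
def grpo_advantages(rewards: list[float]) -> list[float]:
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0  # guard against all-identical rewards
    return [(r - mean) / std for r in rewards]

# Two good and two bad completions in a group of four:
print(grpo_advantages([0.0, 1.0, 1.0, 0.0]))  # [-1.0, 1.0, 1.0, -1.0]
```

Completions that beat their own group's average get positive advantage, so the policy needs no separate value model.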
Given a Solidity smart contract, the model identifies security vulnerabilities and produces structured audit findings. Example usage:
````python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "oxdev/security-auditor-grpo",
    use_cache=True,  # important: the saved config has use_cache=False from training
)
tokenizer = AutoTokenizer.from_pretrained("oxdev/security-auditor-grpo")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device="cuda")

messages = [
    {"role": "system", "content": "You are an expert smart contract security auditor. Analyze the provided Solidity code for vulnerabilities."},
    {"role": "user", "content": """Audit this contract:
```solidity
contract SimpleBank {
    mapping(address => uint256) public balances;

    function deposit() public payable { balances[msg.sender] += msg.value; }

    function withdraw(uint256 amount) public {
        require(balances[msg.sender] >= amount);
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success);
        balances[msg.sender] -= amount;
    }
}
```"""},
]

result = pipe(messages, max_new_tokens=512, do_sample=False, return_full_text=False)
output = result[0]["generated_text"]
if isinstance(output, list):  # some pipeline versions return a message list
    output = output[-1]["content"]
print(output)
````
Interactive Demo: oxdev/security-auditor-demo offers a side-by-side comparison with the base model, 7 test cases with known vulnerabilities, and automated scoring.
To reproduce training, run `train_grpo_v2_colab.ipynb` in Google Colab with a free T4 GPU.

| Category | Keywords |
|---|---|
| Reentrancy | reentrancy, reentrant, callback |
| Access Control | unauthorized, permission, onlyowner |
| Oracle Manipulation | price feed, chainlink, twap |
| Flash Loan | flash loan, flashloan |
| Overflow/Underflow | overflow, underflow, arithmetic |
| Front-running | front-run, sandwich, MEV |
| DoS | denial of service, gas limit, unbounded |
| Token Issues | fee-on-transfer, rebasing, ERC20 |
| Storage | storage collision, delegatecall, proxy |
| Cross-chain | bridge, relay, message passing |
| Liquidation | liquidation, collateral, health factor |
| Signature | ecrecover, replay, nonce, EIP712 |
| Initialization | uninitialized, constructor |
| Rounding | precision, truncation, decimal |
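The categories above can back a simple keyword-matching check. A minimal sketch, assuming one of the reward functions in `train_grpo_v2.py` scores whether the model's finding mentions the expected category's keywords (`CATEGORY_KEYWORDS` and `category_reward` are illustrative names, not the actual training code):

```python
# Keyword table from the model card, lowercased for case-insensitive matching.
CATEGORY_KEYWORDS = {
    "Reentrancy": ["reentrancy", "reentrant", "callback"],
    "Access Control": ["unauthorized", "permission", "onlyowner"],
    "Oracle Manipulation": ["price feed", "chainlink", "twap"],
    "Flash Loan": ["flash loan", "flashloan"],
    "Overflow/Underflow": ["overflow", "underflow", "arithmetic"],
    "Front-running": ["front-run", "sandwich", "mev"],
    "DoS": ["denial of service", "gas limit", "unbounded"],
    "Token Issues": ["fee-on-transfer", "rebasing", "erc20"],
    "Storage": ["storage collision", "delegatecall", "proxy"],
    "Cross-chain": ["bridge", "relay", "message passing"],
    "Liquidation": ["liquidation", "collateral", "health factor"],
    "Signature": ["ecrecover", "replay", "nonce", "eip712"],
    "Initialization": ["uninitialized", "constructor"],
    "Rounding": ["precision", "truncation", "decimal"],
}

def category_reward(completion: str, expected_category: str) -> float:
    """1.0 if the audit output mentions any keyword of the expected
    vulnerability category, else 0.0."""
    text = completion.lower()
    keywords = CATEGORY_KEYWORDS.get(expected_category, [])
    return 1.0 if any(k in text for k in keywords) else 0.0

print(category_reward(
    "Classic reentrancy: state is updated after the external call.",
    "Reentrancy",
))  # 1.0
```

A binary signal like this is coarse on its own, but combined with format and severity rewards it gives GRPO a group of completions to rank against each other.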
The model uses the Qwen ChatML chat template (`<|im_start|>` / `<|im_end|>`). Set `use_cache=True` when loading for inference: the saved config has `use_cache=False` from training, which makes generation 10-20× slower.

| File | Description |
|---|---|
| `model.safetensors` | V1 trained model weights (1.8 GB) |
| `train_grpo_job.py` | V1 training script |
| `train_grpo_v2.py` | V2 training script (4 reward functions) |
| `train_grpo_v2_colab.ipynb` | V2 Colab notebook (free T4 GPU) |
| `checkpoint-300/` | V1 training checkpoint |
| `checkpoint-326/` | V1 final checkpoint |
```bibtex
@article{shao2024deepseekmath,
  title         = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
  author        = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and others},
  year          = {2024},
  eprint        = {2402.03300},
  archivePrefix = {arXiv},
}
```
Base model: Qwen/Qwen2.5-0.5B