---
library_name: peft
tags:
- code
- instruct
- code-llama
datasets:
- cognitivecomputations/dolphin-coder
base_model: codellama/CodeLlama-7b-hf
license: apache-2.0
---

# monsterapi/codellama_7b_DolphinCoder

How to use monsterapi/codellama_7b_DolphinCoder with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the DolphinCoder LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
model = PeftModel.from_pretrained(base_model, "monsterapi/codellama_7b_DolphinCoder")
```
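For inference, a minimal generation sketch can follow the loading code above (the base model's tokenizer is reused on the assumption that the adapter adds no new tokens, and the prompt is illustrative, since the card does not specify a prompt format):

```python
from transformers import AutoTokenizer

# Assumption: the adapter introduces no new tokens, so the base tokenizer applies.
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you prefer a standalone model without the PEFT wrapper, `model.merge_and_unload()` folds the adapter weights into the base model.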
### Finetuning Overview:

**Model Used:** codellama/CodeLlama-7b-hf

**Dataset:** cognitivecomputations/dolphin-coder

#### Dataset Insights:

The [Dolphin-Coder](https://huggingface.co/datasets/cognitivecomputations/dolphin-coder) dataset is a high-quality collection of 100,000+ coding questions and responses. It is well suited to supervised fine-tuning (SFT) and to teaching language models to improve on coding tasks.
#### Finetuning Details:

Using [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:

- was achieved cost-effectively,
- completed in a total of 15 hours 31 minutes for 1 epoch on an A6000 48GB GPU,
- cost `$31.31` for the entire epoch.
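(For context, `$31.31` over 15 hours 31 minutes works out to roughly `$2.02` per GPU-hour, assuming the quoted duration covers all billed time.)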
#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Total Finetuning Cost:** $31.31
- **Model Path:** codellama/CodeLlama-7b-hf
- **Learning Rate:** 0.0002
- **Data Split:** 100% train
- **Gradient Accumulation Steps:** 128
- **LoRA r:** 32
- **LoRA alpha:** 64
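For reproduction, a sketch of how these values map onto a PEFT `LoraConfig` (`target_modules` and `lora_dropout` are illustrative assumptions; the card does not report them):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                                 # LoRA r, from the card
    lora_alpha=64,                        # LoRA alpha, from the card
    lora_dropout=0.05,                    # assumption: not reported in the card
    target_modules=["q_proj", "v_proj"],  # assumption: not reported in the card
    task_type="CAUSAL_LM",
)
```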