# Unsloth Docs

Unsloth lets you run and train AI models on your own local hardware.

Our docs will guide you through running & training your own model locally.

<a href="fine-tuning-for-beginners" class="button primary">Get started</a> <a href="https://github.com/unslothai/unsloth" class="button secondary">Our GitHub</a>

<table data-view="cards" data-full-width="false"><thead><tr><th></th><th></th><th data-hidden data-card-cover data-type="image">Cover image</th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><strong>Qwen3.6</strong></td><td>The new Qwen3.6-27B model is here!</td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fdy7z03AUFzXHKqqOY7kC%2Fqwen3.6%20logo.png?alt=media&#x26;token=a894c047-a9ea-4f9c-824a-be86ec81f54d">qwen3.6 logo.png</a></td><td><a href="../models/qwen3.6">qwen3.6</a></td></tr><tr><td><strong>Google Gemma 4</strong></td><td>Run and train Google's new Gemma 4 models!</td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FkEjWOJqBWCtIN9Cg6CdI%2FGemma%204%20landscape.png?alt=media&#x26;token=57d3f596-dae8-4eab-80e6-0847794ffc8d">Gemma 4 landscape.png</a></td><td><a href="../models/gemma-4">gemma-4</a></td></tr><tr><td><strong>Kimi K2.6</strong></td><td>Run the new SOTA open model.</td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FJ0bVVb4T95GD9XONyMQh%2Fkimi%20k26.png?alt=media&#x26;token=c69c24cd-45d5-4710-9c0f-d3a04ab7f07d">kimi k26.png</a></td><td><a href="../models/kimi-k2.6">kimi-k2.6</a></td></tr><tr><td><strong>Introducing Unsloth Studio</strong></td><td>New open, no-code UI to train and run LLMs.</td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FstfdTMsoBMmsbQsgQ1Ma%2Flandscape%20clip%20gemma.gif?alt=media&#x26;token=eec5f2f7-b97a-4c1c-ad01-5a041c3e4013">landscape clip gemma.gif</a></td><td><a href="../new/studio">studio</a></td></tr><tr><td><strong>Qwen3.5</strong></td><td>New Qwen3.5 Small &#x26; Medium LLMs are here!</td><td><a 
href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fvw6yRxJDCeBl1CIsQkki%2Fqwen35.png?alt=media&#x26;token=28fe0357-351a-49e1-a176-bb21ecc8542a">qwen35.png</a></td><td><a href="../models/qwen3.5">qwen3.5</a></td></tr><tr><td><strong>GLM-5.1</strong></td><td>Run the new SOTA open model locally.</td><td><a href="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FK69rPUGatLzCBK9uaqxU%2Fglm51%20logo.png?alt=media&#x26;token=934ef701-0233-47fd-ad49-6c1a5959b684">glm51 logo.png</a></td><td><a href="../models/glm-5.1">glm-5.1</a></td></tr></tbody></table>

{% columns %}
{% column width="50%" %}
{% content-ref url="fine-tuning-llms-guide" %}
[fine-tuning-llms-guide](https://unsloth.ai/docs/get-started/fine-tuning-llms-guide)
{% endcontent-ref %}

{% content-ref url="unsloth-notebooks" %}
[unsloth-notebooks](https://unsloth.ai/docs/get-started/unsloth-notebooks)
{% endcontent-ref %}
{% endcolumn %}

{% column width="50%" %}
{% content-ref url="unsloth-model-catalog" %}
[unsloth-model-catalog](https://unsloth.ai/docs/get-started/unsloth-model-catalog)
{% endcontent-ref %}

{% content-ref url="../models/tutorials" %}
[tutorials](https://unsloth.ai/docs/models/tutorials)
{% endcontent-ref %}
{% endcolumn %}
{% endcolumns %}

### 🦥 Why Unsloth?

* We collaborate directly with the teams behind [gpt-oss](https://docs.unsloth.ai/new/gpt-oss-how-to-run-and-fine-tune#unsloth-fixes-for-gpt-oss), [Qwen3](https://www.reddit.com/r/LocalLLaMA/comments/1kaodxu/qwen3_unsloth_dynamic_ggufs_128k_context_bug_fixes/), [Llama 4](https://github.com/ggml-org/llama.cpp/pull/12889), [Mistral](https://unsloth.ai/docs/models/tutorials/devstral-how-to-run-and-fine-tune), [Gemma 1-3](https://news.ycombinator.com/item?id=39671146) and [Phi-4](https://unsloth.ai/blog/phi4), where we’ve **fixed critical bugs** that greatly improved model accuracy. Andrej Karpathy, for example, has [praised our work](https://x.com/karpathy/status/1765473722985771335).
* Unsloth streamlines local training, inference, data preparation, and deployment.
* Unsloth supports inference and training for 500+ models: [vision](https://unsloth.ai/docs/basics/vision-fine-tuning), [TTS](https://unsloth.ai/docs/basics/text-to-speech-tts-fine-tuning), [embedding](https://unsloth.ai/docs/basics/embedding-finetuning), [RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide).

### ⭐ Features

Unsloth lets you run and train models for text, [audio](https://unsloth.ai/docs/basics/text-to-speech-tts-fine-tuning), [embedding](https://unsloth.ai/docs/new/embedding-finetuning), [vision](https://unsloth.ai/docs/basics/vision-fine-tuning) and more. Unsloth provides many key features for both inference and training:

#### Inference

* Search for, download, and run models in any format, including GGUF, LoRA adapters, and safetensors.
* [Self-healing tool calling](https://unsloth.ai/docs/new/studio/chat#auto-healing-tool-calling), web search, and OpenAI-compatible APIs.
* [Auto inference parameter tuning](https://unsloth.ai/docs/new/studio/chat#auto-parameter-tuning) and editable chat templates.
* [Export or save](https://unsloth.ai/docs/new/studio/export) your model to GGUF, 16-bit safetensors, and more.
* [Compare outputs](https://unsloth.ai/docs/new/studio/chat#model-arena) from two different models side by side.

#### Training

* Train and run [RL](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide) on 500+ models \~2x faster with \~70% less VRAM (no accuracy loss).
* Supports full fine-tuning, pre-training, and 4-bit, 16-bit and FP8 training.
* [Auto-create datasets](https://unsloth.ai/docs/new/studio/data-recipe) from PDF, CSV, DOCX files. Edit data in a visual node workflow.
* Observability: monitor training live, track loss and GPU usage, and customize graphs.
* The most efficient [**reinforcement learning**](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide) library, using 80% less VRAM for GRPO, [FP8](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide/fp8-reinforcement-learning), and more.
* [Multi-GPU](https://unsloth.ai/docs/basics/multi-gpu-training-with-unsloth) training works today, and a much better version is coming!

### Quickstart

Unsloth supports macOS, Linux, [Windows](https://unsloth.ai/docs/get-started/install/windows-installation), [NVIDIA](https://unsloth.ai/docs/get-started/install/pip-install), Intel, and CPU-only setups. See: [unsloth-requirements](https://unsloth.ai/docs/get-started/fine-tuning-for-beginners/unsloth-requirements "mention"). The same commands also update an existing installation:

#### **macOS, Linux, WSL:**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

#### **Windows PowerShell:**

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

#### Docker

Use our official **Docker image**: [`unsloth/unsloth`](https://hub.docker.com/r/unsloth/unsloth), which currently works on Windows, WSL, and Linux. macOS support is coming soon.

#### Launch Unsloth

```bash
unsloth studio -H 0.0.0.0 -p 8888
```

### What is Fine-tuning and RL? Why?

[**Fine-tuning** an LLM](https://unsloth.ai/docs/get-started/fine-tuning-llms-guide) customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a dataset, you can:

* **Update Knowledge**: Introduce new domain-specific information.
* **Customize Behavior**: Adjust the model’s tone, personality, or response style.
* **Optimize for Tasks**: Improve accuracy and relevance for specific use cases.

[**Reinforcement Learning (RL)**](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide) is where an "agent" learns to make decisions by interacting with an environment and receiving **feedback** in the form of **rewards** or **penalties**.

* **Action:** What the model generates (e.g. a sentence).
* **Reward:** A signal indicating how good or bad the model's action was (e.g. did the response follow instructions? was it helpful?).
* **Environment:** The scenario or task the model is working on (e.g. answering a user’s question).
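In practice, the reward is simply a function you write that scores each action (completion) the model generates. As a minimal, hypothetical sketch (not Unsloth's API), the action/reward loop above might use a reward function like:

```python
# Hypothetical reward function for RL fine-tuning (illustrative only):
# score a completion the model generated for a prompt.
def reward(prompt: str, completion: str) -> float:
    if not completion.strip():
        return 0.0          # penalize empty responses
    score = 0.5             # base reward for answering at all
    if len(completion) <= 500:
        score += 0.5        # bonus for staying concise
    return score
```

During RL training (e.g. GRPO), a scorer like this is called on every sampled completion, and the model is updated to prefer higher-reward outputs.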

**Example fine-tuning or RL use-cases**:

* Enables LLMs to predict if a headline impacts a company positively or negatively.
* Can use historical customer interactions for more accurate and custom responses.
* Fine-tune LLM on legal texts for contract analysis, case law research, and compliance.

You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. **Fine-tuning can replicate all of RAG's capabilities**, but not vice versa.

{% columns %}
{% column width="50%" %}
{% content-ref url="fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me" %}
[faq-+-is-fine-tuning-right-for-me](https://unsloth.ai/docs/get-started/fine-tuning-for-beginners/faq-+-is-fine-tuning-right-for-me)
{% endcontent-ref %}

{% content-ref url="../basics/inference-and-deployment" %}
[inference-and-deployment](https://unsloth.ai/docs/basics/inference-and-deployment)
{% endcontent-ref %}
{% endcolumn %}

{% column width="50%" %}
{% content-ref url="reinforcement-learning-rl-guide" %}
[reinforcement-learning-rl-guide](https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide)
{% endcontent-ref %}

{% content-ref url="../basics/unsloth-dynamic-2.0-ggufs" %}
[unsloth-dynamic-2.0-ggufs](https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs)
{% endcontent-ref %}
{% endcolumn %}
{% endcolumns %}

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fgit-blob-134302f2507d4313b9575917c9a43b0a0028856c%2Flarge%20sloth%20wave.png?alt=media" alt="" width="188"><figcaption></figcaption></figure>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://unsloth.ai/docs/get-started/readme.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
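Questions containing spaces or punctuation must be URL-encoded before being placed in the `ask` parameter. For example, building the request URL in Python (the question text here is illustrative):

```python
from urllib.parse import urlencode

# Endpoint pattern from the docs above; the question is an example.
base = "https://unsloth.ai/docs/get-started/readme.md"
question = "How do I install Unsloth on Windows?"
url = f"{base}?{urlencode({'ask': question})}"
print(url)
```

Issuing an HTTP GET request to the resulting URL returns the answer with relevant excerpts and sources.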
