# Quickstart

Go from local code to running on GPUs in under two minutes.

### Prerequisites

Make sure you've completed [installation](https://docs.mithril.ai/mithril-cli/installation "mention") first: you'll need a Mithril API key, a project, and billing configured before continuing.

### 1. Start from your project

Open a terminal in the directory that contains your project code:

```bash
cd /path/to/project
```

### 2. Describe your task in YAML

Create a file called `task.yaml` at the project root. This tells Mithril what hardware you need, how to prepare the environment, and what to run.

{% hint style="info" %}
Run `ml launch` with no arguments to generate a starter `task.yaml` and an `AGENTS.md` that coding agents can use to guide you through the entire workflow.
{% endhint %}
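
In other words, a bare invocation scaffolds both files for you:

```bash
ml launch    # no arguments: writes a starter task.yaml and AGENTS.md
```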

```yaml
# task.yaml

# Sync this directory so your code and data are
# available on the cluster
workdir: .

resources:
  infra: mithril
  accelerators: B200:8
  
# Maximum hourly price you're willing to pay for
# the instance.
# Due to auction-based pricing, you often pay less
# than this cap.
config:
  mithril:
    # Equivalent to $4.00/GPU/hour on an 8x instance.
    limit_price: 32.00

# Runs once when the cluster is first created
# (install deps, download data, etc.)
setup: |
  pip install -r requirements.txt

# Command that executes your code — runs on the
# cluster every time you launch or exec.
run: |
  nvidia-smi
  python train.py
```

> This example shows training, but the same workflow applies to batch inference and evaluation.

That's the entire configuration. `workdir: .` uploads your local files, `setup` runs once to prepare the environment, and `run` specifies the commands that execute your code.

→ [task-yaml](https://docs.mithril.ai/mithril-cli/task-yaml "mention") — full reference manual.
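
If you'd rather start smaller before bidding on eight GPUs, the same schema scales down. Here's a minimal sketch that reuses only keys from the example above; it assumes a single-GPU count (`B200:1`) is valid and that `setup` can be omitted when there is nothing to install. The cap keeps the same $4.00/GPU/hour rate, since the limit applies per instance:

```yaml
# task.yaml (hypothetical single-GPU variant)
workdir: .

resources:
  infra: mithril
  accelerators: B200:1

config:
  mithril:
    # 1 GPU x $4.00/GPU/hour = $4.00/hour cap
    limit_price: 4.00

run: |
  python train.py
```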

### 3. Launch

```bash
ml launch task.yaml -c my-cluster
```

This does two things: creates a cluster named `my-cluster` with the GPUs you requested, and submits your task as the first job. Logs will stream directly to your terminal.

### 4. Check cluster status

Once launched, you can check on your cluster at any time:

```bash
ml status
```

### 5. View the job queue and logs

A cluster can run multiple jobs. To see what's queued and stream output:

```bash
ml queue my-cluster        # list all jobs on the cluster
ml logs my-cluster         # stream logs for a job
```

### 6. SSH into the cluster

For interactive debugging or poking around, SSH straight in:

```bash
ssh my-cluster
```
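
Assuming `ml` registers `my-cluster` as a regular SSH host alias (which is what makes the bare `ssh my-cluster` work), the usual SSH tooling applies as well. For example, a one-off command or a file pull; the remote `results/` path here is just a placeholder for wherever your job writes output:

```bash
# Run a single command without opening an interactive shell
ssh my-cluster nvidia-smi

# Copy artifacts back down; replace the path with your
# job's actual output directory
rsync -av my-cluster:~/results/ ./results/
```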

### 7. Iterate on your code

Edit your code or `run` commands locally, then push the changes to your existing cluster:

```bash
ml exec my-cluster task.yaml
```

`ml exec` syncs your workdir and executes the run commands without reprovisioning or modifying cluster resources. This creates a tight iteration loop for testing changes.
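
A typical loop, using only the commands introduced above:

```bash
# 1. Edit code locally
$EDITOR train.py

# 2. Sync the workdir and re-run on the existing cluster
ml exec my-cluster task.yaml

# 3. Follow the output
ml logs my-cluster
```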

**When to use which:**

| Scenario                                           | Command                             |
| -------------------------------------------------- | ----------------------------------- |
| Only your code or `run` commands changed           | `ml exec my-cluster task.yaml`      |
| You changed `setup`, `file_mounts`, or `resources` | `ml launch task.yaml -c my-cluster` |

### 8. Tear down when you're done

When you no longer need the cluster:

```bash
ml down my-cluster
```

This terminates your bid, releases compute resources, and stops billing. Persistent volumes are not affected; your data remains available.
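
To confirm, you can re-run `ml status` from step 4; exactly how a terminated cluster is displayed may vary, but it should no longer show as active:

```bash
ml status   # my-cluster should no longer be listed as active
```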
