💻 Examples

Real working code in Python, JavaScript, and cURL

🐍 Python SDK Examples

Complete Python scripts ready to run

Complete Training Script

train_model.py

import requests
import time

API_BASE = "https://finetunelab.ai"
AUTH_TOKEN = "your-token"

headers = {
    "Authorization": f"Bearer {AUTH_TOKEN}",
    "Content-Type": "application/json"
}

def monitor_training(job_id):
    url = f"{API_BASE}/api/training/metrics/{job_id}"
    while True:
        response = requests.get(url, headers=headers)
        metrics = response.json()
        
        print(f"Step {metrics['current_step']}/{metrics['total_steps']}")
        print(f"Loss: {metrics['train_loss']:.4f}")
        
        if metrics.get("status") == "completed":
            break
        time.sleep(10)
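`monitor_training` assumes a job is already running. Here is a minimal sketch of creating and starting one first, using the endpoint paths and payload fields from the cURL reference on this page; the `id` field in the create response is an assumption, so match it to the actual schema:

```python
import requests

API_BASE = "https://finetunelab.ai"
HEADERS = {
    "Authorization": "Bearer your-token",
    "Content-Type": "application/json",
}

def build_config(name, base_model, dataset_id, epochs=3):
    """Payload for POST /api/training, matching the cURL example fields."""
    return {"name": name, "base_model": base_model,
            "dataset_id": dataset_id, "epochs": epochs}

def start_training(config):
    """Create the config, then kick it off via POST /api/training/execute.
    Assumes the create response includes an "id" field."""
    created = requests.post(f"{API_BASE}/api/training",
                            json=config, headers=HEADERS).json()
    requests.post(f"{API_BASE}/api/training/execute",
                  json={"id": created["id"]}, headers=HEADERS)
    return created["id"]

# Usage (performs real HTTP calls):
# job_id = start_training(
#     build_config("my-model", "meta-llama/Llama-3.2-1B", "dataset-123"))
# monitor_training(job_id)
```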

Dataset Validation Script

validate_dataset.py

import json

def validate_jsonl(file_path):
    errors = []
    with open(file_path, 'r') as f:
        for i, line in enumerate(f, 1):
            try:
                data = json.loads(line)
                if "messages" not in data:
                    errors.append(f"Line {i}: Missing messages")
            except json.JSONDecodeError:
                errors.append(f"Line {i}: Invalid JSON")
    return errors
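To sanity-check the validator, run it against a throwaway file with one good record, one record missing `messages`, and one line of invalid JSON; the function is repeated here so the snippet runs standalone:

```python
import json
import tempfile

def validate_jsonl(file_path):
    """Same logic as validate_dataset.py above."""
    errors = []
    with open(file_path, 'r') as f:
        for i, line in enumerate(f, 1):
            try:
                data = json.loads(line)
                if "messages" not in data:
                    errors.append(f"Line {i}: Missing messages")
            except json.JSONDecodeError:
                errors.append(f"Line {i}: Invalid JSON")
    return errors

# Build a small sample file: valid, missing key, malformed.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write('{"messages": [{"role": "user", "content": "hi"}]}\n')
    f.write('{"text": "no messages key"}\n')
    f.write('not json\n')
    path = f.name

print(validate_jsonl(path))
# → ['Line 2: Missing messages', 'Line 3: Invalid JSON']
```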

⚡ JavaScript/TypeScript Examples

Modern async/await patterns for Node.js and browsers

Training Client Class

TrainingClient.ts

class TrainingClient {
  private baseUrl: string;
  private authToken: string;

  constructor(baseUrl: string, authToken: string) {
    this.baseUrl = baseUrl;
    this.authToken = authToken;
  }

  async createConfig(config: object) {
    const response = await fetch(`${this.baseUrl}/api/training`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.authToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(config),
    });
    return response.json();
  }

  async getMetrics(jobId: string) {
    const response = await fetch(`${this.baseUrl}/api/training/metrics/${jobId}`, {
      headers: { 'Authorization': `Bearer ${this.authToken}` },
    });
    return response.json();
  }
}

React Hook for Training

useTraining.ts

import { useState, useEffect } from 'react';

export function useTraining(jobId: string | null) {
  const [metrics, setMetrics] = useState<any>(null);

  useEffect(() => {
    if (!jobId) return;

    const fetchMetrics = async () => {
      const response = await fetch(`/api/training/metrics/${jobId}`);
      setMetrics(await response.json());
    };

    fetchMetrics();
    const interval = setInterval(fetchMetrics, 5000);
    return () => clearInterval(interval);
  }, [jobId]);

  return metrics;
}

🔧 cURL Command Reference

Quick copy-paste commands for terminal use

Create Training Config

curl -X POST https://finetunelab.ai/api/training \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-model", "base_model": "meta-llama/Llama-3.2-1B", "dataset_id": "dataset-123", "epochs": 3}'

Start Training

curl -X POST https://finetunelab.ai/api/training/execute \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"id": "config-456"}'

Get Training Status

curl https://finetunelab.ai/api/training/status/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

Get Metrics

curl https://finetunelab.ai/api/training/metrics/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

Pause Training

curl -X POST https://finetunelab.ai/api/training/pause/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

Resume Training

curl -X POST https://finetunelab.ai/api/training/resume/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

Download Model

curl -O https://finetunelab.ai/api/training/download/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

Get Analytics

curl https://finetunelab.ai/api/training/analytics/job-789 \
  -H "Authorization: Bearer YOUR_TOKEN"

🚀 Inference Deployment Examples

Deploy trained models to production with RunPod Serverless

Python: Complete Deployment Workflow

deploy_inference.py

import requests
import time

class InferenceDeployment:
    def __init__(self, api_base, auth_token):
        self.api_base = api_base
        self.headers = {
            "Authorization": f"Bearer {auth_token}",
            "Content-Type": "application/json"
        }

    def deploy(self, training_job_id, deployment_name, budget_limit=10.0):
        url = f"{self.api_base}/api/inference/deploy"
        payload = {
            "provider": "runpod-serverless",
            "deployment_name": deployment_name,
            "training_job_id": training_job_id,
            "gpu_type": "NVIDIA RTX A4000",
            "budget_limit": budget_limit
        }
        response = requests.post(url, json=payload, headers=self.headers)
        return response.json()

    def get_status(self, deployment_id):
        url = f"{self.api_base}/api/inference/deployments/{deployment_id}/status"
        return requests.get(url, headers=self.headers).json()

    def make_inference_request(self, endpoint_url, prompt):
        payload = {"input": {"prompt": prompt, "max_tokens": 512}}
        return requests.post(endpoint_url, json=payload).json()
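A typical flow with the class above: poll the status endpoint until the deployment is serving, then send a prompt. The `"running"` status value below is an assumption; match it to the actual response schema:

```python
import time
import requests

API_BASE = "https://finetunelab.ai"
HEADERS = {"Authorization": "Bearer your-token"}

def build_inference_payload(prompt, max_tokens=512, temperature=0.7):
    """Request body shape used by the RunPod endpoint examples."""
    return {"input": {"prompt": prompt, "max_tokens": max_tokens,
                      "temperature": temperature}}

def wait_until_ready(deployment_id, timeout=600, poll_every=15):
    """Poll the status endpoint until the deployment is serving.
    Deployments typically take 2-5 minutes to become active."""
    url = f"{API_BASE}/api/inference/deployments/{deployment_id}/status"
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(url, headers=HEADERS).json()
        if status.get("status") == "running":  # assumed status value
            return status
        time.sleep(poll_every)
    raise TimeoutError(f"deployment {deployment_id} not ready after {timeout}s")

# Usage (performs real HTTP calls):
# wait_until_ready("dep-xyz789")
# requests.post(endpoint_url, json=build_inference_payload("Hello"))
```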

JavaScript/TypeScript: Inference Client

InferenceClient.ts

interface DeployConfig {
  deploymentName: string;
  trainingJobId: string;
  gpuType?: string;
  budgetLimit?: number;
}

class InferenceClient {
  constructor(private baseUrl: string, private authToken: string) {}

  private async request(path: string, options: RequestInit = {}) {
    const response = await fetch(`${this.baseUrl}${path}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.authToken}`,
        'Content-Type': 'application/json',
      },
    });
    return response.json();
  }

  async deploy(config: DeployConfig) {
    return this.request('/api/inference/deploy', {
      method: 'POST',
      body: JSON.stringify({
        provider: 'runpod-serverless',
        deployment_name: config.deploymentName,
        training_job_id: config.trainingJobId,
        gpu_type: config.gpuType || 'NVIDIA RTX A4000',
        budget_limit: config.budgetLimit || 10.0
      })
    });
  }

  async makeInferenceRequest(endpointUrl: string, prompt: string) {
    const response = await fetch(endpointUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ input: { prompt, max_tokens: 512 }})
    });
    return response.json();
  }
}

cURL: Quick Commands

Deploy to RunPod Serverless

curl -X POST https://finetunelab.ai/api/inference/deploy \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "runpod-serverless",
    "deployment_name": "my-model-prod",
    "training_job_id": "job-abc123",
    "gpu_type": "NVIDIA RTX A4000",
    "budget_limit": 10.0,
    "min_workers": 0,
    "max_workers": 3,
    "auto_stop_on_budget": true
  }'

Check Deployment Status

curl -X GET https://finetunelab.ai/api/inference/deployments/dep-xyz789/status \
  -H "Authorization: Bearer YOUR_TOKEN"

Make Inference Request

curl -X POST https://your-endpoint.runpod.net \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "prompt": "Explain quantum computing",
      "max_tokens": 512,
      "temperature": 0.7
    }
  }'

Stop Deployment

curl -X DELETE https://finetunelab.ai/api/inference/deployments/dep-xyz789/stop \
  -H "Authorization: Bearer YOUR_TOKEN"

💡 Important Notes

  • Configure RunPod API key in Settings → Secrets before deploying
  • Minimum budget limit is $1.00; $10-50 recommended for production
  • Auto-scaling: set min_workers=0 to scale to zero when idle
  • Budget alerts trigger at 50%, 80%, and 100% utilization
  • Deployment typically takes 2-5 minutes to become active
  • Monitor costs in real time on the /inference page
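The 50/80/100% alert ladder above can be mirrored client-side, for example to decide when to warn users in your own dashboard. A minimal sketch; the platform raises the real alerts server-side:

```python
def crossed_thresholds(spend, budget_limit, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert percentages already crossed, given current
    spend and the deployment's budget_limit (both in dollars)."""
    if budget_limit <= 0:
        raise ValueError("budget_limit must be positive")
    used = spend / budget_limit
    return [int(t * 100) for t in thresholds if used >= t]

print(crossed_thresholds(8.5, 10.0))  # → [50, 80]
```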

🔄 End-to-End Workflows

Complete scenarios from start to finish

Workflow 1: Basic Fine-Tuning

1. Prepare dataset (training_data.jsonl)
   {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}

2. Upload dataset
   POST /api/training/datasets

3. Create training configuration
   POST /api/training (learning_rate: 0.0001, batch_size: 4, epochs: 3)

4. Start training job
   POST /api/training/execute

5. Monitor progress (poll every 5-10 seconds)
   GET /api/training/metrics/:id

6. Download trained model
   GET /api/training/download/:id
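Steps 2 and 6 above (upload and download) as a Python sketch; the multipart field name `"file"` is an assumption, so check the datasets endpoint before relying on it:

```python
import requests

API_BASE = "https://finetunelab.ai"
HEADERS = {"Authorization": "Bearer your-token"}

def download_url(job_id):
    """URL for step 6 (GET /api/training/download/:id)."""
    return f"{API_BASE}/api/training/download/{job_id}"

def upload_dataset(path):
    """Step 2: multipart upload of the JSONL file.
    The "file" field name is an assumption."""
    with open(path, "rb") as f:
        resp = requests.post(f"{API_BASE}/api/training/datasets",
                             files={"file": f}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def download_model(job_id, out_path):
    """Step 6: stream the finished model archive to disk."""
    with requests.get(download_url(job_id), headers=HEADERS,
                      stream=True) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as out:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                out.write(chunk)
    return out_path
```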

Workflow 2: Production Deployment

1. Complete training workflow (see Workflow 1)

2. Review final metrics
   GET /api/training/analytics/:id (check final loss, eval metrics)

3. Deploy to RunPod Serverless
   POST /api/training/deploy/:id

4. Test deployed model
   curl -X POST https://finetunelab.ai/v1/chat/completions -d '{"model": "...", "messages": [...]}'

5. Add model to your app
   POST /api/models (register custom model with base_url)

6. Monitor production usage
   Track latency, error rates, user feedback
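The "test deployed model" step can also be done from Python against the same /v1/chat/completions route; the response shape is assumed to be OpenAI-compatible:

```python
import requests

def build_chat_request(model, messages):
    """Body for POST /v1/chat/completions."""
    return {"model": model, "messages": messages}

def chat(model, messages, base_url="https://finetunelab.ai",
         token="your-token"):
    """Send one chat request to the deployed model."""
    resp = requests.post(
        f"{base_url}/v1/chat/completions",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        json=build_chat_request(model, messages),
    )
    resp.raise_for_status()
    return resp.json()

# Usage (performs a real HTTP call):
# chat("my-model", [{"role": "user", "content": "Hello"}])
```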

Workflow 3: Hyperparameter Optimization

1. Create baseline config (learning_rate: 1e-4, batch_size: 4)
   POST /api/training → config-baseline

2. Create variant configs with different hyperparameters
   POST /api/training → config-lr-high (lr: 5e-4), config-lr-low (lr: 5e-5)

3. Run all training jobs in parallel
   POST /api/training/execute for each config

4. Compare results
   GET /api/training/analytics/compare?ids=job1,job2,job3

5. Select best performing config and deploy
   Choose config with lowest eval loss and best convergence
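The sweep above as a loop in Python: build one config per learning rate, submit each as its own job, then compare with the analytics endpoint from step 4. Config field names follow the examples on this page; the `id` field in the create response is an assumption:

```python
import requests

API_BASE = "https://finetunelab.ai"
HEADERS = {"Authorization": "Bearer your-token"}

def make_variants(base, learning_rates):
    """One config dict per learning rate, named after the value."""
    return [dict(base, name=f"{base['name']}-lr-{lr}", learning_rate=lr)
            for lr in learning_rates]

def launch_sweep(variants):
    """Create and start one training job per config (steps 2-3)."""
    job_ids = []
    for cfg in variants:
        created = requests.post(f"{API_BASE}/api/training",
                                json=cfg, headers=HEADERS).json()
        requests.post(f"{API_BASE}/api/training/execute",
                      json={"id": created["id"]}, headers=HEADERS)
        job_ids.append(created["id"])
    return job_ids

# Compare afterwards (step 4):
# GET {API_BASE}/api/training/analytics/compare?ids=job1,job2,job3
```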

🎉 Ready to Build!

You now have working code examples in Python, JavaScript, and cURL. Pick your favorite language and start building!