The Problem
We needed a support assistant that actually understands FineTune Lab. Not a generic chatbot that hallucinates features we don't have, but one that:
- Knows exactly where every button is
- Answers from what the product actually does, not from generic LLM knowledge
- Says "we don't have that" when a feature doesn't exist, instead of inventing one
The Approach
We started with Qwen and a dataset of ~2,000 Q&A pairs. Here's what we learned through 10+ iterations:
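For context, here's the shape of one training pair. This is a minimal sketch assuming the common chat-format JSONL convention; the field names and answer text are illustrative, not our exact export:

```python
import json

# One illustrative training pair in chat format. The "messages" schema is
# the common convention for chat fine-tuning; the answer text is made up
# for illustration, not taken from our actual dataset.
pair = {
    "messages": [
        {"role": "user", "content": "How do I view my datasets?"},
        {"role": "assistant",
         "content": "Open the Datasets page from the left sidebar."},
    ]
}

# The full dataset is ~2,000 of these, one JSON object per line (JSONL).
print(json.dumps(pair))
```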
Iteration 1: The Baseline Disaster
Our first dataset was written like documentation. Formal, complete, boring. The model responded like a manual, not a helpful assistant.
Example:
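An answer from that first dataset looked something like this (wording reconstructed; illustrative):

- Q: "How do I check my training progress?"
- A: "To monitor training progress, navigate to the Dashboard and consult the Metrics panel, which provides real-time visibility into the status of your runs."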
Problem: We don't have a dashboard. The model was hallucinating UI elements.
Iteration 2: Grounding in Reality
We rewrote answers to match our actual UI:
- Q: "How do I view my datasets?"
Better. But still missing personality and helpfulness.
Iteration 3-5: Adding Context
We added the details users actually need beyond the bare answer, as in the example below.
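Same question as before, iteration 3-5 style (wording illustrative):

- Q: "How do I view my datasets?"
- A: "Open the Datasets page from the left sidebar. You'll see every dataset you've uploaded along with its processing status. If one you just uploaded isn't there yet, give the upload a moment to finish; incomplete uploads don't show up in the list."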
Iteration 6-8: Adversarial Examples
This was the game-changer. We added questions about features we don't have, where the only correct answer is to say so. The question itself presupposes the feature exists, so the model has to push back on the premise instead of playing along.
Example adversarial:
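One adversarial pair, in spirit (wording illustrative):

- Q: "Where's the analytics dashboard?"
- A: "FineTune Lab doesn't have an analytics dashboard. Tell me what you're trying to check and I'll point you to the right place."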
Iteration 9-10: Reasoning Anchoring
We added thinking patterns that start with "What does FineTune Lab actually offer for this?"
This prevents the model from defaulting to generic LLM knowledge and keeps it grounded in our platform.
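A training sample with the anchor looks roughly like this (format and wording illustrative):

- Q: "Can I compare two fine-tuned models side by side?"
- Thinking: "What does FineTune Lab actually offer for this? There's no side-by-side comparison view, so the honest answer is to say so rather than invent one."
- A: "FineTune Lab doesn't have a side-by-side comparison view. You can evaluate each model on the same prompts and compare the outputs yourself."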
The Results
Key Takeaways
1. Start with real UI, not documentation - Write answers by actually clicking through the app
2. Add adversarial examples - Questions about features you don't have were the game-changer for stopping hallucinations
3. Anchor the reasoning - Starting the model's thinking with "What does FineTune Lab actually offer for this?" keeps it grounded in the platform instead of generic LLM knowledge
This is a living case study. We'll update it as we continue iterating on the dataset.