While Zero-Shot prompting is fast, Few-Shot Prompting (providing worked examples in the prompt) is what production-grade fidelity requires. In this lesson, we learn how to benchmark the accuracy gap between the two approaches and how to build a "Golden Dataset" of examples.
Research on in-context learning suggests that providing just 3-5 high-quality examples can improve a model's performance on complex tasks (like JSON extraction or creative writing) by as much as 40%.
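Benchmarking the accuracy gap only needs a labeled test set and an accuracy function. A minimal sketch, where the prediction lists are hypothetical results from running a zero-shot and a few-shot variant of the same prompt over the same five leads:

```python
def accuracy(preds, labels):
    """Fraction of predictions that exactly match the gold labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Hypothetical outputs from the two prompt variants (placeholder data)
zero_shot_preds = ["High", "Low", "Low", "Medium", "High"]
few_shot_preds  = ["High", "Low", "Medium", "Medium", "High"]
labels          = ["High", "Low", "Medium", "Medium", "High"]

# The "accuracy gap" is simply the difference between the two scores
gap = accuracy(few_shot_preds, labels) - accuracy(zero_shot_preds, labels)
```

Run this over your real Golden Dataset rather than toy lists to get a meaningful number.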
```text
### TASK
Classify the following lead based on their 'CRM Sophistication'.

### EXAMPLES
Input: [Example 1 Website Text] -> Output: High (Uses Salesforce + Segments)
Input: [Example 2 Website Text] -> Output: Low (No pixel, generic contact form)
Input: [Example 3 Website Text] -> Output: Medium (Uses Klaviyo but no flows)

### ACTUAL INPUT
Input: [New Lead Website Text]

### OUTPUT
```
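The template above can be assembled programmatically so the same example bank is reused across runs. A minimal sketch; the function name and the bracketed example texts are placeholders, not a prescribed API:

```python
def build_few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot classification prompt from (input, label) pairs."""
    lines = ["### TASK", task, "", "### EXAMPLES"]
    for text, label in examples:
        lines.append(f"Input: {text} -> Output: {label}")
    lines += ["", "### ACTUAL INPUT", f"Input: {new_input}", "", "### OUTPUT"]
    return "\n".join(lines)

examples = [
    ("[Example 1 Website Text]", "High (Uses Salesforce + Segments)"),
    ("[Example 2 Website Text]", "Low (No pixel, generic contact form)"),
    ("[Example 3 Website Text]", "Medium (Uses Klaviyo but no flows)"),
]
prompt = build_few_shot_prompt(
    "Classify the following lead based on their 'CRM Sophistication'.",
    examples,
    "[New Lead Website Text]",
)
```

Ending the prompt on the bare `### OUTPUT` header nudges the model to complete with only the label.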
A "Black-Belt" pro doesn't just provide good examples; they also provide Negative Examples (what not to do). This creates a "decision boundary" that keeps the model from hallucinating or drifting into forbidden styles.
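One way to encode that decision boundary is to label the negative examples explicitly in a separate section of the prompt. A sketch under that assumption; the section names and sample texts are illustrative, not a standard format:

```python
def build_contrastive_prompt(task, good, bad, new_input):
    """Pair positive and negative examples to mark the decision boundary."""
    lines = ["### TASK", task, "", "### GOOD EXAMPLES"]
    lines += [f"Input: {i} -> Output: {o}" for i, o in good]
    lines += ["", "### BAD EXAMPLES (never respond like this)"]
    lines += [f"Input: {i} -> Output: {o}" for i, o in bad]
    lines += ["", "### ACTUAL INPUT", f"Input: {new_input}", "", "### OUTPUT"]
    return "\n".join(lines)

prompt = build_contrastive_prompt(
    "Classify the lead's 'CRM Sophistication'.",
    [("[site with Salesforce + Segments]", "High (Uses Salesforce + Segments)")],
    # The bad example shows the forbidden behavior: guessing without evidence
    [("[site with no tracking visible]", "High (they probably use something)")],
    "[new lead website text]",
)
```

Showing one wrong answer per failure mode is usually enough; the contrast matters more than the volume.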
Create a set of 5 "Golden Examples" for a specific agency task (e.g., drafting a WhatsApp win-back message). Ensure each example covers a different edge case (e.g., an angry customer, a loyal customer, an inactive customer).
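A simple way to structure the deliverable is a list of records, one per edge case. The tags and the two extra edge cases below are illustrative placeholders, and the `...` strings are where your real message texts go:

```python
# A "Golden Dataset" sketch for a WhatsApp win-back task.
# edge_case tags are hypothetical; replace "..." with real agency examples.
golden_examples = [
    {"edge_case": "angry customer",    "input": "...", "ideal_output": "..."},
    {"edge_case": "loyal customer",    "input": "...", "ideal_output": "..."},
    {"edge_case": "inactive customer", "input": "...", "ideal_output": "..."},
    {"edge_case": "price objection",   "input": "...", "ideal_output": "..."},
    {"edge_case": "wrong-number reply","input": "...", "ideal_output": "..."},
]
```

Keeping the dataset in a structured form like this lets you feed it straight into a prompt builder and a benchmark loop.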