As LLMs process long prompts, they often suffer from Instruction Drifting: the tendency to ignore early constraints in favor of later ones. In this lesson, we cover the "Anchor" technique for keeping the model locked to your rules: restate the critical constraints at the end of the prompt, immediately before the task. The pattern looks like this:
```
### SYSTEM ARCHITECTURE
[500 words of complex context...]

### FINAL EXECUTION ANCHOR (RECAP)
Before you generate the response, confirm you will adhere to these 3 strict rules:
1. No corporate jargon.
2. Format as valid JSON only.
3. Identify exactly 2 revenue leaks.

### TASK
[Immediate Command]
```
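The recap only works if it stays in sync with the original rules, so it is worth generating both from the same list. A minimal Python sketch (the function name and sample strings are illustrative, not a real API):

```python
def build_anchored_prompt(context: str, rules: list[str], task: str) -> str:
    """Assemble a prompt that states the rules up front and repeats
    them verbatim in a final anchor section, just before the task."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return "\n".join([
        "### SYSTEM ARCHITECTURE",
        context,
        numbered,
        "",
        "### FINAL EXECUTION ANCHOR (RECAP)",
        f"Before you generate the response, confirm you will adhere "
        f"to these {len(rules)} strict rules:",
        numbered,  # same list, so the recap can never drift out of date
        "",
        "### TASK",
        task,
    ])

# Hypothetical usage with the three rules from the template above:
prompt = build_anchored_prompt(
    "You audit SaaS billing reports.",
    ["No corporate jargon.",
     "Format as valid JSON only.",
     "Identify exactly 2 revenue leaks."],
    "Audit the attached Q3 report.",
)
```

Because the recap is built from the same `rules` list, editing a rule once updates both copies, so the anchor cannot silently diverge from the original constraints.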
Higher "Temperature" settings (e.g., 0.8+) increase creativity but also make Instruction Drifting more likely. For high-fidelity technical tasks, set a low temperature (0.0–0.2) so the model follows instructions precisely.
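Temperature is typically just a field in the request body. A minimal sketch of a chat-completions-style payload (the field names and the placeholder model name are generic assumptions, not a specific vendor's API):

```python
def make_request_payload(prompt: str, deterministic: bool = True) -> dict:
    """Build a chat-style request body, defaulting to a low
    temperature for high-fidelity technical tasks."""
    return {
        "model": "example-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        # 0.0 keeps the model close to its most probable tokens;
        # 0.8 trades instruction adherence for variety.
        "temperature": 0.0 if deterministic else 0.8,
    }

payload = make_request_payload("Audit this code.")
```

For creative tasks, pass `deterministic=False` to get the higher setting; the trade-off is exactly the drift risk described above.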
Write a system prompt for a Python Code Auditor that must follow 15 specific style rules. Use the Anchor technique so the model has not drifted from rule #1 by the time it reaches the end of the prompt.