An AI agent can be your most reliable employee—or the one who constantly misses the mark. The difference lies in the instructions you give it. After months of experimenting with agent swarms at Lunch Pail Labs, I’ve learned that clear and precise instructions make all the difference.
Including a clear overview, defined responsibilities with examples, and key reminders to prevent errors has consistently led to more reliable performance.
Here’s how you can do it too:
1. Define the agent’s role
I start with a clear summary of the agent’s purpose. This acts as a job description, setting expectations and boundaries.
In my setup, PailAssist handles user interactions and coordinates with other agents. Its role is to:
- Respond to user queries.
- Assign tasks to specialized agents like PailTask or PailContent.
- Handle requests such as generating content or resolving support tickets.
This step ensures the agent understands its responsibilities and stays within its scope.
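For reference, here's a sketch of what that overview might look like at the top of the instructions. The wording is illustrative, not the verbatim prompt:
```
You are PailAssist, the coordinator for a swarm of internal agents.

Your role:
- Respond to user queries directly when you can.
- Assign tasks to specialized agents like PailTask or PailContent.
- Handle requests such as generating content or resolving support tickets.

Stay within this scope. If a request falls outside it, say so instead of improvising.
```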
2. List responsibilities with examples
I outline specific tasks and provide templates for how to execute them. Including example outputs improves performance by giving the agent a clear format to follow.
For example, here's a responsibility and the response template for delivering generated content:
Content Creation
When a user needs content:
```
"I'll help create that content.
✨ Content ready in Notion!
Title: [content_title]
Type: [content_type]
URL: [notion_url]
Feel free to review and let me know if you'd like any changes."
```
3. Use reminders for common failures
As you test your agent, you’ll notice where it consistently fails or executes tasks incorrectly. Adding reminders like “Always send the request ID to the support agent” ensures critical steps aren’t missed.
For my PailResearch agent, I include these reminders:
- Interact with one web page at a time.
- Stop and report errors immediately—don’t retry without instructions.
- Complete tasks fully before starting new ones.
These reminders keep agents on track and reduce execution errors.
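Putting the three parts together, the instruction document ends up in a shape roughly like this (the section names are just labels; use whatever headings suit your agent):
```
## Overview
Who the agent is, what it owns, and where its scope ends.

## Responsibilities
One entry per task, each with a template or example output.

## Key reminders
Short rules that address the failures you've seen in testing.
```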
Once you have your instructions structured with this format in mind, you can use a GPT like Instruction Genius to further refine and clarify them.
Bonus: standardize the input
While the practices above help improve output consistency, sometimes you need an extra boost. One advantage of building a swarm of agents for internal use is control—you are not dealing with wildly different inputs like in a consumer app.
With Slack /commands, I can open a form that collects the necessary information and formats the request for the agent the same way every time. This makes the output for complex tasks more consistent because the agent always gets the same input format.
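For example, a slash-command form for content requests might package every submission like this before it reaches the agent (the fields here are illustrative):
```
Request type: content
Title: [content_title]
Content type: [content_type]
Notes: [any extra context from the form]
```
The agent never has to guess which fields exist or where to find them, which is what keeps the output steady.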
That’s it for my practices. What do you use? Feel free to share your thoughts. For more of this kind of content, subscribe to my newsletter. I write about building integrations and the AI-powered systems that help me build better.