Self-flying agents, agent teams, software factories, and even "AI companies run by agents" are everywhere. Some of this is genuinely useful, and I track those conversations closely.
At the same time, current discussion keeps pointing to the same tension: output goes up fast, but quality can drift when review discipline is weak (trend discussion). The strongest software factory arguments are policy-first and evidence-first (software factory perspective), and even big autonomy visions still require strong governance to work in practice (autonomy perspective). All that to say: there is real leverage in an army of autonomous agents deciding and executing work, but it takes real legwork to put the right systems and policies in place first.
I felt this directly. I tried a lot of these systems myself and ended up with many agents running in parallel, a flood of output, and too much garbage to sift through. It was easy to create activity and harder to maintain project hygiene.
The simpler move that worked for me
If you want the benefit of background agents, a simpler starting point is scheduling.
A lot of knowledge work runs on regular cadence and repeats every cycle. You do not always need a system deciding what to do next from scratch. For many delivery workflows, you can run the same job every day or every week with a clear QA gate.
That is where I started getting real leverage.
What this is at the root
At the root, this is a cron job pattern.
With terminal agents, any recurring job should become a skill that explains the job clearly. Then schedule a prompt on your machine at the cadence you want, route the run through that skill, and review outputs on a fixed rhythm.
If you use OpenCode, opencode-scheduler is what I use for this.
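As a concrete sketch, the pattern can be as simple as a crontab entry that invokes the agent CLI with a prompt routed through a skill. The command, flags, and paths below are assumptions for illustration, not a verified opencode-scheduler configuration; check your tool's docs for the real invocation:

```shell
# Hypothetical crontab entry: run a daily client comment sweep at 7am.
# The `opencode run` invocation and skill name are illustrative assumptions.
# Note: % must be escaped as \% inside crontab entries.
0 7 * * * cd ~/projects/client-docs && opencode run "Use the comment-sweep skill to collect new client comments and write a summary to reports/$(date +\%F).md"
```

The parts that matter are the fixed cadence, a single well-defined job, and an output location you review on a rhythm.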
Example recurring jobs
- Daily client comment sweep across implementation docs
- Post-meeting notes and requirement extraction
- Weekly project scans with stakeholder update drafts
- Daily QA pass on generated artifacts before client share
These jobs do not need constant goal-level planning. They need consistent execution, clear ownership, and review gates.
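For instance, the daily QA pass could be captured as a short skill file the scheduled prompt points to. The filename and headings here are a hypothetical layout, not a prescribed OpenCode format:

```markdown
<!-- skills/daily-qa-pass.md (hypothetical layout) -->
# Daily QA pass on generated artifacts

## Goal
Review every artifact generated in the last 24 hours before anything
is shared with a client.

## Steps
1. List new or changed files under artifacts/ since yesterday.
2. Check each against the project style guide and the client's
   requirements doc.
3. Flag anything unclear, off-spec, or missing context.

## Output
Write a pass/fail summary with flagged items to qa-reports/,
one file per day.
```

The skill carries the ownership and review criteria; the scheduler only supplies the cadence.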
Result
I get a lot more leverage this way. Many more tasks start at Lunch Pail Labs without my explicit command. Work hums in the background, and I review outputs at regular cadences so I can focus on higher-value work.
Conclusion
This is the approach I recommend if you are building while running delivery.
Schedule repeatable work first. Let that become your base layer. Then add more autonomous behavior where dynamic judgment is actually required.
I am building PailFlow in the open and sharing how I use AI systems to scale a one-woman business.
If you run client delivery and want help installing workflow packs on your current stack, book a workflow fit call.