How to turn a target account list into outreach-ready leads with a browser agent

Recently, I shared how I built a target account list with a browser agent. This follow-up shows how I turned that one-time list into a repeatable enrichment workflow I can trust for outreach.

Recently, I shared a how-to on building a target account list with a browser agent: How to Build a Target Account List With a Browser Agent.
That worked well for sourcing and ranking accounts, but I still needed a reliable way to enrich each account with buyer and user signals I could actually use in outreach.
I used OpenCode to coordinate the agents, a browser agent for research and validation, and different AI models for the browser automation inside OpenCode. I ran execution with the Ralph Wiggum technique: PRD first, user stories next, then small-batch execution with QA between batches.
This post breaks down the exact enrichment workflow.

Stack and workflow model

I used OpenCode to run both technical and research tasks in one loop. The browser agent handled link gathering and verification across agency websites and public profiles. I used a PRD with explicit user stories to define required fields, required inclusions, and batch behavior before execution started.
The most important design decision was separating collection, validation, and output writing. That made each row auditable and reduced quality drift across batches.
I also moved from hand-authored prompt guidance to sample-derived style assets. In practice, that meant I manually sampled records, defined a fixed output schema, and reused that structure for every batch. This produced more consistent rows than relying on prompt wording alone.
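To make that separation concrete, here is a minimal Python sketch of the pattern. The function names, the callable hand-offs, and the HEAD-request reachability check are my own illustration, not the actual OpenCode or browser-agent code.

```python
import urllib.request
from typing import Callable

def link_is_reachable(url: str, timeout: float = 10.0) -> bool:
    """Validation step: a cheap reachability check before a link can enter a row."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

def enrich_agency(
    agency: dict,
    collect: Callable[[dict], dict],  # collection step: the browser agent gathers candidate URLs
    write: Callable[[dict], None],    # output step: append the audited row to the snapshot
) -> None:
    """Keep collection, validation, and writing separate so every row stays auditable."""
    candidates = collect(agency)  # e.g. {"hiring_signal_links": [...], "buyer_links": [...]}
    validated = {
        field: [url for url in urls if link_is_reachable(url)]
        for field, urls in candidates.items()
    }
    write({"agency_name": agency["agency_name"], **validated})
```

The point is only that validation runs as its own pass between collecting candidates and writing the row, which is what makes each row auditable.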

The 4-step process I used to enrich the list

1) Sample the list manually

I started with manual spot checks to establish what a "good" enriched row should look like. This gave me concrete examples for acceptable hiring links, buyer profiles, and user profiles before I scaled execution.

2) Create a fixed enrichment table

I defined one schema for every agency row:
  • agency_name
  • website
  • hiring_signal_links
  • buyer_links
  • user_links
  • snapshot_date
  • notes
I also locked missing-value placeholders:
  • no_signal_found
  • buyer_not_found
  • user_not_found
This prevented inconsistent formatting and made QA much faster.
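For reference, this is one way to pin the schema and placeholders down in code. The field names and placeholder strings come straight from the lists above; the blank_row helper, and the mapping of each placeholder to its field based on its name, are my own illustrative additions.

```python
# Sketch of the fixed schema and locked placeholders; the helper is illustrative.
FIELDNAMES = [
    "agency_name",
    "website",
    "hiring_signal_links",
    "buyer_links",
    "user_links",
    "snapshot_date",
    "notes",
]

# Locked missing-value placeholders so QA can filter gaps instead of guessing at blanks.
NO_SIGNAL = "no_signal_found"
NO_BUYER = "buyer_not_found"
NO_USER = "user_not_found"

def blank_row(agency_name: str, website: str, snapshot_date: str) -> dict:
    """Start every agency from the same shape; enrichment overwrites the placeholders."""
    return {
        "agency_name": agency_name,
        "website": website,
        "hiring_signal_links": NO_SIGNAL,
        "buyer_links": NO_BUYER,
        "user_links": NO_USER,
        "snapshot_date": snapshot_date,
        "notes": "",
    }
```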

3) Run the loop with Ralph Wiggum to collect signals

I converted the PRD to ralph/prd.json, kept each agency as its own execution unit, and ran in small batches. The loop gathered links, validated link reachability, checked role relevance, and then wrote the row.
I also enforced specific required inclusions where needed. For example, Sidebench had a required hiring signal URL, and Appstem had required user profile URLs. Treating these as explicit checks made the run verifiable instead of subjective.
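Here is a rough sketch of what that kind of check can look like. The shape I assume for ralph/prd.json (a "required" map on each execution unit) and the function names are hypothetical; the real Ralph Wiggum PRD format may differ.

```python
import json

# Hypothetical shape for an execution unit in ralph/prd.json; the real format may differ:
# {"agency_name": "Sidebench", "required": {"hiring_signal_links": ["https://careers.example"]}}

def load_units(path: str = "ralph/prd.json") -> list[dict]:
    """Load execution units from the PRD file (assumes a top-level "units" list)."""
    with open(path) as f:
        return json.load(f).get("units", [])

def check_required_inclusions(row: dict, unit: dict) -> list[str]:
    """Flag any required URL that did not make it into the finished row."""
    failures = []
    for field, required_urls in unit.get("required", {}).items():
        present = row.get(field, "")
        for url in required_urls:
            if url not in present:
                failures.append(f"{unit['agency_name']}: missing required {field} -> {url}")
    return failures

def gate_batch(units: list[dict], rows_by_agency: dict) -> list[str]:
    """QA gate between batches: collect every failure before the next batch starts."""
    failures = []
    for unit in units:
        row = rows_by_agency.get(unit["agency_name"], {})
        failures.extend(check_required_inclusions(row, unit))
    return failures
```

Treating required inclusions as data in the PRD, rather than prompt wording, is what turns "did we cover this account?" into a concrete pass/fail result.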

4) Export a clean snapshot and QA summary

After enrichment, I produced a date-stamped CSV snapshot and a markdown QA summary. The summary made coverage easy to review by showing where hiring, buyer, or user fields were present or missing.
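As an illustration of that export step, the sketch below writes a date-stamped CSV with Python's csv module and builds a small markdown coverage table from the placeholders defined earlier. The file naming and table layout are assumptions, not the exact artifacts from this run.

```python
import csv
from datetime import date

def export_snapshot(rows: list[dict], fieldnames: list[str]) -> str:
    """Write a date-stamped CSV snapshot and return the path it was written to."""
    path = f"enriched_accounts_{date.today().isoformat()}.csv"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return path

def qa_summary(rows: list[dict]) -> str:
    """Build a markdown coverage table showing which signal fields are present per agency."""
    placeholders = {
        "hiring_signal_links": "no_signal_found",
        "buyer_links": "buyer_not_found",
        "user_links": "user_not_found",
    }
    lines = ["| agency | hiring | buyer | user |", "| --- | --- | --- | --- |"]
    for row in rows:
        cells = [
            "missing" if row[field] == placeholder else "present"
            for field, placeholder in placeholders.items()
        ]
        lines.append(f"| {row['agency_name']} | " + " | ".join(cells) + " |")
    return "\n".join(lines)
```

The CSV is the machine-usable artifact; the markdown table is the one a human skims during review.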

What QA actually looked like in the PRD

The PRD enforced QA at three levels.
First, batch-level QA: each batch had status tracking (pending, in_progress, done) so I could catch drift early.
Second, row-level QA: each agency row required validated URLs and role-fit checks before finalization, with notes for ambiguity when evidence was unclear.
Third, final-output QA: the export had to contain one row per agency, preserve semicolon delimiters for multi-link fields, and include clear placeholders where evidence was missing.
This structure is what made the output reliable. The browser agent improved speed, but the QA rules made the output trustworthy.
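For illustration, those final-output rules translate into simple mechanical checks. The sketch below assumes the schema and placeholder strings from earlier; the function name and the wording of the failure messages are my own.

```python
def final_output_checks(rows: list[dict], expected_agencies: set[str]) -> list[str]:
    """Final-output QA: one row per agency, semicolon-delimited link fields, explicit placeholders."""
    issues = []
    seen = [row["agency_name"] for row in rows]
    if len(seen) != len(set(seen)):
        issues.append("duplicate agency rows in the export")
    missing = expected_agencies - set(seen)
    if missing:
        issues.append(f"agencies missing from the export: {sorted(missing)}")
    for row in rows:
        for field in ("hiring_signal_links", "buyer_links", "user_links"):
            value = row.get(field, "")
            if not value:
                issues.append(f"{row['agency_name']}: {field} is blank instead of a placeholder")
            elif "," in value and ";" not in value:
                issues.append(f"{row['agency_name']}: {field} should use ';' between multiple links")
    return issues
```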

How to apply this pattern

If you want to reuse this workflow for your own pipeline, start with structure instead of prompts.
  1. Define required fields and missing-value conventions before running agents.
  2. Manually sample a few records to set a concrete quality baseline.
  3. Separate collection, validation, and write steps.
  4. Run in small batches so quality issues surface early.
  5. Add required-inclusion checks for high-priority accounts.
  6. Export both machine-usable output (CSV) and human-readable QA output (markdown).
The transferable lesson is simple: reliability comes from workflow design, not longer prompts.

Conclusion

The biggest improvement was not a new model or a clever instruction. It was turning enrichment into a repeatable system with fixed schema, explicit checks, and batch QA gates.
Using OpenCode, browser automation with different AI models, and the Ralph Wiggum technique, I moved from a one-time target account list to a process I can rerun with confidence.
I'm building PailFlow in the open and sharing how I use AI systems to scale a one-woman business.
If you work in client services and want to see how AI can increase your project delivery capacity, book a PailFlow Delivery Audit.

Written by

Lola

Lola is the founder of Lunch Pail Labs. She enjoys discussing product, app marketplaces, and running a business. Feel free to connect with her on Twitter or LinkedIn.