Save hours on research and uncover opportunities with this AI agent

Research is at the heart of every integration, but sifting through APIs, forums, and competitor data can be overwhelming. I built an AI assistant that combines Slack, Tavily, Agency Swarm, and Notion to deliver clear insights and streamline the process. Here’s how it works, so you can create your own.

Research is at the heart of every integration. You read platform documentation, sift through forums, and analyze competitor integrations. It’s a lot to process, and it’s easy to get lost. I needed a way to stay focused, so I built an AI-powered assistant that combines Slack, Tavily, Agency Swarm, and Notion to deliver clear insights and streamline my workflow. The real star of the process is Tavily’s GPT Researcher.
 
Here’s how I use it so you can swipe it and build your own.

How it works

1. Ask a question

The workflow begins in Slack. I type a query like:
@pailswarm generate a report about the Clio app marketplace, what it is, how it works, and how someone can get started with it.
Once the query is sent, the agent gets to work collecting information.
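To give a sense of the entry point, here’s a minimal sketch of a Slack listener, assuming slack_bolt running in Socket Mode. The handler and the hand-off comment are placeholders, not my exact production wiring:

import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    # Strip the @pailswarm mention and treat the rest as the research query
    query = event["text"].split(">", 1)[-1].strip()
    say(f"On it! Researching: {query}")
    # Hand the query to the research agent here (see the wiring sketch in step 2)

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()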
 

2. AI agent researches and compiles sources

The research agent collects data from the web, cross-references sources, and organizes its findings. It includes links to the sources it used, which makes the insights more reliable. This step is powered by Tavily’s GPT Researcher.
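Here’s a minimal sketch of how the agent can be wired up in Agency Swarm, assuming the GPTResearcherTool from the bonus section below lives in a module named gpt_researcher_tool (that module name is mine, not part of the library), and that constructor arguments may vary between agency-swarm versions:

from agency_swarm import Agency, Agent

from gpt_researcher_tool import GPTResearcherTool  # the tool shown in the bonus section

researcher = Agent(
    name="Researcher",
    description="Conducts web research and compiles sourced reports.",
    instructions="Use GPTResearcherTool to research the user's query, "
                 "then return a structured report with references.",
    tools=[GPTResearcherTool],
)

agency = Agency([researcher])

if __name__ == "__main__":
    # Sends a message through the agency and returns the final response
    print(agency.get_completion("Generate a report about the Clio app marketplace."))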
 

3. Generates reports with key findings in Slack

The agent compiles its research into a structured report and delivers a quick summary in Slack. The full report is saved in Notion for easy access; a sketch of this delivery step follows the lists below.
The Slack overview includes:
  • Key Findings: Actionable bullet points.
The full report in Notion provides:
  • Introduction.
  • Main points.
  • References with links to validate the report.
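Here’s a minimal sketch of that delivery step, assuming slack_sdk and notion-client, plus SLACK_CHANNEL_ID and NOTION_DATABASE_ID environment variables (those variable names are mine); the summary/report split is illustrative rather than my exact code:

import os

from notion_client import Client as NotionClient
from slack_sdk import WebClient

def deliver_report(title: str, summary: str, full_report: str) -> None:
    # Post the short summary back to Slack where the question was asked
    slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    slack.chat_postMessage(
        channel=os.environ["SLACK_CHANNEL_ID"],
        text=f"*{title}*\n{summary}",
    )

    # Save the full report as a new page in a Notion database
    notion = NotionClient(auth=os.environ["NOTION_API_KEY"])
    notion.pages.create(
        parent={"database_id": os.environ["NOTION_DATABASE_ID"]},
        properties={"Name": {"title": [{"text": {"content": title}}]}},
        children=[
            {
                "object": "block",
                "type": "paragraph",
                # Notion caps each rich_text item at 2,000 characters;
                # chunk longer reports across multiple blocks in practice
                "paragraph": {"rich_text": [{"text": {"content": full_report[:2000]}}]},
            }
        ],
    )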
 
For a recent project, this workflow gave me a primer on an ecosystem I’m building for. It helped me quickly get oriented, identify where to focus my efforts, and refine the project scope faster.
 

Tips for Building Your Own

Here are a few tips to keep in mind if you’re building your own AI-powered researcher:
  • Monitor your usage: AI tools can get expensive quickly. This has been my most costly process so far since I haven’t customized the researcher for efficiency. Keep an eye on your API usage to control costs.
  • Prepare for long-running processes: Research tasks can take a few minutes to complete. Make sure your hosting solution can handle long processing times; I use Fly.io, and it works great. A timeout sketch follows this list.
  • Store your research: Don’t leave your research reports in chat. Save them in a database or another searchable format. Having a central place to store reports makes it much easier to revisit and leverage past research.
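For the long-running-processes tip, here’s a small sketch of one way to bound the research call with plain asyncio, so a stuck run fails loudly instead of hanging the bot. The timeout value is a guess; tune it for your host:

import asyncio

RESEARCH_TIMEOUT_SECONDS = 600  # ten minutes

async def run_with_timeout(tool):
    # Wrap the tool's async run() so it can't block the worker forever
    try:
        return await asyncio.wait_for(tool.run(), timeout=RESEARCH_TIMEOUT_SECONDS)
    except asyncio.TimeoutError:
        return "Research timed out. Try a narrower query."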
 

Bonus: GPT Researcher Code

This is the code for the GPT Researcher tool I use with Agency Swarm. It wraps Tavily’s GPT Researcher as a tool the research agent can call directly, automating the research step end to end. You can customize it for your workflow.
 
from agency_swarm.tools import BaseTool
from pydantic import Field
from gpt_researcher import GPTResearcher
from typing import Optional
import os
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class GPTResearcherTool(BaseTool):
    """
    A tool that uses GPT Researcher to conduct comprehensive research on given topics.
    """

    query: str = Field(
        ...,
        description="The research topic or keyword to investigate"
    )

    report_type: str = Field(
        default="research_report",
        description="The type of report to generate"
    )

    config_path: Optional[str] = Field(
        default=None,
        description="Optional path to configuration file"
    )

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Verify required API keys are present before doing any work
        if not os.getenv("TAVILY_API_KEY"):
            raise ValueError("TAVILY_API_KEY not found in environment variables")
        if not os.getenv("OPENAI_API_KEY"):
            raise ValueError("OPENAI_API_KEY not found in environment variables")

    async def run(self):
        """
        Executes research using GPT Researcher and returns the report
        """
        try:
            logger.info(f"Starting research for query: {self.query}")
            
            # Initialize the researcher
            researcher = GPTResearcher(
                query=self.query, 
                report_type=self.report_type,
                config_path=self.config_path
            )
            
            logger.info("Conducting research...")
            # Conduct the research
            research_result = await researcher.conduct_research()
            
            logger.info("Generating report...")
            # Generate and return the report
            report = await researcher.write_report()
            
            if not report:
                raise ValueError("Empty report received from GPT Researcher")
                
            logger.info("Research completed successfully")
            return report
            
        except Exception as e:
            logger.error(f"Research failed with error: {str(e)}")
            raise Exception(f"Failed to generate research report: {str(e)}")

# Example usage
if __name__ == "__main__":
    import asyncio
    import dotenv
    
    # Load environment variables
    dotenv.load_dotenv()
    
    async def test_researcher():
        try:
            researcher = GPTResearcherTool(
                query="What are the latest developments in AI?",
                report_type="research_report"
            )
            result = await researcher.run()
            print("Research completed successfully!")
            print(result)
        except Exception as e:
            print(f"Error: {str(e)}")
    
    # Run the test
    asyncio.run(test_researcher()) 
 
To get started, you can install the required dependencies and follow the instructions in Tavily’s GPT Researcher documentation.
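If you’re starting from scratch, an install along these lines should cover the pieces used here (package names current as of writing; double-check each project’s docs):

pip install gpt-researcher agency-swarm python-dotenv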
 

What’s Next

This system speeds up my research and gives me a clear reference for ecosystems, APIs, and more. It took a few hours to set up, and I plan to expand it for deeper analysis and better insights from forums and Reddit.
 
And that’s it! What do you think? I’d love to hear your thoughts—feel free to share them. For more insights like this, subscribe to my newsletter.

We build apps for app marketplaces

Partner with us →

Written by

Lola

Lola is the founder of Lunch Pail Labs. She enjoys discussing product, app marketplaces, and running a business. Feel free to connect with her on Twitter or LinkedIn.