Show HN: Building a Deep Research Agent Using MCP-Agent

(thealliance.ai)

77 points | by saqadri 2 days ago

4 comments

  • diggan 13 hours ago
    I gotta say, having blurry white blobs of something floating in the background behind white/grey text maybe wasn't the best design choice out there.

    Nonetheless, I tried to find the actual APIs/services/software used for the "search" part, as I've found that to be the hardest part to actually get right (at least for as-local-as-possible usage) in my own "Deep Research Agent".

    I've experimented with Brave's search API, which worked OK but seems pricey for agent usage. I'm currently experimenting with my own (local) YaCy instance, which actually gives me higher-quality artifacts at the end, since there are no rate limits and the model can do hundreds of search calls without me worrying about the cost. It isn't very quick at picking up some stuff like news, but otherwise it works OK too.
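
    For reference, querying a local YaCy instance is just an HTTP GET against its yacysearch.json servlet. A rough sketch (assuming the default port 8090; the field names follow YaCy's OpenSearch-style JSON and may vary by version):

      import requests

      def yacy_search(query: str, count: int = 10) -> list[dict]:
          # YaCy serves an OpenSearch-style JSON servlet on its web port
          resp = requests.get(
              "http://localhost:8090/yacysearch.json",
              params={"query": query, "maximumRecords": count},
              timeout=30,
          )
          resp.raise_for_status()
          # Results sit under channels[0].items in the OpenSearch layout
          items = resp.json().get("channels", [{}])[0].get("items", [])
          return [{"title": i.get("title"), "url": i.get("link")} for i in items]

      print(yacy_search("deep research agents"))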

    What is the author doing here for the actual searching? Anyone else have any other ideas/approaches to this?

    • saqadri 12 hours ago
      Haha, I didn't have control over the blog website, just the content. The readme and code are the ultimate source of truth (and easier to read): https://github.com/lastmile-ai/mcp-agent/blob/main/src/mcp_a...

      So the core idea is that the Deep Orchestrator is pretty unopinionated about what to use for searching, as long as it is exposed over MCP. I tried it with a basic fetch server, one of the reference MCP servers (with a single tool called `fetch`), and also with Brave.

      I think the folks at Jina wrote some really good stuff on the actual search part: https://jina.ai/news/a-practical-guide-to-implementing-deeps... -- and how to do page/url ranking over the course of the flow. My recommendation would be to do all that in an MCP server itself. That keeps the "deep orchestrator" architecture fairly clean, and you can plug in increasingly sophisticated search techniques over time.
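
      To make that concrete, the server shape I have in mind is roughly this (a minimal sketch using the official MCP Python SDK's FastMCP helper; the backend call and the rerank step are placeholders for whatever you plug in):

        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("search")

        @mcp.tool()
        def search_web(query: str, max_results: int = 10) -> list[dict]:
            """Search the web and return ranked {title, url, snippet} dicts."""
            # Placeholder: call Brave, SearXNG, YaCy, etc. here
            raw_results: list[dict] = []
            # Placeholder: rerank inside the server (e.g. Jina-style URL
            # ranking) so the orchestrator only ever sees one clean tool
            ranked = sorted(raw_results, key=lambda r: r.get("score", 0), reverse=True)
            return ranked[:max_results]

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default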

    • Zetaphor 12 hours ago
      Self-host an instance of SearXNG[1], either locally or on a remote server, with a simple Docker container and use its JSON API [2] (minimal query sketch after the links). You have to enable the JSON API in the config manually [3].

      [1] https://docs.searxng.org/admin/installation-docker.html#inst...

      [2] https://docs.searxng.org/dev/search_api.html

      [3] https://github.com/searxng/searxng/discussions/3542
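
      Roughly like this (untested sketch; I'm assuming the instance listens on localhost:8888, adjust to wherever you mapped the container port):

        import requests

        def searxng_search(query: str) -> list[dict]:
            # format=json only works after "json" is added to
            # search.formats in settings.yml (see [3])
            resp = requests.get(
                "http://localhost:8888/search",
                params={"q": query, "format": "json"},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()["results"]  # dicts with url/title/content

        for r in searxng_search("deep research agents")[:5]:
            print(r["title"], r["url"])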

      • saqadri 11 hours ago
        Thanks for sharing, this looks great! Do they have an MCP server? It should be easy to wrap their JSON API, but I couldn't see MCP support in the repo/docs.
        • Zetaphor 10 hours ago
          Not that I'm aware of, but it's an extremely simple API. It should be really easy to wrap in an MCP server.
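
          Something like this, roughly (untested sketch combining the official MCP Python SDK's FastMCP helper with the JSON API; same localhost:8888 assumption as above):

            import requests
            from mcp.server.fastmcp import FastMCP

            mcp = FastMCP("searxng")

            @mcp.tool()
            def web_search(query: str, max_results: int = 10) -> list[dict]:
                """Search the web via a local SearXNG instance."""
                resp = requests.get(
                    "http://localhost:8888/search",
                    params={"q": query, "format": "json"},
                    timeout=30,
                )
                resp.raise_for_status()
                results = resp.json()["results"][:max_results]
                return [
                    {"title": r["title"], "url": r["url"], "snippet": r.get("content", "")}
                    for r in results
                ]

            if __name__ == "__main__":
                mcp.run()
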
  • mbil 5 hours ago
    I'm using mcp-agent and have tried the orchestrator workflow pattern[0]. For deep research I'm having mixed results. As far as I can tell, it doesn't use prompt caching[1] with Anthropic models (a sketch of what that looks like against the raw API is after the links), nor the GPT-5 Responses API[2], which is preferable to the Chat Completions API. The many MCP tools from a handful of servers eat up a lot of context. It doesn't report progress, so it'll just spin for minutes at a time without any meaningful indication. Mostly it has been high cost and high latency without great grounding in source facts. I like the interface overall, but some of the patterns and examples were convoluted. I'm aware that mcp-agent is being actively worked on, and I look forward to improvements.

    [0]: https://docs.mcp-agent.com/workflows/orchestrator

    [1]: https://docs.anthropic.com/en/docs/build-with-claude/prompt-...

    [2]: https://platform.openai.com/docs/guides/migrate-to-responses
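
    For reference, opting into caching when you call Anthropic directly is just a cache_control marker on the large stable prefix (system prompt plus tool definitions). Minimal sketch; the model id and prefix contents are illustrative:

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

      # Hypothetical stand-in for the large stable prefix an orchestrator
      # resends every turn (system instructions + MCP tool schemas)
      stable_prefix = "You are a research orchestrator... (tool schemas here)"

      response = client.messages.create(
          model="claude-3-5-sonnet-20241022",
          max_tokens=1024,
          system=[
              {
                  "type": "text",
                  "text": stable_prefix,
                  "cache_control": {"type": "ephemeral"},  # cache this prefix
              }
          ],
          messages=[{"role": "user", "content": "Run the next step of the plan."}],
      )
      # usage reports cache_creation_input_tokens / cache_read_input_tokens
      print(response.usage)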

  • ilovefood 13 hours ago
    Great write-up! Gives me a few ideas for a governance bot that I'm working on. Thanks for sharing :)
  • asail77 2 days ago
    A good model for the planner seems pretty important. What models are best?
    • saqadri 2 days ago
      OP here -- I think the general principle I would recommend is using a big reasoning model for the planning phase. I think Claude Code and other agents do the same. The reason this is important is that the quality of the plan really affects the final result, and error rates will compound if the plan isn't good.
    • haniehz 2 days ago
      Based on the article, it seems like a good reasoning model like GPT-5 or Opus 4.1 would be a good choice for the planner. I wonder if the GPT-OSS reasoning models would do well.
      • diggan 13 hours ago
        I've personally been using GPT-OSS-120B locally with reasoning_effort set to `high`, and it blows pretty much every other local model out of the water, though it takes a long time before it eventually produces the actual reply. But for fire-and-forget jobs like "Create a well-researched report on X from perspective Y" it works really well.
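
        If anyone wants to replicate, the setup is roughly this, via whatever local server exposes an OpenAI-compatible endpoint (sketch; the base_url, and whether the server honors reasoning_effort as a request field or needs a "Reasoning: high" system prompt, depend on the serving stack):

          from openai import OpenAI

          # Assumed local endpoint; llama.cpp, vLLM, etc. expose /v1
          client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")

          resp = client.chat.completions.create(
              model="gpt-oss-120b",
              reasoning_effort="high",  # the serving stack has to honor this
              messages=[{
                  "role": "user",
                  "content": "Create a well-researched report on X from perspective Y",
              }],
          )
          print(resp.choices[0].message.content)
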
        • cyberninja15 11 hours ago
          What machine are you running GPT-OSS-120B on? I'm currently only able to get GPT-OSS-20B working on my MacBook using Ollama.
      • koakuma-chan 14 hours ago
        Gemini 2.5 Pro is also a great reasoning model; I still prefer it over GPT-5.
        • luckydata 12 hours ago
          Gemini is great; it's just incredibly clumsy at tool use, and that's why it fails so often in practice. I'm looking forward to the next version, which will for sure address it; it's a big issue internally too (I'm a recent xoogler).
          • reachableceo 10 hours ago
            Yes, it really is horrible at using tools. Codex is way better (even better than Claude Code). Gemini is great at doing audits and content (though I've switched to Codex for everything, all in one).
          • PantaloonFlames 11 hours ago
            Can you elaborate on “clumsy at tool use”?
            • luckydata 6 hours ago
              Have you ever witnessed how Gemini sometimes makes multiple attempts at writing a file, only to give up and start chanting "I'm worthless..."?

              That's tool use failure :)

          • koakuma-chan 11 hours ago
            I'm excited for the next version!