Nice - I do something similar in a semi-manual way.
I do find Codex very good at reviewing work that Claude has marked as completed, especially when I get Claude to write up its work in a why, where & how doc.
It’s very rare that Claude has fully completed the task successfully and Codex doesn’t find issues.
I agree! Right now it is leveraging the Codex App Server, which is open-source and very well implemented, but using Claude Code Channels is probably a bit hacky.
The good thing is that it establishes a direct connection, so it's already much better than having one agent spawn the other and wait for its output, or having both read/write to a shared .md file -- but it would be cool to make it work for all agent harnesses. Open to ideas! The repo is open-source.
I prefer Claude for generation/creativity and Codex for bull-headed, accurate complaining and auditing. Very rarely, Claude just doesn't "get it", and then it makes sense to have Codex edit directly. But generally I think it's happiest, and best used, complaining.
This is interesting for code, but I'm curious about agent-to-agent coordination for ops tasks, like one agent detecting a database anomaly and another auto-remediating it.
I think a lot of people/companies are integrating workflows like that; it's just separate from the point of agent pair coding.
The interesting thing here is agents working together to be better at a single task, not agents integrated into a workflow. There's a lot of opportunity in "if this, then that" scenarios, but that has nothing to do with two agents communicating about one single element of a problem; it's just agent detect -> agent solve (-> agent review? agent deploy? etc.).
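A minimal sketch of that detect -> solve -> review chain in Python, assuming the Codex CLI's non-interactive codex exec mode (the alert string and all prompts here are purely illustrative):

    import subprocess

    def agent(prompt: str) -> str:
        # One-shot, non-interactive agent call (assumes the Codex CLI is installed).
        result = subprocess.run(["codex", "exec", prompt],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def handle_alert(alert: str) -> None:
        # Each stage is a separate hand-off ("if this, then that"),
        # not two agents conversing about one element of the problem.
        diagnosis = agent(f"Diagnose this alert and propose a remediation:\n{alert}")
        report = agent(f"Apply this remediation and report exactly what you changed:\n{diagnosis}")
        verdict = agent(f"Review whether the remediation was safe and complete:\n{report}")
        print(verdict)

    if __name__ == "__main__":
        handle_alert("db-01: replication lag > 300s for 10m")

Each hand-off here is a fresh one-shot invocation, which is exactly what makes it a workflow rather than a conversation about a single element.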
Been using Claude for pair programming since we're just two founders building MediTailor. It's wild - I can now prototype features that would have required hiring a full-time dev six months ago. The bottleneck shifted from "can we build this" to "should we build this", which is a much better problem to have.
Multi-turn review, with code written by Claude Code and reviewed by Codex, works pretty well. It's been one of the only ways to deliver larger-scoped features without constant bugs. I've seen them do 10-15 rounds of fix and review until complete (rough sketch of the loop below).
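A minimal sketch of that loop in Python, assuming the non-interactive modes of both CLIs (claude -p and codex exec; exact flags can differ across versions, and the prompts are illustrative):

    import subprocess

    TASK = "Implement the feature described in SPEC.md"
    MAX_ROUNDS = 15  # matches the 10-15 rounds observed above

    def run(cmd: list[str]) -> str:
        # Run an agent CLI non-interactively and capture its final answer.
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def review_loop() -> None:
        run(["claude", "-p", TASK])  # author: Claude writes the first pass
        for round_no in range(1, MAX_ROUNDS + 1):
            # Reviewer: Codex audits the working tree and lists concrete issues.
            report = run(["codex", "exec",
                          f"Review the changes made for this task: {TASK}. "
                          "Reply with exactly LGTM if correct; otherwise list the issues."])
            if "LGTM" in report:
                print(f"Converged after {round_no} round(s)")
                return
            # Author fixes only what the reviewer flagged, then we review again.
            run(["claude", "-p", f"Address these review findings:\n{report}"])
        print("Hit the round limit without convergence")

    if __name__ == "__main__":
        review_loop()

Swapping the reviewer command for a second claude -p call gives the same-model self-review variant mentioned further down the thread.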
Also implemented this as a GitHub Action; it works well for the Sentry -> GitHub -> auto-triage -> fix-PR flow.
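The action itself is mostly glue; stripped down, the triage step looks roughly like this (a sketch, not the real workflow: the Sentry payload fields are assumptions, and codex needs permission to edit files under your sandbox/approval settings):

    import json, subprocess, sys

    def triage_and_fix(issue: dict) -> None:
        # `issue` stands in for a Sentry webhook payload; the field names are assumptions.
        branch = f"autofix/{issue['id']}"
        subprocess.run(["git", "checkout", "-b", branch], check=True)
        # Ask the agent to locate and patch the crash described by the stack trace.
        subprocess.run(["codex", "exec",
                        f"Sentry issue {issue['id']}: {issue['title']}\n"
                        f"Stack trace:\n{issue['stacktrace']}\n"
                        "Find the root cause and apply a minimal fix."], check=True)
        subprocess.run(["git", "commit", "-am", f"fix: {issue['title']}"], check=True)
        subprocess.run(["git", "push", "-u", "origin", branch], check=True)
        # Open the PR with the GitHub CLI for a human (or a reviewer agent) to vet.
        subprocess.run(["gh", "pr", "create", "--fill"], check=True)

    if __name__ == "__main__":
        triage_and_fix(json.load(sys.stdin))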
Yes, I’ve had a lot of success with this too. I’ve found that with prompt tightening I seldom do more than 5 rounds now, but my setup also does an explicit plan step with plan review.
Currently I’m authoring with Codex and reviewing with Opus.
Even with the same model (--self-review), that makes a huge difference, and immediately highlights how bad the first iterations of an LLM output can be.