It’s an interesting direction, but feels pretty expensive for what might still be a guess at what matters.
I’m not sure an LLM can really capture project-specific context yet from a single PR diff.
Honestly, a simple data-driven heatmap showing which parts of the code change most often or correlate with past bugs would probably give reviewers more trustworthy signals.
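For the "changes most often" half, plain git history already gets you most of the way. A rough sketch (the function here is just an illustration, not any existing tool):

    // Count how often each file changed recently; the hottest files
    // are the ones reviewers probably want to slow down on.
    import { execSync } from "node:child_process";

    function churnHeatmap(since = "6 months ago"): Map<string, number> {
      // `--pretty=format:` suppresses commit headers, leaving one
      // file path per line for every commit in the window.
      const log = execSync(
        `git log --since="${since}" --name-only --pretty=format:`,
        { encoding: "utf8" },
      );
      const counts = new Map<string, number>();
      for (const file of log.split("\n").filter((l) => l.trim() !== "")) {
        counts.set(file, (counts.get(file) ?? 0) + 1);
      }
      return counts;
    }

    // Top 20 most-churned files as a crude "attention" list.
    const top = [...churnHeatmap().entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, 20);
    console.log(top);

Overlay those counts on the diff and you have a heatmap with zero inference cost.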
This is not that expensive with Gemini: they give out free keys with plenty of requests per day. You can upload your diff plus a bundle of the relevant parts of the codebase and get this behavior for free, at least for a small team with ~10-20 PRs/day. If you could run this with personal keys, anyhow.
Yeah this is honestly pretty expensive to run today.
> I’m not sure an LLM can really capture project-specific context yet from a single PR diff.
We had an even more expensive approach that cloned the repo into a VM and prompted Codex to explore the codebase and run code before returning the heatmap data structure. We decided against it for now due to latency and cost, but I think we'll revisit it to help the LLM get project context.
Distillation should help a bit with cost, but I haven't experimented enough to have a definitive answer. Excited to play around with it though!
> which parts of the code change most often or correlate with past bugs
I can think of a way to do the correlation, but it would require LLMs. Maybe I'm missing a simpler approach? But I agree that conditioning on past bugs would be great.
Gemini is better than the GPT-5 variants for large context. Also, agents tend to be bad at gathering an optimal context set. The best approach is to intelligently select from the codebase to generate a "covering set" of everything touched in the PR, make a bundle, and fire it off at Gemini as a one-shot. Because of caching, you can even fire off multiple queries to Gemini instructing it to evaluate the PR from different perspectives, for cheap.
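A sketch of the bundling step, to be concrete - the import-chasing heuristic and all the names here are made up for illustration:

    // Build a "covering set": every file the PR touches, plus the local
    // files those import, concatenated into one bundle for a single call.
    import { execSync } from "node:child_process";
    import { readFileSync, existsSync } from "node:fs";
    import * as path from "node:path";

    function coveringSetBundle(baseRef = "origin/main"): string {
      const touched = execSync(`git diff --name-only ${baseRef}`, {
        encoding: "utf8",
      }).split("\n").filter(Boolean);

      const seen = new Set<string>();
      const queue = [...touched];
      while (queue.length > 0) {
        const file = queue.pop()!;
        if (seen.has(file) || !existsSync(file)) continue;
        seen.add(file);
        // Chase relative imports so the PR's neighbors land in the bundle.
        const src = readFileSync(file, "utf8");
        for (const m of src.matchAll(/from\s+["'](\.[^"']+)["']/g)) {
          queue.push(path.join(path.dirname(file), m[1] + ".ts"));
        }
      }

      return [...seen]
        .map((f) => `=== ${f} ===\n${readFileSync(f, "utf8")}`)
        .join("\n\n");
    }

With the bundle cached server-side, each extra "perspective" query only pays for the new instructions.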
For the correlation idea, you might take a look at how Sentry does it: they rely mostly on stack traces, error messages, and pattern matching to map issues back to code areas. It's cheap, scalable, and doesn't need an LLM in the loop, which could be a good baseline before layering anything heavier on top (rough sketch after the next paragraph).
As for interactive reviews, one workflow I've found surprisingly useful is letting Claude Code simulate a conversation between two developers pair-programming through the PR. It's not perfect, but in practice the dialogue and clarifying questions it generates often give me more insight than a single-shot LLM summary. You might find it an interesting pattern to experiment with once you revisit the more context-aware approaches.
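To make the Sentry-style baseline concrete, here's a toy version of the correlation step (frame format and names are hypothetical):

    // Map stack-trace frames from past bug reports back to files and
    // count how often each file is implicated.
    const FRAME = /\((.+?):\d+:\d+\)/g; // e.g. "(src/auth/session.ts:42:17)"

    function bugCorrelation(stackTraces: string[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const trace of stackTraces) {
        // Count each file at most once per bug so deep frames don't dominate.
        const files = new Set([...trace.matchAll(FRAME)].map((m) => m[1]));
        for (const file of files) {
          counts.set(file, (counts.get(file) ?? 0) + 1);
        }
      }
      return counts;
    }

Join that against the files in a PR and you get a "this area has bitten us before" signal without any model in the loop.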
A large portion of the lines of code I'm considering when I review a PR are not part of the diff. This has to be a common experience - think of how often you want to comment on a line of code or file that just isn't in the PR. It happens on almost every PR for me. They materialize as loose comments, or comments on a line like "Not this line per se, but what about XYZ?" or "You replaced this in 3 places, but I actually found 2 more it should be applied to."
I mean these tools are fine. But let's be on the same page that they can only address a sub-class of problems.
This looks great. I'm probably gonna keep the threshold set to 0%, so a bit more gradient variety could be nice. Red-yellow-green maybe?
Also, can I use this on AI-generated code before creating a PR somehow? I find myself spending a lot of time reviewing Codex and Claude Code edits in my IDE.
Either would work, I think. How I do it right now is that I let AI edit automatically, but then check the diff in Cursor before I stage my Git changes. May be different for others.
Yeah, heatmapping the diff before creating a PR would need tighter IDE integration. We're working on cmux for this purpose. It's kinda an IDE, and it lives in the same repo: https://github.com/manaflow-ai/cmux.
This is very cool, and I could see it being really useful, especially for those giant PRs. I'd prefer it if, instead of the slider, I could just click the different heatmap colors, and if they indicated what exactly they were for (a label, not a threshold). I get the underlying premise, but at a glance it's more to process unless I were to end up using this constantly.
Currently tooltips are shown when hovering over highlighted words. Need to make them visible on mobile though. Was wondering if you were thinking of another way to show the labels besides hovering?
I was referring to something more akin to a legend like you have in the examples "(examples: hard-coded secret, weird crypto mode, gnarly logic)." where I could click "hard-coded secret" (not the best label but you get the idea) and it would filter on those instead of the slider.
I tried it on a low-complexity Rust PR I worked on a few months back and it did a pretty good job. I'd probably change where the highlights live (for example x.y.z() -> x.w.z() should highlight y/w in a lot of cases).
For the most part, it seems to draw the eye to the general area where you need to look closer. It found a near-invisible typo in a coworker's PR which was kind of interesting as well.
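One cheap way to get the narrowing I mentioned - just a sketch of the general trick, not a claim about how the tool works - is a common prefix/suffix trim between the old and new line:

    // Strip the shared prefix and suffix of two lines; what's left is the
    // span worth highlighting ("x.y.z()" vs "x.w.z()" leaves "y" vs "w").
    function changedSpan(oldLine: string, newLine: string): [number, number, number] {
      let start = 0;
      while (start < oldLine.length && start < newLine.length
          && oldLine[start] === newLine[start]) start++;
      let oldEnd = oldLine.length;
      let newEnd = newLine.length;
      while (oldEnd > start && newEnd > start
          && oldLine[oldEnd - 1] === newLine[newEnd - 1]) { oldEnd--; newEnd--; }
      // Highlight oldLine[start..oldEnd) and newLine[start..newEnd).
      return [start, oldEnd, newEnd];
    }

    console.log(changedSpan("x.y.z()", "x.w.z()")); // [2, 3, 3] -> "y" / "w"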
Getting rate limited by GitHub, gonna add caching here as well. Temporary workaround is to sign in manually and return to the example page: https://0github.com/handler/sign-in
This is something I have found missing in my current workflow when reviewing PRs, particularly in the age of large AI-generated PRs.
I think most reviewers do this to some degree by looking at points of interest. It'd be cool if this could look at your prior reviews and try to learn your style.
Thank you. This is a pretty cool feature that is just scratching the surface of a deep need, so keep at it.
This is really useful. You might want to add a checkbox at a certain threshold, so that reviewers explicitly answer the concerns of the LLM. You could also start collecting stats on how "easy to review" team members' PRs are; e.g. they'd probably get a better score if they already address the concerns in the comments.
How do I opt out of this tool? I do not want anyone reviewing my code or projects to use or engage with it, and doing so is explicitly against the TOS of those projects. It would be nice if this tool screened for a robots.txt or something of the sort so that I could ensure it never touches my projects.
cmux-agent requires access to your GitHub account.
I would have logged an issue for this, but I see you've disabled logging issues on the repo. Seems a bit sus to me.
Just tested these example links in incognito and they seemed to work:
https://0github.com/manaflow-ai/cmux/pull/666
https://0github.com/stack-auth/stack-auth/pull/988
https://0github.com/tinygrad/tinygrad/pull/12995
https://0github.com/simonw/datasette/pull/2548
> you've disabled logging issues on the repo
Sorry, wasn't aware. Turning it on right now. EDIT: https://github.com/manaflow-ai/cmux/issues seems to be fine?
Very fun to see my own PR on Hacker News!
> Also, can I use this on AI-generated code before creating a PR somehow?
What form factor would make the most sense for you? Maybe a CLI command that renders the diff in the terminal or as HTML?
A CLI command with two options, console (color) and HTML, opens all doors, right?
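Roughly what I'm imagining, with the types and rendering entirely made up:

    // Hypothetical output stage of such a CLI: one 0-1 heat score per
    // diff line, rendered either as ANSI color or as HTML.
    type ScoredLine = { text: string; score: number };

    function renderConsole(lines: ScoredLine[]): string {
      return lines.map(({ text, score }) => {
        const red = Math.round(255 * score);
        // 24-bit ANSI foreground, shading from green (cold) to red (hot).
        return `\x1b[38;2;${red};${255 - red};0m${text}\x1b[0m`;
      }).join("\n");
    }

    function renderHtml(lines: ScoredLine[]): string {
      // HTML escaping omitted for brevity.
      const rows = lines.map(({ text, score }) =>
        `<div style="background: rgba(255,0,0,${score.toFixed(2)})">${text}</div>`);
      return `<pre>${rows.join("\n")}</pre>`;
    }

Pipe the first into a pager and write the second to a temp file for the browser, and both workflows are covered.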
After we add the heatmap diff viewer into cmux, I expect that I'll be spending most of my time in between the heatmap diff and a browser preview: https://github.com/manaflow-ai/cmux/raw/main/docs/assets/cmu...
https://0github.com/geldata/gel-rust/pull/530
It seems to flag _some_ deletions as needing attention, but I feel like a lot of them are ignored.
Is this using some sort of measure of distance between the expected token in this position vs the actual token?
EDIT: Oh, I guess it's just an LLM prompt? I would be interested to see an approach where the expected token vs actual token generates a heatmap.
> Is this using some sort of measure of distance between the expected token in this position vs the actual token?
The main implementation is in this file: https://github.com/manaflow-ai/cmux/blob/main/apps/www/lib/s...
EDIT: yeah it's just an LLM prompt haha
Just a simple prompt right now, but I think we could try an approach where we directly see which tokens might be hallucinated. Gonna try to find the paper for this idea. Might be kinda analogous to the "distance between the expected token in this position vs the actual token."
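Back-of-the-envelope version of what I mean, assuming some API exposes the logprob assigned to each actual token (the types here are hypothetical):

    // Higher surprisal (lower logprob of the token that actually appears)
    // maps to a hotter highlight.
    type TokenLogprob = { token: string; logprob: number };

    function surprisalHeat(tokens: TokenLogprob[]): { token: string; heat: number }[] {
      const surprisals = tokens.map((t) => -t.logprob);
      const max = Math.max(...surprisals, 1e-9); // avoid divide-by-zero
      return tokens.map((t, i) => ({ token: t.token, heat: surprisals[i] / max }));
    }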
Is this the correct commit to look at? https://github.com/manaflow-ai/cmux/commit/661ea617d7b1fd392...
This file has most of the logic; the commit you linked to has a bunch of other experiments.
> look at your prior reviews and try to learn your style.
We're really interested in this direction too - maybe setting up a DSPy system to automatically fit reviews to your preferences.
Another perspective where this exact feature would be useful is in security review.
For example - there are many static security analyzers that look for patterns, and they're useful when you break a clearly predefined rule that is well known.
However, there are situations that static tools miss, but a highlight tool like this could help bring a reviewer's eyes to a high-risk "area" - i.e., scrutinize this code more because it deals with user input and there's a chance of SQL injection here, etc.
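For example, a contrived snippet that a grep-style rule would likely miss, since the dangerous string-building happens a hop away from the query call (all names made up):

    // The helper hides the injection risk from any rule that only matches
    // concatenation at the query call site.
    function buildFilter(userInput: string): string {
      return `name LIKE '%${userInput}%'`; // unescaped user input
    }

    async function searchUsers(
      db: { query: (sql: string) => Promise<unknown> },
      userInput: string,
    ) {
      const where = buildFilter(userInput);
      // Nothing here pattern-matches "query(... + input)", but a reviewer
      // steered to this area would still catch it.
      return db.query(`SELECT * FROM users WHERE ${where}`);
    }

A rule-based scanner needs the exact pattern; a "scrutinize this area" highlight just needs to notice user input flowing toward SQL.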
I think that would be very useful as well.
File `apps/client/electron/main/proxy-routing.ts` line 63
Would adding a comment to explain why the downgrade is done have resulted in the issue not being raised?
Also, two suggestions on the UI:
- anchors on lines
- anchors on files and ability to copy a filename easily
> Would adding a comment to explain why the downgrade is done have resulted in the issue not being raised?
Trying it out here with a new PR on same branch: https://0github.com/manaflow-ai/cmux/pull/809
Will check back on it later!
EDIT: seems like my comment on line 62 got highlighted. Maybe we should surface the ability to edit the prompt.