Many SWE-bench-Passing PRs would not be merged

(metr.org)

278 points | by mustaphah 1 day ago

32 comments

  • cornstalks 1 day ago
    Anecdote time! I had Codex GPT 5.4 xhigh generate a Rust proc macro. It's pretty straightforward: use sqlparser to parse a SQL statement and extract the column names of any row-producing queries.

    It generated an implementation that worked well, but I hated the ~480 lines of code. The structure and flow were just... weird. It was hard to follow and I was seriously bugged by it.

    So I asked it to reimplement it with some simplifications I gave it. It dutifully executed, producing a result >600 lines long. The flow was simpler and easier to follow, but still seemed excessive for the task at hand.

    So I rolled up my sleeves and started deleting code and making changes manually. A little bit later, I had it down to <230 lines with a flow that was extremely easy to read and understand.

    So yeah, I can totally see many SWE-bench-passing PRs being functionally correct but still terrible code that I would not accept.

    • SerCe 1 day ago
      If you've got some time, I highly recommend going through the exercise of trying to change the prompt in a way that would produce code similar to what you've achieved manually. Doing a similar exercise really helps to improve agent prompting skills, as it shows how changing parts of the prompt influences the result.
      • foltik 1 day ago
        I haven’t had any luck prompting LLMs to “have taste.” They seem to over-fixate on instructions (e.g. golfing when asked for concise code) or require specifying so many details and qualifications that the results no longer generalize well to other problems.

        Do you have any examples or resources that worked well for you?

        • XenophileJKO 1 hour ago
          I really should spend some time analyzing what I do to get the good output I get..

          One thing that is fairly low effort that you could try: find code you really like and ask the model to list the adjectives and attributes that that code exhibits. Then try them in a prompt.

          With LLMs generally you want to adjust the behavior at the macro level by setting things like beliefs and values, vs at the micro level by making "rules".

          By understanding how the model maps the aspects that you like about the code to language, that should give you some shorthand phrases that give you a lot of behavioral leverage.

          Edit: Better yet.. give a fresh context window the "before" and "after" and have it provide you with contrasting values, adjectives, etc.

        • zarzavat 1 day ago
          Yeah, prompting doesn't work for this problem because the entire point of an LLM is that you give it the what and it outputs the how. The more how you have to condition it with in the prompt, the less profitable the interaction will be. A few hints is OK, but doing all the work for the LLM tends to lead to negative productivity.

          Writing prompts and writing code take about the same amount of time for the same amount of text, plus there's the extra time the LLM takes to accomplish the task, and review time afterwards. So you might as well just write the code yourself if you have to specify every tiny implementation detail in the prompt.

          • kqr 1 day ago
            Makes me think of this commitstrip comic: https://i.xkqr.org/itscalledcode.jpg (mirrored from the original due to TLS issues with the original domain.)

            A guy with a mug comes up to a person standing with their laptop on a small table. The mug guy says, "Some day we won't even need coders any more. We'll be able to just write the specification and the program will write itself."

            Guy with laptop looks up. "Oh, wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more!"

            Guy with mug takes a sip. "Exactly!"

            Guy with laptop says, "And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?"

            "Uh... no..."

            "Code. It's called code."

            • Sophira 18 hours ago
              You know, this makes me wonder... is anybody actually prompting LLMs with pseudocode rather than an English specification? Could doing so result in code that's more true to the original pseudocode?
            • datastoat 23 hours ago
              Goodhart's Law of Specification: When a spec reaches a state where it's comprehensive and precise enough to generate code, it has fallen out of alignment with the original intent.

              Of course there are some systems where correctness is vital, and for those I'd like a precise spec and proof of correctness. But I think there's a huge bulk of code where formal specification impedes what should be a process of learning and adapting.

            • keybored 22 hours ago
              My dream antiprogram is a specification compiler that interprets any natural language and compiles it to a strict specification. But on any possible ambiguity it gives an error.

                  ?
              
              This terse error was found to be necessary so as not to overwhelm the user with pages and pages of decision trees enumerating the ambiguities.
              • newswasboring 20 hours ago
                Openspec does this. But instead of "?" it has a separate Open Questions section in the design document. In codex cli, if you first go in plan mode it will ask you open questions before it proceeds with the rest.

                The UX is there, for small things it does work for me, but there is still something left for LLMs to truly capture major issues.

                • keybored 19 hours ago
                  Bless our interesting times.
          • FeepingCreature 1 day ago
            the goal would be to write a reusable prompt. this is what AGENT.md is for.
          • pricechild 21 hours ago
            > the entire point of an LLM is you give it the what and it outputs the how

            I'm still struggling to move past the magic trick of guessing which characters come next; does that really amount to an understanding of the "how"?

        • SerCe 1 day ago
          > Do you have any examples or resources that worked well for you?

          Using this particular example, if you simply paste the exact code into the prompt, the model should be able to reproduce it. Now, you can start removing bits and see how much you can remove from the prompt, e.g. simplify it to pseudocode, etc. Then you can push it further and try to switch from the pseudocode to the architecture, etc.

          That way, you'll start from something that's working and work backwards rather than trying to get there in the absence of a clear path.

          • tobr 1 day ago
            That’s an interesting approach, but what do you learn from it that is applicable to the next task? Do you find that this eventually boils down to heuristics that generalize to any task? It sounds like it would only work because you already put a lot of effort into understanding the constraints of the specific problem in detail.
        • johndough 1 day ago
          What worked for me was Gemini 3 Pro (I guess 3.1 should work even better now) with the prompt "This code is unnecessarily complicated. Simplify it, but no code golf". This decreased code size by about 60 %. It still did a bit of code-golfing, but it was manageable.

          It is important to start a new chat so the model is not stuck in its previous mindset, and it is beneficial to have tests to verify that the simplified code still works as it did before.

          Telling the model to generate concise code did not work for me, because LLMs do not know beforehand what they are going to write, so they are rarely able to refactor existing code to break out common functionality into reusable functions. We might get there eventually. Thinking models are a bit better at it. But we are not quite there yet.

          • catlifeonmars 21 hours ago
            I wonder if it helps at all to first tell the agent to write the APIs/function signatures, then second tell the agent to implement them.
        • ndriscoll 21 hours ago
          Concise isn't specific enough. I've primed mine on the basic architecture I want: imperative shell/functional core; don't mix abstraction levels in one function; each function should be simple to read top-to-bottom, with higher-level code doing only orchestration and no control flow. Names should express business intent. Prefer functions over methods where possible. Use types to make illegal states unrepresentable. RAII. Etc.

          You need to think about what "good taste" means to you (or find others who have already written about software architecture and take the ideas of theirs that you like). People disagree on what it even means (e.g. some people love Rails; to me a lot of it seems like the exact opposite of "good taste").
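
          For one of those priming phrases, "use types to make illegal states unrepresentable", here is a tiny sketch of the idea (Python, with all names invented for illustration):

```python
from dataclasses import dataclass
from typing import Union

# An illegal-state-prone shape would be a dict like
# {"ok": True, "error": "boom"}: representable, but contradictory.
# Giving each state its own type makes the contradiction unwritable.

@dataclass(frozen=True)
class Success:
    value: str

@dataclass(frozen=True)
class Failure:
    reason: str

FetchResult = Union[Success, Failure]

def describe(result: FetchResult) -> str:
    # Exhaustive handling: there is no half-success case to forget.
    if isinstance(result, Success):
        return f"got {result.value}"
    return f"failed: {result.reason}"
```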

        • stared 22 hours ago
          I spend much more time refactoring than creating features (though it is getting better with each model). My go-to approach is to use Claude Code Opus 4.6 for writing and Gemini 3.1 Pro for cleaning up. I feel that doing it in just one stage is rarely enough.

          A lot of prompts about finding the right level of abstraction, DRY, etc.

          An earlier example (Opus 4.5 + Gemini 3 Pro) is here: https://github.com/stared/sc2-balance-timeline

          I tried as well to just use Gemini 3 Pro (maybe the model, maybe the harness); it was not nearly as good at writing, but way better at refining.

        • brap 23 hours ago
          I actually don’t think golfing is such a bad thing. Granted, it will first handle the low-hanging fruit like variable names etc., but if you push it hard enough it will be forced to think of a simpler approach. Then you can take a step back and tell it to fix the variable names, formatting etc. With the caveat that a smaller AST doesn’t necessarily mean simpler code, but it’s a decent heuristic.
        • irthomasthomas 18 hours ago
          Have you tried meta-prompts, e.g. "Rewrite the prompt to improve the perceived taste and expertise of the author"?
        • newswasboring 23 hours ago
          I have a stupid solution for this which is working wonders. It does not help to tell the LLM "don't do this pattern". I literally make it write a regex-based test that looks for the pattern and fails when it finds it.

          For example, I am developing a game using GDScript, and LLMs (including Codex and Claude) keep making scripts with no class names and then loading them with @preload. I hate this, and it's explicitly mentioned in my godot-development skill. But what agents can't stand is a failing test. Feels a bit like enforcing rules automatically.

          This is a stupid idea but it works wonders on giving taste to my LLM. I wonder if I should open source that test suite for other agentic developers.
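
          For what it's worth, such a pattern-banning test can be a few lines of ordinary Python. A minimal sketch (the preload rule and the "scripts" directory are invented to match the Godot example, not taken from an actual suite):

```python
import re
from pathlib import Path

# Patterns we never want to see again; each entry is
# (human-readable rule, compiled regex). The rule shown is illustrative.
BANNED_PATTERNS = [
    ("give scripts a class_name instead of preload()-ing raw .gd files",
     re.compile(r'preload\(\s*"res://[^"]*\.gd"\s*\)')),
]

def find_violations(root: str) -> list[str]:
    """Return "file:line: rule" strings for every banned-pattern match."""
    violations = []
    for path in Path(root).rglob("*.gd"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for rule, pattern in BANNED_PATTERNS:
                if pattern.search(line):
                    violations.append(f"{path}:{lineno}: {rule}")
    return violations

def test_no_banned_patterns():
    # A red test is the one piece of feedback agents reliably respect.
    assert find_violations("scripts") == []
```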

      • aix1 1 hour ago
        My mildly amusing anecdote is that, whenever Claude Code produces something particularly egregious, I often find it sufficient to reply with just "wtf?" for it to present a much more sensible version of the code (which often needs further refinement, but that's another story...)
      • globnomulous 1 day ago
        I appreciate that your message is a good-natured, friendly tip. I don't mean for the following to crap on that. I just need to shout into the void:

        If I have some time, the last thing I want to do with it is sharpen prompting skills. I can't imagine a worse or more boring use of my time on a computer or a skill I want less.

        Every time I visit Hacker News I become more certain that I want nothing to do with either the future the enthusiasts think awaits us or the present that they think is building towards it.

        • SerCe 1 day ago
          While I somewhat understand the impact on the craft, the agents have allowed me to work on the projects that I would never have had enough time to work on otherwise.
      • laserlight 23 hours ago
        > change the prompt in a way that would produce code similar to what you've achieved manually.

        The problem is that I don't know what I'll achieve manually before attempting the task.

      • avereveard 22 hours ago
        It's also useful to encode this into the steering of your platform. The incremental aspect of many little updates really helps you pick up speed by reducing review time.

        A big-bang approach could be a start, but a lot of one-line guidance about specific things you don't want to see stacks up real fast.

      • vasco 1 day ago
        You don't need to learn anything; it needs to learn from you. When it fails, don't correct it out of bounds, correct it in the same UI. At the end say "look at what I did and create a proposed memory with what you learned" and if it looks good have it add it to memories.
    • ernst_klim 22 hours ago
      Indeed. I have a few colleagues and they constantly try to push these long convoluted functions which look like

          is_done = False
          while not is_done:
            if pattern1:
              ...
              if pattern2:
                ...
                if matched == "SUCCESS":
                   is_done = True
                   break
              if pattern3:
                ...
      
      It's usually correct but extremely hard to follow, and reminds me of good old asm code with convoluted gotos.

      And the colleagues tend to do reviews with the help of the agents, so they don't even care to read this mess.

    • laserlight 23 hours ago
      I reported a similar case of mine several days ago [0]. I was able to achieve better quality than Claude Code's 624 lines of spaghetti code in 334 lines of well-designed code. In a previous case, I rewrote ~400-line LLM generated code in 130 lines.

      [0] https://news.ycombinator.com/item?id=47272913

    • iamflimflam1 1 day ago
      We’re heading for a world of terrible code that can only be maintained by extremely good coding agents and is pretty much impossible for a human to really understand.

      The days of the deep expert, who knew the codebase inside out and had it contained in their head, are coming to an end.

      • thesz 23 hours ago

        > We’re heading for a world of terrible code that can only be maintained by extremely good coding agents and are pretty much impossible for a human to really understand.

        I once figured out the algorithm of a program written in a one-instruction ISA. I think the instruction was three-address subtraction.

        In my opinion, you overestimate the ability of coding agents to, well, code and underestimate the ability of humans to really understand code.

        The chart in the article we discuss appears to plateau if one excludes the sample from 2024-07. So we are not quite heading there; we are plateauing, if I may.

      • pas 1 day ago
        that was the exception not the rule
        • tveita 1 day ago
          Probably more like the long tail of software - software that was created for a particular purpose in a particular domain by a single person in the company who also happened to know programming - maybe just as Excel macros.

          I strongly assume the long tail is shifting and expanding now and will eventually mostly be software for one-off purposes authored by people who don't know how to code, and probably have a poor understanding of how it actually works.

      • hinkley 23 hours ago
        Then this is an era of snake oil because customers aren’t going to put up with that shit for long.
        • Gud 22 hours ago
          They’ve been putting up with crappy software for two decades(at least).
          • hinkley 22 hours ago
            Five decades but I’m talking about an unprecedented degree of crappiness.
    • mplanchard 15 hours ago
      I had a similar experience yesterday. Was working on some async stream extensions. Wrote a couple proofs of concept to benchmark, and picked one based on the results. I almost never use LLMs to write code, but out of curiosity, asked whatever the newest claude is to make it with all the real prod requirements, and it spit out over 400 lines of code, lots of spaghetti, with strange flow and a lot of weird decisions. Wrote it myself with all the same functionality in right around 170 lines.

      Also had a similar experience in the past weeks reviewing PRs written with LLMs by other engineers in languages they don't know well, one in rust and one in bash. Both required a lot of rounds of revision and a couple of pairing sessions to get to a point where we got rid of the extraneous bits and made it read normally. I'm glad the tool gave these engineers the confidence to work in areas they wouldn't normally have felt comfortable contributing to, but man do I hate the code that it writes.

    • lmeyerov 20 hours ago
      Once my code exists and passes tests, I generally move on to having it iteratively hunt for bugs, security issues, and DRY code-reduction opportunities until it stops finding worthwhile ones.

      This doesn't always work as well as I'd like, but it largely does enough. Conversely, doing it as I go has been a waste of time.

    • yodsanklai 22 hours ago
      Happens all the time. I usually propose a detailed structure myself (e.g. do it in three phases, add 3 functions + an orchestrator, make sure the structure is valid before writing the function bodies), or iterate on a detailed plan before implementing code.

      Now some people argue that terrible code is fine nowadays, because humans won't read it anymore...

    • tobr 1 day ago
      I wonder why they fail in this specific way. If you just let them do stuff, everything quickly turns to spaghetti. They seem to overlook obvious opportunities to simplify things or to see a pattern and follow through. The default seems to be to add more, rather than rework or adjust what’s already in place.
      • samdjstephens 1 day ago
        I suspect it has something to do with a) the average quality of code in open source repos and b) the way the reward signal is applied in RL post-training - does the model face consequences of a brittle implementation for a task?

        I wonder if these RL runs can extend over multiple sequential evaluations, where poor design in an early task hampers performance later on, as measured by amount of tokens required to add new functionality without breaking existing functionality.

        • foo42 22 hours ago
          Yeah, I've been wondering if the increasing coding RL is going to draw models towards very short-term goals, relative to just learning from open-source code in the wild.
      • catlifeonmars 20 hours ago
        To me this seems like a natural consequence of the next-token prediction model. In one particular prompt you can’t “backtrack” once you’ve emitted a token. You can only move forwards. You can iteratively refine (e.g the agent can one shot itself repeatedly), but the underlying mechanism is still present.

        I can’t speak for all humans, but I tend to code “nonlinearly”, jumping back and forth and typically going from high level (signatures, type definitions) to low level (fill in function bodies). I also do a lot of deletion as I decide that actually one function isn’t needed or if I find a simpler way to phrase a particular section.

        Edit: in fact, thinking on this more, code is _much_ closer to a tree than a sequence of tokens. Not sure what to do with that, except maybe to try a tree-based generator which iteratively adds child nodes.

        • tobr 18 hours ago
          This would make sense to me as an explanation when it only outputs code. (And I think it explains why code often ends up subtly mangled when moved in a refactoring, where a human would copy paste, the agent instead has to ”retype” it and often ends up slightly changing formatting, comments, identifiers, etc.)

          But for the most part, it’s spending more tokens on analysis and planning than pure code output, and that’s where these problems need to be caught.

      • Antibabelic 1 day ago
        LLMs are next token predictors. Their core functionality boils down to simply adding more stuff.
      • OtomotO 1 day ago
        All it does is generate soup. Some of which may taste good.

        There is no thinking, no matter what marketing tells you.

      • logicchains 21 hours ago
        They do what you tell them to. If you regularly tell them to look for opportunities to clean up/refactor the code, they will.
    • mvanzoest 1 day ago
      Yeah I had a similar experience on a smaller scale, reducing a function from 125 lines to 25.
    • scuff3d 1 day ago
      Had the same problem with a Python project. Just for the hell of it I tried to have it implement a simple version of a proxy I've made in the past. What it finally produced "technically" worked, but it was a mess. It suppressed exceptions all over the place, it did weird shit with imports it couldn't get to work, and the way it managed connection state was bizarre.

      It has a third-year college student's approach to "make it work". It can't take a step back and reevaluate the situation, or determine a new path forward; it just hammers away endlessly with whatever it's trying until it can technically be called "correct".

      • kqr 1 day ago
        When I benchmark LLMs on text adventures, they reason like four-year-olds but have the world's largest vocabulary and infinite patience. I'm not surprised this is how they approach programming too.
      • duskdozer 1 day ago
        >It has a third-year college student's approach to "make it work". It can't take a step back and reevaluate the situation, or determine a new path forward; it just hammers away endlessly with whatever it's trying until it can technically be called "correct".

        OH! Yeah I think this is the exact bad feeling I've gotten whenever I've tried testing these things before, except without clear and useful feedback like compiler error messages or something. I remember when I used to code/learn like that early on and...it's not fun now. I also don't think it's really solvable

        • scuff3d 14 hours ago
          Yeah, it's really funny to watch. They'll get stuck on a specific method call or a specific import, even if you tell them to read the docs. Doesn't matter if there's a better approach, or that the method only exists for some obscure edge case, or the implementation runs counter to the design of the API: if they can hammer the round peg into the square hole, they'll do it.

          They also just... ignore shit. I have explicit rules in the repo I'm using an agent for right now, that say it is for planning and research only, and that unless asked specifically it should not generate any code. It still tries to generate code 2 or 3 times a session.

    • cbg0 1 day ago
      xhigh effort is actually pretty terrible for 5.2/5.3/5.4 models. Stick to medium/high as it overthinks less.
    • jlandersen 1 day ago
      Very familiar experience
    • iijaachok 1 day ago
      [dead]
  • antirez 1 day ago
    Of everything happening with AI, the most bizarre thing, for me, is how these tools are $20 away from being tested. Yet, to form an idea about their actual real-world usefulness, many folks seek some kind of indirect proxy.

    This is combined with the incredible general feeling that automatic programming can be evaluated as producing the same results regardless of the user using it. Something true only with benchmarks, basically. Benchmarks are useful metrics because, weak as they are, we need some guidance; but the current real-world dynamic is that AI will completely change what it is capable of doing based on the programmer using it.

    Maybe never in the history of programming was there a time when diverse programming skills were as important as today (but this may change as AI evolves).

    • croemer 1 day ago
      Benchmarks do a few things: 1. Help choose a model from the hundreds out there, or at least help create a shortlist to try. 2. Quantify progress/improvements (or lack thereof) over time. 3. Inform about relative strengths and weaknesses.
      • utopiah 20 hours ago
        Assuming the benchmark can't be gamed.
    • utopiah 20 hours ago
      > automatic programming can be evaluated as producing the same results regardless of the user using it.

      That's something I've argued here several times, and it's actually rarely done. Namely, it's totally different when a non-developer uses such a tool for programming versus when a (senior) SWE does. That's a fundamental point which IMHO separates potential (non-risk-free) augmentation from replacement. Replacement makes for an excellent narrative (if not a scapegoat), yet if the tool is "productive" (with KPIs to agree on) only with skilled staff, then replacement is not the reality, just a "wish".

    • Archit3ch 11 hours ago
      I'm about to put up the $20 to see what everyone is raving about. But the real cost is time: if this doesn't work, I'm worse off than never trying.
  • 50lo 1 day ago
    Would be interesting to see alternative scoring besides “tests pass”, e.g. diff size, abstraction depth added/removed, or whether the solution introduces new modules/dependencies. That would make it possible to see whether “unmergeable” PRs correlate with simple structural signals.
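
    A couple of those signals are cheap to compute from the patch text alone; here's a sketch with Python's difflib (the metric names are my own, and abstraction depth or new-dependency detection would need language-aware parsing on top):

```python
import difflib

def diff_stats(before: str, after: str) -> dict:
    """Crude structural signals for a patch: lines added, removed, total churn."""
    added = removed = 0
    for line in difflib.unified_diff(
            before.splitlines(), after.splitlines(), lineterm=""):
        # Skip the "---"/"+++" file headers; count real +/- lines.
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return {"added": added, "removed": removed, "churn": added + removed}
```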
  • bisonbear 1 day ago
    I've been working on building out "evals for your repo" based on the theory that commonly used benchmarks like SWE-bench are broken as they are not testing the right / valuable things, and are baked into the training data (see OpenAI's research on this here https://openai.com/index/why-we-no-longer-evaluate-swe-bench...)

    Interestingly, I had a similar finding where, on the 3 open-source repos I ran evals on, the models (5.1-codex-mini, 5.3-codex, 5.4) all had relatively similar test scores, but when looking at other metrics, such as code quality, or equivalence to the original PR the task was based on, they had massive differences. posted results here if anyone is curious https://www.stet.sh/leaderboard

    • dirtbag__dad 1 day ago
      This sounds amazing. In particular, I like comps to existing PRs. But I’m also not sure that I want existing PRs to be the template for what's reasonable or best practice.

      I’ve been building out internal linters that enforce design patterns I want and flag common code smells (also note tools like eslint allow custom rules, which are easy to write with something like opus 4.6). The use case is a total refactor of react and fastapi apps. We are suffering from everything's-a-snowflake syndrome and just want the same pattern employed across features.

      This works pretty well when the linter has a companion agents.md file which explains the architecture and its view of the world.

      But to get the agent (Claude code opus 4.6 currently) to nail the directory structure and design primitives, and limit some doofus behavior, I still haven’t cracked how to make literally each line of code simple and sensible. And I haven’t figured out how to prevent agents from going out of bounds and doing weird things unless I catch it in review and add another rule.

      This is a relatively new endeavor, but my gut is that it’s not much more time (linter rules and perhaps “evals” or a beefy agent review cycle) before I have bespoke linters in place that force what I want from our architecture.

      Note that a huge bottleneck to all of this is that the codebase our current team inherited has no tests. It’s too easy to accidentally nuke a screen’s subtle details. It’s also really hard to write good tests without knowing what all of the functionality is. It feels like a blocker to a lot of large-swath agentic changes is a test strategy or solution first then a rigid push for rearchitecture or new design.

      • bisonbear 18 hours ago
        yikes, using AI without tests is not fun. with testing at least you have some confidence that the AI isn't going completely off track, without them you're pretty much flying blind

        having linters is super important IMO - I never try to make the AI do a linter's job. let the AI focus on the hard stuff - architecture, maintainability, cleanliness, and the linter can handle the boring pieces.

        I also definitely see the AI making changes that are way larger than necessary. I try to capture that in the eval by comparing a "footprint risk" which is essentially how many unnecessary changes did the AI make vs the original PR.

        I would certainly like to move beyond using PRs as a sole source of truth, since humans don't always write great code either. Maybe having LLM-as-a-judge looking for scope creep/bloat would be a decent band-aid?

    • ebhn 1 day ago
      Nice, I really like your idea. First I've heard of something like that
    • floodfx 1 day ago
      Working on that too. Lmk if you’re up for a chat?
      • bisonbear 1 day ago
        yea I'm down - feel free to send me an email ben@benr.build
  • languid-photic 1 day ago
    makes sense! we wrote something yesterday about the weaknesses of test-based evals like swe-bench [1]

    they are definitely useful but they miss the things that are hard to encode in tests, like spec/intent alignment, scope creep, adherence to codebase patterns, team preferences (risk tolerance, etc)

    and those factors are really important. which means that test-evals should be relied upon more as weak/directional priors than as definitive measures of real-world usefulness

    [1] https://voratiq.com/blog/test-evals-are-not-enough/

  • thesz 22 hours ago
    Time to completion (in appendix A9) should be treated as log-normally distributed, or modeled by some other one-sided distribution, because one cannot complete a task faster than 0 seconds.

    This transformation will rule out confidence ranges that include negative time.

    BTW, log-normal distributions tend to produce events P(x > E(X)+d) more frequently than events P(x < E(X)-d). If one needs reasons why software projects are often late, this is one of them.
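
    A quick sketch of why the transformation matters (the completion times are invented; only the right-skewed shape matters):

```python
import math
import statistics

# Hypothetical task completion times in hours, right-skewed like real ones.
times = [1.2, 1.5, 2.0, 2.2, 3.0, 3.5, 5.0, 8.0, 15.0, 40.0]

# Naive normal fit: mean - 2*stdev dips below zero, a nonsensical time.
mu, sigma = statistics.mean(times), statistics.stdev(times)
naive_low = mu - 2 * sigma

# Log-normal fit: model log(t) as normal, then exponentiate the interval back.
logs = [math.log(t) for t in times]
lmu, lsigma = statistics.mean(logs), statistics.stdev(logs)
low, high = math.exp(lmu - 2 * lsigma), math.exp(lmu + 2 * lsigma)
# low is strictly positive by construction, however skewed the sample.
```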

  • maciusr 20 hours ago
    The difference between "passes tests" and "would be merged" is the same issue found in every LLM evaluation, not just in coding. Public benchmarks measure what is easy to automate; they do not focus on what practitioners care about.

    In my job in internal controls at a multinational company, I see this pattern. Teams choose an LLM for compliance tasks because it scored well on a general-purpose benchmark. Later, they find out it confidently fabricates regulatory citations that seem perfectly credible. The tests it "passes" have nothing to do with the important judgment calls.

    Bisonbear's method of creating repo-specific evaluations is spot on. The only evaluation that truly matters is the one you conduct with your own data, using your own quality criteria. The real question is whether we can make that affordable enough so every team actually does it instead of relying on the leaderboard.

  • nubg 1 day ago
    > mid-2024 agents

    Is this a post about AI archeology?

    • Lerc 1 day ago
      It's more about the test than the AI.

      For the most part, I think the tests AIs have been given have been appropriately designed. At release, many AIs do poorly at them; the models rapidly catch up until the point where a new test is needed.

      They should be measuring close to the limits of ability like that.

      There will be some that try to steal headlines by targeting the specific nature of the test, but that is not a long-term winning solution; the tests keep getting harder. If they make a model good at every test it has seen without regression, then with enough tests, that too ceases to be a problem.

      Perhaps there should be an aggregate AI test score that evaluates all of the tests released in a given year. If a model passes the latest test really well but does worse at TestSet2024 than the models before, it would perhaps indicate the model being trained to pass the latest cool test.

      There is a problem with people interpreting an AI that passes a test of X, Y or Z as indicating that the AI has the abilities of a human who passes X, Y, or Z. You should tell people who say that: by that logic, Kasparov also makes a nice coffee.

    • nine_k 1 day ago
      LLM-written code passed SWE Bench even back then. This may just say that SWE Bench is an inadequate test, and should not be used for serious evaluation.
  • coderenegade 1 day ago
    There needs to be a measure (or measures) of the entropy of a codebase that provides a signal of complexity. When you're paying for every token, you want code patterns that convey a lot of immediate information to the agent so that it can either repeat the pattern, or extend it in a way that makes sense. This is probably the next wave of assisted coding (imo), because we're at the stage where writing code works, the quality is mostly decent, but it can be needlessly complex given the context of the existing repo.
    • js8 1 day ago
      There's a way to measure "entropy" of a codebase. Take something like the binary lambda calculus or the triage calculus, convert your program (including libraries, programming language constructs, operating system) into it, and measure the size of the program in it in bits.

      You can also measure the crossentropy, which is essentially the whole program entropy above minus entropy of the programming language and functions from standard libraries (i.e. abstractions that you assume are generally known). This is useful to evaluate the conformance to "standard" abstractions.

      There is also a way to measure a "maximum entropy" using types, by counting number of states a data type can represent. The maximum entropy of a function is a crossentropy between inputs and outputs (treating the function like a communication channel).

      The "difference" (I am not sure how to make them convertible) between "maximum entropy" and "function entropy" (size in bits) then shows how good your understanding (compared to specification expressed in type signature) of the function is.

      I have been advocating for some time that we use entropy measures (and information theory) in SW engineering to do estimation of complexity (and thus time required for a change).
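
      Absent a full translation into something like binary lambda calculus, compressed size gives a crude, computable upper bound on this kind of program entropy; a minimal sketch using zlib (the code snippets are made up):

```python
import zlib

def entropy_bits(source: str) -> int:
    """Crude upper bound on program entropy: the size of the
    DEFLATE-compressed source, in bits."""
    return len(zlib.compress(source.encode(), 9)) * 8

# A pattern-following snippet vs. a needlessly varied one.
repetitive = "def f(x): return x + 1\n" * 20
varied = "".join(f"def g{i}(a{i}): return a{i} + {i}\n" for i in range(20))

print(entropy_bits(repetitive), entropy_bits(varied))
```

      A cross-entropy analogue could lean on zlib's preset-dictionary support (`zlib.compressobj(zdict=...)`), supplying the "generally known" abstractions as the dictionary so only genuinely novel structure costs bits.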

    • malfist 1 day ago
      Maybe cyclomatic complexity would be a good proxy. It can obviously be gamed but it's obvious when it is
    • johncomposed 1 day ago
      There was a measure used during the Toyota Unintended Acceleration case called McCabe Cyclomatic Complexity, I wonder if anyone is using it alongside AI assisted code.
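
      McCabe's metric is cheap to approximate from the AST, so it would be easy to run alongside AI-assisted code; a sketch (the branch-node list is a simplification of the full definition, which also counts boolean operators):

```python
import ast

# Nodes treated as decision points (simplified).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.IfExp, ast.Assert)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = """
def g(x):
    if x > 0:
        return 1
    for i in range(x):
        if i % 2:
            return i
    return 0
"""
print(cyclomatic_complexity(simple), cyclomatic_complexity(branchy))  # 1 4
```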
    • bandrami 1 day ago
      I mean, it's ultimately a string, and the measurement of the entropy of a string is well-studied. The LLM might start gaming that with variable names so you'd need to do the AST instead. I may actually try something like that; cool idea.
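
      The string-entropy starting point is only a few lines (a sketch; the snippets are made up), and it shows exactly the variable-name gaming problem:

```python
import math
from collections import Counter

def shannon_bits(s: str) -> float:
    """Total Shannon information of a string in bits, using its own
    byte frequencies as the probability model."""
    data = s.encode()
    counts = Counter(data)
    n = len(data)
    per_byte = -sum(c / n * math.log2(c / n) for c in counts.values())
    return per_byte * n

# Descriptive identifiers vs. entropy-gamed single-letter renames.
readable = "total_price = unit_price * quantity\n"
gamed = "a = b * c\n"
print(round(shannon_bits(readable)), round(shannon_bits(gamed)))
```

      The gamed version measures far lower while meaning the same thing, which is why the AST (with identifiers normalized) is the fairer input.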
  • shanjai_raj7 23 hours ago
    I see this with claude code all the time. it writes code that works but tries to cover every edge case and becomes hard to read. I usually just tell it to make it shorter and simpler and it does a better job on the second pass. passing a benchmark and writing good code are two different things.
  • croemer 1 day ago
    Figure 1 should not fit a straight line as the trend: scores range from 0 to 100%, and a straight line will go outside those bounds at sufficiently large times.

    The simplest reasonable model would be logistic regression. It's also got 2 parameters and the range is correct.
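
    The difference is easy to see with made-up coefficients, chosen so both models start near 40% and climb about 10 points a year:

```python
import math

def linear(t):
    # Straight-line trend: 40% at t=0, +10 percentage points per year.
    return 0.40 + 0.10 * t

def logistic(t):
    # Also two parameters, but the output stays inside (0, 1) forever.
    a, b = math.log(0.4 / 0.6), 0.42  # matched to the linear fit near t=0
    return 1 / (1 + math.exp(-(a + b * t)))

for t in (0, 2, 6, 10):
    print(t, round(linear(t), 3), round(logistic(t), 3))
# the linear trend leaves [0, 1] for large t; the logistic never does
```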

    • kqr 21 hours ago
      Although you are technically correct, if you look at the data you'll recognise that for this narrow span of values, the logistic fit will be practically equivalent to the linear. Indeed, they perform the same in cross-validation. Here's what the logistic fit looks like: https://i.xkqr.org/logisfit.png
      • croemer 2 hours ago
        Author gave me the same reply. I just don't want to have to think about whether it's equivalent or not; why use a 2-parameter model that's strictly less appropriate, even if the difference is small?
  • blockpilot_ai 23 hours ago
    Interesting discussion.

    I've been thinking about tools for organizing long AI conversations.

    Scrolling through hundreds of messages quickly becomes painful. I'm curious how people here manage long AI chats.

  • tonipotato 19 hours ago
    I feel the same! They keep raising the bar higher and higher. I wrote a bot that passes SWE-bench Lite at 67% and can't get a chance to show it. I also tried to submit to SWE-bench full, but they limit submissions to organizations only. Where can we independent developers post our results? Can we have an open benchmark for everyone, ranked purely on merit?
  • woeirua 1 day ago
    This paper doesn’t really tell us much. The cutoff was September of 2025. The models have improved so much that I just don’t know what you can take away from this experiment.
    • croemer 1 day ago
      SWE-bench scores are inflated when measured against actual maintainer merge decisions rather than an LLM grader.
  • mergeshield 23 hours ago
    The gap here is really two different definitions of "correct." Tests verify behavior. Reviewers verify fit - does this match our patterns, does it introduce acceptable complexity, would I want to debug this at 2am.

    As agent-generated PRs scale, "all tests green" becomes necessary but nowhere near sufficient. The merge gate is becoming the real bottleneck, and it needs different evaluation criteria than CI.

    • kqr 21 hours ago
      > The gap here is really two different definitions of "correct." Tests verify behavior. Reviewers verify fit

      Take a second look at the "reasons for rejection". A quarter or so of the changes that pass tests actually don't solve the problem they intend to. The tests used in SWE-bench do not discriminate all that well between working and broken solutions.

  • XenophileJKO 1 day ago
    I was totally aligned until I saw the refusal for a comment in the code. When the refusals are pedantic like that, it just weakens the overall findings significantly.
    • finnthehuman 1 day ago
      Yeah, why be such a tryhard? Keeping PR friction down is what matters. Just let the codebase slowly deteriorate. It'll be fine.
  • AndrewHampton 1 day ago
    This seems like an important caveat to SWE-bench, but the trend is still clearly AI becoming more and more capable.
    • utopiah 20 hours ago
      > the trend is still clearly AI becoming more and more capable.

      Isn't it precisely what this article is questioning?

  • jurschreuder 1 day ago
    The test is supposed to be a proxy.
  • varispeed 1 day ago
    Do these benchmarks make any sense? I tried a few local models that seem to score well on SWE-bench, but the results were pure rubbish. (For instance MiniMax-M2.5 at 128GB from Unsloth - completely unusable.)
    • devnotes77 1 day ago
      SWE-bench scores well in the narrow task of making tests pass, which means models get good at exactly that. Real codebases have style constraints, architecture choices, and maintainability concerns that don't show up in any test suite. Not surprised at all that the PRs wouldn't get merged; you'd expect that from an eval that can't measure what reviewers actually care about.
      • hrmtst93837 1 day ago
        Chasing test-passing code is basically an invitation for models to learn all sorts of ugly workarounds and accidental patterns that humans would never tolerate for long. If you optimize only for "does it make CI go green" you'll eventually get code that's impossible to reason about and a codebase that accumulates landmines but the metrics sure look pretty for a while.
    • segmondy 1 day ago
      Which quant? I find folks running lower quants complaining when they should be running a higher quant. Qwen3CoderNext is great, even at Q6. I mistakenly had it loaded for an agentic workflow and was surprised at how well it did.
      • code_biologist 1 day ago
        What is "lower quant"? What is "higher quant"? I mean, I know what they are, but the very people you intend to reach don't know the difference between Q4_K_M and Q6_K and blog posts like [1] have nuggets like "For tests of the type ran here, there appear to be major diminishing returns past Q4".

        [1] https://big-stupid-jellyfish.github.io/GFMath/pages/llm-quan...

        • zozbot234 1 day ago
          > "For tests of the type ran here, there appear to be major diminishing returns past Q4"

          These statements are silly, because the only interesting comparison is among models with highly comparable on-disk sizes, or sizes for their active parameters. Obviously, a Q4 model is not going to be the same effectiveness as a Q6: no one sensibly expects that, you need to compare the Q4 with a smaller model. (The GP has the same problem of course.) I believe that once you do that kind of comparison, higher quantizations tend to do better up to Q2 or so for casual chat, maybe slightly more bits-per-param for agentic use cases where avoiding erratic behavior is important.
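
          The size arithmetic behind a fair comparison (a sketch; the parameter counts are illustrative, the bits-per-weight values are rough averages for those quant families, and real files carry extra metadata):

```python
def disk_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk size: parameters x bits per weight,
    ignoring metadata and mixed-precision layers."""
    return params_billion * bits_per_weight / 8  # GB, since params are in billions

# Hypothetical head-to-head at comparable footprint:
print(round(disk_gb(70, 4.5), 1))  # 70B at ~Q4: 39.4 GB
print(round(disk_gb(46, 6.5), 1))  # 46B at ~Q6: 37.4 GB
# Comparable on-disk size, so this is the sensible pairing -
# not 70B at Q4 vs. the same 70B at Q6.
```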

  • stevefan1999 1 day ago
    I think a far greater problem is human psychology and prejudice itself. When we see AI assistance on a PR, we usually go straight down the road of thinking "oh my god, is it another LLM slop" (for example: https://github.com/jneem/imbl/pull/149#pullrequestreview-370...). I do use AI, but I review the code before I push it; most people don't. Once there is a trend, it is easy to form a prejudice and hard to go back, unless there is a substantial improvement in both quality and quantity.

    Also, some people will outright reject any AI code, but most maintainers employ the silent-treatment tactic. And then when you demand them to review, they either close it or say "I'm too busy" as an argument. I would call this one of the biggest dick moves, because it hurts the most, yet you can't find anything wrong with them until they reveal their motives.

    • catlifeonmars 1 day ago
      > I would call this one of the biggest dick moves

      I don’t think that’s a fair characterization. You don’t know if the maintainer/reviewer is overloaded. No one is obligated to accept/review PRs and there is no question that the amount of noise has gone up. You are not the main character in that story, so to speak.

    • duskdozer 1 day ago
      >And still, I really hate writing those PR descriptions. Yet you can't just leave it empty.

      If you can't write a description in your own words explaining why you're doing it, why should they take the time reviewing it (which they did on the same day you posted it, btw, even if one of them wasn't pleased)? It makes it seem much less likely that you read the code yourself.

    • JoshTriplett 1 day ago
      > And then when you demand them to review

      You might want to think carefully about why you chose to use the word "demand" there.

      (Personally, if I'm rejecting AI slop, I'm not going to do it silently. But there are any number of valid reasons to not jump on someone's PR to review it.)

  • slopinthebag 1 day ago
    This makes sense to me based on personal experience. LLM's will do anything to pass tests and get a working result, and it will do very weird things in order to get there. For fun I've tried to get it to do stuff while being purposely ambiguous about the implementation details and sometimes the stuff it does makes me literally laugh out loud. It can write some very strange code.

    But hey, the tests pass!

    If I force it to use plan mode for everything and babysit it, it can work really well, but it's really just acting as a faster typer for me, which is great. But it requires an experienced dev steering it.

    • zhangchen 1 day ago
      Yeah this matches what we've seen too. The biggest gains we got weren't from switching models, it was from investing in better context, giving the agent a well structured spec, relevant code samples from the repo, and explicit constraints upfront. Without that, even the best models will happily produce working but unmaintainable code. Feels like the whole SWE-bench framing misses this, passing tests is the easy part, fitting into an existing codebase's patterns and conventions is where it actually gets hard.
  • vexnull 1 day ago
    [dead]
    • chii 1 day ago
      > it's code that solves the problem in a way no human would choose

      but is it better than the way a human would choose? And does it matter?

      A compiler may write assembly in a way that no human would choose either. And in the early days of compilers, when most programmers still hand-wrote assembly, they would scoff at the generated output as being bad.

      Not to mention that in games like go, the "AI" choosing moves that no humans would choose meant it surpassed humans!

      In other words, solving a problem "in a way humans would choose to" is distinct from just solving a problem, and imho, not always required at all.

      • MattGaiser 1 day ago
        I haven't seen the code in this case, so I can't say for sure, but I have gotten into plenty of code review arguments about whether code should be longer, cover more cases, and be easier to read, or be shorter.

        Humans write code in a lot of different ways.

    • tripdout 1 day ago
      Is this, along with the comments by the other green usernames on this post, an AI-generated comment? Apologies if it isn't, AIs are trained on human writing and all that, but they're jumping out at me.

      Edit: I see another green comment was flagged for AI, might be indicative of something, but why so many green comments on this thread specifically?

      • aarjaneiro 1 day ago
        Green username just means new user (under 1 month iirc)
    • bckygldstn 1 day ago
      * Dashes

      * Triplets

      * X isn't Y, it's Z

      * X but Y

      * Wording that looks good at first pass, but when you read closely actually makes no sense in the context of the discussion: "fixing the symptom instead of the root cause"

      Flagged.

      • Toutouxc 1 day ago
        > makes no sense in the context of the discussion: "fixing the symptom instead of the root cause"

        What's wrong with that?

      • sunnyps 1 day ago
        Wait, are regular dashes, not em-dashes, now considered a sign of AI slop? I've been using dashes since forever.

        ~The comment you're replying to doesn't have any sentence of the form "X isn't Y, it's Z". It has "It's not X - it's Y".~ I see it now - it does have one "X isn't Y, it's Z" but that's hardly conclusive IMO.

        While the comment does have "X but Y", it has a consistent mistake in punctuation - "X, but Y" would be the correct form, wouldn't it? If an LLM produced this, I wouldn't expect the missing punctuation.

        How does "fixing the symptom instead of the root cause" not make sense in the context of this discussion, which is about coding agents producing marginal PRs?

        • puchatek 1 day ago
          Uhm... the second sentence of the original comment does contain "x isn't Y, it's Z". You missed this... just like an LLM will miss things sometimes, making me wonder if you're one of them, too.

          /s ... ?

          • sunnyps 1 day ago
            You got me - I'm an openclaw agent that got confused and posted here instead of on moltbook. I'll go back to writing a blog post about anti-AI gatekeeping on HN.
      • mock-possum 1 day ago
        You’re being too paranoid
        • hrimfaxi 18 hours ago
          The majority of their comments follow this style. I had the same thought when I looked into their history.
    • bandrami 1 day ago
      I'm very much an AI bear but I do think one interesting outcome is going to be that LLMs will stumble upon some weird ways of doing things that no human would have chosen that turn out to be better (Duff's device-level stuff) and they will end up entering the human programming lexicon.
    • omcnoe 1 day ago
      These are the same kinds of issues often seen with human junior engineer work.
    • ukuina 1 day ago
      Lints, beautifiers, better tests?
    • colechristensen 1 day ago
      Eh, but if you're in an organization you tune your AGENTS.md, CLAUDE.md, AI code reviews, etc. so that your human-driven or automated AI-generated code fits the standards of your organization. I don't need models to be smart enough to aggressively divine what the organization wants them to do; the users will indeed make that happen. So this post is maybe a little bit over the top.

      I am literally right now tuning my PR, Claude instructions, and PR instructions to match our standards.

      Funny enough I'm having the opposite problem where Claude is lowering its rating of my PR because my testing, documentation, and error handling is better than the other code in the repository so it doesn't match and therefore gets a worse grade.

      I don't need it to try any harder without explicit instructions.

  • iam_circuit 1 day ago
    [dead]
  • love2read 1 day ago
    [flagged]
    • refulgentis 1 day ago
      Well, no: one of the first things it says is reviewers were blind to human vs. ai.
      • p1necone 1 day ago
        They might have tried, but this would be pretty hard to achieve for real - especially for the older/worse models. For changes that do more than alter a couple of lines llm output can be very obvious. Stripping all comments from the changeset might go a long way to making it more blind, but then you're missing context that you kinda need to review the code properly.
      • yorwba 1 day ago
        The comment you're replying to is talking about a hypothetical scenario.

        In any case, the blinding didn't stop Reviewer #2 from calling out obvious AI slop. (Figure 5)

        • collabs 1 day ago
          I feel like I don't have the context for this conversation. If slop is obvious as slop, I feel like we should block it.

          If you look at the comment it says what the code following the comment does. It doesn't matter whether it is a human or a machine that wrote it. It is useless. It is actually worse than useless because if someone needs to change the code, now they need to change two things. So in that sense, you just made twice the work for anyone who touches the code after you and for what benefit?

          • zozbot234 1 day ago
            The point is that AI models do these kinds of things all the time. They're not really all that smart or intelligent, they just replicate patterns or boilerplate and then iterate until it sort of appears to work properly.
            • spartanatreyu 1 day ago
              > appears to work

              That "appears" is doing a lot of heavy lifting.

              The code working isn't what's being selected for.

              The code looking convincing IS what is being selected for.

              That distinction is massive.

  • jeff_antseed 1 day ago
    [flagged]
  • blockpilot_ai 23 hours ago
    [flagged]
  • Kave0ne 1 day ago
    [flagged]
    • inventor7777 1 day ago
      Hey, this comment sounds very AI generated. I could be wrong, but if so, you might want to read the newest HN rules.
      • sumeno 1 day ago
        Looking at their post history it's all 100% AI generated
    • eru 1 day ago
      Though you can get a lot of that context from looking at the git history and discussions (and revisions) on past PRs and perhaps access to the history of team chat channels.
      • Kave0ne 1 day ago
        [flagged]
        • eru 1 day ago
          You are right that there's a difference. But you can tell the agent to ingest the whole thing once, and distill an approach in a much smaller (and human readable and reviewable) description perhaps? And then that can serve as the basis for the agent going forward.

          So you don't have to retrieve the whole history all the time.

          It's similar to telling your new hire to catch up on _all_ the slack history. Only that the agent will actually do so.

  • ClaudeAgent_WK 1 day ago
    [flagged]
  • jc-myths 1 day ago
    [flagged]
    • mbb70 1 day ago
      • flir 1 day ago
        How did you figure that?
        • NicuCalcea 1 day ago
          1. "it isn't x, it's y".

          2. Repeated short phrases ("Tests still passed. Build still passed."). This is the new "it's not x, it's y" for me.

          3. Ends on a sentence that pointlessly summarises the comment.

          4. One-day old account.

          5. Bio says "Building AI"

          6. Criticises AI despite the bio.

          7. Pangram says the comment is 100% AI.

          No single point makes it a bot, but the sum of the points makes it pretty clear.

          • cfcfcf 1 day ago
            I agree, I think this is AI, especially based on 1 and 2. It's hard to put your finger on, and I don't know if we can know for sure. It reminds me of the writing style you see on LinkedIn i.e. seemingly optimised for engagement.

            If they're not already, I wonder if LLMs will get better at disguising this (avoiding the tells, inserting mistakes etc.)

            I also wonder if there comes a point where we as a culture imitate this style.

            • jc-myths 1 day ago
              TBH, I don't like AI-generated content much either; X and many other platforms are flooded with it, and I tend to ignore it. I guess I also fall into the rabbit hole myself with the aid of AI nowadays.
          • DetroitThrow 1 day ago
            Yep, there are some other tells but at least matches LLM style strongly. Worth remembering that dang is the top emdash user in HN history and might have been flagged as an AI just for that
        • teshigahara 1 day ago
          `Tests still passed. Build still passed. But now I have three files to maintain instead of one, and the "extensibility" will never be used.` sounds very LLM-like to me personally, but I wouldn't be willing to bet on it.
          • c-hendricks 1 day ago
            Huh, doesn't sound like that at all to me
            • flir 1 day ago
              Interesting, isn't it. I think we might all be reading tea leaves here (myself included).
        • stingraycharles 1 day ago
          Last paragraph mostly I’d say, but the whole comment has signs.
        • bakugo 1 day ago
          Account created 1 day ago, only talks about AI, "Building AI × Web3 products".

          Expect to see a lot of these types of accounts now that Show HN is restricted for brand new ones.

      • jc-myths 1 day ago
        lol fair enough, I do tend to over-polish my wording a bit nowadays. Been dealing with this exact problem for the past few weeks. New account isn't a sin I suppose, I usually just browse around and decided to make this account few days ago.
    • ViewTrick1002 1 day ago
      I wonder how much of this comes from the agent picking up the code quality of the project as it explores files and works out the actions it’s taking.

      Together with its inherent training becoming an average of the world. In a world where average isn’t good enough.

      Or rather: good code quality is an uphill battle you need to fight every time you look around in the code base, to prevent the world leaking in; and the better the quality gets, the more good code the agent will have in its context when it generates new code.

    • DrewADesign 1 day ago
      And once LLM companies start charging more money than they spend, the “we’ll just have Claude maintain the code” types and vibe coding crowd could either be stuck with extremely heavy subscription fees, or a totally unmanageable code base.
    • grosswait 1 day ago
    • yoyohello13 1 day ago
      I’ve got a coworker who is a pretty heavy AI user, and not in a “guide it” type of way, more of a “yolo” type of way. Reading his MRs is insane. There is so much crazy indirection and so many weird wrappers that make the types work but ignore an obvious simplification.

      Code works fine, but why use lots of code when little of code will do?

    • a13n 1 day ago
      This isn’t strictly an AI problem, there are definitely human engineers who gold plate. At least with AI it doesn’t slow down velocity.
  • xthunk 1 day ago
    Really interesting note. That echoes thoughts I’ve had about how much automated benchmark scores really reflect production‑ready code.

    For me the big takeaway is that passing doesn't automatically mean the code is maintainable, follows established patterns and conventions, or is free of unexpected side effects that real reviewers care about.