Ask HN: Why are there still no signs of increased productivity anywhere?

Several PhD-level reasoning models have been released since September of 2024, and since then there have been many extraordinary claims of 10x to 1000x increases in programming productivity.

Given that it's now October of 2025, I must ask: why are there no signs of such a revolutionary increase in productivity?

5 points | by pera 5 hours ago

4 comments

  • pera 5 hours ago
    I also wanted to add a bit more context regarding some of these claims.

    For example, back in March, Dario Amodei, the CEO and cofounder of Anthropic, said:

    > I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code

    Other similar claims:

    https://edition.cnn.com/2025/09/23/tech/google-study-90-perc...

    https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

    Some of these AI predictions also seem quite unlikely, for example AI 2027:

    https://news.ycombinator.com/item?id=43571851

    > By late 2029, existing SEZs have grown overcrowded with robots and factories, so more zones are created all around the world (early investors are now trillionaires, so this is not a hard sell). Armies of drones pour out of the SEZs, accelerating manufacturing on the critical path to space exploration.

    > The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia

    • JohnFen 5 hours ago
      You should not believe any of the claims genAI companies make about their products. They just straight-up lie. For example:

      > Several PhD-level reasoning models have been released since September of 2024

      This is not true. What's true is that several models have been released that the companies have claimed to be "PhD-level" (whatever that means), but so far none of them have actually demonstrated such a trait outside of very narrow and contrived use cases.

      • Ekaros 4 hours ago
        If there are such models, why are there no widely discussed, full thesis works produced entirely by them? Surely getting dozens of those out should be trivial if the models are that good.
        • DaveZale 4 hours ago
          Well, would the AI graduate students also be required to be jerked around by professors, pass hundreds of exams, present seminars, teach, do original research, write proposals, and deal with bureaucracy, too? Maybe this would solve the "hallucination" issues?
    • AfterHIA 45 minutes ago
      I'm going to laugh and shit my pants, in that or some order, when we realize the models that produced ALL the code built sleeper protocols into it, and that the code is now maintained by AI agents that might also be infected with sleeper protocols. Then later, when 50 messages on Claude cost $2,500, every company in the world is either going to experience exponential cost increases or spend an exponentially large amount of capital hiring and re-hiring engineers to "un-AI'ify" the codebase.

      https://www.youtube.com/watch?v=wL22URoMZjo

  • necovek 4 hours ago
    This sounds like a disingenuous "Ask HN": you are supposedly questioning marketing claims by AI model producers by pointing out that their predictions have not come true.

    Everyone knows why that's the case: the claims were never backed by anything except the say-so of people who had an interest in getting others to buy in.

    There might even be a case of shareholder fraud there for any officer of a publicly traded company, but obviously they'll just claim they honestly believed it.

    • necovek 4 hours ago
      But even granting that, we could get to a point where most code is produced by LLMs without any increase in productivity.
  • pavel_lishin 4 hours ago
    > since then there have been many extraordinary claims

    Has there been any extraordinary evidence?

  • AfterHIA 49 minutes ago
    Language models have limited use in many well-established domains like the humanities, literature, and art. The reason "AI" isn't being used to build "the future we always wanted" is that even before LLMs, innovation and incremental improvement weren't "hard"; it takes significant financial infrastructure to market products and create "new, better norms." And given that software has moved from "sell people useful tools and support" to "collect and sell massive amounts of data; engage in behavior modification," there's no real reason to create better tools, even if Claude can exponentially reduce development costs and time to working prototypes.

    We're living beyond the scope of market capitalism. We now live in pre-technofeudalism, so all non-marginal gains are going to serve the oligarchs' potential for rent collection. They aren't for you and me.

    Real innovation looks like this: https://worrydream.com/ and https://archive.org/details/humaneinterfacen00rask and https://www.dougengelbart.org/pubs/papers/scanned/Doug_Engel...