7 comments

  • hitekker 51 minutes ago
    Apparently, the noise around the AI policy came from Bun's developers saying the policy blocks upstreaming their performance PR. But the real reason seems to be that the PR's code itself isn't in great shape and introduces unhealthy complexity: https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

    > Parallel semantic analysis has been an explicitly planned feature of the Zig compiler for a long time, and it has heavily influenced the design of the self-hosted Zig compiler. However, implementing this feature correctly has implications not only for the compiler implementation, but for the Zig language itself! Therefore, to implement this feature without an avalanche of bugs and inconsistencies, we need to make language changes.

    • bonzini 41 minutes ago
      A single PR for a 3000-line addition would, in all likelihood, be rejected anyway.
  • felipeerias 23 minutes ago
    The other side of this is that open source projects that allow AI tools will be more restrictive towards new contributors.

    This already happens to some degree on large software projects with corporate backing (Web engines, compilers, etc.), where it is often not trivial to start contributing as an independent individual.

    Reasonable people can disagree on whether one approach is inherently better than the other, as ultimately they seem to be optimising for different goals.

    • nicman23 19 minutes ago
      yeah, giving an LLM git blame and git grep has saved me a lot of time doing boring, basically reverse-engineering work.
  • jart 1 hour ago
    > This makes a lot of sense to me. It relates to an idea I've seen circulating elsewhere: if a PR was mostly written by an LLM, why should a project maintainer spend time reviewing and discussing that PR as opposed to firing up their own LLM to solve the same problem?

    The same argument applies to open source itself. Why use someone's project when you can just have the robot write your own? It's especially true if the open source project was vibe coded. AI and technology in general makes personalization cheap and affordable. Whereas earlier you had to use something that was mass produced to be satisfactory for everyone, now you have the hope of getting something that's outstanding for just you. It also stimulates the labor economy, because you have lots of people everywhere reinventing open source projects with their LLMs.

    • simonw 42 minutes ago
      > Why use someone's project when you can just have the robot write your own?

      I've been thinking about this a bunch recently, and I've realized that the thing I value most in software now isn't robust tests or thorough documentation - an LLM can spit those out in a few minutes. It's usage. I want to use software which other people have used before me. I want them to have encountered the bugs and sharp edges and sanded them down.

      • jart 18 minutes ago
        I value software that reveals knowledge. The frontier LLMs were trained on all the code that institutions had been keeping to themselves. So they're revealing programming know-how on a scale that just wasn't possible with open source. LLMs are the ultimate Prometheus. Information is more accessible and useful now than it's ever been.
        • Antibabelic 5 minutes ago
          I promise you, "the code that institutions had been keeping to themselves" is not nearly as special or good as you are implying here.
      • porridgeraisin 14 minutes ago
        Yep, I realised the same. No one reads docs or goes through tests. Either way, it's easy to write useless tests, and easy to write useless docs. I don't think most even read the code. The difference now is that it has become possible to write useless code.

        So it's just the fact that others have already gone through the motions before I did. That's it really. I suppose in commercial settings, this is even more true and perhaps extends to compliance.

    • skeledrew 25 minutes ago
      LLM access is not yet universally available. There are those who can't exactly afford it, and even those with access face occasional or perennial issues, like Claude outages and general degraded performance over time. For example, a couple of months ago when I had just started using Claude, I was easily making good progress on multiple projects within a week. Nowadays I'm hardly getting through much of anything, as most of the time Claude is just showing spinners, and it also feels like the code quality has taken a nosedive.
    • gausswho 1 hour ago
      That only holds true for the smallest tier of open source projects. Past a certain point of complexity, it's unlikely you can expect the robot to read your mind well enough to provide something of high quality and 'outstanding for just you'.

      The Zig project is certainly far beyond that threshold.

      • 8n4vidtmkvmk 21 minutes ago
        I'm finding this out the hard way. I set out to build a one-page app that I thought would take a day. It's 98% vibe coded at this point. Even with AI implementing everything, it's taken several weekends and many evenings. And not because the AI is doing a bad job; it's just that as I see it come together, I have more and more feature requests. I've got a couple dozen left, but I can't just let the AI chew through them all at once. I'm effectively QA now, and have to make sure everything is just right.
    • bee_rider 37 minutes ago
      Most people don’t have the ability to read code well enough to determine if an LLM output is good or not. And most people don’t have subscriptions to models that can develop non-trivial programs…

      Maybe this will be a real problem in a couple years though.

  • buggymcbugfix 45 minutes ago
    One reason I love writing production code in Ur/Web is that LLMs are incapable of synthesising something even remotely resembling it. Keeps me on my toes.

    I think this is a great policy by the Zig team.

  • jwzxgo 3 hours ago
    I talked to the developers of https://deerflow.tech/ and they pretty much had the same plan: no AI-generated contributions unless they come from a known and trusted developer.
    • mapontosevenths 1 hour ago
      > unless it's coming from a known and trusted developer.

      That's exactly the sketchy part here. They turned down known, working, and tested code that came from a partner (Bun) because of this policy. Code that 4x'd compile speed.

      A general ban makes sense based on their rationalization ("contributor poker"[0]). A total and inflexible ban can lead to a worse outcome for everyone though.

      If a senior, experienced contributor vouches for the code, it shouldn't matter whether they hand-crafted it on stone tablets, generated it with yarrow sticks, or used GPT-3.

      [0] https://kristoff.it/blog/contributor-poker-and-ai/

      • lmm 28 minutes ago
        > If a senior, experienced contributor vouches for the code, it shouldn't matter whether they hand-crafted it on stone tablets, generated it with yarrow sticks, or used GPT-3.

        The flip side of that is that if such a contributor vouches for code that turns out to be of poor quality, it should severely damage their reputation. I've found that far too many "senior" developers will give AI a pass on poor coding practices.

      • JoshTriplett 46 minutes ago
        • superb_dev 40 minutes ago
          A standout paragraph from that thread:

          > Put more simply, we are going to make these enhancements, but hacking them in for a flashy headline isn’t a good outcome for our users. Instead we’re approaching the problem with the care it deserves, so that when we ultimately ship it, we don’t cause regressions.

          These exact changes are already on the roadmap and Bun’s PR is rushing ahead.

        • mapontosevenths 39 minutes ago
          Thanks. That explains away most of my concern.
      • feverzsj 40 minutes ago
        Quite the contrary, Bun's developers don't even understand the language spec. Their slop didn't use the same type resolution semantics as Zig, which makes their implementation exhibit non-deterministic behavior.
  • feverzsj 1 hour ago
    No human should trust any bullshit made by a bullshit machine.