20 comments

  • s20n 1 day ago
    While I know that it may have been a security liability, I'm particularly sad that they're removing the AX.25 module from the kernel.

    > and since nobody stepped up to help us deal with the influx of the AI-generated bug reports we need to move it out of tree to protect our sanity.

    This thread from the linux-hams mailing list [2] has more insight into this decision. I guess the silver lining is that more modern protocols (in userspace), written in modern languages, will become the norm for ham radio on Linux now.

    [1] : <https://lwn.net/ml/all/20260421021824.1293976-1-kuba@kernel....>

    [2] : <https://lore.kernel.org/linux-hams/CAEoi9W5su6bssb9hELQkfAs7...>

    • ajross 1 day ago
      > more modern protocols (in userspace)

      That's really it. The list of things that "need" to be in the kernel is shrinking steadily, and the downsides of having C code running in elevated privilege levels are increasing. None of that is about LLMs at all, except to the extent that it's a notable inflection point in a decades-scale curve.

      The future, as we basically all agree, puts complexities like protocol handling and state in daemons, and leaves only the hardware, process, and I/O management in the kernel.

      Basically, Tanenbaum was right about the design but wrong about the schedule and path to get there.

      • sylware 12 hours ago
        Kernel/user-space context switching has to be seriously evaluated in high-performance contexts.

        And this is very dependent on the hardware programming interface of the devices.

        Look at AMD, who are investigating hardware user-level queues for their GPUs (dunno if this is possible because of VMID stuff).

        • ajross 10 hours ago
          > kernel/user space context switching, in high performance context has to be seriously evaluated.

          Of course. But that's true for all userspace solutions too, and there are many options for async APIs (io_uring et al.) which work to address that.

          The point isn't that you want the IP stack (or whatever) to be passing stuff around on unix domain sockets for every packet. It's that you want it running in its own memory domain.

      • anthk 1 day ago
        Except doing TCP/IP in userspace with separate programs is several times slower than having a proper kernel do it; that's Hurd.
        • rwmj 1 day ago
          I don't think this is actually true (eg. DPDK), but even if it is, you can put the driver in userspace (tun/tap + vfio/libusb/ioport/...) and still use TCP/IP in the kernel.
          • duskwuff 20 hours ago
            Speed certainly isn't an issue for AX.25. The protocol typically runs at <10 kbps; the overhead of processing packets in userspace is negligible.
            • ErroneousBosh 9 hours ago
              It most commonly runs at 1200 bps, mostly carrying APRS these days.

              You can do a neat trick with this if you set up IP over AX.25, particularly with softmodems. Since you've got IP you can do SSH or TLS over it, right? At least, if you set all the timeouts really long, because some of those packets take a while at 120 bytes per second.

              So then you can tune one side's tones to be a little off the normal frequencies, and play them through speakers with the two PCs coupled acoustically. When you ssh from one to the other, you will hear the establishment packets and the flurry of packets for every keypress pingponging backwards and forwards between the two systems.

              Absolutely brilliant for demonstrating how things like TCP work with retries (plug in a mike too, shout some interference), how UDP doesn't, and so on.
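(A back-of-the-envelope sketch of the timing involved; the byte counts below are rough assumptions, not measurements:)

```python
# Rough timing for IP-over-AX.25 at 1200 bps AFSK.
# ~120 usable bytes/s after framing overhead, per the figure above.
GOODPUT_BYTES_PER_SEC = 120

def seconds_to_send(nbytes):
    """Time to push nbytes over the link at the assumed goodput."""
    return nbytes / GOODPUT_BYTES_PER_SEC

# A bare TCP three-way handshake: three frames of roughly 44 bytes each.
tcp_handshake = seconds_to_send(3 * 44)

# A TLS handshake moves a few kilobytes of certificates and key material.
tls_handshake = seconds_to_send(4000)

print(f"TCP handshake: ~{tcp_handshake:.1f} s")
print(f"TLS handshake: ~{tls_handshake:.0f} s")
```

At those rates even the handshakes take whole seconds to half a minute, which is why you have to set all the timeouts really long.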

              • anthk 8 hours ago
                - Lower the MTU

                - Use Mosh instead of SSH

                - Spawn tmux on the remote machine to send fewer bits per session

                I tried Mosh+tmux at 2.7 kbps (and less) when I was using a data plan. It worked perfectly fine, with no delay, or barely any noticeable delay.

        • pixl97 1 day ago
          "Get Pwn3d server times faster!"

          Seemingly we've been writing kernels for years, and they still are full of security holes.

          • anthk 16 hours ago
            p0wned from where? Where's the attack vector?

            Do you realize AX.25 is just something loaded on demand when the user requires it, not by default? Do you know the basics of how systems work below your shiny UIs and IDEs?

            First, the AX.25 modules just lie on disk, harmless; no AX.25 code is loaded unless some user modprobes them to set up a ham-radio stack with HamNet and the like.

            I see far more security issues with the blobs loaded everywhere in a so-called GPLv2 kernel whose tarball almost weighs more in blobs than in libre source code. Yet these LLM bootlickers will happily accept whatever non-free firmware lands under their noses.

            Somehow proprietary Radeon, Nvidia, some Intel audio drivers for SoCs, and the tons of ARM-related firmware blobs are not a security issue. At all. Just kick random bits over the bus without knowing what really happens in the device. Even if some of them have full access to the RAM, the CPU, and the like. That's pretty fine. Ah, yes, IOMMUs and the like; not enough for some cases. Sorry, but these people can't be serious when the actual multi-CPU networked computer is full of opaque bits whose behavior you have no control over at all.

            • ajross 8 hours ago
              > p0wned from where? Where's the vector attack?

              For clarity: the example upthread about pwning was TCP/IP, not AX.25.

              Also the idea that "there are no local exploits in this kernel code because it's not used by the running system" is like the proximate cause of 80% of local privilege escalation vulnerabilities. Seriously?

              • anthk 8 hours ago
                How can I exploit an unloaded module?
                • ajross 8 hours ago
                  ... by loading it? There are many ways to get the kernel to suck in a module you can then bang on over sysfs or whatever API it presents. You can have a local exploit in a binary with CAP_SYS_MODULE, subsystems can be fooled into passing uncooked strings to modprobe, users can be fooled into dropping junk into /etc/modprobe.d (instructions for doing so are pervasive in the embedded world and most users think this stuff is safe), etc...

                  This kind of chicanery is the vanilla pudding of the hacker world. It's everywhere. Suffice it to say that you're simply wrong: NO, it's never OK to argue a subsystem is safe because you personally think it can't be loaded. It 100% can be, that's the easy part.
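(For anyone who wants the opposite guarantee, a common hardening trick in CIS-style baselines is to override the module's install command so modprobe can never actually load it; an illustrative /etc/modprobe.d/ fragment:)

```
# /etc/modprobe.d/disable-ax25.conf (illustrative)
# "blacklist" only stops alias-based autoloading;
# overriding "install" stops explicit modprobe loads too.
blacklist ax25
install ax25 /bin/false
```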

                  • anthk 4 hours ago
                    >users can be fooled into dropping junk into /etc/modprobe.d (instructions for doing so are pervasive in the embedded world and most users think this stuff is safe), etc...

                    Not an issue for AX25 per se.

                    If you can fool a user into running root instructions, it's game over, period.

                    • pixl97 1 hour ago
                      There is a difference between running any instructions and an instruction that would otherwise be considered safe.
        • convolvatron 1 day ago
          that's strictly not true. if I move the code that does TCP from the kernel into the application (not some other daemon, which is perhaps what you're suggesting), then the performance is to the first order the same.

          ok, what are the niggly details. we don't have interrupts, and we're running under the general scheduler, so there may be some effects from not getting scheduled as aggressively.

          we still need to coordinate through the kernel wrt port bindings, since those are global across the machine, but that's just a tiny bit.

          clearly we may be re-opening a door to syn-flooding, since the path to rejection is maybe longer. maybe not, but maybe we can leave the 3-way handshake in the kernel and put the datapath in userspace.

          we probably lose rtt-estimates hanging off of routes.

          none of that suggests 'several times slower'

        • varispeed 1 day ago
          But hardware manufacturers love it! Excuse to sell new faster machines.
          • subscribed 12 hours ago
            With the current lead times and the prices of fast, complicated silicon? I don't think so :)
          • anthk 16 hours ago
            Also, releasing the so-called GPLv2 kernel full of proprietary blobs, where GPUs and even SoCs can take over the whole initialization process (and some devices have talked to the CPU directly since the DMA days; I don't think IOMMUs will be 100% safe for this), is perfectly fine for security, apparently.
    • varispeed 1 day ago
      Oh remember playing with that protocol ages ago. Sad.
  • sscaryterry 1 day ago
    Here's the thing, all of these problems are pre-existing. All LLMs are doing is shining a big bright light on it.
    • jononor 6 hours ago
      With enough tokens, all bugs are shallow? :D
    • socratic_weeb 1 day ago
      We are talking about drivers for devices from the last century which nobody even uses anymore. This isn't "shining light" on important pre-existing issues that have been ignored for too long or something, it isn't helping.

      The only problem here, if any, is the false sense of confidence given by LLMs to people who have no business touching kernel code.

      • tssva 1 day ago
        If they are drivers for devices from the last century which nobody even uses anymore why keep them in the kernel when they, as shown by LLMs, are potential sources of security vulnerabilities? Seems more logical to take the action being taken and remove them.
        • skydhash 1 day ago
          I like OpenBSD for that. If there's something that no one uses and wants to maintain, it's removed. That happened with the bluetooth driver. It was too complicated and no one missed it enough to add it back.
      • brookst 1 day ago
        You don’t see any issue with insecure drivers for obsolete hardware, exactly the kind of thing that is most prevalent in industrial-control-type applications?

        Stuxnet should have been a wakeup call to everyone: the boring, obsolete, “safe because nobody browses TikTok on it” hardware is exactly the highest risk.

      • cestith 1 day ago
        If you only need 100 Mbps the 3Com 3c905 series of PCI Ethernet cards are still some of the most reliable hardware you can put into your industrial PC that still has PCI slots. ISDN and ax25 are still really useful if you have low-bandwidth but low-latency needs like sensor data.

        Now those are niche use cases, but they do exist. However, what’s wrong with removing insecure code for these niche cases? Either someone will step up to actually maintain it, or newer versions of the kernel will be leaner and have less historical cruft.

      • jwitthuhn 1 day ago
        If the LLMs run by these people are turning up real bugs then their confidence in touching kernel code seems pretty earned, imo.
      • segmondy 20 hours ago
        What do you mean, nobody? There are a few of us using it, and we are completely broken when support is taken away.
        • literalAardvark 4 hours ago
          Sounds like someone should budget for an oss sponsorship
  • cozzyd 1 day ago
    Seems like there should be some "level of maintenance" metric for modules and distros can pick which they include by default and which are packaged separately based on what they care about. Arch users will build the world but an EL user who needs an unmaintained module would have to explicitly install kmod-isdn or even build it themselves
    • doubled112 1 day ago
      Red Hat already removes a bunch of modules/drivers from the RHEL kernel that they don't consider enterprise.

      Xbox/PS controllers, for example. I believe some old RAID controller and WiFi drivers are removed too. Whatever they don't want to support.

      • rwmj 1 day ago
        (Working for Red Hat) We actually opt devices in to our kernel. A couple of years ago an overzealous kernel maintainer removed the watchdog drivers used by qemu from the kernel and it took me ages to get those added back in.
  • mmsc 1 day ago
    Unmaintained code is a security issue in and of itself, so this is of course a net benefit.
    • xbar 1 day ago
      This can be accurately generalized: code is a security issue in and of itself.
      • brookst 1 day ago
        That’s reductionism, not generalization.

        Generalizations that lose accuracy are not valid. “Ice cream is sweet, and candy is sweet, so food is sweet” is reductive.

      • mey 1 day ago
        Now if only I could get the product team to fully understand that implication.
      • catlifeonmars 1 day ago
        This can be generalized: in and of itself.
  • NooneAtAll3 1 day ago
    can such drivers be moved out of kernel? what exactly stops that?

    why do they even need to be in kernel repo and not brought at/after install time?

    • drewg123 1 day ago
      Linux is actively hostile to out-of-tree drivers. There is no stable driver API, and interfaces change at the drop of a hat. Maintaining an out of tree driver is a constant nightmare where you're always dealing with interfaces changing out from under you.

      I wrote and maintained 10GbE drivers for a small company in the 2000s, and just the SHIM file for our driver on Linux to massage over API differences was well over 1000 lines. I think it was close to the same size as the entire driver for one of the BSDs.

      • terryot 1 day ago
        A counterpoint: I recently asked Claude to port an obsolete ~2010 driver to the latest kernel by asking it to "make it work". A few builds and a few crashes later, I had a working driver, with DMA, modern I/O map protection, etc.

        It's not a nightmare to port drivers anymore.

      • achierius 1 day ago
        GP meant moving the driver into userspace, which is much less painful due to the stable userspace APIs.
        • ahepp 20 hours ago
          I’m not sure the GP did mean that, but I agree it’s a much better solution than maintaining an out-of-tree kernel module, which is generally a really bad idea
    • gslepak 1 day ago
      > why do they even need to be in kernel

      People have been asking this question since Linux was first invented…

      • s20n 1 day ago
        I could never in a million years have imagined that LLM-slop driven fuzzing would become the ultimate vindication for the microkernel philosophy
  • notepad0x90 18 hours ago
    I think "won't fix" should be normalized, even for critical security bugs.

    Software exists to be used, not to be secure. These are not useless pieces of code. If they were useless, then no one is using them, so there is no security risk. This is equivalent to turning off (or destroying) a computer to secure it.

    Alternatively (and I'm disappointed Linux/Greg K.H. haven't done this), drivers and other isolated modular code should be marked as unmaintained, and for those with reported vulnerabilities, a similar config flag set. Require explicit acknowledgement by kernel builders to include them in the build config.
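(A hypothetical sketch of what that could look like in Kconfig terms; the gate symbol is invented for illustration, and this is not how mainline currently marks these drivers:)

```
# Illustrative only: an explicit opt-in gate for unmaintained code.
config UNMAINTAINED_DRIVERS
	bool "Allow building unmaintained drivers"
	default n
	help
	  Acknowledge that drivers marked unmaintained receive no
	  security support before any of them can be selected.

config AX25
	tristate "Amateur Radio AX.25 Level 2 protocol (UNMAINTAINED)"
	depends on UNMAINTAINED_DRIVERS
```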

    Things have been trending badly with Linux in this area; it feels like it's lost its original calling and is now heavily influenced by PR and corporate interests. The desktop Linus used in the '90s to write Linux should be able to run the current Linux kernel. But it doesn't even support the CPU architecture any more!

    Some of us have perfectly good old hardware we can put to modern (non-networked) use, but we have to either use NetBSD (if it supports the task/program), or generate more e-waste, dump the hardware in the bin, buy yet another RPi, and waste money and resources. At least, so long as it's a simple use case that doesn't require modern software, you can slap an old version of Linux on it; but in my experience stability was more of an issue for older drivers then than it is today, so Windows 98 or XP is sometimes a better choice on x86.

  • KJs6ZxELzQM37O 1 day ago
    A lot of money seems to be going into finding bugs in open source projects right now... maybe they could spend just a little bit of it on people to fix those bugs.
    • cyanydeez 1 day ago
      You're arguing to pay your taxes instead of spending money on buying politicians.
  • segmondy 20 hours ago
    Seems there should be a "hobbyist kernel" with all that kernel code, you run at your own risk but get all the toys for your obscure use cases.
    • notepad0x90 18 hours ago
      There really shouldn't, people don't have to include things in their kernel builds they don't need.
  • ferguess_k 1 day ago
    Are we already in the time, or close to the time, that well-trained LLMs are more efficient in finding security holes than all but the best developers out there, even for OS kernel code? Can someone educate me on this?
    • stratos123 1 day ago
      In terms of quantity, definitely yes (a single person managing a swarm of Opusi can already find many more real bugs than a security researcher, hence the rise in reports).

      In terms of quality ("are there bugs that professional humans can't see at any budget but LLMs can?") - it's not very clear, because Opus is still worse than a human specialist, but Mythos might be comparable. We'll just have to wait and see what results Project Glasswing gets.

      Either way, cybersecurity is going to get real weird real soon, because even slightly-dumb models can have a large effect if they are cheap and fast enough.

      EDIT: Mozilla thinks "no" to the second question, by the way: "Encouragingly, we also haven’t seen any bugs that couldn’t have been found by an elite human researcher.", when talking about the 271 vulnerabilities recently found by Mythos. https://blog.mozilla.org/en/firefox/ai-security-zero-day-vul...

      • chuckadams 1 day ago
        > Opusi

        The plural of "Opus" is "Opera". Might be a tad confusing tho :)

        • robocat 1 day ago
          Opuses is also correct English, and clearer in non-academic contexts.

          Opera is the traditional plural from Latin, now perhaps for more scholarly use in English.

          Results from a quick search.

          • chuckadams 1 day ago
            I'll do the faux German thing then: Opusen :)
        • skeledrew 1 day ago
          Wondered for a second "what does that browser have to do with all this?"
      • DanielHB 1 day ago
        There is also a huge surface area of security problems that can't happen in practice due to how other parts of the code work. A classic example is unsanitized input being used somewhere where untrusted users can't inject any input.

        Being flooded with these kind of reports can make the actual real problems harder to see.

        • arcfour 9 hours ago
          They wouldn't be classed as vulnerabilities then, since, you know, there is no vulnerability. Unless you have evidence that most of these issues are unexploitable, but I would be surprised to hear that they were considered vulnerabilities in that case.
    • yk 1 day ago
      My theory is that a lot of security bugs are low-hanging fruit for LLMs, in the sense that finding them is tedious but not-that-hard pattern matching. (Let's see, the free occurs in foo(), so if I trigger bar() after foo() then I have a use-after-free, which should be possible if I trigger an exception in baz::init().)
    • toast0 1 day ago
      Efficiency in finding isn't really the metric to consider. I'm sure a good security person could look at these and find the bugs, but nobody did.

      IMHO, if you were to do a manual audit of the Linux kernel, the first thing to do is exclude all the stuff you're never going to run, because why spend time on it?

      These scans are looking at everything, because once you set it up, the incremental cost to look at everything is not so bad.

      This is going to push lesser used stuff out of the mainline, which sucks for people who were using it, but is better for everyone else.

    • LeCompteSftware 1 day ago
      "Even for OS kernel code" is doing a lot of work. What you really mean is "legacy C code" and yes, since about 6 months ago these systems have gotten reliable enough that they are basically superhuman at identifying buffer overflows / etc. A remarkable number of these bugs are fixed by adding a (if (length > MAX_BUFFER) {return -1;}), just the classic C footguns. Even as a huge LLM skeptic I am not too too surprised that these systems might be superhuman at finding tedious tricky stuff like this.

      At the same time, a lot of these bugs were in places that people weren't looking because it's not actually important. This kernel code had already been a longstanding problem in terms of low-effort bot-driven security reports and nobody had any interest in maintaining it. So this was more LLM-assisted technical management than LLM-assisted security, it finally made a situation uncomfortable enough for the team to do something about it.

      Another example: Mythos found a real bug in FreeBSD that occurs when running an NFS server with a public connection. But... who on earth is doing that? I would guess 99.9% of FreeBSD NFS installations are on home LANs. More importantly, Anthropic spent $20,000 to find this bug. Just think in terms of paying a full-time FreeBSD dev for a month, and that's what they find: I'd say "ok, looks like FreeBSD has a pretty secure codebase; let's fix that stupid bug, stop wasting our money, and get you on a more exciting project."

      I do think anyone who has a legacy open-source C/C++ codebase owes it to their users to run it by Claude/Codex, check your pointers and arrays, make sure everything looks ok. I just wish people were able to discuss it in proper context about other native debugging tools!

    • jcalvinowens 1 day ago
      My experience with these tools is that they generate absolutely enormous amounts of insidiously wrong false positives, and it actually takes a decent amount of skill to work through the 99% which is garbage with any velocity.

      Of course some people don't do that, and send all the reports anyway... and then scream from the hilltops about how incredible LLMs are when by sheer luck one happens to be right. Not only is that blatant p-hacking, it's incredibly antisocial.

      It's disingenuous marketing speak to say LLMs are "finding" any security holes at all: they find a thousand hypotheticals of which one or two might be real. A broken clock is right twice a day.

      • NitpickLawyer 1 day ago
        Your experience seems to be at least 3-6 months old. Long time kernel maintainers have recently written on this subject. They say that ~3 months ago the quality and accuracy of the reports crossed a threshold and are now legitimately useful.
        • jcalvinowens 1 day ago
          The experience I'm describing was two weeks ago.

          Yes, what we see coming out of the bottom of the funnel is now a little better. But it's sort of like reading day-trading blogs: nobody shares their negative results, which in my direct experience are so bad they almost negate any investigative benefit. I also think part of this is that a small set of very prolific spammers were sufficiently discouraged to stop.

      • Legend2440 1 day ago
        This is incorrect. Here's the curl maintainer talking about dozens of bugs found using LLMs: https://daniel.haxx.se/blog/2025/10/10/a-new-breed-of-analyz...
        • warkdarrior 1 day ago
          From the curl blog post:

          > "Remarkably few of them complete false positives."

          • defmacr0 1 day ago
            That's worse than a report that can be easily dismissed
        • random__duck 20 hours ago
          You mean the same curl that has been crying from the hilltops that it is being DDoSed with slop security reports? That curl?
      • binaryturtle 1 day ago
        I used GitHub's Copilot once and let it check one of my repositories for security issues. It found countless issues (like 30 or 40 or so for a single PHP file of some ~400 lines). Some even sounded reasonable enough that I had a closer look, just to make sure. In the end, none of them was an issue at all. In some cases it invented problems which would have forced me to add wild workaround code around simple calls into the PHP standard library. And that was the only time I wasted my time with that. :D
      • bri3d 1 day ago
        I strongly disagree with this take, and frankly, this reads like the state of "research" pre-LLMs where people would run fuzzers and scripted analysis tools (which by their nature DO generate enormous amounts of insidiously wrong false positives) and stuff them into bug bounty boxes, then collect a paycheck when one was correct by luck.

        Modern LLMs with a reasonable prompt and some form of test harness are, in my experience, excellent at taking a big list of potential vulnerabilities and figuring out which ones might be real. They're also pretty good, depending on the class of vuln and the guardrails in the model, at developing a known-reachable vulnerability into real exploit tooling, which is also a big win. This does require the _slightest_ bit of work (ie - don't prompt the LLM with "find possible use after free issues in this code," or it will give you a lot of slop; prompt the LLM with "determine whether the memory safety issues in this file could present a security risk" and you get somewhere), but not some kind of elaborate setup or prompt hacking, just a little common sense.

    • bri3d 1 day ago
      "More efficient" of course has many axes (cost, energy consumption, manual labor requirement vs cost of human, time, quality, etc.). However, as a long-time reverse engineer and exploit developer who has worked in the field professionally, I would say LLMs are now useful; their utility exceeds that which was previously available. That is, LLM assisted exploit discovery and especially development is faster, more efficient, and ultimately cheaper than non-LLM assisted processes.

      What commenters don't seem to understand is that especially CVE spam / bug bounty type vulnerability research has always been an exercise in sifting through useless findings and hallucinations, and LLMs, used well, are great at reducing this burden.

      Previously, a lot of "baseline" / bottom-tier research consisted of "run fuzzers or pentest tools against a product; if you're a bottom feeder, just stuff these vulns all into the submission box; if you're more legit, tediously try to figure out which ones are reachable." LLMs with a test harness do an _amazing_ job of reducing this tedium; in the memory-safety space, "read across 50 files to figure out if this UAF might be reachable", or in the web space, "follow this unsanitized string variable to see if it can be accessed by the user", are tasks that LLMs with a harness are awesome at. The current models are also about 50% of the way to "make a chain for this CVE", depending on the shape of the CVE (they usually get close given a good test harness).

      It seems that the concern with the unreleased models is pretty much that this has advanced once again from where it is today (where you need smart prompting and a good harness) to the LLM giving you exploit chains in exchange for "giv 0day pl0x," and based on my experience, while this has got an element of puffery and classic capitalist goofiness to it ("the model is SO DANGEROUS only our RICHEST CUSTOMERS can have it!"), I believe this is just a small incremental step and entirely believable.

      To summarize: "more efficient than all but the best" comes with too many qualifiers, but "are LLMs meaningfully useful in exercising vulnerabilities in OS kernel code," or "is it possible to accelerate vulnerability research and development with LLMs" - 100% absolutely.

      And you don't have to believe one random professional (me); this opinion is fairly widespread across the community:

      https://sockpuppet.org/blog/2026/03/30/vulnerability-researc...

      https://lwn.net/Articles/1065620/

      etc.

    • olmo23 1 day ago
      We are there. This is pretty much the reason why Mythos isn't being released publicly.
      • pocksuppet 1 day ago
        The reason Mythos isn't being released publicly is to drive up Anthropic's valuation by making big promises.
        • dymk 1 day ago
          https://blog.mozilla.org/en/privacy-security/ai-security-zer...

          > As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.

          • SleepyMyroslav 1 day ago
            I understand that they are trying to say that it is getting better... 271 vulnerabilities is a lot. I have been using FF for a long time. I am now considering if using it at all was a mistake or not. And I think it was.
          • warkdarrior 1 day ago
            So you're saying Mozilla is in on it, hyping up Anthropic. Are they getting a kickback?
            • dymk 1 day ago
              What I’m saying is the youths call this “smoking copium”
              • pocksuppet 1 day ago
                Both can be true at once. It can be good at finding vulnerabilities, and also overhyped to pump the stock price.
            • bitwize 1 day ago
              What they're saying is that the capabilities of Mythos to find overlooked vulnerabilities in large code bases are real.

              We're in a new era for security. You're either using AI to catch vulnerabilities in your code... or someone else is, and 0wning you.

    • traceroute66 1 day ago
      > well-trained LLMs are more efficient in finding security holes than all but the best developers out there, even for OS kernel code?

      No.

      Like everything else an LLM touches, it is prone to slop and hallucinations.

      You still need someone who knows what they are doing to review (and preferably manually validate) the findings.

      What all this recent hype carefully glosses over is the volume of false-positives. I guarantee you it is > 0 and most likely a fairly large number.

      And like most things LLM, the bigger the codebase the more likely the false-positives due to self-imposed context window constraints.

      Its all very well these blog posts saying "LLM found this serious bug in Firefox", well yeah but that's only because the security analyst filtered out all the junk (and knew what to ask the LLM in the prompt in the first place).

      • stratos123 1 day ago
        A 0% false-positive rate is not necessary for LLM-powered security review to be a big deal. It was worthless a few months ago, when the models were terrible at actually finding vulnerabilities and so basically all the reports were confabulated, with a false positive rate of >95%. Nowadays things are much better - see e.g. [1] by a kernel maintainer.

        Another way to see this is that you mentioned "LLM found this serious bug in Firefox", but the actual number in that Mozilla report [2] was 14 high-severity bugs and 90 minor ones. However you look at it, it's an impressive result for a security audit, and I doubt that the Anthropic team had to manually filter out hundreds-to-thousands of false positives to produce it.

        They did have to manually write minimal exploits for each bug, because Opus was bad at it[3]. This is a problem that Mythos doesn't have. With access to Mythos, to repeat the same audit, you'd likely just need to make the model itself write all the exploits, which incidentally would also filter out a lot of the false positives. I think the hype is mostly justified.

        [1] https://lwn.net/Articles/1065620/

        [2] https://blog.mozilla.org/en/firefox/hardening-firefox-anthro...

        [3] https://www.anthropic.com/news/mozilla-firefox-security

        • traceroute66 1 day ago
          > A 0% false-positive rate is not necessary

          To be clear, I'm not expecting a 0% false-positive rate, because that will always be impossible with any LLM.

          However, to greatly over-simplify what I already said ...

          The presence of >0 false-positives means you still need someone who knows what they are doing behind the keyboard.

          The presence of an LLM, no matter how good, will never remove the need for a human with domain expertise in security analysis.

          You cannot blindly fix stuff just because the LLM says it needs fixing.

          You cannot report stuff just because the LLM says it needs reporting.

          There may well be scope for LLM-assisted workflows, but WHO is being assisted is a critical part of the equation.

          That is the fundamental point I am making.

          • literalAardvark 4 hours ago
            > You cannot blindly fix stuff just because the LLM says it needs fixing.

            > You cannot report stuff just because the LLM says it needs reporting.

            Not today, maybe. Though with a good enough harness we're pretty close to seeing that already.

            But in 6 months after another halving of the error rate? I wouldn't be so sure.

          • charleslmunger 7 hours ago
            If you can trigger address sanitizer from input outside the program, and the program may interact with untrusted input, isn't that always worth reporting and fixing?
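
            As a minimal C sketch of that class of bug (all names hypothetical, not taken from any real driver): a payload length read from the untrusted input itself, where the bounds check below is exactly what an ASan-under-fuzzing report tells you is missing.

            ```c
            #include <stdio.h>
            #include <string.h>

            /* Hypothetical packet parser: the payload length comes from the
             * untrusted input itself. Delete the bounds check and a crafted
             * packet makes memcpy() read past the buffer -- the kind of
             * defect a fuzzer plus AddressSanitizer reliably surfaces. */
            static int parse_packet(const unsigned char *pkt, size_t pkt_len,
                                    unsigned char *out, size_t out_cap)
            {
                if (pkt_len < 1)
                    return -1;
                size_t payload_len = pkt[0];      /* attacker-controlled */
                if (payload_len > pkt_len - 1)    /* the check such bugs lack */
                    return -1;
                if (payload_len > out_cap)
                    return -1;
                memcpy(out, pkt + 1, payload_len);
                return (int)payload_len;
            }

            int main(void)
            {
                unsigned char good[] = { 3, 'a', 'b', 'c' };
                unsigned char evil[] = { 200, 'x' };  /* claims 200 bytes, carries 1 */
                unsigned char buf[16];

                printf("%d\n", parse_packet(good, sizeof good, buf, sizeof buf));
                printf("%d\n", parse_packet(evil, sizeof evil, buf, sizeof buf));
                return 0;
            }
            ```

            With the `payload_len > pkt_len - 1` check removed, the second packet over-reads by 199 bytes; under `-fsanitize=address` that is a deterministic crash, which is what makes such findings worth triaging no matter who (or what) reported them.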
  • tristor 1 day ago
    Realistically, that list of components are mostly things that have not been used in modern computing devices for over a decade. Nothing prevents someone from providing a module from out of the kernel tree to ship these drivers or delivering some of these capabilities in user space, and if they are unused and unmaintained I would rather they're not shipped in the kernel.

    Be real with yourself: do you know anyone using ISA or PCI in 2026? Everything is built on PCI-E except in specific industrial settings or on ancient hardware that's only relevant for retrocomputing. Is anyone using the ATM network protocol anymore? MPLS and MetroE mostly replaced ATM, and now MPLS is being largely supplanted by SD-WAN technologies and normal Internet connections. I have been doing networking nearly my entire career in some capacity; the last time I touched X.25 or Frame Relay was in the early 2000s, the last time I touched ATM was in the mid-2000s, and the last time I touched ISDN was in the mid-2010s — and that was an IDSL setup, which is itself a dead technology. The last laptop I owned that had a PCMCIA card slot was manufactured in 2008.

    I don't want to see these capabilities completely disappear, but there's no reason they should ship in the mainline kernel in 2026. They should be separated kernel modules in their own tree.

    • badsectoracula 1 day ago
      > They should be separated kernel modules in their own tree.

      The main issue with this is that, by being in a separate tree, they do not benefit from updates when kernel APIs break. After all, the main benefit that kernel devs have cited over the years for keeping drivers in the kernel instead of in separate trees is that the code gets updated whenever internal APIs change.

    • Krutonium 1 day ago
      I actually use a capture card on PCI but I'm well aware I'm unusual.
  • rasz 1 day ago
    Most if not all of the listed stuff could be converted to used mode code.
    • dbdr 1 day ago
      *user-mode code.
  • staticassertion 1 day ago
    They can't maintain the code so they are no longer going to maintain the code.
    • traceroute66 1 day ago
      > They can't maintain the code so they are no longer going to maintain the code.

      Yes, I don't see the point of maintaining technical debt just for the sake of it.

      The security environment in 2026 is such that legacy unmaintained code is a very real security risk — a source of obscure zero-days that can be exploited to gain a foot in the door.

      Reading through the list I don't see it being an issue for the overwhelming majority of Linux users.

      Who, for example, still uses ISDN in 2026? Most telcos have stopped all new sales, and existing ISDN circuits will be forcefully disconnected within 3–5 years as the telcos complete their FTTP build-outs and the copper network is subsequently decommissioned.

      • devmor 1 day ago
        > Who, for example, still uses ISDN in 2026?

        Most TV and radio stations.

        • traceroute66 1 day ago
          > Most TV and radio stations.

          I doubt it. And as I said, telcos have ceased new sales of ISDN and will be shutting down copper networks within 3–5 years.

          Therefore, if there are TV and radio stations still using it, they will be forced to stop by circumstance, i.e. they will find their ISDN ceases working after the telco shuts down the kit in the exchange.

          • devmor 1 day ago
            You can doubt it all you want, ISDN is used internally in broadcast all over the world. Telcos shutting it down has nothing to do with them and won’t affect them.

            Losing support in software however, does.

            • traceroute66 1 day ago
              >You can doubt it all you want, ISDN is used internally in broadcast all over the world

              Since you claim to be a domain expert, give me hard named examples with independently verifiable links. At this stage I want facts, not anecdotes.

              Because right now, my semi-educated guess is they are all using IP-based streaming codecs and protocols for remote contributions, outside broadcast, studio links and pretty much everything else under the sun.

              • mike_hearn 17 hours ago
                No, he's right. I have a friend who does voiceover work and is an announcer for the UK's Channel 4. He does all his work from home using an ISDN link. It's a huge pita for him because the telcos indeed don't want to know, but it's the usual story with legacy workflows.

                I think it's also a fully switched system, so you are guaranteed bandwidth with no packet drops or buffering, which is clearly useful for broadcast work.

                • traceroute66 14 hours ago
                  > No, he's right. I have a friend who does voiceover work and is and announcer for the UK Channel 4. He does all his work from home using an ISDN link.

                  But how much is this to do with either Channel 4 not supporting (in the "assistance" sense of the word, rather than "interop") his move to IP or potentially his personal reluctance to change ("ain't broke don't fix it" mindset).

                  Given he is in the UK and the incumbent telco (BT) are switching off ISDN in 2027, I really suspect there is more than meets the eye to your friend's story.

                  I am not seeking to judge, I just feel that, realistically, it is highly unlikely that at this late stage (1 year to go to 2027) there really is no option other than ISDN when collaborating with Channel 4...

                  The reason I say this is that even the briefest of internet searches throws up hard, publicly available evidence that the broadcast world has indeed moved on...

                  Way back in 2008 the BBC were already investigating options to move away from ISDN...[1]

                  ... and evidence is out there the BBC are using SIP for critical things like remote Radio contributions[2]

                  > guaranteed bandwidth with no packet drops or buffering

                  You can absolutely get this on IP.

                  [1] https://downloads.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP170... [2] https://support.inquality.com/kb/faq.php?id=144

                  • mike_hearn 12 hours ago
                    I assume they have a migration plan or have already migrated. I don't know; he told me this years ago. It wasn't his choice, it is/was just the standard in the industry. I'm sure they're moving to IP at the moment — I was just pushing back on the idea that broadcast doesn't use it. If they've moved away from it, it's a relatively recent change (the last five years or so).
                    • traceroute66 11 hours ago
                      > If they've moved away from it, it's a relatively recent change (last five years or so).

                      So the TL;DR is we are not disagreeing then ? ;)

                      I never expressed any doubt that traditionally ISDN absolutely was the lifeblood of broadcast, there is zero doubt about that.

                      What I am saying is that was then and now is now. We are now sitting here in 2026 and the world of comms has moved on dramatically and the broadcast world has moved along with it and that those people still clinging on to legacy ISDN will be forced to shift to IP-based technologies because they will be forcibly disconnected by their telcos very soon (1–5 years).

                      The reality is also that here in 2026 we live in a world where (a) you have a 4K TV and high-end audio system in your home, so there is a natural limit on what utility ISDN has in this world, and (b) the general public is increasingly consuming the media produced by the broadcaster via IP means (streams over 5G-IP to mobile, streams over IP to Apple TV boxes)... so if a broadcaster can escape the unnecessary complexity (and cost) of transcoding ISDN-received content to IP and shift to an IP-to-IP environment, why would they not want to do that?

              • anthk 16 hours ago
                Telcos still use tons of legacy code and even encapsulate internet connections over media standards.

                If you have that nickname you would already know that, for sure.

        • dezgeg 1 day ago
          Do they use it via the mainline Linux kernel's ISDN drivers though (and not something proprietary)?
    • sigmoid10 1 day ago
      Seems like this should have happened anyways and LLMs just finally forced them to admit it.
      • bastawhiz 1 day ago
        You're being downvoted but I think you're right in a lot of ways. If you read through the patches for some of the removals, the reasons come down to:

        - Nobody is familiar with the code

        - Almost all of the recent fixes are from static analysis

        - Nobody is even sure if anyone uses the code

        This feels a lot like CPython culling stdlib modules and making them pypi packages. The people who rely on those things have a little bit of extra work if they want a recent kernel version, and everyone else benefits (directly or indirectly) by way of there being less stuff that needs attention.

    • fluidcruft 1 day ago
      It's an interesting form of tree shaking.

      The overlap of bugs being found, nobody caring enough to bother reading the reports or fixing the code, and nobody caring that the modules are pushed out of main seems good.

    • goalieca 1 day ago
      Maybe attackers would focus on these unused bits for very niche products, but generally no one would waste their time.

      In general, drivers make up the largest attack surface in the kernel and many of them are just along for the ride rather than being actively maintained and reviewed by researchers.

      • catlifeonmars 1 day ago
        Would you say the vast majority are back seat drivers?
    • baq 1 day ago
      and the code is in the training set, so you can trivially[0] ask an LLM to summon it back either from memory or just by asking it to revert the removal commit.

      [0] not trivially if you want to validate if it works
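
    For what it's worth, the revert path is ordinary git rather than LLM recall. A runnable sketch against a throwaway repo standing in for the kernel tree (paths, file contents, and commit messages are all hypothetical stand-ins):

    ```shell
    set -e
    tmp=$(mktemp -d) && cd "$tmp"
    git init -q
    git config user.email demo@example.com
    git config user.name demo

    # Stand-in for the subsystem that got removed.
    mkdir -p net/ax25
    echo "ax25 driver stub" > net/ax25/af_ax25.c
    git add -A && git commit -qm "net: add ax25 stub"

    # The removal commit.
    git rm -q -r net/ax25
    git commit -qm "net: remove AX.25 support"
    removal=$(git rev-parse HEAD)

    # Summon the code back by reverting the removal commit.
    git revert -n "$removal"
    git commit -qm "Revert \"net: remove AX.25 support\""
    cat net/ax25/af_ax25.c
    ```

    Per [0], getting the files back is the easy part; validating that the resurrected code still builds and works against current internal kernel APIs is the part nobody gets for free.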

  • Create 1 day ago
    When an LLM reports a bug, it should be used to fix it on the same occasion. Nobody will bother afterwards.
    • skeledrew 1 day ago
      An LLM being able to find a bug doesn't necessarily equate to the LLM being able to satisfactorily fix the bug. Be happy that the bugs are being uncovered in the first place and brought to the attention of those concerned with their resolution.
    • gpm 1 day ago
      Performing the charity work of discovering bugs before someone evil uses them to cause damage does not somehow obligate you to perform more charity and fix those bugs.
    • the_biot 1 day ago
      That's literally advocating for filling the kernel with AI slop.

      Not that it's not going to happen; Linus is already a vibe-coder, and several maintainers have fallen for the LLM crap as well.

  • anthk 1 day ago
    No meshnet for the people, because of surv^U security.
  • jimmypk 1 day ago
    [dead]
  • canarias_mate 1 day ago
    [dead]
  • bozdemir 1 day ago
    [dead]
  • adnasalk 9 hours ago
    [flagged]
  • anthk 1 day ago
    Damn it, HAM was always an asset and NOT just hamradio-related, but also carried other protocols, such as some mesh networks.

    Can't wait for the AI braindead folks to get collapsed down for the good.