Cloudflare Scrubs Aisuru Botnet from Top Domains List

(krebsonsecurity.com)

76 points | by jtbayly 4 hours ago

5 comments

  • arcfour 2 hours ago
    If an automated service is pulling the top 100 domains from CF and naively trusting them, why can't it also pull the categorization information that's right there and make sure none of the categories are "Malware"??? Who would write something like that? It's absolutely believable that the top 100 domains could contain malware domains...because of the nature of botnets and malware.

    That's PEBCAK.
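
The check arcfour describes can be sketched in a few lines. The feed format below (domain plus a set of category labels) is hypothetical, purely for illustration; Cloudflare Radar's real API shapes its ranking and categorization data differently.

```python
# Sketch of the check arcfour describes: never trust a popularity
# ranking without also consulting the category metadata published
# alongside it. The feed format here is hypothetical.

BLOCKED_CATEGORIES = {"Malware", "Botnet C2", "Phishing"}

def safe_allowlist(ranked_domains):
    """Keep only domains whose categories contain nothing blocked.

    ranked_domains: iterable of (domain, set_of_categories) tuples,
    ordered by popularity.
    """
    return [
        domain
        for domain, categories in ranked_domains
        if not (categories & BLOCKED_CATEGORIES)
    ]

# Hypothetical sample of a "top domains" feed:
feed = [
    ("google.com", {"Search Engines"}),
    ("evil-botnet.example", {"Malware"}),
    ("amazonaws.com", {"Technology"}),
]

print(safe_allowlist(feed))  # ['google.com', 'amazonaws.com']
```

Note this still does nothing about arcfour's second point: a domain can be correctly categorized as benign while hosting user-generated content that isn't.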

    • 8organicbits 1 hour ago
      People make mistakes. Security engineers need to understand what sort of mistakes people are making and mitigate that risk. Sweeping it under the rug as silly users making mistakes doesn't protect anyone.
      • monerozcash 1 hour ago
        The automated services using this for security-related purposes are presumably built by "security engineers"; if they're making mistakes like this, they're obviously woefully underqualified.
        • wombatpm 8 minutes ago
          True masters of security realize all software is flawed, and therefore write none.
        • wolf550e 1 hour ago
          Almost nothing is built by security engineers, including security features of security products at security companies.
          • arcfour 12 minutes ago
            I'm a security engineer, I have built things like this, and I made the original comment. A lot of my job revolves around developing automation for security needs.

            Also, many of the top 100 domains serve user-generated content (like AWS/S3). Blindly trusting anything from them just because they are big is so woefully misguided it boggles my mind; I seriously doubt that anyone is actually doing what is described in the article.

        • Uehreka 1 hour ago
          Many people are woefully underqualified; we need to have a working society anyway.
    • charcircuit 33 minutes ago
      Why not include them? What's wrong with having the most-resolved domain at the top? I think it's more interesting to know the actual most-resolved domain than the top of some editorialized list.
  • bradly 2 hours ago
    > We should have two rankings: one representing trust and real human use, and another derived from raw DNS volume.

    Isn't identifying real humans an unsolved problem? I'm not sure efforts to hide the truth that these domains are actually the most requested do anyone any favors. Is there anything using these rankings as an authoritative list, or are they just vanity metrics similar to the Alexa Top Sites rankings of yore? If they are authoritative, then Cloudflare defining "trusted" is going to be problematic, as I would expect them to hide that logic to avoid gaming.

    • iamkonstantin 2 hours ago
      > Isn't identifying real humans an unsolved problem?

      I'm not sure this was ever a problem to begin with. The obsession with "confirm you are human" has created a lot of "bureaucracy" on a technical level without actually protecting websites from unauthorised use. Why not actually bite the bullet and allow automations to interact with web resources instead of bothering humans to solve puzzles 10 times per day?

      > Cloudflare defining "trusted"

      They would love to monetise the opportunity, no doubt.

      • bradly 1 hour ago
        > I'm not sure this was ever a problem to begin with. The obsession with "confirm you are human" has created a lot of "bureaucracy" on a technical level without actually protecting websites from unauthorised use. Why not actually bite the bullet and allow automations to interact with web resources instead of bothering humans to solve puzzles 10 times per day?

        I mostly just let the bots have my sites, but I also don't have anything popular enough that it costs me money to do so. If I was paying for extra compute or bandwidth to accommodate bots, I may have a stronger stance.

        I do feel a burden with my private site, which has a request-an-account form with no captcha or bot-blocking technology. Fake account requests outnumber real ones 100 to 1, but this is my burden as a site owner, not my users' burden. Currently the fake account requests are easy enough to scan, and I think I do a good job of picking out the humans, but I can't be sure, and I suspect this only works because I run small software.

        • jacquesm 1 hour ago
          I send them on endless redirect loops with very slow responses. It costs me very little bandwidth and effectively traps one bot process that then isn't available for useful work. Multiply by a suitably large 'n' and they might even decide to start playing nice.
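
The tarpit jacquesm describes can be sketched as a handler that answers every request with a slow redirect to another random trap URL, so a naive crawler follows the chain forever. The path prefix and delay here are illustrative, not from the comment.

```python
import random
import time

# Sketch of an endless-redirect tarpit: every request into the trap
# namespace gets a slow 302 pointing at another random trap URL.
TRAP_PREFIX = "/archive/"   # hypothetical path namespace for the trap

def tarpit_response(path, delay=10):
    """Return (status, headers) for a request into the trap.

    delay: seconds to stall before answering; serve each hop as
    slowly as you can afford, to tie up the bot's worker.
    """
    next_hop = f"{TRAP_PREFIX}{random.randrange(10**9)}"
    time.sleep(delay)  # slow-walk the response
    return "302 Found", [("Location", next_hop)]

print(tarpit_response("/archive/start", delay=0))
```

The trap only works against clients that follow redirects without a hop limit, but those are exactly the cheap, naive scrapers worth trapping.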
      • nickff 2 hours ago
        >"Why not actually bite the bullet and allow automations to interact with web resources instead of bothering humans to solve puzzles 10 times per day?"

        This is a great idea if you've developed your full stack, but if you're interfacing with others, it often doesn't work well. For example, if you use an external payment processor and allow bots to constantly test stolen credit card data, you will eventually get booted from the service.

        • isodev 1 hour ago
          I think the comment means we have these “institutional” problems that we keep papering over with tricks like captchas, instead of actually addressing why a payment processor would have a problem with that or be unable to handle it in its own way.
        • AnthonyMouse 1 hour ago
          The average normal user would go months to years between needing to update payment info, so why would that require them to solve puzzles 10 times a day?

          That is also notably a completely unnecessary dumpster fire created by the credit card companies. Hey guys, how about an API that will request the credit card company to send a text/email to the cardholder asking them to confirm they want to make a payment to Your Company, and then let your company know in real time whether they said yes? Use that once when they first add the card and you're not going to be a very useful service for card testing.
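
The flow AnthonyMouse proposes can be sketched as below. Everything here is hypothetical: no such network-wide issuer API exists today (3-D Secure is the closest real analogue, and issuer support is uneven), and the `request_confirmation` call is invented for illustration.

```python
# Sketch of the proposed issuer-side confirmation flow: the merchant
# asks the card network/issuer to ping the cardholder directly
# ("Confirm payment to Example Shop?") and gets the answer in real
# time. The issuer object and its API are hypothetical.

def confirm_card_enrollment(issuer, card_token, merchant_name):
    """Ask the issuer to confirm with the cardholder; use once when
    a card is first added, which makes the service useless for bulk
    card testing."""
    answer = issuer.request_confirmation(card_token, merchant_name)
    if answer != "yes":
        raise PermissionError("cardholder declined or did not respond")
    return True

# A stub issuer standing in for the real network:
class StubIssuer:
    def request_confirmation(self, card_token, merchant_name):
        # A real issuer would text/email the cardholder here and
        # block until they respond or a timeout expires.
        return "yes"

print(confirm_card_enrollment(StubIssuer(), "tok_123", "Example Shop"))
```

As the follow-up comments note, the scheme only kills card testing if every issuer on the network is required to support it; otherwise bots simply test the cards that don't.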

          • CamouflagedKiwi 1 hour ago
            Isn't that basically 3DSecure / Verified by Visa?
            • AnthonyMouse 1 hour ago
              It's what those things should have been.

              What you need is for all card issuers to be required to implement it by the network. Otherwise you'll still have people showing up to test all the cards that don't support it and the payment processors would still kick you off for that.

  • chrismorgan 2 hours ago
    > Aisuru switched to invoking Cloudflare’s main DNS server — 1.1.1.1

    I don’t suppose they use DNS to find their command-and-control servers? It’d be funny if Cloudflare could steal the botnet that way. (For the public good. I know that actually doing such a thing would raise serious concerns. Never know, maybe there would be a revival of interest in DNSSEC.) I remember reading a case within the last few years of finding expired domains in some malware’s list of C2 servers, and registering them in order to administer disinfectant. Sadly, IoT nonsense probably can’t be properly fixed, so they could probably reinfect it even if you disinfected it.

    • Vespasian 2 hours ago
      I wonder whether by now the botnets have moved on to authenticating C2 servers and using fallback methods if the malware discovers an endpoint to be "compromised"
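
The C2 authentication Vespasian wonders about is commonly done by having the bot accept only commands carrying a valid MAC or signature under a key baked into the binary, so a sinkholed or seized domain can't issue commands. A minimal sketch with HMAC (key and message format are illustrative):

```python
import hashlib
import hmac

# Key shipped inside the bot binary; anyone who controls the C2
# domain but lacks this key cannot produce valid command tags.
EMBEDDED_KEY = b"key-baked-into-the-binary"  # illustrative

def command_is_authentic(command: bytes, tag: bytes) -> bool:
    """Accept a command only if its HMAC-SHA256 tag verifies."""
    expected = hmac.new(EMBEDDED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

good_tag = hmac.new(EMBEDDED_KEY, b"ddos stop", hashlib.sha256).digest()
print(command_is_authentic(b"ddos stop", good_tag))      # valid command
print(command_is_authentic(b"ddos stop", b"\x00" * 32))  # forged tag
```

Real botnets tend to use public-key signatures rather than a shared symmetric key, since a shared key can be extracted from any captured sample and then used to hijack the whole botnet.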
      • monerozcash 1 hour ago
        That's been happening for well over 20 years, and I'm sure there are even earlier examples.
    • vpShane 1 hour ago
      This wouldn't raise serious concerns. Ask the customers/community beforehand, in some form of poll, whether they agree with doing it, then just do it. At the end of the day DNS is a million years old and outdated, and the mission is to help make a better internet. If Cloudflare straight up asked us all whether it was cool to modify their DNS servers to identify/disrupt malicious use by botnets, I'd agree. People not using DoH or internal tools like dnscrypt-proxy need to get with the times.

      There's ethical ways to do things: https://www.justice.gov/archives/opa/pr/court-authorized-ope...

      I'm not saying I agree with it, but we're all engineers; the internet and everything built on it was engineered. We put up with script kiddies, hacked computers, and not-so-tech-savvy internet citizens installing Infatica and other malware/proxy services on their devices, because it came bundled in the agreement for some free app where their kids could 'pop bubbles' on their parents' phones, or because some free desktop app included it. Those IP addresses and IP scores blend in with regular human traffic, which makes them hard to block. Ain't nobody got time for whack-a-mole internet; families and businesses will need to secure their networks.

      Honestly, I'd be OK with an up-to-date live list of all known infected IP addresses, with a last-seen timestamp, what they were flagged for, and who detected them as a bot/malicious IP, so I could just use some simple ipsets and iptables. A simple script that disallowed things like posting and other interactions, while still letting them see content on websites, would be ideal. Add a little banner: 'You're infected, or somebody on your network is infected; here's how to fix it, practice better security, and learn more.'

      These services switched from DDoS attacks to renting out their hacked network space. They don't need to be making bank at our expense.
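
A dry-run sketch of the workflow vpShane describes: take a feed of known-infected IPs and emit the `ipset`/`iptables` commands that would block them. The feed format (ip, last-seen timestamp, reporter) and the set name are hypothetical; the commands are printed rather than executed.

```python
import ipaddress

# Turn a hypothetical infected-IP feed into firewall commands.
# Dry run only: commands are returned as strings, not executed.

def ipset_commands(feed, setname="botnet-infected"):
    cmds = [f"ipset create {setname} hash:ip -exist"]
    for ip, last_seen, reporter in feed:
        ipaddress.ip_address(ip)  # raises ValueError on a malformed entry
        cmds.append(f"ipset add {setname} {ip} -exist")
    # A single iptables rule references the whole set:
    cmds.append(
        f"iptables -A INPUT -m set --match-set {setname} src -j DROP"
    )
    return cmds

# Hypothetical feed entries (RFC 5737 documentation addresses):
feed = [
    ("198.51.100.7", "2025-11-10T12:00:00Z", "honeypot-a"),
    ("203.0.113.42", "2025-11-10T11:58:00Z", "radar"),
]
for cmd in ipset_commands(feed):
    print(cmd)
```

Dropping traffic outright is the blunt version; the selective behaviour the comment asks for (block posting, allow reading, show a banner) would have to live at the application layer, keyed off the same set.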

  • blibble 2 hours ago
    Given the anti-user behaviour of modern Windows, shouldn't microsoft.com be flagged as malware too?

    After yesterday's reveal[1], facebook should certainly be flagged as "scams".

    [1]: https://news.ycombinator.com/item?id=45845772

    • politelemon 1 hour ago
      If sentiment and personal bias were factors in classifying malware, then I'd be rid of all of FAANG and social media.
  • knowitnone3 1 hour ago
    Microsoft should be classified as malware