Bots have ruined Reddit, but that is what the owners wanted.
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic, which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts, which increases page views and ad revenue. If it were easy to find an answer, they would make less money.
At this point I think Reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
It's been really sad to watch Reddit decline like this, because it was pretty much the last bastion of the human internet. I hated Reddit back in the day but later got into it for exactly that reason. It's why all our web searches turned into "cake recipe reddit." But boy, did they throw it in the garbage fast.
One of their new features is that you can read AI-generated questions with AI-generated answers. What could the purpose of that possibly be?
We still have the old posts... for the most part (a lot of answers were purged during the protest), but what's left is also slipping away fast for various reasons. Maybe I'll try to get back into the Gemini protocol or something.
I see a retreat to the boutique internet. I recently went back, after a decade away, to a gaming-focused website founded in the late '90s. There are no bots there, as most people have a reputation of some kind.
Given the timing, it has definitely been done to obscure bot activity. But the side effect, denying the usual suspects the opportunity to comb through ten years of your comments to find a wrongthink they can use to dismiss everything you've just said, regardless of how irrelevant it is, is unironically a good thing. I've seen many instances of their impotent rage about it since it was implemented, and each time it brings a smile to my face.
Yes, registering fake views is fraud against ad networks. Ad networks love it, though, because they need those fake clicks to defraud advertisers in turn.
Paying to have ads viewed by bots is just paying to have electricity and compute resources burned for no reason. Eventually the wrong person will find out about this, and I think that's why Google has been acting like there's no tomorrow.
The biggest change Reddit made was ignoring subscriptions and just showing anything the algorithm thinks you will like, resulting in complete no-name subreddits showing up on your front page. Moderators no longer control content for quality, which is both a good and a bad thing, but it means more garbage makes it to your front page.
I can't remember the last time I was on the Reddit front page and I use the site pretty much daily. I only look at specific subreddit pages (barely a fraction of what I'm subscribed to).
These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.
> why would you look at the "front page" if you only wanted to see things you subscribed to?
"Latest" ignores score and only sorts by submission time, which means you see a lot of junk if you follow any large subreddits.
The default home-page algorithm used to sort by a composite of score, recency, and a modifier for subreddit size, so that posts from smaller subreddits didn't get drowned out. It worked pretty well, and users could manage what showed up by following/unfollowing subreddits.
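That kind of composite is easy to picture. Here is a minimal sketch in Python; the exact weights, and especially the subreddit-size damping, are my own invention for illustration, not Reddit's actual formula:

```python
import math
import time

def hot_score(score, created_utc, subreddit_size, now=None):
    """Toy 'hot'-style ranking: log-scaled votes plus recency,
    lightly damped by subreddit size so small communities aren't
    drowned out. All constants are illustrative assumptions."""
    now = now or time.time()
    order = math.log10(max(abs(score), 1))        # diminishing returns on votes
    age_hours = (now - created_utc) / 3600
    recency = -age_hours / 12                     # older posts decay steadily
    size_penalty = math.log10(max(subreddit_size, 10)) / 10  # damp huge subs a bit
    return order + recency - size_penalty
```

With something like this, a fresh post from a 1,000-subscriber niche sub can outrank an equally scored but older post from a million-subscriber default.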
At the moment I am on a personal finance kick, and once in a while I find myself in the Bogleheads subreddit. If you don't know, Bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.
Most of it is people arguing about VOO vs. VTI vs. VT (lol). But people come in with their crazy scenarios, which are all far too varied to be bots, although the answers could easily be given by one!
Steve Huffman is an awful CEO. That said, I've always been curious how much of the rest of the industry (for example, the web-wide practice of autoplaying videos) was constructed to catch up with Facebook's fraudulent metrics. Their IPO was possibly fraud (Zuckerberg is certainly known to lie about things), and we know they lied about their own video metrics, to the point that it's suspected CollegeHumor shut down because of it.
On one hand, we are past the Turing Test threshold if we can't distinguish whether we are talking with an AI, a real human, or the things that were already rampant on the internet: spam and scam campaigns, targeted opinion manipulation, and plenty of other content that wasn't, let's say, the honest opinion of a single person identifiable with an account.
On the other hand, the fact that we can't tell says less about how good AIs are than about how bad most of our (at least online) interaction is. How much System 2 thinking (in the Thinking, Fast and Slow sense) am I putting into these words? How much is just repeating and recombining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions consist of, whether produced directly by humans, by algorithms, or by other means.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, the percentage was higher; but today, at scale, the usage is not so different from what LLMs do.
I don't have strong negative feelings about the era of LLM writing, but I resent that it has taken the em-dash from me. I have long used them as a strong disjunctive pause, stronger than a semicolon. I have gone back to semicolons after many instances of my comments or writing being dismissed as AI.
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
Now I'm actually curious to see statistics regarding the usage of em-dashes on HN before and after AI took over. The data is public, right? I'd do it myself, but unfortunately I'm lazy.
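For what it's worth, this wouldn't be much work: HN comments are exposed through the public Algolia search API. A rough sketch of the counting part (the endpoint and field names are as documented, but pagination limits and coverage are assumptions worth checking before trusting any trend line):

```python
# Sketch: estimate em-dash frequency in HN comments via the public
# Algolia HN Search API. Only em_dash_rate() is exercised here; the
# fetch helper does real network I/O and is shown for completeness.
import json
import urllib.request

API = "https://hn.algolia.com/api/v1/search_by_date"

def fetch_comments(start_ts, end_ts, page=0):
    """Fetch one page of comments created in [start_ts, end_ts) (Unix seconds)."""
    url = (f"{API}?tags=comment&page={page}"
           f"&numericFilters=created_at_i>={start_ts},created_at_i<{end_ts}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["hits"]

def em_dash_rate(comments):
    """Fraction of comments whose text contains at least one em-dash."""
    texts = [c.get("comment_text") or "" for c in comments]
    if not texts:
        return 0.0
    return sum("\u2014" in t for t in texts) / len(texts)
```

Run `em_dash_rate(fetch_comments(...))` over windows from, say, 2020 and 2024 and compare.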
The funny thing is I knew people that used the phrase 'you're absolutely right' very commonly...
They were sales people, and part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.
The other funny thing about em-dashes is that there are a number of HN'ers who use them, and I've seen them called bots. But when you dig deep into their post history, they've had em-dashes 10 years back... Unless they were way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.
That said, when the number of people who talk like that gets too high, the statistical likelihood that they are all human drops considerably.
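As a toy illustration (all numbers invented): if only a small fraction of humans naturally write in an "LLM-like" style, the chance that a whole group of such writers is human by coincidence shrinks geometrically with group size:

```python
# Toy model with made-up numbers: suppose a fraction p of human writers
# naturally use "LLM-like" phrasing. If n independent accounts all use it,
# the probability that chance alone explains it (all n being humans who
# just happen to write that way) is p**n, which collapses quickly.
def chance_all_coincidence(p: float, n: int) -> float:
    return p ** n
```

With p = 5% and ten such accounts, the coincidence explanation is already below one in ten trillion; the toy ignores correlation between writers, which would soften the drop.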
I'm a confessed user of em-dashes (or en-dashes in fonts that feature overly accentuated em-dashes). It's actually kind of hard not to use them if you've ever worked with typography and know your dashes and hyphens. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.
A few writer friends even had a coffee mug with the alt+number combination for em-dash in Windows, given by a content marketing company. It was already very widespread in writing circles years ago. Developers keep forgetting they're in a massively isolated bubble.
> part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.
I can usually tell when someone is leading like this and I resent them for trying to manipulate me. I start giving the opposite answer they’re looking for out of spite.
I've also had AI do this to me. At the end of it all, I asked why it didn't just give me the answer up front. It was a bit of a conspiracy theory, and it said I'd believe it more if I was led to think I got there on my own with a bunch of context, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.
Reddit has a small number of what I might hesitantly call "practical" subreddits, where people can go to get tech support, medical advice, or similar fare. To what extent are the questions and requests being posted to these subreddits also the product of bot activity? For example, there are a number of medical subreddits where verified (supposedly) professionals effectively volunteer a bit of their free time to answer people's questions, often just consoling the "worried well" or providing a second opinion that echoes the first, but occasionally helping catch a possible medical emergency before it gets out of hand. Are these well-meaning people wasting their time answering bots?
I'm a bit scared of this theory; I think it will come true. AI will eat the internet, and then they'll paywall it.
Innovation outside of rich corporations will end. No one will visit forums, innovation will die in a vacuum, and only the richest will have access to what the internet was. Raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.
Much like someone from Schaumburg Illinois can say they are from Chicago, Hacker News can call itself social media. You fly that flag. Don’t let anyone stop you.
Good post, thank you.
May I say Dead, Toxic Internet? With social media adding the toxicity.
Cory Doctorow's theory of enshittification sums up how this process unfolds (look it up on Wikipedia).
I'm curious when we will arrive at a Dead GitHub theory. Looking at the growth of self-hosted projects, it seems many of them are simply AI slop now, or slowly moving in that direction.
I prefer a Dark Forest theory [1] of the internet. Rather than being completely dead and saturated with bots, the internet has little pockets of human activity like bits of flotsam in a stream of slop. And that's how it is going to be from here on out. Occasionally the bots will find those communities and they'll either find a way to ban them or the community will be abandoned for another safe harbour.
To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.
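The first, co-operative layer of this already exists: opting AI crawlers out via robots.txt. The user-agent tokens below are the published ones for several major AI scrapers, though any such list goes stale quickly, and this only stops crawlers that choose to honour it:

```
# robots.txt — ask known AI crawlers not to fetch anything.
# Honoured only voluntarily; determined scrapers ignore it, which is
# why sites layer on user-agent filtering, rate limiting, and
# proof-of-work challenges on top.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: anthropic-ai
Disallow: /
```

The elaborate methods come in once scrapers stop identifying themselves.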
Adding the option to hide profile comments/posts was also a terrible move for several reasons.
> So they allow even more bots to increase traffic which drives up ad revenue.
Isn't that just fraud?
They have definitely made Reddit far worse in lots of ways, but not this one.
Most people probably don't know how to type one, but I think on HN at least half of the users do.
It sucks to do this on Windows (Alt+0151 on the numeric keypad), but on a Mac it's super easy (Option-Shift-Hyphen) and the shortcut makes perfect sense.
What should we conclude from those two extraneous dashes....
Nice article, though. Thanks.
[1] https://en.wikipedia.org/wiki/Dark_forest_hypothesis
Think of the children!!!