Late last year I tried asking ChatGPT to summarize a collection of 10 researchers' views/findings on a topic and provide representative quotes. It initially looked plausible, but when I checked the links, the quotes were taken from clearly AI-generated summaries of the actual interviews. The paraphrasing was also plausible, but subtly and profoundly incorrect.
I haven't tested this again on the latest models, though, so I'm not sure whether there's been any improvement.
That's more or less how it works. To actually have the system carry out your intention, it would have to use significant hardware resources (and even then, who knows if it would actually work). Alternatively, you would need to break the work into chunks small enough that the hardware the system allocates to you isn't overwhelmed.
A lot of people don't realize this because the work they're having the AI do doesn't need to be true or false; it just has to output media that seems like it fits. The system probably took many shortcuts to keep resource use low while outputting something plausible but false.
And frankly, this is sort of fine as long as you know what it's doing and what its limitations are. Hypothetically, if you broke the task into multiple steps that the system can actually ingest properly, it might reduce the overall time the task takes, maybe even significantly, but not down to a single prompt.
You always have to check your sources because citation laundering is a thing[0].
In addition, most mainstream[1] journalists cite sources more liberally than a scientist should, so the source might not say what the journalist reports. See, e.g., The Argument's piece on the claims about Waymo's poor detection of minorities[2].
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-17/Citogenesis
1: Some independent reporters like Matt Yglesias are more rigorous, though their direct reporting can still be bogus
2: https://www.theargumentmag.com/p/no-waymos-arent-racist
Just as an aside, jumping off this sentence from the article: I am far less tolerant of the practice of naming countries of origin or general locales, rather than specific organizations, in headlines and stories.
Name the organization, and if you want, note in the body where it's from, located, or operating, as it pertains to the organization. For that matter, if you can offer information on the specific locale (Sweden is a big place, after all), you should do that too, unless it really is something more national or international in scope.
People like to blame social media for this kind of bullshit, but social media is just the vector.
Just this week I read a "study" because someone on social media claimed it was done by (public, famous) universities A, B, and C, and that it reported a 30% increase in revenue for the companies that participated in the experiment.
The "study" was commissioned by an interest group (bad sign). It was conducted by people associated with said unis (I didn't check their credentials), and its headline did report the 30% revenue increase.
Said study was about an experiment that ran for a few months. Within those months, revenue was flat (which could be considered good enough for the cause). The 30% was this period's revenue against the same period the previous year. So somehow the experiment affected the companies retroactively! (If revenue sat flat at 130 all through the experiment but was 100 in the same months a year earlier, you get +30% YoY even though nothing moved during the experiment itself.) Not to mention that the researchers were able to find a group of companies that were, on average, already growing 30% YoY. Surprising indeed.
So even if you check your sources, it may still be bullshit science or bullshit reporting from well-credentialed sources.
Facebook, ever the wasteland of bullshit and scams, has gotten even more bullshit and scammy in the AI era.
I have found the single best way to avoid being pissed off by this shit is to just avoid Facebook. It dramatically cuts down on the amount I am exposed to.
I also run with adblockers, and consume news via brutalist.report, which also helps. (I avoid the Fox News section at the bottom)
Not just Facebook; also make sure to avoid TikTok, Instagram, and YouTube, including YouTube Shorts. Much of what's on them is nothing but fake AI content, and these days people are using AI to create fake profiles of good-looking, cute girls doing impossible things or actually showing off their bodies, and so on. At least 50% of what you see in your feed should be assumed to be AI-generated.
I would say save your time and energy, and invest that into something else - forget all this social media.
There's nuance to that. An LLM is quite capable of suggesting relevant reading, given the context, especially when the topic is broad enough that there's plenty of training data.
"Find me research on code reviews, their size, and quality" would give you more than enough reading. Yet if you start with a claim, like "Longer PRs mean worse defect detection," the pool of relevant data points shrinks to the point where the AI starts hallucinating.
You get "something, something, PR length, defect detection, IDK, I don't read research papers." Such output is fine as long as the author cares to validate it.
Skip that validation step, and you might still be fine if you ask about something generic, like "What's the Slack story?" or "How did Blockbuster go bust?" Ask about specific details, though, and you're bound to end up with made-up stuff that sounds just about right while actually being wrong.
Checking is different from finding, though. Source checking means just "verify that this information is actually present in that document". Much harder to hallucinate in this case.
"follow each link in this document. Read each link's contents against the contents in this document. Create a report: for each link list a working hyperlink, whether it exists, whether it supports or fails to support this document, and why"
If it returns a report claiming all correct? Good! Trust but verify. With a little practice, middle-mouse and Ctrl-F, you can get through the list in mere minutes.
Not all correct? Your initial prompt was malformed and/or you picked the wrong LLM, probably both. Either way the results are built on quicksand; you'll need to start over.
No sources? If there ain't no sources it never happened.
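As for the link-checking prompt above, the mechanical half of it doesn't even need a model. Here's a minimal sketch in Python, assuming the document uses Markdown-style links; the function name and report fields are mine for illustration, and the "supports or fails to support" judgment still needs your eyes (or a separate LLM pass), since this only checks that each link resolves and that the page even mentions the claim:

    import re
    import urllib.request

    LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

    def _norm(s):
        # whitespace-normalized, case-folded text for crude matching
        return " ".join(s.split()).lower()

    def check_links(markdown_text, claim=None):
        # For each Markdown link: record the URL, whether it resolves, and
        # (crudely, Ctrl-F style) whether the fetched page mentions the claim.
        report = []
        for label, url in LINK_RE.findall(markdown_text):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    body = resp.read().decode("utf-8", errors="replace")
                    exists = 200 <= resp.status < 300
            except Exception:
                body, exists = "", False
            entry = {"label": label, "url": url, "exists": exists}
            if claim is not None:
                entry["mentions_claim"] = _norm(claim) in _norm(body)
            report.append(entry)
        return report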
Have we forgotten how bad LLMs were at citing sources when they first came out? We had to build a lot of structure around them (harness engineering), and frontier labs had to do specific training, to try to compensate for this.
So, LLMs are inherently bad at citing sources. A lot of effort has been put into improving this behavior, but it's compensating for an inherent flaw.
AI is quite good when grounded in a source.
I disagree. It is a bullshit machine all the way to the core. LLMs in my world fail to cite full sources and consistently present guesses as facts. They do this much more than an average journalist or reporter would. Only when you double-check will they apologize and correct themselves.
Personal experience? You ask it for the name of the paper referenced. You google that paper (for some reason it's not great at going out and acquiring the paper itself). You then upload the PDF and, if the assertion isn't quickly findable via ^F, ask it whether the paper supports the assertion. You go read, ask it clarifying questions about hazard ratios, what they controlled for, etc.
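If you'd rather do that ^F step locally before involving the model, here's a rough sketch; it assumes the third-party pypdf package, and claim_in_pdf is a made-up helper name, not any tool's API:

    # Extract the paper's text and search it for the claim.
    # Requires: pip install pypdf
    from pypdf import PdfReader

    def claim_in_pdf(pdf_path, claim):
        reader = PdfReader(pdf_path)
        text = " ".join(page.extract_text() or "" for page in reader.pages)
        # normalize whitespace so PDF line breaks don't hide a match
        norm = lambda s: " ".join(s.split()).lower()
        return norm(claim) in norm(text)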
They never did!
Ultimate credibility? Sure, they never did. Yet the whole thing Google was built upon was using links as tokens of credibility.
You'd assume an outgoing link from a CNN website has more credibility than one from an anonymous blog. That is, I reckon, still true, although the credibility either link conveys is degrading. It has been so since we started playing the SEO game, but AI-generated content in this context is basically a weapon of mass destruction: the deterioration has sped up dramatically.
> Oops, the link doesn’t lead to the study, but to another article. But that article, in turn, has a link of its own. Which leads to yet another article that doesn’t even mention the study anymore.
This is a common, infuriating practice: it lends a veneer of authoritativeness and credibility to newspaper articles, and who is ever going to click the links that supposedly support those very cogent claims? Nobody, of course. So they just link to another article with more vague claims, and at each level deeper, your willingness to verify the information evaporates at the same rate as the information itself.
But hey, in the meantime the author has managed to sneak in a "scientists have found", along with the implication that if you don't believe it you must be anti-science.
Incidentally, highlighting this abuse (together with a bunch of other quality and fact-checking issues) would be a great use of AI in online news publishing.
It's amazing that people think Snopes or other "fact-checkers" are reliable sources of information and represent ultimate truth, as if they're immune to bias and don't receive funding from people / organizations with their own agendas.
They are generally quite good, and they provide ample background info for you to replicate (or repudiate) their findings on your own if you're so inclined.
Which is to say: pretty good so far, in their case. For the future? Who knows. But they've done well up to now, at least.
What's amazing is that people think Snopes or other fact-checkers are automatically wrong. I assume this comes from people who make a habit of believing bullshit and can't handle being corrected.
https://fair.org/home/the-digital-media-oligarchy-who-owns-o...
https://swprs.org/the-american-empire-and-its-media/
Also relevant: the derision and mockery directed at JD Vance as a “couch fucker”, which even John Oliver used.
I read “Hillbilly Elegy” and wondered why it wasn’t in there. Snopes cleared it up in a matter of minutes. Why he hasn’t sued people into oblivion is his own business, but it’s a fascinating case study showing that we are, indeed, living in a Post-Truth environment.
There was a time, in the early to mid 2010s, when the phrase "Fake News" was almost exclusively used by people in publishing to talk about a very real rise in editorial disruption as news readers shifted from being desktop and homepage-driven to mobile and facebook-driven.
And then, one day, the politicians started saying it...
https://youtu.be/NtRPLCso0Sw?t=14m09s
Did anyone actually believe that was anything more than a joke? It was a disgusting and weird thing to suggest about a disgusting and weird guy, and highly immature, but it's only libel if it's presented as being true.
Interesting that you focus on John Oliver's bit, considering that it came up in the context of JD Vance doubling down on the whole "they're eating the cats and dogs" thing.
Tucker Carlson set the precedent when he was sued for libel by Karen McDougal and won, because Fox News lawyers successfully argued he wasn't a reporter and no reasonable person would believe he's stating facts.
Makes me believe that you're really not commenting in good faith here.
Unless he's repeating Trump's lies, then 77M people apparently believe it.
You're getting downvotes because the target of this particular lie was a known liar, so people probably feel like it's some sort of poetic justice (or they know it's just in-kind retaliation and are cathartically satisfied by it).
I don't think the right answer to widespread disinformation campaigns is retaliatory disinformation campaigns (even if they're couched – pun not intended – in a just-barely-thin-enough veil of "wink wink we know this is a joke").
The right answer is to create systems and measures that actually limit disinformation.
I’m with you. The net effect actually is something akin to honking one’s horn at a guy who honked at you. You think you’re giving him a taste of his own medicine, but walking by I only see two people honking their horn and I’d ideally prefer not to be around the horn honkers since they’re unpleasant.
Purveyors of post-truth lies don’t turn around and sue people. They just peddle more lies; this is the kind of environment scum like the Vances live for.
Actually I checked some sources, and I found some for three-legged crows:
https://en.wikipedia.org/wiki/Kojiki#The_Nakatsumaki_(%E4%B8...
https://en.wikipedia.org/wiki/Three-legged_crow#/media/File:...
https://en.wikipedia.org/wiki/File:Douze_emblemes_des_rites_...
https://en.wikipedia.org/wiki/File:Chengdu_2007_341.jpg
And by refuting this article, I thereby prove that which it sought to refute.