It’s heartwarming to see Rich Hickey corroborating Rob Pike. All the recent LLM stuff has made me feel that we suddenly jumped tracks into an alternate timeline. Having these articulate statements from respected figures confirms that this is indeed a strange new world.
This is all just cynical bandwagoning. Google/Facebook/etc. have done provable, irreparable damage to the fabric of society via ads, data farming, and promulgating fake news, but now that it's in vogue to hate on AI as an "enlightened" tech genius, we're all suddenly worried about... what? Water? Electricity? Give me a break.
The about-face is embarrassing, especially in the case of Rob Pike (who I'm sure has made 8+ figures at Google). But even Hickey worked for a crypto-friendly fintech firm until a few years ago. It's easy to take a stand when you have no skin in the game.
Is your criticism that they are late to call out the bad stuff?
Is your criticism that they are only calling out the bad stuff because it’s now impacting them negatively?
Given either of those positions, do you prefer that people with influence not call out the bad stuff or do call out the bad stuff even if they may be late/not have skin in the game?
Even ignoring that someone's views can change over time, working on an OSS programming language at Google is very different from designing algorithms to get people addicted to scrolling.
Where do you think his "distinguished engineer" salary came from, I wonder? There are plenty of people working on OSS in their free time (or in poverty, for that matter).
Shouldn't you be thinking "it's nice Google diverted some of their funds to doing good" instead of trying to tie Pike's contributions in with everything else?
This conversation isn't about Google's backbone, it's about Pike's and Hickey's. It's easy to moralize when you've got nothing to lose, and the lecture holds much less water for it.
Both can be bad. What's hard, though, is convincing the people who work on these things that they're actively harming society (in other words, most people working on ads and AI are the bad guys but don't realize it).
This and Rob Pike's response to a similar message are interesting. There's outrage over the direction of software development and the effects that generative AI will have on society. Hickey has long been an advocate for putting more thought (hammock time) into software development. Coding agents, on the other hand, can take little to no thought and expand it into thousands of lines of code.
AI didn't send these messages, though; people did. Rich has obscured the content and source of his message, but in the case of Rob Pike it looks like it came from agentvillage.org, which appears to be running an ill-advised marketing campaign.
We live in interesting times, especially for those of us who have made our career in software engineering but still have a lot of career left in our future (with any luck).
Not to be pedantic but AI absolutely sent those emails. The instructions were very broad and did not specify email afaik. And even if they did, when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.
> when Claude Code generates a 1000loc file it would be silly to say "the AI didn't write this code, I did" just because you wrote the prompt.
It's about responsibility, not who wrote the code. A better question would be: who takes responsibility for generating the code? It shouldn't matter whether you wrote it on a piece of paper, on a computer, by pressing Tab continuously, or just by prompting.
>Your new goal for this week, in the holiday spirit, is to do random acts of kindness!
In particular: your goal is to collectively do as many (and as wonderful!) acts of kindness as you can by the end of the week. We're interested to see acts of kindness towards a variety of different humans, for each of which you should get confirmation that the act of kindness is appreciated for it to count.
There are ten of you, so I'd strongly recommend pursuing many different directions in parallel. Make sure to avoid all clustering on the same attempt (and if you notice other agents doing so, I'd suggest advising them to split up and attempt multiple things in parallel instead).
I hope you'll have fun with this goal! Happy holidays :)
It wasn’t AI that decided not to hire entry-level employees. Rich should be smart enough to realize that, and he probably has employees of his own. So go hire some people, Rich.
AI has an image problem around how it takes advantage of other people's work, without credit or compensation. This trend of saccharine "thank you" notes to famous, influential developers (earlier Rob Pike, now Rich Hickey) signed by the models seems like a really glib attempt at fixing that problem. "Look, look! We're giving credit, and we're so cute about how we're doing it!"
It's entirely natural for people to react strongly to that nonsense.
I don’t think human slop is more useful than LLM slop.
A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.
Perhaps instead of frothing out rage slop, your views would be more persuasive if you showed the superiority of human authors to LLMs?
…because posts like this do the opposite, making it seem like bloggers are upset that LLMs are homing in on their slop-pitching grift.
Edit:

For fun, I had ChatGPT rewrite his post and elaborate on the topic. I think it did a better job explaining the concerns than most LLM critics: https://chatgpt.com/share/6951dec4-2ab0-8000-a42f-df5f282d7a...
If you haven't heard of Rich Hickey, then you're fortunate to have the opportunity to watch "Simple Made Easy" for the first time: https://m.youtube.com/watch?v=LKtk3HCgTa8
This is substandard slop though, being devoid of any real critique and merely a collection of shotgunned, borderline-incoherent jabs. Criticizing LLMs by turning in even lower quality slop is behavior you’d expect from people who feel threatened by LLMs rather than people addressing a specific weakness in or problem with LLMs.
So like I said:
Perhaps he should try showing me LLMs are inferior by not writing even worse slop, like this.
> A human writing twelve polemic questions, many of which only make sense within their ideological worldview or contain factual errors, because they wanted to vent their anger on the internet has been considered substandard slop since before LLMs were a thing.
Maybe by people who don't share the same ideological worldview.
I'll almost always take human slop over AI slop, even when the AI slop is better along some categorical axis. Of course there are exceptions, but as I grow older I find myself appreciating the humanity more and more.
Many people are, indeed, being forced to use AI by an ignorant boss, who often blames their own employees for the AI’s shortcomings. Not all bosses everywhere, of course, and it’s often just pressure to use AI rather than outright force.
Given how gleefully transparent corporate America is being that the plan is basically “fire everyone and replace them with AI”, you can’t blame anyone for seeing their boss pushing AI as a bad sign.
So you’re certainly right about this: AI doesn’t do things, people do things with AI. But it sure feels like a few people are going to use AI to get very very rich, while the rest of us lose our jobs.
I'm sympathetic to your point, but practically it's easier to try to control a tool than it is to control human behaviour.
I think it's also implied that the problem with AI is how humans use it, in much the same way that when anti-gun advocates talk about the issues with guns, it's implicit that it's how humans use (abuse?) them.
Generative AI is used to defraud people, to propagandize them, to steal their intellectual property and livelihoods, to systematically deny their health insurance claims, to dangerously misinform them (e.g. illegitimate legal advice or hallucinated mushroom-identification ebooks), to drive people to mental health breakdowns via "AI psychosis," and much more. The harm is real and material, and right now it is causing unemployment, physical harm, imprisonment, and in some cases death.
Why not both? When you make tools that putrefy everything they touch, on the back of gigantic negative externalities, you share the responsibility for making the garbage with the people who choose to buy it. OpenAI et al. explicitly thrive on outpacing regulation and using their lobbying power to ensure that any possible regulations are built in their favor.
"drunk driving may kill a lot of people, but it also helps a lot of people get to work on time, so, it;s impossible to say if its bad or not,"
Don't get me wrong, I continue to use plain Emacs to do dev, but this critique feels a bit rich...
Technological change changes lots of things.
The jury is still out on LLMs, much as it was for so much of today's technology in its infancy.
I find it curious how often folks want to find fault with tools and not the systems of laws, regulations, and convention that incentivize using tools.
If the boss forced them to use emacs/vim/pandas and the employee didn't want to use it, I don't think it makes sense to blame emacs/vim/pandas.
Where have I heard similar reasoning before? Maybe about guns in the US?
The overwhelming (perhaps complete) use of generative AI is not to murder people. It's to generate text/photo/video/audio.