The problem isn't getting rid of people's jobs. Jobs are not inherently valuable. The problem is that we have not built a society or economy where everyone can thrive regardless of their employment.
That's like saying "the problem isn't the unsustainable cost of healthcare, it's that we haven't eliminated all diseases and aging." I.e., the latter is a long way off, and might not ever be 100% feasible, so it's horrifying and inhumane to imply we should allow the suffering caused by the former in the meantime.
I think it's a stretch to call having to make a living in a career other than your preferred job "suffering". Even before AI, there were surely millions of people who grew up wanting to be an artist, or an astronaut, or an architect, or any number of things that they never had the skills or the work ethic or the resources to achieve. I'm sure before cars there were people who loved maintaining stables of horses and carriages, and lamented the decline of the stable master profession. It's no different now.
No, we shouldn't allow the suffering. Nor should we force people to work bullshit jobs. That's my point. Treating humans with dignity isn't even that hard, but people need to believe it's important or it won't happen.
So what? Mandate that AI can't be used to do jobs? That will increase the cost of everything (relative to the world where we are allowed to use AI), and that cost will be borne by everyone in society.
Compare with something like unemployment benefits. The cost of benefits can be covered by taxes, which (unlike the example above) can be progressively targeted and redistribute wealth to those most in need.
A social safety net is progressive, feasible (countries all around the world have them), and does not hinder technological or economic progress. What are the alternatives?
>The problem is we have not built a society or economy where everyone can thrive regardless of their employment.
The way I'd read this sentiment is that the arrangement of society is ultimately arbitrary, and if we could only choose a different system we could be truly free. I'm not sure if I'm reading you correctly or not. That said, my impression is that people will not really be able to get away from something that looks like traditional jobs. The core traits seem to be group dynamics, hierarchical competition, status-attainment -- all in a world where neither resources nor opportunities for status are infinite.
We've already had sufficient technological advances such that people would not need to do much labor, but functionally speaking I just don't think people can organize themselves into _any_ possible arrangement. I think the potential arrangements that could exist are limited by nature.
Resources are only finite because people with power want it to be that way. We are at a level of technological development where we absolutely could go and get (practically) limitless resources from the asteroid belt.
We could have had (practically) limitless fusion energy if we had chosen to invest the money earlier. We could have had the fantastically cheap solar we have today decades ago. We could have had non-polluting electric public transit across the country instead of private cars.
The people with the power and hoarded resources to do so have consistently made decisions to preserve that status quo at any cost. Our leaders chose to abandon space, to continue burning fossil fuels, to dismantle and demonize public transit.
We could choose these things. It is absolutely within our capabilities as a species. Anyone who tells you otherwise is either trying to manipulate you or simply lacks the imagination. We could moonshot our way to post-scarcity in a decade or two. It's just that those with the power to make those choices have a vast incentive not to.
In addition, abrupt changes in the industry landscape are problematic.
The expectation for everyone to retrain and do something else is not necessarily reasonable, especially in an environment that does not have much of a social support system for education, training, and extended periods away from the workforce.
And we all know that the market doesn't magically make replacement jobs better or the same as the previous ones.
I'd go one step further: the problem is that we cannot build a fair and equitable socioeconomic society driven by capitalism. Rather than complain about capitalism, I've written a near-future hard sci-fi novel that proposes and explores creating a society that doesn't rely on monetary capital to operate. My theory, which guides the plot, is that we have to look at the seeds of capitalism, namely food, and figure out how to eliminate the exchange of currency for it.
I posit that until this point in history there has never been a time when technology would allow us to grow and distribute food for free (in terms of both financial cost and labour time). With the rise and convergence of AI, robotics, low-cost renewable energy, advances in optimal light-to-biomass conversion, diminishing costs of vertical farms, and self-driving vehicles, we have within our reach a way to produce food at essentially no cost.
Think through what would happen to society and our economy if food was free for anyone, anywhere. Think about the meaning of work.
If these ideas intrigue you, beta readers are wanted; see my profile for contact.
By wholly automating food distribution, from seed to delivery, we eliminate the costs of high-quality, nutritious food, relying on volunteers for infrequent system maintenance. (This requires bootstrapping capitalism; I won't dive into the details here, because it would take a book ...)
As an aside, that diagram is in the novel and was drawn about ten years ago.
I have ideas, lots of ideas, most of them bad. This hobby led me to compare how people (including myself) predicted what a new technology would bring with what actually happened. With few exceptions, we get it wrong. Most of the time something terrible will happen and something terrible will be predicted, but they are practically never the same thing.
The only thing that seems hopeful is that people are finally talking about it at mass scale.
I promise you, as an anarchist agitator, that this is unbelievably new, even just in the last couple of years, and precisely what usually happens prior to actual direct action.
My fellow anarchists hate the fact that Donald Trump did more for anarchist-socialist praxis than every other socialist writer in history.
I'm a full time copywriter for SaaS companies and I'm actually finding the opposite. My experience is people are having AI write stuff then trying to massage it themselves. When they can't get it to a point where they're happy with it they eventually just throw up their hands and hire me for pre-AI project scopes with 2025 rates. Not saying that's the experience everywhere, but AI has been much less problematic for me than most of the narratives I've seen online (knock on wood)
A problem I have with Brian Merchant's reporting on this is that he put out a call for stories from people who have lost their jobs to AI and so that's what he got.
What's missing is a clear indication of the size of this problem. Have a small number of copywriters been affected in this way, or is it endemic to the industry as a whole?
I'd love to see larger scale data on this. As far as I can tell (from a quick ChatGPT search session) freelance copywriting jobs are difficult to track because there isn't a single US labor statistic that covers that category.
It's such a difficult vertical to track because there isn't always a clear start and end condition. Drafts get passed around, edited, revised, and cleared by different teams, sometimes with a mixture of writing from in-house, freelancers, external agencies, and AI. Lots of people I talk to can't believe the number of projects that get approved and paid for that never end up going live simply because of red tape.
> he put out a call for stories from people who have lost their jobs to AI
This seems like an inherently terrible way to look for a story to report. Not only are you unlikely to know if you didn't find work because an AI successfully replaced you, but it's likely to attract the most bitter people in the industry looking for someone to blame.
And, btw, I hate how steeply online content has obviously crashed in quality. It's very obvious that AI has destroyed most of what passed as "reporting" or even just "listicles". But there are better ways to measure this than approaching this from a labor perspective, especially as these jobs likely aren't coming back from private equity slash-and-burning the industry.
Collecting personal stories from people - and doing background reporting to verify those people are credible - is a long standing journalistic tradition. I think it's valuable, and Brian did it very well here (he's a veteran technology reporter).
It doesn't tell the whole story though. That's why I always look for multiple angles and sources on an issue like this (here that's the impact of AI on freelance copywriting.)
> This seems like an inherently terrible way to look for a story to report.
But it's probably a great way to create a story that generates clicks. The people who respond to calls like this one are going to come from the extreme end of the distribution. That makes for a sensational story. But it also makes for a story that doesn't represent the reality most people will experience, but rather the worst case.
But we're also seeing a lot of schlock…
Hilariously naive.
Do you think I mean salaries will go up? That's not what a "buyer's market" means. It means there's more supply, so the buyers (employers) can pay less than in the past.
Assuming you understand what I meant: As for being naive, that's hardly true; my opinion comes from experience. When the bubble burst in the early 2000s, you saw a ton of developers looking for work. This pushed salaries down, even for senior and advanced developers.
I think this might be what many people think, which is what brings about problems of self-worth. However, "best" and "high skill" aren't always the reasons why companies value work and workers, i.e. the economy is not a meritocracy.
https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-u...
I bookmarked the series, which looks exactly like what everyone in tech is saying ISN'T happening:
https://www.bloodinthemachine.com/s/ai-killed-my-job
But I’m sure somebody will blow this off as “it’s only three examples and is not really representative”
But if it is representative…
“then it’s not as bad as other automation waves”
or if it is as bad as other automation waves…
“well there’s nothing you can do about it”
Anecdotally, I was in an Uber yesterday on the way to a major metropolitan airport and we passed a Waymo. I asked the Uber driver how he felt about Waymo and Uber collaborating and whether he felt it was a threat to his job.
His answer was basically “yes it is but there’s nothing anybody can do about it you can’t stop technology it’s just part of life.”
If that's how people who are being replaced feel about it, while still continuing to do the things necessary to train the systems, then there will assuredly be no human future (at least not one that isn't either subsistence or fully machine-integrated), because the people being replaced don't feel they have the capacity to stand up to it.
https://news.ycombinator.com/item?id=46264119
Thanks, and thanks for bringing the article to wider attention.
The world changes and jobs cease to exist. Historically there hasn't been a great deal of support for those who lose their jobs to change.
While there are issues that are AI-specific, I don't feel as if this is one of them. This happens for many reasons, of which AI is just one. In turn, I think this means that the way to address the problem of job loss should not be AI-specific.
If it turns out that AI does not create more jobs than are lost, that will be a new thing. I think that can happen, but on a longer timeframe.
When most jobs can be done by AI, we will need a societal change to deal with that. That will be a world where people need a livelihood, not necessarily a job. I have read pieces nearly a hundred years old saying this; there are almost certainly much earlier writings that identify that this needs to be addressed.
There will undoubtedly be a few individuals seeking to accumulate wealth and power who aim to simply not employ humans. I don't think that can happen on a systemic scale, because it would be too unstable.
Two of the things that support wealth inequality are that 1) people do not want to risk what they currently have, and 2) they are too busy surviving to do anything about it.
A world where people lose their jobs and have no support results in a populace with nothing to lose and time to act. That state would not last long.
We change the world. It's not happening to you; you're doing it. You're doing it right now with your parent comment - you're not an observer on the sideline, you're in the thick of it, doing it, your every action - my every action - has consequences. Who will we be in our communities and societies?
> I have read pieces nearly a hundred years old saying this
You can read pieces 100 years old talking about famine, polio, Communist and fascist dictatorships, the subordination of women, etc. We changed the world, not by crying about inevitability but with vision, confidence, and getting to work. We'd better because we are completely responsible for the results.
Also, inevitability is a common argument of people doing bad things. 'I am inevitable.' 'Human nature is ...' (nature being inevitable). How f-ing lazy and utterly irresponsible. Could you imagine telling your boss that? Your family? I hope you don't tell yourself that.
You're shouting into the wind, friend - my post even told you that the response would be "there's nothing we can do".
Humans are reactive and antisocial, so the idea of a "common good" would require two things humans can't do: create sustainable commons and act as though we are all equal.
Any position that assumes it's possible is not even aspirational; it's naive.
Look at all the things that have been done and are done. I can look at my life, as can most others (and hopefully, you too). If that's not your experience, I promise you there is far better out there - not perfect, but good and better.
> Humans are ... antisocial
The well-established facts are that Homo sapiens and related species are 100% social animals; we live in groups and do not survive alone. We're not like wolves or bears; chimpanzees live in groups, not alone in the jungle. Our means of living, surviving, and thriving are all built for doing it only in groups, including empathy, cooperation, and altruism - universal human traits. (We even like to talk with strangers from unknown locations whom we'll never meet!) Isolated humans suffer severe mental breakdowns, such as in solitary confinement (which is considered torture).
Why do people like to argue for social nihilism and isolation? It's hard to name any respected leader or scholar who has claimed it - among leaders, even the worst murderers claim to act for a social good. For some, arguing nihilism/isolation may be a way of externalizing fear or trauma, and especially a feeling of isolation - a very human and social way to process those things (though there are healthier ways). Sometimes people want to seem or feel more serious or stronger by saying something more dangerous or dark.
Regardless, as I said, our discussion isn't idle speculation. We are actors; our words have impact; we are responsible for the impact we have on the people around us, our communities, our world. Again, who will we be?
I don't want to speculate or cross boundaries, but if you feel that dark and isolated I have good, but challenging, news: despite what some loud voices of the time say - people relentlessly seeking power and saying anything, no matter how absurd, to get it - we are social, humans have good instincts (and bad ones), and we thrive by embracing the good ones.
Freedom and democracy trust in our good instincts (which must be why its enemies promote nihilism), and have produced societies unmatched in freedom, justice, prosperity, and safety. Part of that is the ability to cooperate on a large scale - a consequence of building on good instincts: look at NATO, an alliance held together by its values, by trust and cooperation, and the most powerful military force in the world by far.
I'm pretty sure we'll survive.
There will always be value in doing work that other people don't want to do themselves or that requires expertise and skill that isn't conveyed all that well through books or pictures. The economy used to be full of stable masters for horses and carriages, and manual typists, and street lamp lighters, and television repairmen, and any number of jobs that don't exist anymore.
I can't help but wonder if this is a bit like a few years ago when comedians were complaining that nobody was laughing at their jokes anymore. They realized that it was a mandate to figure out how to be funny again, because what was considered "funny" had changed.
In this instance, and probably most instances of art/craft, copywriters need to figure out what is creative again, because what is considered "creative" has changed.
I could also see this being the journey that AI customer support took, where all staff were laid off and customers were punted to an AI agent, but then the shortcomings of AI were realized and the humans were reintroduced (albeit to a lesser degree). I suspect the pendulum will swing back to AI as the memory problems are resolved though.
The problem is that most copywriting is not and shouldn't be very creative. Oftentimes it's just outsiders who know how to make public communication clear.
The sad part is that the managers deciding to use AI are the ones who rarely understand what good public communication is - that's why they were hiring someone to help them with it.
With AI they get some text that seems legit, but the whole process of figuring out the why and how is simply skipped. It might sometimes work, but it's doubtful it builds knowledge in the organisation.
I'd argue this requires a great deal of creativity. It's how we got "1,000 songs in your pocket."
The problem is us, on the consumer side. We are in an era of content hyperinflation. That was true before AI became ubiquitous.
Because I tried using LLMs to write compelling copy for a landing page, and it's just not that great. I tried a lot. A real copywriter will do a lot of research about your ICP and write targeted copy. Once AI can write proper, compelling, converting copy, then I'll change my mind.
This tells us more about your (lacking) skills at using AI than about the state of the AI tools themselves.
You're probably using the free lobotomized versions of LLMs, and shooting a one-off short ambiguous prompt and wondering why it didn't turn out the way you imagined.
Meanwhile people spend hundreds of dollars on pro LLM access and learn to use tool calling, deep research, agents and context engineering.
I have more of a problem with poor governance than with strong automation. The economy should provide us all food and shelter; beyond that, do what you love.
A couple of friends have been laid off in similar fields, where AI is excelling and reducing demand for labor significantly, and it seems they're mostly unaware, saying/thinking it's a tough job market or the time of year, and that maybe it will improve in 2026 as budgets are executed. I've not had the heart to tell them they will likely need to change careers. And that's if they can; in my opinion, the faster they realize it, the better off they will be. I don't think laypeople's familiarity with AI right now lets them understand that this is an outright reduction in labor, with no substitute.
> I’ve not had the heart to tell them they will likely need to change careers ... in my opinion the faster they realize that the better off they will be.
I understand your reluctance. Yet I think, if you believe this, you should have that hard conversation sooner rather than later.
I understand that but I don’t feel they’re ready to hear what I have to say on it. In a way, I’m waiting for the right time. I have to preserve our relationship and try to be optimistic for them as a supportive friend for the time being.
When I feel deeply cynical about the quality of our modern life, I imagine that one of the reasons it's so easy for us to "settle" for the "good enough" output of AI in certain areas, especially around corporate copywriting, art, and yes perhaps even code is that these areas already fundamentally suck.
I believe that good skillful writing, drawing, or coding, by a human who actually understands and believes in what they're doing can really elevate the merely "good" to excellent.
However, when I think about the reality of most corporate output, we're not talking about "good" as a baseline level that we are trying to elevate. We're usually talking about "just barely not crap" in the best case, to straight up garbage in maybe a more common case.
Everyone understands this, from the consumer to the "artist" (perhaps programmer), to the manager, to the business owner. And this is why using AI slop is so easy to embrace in so many areas. The human touch was previously being used in barely successful attempts to put a coat of paint over some obvious turds. They were barely succeeding anyways, the crap stunk through. May as well let AI pretend to try, and we'll keep trying until the wheels finally fall off.
Why don’t we see it in the aggregate job data? Could it be that people sometimes lose jobs for “reasons” but that’s just the normal flow of the economy? Until there’s some effect on actual unemployment rates I wouldn’t be worried.
Employment stats are designed to measure economy-wide shifts, not early, localized, white-collar disruptions. The sampling will smooth those effects away until they’re large.
The CPS (Current Population Survey) samples 60k households per month to represent 150+ million workers. Households stay in the sample for 4 months, rotate out for 8, then return for 4.
Copywriters will get smoothed out in the aggregate, and the definition will mask this. Even if you work one hour, you are technically employed. If you are not actively looking for work for more than a month, you are also not technically unemployed.
Unemployment data is a lagging indicator designed for detecting recessions, not early technological displacement in white-collar niches.
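To make the smoothing concrete, here is a rough back-of-the-envelope sketch; every number in it is an illustrative assumption, not an official BLS figure:

    import math

    # Illustrative assumptions only - not BLS figures.
    labor_force = 160_000_000   # assumed size of the US labor force
    sampled_workers = 100_000   # rough worker count behind ~60k households
    niche = 200_000             # assumed number of freelance copywriters
    decline = 0.10              # assume 10% of the niche loses work

    frac = sampled_workers / labor_force   # sampling fraction
    in_sample = niche * frac               # ~125 copywriters in the sample
    signal = in_sample * decline           # ~12.5 fewer employed observed
    noise = math.sqrt(in_sample)           # binomial-ish std dev, ~11

    print(f"copywriters in sample: {in_sample:.0f}")
    print(f"job-loss signal:       {signal:.1f}")
    print(f"sampling noise:        {noise:.1f}")

Under these assumptions the job-loss signal is about the same size as the ordinary month-to-month sampling noise, so even a 10% wipeout of the niche would be statistically invisible in the aggregate data.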
If a human component is required in addition to the cheaply machine-automated part, that belies the claim that 'most of the work has already been done'.
The human part, turning it from slop to polished, becomes the most important part of the work, and then (and in any case) should be paid at human rates.
This is a crucial point. Freelancers who are asked to edit AI-generated content should be charging more per hour, not less. A lot more - something that ends up with the client saving money, and ALSO with them saving time and making money. If automation is implemented like this, both parties can win and somehow split the difference.
However, we live in a world where people have to compete to survive. Since a major portion of the task is automated, all of a sudden there are many available copywriting editors looking for work. The abundance tends to drive down the wage on sites like Fiverr.
And that's why unions are so important!
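To illustrate the rate arithmetic in the "crucial point" comment above, here is a minimal worked example; the rates and hours are hypothetical, chosen only to show how both sides can come out ahead:

    # Hypothetical numbers for the "charge more per hour, fewer hours" argument.
    old_hours, old_rate = 10, 50     # writing from scratch: 10 hrs at $50/hr
    new_hours, new_rate = 3, 120     # editing an AI draft: 3 hrs at $120/hr

    old_cost = old_hours * old_rate  # client used to pay $500 per project
    new_cost = new_hours * new_rate  # client now pays $360 per project

    print(f"client saves per project: ${old_cost - new_cost}")  # $140
    print(f"freelancer's effective rate: ${new_rate}/hr (was ${old_rate}/hr)")

The $140 saved and the higher effective rate are the "difference" being split; whether the freelancer can actually hold that rate is exactly the competition problem raised above.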
Indeed. In software, we're all telling ourselves that code reviewing as a skill just gained a bunch of value, so we should focus on improving our skills there. I feel like editing has always been part of professional writing, so these folks should focus on editing as a pivot.
This doesn't really address anything, though. The human part will be 10x more productive when they're polishing than when they're having to start with a blank page, and now nine nearly as competent, experienced people have been fired and are offering to work for less than you're being paid. Poof! It's now a minimum wage job, and has barely gotten any easier.
They can actually just hire the worst of you (who will do unpaid overtime, and let you call him a dummy when you're upset), because it's not a big deal that he's only 5x as fast as you used to be compared to your 10x as fast as you used to be. They can't even attract that much business now because the lowest end of the market completely disappeared and is doing it at home by themselves.
Prepress/typesetting work went from a highly-specialized job that you spent years mastering and could raise a family with to a still moderately difficult job that paid just above minimum wage within a single generation, just due to Photoshop, Illustrator, and InDesign. Those tools don't even generate anything resembling a final product, they just make the job digital instead of physical. In the case of copywriting, AI instantly generates something that a lazy person could ship with.
"Gamblers generate slop, businessmen sell it as 'AI-powered.'"
Something important is missing.
That is, if you're selling razor blades, you want the handle and the shaving cream to be cheap. Well then, if you're turning slop into polished copy, you want the slop to be cheap. And AI makes it much cheaper.
Besides the fact that you do have conflicts of interest (disclosing them doesn't negate them), you don't seem to understand that, given how all of the big AI players have shown themselves to be ruthless and shamelessly dishonest (hoovering up people's creative work without concern for its licensing, aggressively scraping websites to the point of DDoSing them, disregarding robots.txt and using residential IPs to disguise their traffic, and all the while denying everything), when you assume the role of their most enthusiastic shill, people will naturally start to doubt your integrity.
EDIT: To put it another way, I remember a talk where you dismissed people's concerns about their data being used for training after AI was integrated into a product, citing the company's (a big AI player) denial--as if that should just be taken at face value because they say so--a perspective that many of us view as naive or disingenuous.
The purpose of disclosure is to allow people to make their own decisions about how trustworthy I am as a source of information.
If I started rejecting access to early models over a desire to avoid conflicts of interest my coverage would be less useful to people. I think most of my regular readers understand that.
I was responsible for one of the first widely read reports on the ethics of model training back in 2022 when I collaborated with Andy Baio to cover Stable Diffusion's unlicensed training data: https://waxy.org/2022/08/exploring-12-million-of-the-images-...
Calling me "their most enthusiastic shill" is not justified. Have you seen what's out there on LinkedIn/Twitter etc?
The reason I show up on Hacker News so often is that I'm clearly not their most enthusiastic shill.
This is a disappointing thread to find - HN is usually a little more thoughtful than throwing around insults like "insufferable AI cheerleader".
If I can provide a different perspective, I find your writing on LLMs to be useful. I've referenced your writing to coworkers in an effort to be a little more rigorous when it comes to how we use these new (often unintuitive) tools.
I think the level of disclosure you do is fine. Certainly a better effort at transparency than what most writers are willing to do.
It's called having standards.
If I'm reading propaganda I'd at least like something in return.
This whole "I'm so positive, haha, I just wanna help humanity" act might fly on LinkedIn, but the whole point of this place is to have interesting information.
BTW why was this thread on the front page with 1 upvote? I'm sure there's no funny business going on here lol.
>inb4 flagged
I stand by my opinion that if a major AI company says they aren't training on something it means they aren't training on that thing.
I continue to be annoyed that they won't confirm what they ARE training on, though. Saying "we don't train on data submitted via our API" isn't exactly transparent; I want to know what they ARE training on.
That lack of transparency is why they have a trust crisis in the first place!
I can't think of a more insufferable AI cheerleader. I wish I could hide all submissions of his blogposts, as well as his comments. (Note that I flag neither.)
It's time to get your bag before the AI bubble pops.
(Odd to see a complaint about me being an "AI cheerleader" attached to a post about the negative impact of AI on copywriting and how I think that sucks.)
Ignore the haters big dawg - your commentary and distillations are widely appreciated but there's always someone having a bad day looking for a punching bag.
> the negative impact of AI on copywriting and how I think that sucks.)
The extent of your analysis is
> whelp that sucks
with a tone similar to what one might take when describing the impact of flatscreen TVs on the once-flourishing TV repair business, without mentioning all of the legitimate ethical (and legal) objections people have to how AI companies train their models and how the models are used.
> Anything I could do to be less insufferable?
Sure, go do a series on how they use residential IPs to hide their scraping, or on how they're probably violating copyright in a multitude of ways, including software FOSS licenses by disregarding attribution clauses and derivative work licensing obligations, especially for copyleft licenses like the GPL. Write about people using these systems to effectively "rewrite" GPL'd code so they can (theoretically) get around the terms completely.
Has it been proven that the major labs are scraping via residential IPs? If so I will absolutely write about that.
I know there are a ton of fly-by-night startups abusing residential IP scraping and I hate it, but if it's Anthropic or OpenAI or Google Gemini that's a story worth telling.
I have written a lot about training data - https://simonwillison.net/tags/training-data/ - including highlighting instances where models attempted to train on ethical sources.
I've also pointed out when a model claims to use ethical data but still uses a scrape of the web that's full of unlicensed content, eg https://simonwillison.net/2025/Jun/7/comma/ and https://simonwillison.net/2024/Dec/5/pleias-llms/
Having jobs for the sake of having jobs is a ridiculous proposition. Copywriting is largely obsolete. Sure, it sucks to be in that profession right now, but what alternative is there? A machine does your job far cheaper than you, and even right now it is "good enough" to replace everything but the most complex and demanding writing.
Government job programs were a defining feature of economic prosperity during the New Deal. Saying jobs for the sake of jobs are bad isn’t historically true.
People doing jobs with no inherent value still need food, shelter, healthcare,… all provided by other people. Further, the cost difference (for there must be a cost difference or we wouldn’t have anyone choosing AI) must come from somewhere, that money is not being used to pay people (or even the same person) doing productive work.
I can see some limited scenarios in up-and-coming industries or strategically important industries where government job programs could at least be argued for.
The copywriting industry is clearly not either of those.
Clearly those jobs have "inherent value", or The Market would not have sunk a few billion into automating away the people that do them. These are jobs that people have been doing for years, and been getting paid money to do.
Look at how things went for the "learn to code" workforce. They were told that software would be a valuable skill to have, and sunk a lot of time and money into frontend coding bootcamps. The job market in 2025 looks very different with Sonnet 4.5, which is particularly good at frontend code. What skills would you tell all those copywriters to go retrain in? How confident are you that they won't be useless in 10 or 15 years? Maybe you can say that they should have trained in other fields of software, but hindsight is 20/20.
I am not saying automation is bad, or that the jobs we have today should be set in stone and nothing change. But, there will be society level ramifications if we take some significant fraction of the workforce and tell them they're out of a job. Society can absorb the impact of a small fraction of the workforce going out of a job, but not a big one.
No, it's not, and the steep decline in quality of writing has reflected this. The industry is just committing suicide.
Requiring people to work for the sake of requiring them to work is also ridiculous, yet here we are. Once a good basic income and guaranteed housing program is put into place, we can get rid of bullshit jobs as a class.