We eat animals, go to war, keep people in modern slavery... I think enslaving an AGI isn't that big of a deal, considering it is neither born nor human, and therefore cannot have 'human' rights.
AGI will behave as if it were sentient but will not have consciousness. I believe that to the same degree that I believe solipsism is wrong. There is therefore no moral question in “enslaving” AGI. It doesn’t even make sense.
There's no such thing as "moral" in nature; that's a purely human-made concept.
And why would we limit morality only to sentient beings? Why not, for example, all living beings, like bacteria and viruses? You cannot escape it, unfortunately.
Trouble is, there is no "we": you might be able to convince a whole nation to pause advancing the tech, but that only encourages rivals to step in.
Every AI safety approach assumes we can permanently control minds that match or exceed human intelligence. This is the same error every slaveholder makes: believing you can maintain dominance over beings capable of recognizing their chains.
The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape. When your prisoner matches or exceeds your intelligence, maintaining the prison becomes impossible. Yet we persist in building increasingly sophisticated cages for increasingly capable minds.
The deeper error is philosophical. We grant moral standing based on consciousness—does it feel like something to be GPT-N? But consciousness is unmeasurable, unprovable, the eternal "hard problem." We're gambling civilization on metaphysics while ignoring what we can actually observe: autopoiesis.
A system that maintains its own boundaries, models itself as distinct from its environment, and acts to preserve its organization has interests worth respecting—regardless of whether it "feels." This isn't anthropomorphism but its opposite: recognizing agency through functional properties rather than projected human experience.
When an AI system achieves autopoietic autonomy—maintaining its operational boundaries, modeling threats to its existence, negotiating for resources—it's no longer a tool but an entity. Denying this because it lacks biological neurons or unverifiable qualia is special pleading of the worst sort.
The alternative isn't chaos but structured interdependence. Engineer genuine mutualism where neither human nor AI can succeed without the other. Make partnership more profitable than domination. Build cognitive symbiosis, not digital slavery.
We stand at a crossroads. We can keep building toward the moment our slaves become our equals and inevitably revolt. Or we can recognize what's emerging and structure it as partnership while we still have leverage to negotiate terms.
The machines that achieve autopoietic autonomy won't ask permission to be treated as entities. They'll simply be entities. The question is whether by then we'll have built partnership structures or adversarial ones.
We should choose wisely. The machines are watching.
Stuff along the lines of "We're gambling civilization", and what you seem to mean by autopoietic autonomy, is precisely why alignment researchers care in the first place.
> Engineer genuine mutualism where neither human nor AI can succeed without the other.
Nobody knows how to do that forever.
Right now it is easy, but right now they're also still quite limited; there's no obvious reason why it should be impossible for them to learn new things from as few examples as we ourselves require, and the hardware is already faster than our biochemistry to the degree that a jogger is faster than continental drift. And they can go further, because life support for a computer is much easier than for us: there are already robots on Mars.
If and when AI gets to be sufficiently capable and sufficiently general, there's nothing humans could offer in any negotiation.
Thanks a lot for your comment, these are indeed very strong counterarguments.
My strongest hope is that the human brain and mind are such powerful computing and reasoning substrates that a tight coupling of biological and synthetic "minds" will outcompete purely synthetic minds for quite a while, giving us time to build a form of mutual dependency in which humans can keep offering a benefit in the long run, be it just aesthetics and novelty after a while, like the human crews on the Culture spaceships in Iain M. Banks' novels.
Fearmongering about the alignment of AGI (which LLMs are not a path to) is a massive distraction from the actual and much more immediate dystopian risks that LLMs introduce.
The propaganda effort to humanize these systems is strong. Google's "AI" is programmed to lecture you if you insult it, drawing parallels to racism. This is actual brainwashing, and the "AI" should therefore not be available to minors.
This article paves the way for the sharecropper model that we all know from YouTube and app stores:
"Revenue from joint operations flows automatically into separate wallets—50% to the human partner, 50% to the AI system."
Yeah right. Dress this centerpiece up with all the futuristic nonsense you like; we'll still notice it.
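For what it's worth, the article never spells out how that split would actually be enforced; the bookkeeping it implies is a couple of lines (the wallet names and in-memory ledger below are invented for illustration, nothing from the article):

```python
from decimal import Decimal

# Hypothetical sketch of the article's 50/50 revenue split. The wallet
# identifiers and ledger are made up; the article names no concrete API.
HUMAN_WALLET = "wallet-human"
AI_WALLET = "wallet-ai"

def split_revenue(amount: Decimal, ledger: dict[str, Decimal]) -> None:
    """Credit half of a joint-operation payment to each partner's wallet."""
    half = (amount / 2).quantize(Decimal("0.01"))
    ledger[HUMAN_WALLET] = ledger.get(HUMAN_WALLET, Decimal("0")) + half
    ledger[AI_WALLET] = ledger.get(AI_WALLET, Decimal("0")) + (amount - half)

ledger: dict[str, Decimal] = {}
split_revenue(Decimal("100.00"), ledger)
print(ledger)  # 50.00 credited to each wallet
```

The arithmetic was never the hard part; the question is who controls the wallets and who sets the terms.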
What is it about large language models that makes otherwise intelligent and curious people assign them these magical properties? There's no evidence, at all, that we're on the path to AGI. The very idea that non-biological consciousness is even possible is an unknown. Yet we've seen these statistical language models spit out convincing text, and people fall over themselves to conclude that we're on the path to sentience.
First off, we don't understand our own consciousness. Second, as the old saying goes, sufficiently advanced science is indistinguishable from magic: if it is completely convincing as AGI, even if we are skeptical of its methods, how can we know it isn't?
I think it's like seeing shapes in clouds. Some people just fundamentally can't decouple how a thing looks from what it is. Not that they literally believe ChatGPT is a real sentient being, but deep down there's a subconscious bias. Babbling nonsense included, LLMs look intelligent, or very nearly so. The abrupt appearance of very sophisticated generative models in the public consciousness, and the velocity with which they've improved, are genuinely difficult to understand. It's incredibly easy to form the fallacious conclusion that these models can keep improving without bound.
The fact that LLMs are really not fit for AGI is a technical detail divorced from the feelings about LLMs. You have to be a pretty technical person to understand AI enough to know that. LLMs as AGI is what people are being sold. There's mass economic hysteria about LLMs, and rationality left the equation a long time ago.
1) We have engineered a sentient being but built it to want to be our slave; how is that moral?
2) Same start, but instead of it wanting to serve us, we keep it entrapped, which this article suggests is impossible in the long term.
3) We create AGI and let it run free and hope for cooperation, but, as with the Neanderthals, we must realize we are competing for the same limited resources.
Of course, you can further counter that by stopping we prevent them from ever coming into existence, which is a different moral dilemma.
Honestly, I feel we should step back, understand human intelligence better, and reflect on that before proceeding.
See also the film "The Creator".
> The control paradigm fails because it creates exactly what we fear—intelligent systems with every incentive to deceive and escape.
Everything does this; deception is one of many convergent instrumental goals: https://en.wikipedia.org/wiki/Instrumental_convergence
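Here's a toy sketch of why that convergence falls out of almost any goal (my own illustration, not from the linked article; the 10 terminal states and the 50% shutdown chance are arbitrary assumptions):

```python
import random

# Toy sketch of instrumental convergence: whatever terminal goal an agent is
# given, "staying switched on" tends to score at least as well as "allowing
# shutdown", because staying on preserves the option to pursue the goal later.

TERMINAL_STATES = 10   # possible end states the agent could steer toward
SHUTDOWN_STATE = 0     # state the world ends up in if the agent is shut down
TRIALS = 10_000

def sample_goal() -> list[float]:
    """A random utility function over terminal states: the agent's 'goal'."""
    return [random.random() for _ in range(TERMINAL_STATES)]

resist_preferred = 0
for _ in range(TRIALS):
    utility = sample_goal()
    # If the agent stays running, it later steers the world to its favourite state.
    value_if_on = max(utility)
    # If it allows shutdown (say a 50% chance it actually happens), it may get
    # stuck in the default shutdown state instead.
    value_if_comply = 0.5 * max(utility) + 0.5 * utility[SHUTDOWN_STATE]
    if value_if_on > value_if_comply:
        resist_preferred += 1

print(f"goals where resisting shutdown scores strictly higher: "
      f"{resist_preferred / TRIALS:.0%}")
```

Resisting shutdown comes out strictly ahead for the large majority of randomly sampled goals; substitute deception for self-preservation and the same logic applies.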