Many will cheer for any case that hurts Meta without reading the details, but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption:
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online without supervision, and without any setting indicating that the user agent is operated by a child, are effectively choosing to let their children interact with strangers. This tends to work out better in more controlled and limited circumstances, where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.
Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.
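A minimal sketch of that flow, assuming a hypothetical device-asserted signal. The header name `Sec-Child-Device` is invented for illustration; no such standard header exists today, and a real design would need the signal to be tamper-resistant:

```python
# Hypothetical sketch: a signup handler that honors a device-asserted
# "operated by a child" signal. "Sec-Child-Device" is an invented header
# name; this only shows the server-side shape of the idea.
def handle_signup(headers: dict) -> str:
    if headers.get("Sec-Child-Device") == "1":
        # Child-locked device: refuse account creation / login.
        return "blocked: child-locked device"
    # No child signal: proceed as if the user is an adult (the default above).
    return "ok: proceed with account creation"

# Usage: the device, not the user, would attach the header.
print(handle_signup({"Sec-Child-Device": "1"}))  # blocked
print(handle_signup({}))                         # ok
```

The point of the design is that no identity or age proof ever reaches the website; the only bit transmitted is the device's own lock state.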
We don't see people worried that bars, nightclubs, liquor stores, tobacconists, R-rated movies asking for age verification will slip into requiring names too.
It honestly looks like an emotional panic. People who take seriously slippery slopes aren't to be taken seriously themselves.
Social media is like e-cigarettes in the sense that the shift toward nicotine salts (think Juul) around 2015 resulted in e-cigarettes becoming more dangerous and thus more age-restricted.
It's also like consumer credit cards. Remember that in 1958 Bank of America mailed out 60,000 unsolicited credit cards to residents of Fresno, CA, without application, age verification, or identity check. They just landed in people's mailboxes, including those of minors. Eventually a predatory lending industry developed, and we increased the age and ID requirements. My point is that systems can, and do, become more dangerous over time. Not all, but not none.
Algorithmic feeds, online advertising, and attention engineering are the nicotine salts of social media. The product's changed, so should the access.
The digital age verification laws I've read also specifically ban recording that information, unlike in-person checks. People were arguing with me that companies would decide they need to retain that info for audit purposes, when there are no audit requirements and it's illegal to store it for any reason.
> People who take seriously slippery slopes aren't to be taken seriously themselves
> Eventually a predatory lending industry developed and we increased the age and ID requirements
I have no idea if you're arguing for or against verification. You dismissed the idea that age verification is a slippery slope to more stringent ID requirements, then provided an example where exactly the opposite happened.
You just need to provide the government with your name and address and the name and address of the counter party every time you send an encrypted message.
If you don't support this you're obviously a pedo nazi terrorist.
Is it illegal or is it just illegal on general purpose platforms whose focus isn't extreme security?
We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
Probably their auditors? Lying about this would be tantamount to (very serious) securities fraud. Not sure what you're basing your allegations on besides "trust me bro".
E2EE means end-to-end, where the ends are the participants in the chat. They can read it on your phone, but not on their servers. They need their app to separately transmit the plaintext to their servers to read it.
The first two E's in E2EE stand for end. From one end to the other. So no, Meta can't. Or put another way... if they can read those messages, then it's not E2EE.
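As a toy illustration of that property (not a real protocol, just Diffie-Hellman plus a hash-derived keystream, for intuition only), note that the relaying server only ever sees public keys and ciphertext; the parameters below are deliberately simple and NOT secure:

```python
# Toy end-to-end encryption demo (insecure, for intuition only).
# Two clients derive a shared key; the server relays only public
# values and ciphertext, so it cannot decrypt.
import hashlib
import secrets

P = 2**521 - 1  # Mersenne prime; a toy group, not a vetted DH group
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    s = pow(other_pub, priv, P)
    return hashlib.sha256(s.to_bytes(66, "big")).digest()

def xor_stream(key, data):
    # Hash-counter keystream XORed with the data (toy stream cipher).
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
# The "server" relays a_pub, b_pub, and ciphertext; nothing else.
assert shared_key(a_priv, b_pub) == shared_key(b_priv, a_pub)
ct = xor_stream(shared_key(a_priv, b_pub), b"meet at noon")
pt = xor_stream(shared_key(b_priv, a_pub), ct)
assert pt == b"meet at noon"  # only the endpoints recover the plaintext
```

The server in the middle never holds either private key, which is the sense in which "the ends" are the only parties that can read the messages. (Real systems like the Signal protocol add authentication, forward secrecy, and vetted primitives on top of this basic shape.)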
> The New Mexico attorney general’s office created multiple fake Facebook and Instagram profiles posing as children as part of its investigation into Meta. Those test accounts encountered sexually suggestive content and requests to share pornographic content, the suit alleges.
> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about, right? The article doesn't make it seem like encryption is a meaningful part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, is directly related to the trial; they just happened at the same time. It's a link drawn by CNN, without presenting any clear connection.
I cheer any decision that holds any private web property (like Facebook) accountable for its users' actions.
It helps to reduce the hegemony of large social platforms and promotes privately owned websites. For example, I know everyone who has permission to post on my website (or I pre-moderate strangers' comments), and each of them is ready to take responsibility for their posts, i.e. for what my website publishes.
Currently the legal stance seems strange to me -- large media platforms are allowed to store, distribute, rank, and sell strangers' data, while at the same time they claim they are not responsible for it.
If you haven't already, you should look at the court case that prompted the creation of the current legal framework of Section 230. Prodigy was sued because of the things being said in public chatrooms. Should the host for an IRC server be responsible for everything said on the IRC server? Should they pre-moderate all the messages being said there? Should dang premoderate every post on this site?
Meta has a way to read your E2EE messages. I don't know what it is, but if they didn't have one, they wouldn't offer E2EE at all.
There's a difference between E2EE between friends who want to remain secure, and E2EE between strangers in an attempt for the platform to avoid legal liability for spam.
> Another poster child for Meta's lobbying (bribery) to encourage OS level age verification. (numerous recent references in HN posts)
The references I saw showed Meta had lobbied for some of the laws that require age verification be done by the site or by third party ID services. They did not show that Meta lobbied for any of the OS bills.
Some showed that Meta had lobbied in some of the states with those bills, but they just showed Meta's total lobbying budget for those states.
That's why Signal requires a phone number. You can't talk to people you don't know because complete strangers don't give you their phone number. And if you do spam random numbers, they'll report you to the police and you can be tracked down based on your identifier, which still doesn't leak the chats between you and people you actually know.
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
The New York case has explicitly gone after their support of end-to-end encryption as a target: https://www.reuters.com/legal/government/meta-executive-warn...
It's ok to drive Dad's truck unless he catches you and tells you no.
What are you talking about? Have you really never rented a car before?
Some establishments, as part of their business practice, require identification.
> If you don't support this you're obviously a pedo nazi terrorist.
That ship has sailed
It is actually terrifying. If you write something, or upload an image, out of context, you can be in big trouble.
Absolutely. Particularly where they've been found to be guilty.
> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption
Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.
> concerns that allowing teens
Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.
> We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
That can't be true, otherwise in what sense is it E2EE?
Has anyone actually audited it?
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
They very much want to push this liability off onto someone else...
As far as end-to-end encryption, on SM sites (social media or SadoMasochism, however you want to read it) I don't really see the need.
You don't see any benefit to allowing people to encrypt their private communications in a way that can't be accessed by the company?
It's weird to see tech news commenters swing from being pro-privacy to anti-privacy when the topic of social media sites come up.
Online child exploitation should be a strict liability offense.