I have been feeling very bitter about AI lately. I'm angry about how it's seeping into every aspect of life. Not just my work and my hobbies, but it also seems to be creeping into many online communities (including this one!)
I have been thinking a lot about how we could possibly rebuild any of the trust that we used to have online. Yes, bots have been a problem for a long time, but this is so much further beyond spam posting. LLMs have poisoned the online commons At Scale and there's likely no going back. It has made me very bitter, I won't lie.
However, that doesn't mean we can't find a way forward with something new that is somehow resistant to LLMs. I'm not sure exactly what that might look like, but I'm curious what ideas others have had.
My wish list would be something that:
* Is resistant to LLM "infiltration" for lack of a better word. We should be able to be relatively confident that people on the other end are real humans
* Does not require giving up all anonymity. It will likely require some identity authority, but interactions between users could at least remain pseudonymous
* Ideally is also resistant to LLM scraping. I personally find it demoralizing to think of sharing work publicly now just so LLMs can ingest it
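For the anonymity point, one shape this could take (a sketch of the idea, not a proposal) is blind signatures: an identity authority can attest "this account belongs to a verified human" without ever seeing, and therefore without being able to link, the pseudonym it is signing. Here's a toy RSA blind-signature flow in Python; the key sizes, names, and pseudonym are all made up for illustration, and a real system would use a vetted crypto library, not this:

```python
# Toy RSA blind-signature sketch (illustrative only, not real crypto).
# Flow: user blinds a hash of their pseudonym -> authority signs the
# blinded value (learning nothing about the pseudonym) -> user unblinds
# -> anyone can verify the signature against the pseudonym.
import hashlib
import math
import secrets

# Tiny demo RSA key for the "identity authority" (real keys are 2048+ bits).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def _hash_to_int(pseudonym: str) -> int:
    return int.from_bytes(hashlib.sha256(pseudonym.encode()).digest(), "big") % n

def blind(pseudonym: str):
    """User side: blind the hashed pseudonym with a random factor r."""
    m = _hash_to_int(pseudonym)
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:  # r must be invertible mod n
            break
    return m, r, (m * pow(r, e, n)) % n

def authority_sign(blinded: int) -> int:
    """Authority side: signs the blinded value, never sees the pseudonym."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """User side: strip the blinding factor to get a plain signature on m."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(pseudonym: str, sig: int) -> bool:
    """Anyone: check the authority's signature against the pseudonym."""
    return pow(sig, e, n) == _hash_to_int(pseudonym)

m, r, blinded = blind("throwaway_handle_42")
sig = unblind(authority_sign(blinded), r)
print(verify("throwaway_handle_42", sig))  # True
```

The point is the trust split: the forum checks the signature and learns "verified human," the authority signs blind and learns nothing about which handle it vouched for. It doesn't solve the harder problem (a verified human can still paste LLM output), but it covers the "identity authority without deanonymization" wishlist item.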
I know it's a big ask and maybe not realistic. I'm curious what HN thinks about this possibility though
Edit: This was partially inspired by the recent mod post discussed here: https://news.ycombinator.com/item?id=47340079
I respect that HN's mod team is willing to sort of leave this up to the honor system, but I think in the future we are going to need some serious ideas to strictly prevent this unwanted behavior, not just hope people will play nice
the real issue isn't bots, it's humans using ai. i'm doing it right now. English isn't my first language so i used an llm to translate my thoughts for this post. if the tech is this useful for bridging gaps, you can't really filter for a "soul" anymore. the line is already gone.
scraping is a lost cause too. if a human can read it, a model can ingest it
i guess the only fix is to stop scaling. go back to small, private, invite-only groups. intentional friction and making things "inconvenient" is the only filter left that actually works
I don't care about interacting with someone who is using machine translation for their thoughts, that doesn't bother me
I care about interacting with someone who is using machine generation in lieu of having thoughts
original japanese intent: それがまさに工夫した点で、あなたは "I" すらも大文字で書かない翻訳をするLLMなんてありえないと思ったんじゃない?だからこそ、全部小文字で書くように指示することで、AI臭を抑えることができると思ったんだ。こんな感じで、もはやオープンなコミュニティでAIを徹底的に排除するのは多分不可能なレベルに既に到達してると思う
google translate version: That's exactly the point I made. You thought there would be no LLM translating without even capitalizing "I," right? That's why I thought that by instructing everyone to write everything in lowercase, I could reduce the AI smell. In this way, I think we've already reached a level where it's probably impossible to completely eliminate AI in an open community.
an interesting note: you can see that the llm version i posted earlier is much more context-aware than the google translate one. the llm added phrases like "meta-discussion" and "mimicking flaws" because it understood the vibe and history of our entire chat, not just the raw text
Purpose-built mitochondria-powered logins?
Time to get back out there and meet people I guess