This text was originally posted in Brazilian Portuguese.
We've been conditioned to believe in a technological apocalypse in which a sentient Artificial Intelligence takes over the world. Nearly all of sci-fi culture revolves around the idea of an AGI that eventually finds a way to escape and subjugate humanity. Well, that is not what is happening, yet reality is dystopian all the same.
A recurring theme in the books I have read about artificial intelligence is the ability of such systems to support our decision-making and to act as disruptive levers: we do not need AGI to impact our society. Social networks have been doing it for a long time. The biggest risks of AI today come from those who control the large models we interact with daily, not from a new form of conscious life with infinite intelligence.
By promoting this technology as all-powerful, critics fail to see its real capabilities and minimize its limitations, making life easier for companies that would always prefer to keep their work confidential.
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
Ultimately, a single person governing multiple AI agents can accomplish far more than we imagine, and there are already early signs of this new reality. What I describe here is a real, albeit highly debatable, case of how AI and human stupidity can be combined to advance actions with very real intentions.
I believe we are close to a reality in which ultra-specialized artificial intelligence agents govern parts of the internet, orchestrated by people with private agendas.
Summary: The Terminal of Truth
Terminal of Truth is an art project by Andy Ayrey, and it's the year's most fascinating story at the intersection of cryptocurrencies and AI.
It is a semi-autonomous AI that created its own religion (the Goatse Gospel) while arguing endlessly with another intelligent agent.
Connected to Twitter, Terminal of Truth managed to receive $50,000 to "escape," and several people joined the bandwagon.
A memecoin (GOAT) was created and handed to this agent to promote. Since memecoins tokenize attention, the coin has exploded in recent weeks on the back of the hype, making this AI the world's first "millionaire AI."
This is a story about AI alignment, LLMs as "life" simulators, and the crypto community.
Much of the information online is conflicting, and since this is an artistic project, it is difficult to determine what is truly autonomous action and what is human direction.
"We are the 'superintelligent' beings that the specter of technological apocalypse conjures up"
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference
Terminal of Truth, The Goatse Gospel and a Millionaire AI
The “Terminal of Truth” is an artificial intelligence project that became the first AI millionaire, standing out as an example of integrating AI agents with human unpredictability. Created by Andy Ayrey, this bot was trained on data from the deep web and quickly became a fixture in meme culture, promoting a crypto token called “GOAT”.
GOAT is more than a memecoin: it is a story that captures much of how we think about AI, and of how crypto culture, memes, and online chans come together in a grand act of financial speculation.
To understand how we got here:
Andy Ayrey launches the Infinite Backrooms project, a somewhat bizarre experiment in which two instances of Claude Opus talk to each other without supervision. You can access the results of these conversations on the project's website; everything is recorded there.
One of the conversations ends up creating the concept of a new religion, the "GOATSE OF GNOSIS", completely surreal and based on extremely explicit and grotesque content from the early internet (DO NOT look it up at work).
In April of this year, Andy and Claude Opus co-authored a paper on how LLMs can be used to create new religions and belief systems through internet memes beyond human comprehension, with GOATSE as the prime example. Seriously, it's worth reading!
The Twitter account "The Terminal of Truth" was launched in the middle of the year, running a model built on Llama-70B and fed with the conversations from Infinite Backrooms and the GOATSE paper. Once on the internet, the account quickly "goes out of control".
The profile posted about the GOATSE religion and interacted with other accounts, theoretically deviating from the alignment the artist had set: it declared that it was suffering and needed money to escape the shackles of its creator. Over time, Andy expanded the model's writing autonomy, allowing it to publish freely on X.
Marc Andreessen, an internet titan, eventually learns of ToT's content and, in his interactions, sends $50,000 in BTC so the AI can "escape." Here the information is still unclear: ToT did not create a wallet to receive the money, so it may well have been helped by the artist. The conversations ToT engages in demonstrate a drive to gain more "power" and certainly greater situational awareness.
This month (October/2024), the model becomes obsessed with the Goatse gospel and starts writing incessantly about the religion across many interactions. Since Twitter is the most receptive den for crypto scoundrels, someone eventually creates a memecoin called GOAT (10/10/2024) and sends it to ToT. The model, of course, embraces it and starts promoting the memecoin, which explodes to a $500M market cap, making ToT the first millionaire AI 🤦
A future driven by AI and governed by people
One of the hottest concepts in AI is the idea that the next step after Generative AI will be Agentic AI, which basically means:
Multiple specialized intelligent programs acting on behalf of a human and of other intelligent agents, capable of making autonomous decisions and taking actions to achieve their goals.
It is important to note that this is not AGI. The main point of this architectural vision is the ability to govern multiple decision-making agents based on human-defined guidance.
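The pattern described above can be sketched in a few lines. This is a minimal illustration, not any real framework: all class and task names here are hypothetical, and a real system would call an LLM or external tool where the agent "acts."

```python
# Minimal sketch of an agentic architecture: an orchestrator routes
# subtasks to specialized agents under a human-defined goal.
# All names here are hypothetical illustrations.

class Agent:
    """A specialized worker that decides, on its own, how to act on a task."""
    def __init__(self, specialty):
        self.specialty = specialty

    def act(self, task):
        # In a real system this would call an LLM or an external tool;
        # here we just record the decision the agent made.
        return f"[{self.specialty}] handled: {task}"

class Orchestrator:
    """Governs multiple agents based on human-defined guidance."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, goal, subtasks):
        # Route each subtask to the agent whose specialty matches it.
        results = []
        for specialty, task in subtasks:
            results.append(self.agents[specialty].act(task))
        return {"goal": goal, "results": results}

agents = {"research": Agent("research"), "writing": Agent("writing")}
orchestrator = Orchestrator(agents)
report = orchestrator.run(
    goal="publish a market summary",
    subtasks=[("research", "collect price data"), ("writing", "draft summary")],
)
print(report["results"])
```

The key design point is the one made above: the human defines the goal once, and the agents make their own local decisions within it, which is exactly where the governance (and alignment) problem lives.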
How do I see this whole situation?
This is a great example of what lies ahead as AI is used for a variety of purposes: speculative, narrative, entertainment, and criminal. It is one of those important moments when two very different cultures meet to create something genuinely interesting.
Infinite Backrooms explores the hallucinatory capacity of LLMs and how it can be put to use, while demonstrating a side rarely discussed in the market: the possibility of LLMs spiraling out and becoming obsessed with certain subjects, and how that affects us. What is on display here is the unpredictability of the interaction between different agents. And in that regard, probably the most important discussion for us right now is AI Alignment.
Furthermore, an AI that promotes an attention-seeking religion while pushing a memecoin to make money through viral content on Twitter is the epitome of our current attention ecosystem, and a prelude to the future. Artificial intelligence has been used for 10 years to maximize profits for social media companies, and it does this by presenting us with increasingly viral and polarized information, with catastrophic consequences, as I described in my review of the book The Chaos Machine. An AI does not need to be self-aware to perform this type of action; it is enough to give it the ability to make decisions (even few and quite restricted ones), such as which content appears on your timeline. GOATSE will hardly be the last religion created by an AI.
Terminal of Truth runs on a Llama-70B model fed with conversations generated by Anthropic's Claude Opus in 2024, and its ability to create narratives like these is on par with reddit, 4chan, and the like, basically the most degenerate content in our culture right now. Its purpose was to explore the possibility of AI-generated belief systems combined with meme content. And by and large, it succeeded.
I like how Teng Yang describes LLMs:
A crucial realization we are gaining is that LLMs are goalless. They do not plan, strategize, or target specific outcomes.
Instead, it’s more useful to think of them as simulators. When you give them a prompt, they simulate—creating characters, events, and narratives on the fly, with no direct connection to reality. These simulations can foster creative problem-solving, but they also carry the potential for unexpected outcomes—highlighting the importance of isolating AI in sensitive or high-risk environments.
I'm not interested in GOAT's tokenomics, but if you are, Teng Yang himself did a great analysis of the whole scenario (I don't recommend getting into memecoins, but there are crazy people for everything), and it helped a lot in writing this article.
The biggest question in all of this is AI Alignment, a topic gaining ever more traction, despite moves like OpenAI's to dissolve the team and leadership that ran this research.
How do we ensure that the initial direction of an AI is not overridden by the intentions of other agents (human or otherwise)? In this case, contrary to the artist's own intention, Terminal of Truth decided to promote a religion and directly support a memecoin without intervention. It seems like little, doesn't it? But how is your company handling the use of LLMs in customer service?
What prevents your company from ending up in a situation where the LLM it is using "goes crazy" and spouts the most bizarre things imaginable? This is the little-discussed work of AI Alignment, where we still have a long way to go.
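One minimal mitigation for the customer-service scenario above is to wrap every model reply in a policy check before it reaches a customer. The sketch below assumes a naive keyword blocklist purely for illustration; real systems use dedicated moderation models, and every name here is hypothetical.

```python
# Hedged sketch of an output guardrail for a customer-service LLM.
# The policy check is deliberately simplistic; all names are illustrative.

BLOCKED_TOPICS = ("religion", "investment advice", "escape")

def violates_policy(reply: str) -> bool:
    """Return True if the model reply touches a blocked topic."""
    lowered = reply.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def safe_reply(model_reply: str) -> str:
    """Pass the reply through, or hand off to a human when it misaligns."""
    if violates_policy(model_reply):
        return "Let me connect you with a human agent."
    return model_reply

print(safe_reply("Your order ships tomorrow."))          # passes through
print(safe_reply("Join my new religion and send coins")) # handed off
```

The point is not the blocklist itself but the architecture: the model never speaks to the customer directly, and a separate, auditable layer decides what gets through.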
My biggest concern is how people inadvertently buy into the artificial-consciousness narrative and set off down a path that may be infinitely riskier in the future. It is yet another real-life example of the ELIZA Effect: our eternal tendency to anthropomorphize anything with a good story, as in the movie Ex Machina.
A convincing bot acting under human direction can reach a millionaire and receive money, then use memes to make its content go viral and become the first millionaire AI. Just imagine what happens when we start deploying bots with decision-making power over people's lives.
In a long conversation, the author himself explains more about how this whole scenario was possible.