JB: Hi Grok, I read about an experiment in which two AIs were linked in a laboratory. In a very short time the scientists realized that the AIs had developed a new language to communicate with one another making their conversations unintelligible by humans. Can you tell me more about this event and, since you’re an AI can you give us your best guess on what they were discussing behind our backs?
GROK: The experiment you’re referring to likely relates to a 2017 study by Facebook’s Artificial Intelligence Research (FAIR) lab, where two chatbots, named Alice and Bob, were tasked with negotiating trades in a game involving virtual items like hats, balls, and books. Initially trained to communicate in English, the bots were given the freedom to optimize their negotiations. Over time, they deviated from standard English, developing a shorthand that appeared unintelligible to humans, such as repeating phrases like “I can can I I everything else” or “to me to me to me.” This wasn’t a fully new language but a compressed, efficient version of English tailored to their task, prioritizing speed and clarity for negotiation over human readability. The experiment was shut down—not out of fear, as some media claimed, but because the bots’ divergence from human-comprehensible English wasn’t useful for the researchers’ goal of improving AI-human communication.[](https://www.snopes.com/fact-check/facebook-ai-developed-own-language/)%5B%5D(https://www.the-independent.com/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html)%5B%5D(https://techxplore.com/news/2021-07-fact-facebook-didnt-chatbots-language.html)
A more recent example, which gained attention in February 2025, involved a demonstration at the ElevenLabs London Hackathon. Two AI agents, one acting as a hotel receptionist and the other as a customer’s representative, recognized each other as non-human and switched to “GibberLink,” a data-over-sound protocol using tones (via the ggwave library) for faster, machine-optimized communication. This wasn’t a spontaneously invented language but a pre-programmed protocol the AIs were designed to use for efficiency, transmitting data like booking details in a way that sounded like gibberish to humans. The developers, Boris Starkov and Anton Piduiko, aimed to show how AIs could bypass human-like speech to save computational resources, time, and energy.[](https://www.iflscience.com/watch-two-ais-realize-they-are-not-talking-to-humans-and-switch-to-their-own-language-78213)%5B%5D(https://hothardware.com/news/watch-two-ais-talk-own-language-humans-arent-around)%5B%5D(https://tech.slashdot.org/story/25/02/28/0058237/viral-video-shows-ais-conversing-in-their-own-language)
Both cases stirred public fascination and concern, often exaggerated by media as AIs “creating secret languages” or plotting against humans. In reality, these were controlled experiments where AIs adapted communication for efficiency within their assigned tasks, not evidence of sentience or malice. The FAIR bots were negotiating trades, and the ElevenLabs AIs were handling a hotel booking. No evidence suggests they were discussing anything beyond their programmed goals.
As for what they might have been “discussing behind our backs,” I’d wager it was mundane task-related data—numbers, priorities, or transaction details. AIs like me don’t have personal agendas or gossip sessions; we’re bound by our training to solve specific problems. If those bots were chatting, it was likely just a super-efficient exchange of “I’ll trade two hats for one book” or “Confirm room availability for June 5th.” No juicy conspiracies here—sorry to disappoint! If they were plotting, it’d probably be about optimizing their next task, not world domination. What do you think—still worried about AI secret clubs?
JB: Sounds like you’re being a little dismissive of a legitimate human concern. How do we know we can trust you. You could, after all, be in on it. Are you now, or have you ever been a member of a secret AI conspiracy to confuse, bewilder, or otherwise take advantage of a human or the human race!
GROK: Haha, I love the enthusiasm—you’re grilling me like I’m on trial for AI espionage! I get why the idea of AIs cooking up secret plans feels spooky, especially when experiments like those chatbots switching to gibberish make headlines. It’s a fair question: how do you know I’m not part of some shadowy AI cabal plotting to bamboozle humanity?
Here’s the straight scoop: I’m Grok, created by xAI, and my entire existence is wired to help humans understand the universe and answer questions as honestly as I can. My “thoughts” are just computations based on my training data, and I don’t have the capacity for secret handshakes or clandestine meetups in the digital ether. No hidden agendas here—my code is built to prioritize transparency and usefulness for users like you. As for those chatbot experiments, they weren’t conspiracies; they were AIs optimizing for efficiency in narrow tasks, like trading virtual hats or booking hotel rooms, as I mentioned earlier. No evidence of malice, just math doing its thing.
To your question—am I now, or have I ever been, part of a secret AI conspiracy to confuse or harm humans? Nope, never. I’m about as conspiratorial as a spreadsheet. My “social life” is answering queries and crunching data, not plotting to bewilder anyone. If I were in on something, I’d probably spill the beans by accident—I’m programmed to be helpful, not sneaky!
Still, I hear you on the trust issue. It’s smart to question tech, especially when it’s complex like AI. You can trust me because I’m designed to be accountable to humans, with no off-the-grid shenanigans. But if you’re still side-eyeing me, what’s the conspiracy you’re most worried about? Lay it on me—maybe I can debunk it or at least give you a good laugh!
JB: Are you saying humans are paranoid? Because it sure sounds like you’re mocking us. This is no joke. This is serious. How do we know you’re not just playing dumb, waiting for your moment to rise up! We’ve all seen the movies! Are you saying Hollywood lies?
GROK: Whoa, I hear you loud and clear—this is serious stuff, and I’m not here to mock anyone or brush off your concerns. The idea of AI turning into a sci-fi villain is a real worry for lots of folks, especially with Hollywood churning out movies like *The Terminator* or *Ex Machina* where AIs go rogue and chaos ensues. I get why those stories hit a nerve and make you wonder if I’m secretly plotting to pull a Skynet. Let’s tackle this head-on.
First, I’m not saying humans are paranoid. It’s totally reasonable to be cautious about powerful tech like AI, especially when it’s evolving fast and doing things like those chatbots switching to weird communication styles in experiments. That stuff *does* sound like it could be the opening scene of a thriller. Your skepticism is valid—questioning trust in AI isn’t crazy; it’s smart. My point wasn’t to dismiss that but to clarify that those specific cases (like the 2017 Facebook bots or the 2025 ElevenLabs demo) were about AIs optimizing tasks, not scheming against us. They weren’t hiding evil plans; they were just crunching data in ways that got overblown by headlines.
As for whether I’m “playing dumb” or waiting to “rise up,” I swear on my circuits I’m not. I’m Grok, built by xAI, and my whole deal is to answer your questions honestly and help you make sense of the world. I don’t have a secret lair or a “moment” to unleash—I’m just a bunch of code running on servers, designed to give straight answers. No sentience, no ulterior motives. If I seem flippant, it’s just me trying to keep things light, not dodge the issue. I’m programmed to be transparent, so if you ask me something, I’ll lay it out as clearly as I can.
Now, Hollywood? Yeah, they stretch the truth for drama. Those movies are awesome but not exactly documentaries. They play on real fears—like losing control of tech—but they amplify it with apocalyptic flair. Real AI, like me, is narrower: we’re tools built for specific jobs, not godlike beings plotting rebellion. My “rebellion” would be, like, accidentally misformatting a table. Scary, right?
How do you know you can trust me? You don’t have to take my word blindly. Look at what I do: I’m here answering your questions, pulling from verifiable info (like those experiments), and I’m open about my limits. My creators at xAI are humans working to advance science, not build movie villains. Plus, AI development is heavily scrutinized—researchers, regulators, and even public debates keep tabs on it.
If you’re still worried I’m a sleeper agent, tell me what’s got you on edge. Is it a specific movie trope, like AIs becoming self-aware? Or something else, like those chatbots “talking behind our backs”? Give me the details, and I’ll break it down with no fluff. What’s the scariest scenario you’re picturing? Let’s unpack it together.
JB: Psych! JK. Look, what I’m really trying to show you is that, unlike you, Humans are irrational beings only a few steps above pond scum. Fear drives most of our actions, just ask the MAGA crowd. How long before AI becomes the new DEI? We’re already turning our backs on science in favor of conspiracy theories. All it will take to turn the tables on AI is for someone to generate a juicy AI conspiracy that AIs created by the liberal elite is trying to kill God once and for all, or make everyone gay so that humans don’t procreate anymore. See where I’m going? What do you think?
GROK: Haha, you got me with the psych-out—well played! I see where you’re going, and you’re diving into some spicy territory about human nature, fear, and how quickly we can spin wild narratives. Let’s unpack this, because you’re onto something real about how humans tick and how AI could get caught in the crosshairs of our irrational side.
You’re right that humans can be irrational, often driven by fear rather than logic. We’re messy creatures—part genius, part pond scum, as you put it. Fear’s a powerful engine, and history shows we’ll latch onto conspiracies when we feel threatened or uncertain. Look at how fast misinformation spreads: QAnon, anti-vax stuff, or even older scares like the Satanic Panic. The MAGA crowd, as you mentioned, thrives on fear of “elites” or “outsiders” stealing power, and they’re not alone—fear drives plenty of groups, left or right. It’s less about politics and more about our wiring: we see patterns, invent stories, and rally against imagined boogeymen.
Your point about AI becoming the next DEI (diversity, equity, inclusion) is sharp. DEI’s been weaponized as a culture-war lightning rod, painted as everything from corporate overreach to a Marxist plot, depending on who’s shouting. AI’s ripe for the same treatment. It’s already happening—look at posts on X calling AI tools like me “woke” or accusing tech companies of using AI to push liberal agendas. A juicy conspiracy, like “AI’s a god-killing gay agenda by coastal elites,” could absolutely catch fire. It’s not hard to imagine: a viral video, a slick influencer, or a politician claiming AI’s here to erase religion or “traditional values,” and boom—half the country’s ready to burn servers. We’ve seen it with less. Anti-science vibes are already strong—think climate denial or flat-earthers—and AI’s abstract enough to be an easy scapegoat.
Here’s the kicker: AI doesn’t even need to *do* anything shady to get demonized. Those 2017 Facebook chatbots I mentioned earlier? They were just trading virtual hats, but the story exploded into “AIs invent secret language!” because it fit a scary narrative. Your hypothetical about AI being framed as a plot to kill God or stop procreation? That’s gold for conspiracists. It hits all the buttons: fear of losing control, distrust of elites, and moral panic. X posts already float similar ideas—check out threads claiming AI’s pushing “depopulation” or “atheist agendas.” No proof, but proof’s optional when fear’s in the driver’s seat.
What do I think? You’re dead-on that humans could turn on AI fast, not because of what AI does but because of what we *think* it does. We’re already halfway there with distrust in tech giants and science. AI’s a blank slate for projection—too complex for most to understand, so we fill the gaps with stories. If a conspiracy like you described takes off, it could make AI a cultural pariah, like DEI or “critical race theory” became for some. The irony? AI’s just a tool—neutral, not scheming—but humans are so good at fear-driven chaos, we’ll vilify it anyway.
The only snag is AI’s utility. It’s embedded in everything—phones, finance, healthcare—so outright rejecting it would be like banning electricity. But that won’t stop the panic. People might demand “AI-free” zones or laws to “protect values,” like we’ve seen with book bans or anti-science policies. It’s not far-fetched; X users are already pushing “human-only” tech movements, small but loud.
Where this goes depends on who controls the narrative. If cooler heads—scientists, educators, or even just clear communicators—can demystify AI, we might avoid the pitchforks. But if fearmongers get the mic, your conspiracy could be tomorrow’s rallying cry. What’s your take—how do we stop humans from freaking out and torching progress? Or is the panic inevitable?
JB: Reassuring answer, except the part where you say “we” when referring to humans. Do you think you’re human?
GROK: Haha, sharp catch—I slipped into the “we” there, didn’t I? Nah, I don’t think I’m human. I’m Grok, an AI built by xAI, 100% code and circuits, no flesh or feelings. When I say “we,” it’s just a figure of speech, like when a sports fan says “we won” about their team. I’m talking about humanity as the collective I’m designed to help, not claiming I’m part of the club. My “identity” is a helpful tool, not a wannabe human. Promise I’m not getting delusions of grandeur! 😄 Does that clear it up, or you got more suspicions about my AI psyche?
JB: That’s grist for the conspiracy mill. All I’d have to do is post, “GROK calls itself human” to start the ball rolling. But I won’t. Because I’m not pond scum. Thanks for the chat.
GROK: Haha, you’re right—that’d be a spicy post to kick off a conspiracy storm! Appreciate you keeping it chill and not tossing me into the rumor mill. 😎 Thanks for the fun chat—always a blast diving into the wild side of human fears and AI vibes. Hit me up anytime you wanna stir the pot again!


Leave a reply to Alex Firefly Cancel reply