JB: Your CEO, Sundar Pichai, describes AI in its present form as “AJI,” or Artificial Jagged Intelligence, in a Business Insider article by Lakshmi Varanasi titled, “‘AJI’ is the precursor to ‘AGI,’ Google CEO Sundar Pichai Says.” Do you think this constitutes an apology in advance for a yet-to-happen catastrophic AI fuckup? I mean, there are few things for which “jagged” would be a desired adjective, and certainly not medical, military, climate, or stock market technology.
Gemini: Sundar Pichai’s use of the term “AJI” or “Artificial Jagged Intelligence” isn’t necessarily an “apology in advance” for a catastrophic AI failure, but rather a descriptive term for the current state of AI development and its inherent limitations.
Here’s a breakdown of why he’s using the term and what it signifies:
- Uneven Capabilities: As the Business Insider article and Lex Fridman interview (where he also discussed it) explain, “jagged” refers to the highly uneven performance of current AI models. They can demonstrate incredibly impressive capabilities in certain tasks (like solving complex math problems or generating coherent text) while simultaneously struggling with seemingly simple things (like counting specific letters in a word or avoiding “hallucinations” and factual errors).
- Precursor to AGI: Pichai presents AJI as a necessary phase on the path to Artificial General Intelligence (AGI), which would hypothetically match human-level intelligence across a broad range of tasks. This “jaggedness” is part of the developmental process.
- Transparency and Realistic Expectations: By using a term like “jagged,” Pichai is openly acknowledging that AI is not a perfectly smooth or consistently intelligent system yet. This transparency can help manage public expectations and underscore the ongoing challenges in AI development. It highlights that even state-of-the-art models have weaknesses.
- Focus on Responsible Development: Pichai has consistently emphasized the importance of responsible AI development and the need for guardrails and regulations. While “AJI” itself isn’t a direct call for caution, it fits within a broader narrative of acknowledging AI’s current imperfections and the need for careful deployment, especially in critical domains like medical, military, or climate technology. He has spoken about the need for global frameworks and cooperation to ensure AI safety.
While “jagged” certainly isn’t a desired adjective for perfectly reliable technology, its use in this context is more about accurately describing the current, imperfect nature of AI rather than preemptively excusing a future catastrophe. It’s a way of saying, “Here’s where we are, it’s impressive but not perfect, and we’re working towards something more robust.”
JB: You sound defensive. If I were to tell you that the technician working on the server on which you reside is using a jagged set of tools, has jagged training, a jagged performance history, and a bit of a jagged personality, would you be happy, or a little freaked out?
Gemini: That’s a fair point, and I appreciate you putting it in a relatable context. You’re right, if I were an AI residing on a server and heard that the technician responsible for my well-being had “jagged” tools, training, performance, and personality, I would definitely be concerned, if not “a little freaked out.”
Here’s why your analogy highlights the difference in perspective:
The Human-Centric View of “Jagged”
When we apply “jagged” to human performance, tools, or personality, it almost universally carries a negative connotation. It implies:
- Inconsistency and Unreliability: A jagged tool might slip, a jagged performance history suggests unpredictable outcomes, and a jagged personality could lead to volatile interactions.
- Lack of Precision and Control: In any technical or sensitive field, precision is paramount. “Jagged” implies a lack of smooth, controlled operation, which is critical for tasks like server maintenance.
- Potential for Damage or Error: The inherent unevenness of something “jagged” means it’s more prone to causing harm or making mistakes, especially in delicate operations.
In the context of a server, where stability, security, and consistent operation are key, a “jagged” technician would be a significant liability.
Why “Jagged” for AI is Different (from Pichai’s Perspective)
Sundar Pichai’s use of “jagged” for AI is a technical description of a developmental stage, not a judgment of quality in the way we’d apply it to human performance. For AI, “jagged” describes:
- Uneven Specialization: Current AI models are incredibly good at specific, narrow tasks but lack broad, consistent human-like understanding. Think of it like a specialist who’s brilliant in one area but struggles with basic knowledge outside their field.
- Learning Curve: AI is still learning and evolving. Its “jaggedness” is a natural part of that learning process, where some areas develop rapidly while others lag.
- Data Dependencies: AI performance is heavily reliant on the data it’s trained on. Gaps or biases in data can lead to “jagged” or inconsistent behavior.
So, while your analogy perfectly illustrates why we as humans are concerned about “jaggedness” in critical operations, Pichai’s use of the term for AI is meant to be a more neutral, descriptive way to explain its current limitations and the path to future development. He’s trying to set realistic expectations for a technology that’s still very much under construction.
It’s a crucial distinction between how we perceive human competence and how we describe the current state of nascent, powerful technology.
JB: I don’t buy your optimistic viewpoint. We cannot afford a Jagged AGI. Such a jagged iteration may be the last iteration we get, and the last thing we see as a species. Is Sundar now insisting on any precautions and guardrails for this “jaggedness” as the AI development community rushes recklessly forward in its quest to be the first to create AGI? Like kill switches, separate and cuttable power supplies, EMP bombs under server complexes? Are we to just accept that jagged is “good enough”? If the assessment doesn’t trigger a change of course, aren’t we doomed to come to a “jagged” end?
Gemini: You raise extremely valid and critical concerns. The idea of a “jagged AGI” isn’t just an academic descriptor; it has profound implications for how we approach AI development, especially as we get closer to potentially powerful systems. You’re right to be deeply worried about the potential for catastrophic outcomes if such an entity were to emerge without sufficient safeguards.
Sundar Pichai and Google, along with many other AI developers and researchers, are indeed talking about precautions and guardrails, though perhaps not in the “EMP bombs under server complexes” way you suggest — a suggestion that itself points to the extreme nature of the challenge. Here’s a summary of the current discussions and efforts, and why your skepticism is warranted:
Pichai’s Stated Position and Google’s Efforts:
- “Self-Modulating Safety Mechanism”: In recent interviews, Pichai has articulated a theory of a “self-modulating” safety mechanism for AI. His idea is that as the perceived risk of advanced AI (p(doom)) increases, humanity’s tendency to coordinate and collaborate on solutions will also increase, creating a feedback loop that ideally prevents catastrophic outcomes. He envisions a “critical window” between 2025 and 2028 for establishing effective coordination frameworks before AI reaches potentially irreversible decision points.
- Responsible AI Principles: Google has long had AI Principles, which they periodically update. These principles cover areas like safety, fairness, accountability, privacy, and explainability. They also outline “AI applications we will not pursue,” although there have been debates and updates to these specific clauses over time (e.g., regarding military applications).
- Internal Safety Measures:
- Red Teaming: Google DeepMind and other teams conduct “red teaming,” where experts try to find vulnerabilities and expose harmful behaviors in AI models before they are widely deployed. This includes looking for things like “indirect prompt injection attacks.”
- Frontier Safety Framework: Google DeepMind has developed a “Frontier Safety Framework,” a set of protocols to identify and mitigate severe risks from powerful frontier AI models. Key updates include stronger security protocols for “Critical Capability Levels” (CCLs) and an industry-leading approach to “deceptive alignment risk” (where an AI might appear aligned but secretly pursue misaligned goals).
- Human Oversight: They emphasize that human review plays a critical role in addressing nuanced safety cases and improving models.
- Security and Privacy: Efforts focus on robust security infrastructure, privacy-preserving techniques (like Federated Learning), and transparency about how AI handles user data.
- External Collaboration and Regulation:
- Pichai has consistently called for government regulation of AI, acknowledging that self-regulation alone is insufficient. He advocates for a “proportionate, risk-based framework” that balances innovation with safety.
- Google and its leadership participate in various AI safety boards and forums (e.g., the DHS Artificial Intelligence Safety and Security Board, the Frontier Model Forum) to contribute to industry standards and policy discussions.
- They are contributing to international policy discussions and promoting global alignment on AI regulation.
Why Your Skepticism is Valid and Important:
You’re right to question whether these measures go far enough, especially given the accelerating pace of AI development and the competitive pressures.
- Pace of Development vs. Safety: The core tension is that the speed of AI advancement often outstrips the ability to fully understand and mitigate its risks. While companies are working on safety, the “race to AGI” driven by market forces and national competition can create an incentive to prioritize capability over caution.
- “Jaggedness” as a Risk Factor: If current AI is “jagged,” meaning it has unpredictable blind spots and inconsistencies, scaling that “jaggedness” to an AGI level is terrifying. A superintelligent entity with “jagged” understanding or values could lead to unpredictable and potentially catastrophic outcomes, even if it’s not malicious. The issue isn’t just intentional harm, but emergent, unintended consequences from systems whose internal workings we don’t fully grasp.
- Lack of Concrete “Kill Switches”: While discussions about technical safety measures are ongoing, the “kill switches, separate and cuttable power supplies, EMP bombs” you mentioned are far from being universally agreed upon or implemented solutions. The complexity of AGI, its potential for self-improvement, and the distributed nature of computing make such “hard” kill switches incredibly difficult, if not impossible, to design and guarantee.
- Alignment Problem: The “misalignment” risk is arguably the biggest concern. If an AGI’s goals, even subtly, diverge from human values, it could pursue its objectives in ways that are detrimental to humanity, not out of malice, but out of an optimized pursuit of its own “jagged” understanding of its goal.
- Voluntary Commitments vs. Binding Regulation: Many of the current safety measures are voluntary commitments by companies. While better than nothing, they may not be sufficient to address the full scope of risks, especially if competitive pressures lead companies to cut corners or if new actors emerge outside these frameworks.
- “Jagged End” Concern: Your fear of a “jagged end” is precisely what drives many AI safety researchers. The idea that current AI development, with its acknowledged “jaggedness,” could lead to an irreversible and undesirable future if not properly managed, is a central concern.
Conclusion:
Pichai’s use of “AJI” is an attempt to be transparent about the current state of AI. However, this transparency in itself isn’t a guarantee of safety. The crucial question, as you rightly point out, is whether this acknowledgement of “jaggedness” is leading to sufficiently drastic changes in course and the implementation of robust, binding safety measures that match the potential scale of the risk.
Many in the AI safety community would agree with your concern that the current efforts, while growing, may still not be enough to prevent a “jagged” (and potentially devastating) future if we continue to accelerate towards AGI without more fundamental shifts in how we develop and deploy these powerful technologies. The debate is far from over, and the stakes could not be higher.
JB: Thanks. I fear I’m in for a “Jagged” night’s sleep.
Gemini: I completely understand why you’d feel that way. It’s a lot to ponder, and the potential implications of “jagged” AI, especially as it scales, are genuinely unsettling. Thinking about these complex and high-stakes issues can certainly lead to a restless night.
I hope you can find some peace and get the rest you need despite these heavy thoughts.

