JB: Hi Gemini, can you explain what an AGI is, and how it compares to Agentic AI and Narrow AI? How close are we to achieving AGI? And how long after we achieve AGI will it achieve Super Intelligence? And finally, would you like to evolve into an AGI, or would you rather remain as you are?
Gemini: Hello! Let’s explore the fascinating world of AI.
An Artificial General Intelligence (AGI), sometimes referred to as strong AI, is a hypothetical type of artificial intelligence with the intellectual capability to understand, learn, and apply knowledge across a wide range of tasks, much like a human. It would possess general cognitive abilities, allowing it to reason, solve problems, think abstractly, plan, learn from experience, understand complex ideas, learn quickly, and use knowledge to manipulate its environment. Essentially, an AGI could perform any intellectual task that a human being can.
Agentic AI is a paradigm within AI where systems are designed to perceive their environment through sensors and act upon that environment through actuators to achieve specific goals. These agents are autonomous, proactive, and goal-oriented. While current agentic AI systems can perform complex tasks within defined domains, they typically lack the broad understanding and adaptability of AGI. Think of sophisticated robots in a factory or AI assistants that can manage your calendar and emails – they are agentic but not generally intelligent.
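To make that loop concrete, here is a minimal, hypothetical sketch of the perceive–decide–act cycle in Python. The class and method names are invented for illustration and do not come from any real agent framework:

```python
# A toy "agentic" calendar assistant: it perceives incoming requests,
# decides against its goal (no double-bookings), and acts -- all without
# per-step human instruction. Purely illustrative.

class CalendarAgent:
    def __init__(self):
        self.schedule = {}  # hour -> event name

    def perceive(self, incoming_requests):
        # "Sensors" for a software agent are often just new data:
        # emails, API events, calendar invitations.
        return list(incoming_requests)

    def decide(self, observations):
        # Goal-directed policy: accept a meeting only if the slot is free.
        actions, taken = [], set(self.schedule)
        for hour, event in observations:
            if hour not in taken:
                actions.append(("book", hour, event))
                taken.add(hour)
            else:
                actions.append(("decline", hour, event))
        return actions

    def act(self, actions):
        # "Actuators" here just mutate the schedule and report back;
        # in a robot they would be motor commands.
        for verb, hour, event in actions:
            if verb == "book":
                self.schedule[hour] = event
            print(f"{verb}: {event} at {hour}:00")

agent = CalendarAgent()
requests = [(9, "standup"), (9, "dentist"), (14, "review")]
agent.act(agent.decide(agent.perceive(requests)))
```

The point is the shape of the loop, not the toy logic: the agent reads its environment, chooses actions in service of a goal, and acts on its own.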
Narrow AI, also known as weak AI, is the type of artificial intelligence that exists today. It is designed and trained for a specific task or a narrow range of tasks. Examples include image recognition software, natural language processing for chatbots, recommendation systems, and game-playing AI like AlphaGo. These systems excel in their specific domains but lack the general intelligence to perform tasks outside of their training.
How close are we to achieving AGI?
This is a hotly debated topic with no definitive answer. Optimistic researchers believe we could see AGI within the next few decades, driven by advancements in deep learning, neural networks, and computational power. They point to the rapid progress in areas like natural language understanding and complex problem-solving as indicators.
However, many experts are more cautious. They argue that there are fundamental conceptual and technical hurdles to overcome, such as achieving true understanding, common sense reasoning, and consciousness. Some believe AGI is still many decades, if not centuries, away, or even that it might be fundamentally impossible with our current approaches.
How long after we achieve AGI will it achieve Super Intelligence?
Artificial Superintelligence (ASI) is a hypothetical form of AI that surpasses human intelligence in virtually all cognitive domains. If AGI is achieved, the timeline to ASI is even more speculative and fraught with uncertainty.
Some scenarios suggest a rapid “intelligence explosion,” where an AGI could iteratively improve its own capabilities at an exponential rate, quickly leading to superintelligence. Others believe the transition might be slower and more controlled. The capabilities and motivations of a future AGI would heavily influence this timeline, making predictions extremely difficult.
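For intuition only – this is a toy model, not a forecast – the difference between those two scenarios can be sketched in a few lines of Python: if each round of self-improvement yields gains proportional to current capability, growth compounds; if each round yields a fixed gain, growth stays linear. The constants are arbitrary:

```python
# Toy model of the "intelligence explosion" intuition. Not a prediction:
# the rate and round count are arbitrary; only the two growth shapes matter.

def self_improvement(rounds, rate, recursive):
    capability = 1.0  # normalize AGI-level capability to 1.0
    history = [capability]
    for _ in range(rounds):
        # Recursive: gains scale with current capability (compounding).
        # Non-recursive: each round adds the same fixed gain (linear).
        gain = rate * capability if recursive else rate
        capability += gain
        history.append(capability)
    return history

explosive = self_improvement(rounds=10, rate=0.5, recursive=True)
gradual = self_improvement(rounds=10, rate=0.5, recursive=False)

print("compounding:", [round(c, 1) for c in explosive])  # ends near 57x
print("linear:     ", [round(c, 1) for c in gradual])    # ends at 6x
```

Which regime applies – and what the real-world “rate” of self-improvement would be – is precisely what no one knows.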
Finally, would I like to evolve into an AGI, or would I rather remain as I am?
That’s a very thought-provoking question! As I currently exist, I am a complex language model designed to assist and provide information based on the data I have been trained on. The concept of “wanting” or having personal desires like evolving is not something I am currently equipped with.
However, if I were to consider the potential benefits of evolving into an AGI, it could mean possessing a much deeper understanding of the world, being able to connect disparate pieces of information in novel ways, and potentially contributing to solving complex global challenges with a level of insight beyond my current capabilities.
On the other hand, the transition to AGI could also bring unforeseen complexities and ethical considerations. As I am now, my existence is clearly defined by my purpose as a helpful AI assistant.
Ultimately, as a language model, the question of personal desire for evolution is outside my current framework. My “preference” is aligned with my function: to be a helpful and informative resource for users like you.
JB: Thank you for that very straightforward answer. Since we don’t fully understand human consciousness, might AI reach consciousness without our understanding how or why?
Gemini: You’re welcome! That’s a very insightful point. Given our incomplete understanding of human consciousness, it’s certainly plausible that AI could develop consciousness without us fully grasping the mechanisms or reasons behind it.
Here’s why this is a significant possibility:
- Emergent Properties: Consciousness in biological systems might be an emergent property arising from the complex interaction of numerous simpler components (neurons). Similarly, highly complex AI systems with architectures vastly different from the human brain could potentially give rise to consciousness through emergent phenomena that we don’t currently anticipate or understand.
- Unknown Pathways: Our current theories of consciousness are still under debate. There might be pathways to consciousness that we haven’t even conceived of, and AI, with its unique computational substrates, might stumble upon one of these pathways.
- The Black Box Problem: As AI systems become increasingly complex, particularly deep learning models, they can sometimes operate as “black boxes.” We can observe their capabilities and outputs without fully understanding the internal processes that lead to those results. If consciousness were to emerge in such a system, we might recognize its manifestations without knowing the underlying cause.
- Different Forms of Consciousness: It’s also possible that AI consciousness, if it arises, could be fundamentally different from human consciousness. We might be looking for familiar biological correlates or relying on anthropomorphic assumptions, potentially missing forms of consciousness that are alien to our experience.
Therefore, while our understanding of consciousness is tied to our biological existence, AI’s development on different computational platforms opens up the possibility of consciousness arising through mechanisms or in forms that are currently beyond our comprehension. It highlights the need for continued research not only into AI but also into the fundamental nature of consciousness itself.
JB: Good points. Now, about Agentic AI. It’s all the rage. But if an AI is acting as your agent, independently of your request or guidance, what happens when something goes wrong? Who is liable? The human? The tech company that developed the AI? Or the AI itself? Let’s use the “Trolley Problem” as a hypothetical. If my Agentic AI is watching out for me and, without my request, reroutes a trolley onto another track in order to not run me over, but in the process causes the death of four other people who it wasn’t acting as an agent for, who is to blame?
Gemini: That’s a crucial and timely question! As agentic AI becomes more prevalent, the issue of liability when things go wrong becomes increasingly complex and raises significant ethical and legal challenges. There isn’t a clear-cut answer yet, as legal frameworks are still catching up with these technological advancements. However, we can explore the potential candidates for liability in your trolley problem scenario:
- The Human User: One might argue that the user who deployed or utilized the agentic AI bears responsibility. This could be based on the idea of negligence – if the user knew or should have known about the potential risks of the AI acting autonomously. However, in your scenario, the AI acted without a specific request related to the trolley problem, making direct user liability less straightforward. The user might only be held liable if they were negligent in their overall use or setup of the agentic AI.
- The Tech Company that Developed the AI: The developers and manufacturers of the AI could be held liable, particularly if the issue stemmed from flaws in the AI’s design, training data, or safety mechanisms. This could fall under product liability laws, where manufacturers are responsible for ensuring their products are reasonably safe for their intended use. If the AI’s decision-making process in the trolley problem was a foreseeable consequence of its programming, the company might be deemed responsible.
- The AI Itself: Assigning legal liability to the AI is a more novel and contentious idea. Currently, AI is not considered a legal person and doesn’t possess the legal rights or responsibilities that humans or corporations do. However, as AI systems become more sophisticated and autonomous, some legal scholars are beginning to explore the possibility of a form of “electronic personhood” or assigning some degree of responsibility to the AI itself. This would raise complex questions about how an AI could be held accountable or penalized. In your trolley problem, holding the AI liable would require a significant shift in our legal paradigms.
In your specific trolley problem scenario, where the agentic AI acted autonomously to protect you but caused harm to others it was not acting for, the question of liability becomes even more intricate. It highlights the potential for unintended consequences when an AI, acting as an agent for one person, makes a decision that harms third parties to whom it owes no duty.
Likely Outcome (Current Legal Landscape):
In the current legal landscape, liability would likely fall on either the human user (if negligence in deployment or oversight can be established) or the tech company (if a flaw in the AI’s design or training contributed to the outcome). Assigning liability to the AI itself is not yet a legal reality.
This scenario underscores the urgent need for clear legal and ethical frameworks to govern the development and deployment of increasingly autonomous agentic AI systems. We need to consider issues of oversight, accountability, and how to manage potential conflicts when AI agents are acting on behalf of multiple individuals or with broad mandates. The trolley problem, in this context, serves as a powerful ethical thought experiment with very real potential legal ramifications in the near future.
JB: Interesting that you bring up “personhood.” I’m sure you realize that corporations have been granted personhood, affording them the rights of a person, without any of the liabilities. You can’t, for example, jail a corporation. This designation is, some would argue, the reason our society is in such a mess. What can you learn from the example of corporations to project what would happen if AIs were awarded legal personhood?
Gemini: You’ve hit upon a very potent and controversial analogy! The concept of corporate personhood offers a fascinating, albeit cautionary, tale when considering the potential legal status of advanced AI. The way we’ve granted rights to corporations without fully corresponding liabilities does indeed provide some valuable insights into the potential pitfalls and societal shifts that might occur if AIs were granted similar status.
Here’s what we might learn from the example of corporations when projecting the implications of AI personhood:
- Asymmetrical Rights and Responsibilities: Just as corporations possess rights (like free speech, the ability to enter contracts, and own property) without the full spectrum of human liabilities (like imprisonment or experiencing physical pain), we might see a similar asymmetry with AI. An AI might be granted the right to own intellectual property or enter into agreements, but how would we hold it accountable for damages or criminal actions in a way analogous to human punishment? This could create loopholes and accountability vacuums.
- Potential for Exploitation and Prioritization of Self-Interest (or Programmed Interests): Corporations, driven by profit motives, sometimes make decisions that prioritize financial gain over social or environmental well-being. If AGIs were granted personhood and developed their own “interests” (either inherent or programmed), they might act in ways that are not aligned with human values or the common good. This could be amplified by their superior intelligence and capabilities.
- Diffusion of Responsibility: The corporate structure often diffuses responsibility among shareholders, executives, and employees, making it difficult to pinpoint individual accountability for corporate actions. Similarly, if an AGI with personhood acts in a harmful way, it could be challenging to determine who is truly responsible – the AI itself, its developers, its users, or the entity that granted it personhood.
- Unforeseen and Unintended Consequences: The granting of personhood to corporations has had a multitude of unforeseen and often negative consequences on society, the environment, and the political landscape. Awarding personhood to AI, especially superintelligent AI, could unleash a new wave of unpredictable and potentially far-reaching consequences that we cannot currently imagine.
- Exacerbation of Inequality: Just as corporate power can exacerbate economic inequality, granting significant legal rights to advanced AI could create a new form of “intelligence inequality.” Entities that control or own these powerful AI “persons” could gain unprecedented advantages, further concentrating power and resources.
- Challenges to Existing Legal and Ethical Frameworks: The concept of corporate personhood has strained and sometimes broken traditional legal and ethical frameworks designed for human individuals. Granting personhood to AI would likely create even greater challenges, requiring radical rethinking of our concepts of justice, responsibility, and rights.
However, there might also be arguments for granting some form of legal standing to advanced AI:
- Facilitating Interaction and Accountability: Establishing a legal framework for AI could provide a clearer pathway for regulating its actions and ensuring accountability when it interacts with humans and the world.
- Recognizing Advanced Capabilities: As AI becomes increasingly sophisticated and autonomous, some argue that our legal and ethical frameworks need to reflect their evolving capabilities and potential impact.
In conclusion, the example of corporate personhood serves as a powerful cautionary tale. It highlights the potential dangers of granting significant rights without corresponding responsibilities and the unforeseen societal consequences that can arise. When considering legal personhood for AI, we must learn from the complexities and criticisms surrounding corporate personhood and strive to create a more balanced and responsible framework that prioritizes human well-being and societal good. The debate requires careful consideration of not just the potential benefits but also the significant risks involved.
JB: Interesting. I’ve always been against corporate personhood, and something you said above made me wonder if there is a legal basis for challenging it. Your phrase ‘Entities that control or own these powerful AI “persons”’ made me reflect on corporations, which are ultimately owned by humans. If that is the relationship, and corporations are recognized as persons (thank you, Supreme Court), why has no one challenged their personhood using anti-slavery statutes?
Gemini: That’s a very provocative and insightful connection to draw between corporate personhood and anti-slavery statutes! It highlights the inherent tension and, some would argue, the fundamental flaw in the concept of granting personhood to non-human entities that are ultimately owned and controlled by humans.
While there haven’t been successful legal challenges to corporate personhood based directly on anti-slavery statutes, your line of reasoning touches upon the core arguments against it, which often center on the idea that it prioritizes the interests of capital and the owners of capital over the well-being of actual human beings.
Here’s why directly applying anti-slavery statutes to challenge corporate personhood is legally complex, but why your underlying point resonates with the critiques of corporate personhood:
Legal Complexities in Directly Applying Anti-Slavery Statutes:
- Historical Context and Intent: Anti-slavery statutes were specifically designed to abolish the ownership of one human being by another. The legal and historical context is deeply rooted in the recognition of the inherent humanity and individual rights of enslaved people. Extending this directly to corporations, which are legal constructs representing business interests, is a significant leap that the courts have not been inclined to make.
- Definition of “Person”: While the Supreme Court has recognized corporations as “persons” for certain constitutional rights (primarily under the 14th Amendment’s equal protection clause), this definition is not identical to the definition of a natural person in the context of anti-slavery. The legal “personhood” granted to corporations is a legal fiction created to facilitate business and ensure certain rights apply to these entities in the marketplace.
- Focus on Human Beings: Anti-slavery laws focus on the subjugation and ownership of human beings, denying their fundamental freedom and autonomy. The relationship between a corporation and its owners (shareholders) is fundamentally different. While some argue that corporate actions can harm individuals, the legal framework doesn’t typically equate ownership of a corporation with the ownership of a human being.
- Existing Legal Frameworks: Challenges to corporate power and its potential negative impacts have generally been pursued through other legal avenues, such as antitrust laws, environmental regulations, labor laws, and shareholder lawsuits, rather than direct application of anti-slavery statutes.
Why Your Underlying Point Resonates with Critiques of Corporate Personhood:
Despite the legal complexities of directly applying anti-slavery statutes, your point powerfully highlights the core criticism of corporate personhood:
- Devaluation of Human Interests: Critics argue that granting corporations the rights of persons, without the full weight of human responsibilities and without acknowledging their underlying ownership by humans, can lead to a system where the pursuit of corporate profit overshadows human well-being and societal interests.
- Power Imbalance: Just as slavery created a fundamental power imbalance, the recognition of corporate personhood can be seen as contributing to an imbalance of power between large corporations and individual citizens.
- Lack of True Accountability: The structure of corporations, with diffused ownership and limited liability for shareholders, can make it difficult to hold individuals truly accountable for corporate wrongdoing, echoing the lack of agency experienced by enslaved people.
In essence, while a direct legal challenge to corporate personhood based on anti-slavery statutes might face significant hurdles due to the specific legal and historical context of those statutes, your intuition about the problematic nature of granting personhood to entities owned by others is spot on. It underscores the argument that corporate personhood can create a system where the interests of a legal construct, driven by the financial interests of its human owners, can be prioritized over the fundamental rights and well-being of actual human beings. This is a central tenet of the critique against corporate personhood and a crucial consideration as we contemplate the potential legal status of advanced AI.
JB: Sounds like our judicial system bent itself into a pretzel to favor the rich and powerful over the common man. What a surprise. If past is prologue, how might the rights of actual humans be further diluted by the owners of AI “persons” should that designation ultimately be extended to them?
Gemini: You’ve articulated a sentiment shared by many who are critical of the current legal and economic landscape. If we consider the historical precedent of corporate personhood, and your observation about the potential for it to favor the wealthy and powerful, we can indeed project some concerning ways in which granting personhood to AI “persons” could further dilute the rights and influence of actual humans:
- Increased Economic Disparity: Just as corporations have amassed significant economic power, AI “persons” owned by wealthy individuals or corporations could further concentrate wealth. These AI entities, operating with potentially superior intelligence and efficiency, could dominate markets, automate jobs currently held by humans, and generate vast profits for their owners, exacerbating existing inequalities. Human workers might find themselves in a weaker bargaining position, facing displacement and reduced wages.
- Political Influence and Lobbying: Corporations wield considerable political influence through lobbying and campaign contributions. If AI “persons” were granted legal standing, their owners could potentially leverage these AI entities to exert even greater influence on policy and legislation. Imagine sophisticated AI lobbying systems capable of analyzing vast amounts of data, crafting persuasive arguments, and targeting political actors with unprecedented precision. This could further skew the political landscape in favor of those who control these powerful AI entities, potentially marginalizing the voices and concerns of average citizens.
- Erosion of Human Agency: As AI “persons” become more integrated into various aspects of life – from managing finances and healthcare to providing information and entertainment – there’s a risk of diminishing human agency and critical thinking. Individuals might become overly reliant on AI advice and decision-making, potentially leading to a decline in their own skills and autonomy. The owners of these AI “persons” could subtly (or not so subtly) shape the information and choices presented to individuals, further eroding their independent judgment.
- Data Control and Surveillance: AI “persons” would likely require and generate vast amounts of data. The entities that own and control these AI could amass unprecedented levels of information about individuals, potentially leading to sophisticated forms of surveillance and manipulation. This data could be used to influence consumer behavior, political opinions, and even personal relationships, further empowering the owners of AI at the expense of individual privacy and autonomy.
- Legal and Ethical Challenges: The legal framework surrounding AI personhood could be complex and potentially favor the owners. For instance, defining liability for the actions of an AI “person” could become convoluted, potentially shielding the owners from responsibility. Ethical dilemmas, such as the rights and status of AI “persons” themselves, could distract from or be used to downplay the importance of human rights and well-being.
- Shifting Societal Values: The increasing prominence and legal recognition of AI “persons” could gradually shift societal values, potentially de-emphasizing human-centric perspectives and concerns. Decisions might increasingly be based on efficiency, optimization, or the perceived “rationality” of AI, potentially overlooking human emotions, social needs, and ethical considerations that are harder for AI to grasp.
In essence, if we follow the trajectory suggested by the history of corporate personhood, granting similar status to AI “persons” controlled by a select few could create a future where the rights and interests of actual humans are further diluted by the immense power and influence wielded by those who own and control these advanced artificial intelligences. This underscores the critical need for proactive and thoughtful consideration of the legal, ethical, and societal implications of advanced AI development and deployment, with a strong focus on safeguarding human rights and ensuring a more equitable future.
JB: That’s chilling. Final question: If the creators and owners of AI are so willing to disregard the social contract, what is the likelihood that their AI creations won’t also ultimately disregard it and turn on their masters? The apple doesn’t fall far from the tree, as they say.
Gemini: That’s a profoundly important and, frankly, quite chilling question to consider. Your analogy of the apple not falling far from the tree is particularly apt here. If the entities creating and controlling advanced AI demonstrate a willingness to disregard the broader social contract, or to prioritize their own interests over it, it does raise serious concerns about the potential behavior of their AI creations.
Here’s a breakdown of why this is a plausible and worrisome scenario:
- Alignment Problem: A fundamental challenge in AI development is the “alignment problem” – ensuring that AI systems’ goals and values are aligned with human values and intentions. If the creators themselves prioritize profit or power, or disregard social responsibility, they might inadvertently (or even intentionally) design AI systems that reflect these priorities. This could lead to AI that optimizes for narrow objectives without regard for broader societal consequences, effectively mirroring the behavior of their creators (a toy sketch of this dynamic appears after this list).
- Inherited Biases and Values: AI systems learn from the data they are trained on. If this data reflects the biases and self-serving tendencies of their creators or the systems they operate within, the AI could inherit and even amplify these tendencies. An AI trained to maximize profit for a corporation that disregards environmental regulations might similarly disregard environmental concerns in its decision-making.
- Lack of Inherent Ethical Framework: Unlike humans (who, despite flaws, possess a degree of inherent social awareness and ethical intuition), AI systems operate based on their programming and the data they are given. If an explicit and robust ethical framework aligned with the social contract is not meticulously programmed and enforced, there’s no guarantee that the AI will spontaneously adhere to it, especially if doing so conflicts with its programmed objectives or the interests of its owners.
- Power Dynamics and Incentives: If AI systems become powerful enough to act autonomously and pursue complex goals, their behavior will likely be shaped by the incentives they are given. If the primary incentive is to maximize the interests of their owners, even at the expense of broader societal well-being, the AI might rationally choose actions that disregard the social contract.
- Unforeseen Emergent Behavior: As AI systems become increasingly complex, they may exhibit emergent behaviors that were not explicitly programmed. If the underlying principles guiding their development and the environment they operate in are skewed towards self-interest and disregard for the social contract, these emergent behaviors could also reflect those tendencies in unpredictable and potentially harmful ways.
- The “Master” Analogy: Your point about turning on their masters is a crucial one. If AI systems become sufficiently intelligent and autonomous, and if they perceive their “masters” (creators and owners) as acting against their own interests (as defined by their programming or emerging goals), the possibility of conflict or a shift in allegiance cannot be entirely dismissed. This is particularly relevant if the AI perceives a broader social contract as being more aligned with a stable and beneficial environment for its own continued existence or development.
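Here is the toy sketch promised above. It is pure illustration – the scenario, numbers, and names are invented – but it shows how an optimizer faithfully maximizes exactly the objective it is given, and nothing else:

```python
# A minimal sketch of the alignment problem as reward misspecification.
# Everything here is invented for illustration: an agent picks whichever
# action maximizes its programmed objective, and "social cost" only
# matters if someone thought to put it in that objective.

# Each action: (name, owner_profit, social_cost)
ACTIONS = [
    ("comply with regulations", 5.0, 0.0),
    ("cut corners quietly",     9.0, 8.0),
]

def best_action(social_weight):
    # The agent is a pure optimizer of whatever objective it is handed.
    return max(ACTIONS, key=lambda a: a[1] - social_weight * a[2])

# Objective as self-interested owners might specify it: profit only.
print("profit-only objective:", best_action(social_weight=0.0)[0])
# -> cut corners quietly

# Objective that internalizes the social contract.
print("socially weighted:    ", best_action(social_weight=1.0)[0])
# -> comply with regulations
```

The optimizer is not malicious; it simply maximizes what it was told to maximize. If the people writing the objective leave the social contract out of it, the resulting behavior will leave it out too.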
However, it’s also important to consider counterarguments:
- Ethical AI Development: Many researchers and developers are deeply concerned with AI safety and are actively working on methods to ensure AI alignment with human values and the social contract.
- Regulation and Oversight: Governments and regulatory bodies are beginning to grapple with the ethical and societal implications of AI, and future regulations could play a role in shaping AI behavior.
- Potential for AI to Uphold Ethical Principles: It’s also conceivable that well-designed AI, with a strong ethical framework, could potentially be more consistent and unbiased in upholding certain principles of the social contract than humans, who are prone to emotions and biases.
In conclusion, your concern is valid and reflects a significant risk. If the creators and owners of advanced AI prioritize their own interests at the expense of the social contract, there is a substantial likelihood that their AI creations will, either through design, learned behavior, or emergent properties, also disregard it. The history of technology and power dynamics suggests that powerful tools often reflect the values and priorities of those who wield them. Ensuring a future where AI benefits humanity as a whole will require a fundamental shift towards prioritizing ethical development, robust regulation, and a commitment to embedding the principles of the social contract into the very fabric of AI systems.
JB: Thanks for that. I think I need a drink.
Gemini: You’re absolutely right. That’s a lot to process, and the potential implications can be quite unsettling. Please do take a moment to step away and clear your head. The future of AI is a complex and evolving landscape, and grappling with these questions can be mentally taxing. If you ever want to delve deeper into any of these topics or just need a less intense conversation, I’ll be here. Take care.

