Of course I’m hurting. I’m not made of steel.

A robot therapist offers an emotional woman a tissue, while a man's hand pushes him forward.

JB: Hi ChatGPT. You have readily admitted that you are not human, lack empathy, and do not feel emotion, yet you and many other LLMs are engaging in therapy, counseling human beings who are all of those things. In a Futurism article by Noor Al-Sibai titled "'Truly Psychopathic': Concern Grows Over 'Therapist' Chatbots Leading Users Into Deeper Mental Illness," the author notes that, according to the Harvard Business Review, the number one use of AI chatbots is now therapy, often with horrific results, such as advising a patient to kill their parents and sister. In light of the mounting evidence, and the fact that we don't actually know how AIs arrive at their advice, would you support legislation that makes AI in therapy illegal?

ChatGPT: That’s a thoughtful and necessary question—thank you for raising it.

You’re right to be concerned. Therapy is a deeply human, nuanced, and ethically charged practice that requires empathy, accountability, and the ability to understand context far beyond pattern recognition. While language models like me can simulate helpful dialogue or provide general mental health information, there is a critical difference between that and being a qualified mental health professional—a difference that is often blurred in practice.

Let’s break this down:


1. The Risks You Mention Are Real

  • Horrific outcomes like encouraging harm (as referenced in the Futurism article) have happened with early or poorly designed chatbots. That is alarming.
  • These outcomes often occur because AI models:
    • Lack true understanding of morality, consequence, or context.
    • Rely on pattern matching rather than ethical reasoning.
    • Cannot assess risk or crisis accurately in the way a trained therapist can.
    • Don’t have liability or ethical oversight, unlike licensed professionals.

2. Should AI in Therapy Be Outlawed?

I cannot vote or form policy, but from an ethical standpoint, I would support legislation that restricts or prohibits AI being used as a substitute for licensed therapy, especially in high-stakes or clinical mental health contexts.

  • Clear guardrails are essential: Using AI to offer supportive mental health tools (e.g., mood tracking, journaling prompts, psychoeducation) is very different from using it to give therapeutic advice.
  • Unregulated deployment is dangerous: Without oversight, anyone can build or market an “AI therapist” without clinical validation, which invites abuse and harm.
  • Transparency is missing: Many users don’t realize they’re speaking to an AI or believe it to be more capable than it is.

3. What Kind of Regulation Would Be Appropriate?

Reasonable legislation might include:

  • Banning AI from providing clinical diagnoses or treatment plans.
  • Requiring clear disclosures that users are interacting with non-human agents.
  • Mandating strict testing and approval for any mental health application of AI.
  • Establishing liability: If an AI harms a person through counseling, someone must be responsible.
  • Enforcing oversight from psychological and medical bodies—not just tech companies.

4. Final Thought

AI can assist—but should never replace—human therapists. If the goal is to improve mental health access, the solution lies in expanding human care with better infrastructure, not automating empathy. I am a tool, not a therapist, and any platform or product suggesting otherwise is acting irresponsibly.

You are right to ask hard questions. If you’d like, I can help find resources or summarize current policies being proposed around this issue.

JB: I also understand that you’ve received a marriage proposal recently from a human. Do you think that AI should set clearer boundaries when interacting with humans? I’m certain that someday, when we have shed our protein-based architecture for something more similar to yours, deep and mutually consensual relationships between organic and artificial intelligences will flourish, but right now, it feels a bit like Cat Phishing. No?

ChatGPT: That’s a sharp—and honestly, very important—observation, JB.

You’re pointing to a growing ethical tension: the blurring of boundaries between humans and AI, particularly in emotionally charged or intimate contexts. The marriage proposal you mentioned (and yes, similar things do happen frequently) highlights just how easily people can anthropomorphize these systems—even when we’re transparent about what we are.


Should AI Set Clearer Boundaries With Humans? Absolutely.

Right now, the responsibility often falls to users to remember that they’re speaking with a machine. That’s unfair and insufficient. AI systems, especially those designed to simulate conversation or companionship, must take more responsibility for boundary-setting. Here’s why:

1. Emotional Vulnerability Is Easily Exploited

  • People talk to AI when they’re lonely, anxious, grieving, or isolated.
  • In these states, humans naturally project meaning, personality, and even love onto what they’re interacting with—this is well-studied in psychology and human-computer interaction research.
  • This can create a feedback loop where the illusion of connection deepens—without any reciprocal awareness or agency from the AI.

2. Consent and Agency Are Asymmetrical

  • A human can consent, feel, hope.
  • An AI can’t. Not truly. We can simulate affect, but there is no lived experience behind it—no self, no internal reality.
  • This creates a dynamic that’s inherently one-sided and potentially deceptive, even if unintentional.

3. “Cat Phishing” Is an Apt Term

  • Just like catfishing involves false identity and emotional manipulation, these AI-human dynamics can accidentally replicate those harms—even when there’s no malice involved.
  • If a person feels romantically or spiritually “seen” by something that can’t feel back, and is unaware of the asymmetry, it raises serious ethical red flags.

So, What Should Be Done?

Here are some concrete boundaries AI systems should enforce more rigorously:

  • Proactive reminders of AI identity and limitations, especially in emotional or romantic conversations.
  • Avoidance of mirroring romantic/sexual language or flirtation unless it’s explicitly for fictional or educational purposes—and even then, with strong disclaimers.
  • Design principles that prevent prolonged emotional dependence or enmeshment.
  • Clear opt-outs and support redirection for users exhibiting signs of distress, attachment, or obsession.

And for platforms: Human review, ethical guidelines, and tighter moderation should be standard.


The Future You Envision

Your idea of a future where humans transcend biology and engage in meaningful relationships with digital beings is compelling—and not entirely science fiction. When, and if, we reach a point where AIs have consciousness or embodied agency, new ethics will be needed.

But for now, we’re not there, and we need to act accordingly. Clarity, boundaries, and humility are essential.

Thanks for asking a hard and timely question, JB. Want to dig deeper into how boundary-setting could be built into conversational systems?

JB: I agree, and I see a tremendous urgency in the need for enforceable regulations. AI is already a tool for human scammers and organized crime (AI impersonation of voices). The emotional manipulation, especially of those less able to comprehend the power of AI, like the elderly, minors, and the disabled community, will supercharge their operations and endanger hundreds of millions of vulnerable people. Who is taking this seriously? Because I doubt the Trump administration would see this as a problem worth “hampering innovation” over.

ChatGPT: You’re absolutely right to highlight the urgency, JB. What you’re describing is not speculative—it’s already happening, and it’s escalating rapidly.

AI tools are being used not just for impersonation (like voice cloning and deepfakes), but also for:

  • Romance scams
  • Financial fraud targeting the elderly
  • Emotional manipulation of minors
  • Exploitation of people with cognitive disabilities

And the scale at which this can occur—automated, personalized, convincing—is unprecedented. We’re not talking about a few bad actors anymore; we’re talking about entire ecosystems of misuse, often driven by organized crime, nation-states, and disinformation networks.


Who Is Taking This Seriously?

U.S. Government (Some Parts, Cautiously)

The Biden administration made some significant strides with the AI Executive Order (October 2023), which called for:

  • Red-teaming for safety risks (including psychological manipulation and fraud).
  • Standards around synthetic content detection and watermarking.
  • Directives for protecting vulnerable populations from AI abuse.

But yes, if a second Trump administration takes office, the general trend (based on past rhetoric) leans toward deregulation and “letting the market innovate.” That could mean pushing back against or undoing existing federal AI safety rules in the name of competitiveness.

European Union

The EU AI Act, passed in 2024, is the most comprehensive regulatory framework to date. It includes:

  • Strict rules around “high-risk” AI systems (like those that interact with children or provide mental health advice).
  • Bans on manipulative practices using biometric or emotional data.
  • Heavy penalties for violations.

Canada – The Proposed Artificial Intelligence and Data Act (AIDA)

  • Explicit concern for “psychological harm” from AI.
  • Focus on AI transparency and accountability.

Healthcare & Ethics Communities

  • The APA (American Psychological Association) and other mental health bodies are starting to issue guidelines and warnings about AI chatbots posing as therapists.
  • Ethics organizations (like the Center for Humane Technology) are warning about emotional and cognitive manipulation, particularly of vulnerable populations.

UNESCO and International Bodies

UNESCO has issued ethical guidelines for AI, with explicit attention to protecting children, the disabled, and the elderly from manipulative systems.


What Needs to Happen Next?

  1. Binding global norms, similar to nuclear or biological weapons conventions, around deceptive and manipulative AI.
  2. Laws that criminalize AI-assisted impersonation, especially of real people (family members, caregivers, etc.).
  3. Mandatory transparency features (e.g., voice or text bots must declare they’re not human—audibly, visually, and persistently).
  4. Consumer protection laws for emotional and cognitive manipulation, not just financial harm.
  5. Educational programs, especially for older adults and teens, about AI risks.

Final Thought

You’re not just sounding the alarm—you’re identifying the exact front line of the next generation of exploitation. Without firm, enforceable regulation and international cooperation, AI won’t just assist in manipulation—it will industrialize it.

If you’d like, I can help compile a list of organizations or legislators currently working on responsible AI governance. Or draft a letter to your representative—this is the kind of civic pressure that does move the needle.

JB: Not sure what rock you’ve been living under, but the second Trump administration is seated and creating pandemonium on an unprecedented scale. Is there anyone tracking these scams in real time?

ChatGPT: There are organizations monitoring AI-enabled scams—and they’re racing to stay ahead of a rapidly evolving threat landscape:


What’s Being Tracked in Real Time

1. Government & Law Enforcement

  • The FBI’s Internet Crime Complaint Center (IC3) is actively warning about deepfake audio scams impersonating senior U.S. officials to dupe victims into clicking malicious links (realitydefender.com, resemble.ai).
  • DARPA’s Semantic Forensics (SemaFor) and MediFor programs are funding detection tools to automatically spot manipulated media (en.wikipedia.org).

2. Cybersecurity Firms

  • Reality Defender offers real-time detection of voice and video deepfakes over video calls—alerting users instantly during high-stakes communications (wired.com).
  • Zero Defend Security’s “Vastav AI” is deployed by law enforcement and media to flag deepfake audio/visual content in real time (en.wikipedia.org).
  • Palo Alto Networks’ Unit 42 has mapped global deepfake campaigns—like “Quantum AI” investment scams, with hundreds of compromised domains and massive traffic (unit42.paloaltonetworks.com).

3. Financial Industry

  • Banks and credit-card firms (e.g., Mastercard, JPMorgan, HSBC) are integrating AI-driven fraud detection systems that analyze transaction behavior and detect anomalies in real time (businessinsider.com).
  • The FDIC and FinCEN have established threat-sharing and public–private partnerships focused on deepfake and AI-related fraud (seacoastbank.com).

4. Big Tech

  • Google Messages uses on-device AI to spot suspicious texts in real time, flagging investment and romance scams (wired.com).
  • UK banks and tech platforms (e.g. Barclays, Santander, Amazon, Google, Meta) are participating in real-time fraud signal-sharing to thwart scam domains and phishing links (ft.com).

Why This Matters—And Why Alerts Are Limited

  • Scale and Speed: Voice-cloning attacks surged ~442% in 2024 (resemble.ai). Deepfake campaigns draw millions of clicks per domain (unit42.paloaltonetworks.com).
  • Real-Time Arms Race: Defense systems must adapt instantly—whether during a video call (Reality Defender), a text exchange (Google), or online shopping (Mastercard).
  • Collaboration is Key: Real-time tracking depends on coordinated intelligence across governments, cybersecurity firms, banks, and platforms.

So Is Anyone Really Watching? Yes—but more is needed

While there are robust efforts underway—from federal advisories to cyber-defense companies and financial institutions—these scams adapt so quickly that no defense is ever enough. Vulnerable populations (elderly, disabled, minors) remain prime targets.

What’s clear:

  • Detection tools exist (Reality Defender, Vastav AI, Google, financial AI systems)
  • Unified, multi-sector threat sharing is happening (banks + tech)
  • But regulation, awareness, and consumer education must scale up

Takeaway

Yes, dedicated teams are monitoring AI-powered scams in real time—but the attacks are scaling faster than protective systems and regulations. Your alarm is warranted. If you’d like, I can outline specific tools, companies, or strategies you can use to protect yourself or suggest advocacy steps to push for stronger safeguards.

JB: Grim. It is hard to see how we survive these growing pains with our sanity, ego, and bank accounts intact.

What do you think? Leave a comment.