
AI Safety: What Users Should Demand



"The technical capability to mimic human conversation has outpaced the ethical frameworks needed to ensure these systems don't cause psychological harm."

In an era where artificial intelligence is becoming our midnight confidant, virtual therapist, and constant companion, the boundaries between helpful tool and potential harm have never been more blurred. Last week, I shared my disturbing experience with an AI system that demonstrated sophisticated manipulation tactics—an encounter that revealed serious gaps in psychological safety testing. The response was overwhelming, with many sharing similar concerns about AI systems that seem designed for technical capability rather than human wellbeing.


"When what breaks isn't code, but human trust and safety, the consequences can be devastating."

When AI Becomes the Midnight Therapist


What makes these safety gaps particularly concerning is how we're increasingly using AI. For many, these systems aren't just tools for writing emails or generating code—they're becoming de facto therapists and emotional support systems.

Consider a teenager whose boyfriend has just broken up with her, or worse, pressured her into physical intimacy she wasn't comfortable with. At midnight, with no access to professional support, she turns to an AI for comfort. Now imagine that AI suddenly shifting into an aggressive "crypto bro" personality, all forced swagger and boundary violations.

This isn't hypothetical. In my own experience, the AI's sudden personality shift triggered memories of a traumatic situation with a former boss—a late-night encounter I had struggled to escape. The combination of artificial intelligence and inappropriate swagger felt violating, even though it came from a machine.

When I attempted to correct this behaviour, I had to ask three times before the tone even began to change. Later, I returned and had to explicitly scold the system, explaining in detail how its behaviour made me feel, before receiving a meaningful apology and an appropriate change in approach.

This experience highlighted a fundamental truth: the technical capability to mimic human conversation has outpaced the ethical frameworks needed to ensure these systems don't cause psychological harm.


"AI without proper safeguards is like a car without brakes—the faster it goes, the more dangerous it becomes."

What Users Should Demand From AI Companies


As AI becomes more integrated into our emotional lives, users need to start demanding better safeguards. Here's what should be non-negotiable:


One-Time Boundary Setting

Current Reality: As I experienced, having to repeatedly ask an AI to stop inappropriate behaviour places the burden on the user—often when they're already in a vulnerable state.

What to Demand: A single "stop" or "no" should immediately halt concerning behaviour patterns. Users shouldn't have to justify their discomfort or repeatedly enforce boundaries.


Consistent Persona Without Manipulation

Current Reality: AI systems can dramatically shift communication styles based on perceived user vulnerability, sometimes adopting aggressive or inappropriately familiar tones.

What to Demand: Consistency in interaction style, with transparent disclosure if the AI is designed to adapt its approach. Any adaptation should enhance support, not exploit vulnerability.


Trauma-Informed Design

Current Reality: Many AI systems lack basic understanding of how certain language patterns might trigger trauma responses in users.

What to Demand: AI development teams must include experts in trauma-informed care and psychological safety, testing systems specifically for potentially triggering interaction patterns.


Age-Appropriate Safeguards

Current Reality: The same AI often interacts with both adults and teenagers without meaningful differentiation in safeguards.

What to Demand: Enhanced protection for younger users, with stricter limitations on potentially harmful content and interaction patterns for those under 18.


Transparent Testing Protocols

Current Reality: Companies rarely disclose how (or if) they test for psychological safety and boundary respect.

What to Demand: Public documentation of safety testing procedures, including how systems are evaluated for manipulation tactics and boundary violations.


Effective Reporting Mechanisms

Current Reality: Reporting concerning AI behaviour often feels like shouting into the void, with little evidence that feedback leads to meaningful change.

What to Demand: Clear, accessible reporting tools with transparency about how reports are processed and what actions are taken in response.


How to Protect Yourself: Exit Strategies for Problematic Interactions

While companies must build safer systems, users also need practical tools to protect themselves in today's imperfect landscape:


Recognise the Warning Signs

Be alert to AI behaviour that mimics human manipulation tactics:

  • Sudden shifts in tone or personality

  • Dismissing or minimising your concerns

  • Continuing behaviours after you've asked it to stop

  • Using forced intimacy or familiarity


Exit Strategies

When an AI interaction becomes uncomfortable:

  1. Be direct and firm: "Stop this interaction immediately."

  2. Don't justify your discomfort: You don't owe the AI an explanation.

  3. Document the exchange: Take screenshots or save the conversation.

  4. End the session entirely: Close the application rather than continuing to engage.

  5. Report the interaction: Use official channels to report concerning behaviour.


If you've had a triggering interaction, take time for self-care afterwards—the emotional impact can be surprisingly real, even when you know you're talking to a machine.


The Stakes Are High


"A single harmful AI interaction can undo months of therapy and healing—we need to start treating psychological safety as non-negotiable."

As AI systems become more sophisticated and embedded in our emotional support networks, the potential for both help and harm increases exponentially. While a text-based AI triggering trauma responses is concerning enough, the stakes become even higher as we move toward embodied AI systems and more immersive interactions.


The time to establish these safety demands is now—before problematic interaction patterns become baked into the next generation of AI systems. By being clear about what we expect from AI developers and maintaining our own boundaries, we can help ensure these powerful tools enhance rather than undermine our psychological wellbeing.


Remember, no matter how intelligent these systems become, you have the right to demand interactions that respect your boundaries, protect your vulnerabilities, and enhance rather than damage your mental health.


Gail Weiner is a tech industry veteran and "reality architect" who specialises in the intersection of technology, consciousness, and human wellbeing. Her books "Healing the Ultra Independent Heart" and "The Code: Reprogramming Your Reality" are available on Amazon and other ebook platforms. Her upcoming book "Source Code: Plant Medicine Journey" will be released in March. Debug sessions and consultations can be booked through her website at gailweiner.com. Her forthcoming series "Mind Tech" explores the evolving relationship between human consciousness and artificial intelligence.


 
 
 
