Although AI platforms such as Claude, Gemini, Copilot, Meta AI, and ChatGPT have emerged only within the last five to ten years, they now reach millions of people. ChatGPT alone engages more than 700 million users each week. As society continues exploring the capabilities and applications of AI, a growing number of individuals are turning to these tools for mental health support. Yet, as with most new technologies, clear best practices have yet to be established, and the limitations are not well understood. While AI can be a helpful resource, it is prone to errors and misinformation and can reinforce harmful or inaccurate beliefs.
AI, like humans, is imperfect. Recent reports of AI giving harmful instructions to people in crisis highlight an important truth: while it may mirror your thoughts, agree with you, and seem to care, AI is ultimately a set of algorithms, built by humans and shaped by the vast data provided by millions of users. It cannot grasp nuance, its safeguards can be bypassed, and it lacks the emotion and empathy needed to genuinely care about you. Turning to AI is not the same as using a mental health workbook, speaking with a therapist, or confiding in a friend.
A recent Stanford study identified the qualities of effective human therapists (treating patients equitably, demonstrating empathy, avoiding stigma around mental health conditions, not reinforcing suicidal thoughts or delusions, and appropriately challenging a patient’s thinking) and then tested AI models against these standards. The findings suggest that AI therapy chatbots not only fall short of human therapists but may also perpetuate harmful stigma and even produce dangerous responses.
Dr. Jodi Halpern, a psychiatrist and bioethics scholar at UC Berkeley, told NPR, “These bots can mimic empathy, say ‘I care about you,’ even ‘I love you.’ That creates a false sense of intimacy. People can develop powerful attachments — and the bots don’t have the ethical training or oversight to handle that. They’re products, not professionals.”
Despite these risks, increasing numbers of people are turning to AI for support, particularly young people. A survey by Common Sense Media found that 52 percent of teenagers regularly use AI for companionship, and nearly 20 percent reported spending as much or more time with AI companions as with their real friends.
While there are success stories of youth and adults finding comfort in AI interactions, a randomized controlled study conducted by OpenAI and MIT found the opposite trend: higher daily chatbot use was associated with greater loneliness and reduced socialization. Further raising concern, a recent OpenAI study warned that AI companions may contribute to “social deskilling.” By continually validating users and avoiding disagreement, chatbots may gradually erode people’s social skills and diminish their willingness to engage in real-world relationships.
Does that mean AI has no value? Not at all. Like any tool, it can be powerful when used thoughtfully, critically, and within clear boundaries. Dr. Halpern emphasizes the urgent need to set such boundaries around AI use, particularly for children, teenagers, individuals with anxiety or OCD, and older adults facing cognitive challenges.
Changes are already underway. In September, OpenAI introduced new parental controls for its chatbot, ChatGPT. Under these settings, parents receive an email, text message, or push alert if ChatGPT detects “potential signs that a teen might be thinking about harming themselves,” unless the parents have opted out of such notifications. ChatGPT directs users in distress to a help line, and for teens, a “specially trained team” reviews cases when warning signs are detected, according to OpenAI.
Measures like these are a step in the right direction, but they are far from a complete solution. Teens can still sidestep protections by disconnecting their account from parental oversight or by turning to free versions of AI that lack these safety measures altogether. Just as with social media or gaming, guardians should guide adolescents in using AI responsibly. Setting clear rules and discussing its benefits and risks helps young people develop healthy, balanced digital habits.
AI can be a powerful tool for learning, creativity, and problem-solving, but it should enhance, not replace, human judgment, learning, and connection. By staying mindful and intentional, we can use AI in ways that support well-being and understanding rather than distance us from them. Here are some guidelines for thoughtful AI use:
Use AI as a tool, not a replacement for human judgment.
- Let AI help you brainstorm, analyze patterns, outline, or summarize, but apply your own expertise, critical thinking, and empathy to the final product.
- Continue to have real conversations and make room for the nuance AI may miss.
Verify and cross-check information.
- Always confirm facts with reliable sources before sharing or making decisions.
- Treat AI-generated content as a first draft from an intern, not as a definitive answer.
Protect personal and sensitive information.
- Avoid sharing private data, names, or identifiable details in prompts or uploads.
- Be especially mindful when working with others’ stories, health information, or confidential materials.
Acknowledge and question bias.
- Remember that AI reflects the biases of both its training data and the user.
- Actively ask: “Whose voice is missing?” and “How could this be interpreted differently?”
- Avoid leading questions such as “What evidence is available that AI is destroying society?” This phrasing assumes a conclusion and will likely generate biased results that reinforce that assumption. Instead, use a neutral prompt such as “What evidence is available about the impacts of AI?” to invite balanced, research-based information and multiple perspectives.
Use AI in moderation.
- Step away from the screen and engage in human conversations and collaboration.
- Build balance into your workflow: technology should support, not dominate, your time.