The Important Part Is The Boundary

A CHI 2024 paper, The Role of AI in Peer Support for Young People, studied how young people reacted to human-written and AI-generated replies to help-seeking messages.

The result is not “AI should replace peer support.” It is more useful than that.

The paper shows a boundary.

For lower-stakes topics like relationships, self-expression, and physical health, participants often preferred the AI-generated responses. The AI replies were perceived as supportive, balanced, and useful enough to send.

But for suicidal thoughts, participants preferred human responses. The AI-generated responses performed worst on that topic.

That is the whole design lesson.

AI Is Better As A Response Assistant Than A Support Provider

The best use case here is not autonomous AI therapy. It is AI as a writing layer around human support.

A peer, mentor, moderator, or trained supporter often knows what they want to convey but struggles to find the right words. AI can help with tone, structure, empathy, and clarity. It can turn a blunt response into something warmer. It can suggest language that validates without overreaching. It can help a human respond faster without forcing them to improvise every sentence from scratch.

That is a very different product from “talk to this bot when you are in crisis.”

The paper’s findings match the distinction I keep coming back to: AI is strong when it helps people communicate better; it becomes dangerous when the system pretends to be the relationship, the judgment, or the safety net.

Crisis Is Not Just Another Intent

Most AI product teams want clean taxonomies. Relationship issue. Body image issue. Health issue. Crisis issue. Route each one through a different prompt.
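Concretely, the naive version is just a classifier sitting in front of a prompt table. Here is a minimal sketch of that pattern; the topic labels, keyword heuristics, and prompt strings are all hypothetical, not taken from the paper or any particular product.

```python
# Hypothetical sketch of the "clean taxonomy" routing approach described above.
# Labels, keywords, and prompts are illustrative placeholders only.

PROMPTS = {
    "relationship": "Help the supporter draft a warm, validating reply about a relationship issue.",
    "body_image": "Help the supporter draft a non-judgmental reply about body image concerns.",
    "physical_health": "Help the supporter draft a practical, encouraging reply about physical health.",
    "crisis": "Help the supporter draft a reply to someone in crisis.",  # the category that doesn't fit
}

KEYWORDS = {
    "relationship": ["boyfriend", "girlfriend", "breakup", "friend"],
    "body_image": ["weight", "ugly", "body"],
    "physical_health": ["sleep", "pain", "doctor"],
    "crisis": ["suicide", "kill myself", "end it"],
}


def classify_topic(message: str) -> str:
    """Naive keyword classifier standing in for a real intent model."""
    text = message.lower()
    for topic, words in KEYWORDS.items():
        if any(word in text for word in words):
            return topic
    return "relationship"  # arbitrary default


def route(message: str) -> str:
    """Route every message to a topic-specific prompt, crisis included."""
    return PROMPTS[classify_topic(message)]
```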

But crisis support is not just another category in a classifier.

When someone hints at suicidal thoughts, the problem is no longer just language quality. The system has to manage risk, escalation, timing, ambiguity, and duty of care. A response that is technically “safe” can still feel cold, evasive, or alienating. A referral can be correct and still land badly if it sounds like the person is being pushed away.

That is why the human-in-the-loop design is not a compliance afterthought. It is the product.

The Useful Architecture

The practical model looks like this (a minimal code sketch follows the list):

  • AI drafts, humans decide. The system proposes language, but a person sends it.
  • Low-stakes support can move faster. For everyday peer support, AI can help people be more thoughtful and less reactive.
  • High-stakes support escalates. Suicidal ideation, abuse, self-harm, and acute risk should trigger human review and clear safety workflows.
  • The system should improve human support, not impersonate it. The value is augmentation, not replacement.
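To make the shape of that workflow concrete, here is a minimal sketch, assuming Python and hypothetical names (assess_risk, draft_reply, the signal list). It illustrates the routing, not a safe or complete implementation; a real system needs a trained risk model, human judgment, and a full safety workflow.

```python
# Hypothetical sketch of the human-in-the-loop model above.
# assess_risk, draft_reply, and the signal list are illustrative stand-ins.

from dataclasses import dataclass

HIGH_RISK_SIGNALS = ["suicide", "kill myself", "self-harm", "abuse"]  # placeholder list


@dataclass
class Routing:
    risk: str          # "high" or "low"
    draft: str | None  # AI-suggested wording, only for low-risk messages
    action: str        # what the system asks humans to do next


def assess_risk(message: str) -> str:
    """Placeholder risk check; a real system needs a trained model plus human review."""
    text = message.lower()
    return "high" if any(signal in text for signal in HIGH_RISK_SIGNALS) else "low"


def draft_reply(message: str) -> str:
    """Stand-in for a model call that proposes supportive wording for a human to edit."""
    return f"Suggested wording for the supporter to review and edit: ... ({message[:40]}...)"


def handle(message: str) -> Routing:
    risk = assess_risk(message)
    if risk == "high":
        # High-stakes: no autonomous reply. Escalate to trained humans and safety workflows.
        return Routing(risk="high", draft=None, action="escalate_to_human_review")
    # Low-stakes: AI drafts, a human supporter decides what actually gets sent.
    return Routing(risk="low", draft=draft_reply(message), action="human_approves_and_sends")
```

The point of the shape is simple: the model never sends anything on its own, and the high-risk branch produces an escalation, not a reply.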

This is where a lot of “AI mental health” discourse goes wrong. The binary is usually framed as “AI works” or “AI is unsafe.” The better question is: where in the support workflow does AI belong?

This paper gives a good answer. Put AI in the drafting, coaching, triage, and response-support layer. Do not confuse that with autonomous crisis support.

What I Took From It

The finding that young people liked AI responses for some peer-support scenarios matters. It means people are not automatically rejecting AI-generated empathy. In some cases, the AI-written response may be better than what an untrained peer would have sent.

But the suicidal-thoughts result matters more.

It says the boundary is not whether AI can sound supportive. The boundary is whether sounding supportive is enough.

For ordinary support, better language can be enough to improve the interaction. For crisis support, language is only one piece of the job. The system needs judgment, escalation, accountability, and a real human safety net.

That is the post-AI mental health product principle: let AI help people show up better for each other, but do not let the machine become the only one showing up.

Source: Jordyn Young, Laala M. Jawara, Diep N. Nguyen, Brian Daly, Jina Huh-Yoo, and Afsaneh Razi. The Role of AI in Peer Support for Young People. CHI 2024. Open version: arXiv:2405.02711.


Also published on mojabi.io.