Artificial intelligence is everywhere—redefining workflows, reducing administrative burdens, and powering efficiencies in healthcare at a staggering pace. For health plans, the promise of AI is particularly compelling: faster intake, better data management, predictive modeling, automated outreach.
But when the challenge at hand is housing instability, we have to ask a more grounded question: What do members actually need?
Because while AI can sort, triage, and streamline, it can’t replicate what so many members in crisis truly respond to: empathy, trust, and lived experience.
Across the industry, there's a race to embed AI into every touchpoint of the member experience. Health plan leaders are being asked to do more with less. AI promises solutions at scale, but without the right guardrails it risks reducing sensitive, human experiences to transactions.
When a member is facing eviction, couch-surfing with two children, or navigating shelter referrals, their needs extend far beyond what any chatbot or algorithm can parse.
Take one real-world example: a woman quietly navigating cultural pressures as a first-generation immigrant, hesitant to share her full story with her care guide. It wasn’t until that care guide, who also came from an immigrant family, shared a personal detail that the floodgates opened. From there, the support could be matched to the full scope of her situation.
AI can’t improvise that. It can’t connect lived experience to lived experience. And in housing-related care, that type of cultural fluency makes the difference between surface-level help and real stability.
Let's be clear: AI has real potential to support the housing navigation process. Used wisely, it can sort, triage, and streamline the work that surrounds member care.
But when AI crosses the line from supporting to replacing the human experience, the results can be counterproductive or even dangerous.
A care guide shared the story of a member struggling with schizophrenia, terrified of surveillance and wary of phones. The presence of AI, or even a smartphone, escalated fear. In that moment, no chatbot could help.
Members in crisis don't need automation. They need emotional regulation, human grounding, and cultural resonance. And those only come from people.
Members experiencing housing instability often come from backgrounds of systemic disadvantage. Many haven’t had access to technology or even the chance to develop basic digital literacy.
Some don’t know how to use a smartphone. Others have never touched a laptop. And more than a few have thrown their phones across the room when met with a chatbot.
Efficiency is important, but context matters more. When someone is in survival mode, facing eviction, untreated mental illness, or past system failures, a machine-led interface can feel cold, alienating, or even threatening.
In these scenarios, the wrong tool, even if well-intentioned, doesn’t just fall short. It actively drives members away.
The future isn't a choice between AI and humans. It's knowing when and where each is most effective.
Health plans are uniquely positioned to design systems that reflect that nuance: letting AI handle intake, data management, and outreach at scale, while reserving human care guides for the moments that demand empathy, trust, and lived experience.
Before integrating AI into sensitive care journeys, ask whether the tool is supporting the human relationship or replacing it.
There's no doubt that AI has a meaningful role to play. But in the fight for housing stability, empathy isn't a bonus feature; it's the core deliverable. Technology should lift the human work, not replace it.
Members deserve a system that meets the full complexity of their lives. Housing instability isn’t solved by faster forms or better dashboards alone. It requires continuity, emotional intelligence, and grounded presence.
Technology can help us get there, but only if it's built with human experience at the center. Because in this space, the wrong automation doesn't just frustrate. It fractures safety. And that damage is difficult to repair.
Health plans have the resources to design systems that reflect these realities. When they choose to invest in both infrastructure and human capacity, they build trust, drive better outcomes, and meet the real moment members are living in.