
Domain Expertise Isn’t Enough
I spent 3.5 years as a psychosocial rehabilitation counselor before becoming a software engineer. That combination of domain expertise, hands-on engineering experience, and time at multiple startups has taught me something crucial about building products: knowing the problem isn’t the same as knowing the solution. My healthcare background gave me deep insight into documentation pain points, but it took years of building software to understand why so many “obvious” solutions fail in practice.
The Documentation Grind
Take documentation. I wrote 20-54 individual patient notes daily, a crushing burden that kept me at my desk for hours before and after my last client left. The problem was crystal clear: too much paperwork, not enough time with patients. I became a counselor to connect with people and help them reach their goals, but I spent 70-80% of every day on documentation.
The $292 Million Bet on Ambient AI
Now there are dozens of AI companies tackling healthcare documentation. Many use a newly coined term, “ambient AI,” a clever way of saying they record conversations and auto-generate notes. To outsiders, this may seem perfect: record everything, let AI handle the soul-crushing documentation. Problem solved, right?
In 2024, venture funding for AI scribe companies surged to $292 million, more than triple the previous year’s total. Despite this investment, adoption rates among medical groups remain modest, with only 28-29% utilizing ambient AI for clinical note generation. And these numbers drop even lower in mental health settings, where privacy concerns are paramount. To many people who have actually worked in mental health, this approach is mortifying. I can’t imagine telling a client, “From now on, our sessions will be recorded beginning to end for note-taking purposes.” The thought alone makes me cringe.
Trust Can’t Be Automated
Client trust is everything in mental health. These people are sharing their deepest fears, trauma, and mental health struggles. They already worry about confidentiality: who sees their notes, who has access to their information, how it can be used. Now imagine explaining that their session will be recorded, transcribed by AI, and then (supposedly) deleted.
Even if the tech works perfectly, even if the recording truly gets deleted, you’ve fundamentally changed the therapeutic relationship. You’ve introduced a third party—the AI—into the most private of conversations. Beyond the privacy issues, ambient AI represents a fundamental misunderstanding of how to improve healthcare. These tools aim to replace core professional skills—observation, synthesis, and documentation—rather than enhance them.
The Insider’s Trap
AI startups are everywhere right now. Some are led by domain experts. Many are led by white men recently out of college with no real experience in the field they’re building for. With the right connections, they raise millions anyway. The theory seems to be: with enough runway, you’ll eventually stumble into product–market fit. In reality, capital without context is a dangerous thing.
If you’re not listening to users, not grounded in the nuances of the problem, not setting aside your ego long enough to be taught by the work itself, then you’re not iterating; you’re just slowly burning through your funding. Or worse, you mistake capital for progress and start scaling too early. You hire as if success were guaranteed, you launch, you promote, and you miss your goals. All before you’ve validated that you actually built something people want and will use.
And yet the inverse trap is just as risky. I’ve seen deeply qualified professionals assume that lived experience alone is enough to dictate what the solution should be. That instinct feels right, but it’s where so many well-meaning founders go wrong.
Here’s what I’ve learned: domain expertise tells you what problems exist. It doesn’t tell you what solutions will work.
Empowering vs. Replacing
I believe the best AI solutions empower people rather than replace them. What if, instead of listening in on sessions, we helped providers capture their insights more efficiently? What if we supported their clinical judgment rather than trying to bypass it? Here’s the trap domain experts fall into: we know our field’s problems intimately. We’ve lived them daily. But that deep knowledge can blind us to how solutions actually work in practice.
Learning to Listen
I used to assume that because I’d lived the problem, I already knew the solution. But real product work means shutting up and watching what people actually do. That’s how I avoid building what sounds good and start finding what actually works. The most dangerous phrase in product development isn’t “That’s how we’ve always done it.” It’s “Trust me, I know this industry.”
Here’s what I’ve learned helping founders navigate these challenges: Start with the problem you know, but design solutions with the users you don’t represent. Run pilots, get messy feedback, and set aside your ego to throw out ideas that seemed obvious but were not validated by users. Talking to users the right way isn’t optional here. When we ask vague questions or pitch too soon, we risk collecting compliments instead of evidence. I’ve learned to treat my own assumptions like any other founder’s: testable, discardable, not sacred. It’s not about guessing right; it’s about listening, testing, and earning the solution.
Check Your Bias
If you’re building healthcare AI, check your domain-expert bias. Your inside knowledge is your superpower for finding problems, but it can be your kryptonite for finding solutions. If you’re building a product and navigating these tensions between domain expertise and market reality, I’d love to help you think through your validation approach.