The Gap Between Domains
Written on February 2, 2026
Last week, I had coffee with a PhD student working on Human-Computer Interaction. Her research couldn’t be more different from mine.
She studies how real users interact with AI systems. Her team had hypotheses. They ran a user study. Most hypotheses were rejected. That’s science working as intended.
What struck me was her research method: recruiting real users, facing countless rejections, persisting until they had enough participants. Each data point required human cooperation.
Two Worlds of Research
My research world looks nothing like hers. I work with models, datasets, benchmarks. I can run experiments at 3 AM without asking anyone’s permission (though honestly, I’m usually asleep by 11 PM—unless a deadline is looming). I optimize metrics that other researchers have agreed to care about.
Her world requires human cooperation at every step. Each data point is a person who chose to participate. Each finding reflects actual human behavior, not predicted behavior.
Neither approach is superior. But the two communities rarely talk to each other.
The Luo Yonghao Insight
This reminds me of something Luo Yonghao discussed in his podcast with leaders from emerging car companies. He observed that their conversations were compelling precisely because they didn’t get stuck debating technical definitions of autonomous driving. Instead, they talked about the experience of being in these cars.
Most technical discussions go nowhere because experts argue over definitions. What’s “Level 4 autonomy”? What counts as “fully autonomous”? These debates matter to engineers but bore everyone else.
Regular people care about different questions: Can I nap during my commute? Will the car stop for pedestrians? How does it feel to ride in one?
The Trap of Expertise
Naval Ravikant has a useful framing here. He talks about how specific knowledge—the kind that makes you an expert—can also become a prison. You start seeing every problem through your specialized lens.
As researchers, we’re trained to go deep. Depth is rewarded: publications, citations, tenure. But depth without breadth creates blind spots. You become the person who, as the saying goes, sees every problem as a nail because you only have a hammer.
The HCI researcher didn’t dismiss my work as “not rigorous enough” because it doesn’t involve real users. I didn’t dismiss hers as “not scalable.” We were genuinely curious about each other’s methods. That curiosity is rare.
Jumping Out
Paul Graham writes about “schlep blindness”—the tendency to unconsciously filter out problems that seem tedious to solve. I think there’s a parallel: “domain blindness.” We filter out approaches that don’t fit our training.
The HCI researcher’s willingness to face rejection after rejection—that’s a skill I don’t have. My ability to run thousands of experiments programmatically—that’s a skill she doesn’t have. Neither of us is incomplete. But together, we might see more of the elephant.
What Matters to More People
Here’s my takeaway: the most important problems don’t belong to any single domain.
AI safety? Needs technical research, but also psychology, policy, philosophy, and real user studies.
Climate change? Needs engineering, economics, political science, and behavioral research.
Healthcare? The list goes on.
If you only talk to people in your domain, you’ll only solve the parts of the problem your domain can see. That might be enough for publications. It’s rarely enough for impact.
The Practice
I’m trying to build a habit: once a month, have a real conversation with someone whose research method I don’t understand. Not to collaborate (though that might happen). Just to learn how they see the world.
Most of my best ideas have come from cross-domain collisions. The worst thing a PhD student can do is spend five years talking only to people who think like them.
Who outside your domain has changed how you think? I’d love to hear at persdre@gmail.com.