The synchrony gap: what the AI panic gets wrong about human work

why the people who’ll thrive in the age of AI aren’t the ones learning it fastest — they’re the ones who can still connect with other humans.

Feb 14, 2026

the short version: The AI adoption conversation is focused on the wrong problem. Matt Shumer says learn AI fast or get replaced. Ross Douthat asks why people still prefer humans. IMO they’re both missing it. After a decade of designing human-AI systems, I think AI adoption is blowing up in our faces because we’re trying to introduce complexity-amplifying technology into a workforce that’s already depleted — stripped of the embodied human connection (what neuroscience calls “interpersonal synchrony”) that’s required to build trust, navigate ambiguity, and exercise professional judgment. The prescription of “spend more time with AI” is the disease prescribing itself as the cure. We don’t need more training decks. We need more human infrastructure.

This week, tech founder Matt Shumer published “Something Big Is Happening,” comparing AI’s current moment to February 2020 — that strange before-time when a few people saw Covid coming and most of us didn’t. His essay immediately went viral. Ross Douthat responded in the New York Times (and remarkably, for once I didn’t violently disagree with him!), acknowledging the disruption but raising a question Shumer never addressed: if AI can do the work, why do people still overwhelmingly prefer humans?

I started working with AI over ten years ago, almost accidentally, as a product designer at IBM. I was designing tools with some of the early Watson APIs, which eventually led to building trust calibration frameworks for enterprise AI as a consultant. I’ve had a front-row seat to massive AI adoption failure. I’m also a giant nerd who reads about neuroscience for fun. And I have a different answer than either of those guys, one that has surprisingly little to do with technology.

the 80/30 problem

Recently, I came in as a consultant to help fix an enterprise AI system that was struggling to get its intended user base to actually use it. When I dug in and started listening to the people who were supposed to be benefiting from this tool, I discovered what I came to call the 80/30 trust gap. The pilot had roughly 80% technical accuracy — solid, genuinely useful. But the people testing it in their actual professional workflows perceived its accuracy at around 30%. They were skipping the AI’s answers entirely and doing manual verification on every single query.

The diagnosis from leadership was not shocking: users just don’t understand the technology. The prescribed fix was equally predictable: more training (a ten-page prompting manual!), more demos, more evangelism. Matt Shumer’s advice at enterprise scale.

As any human-centered designer worth their weight in post-its could tell you, their diagnosis was just wrong.

When I actually sat with users — not surveyed them, sat with them — what I found wasn’t confusion, and it had nothing to do with AI anxiety. These were skilled professionals whose expertise was built over years. And they were already using Gemini daily. They could read a document and feel when something was off before they could articulate why. They had what designer James Harrison recently called “intuition” — that hard-to-teach quality that separates adequate from exceptional.

And we were asking them to trust a system that didn’t acknowledge and definitely couldn’t participate in any of the ways humans actually build trust.

As a consultant, I could diagnose the disease and write the prescription, but I couldn’t force the patient to take the medicine. That org wasn’t ready to hear that the problem wasn’t technical. It was human.

the science almost nobody in AI is talking about

A couple of years back I read Kate Murphy’s book You’re Not Listening and devoured it, not only as an armchair pop psychologist but as a conversation designer. So when I found out my fellow Houstonian had a book about human connection coming out, I jumped on the pre-order immediately. Why We Click explores what she calls interpersonal synchrony — the scientifically documented tendency of human beings to literally fall into physiological rhythm with each other.

This isn’t metaphorical. When people interact in person, they unconsciously sync heart rates, blood pressure, brainwaves, pupil dilation, even hormonal activity. Murphy argues this unconscious syncing is among the most consequential and least discussed drivers of attraction, trust, and belonging. It’s the mechanism underneath what we casually call “reading a room” or “trusting your gut.” As she put it in a recent interview: “It’s a whole-body, in-the-moment experience… and cannot be achieved in its truest, most exquisite form in the digital realm.”

Here’s where it gets relevant to the AI conversation: this synchrony is how we actually calibrate trust. Not through accuracy metrics or documentation or training sessions. Through embodied, physiological co-regulation with other humans. Your meatbody decides whether to trust before your brain gets involved.

Think about how trust works in a surgical team, or with a doctor explaining a diagnosis. The information matters, but the channel through which it’s delivered matters just as much. Your nervous system is reading the other person’s confidence, their micro-expressions, the rhythm of their breathing, while your conscious mind processes their words. AI is trying to plug into exactly these kinds of expert workflows — minus that whole channel.

An expert’s gut feeling about whether info is reliable isn’t mysticism. It’s their nervous system running pattern-matching on thousands of previous embodied experiences. When we strip that channel away, we don’t just lose warm feelings. We lose the primary mechanism through which professionals do their best work.

the compounding crisis

So this is where you have to hold two things simultaneously, because the AI conversation isn’t doing that yet.

Thing one: AI is genuinely getting more capable. Shumer isn’t wrong about that. I’m not an AI denier and I’m not interested in pretending the technology doesn’t work or is inherently evil.

Thing two: The conditions under which organizations are trying to adopt AI are actively destroying their capacity to do so.

We’ve spent the post-Covid years moving knowledge work into environments that systematically strip away synchrony channels. (I’m talking about Zoom/Teams/Slack/Meet here.) Gone are the hallway convos where you could read your colleague’s body language in real time. Gone is sitting next to someone at the whiteboard while your nervous systems quietly sync up around solving a gnarly problem together. Instead, we’ve been interacting with each other on a grid of 2D rectangles, and shooting the shit to connect as actual people during work meetings is treated as waste.

I want to be clear here: this is not an argument against remote work. Plenty of toxic in-person cultures deplete people faster than any home office. And some remote teams can build genuine synchrony through stable relationships, intentional rituals, and periodic in-person time. This is an argument against synchrony-starved work — whatever form it takes.

Working fully remote since 2018, I’ve felt my own collaborative capacity shift, not because I’ve gotten worse at my job, but because the channels through which I do my best work have been drastically narrowed. I have found it virtually impossible to sync with teammates I’ve never shared the same air with. I can’t read the room when the room is a grid of thumbnails. Murphy nails the stakes of this: “The degree and duration with which we sync determines the viability and stability of our relationships — or whether a relationship develops at all.”

Now, right into this depleted landscape, we’re introducing AI — this incredibly powerful technology that amplifies complexity, requires comfort with ambiguity, and demands a kind of trust that can only be built through exactly the embodied human exchange and nervous system regulation we’ve been taking away.

And then when people hate it and refuse to use it, we blame them.

the disease prescribing itself as the cure

Shumer thinks the solution to this is that we all spend an hour a day with AI. Get really good at prompting. Adapt or get replaced.

To his credit, he’s advocating for curiosity and experimentation, which are genuinely useful orientations here. But notice what the prescription actually looks like in practice: replace even more of your human interaction time with screen-mediated, non-embodied, solitary exchange! This is part of a broader cultural pattern we’re all seeing and feeling: the relentless optimization of individual productivity at the expense of collective capacity.

For someone already well-regulated — someone with enough human connection, enough embodied grounding — solo experimentation with AI can be genuinely empowering. That’s the Shumer experience. He’s regulated enough to play.

But for the vast majority of knowledge workers right now? They’re already running a synchrony deficit. And telling depleted people to spend more time alone with a machine is prescribing the disease as the cure.

The people I’ve watched genuinely thrive with AI share a common trait, and it’s not that they’re really awesome at prompting or that they’re early adopters. They have capacity. The capacity to stay grounded while navigating complexity. To hold ambiguity without collapsing into binary thinking. To sense when human judgment is required versus when AI is appropriate.

I’ve noticed that very often, the people most resistant to AI in my network aren’t the least intelligent. They’re the least regulated in that moment. Their responses are reactive and fear-based — not because they’re Luddites (or dumb), but because their nervous systems are already maxed out. They’re freaking out over finding a job or hanging onto the one they have. Their livelihoods are at risk. They’re watching their country slide into actual authoritarianism. They’re worried about their neighbors getting abducted by the federal government, or about their kid getting red-pilled or falling in love with an “AI companion.” They don’t have the bandwidth for one more complex, ambiguous, trust-requiring thing.

You cannot prompt-engineer your way out of depletion.

what Douthat allllmost got right

Douthat noted that people prefer human piano players over player pianos, human waiters over automated restaurants, human doctors over diagnostic AI. He framed this as a question about AI’s “inhumanity” and whether increasingly human-like AI might eventually overcome the preference.

I think he’s close, but he has it backwards.

People don’t prefer humans out of sentiment. They prefer them because their bodies are actually doing something during human interaction that they can’t do with machines. When you talk with your doctor, your nervous systems are in dialogue — which shapes your willingness to disclose, your trust in the diagnosis, your likelihood of following the treatment. That’s not a UX problem to be solved with better chatbot personalities. It’s a fundamental feature of human cognition in high-stakes contexts.

The question isn’t whether AI will simulate humanity convincingly enough to overcome this. The question is whether we’re willing to build AI systems that account for it.

intuition is infrastructure

James Harrison wrote that smart piece recently about the return of the “intuitive designer” in the age of AI. As AI automates the process-oriented, de-skilled parts of work, the irreducibly human quality — intuition — becomes more valuable, not less. He cites Daniel Kahneman’s description of intuition: thinking that you know without knowing why.

I want to push this further. Intuition isn’t a personality trait. It’s a nervous system capacity, the product of thousands of hours of embodied experience, stored in your body as patterns your conscious mind accesses as “gut feeling.” When Harrison talks about intuition separating good from great, he’s talking about nervous system capacity. When Murphy describes interpersonal synchrony, she’s describing the mechanism through which that capacity gets built and maintained. And when Shumer tells everyone, depleted as they may be, to spend more time alone with AI, I think he’s inadvertently prescribing its erosion.

If I could go back to that 80/30 system, this is what I’d build around it: high-synchrony practice environments where people build and study embodied intuition about when to trust AI and when to override it, through shared experience rather than shoving prompting manuals or training videos at people. Observation sessions where knowledge workers narrate their gut checks while working with AI. Team drills around edge cases where people deliberately practice override decisions together. Not more training decks. More human infrastructure.

what comes next

AI capability is accelerating just as human synchrony is eroding. And with it, the nervous system capacity to navigate complexity. That’s the real story of this moment, and almost nobody is telling it.

Shumer is right that something big is happening. He’s off about what it is.

The big thing isn’t that AI can do your job. The big thing is that we’re about to find out who has the human infrastructure to integrate AI wisely — on an individual and an organizational level — and who has spent the last few years systematically dismantling it in the name of efficiency.

The people who will thrive aren’t the best prompt engineers. They’re the people who maintain enough human connection, enough embodied awareness, enough nervous system capacity to stay wise while everything accelerates. For sure, get hands-on with AI — but if you do it while starving your nervous system of human synchrony, you’re building on sand.

Christy Carroll is a conversational UX designer and AI strategist with 10+ years in technology, including foundational human-AI collaboration work at IBM Watson (and 30+ years if you count her early BASIC coding experiments and Print Shop creations on her family's Apple ][e). She specializes in trust calibration — designing AI systems that earn appropriate trust by respecting how humans actually connect and decide.

Enterprise AI adoption through
strategic clarity, not magical thinking.


© 2026 Christy Carroll

