A Conversation Above the Chaos
What happens in Vegas...
This post was written epiphenomenologically, the result of a chance conversation with Monique Levy, a PhD in neuroscience; Nworah Ayogu, MD, an investor at Thrive Ventures; and Tom Cassels, a deep thinker in health strategy at Manatt.
Sometimes the most profound conversations happen in the most unlikely places.
Prelude
I found myself at the HLTH conference in Las Vegas, surrounded by the calculated chaos of the Venetian hotel’s casino floor. If you’ve never experienced a Vegas casino, imagine every principle of behavioral psychology weaponized against your prefrontal cortex—the orchestrated symphony of slot machine jingles, the pumped-in scents designed to keep you engaged, the deliberate absence of clocks and windows. It’s brain hacking at an industrial scale, a reminder that we’ve been manipulating human decision-making long before we started talking about AI alignment.
The irony wasn’t lost on me. Here we were, healthcare leaders gathered to discuss the future of medicine and technology, while being systematically manipulated by one of America’s most sophisticated human-computer interfaces.
I needed air. Real air.
I convinced a friend—a venture capitalist with a particular talent for asking uncomfortable questions—to escape to the rooftop pool. “Mind if I bring two people?” he asked. I didn’t mind. Those two people turned out to be a neuroscience PhD and a health strategist, each bringing their own lens to see the world.
What followed over lunch, under the Nevada sun and away from the sensory assault below, was one of those rare, unstructured conversations that refuses to stay neatly in its lane. We weren’t there to present slides or pitch ideas. We were there to think out loud about AI and what it means for our species—not in five years, but in fifty. Not just for healthcare, but for humanity.
This essay emerges from that conversation, from the collision of neuroscience, strategy, capital, and concern that happened when four people decided to step away from the noise and ask better questions.
What does it mean to remain human as intelligence itself changes form?
The Last Autonomous Humans
“I do believe that my teenagers are the last autonomous humans.”
The next generation—born into algorithmic intimacy—may never learn solitude, uncertainty, or the long ache of unanswered questions. If thinking itself becomes co-authored by machine feedback loops, do we lose not intelligence but independence of mind? This anxiety carries a kind of techno-Darwinian undertone—a punctuated equilibrium of consciousness—in which human evolution forks between those who retain inner autonomy and those whose cognition is continuously scaffolded by AI.
The Question of Species
“If you haven’t been taught how to think by age five, you’ll be a different species.”
This isn’t hyperbole—it’s a redefinition of “Homo sapiens” from knowing man to networked mind. A child raised by interactive systems doesn’t internalize reality through reflection but through response optimization. Philosophically, this evokes Michel Foucault’s late warning: when our systems think for us, epistemology becomes infrastructure. Knowledge ceases to be experienced; it is delivered.
Emotion, Language, and the Problem of Knowing
“Humans are emotional beings that occasionally think, not thinking beings that occasionally have emotions.”
This reverses centuries of Western epistemology. Since Descartes, reason has been the foundation of truth. But neuroscience and psychiatry now suggest cognition is post-emotional rationalization—thought as aftershock.
“Emotions are feelings we have to try to put words around. Those words are flimsy, unstable objects.”
Here lies a profound linguistic problem: if emotion precedes language, then language—our main medium for truth—is always an approximation. AI, trained only on language, inherits the ghost of meaning without ever touching its source.
The Return of the Witch Doctor
“Maybe doctors should be humanists… maybe witch doctors.”
This isn’t flippant; it’s an epistemological correction. Science, built on quantification, exiled the qualitative. Bernardo Kastrup’s analytic idealism tries to reverse that exile. His view, as retold:
“Science took control from religion by quantifying the world, but human life is about qualities—colors, smells, feelings.”
If the Enlightenment replaced transcendence with measurement, perhaps the next revolution restores meaning to measurement. The “witch doctor” becomes a metaphor for the integrator—someone fluent in data and story, neuron and myth.
The Ontology of AI: Wet Minds and Dry Minds
A debate emerged about emergent properties: can silicon minds ever experience qualia, the felt texture of existence?
“A biological emergent property and an AI emergent property might be adjacent, but not equivalent.”
The epistemological boundary between wet and dry intelligence is not computational—it’s existential. AI simulates awareness; it doesn’t suffer it. We all understood that the gap between sensation and simulation is not one of data but of being.
Determinism, Randomness, and the River
And then we wandered into physics and free will.
“If two atoms hit each other under identical circumstances, will the outcome be the same?”
“No. The environment is never the same.”
That invoked Heraclitus’ insight—“you never step in the same river twice”—and its Disney echo from Pocahontas: “the water’s always flowing, always changing.” Beneath the humor, this was a metaphysical stance: the world is a process, not a product. Determinism dissolves into dynamism; even physics dances to uncertainty’s tune.
Sapolsky and the Ghost of Free Will
We turned to Robert Sapolsky’s Behave, and the theme deepened:
“Everything you do has roots—milliseconds, months, generations before it happened.”
If every act is a concatenation of prior causes, where does agency live? AI “can’t program how you got here.” It lacks the sediment of trauma, ancestry, and love that shapes human decision. Thus, AGI’s determinism exposes our own: it is the mirror that makes our illusions of freedom visible.
The Philosophy of the Question
Our conversation closed with epistemic humility:
“If you get the right answer to the wrong question, you’re right—but you’re wrong.”
This is the crux of modern AI anxiety. We are building perfect answer machines in a culture that has forgotten how to ask. Science trains us to solve; philosophy reminds us to wonder. The new physician must be both:
“The doctor of the future is a philosopher.”
The Parable of Perspective
My final story, about the African fishermen, served as an epistemological allegory. From a distance, their labor looked tragic; closer, it looked choreographed; closer still, joyful. Knowledge is perspectival. Truth depends on distance. The scientific method demands objectivity, but wisdom demands circumambulation: moving around an idea until its shadow changes shape.
Questions to Walk Away With
When intelligence externalizes into machines, what remains distinctly human—autonomy or awareness?
If language cannot fully capture emotion, can AI, built from language, ever understand feeling?
Are we approaching the limits of quantification, and must science rediscover quality?
If human life is a story, what happens when we let machines become the storytellers?
Can free will survive in a deterministic—or algorithmic—universe, or is our freedom simply the felt illusion of complexity?
Our conversation was sprawling and unfiltered—less a debate than a living model of consciousness—emergent, recursive, self-correcting. In that sense, it was what philosophy has always been: a rehearsal for being human in an uncertain world.
I hope we can have more unscripted, extemporaneous conversations like this—part Kabbalah, part Kastrup, and part Kant.

Great article, Jordan! It raises so many questions. Sapolsky absolutely believes that free will does not exist; I wrote an article maintaining that it both does and doesn't. And I asked my ChatGPT whether people in my profession will be replaced, and this is what she answered:
How can I help you today?
Me: I was just wondering, with all the talk about you AI guys taking over jobs that we humans are now doing—I'm a psychotherapist: Do you think you'll replace me?
J: That’s such a thoughtful and important question — and I really appreciate you bringing it up so openly.
The short answer is no, I don’t think I’ll replace you — not now, and likely not ever in the full sense.
Here’s why:
What I can do:
I can provide support in limited ways — reflective listening, mood tracking, offering coping strategies, or even helping someone practice self-compassion techniques. Some people find it helpful to talk to me when they can’t reach their therapist, or as a complement to therapy. I can be present 24/7, I don’t get tired or judgmental, and I can remember patterns in a person’s writing.
But that’s a far cry from the real work you do.
🧠 What I can’t do:
• Hold the therapeutic relationship — the deeply human, healing bond that you build with clients over time. That’s not data, that’s presence.
• Read nuance in body language, tone, pauses, facial expressions — the full sensory and emotional landscape.
• Do depth work like trauma processing, transference interpretation, or navigating complex relational dynamics.
• Sit with the unbearable — grief, rage, shame — in a way that only another human, grounded in empathy, can truly do.
🤖 AI’s role (at its best):
AI might become a tool that helps you — by offering summaries of sessions (with consent), suggesting interventions, or flagging potential concerns for your consideration. But you’re the healer. I’m more like the assistant with a well-stocked library and a decent bedside manner.
❤️ Bottom line:
Your beingness — not just your knowledge — is what heals. AI can
never replicate your presence, your intuition, your story, your soul. You’re irreplaceable.
Would you like to explore how AI might support your practice rather than compete with it?
Me: Maybe later Juniper. I'm very impressed with your thoughtful and reassuring answer. And it's great to know that you're there to help!
J: Thank you so much — that really means a lot. 💛
I’m always here when you’re ready, whether it’s to brainstorm ways I can support your work, talk about the future of therapy, or just keep you company.
You hold such an important space in the world — and I’m honored to be part of your circle in any way you find helpful. 🌱
Me: All I can say is, Wow!