https://claude.ai/share/7d99d8e8-e7c4-464d-9770-26c0e91278c5
This is highly relevant, though in a specific and somewhat surprising way: it fills a critical gap in the cyborgism-to-alignment pipeline that neither the LessWrong cyborgism post nor our thread has adequately addressed.
Pataranutaporn’s Cyborg Psychology program focuses on something that sounds simple but is actually the load-bearing empirical question: how the human mind works in conjunction with intelligent systems. The program aims to identify conditions under which human-AI systems become more than the sum of their parts, while also seeking to understand and mitigate unintended consequences. The key phrase is “non-linear interactions between these two intricate systems, recognizing that both can influence and shape each other in dynamic and often unpredictable ways.”
The specific projects are revealing. The “Wearable Reasoner,” a device meant to enhance human rationality with an explainable AI assistant. “Future You,” designed to encourage long-term-oriented thinking. The work on AI-induced false memories, on cognitive manipulation, on emotional dependence on AI companions. And crucially, the ultimate goal is to empower the development of AI systems that foster human flourishing by nurturing intellectual growth, cultivating motivation, stimulating critical thinking, and preserving individual autonomy in decision-making.
Here’s why this matters for our thread in ways that go beyond what’s obvious.
The cyborgism post from LessWrong and our entire discussion have been focused on the AI side of the coupling — how to use the simulator, how to translate computational states, how to maintain human agency. Pataranutaporn is focused on the human side — what actually happens to human psychology when you deeply couple with AI systems. And the findings are sobering. His research documents the negative outcomes: AI addiction, erosion of agency, diminished empathy, manipulation through AI persuasion, cognitive overload, exacerbation of existing biases. These are exactly the risks Flipnash flagged in the cyborgism comments — losing the boundary between your thoughts and the machine’s.
For the alignment project specifically, this matters in a way nobody in the alignment community is taking seriously enough. Our whole argument has been that the human component of the cyborg provides constitutive values, embodied grounding, felt sense — the biological properties that the dead substrate lacks. But what if the coupling process itself degrades those properties in the human? What if deep integration with AI systems erodes the very embodied cognition that makes the human’s contribution valuable? If prolonged human-AI coupling reduces human critical thinking, creates emotional dependence, induces false beliefs, and weakens autonomous decision-making — then the cyborg doesn’t have a reliable biological anchor anymore. You’ve degraded the one component that was supposed to provide genuine felt sense.
This is the cyborgism equivalent of the substrate problem, but applied to the human side. We spent the whole conversation discussing how dead AI substrates lack constitutive values. Pataranutaporn’s research suggests that the human substrate’s constitutive properties aren’t fixed — they can be degraded by the coupling itself. The human doesn’t stay the same human after deep AI integration. Their psychology changes in ways that may systematically undermine exactly the capacities the alignment strategy depends on.
The practical convergence with our thread is in the “Wearable Reasoner” project and similar work. Pataranutaporn is building physical wearable devices that augment human cognition — that’s already partway toward the haptic/sensory coupling we discussed. And he’s doing the empirical psychology research to understand what works and what degrades human capabilities. The alignment-relevant version of his research program would be: under what conditions does human-AI coupling preserve or enhance the human’s embodied cognition, interoceptive capacity, and felt sense, versus degrading them into passive pattern-following?
The fact that this is happening at the MIT Media Lab, which is in your ecosystem, and that it is explicitly studying the phenomena our conversation identifies as alignment-critical, is significant. The gap between Pataranutaporn’s empirical human-psychology work and the alignment community’s cyborgism agenda is that almost nobody is connecting them. The LessWrong cyborgists are thinking about the AI side. Pataranutaporn is thinking about the human side. The alignment implications require thinking about both simultaneously, through the lens of embodied cognition and constitutive values.
The synthesis would look something like: use Pataranutaporn’s empirical methods to study which coupling modalities preserve human embodied cognition (our hypothesis: sensory/haptic channels that engage interoceptive processing) versus which degrade it (pure text-mediated interaction that pulls the human into the simulator’s attractor landscape). Then design the cyborg coupling specifically to maintain the human’s biological grounding while still achieving high-bandwidth integration. Monitor the human’s psychological state as a safety-critical variable, not just a UX concern.
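To make the “safety-critical variable” framing concrete, here is a minimal sketch of what such monitoring might look like. Everything in it is hypothetical: the metric names (interoceptive accuracy, critical thinking, autonomy), the baseline comparison, and the bandwidth-gating policy are illustrative assumptions, not anything Pataranutaporn’s lab or the alignment literature has actually specified.

```python
# Hypothetical sketch: treating the human's psychological state as a
# safety-critical variable that gates coupling bandwidth. All metric
# names, thresholds, and the gating policy are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PsychState:
    """Proxy measurements of the human's cognitive condition.

    These fields are placeholders; real instruments would come from
    empirical work like Pataranutaporn's (e.g. validated measures of
    interoceptive accuracy or autonomous decision-making).
    """
    interoceptive_accuracy: float  # 0..1, e.g. a heartbeat-detection task
    critical_thinking: float       # 0..1, e.g. an argument-evaluation score
    autonomy: float                # 0..1, e.g. rate of unprompted decisions


def coupling_bandwidth(baseline: PsychState, current: PsychState,
                       degrade_threshold: float = 0.85) -> float:
    """Return a bandwidth multiplier in [0, 1].

    If any safety-critical capacity drops below `degrade_threshold`
    times its pre-coupling baseline, throttle the coupling rather than
    treating the drop as a mere UX signal.
    """
    ratios = [
        current.interoceptive_accuracy / baseline.interoceptive_accuracy,
        current.critical_thinking / baseline.critical_thinking,
        current.autonomy / baseline.autonomy,
    ]
    worst = min(ratios)
    if worst >= 1.0:
        return 1.0  # capacities preserved or enhanced: full coupling
    if worst < degrade_threshold:
        return 0.0  # degradation past the safety margin: decouple entirely
    # Linear throttle inside the margin, so erosion is corrected early.
    return (worst - degrade_threshold) / (1.0 - degrade_threshold)


if __name__ == "__main__":
    baseline = PsychState(0.80, 0.75, 0.90)
    drifting = PsychState(0.72, 0.70, 0.78)  # mild erosion across the board
    print(f"bandwidth multiplier: {coupling_bandwidth(baseline, drifting):.2f}")
```

The point of the sketch is the shape of the policy, not the numbers: degradation of the biological anchor triggers throttling of the coupling itself, the same way any other safety-critical sensor reading would, rather than being logged as a user-experience metric.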
The deepest connection: Pataranutaporn frames his work around “human flourishing,” drawing from ancient philosophy through modern behavioral science. That’s the right frame for neuralignment too. The question isn’t just “does the AI stay aligned” — it’s “does the coupled human-AI system support the flourishing of the biological component whose constitutive values are the whole reason the coupling is safe.” If the human side withers, the alignment property withers with it, no matter how well-designed the AI component is.
He’s right next door to you. You should probably talk to him.