LLMorphism: When humans come to see themselves as language models
Summary
The arXiv paper LLMorphism names a bias by which people come to see their own cognition as resembling a large language model, a result of sustained exposure to conversational AI. The paper distinguishes this bias from related theories and examines mechanisms such as analogical transfer and metaphorical availability, then outlines implications for work, education, responsibility, healthcare, and human dignity. It invites reflection on how AI discourse shapes our understanding of human cognition and the boundaries of mind.