DigiNews

Tech Watch by Johan Denoyer


LLMorphism: When humans come to see themselves as language models

Quality: 7/10 Relevance: 8/10

Summary

The arXiv paper LLMorphism names a cognitive bias in which humans come to conceive of their own cognition as working like a large language model, a tendency the authors attribute to sustained exposure to conversational AI. The paper distinguishes this bias from related theories, discusses driving mechanisms such as analogical transfer and metaphorical availability, and outlines implications for work, education, responsibility, healthcare, and human dignity. The work invites reflection on how AI discourse shapes our understanding of human cognition and the boundaries of mind.
