Against Highly Generic HTTP User-Agent Headers
Summary
This article discusses the drawbacks of highly generic HTTP User-Agent headers, arguing that crawlers should identify both their software and their operators to reduce abuse. It highlights the tension between web crawlers (including those collecting data for AI training) and site maintainers, and urges crawlers to identify themselves clearly and to include a real URL describing their activity. Although framed around one blog's policy, it offers practical guidance on User-Agent practices for developers and site operators alike.
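The identification practice the article advocates can be sketched as follows. This is a minimal, hypothetical example: the bot name, version, URL, and contact address are illustrative placeholders, not taken from the article; the pattern of "software/version (+info-URL; contact)" is a common crawler convention.

```python
import urllib.request

# A hypothetical identifiable User-Agent string: it names the software and
# version, and carries a real URL describing the crawler plus a contact
# address, so site operators can see who is fetching and why.
# (ExampleBot, the URL, and the email are illustrative placeholders.)
USER_AGENT = "ExampleBot/1.0 (+https://example.org/bot-info; admin@example.org)"


def make_request(url: str) -> urllib.request.Request:
    """Build a request that carries the identifiable User-Agent header
    instead of the library's generic default."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})


req = make_request("https://example.org/")
# urllib normalizes header names with str.capitalize(), hence "User-agent".
print(req.get_header("User-agent"))
```

The point of the pattern is that a maintainer inspecting server logs can immediately tell which software made the request and where to learn more or complain, which is exactly what a generic default like a bare library name denies them.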