Summary: The article discusses concerns raised by experts such as AI researcher Gary Marcus and theoretical physicist Michio Kaku about the impact of open-source AI tools, particularly large language models, on the internet. These tools can create echo chambers of flawed information and give rise to "Habsburg AI," in which models trained on AI-generated data produce increasingly distorted results. AI can also amplify practices like search-engine poisoning, further muddying online information. Misattributed quotes and misinformation can distort historical context, pandemic responses, and decision-making. The article warns that unchecked AI-generated content could create a chaotic online information landscape, undermining trust in news and knowledge sources, and it calls for a clear strategy to manage AI's risks.