
The Library is Burning: Why AI’s Attack on Wikipedia Threatens Every Project

  • Writer: James Garner
  • Oct 25
  • 5 min read


With AI summaries nearly halving click-throughs to sources, the internet’s foundational library is facing a silent crisis. For project professionals who rely on verifiable knowledge, this is a five-alarm fire.


For two decades, Wikipedia has been the internet’s unlikely miracle: a sprawling, volunteer-run encyclopedia that became the de facto starting point for everything from high school essays to complex project research. It is, as one TechCrunch writer aptly put it, “the last good website” on an internet increasingly swamped by “toxic social media and AI slop” [1]. But that last good place is now under threat, and the culprit is the very technology that promised to make information more accessible: Artificial Intelligence.


The Wikimedia Foundation, the non-profit behind the encyclopedia, recently dropped a bombshell: human page views have plummeted by 8% year-over-year [1]. The cause? The rise of AI-powered search summaries, such as Google’s “AI Overviews,” which scrape information from sites like Wikipedia and present it as a neat, self-contained answer. Users get their factoid, and Google keeps the traffic. Everybody wins, except the source.


This isn’t just a problem for Wikipedia; it’s a systemic threat to the integrity of information that underpins every knowledge-based project. When the sources of truth begin to wither, the foundations of our work begin to crack.



Death by a Thousand Summaries


The data paints a stark picture of this new reality. A landmark study by the Pew Research Center found that when a Google search page includes an AI summary, the number of users who click on a traditional search result link is slashed nearly in half, dropping from 15% to just 8% [2]. Even more damning, a minuscule 1% of users ever bother to click on the source links provided within the AI summary itself. The message is clear: users are content with the AI’s answer, and the original source of that information is becoming invisible.


“When you search for information online, look for citations and click through to the original source material. Talk with the people you know about the importance of trusted, human curated knowledge, and help them understand that the content underlying generative AI was created by real people who deserve their support.”

— Marshall Miller, Wikimedia Foundation [1]


This creates a death spiral for the open web. Wikipedia is a non-profit that relies on a virtuous cycle: people visit the site, they see its value, and a small fraction are inspired to either donate money or, even more crucially, volunteer their time to edit and maintain its millions of articles. As Marshall Miller of the Wikimedia Foundation warns, “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work” [1]. The AI is, in effect, drinking the milkshake of the very ecosystem it needs to survive.


The Project Manager’s Dilemma: Trusting the Black Box

For those of us in project delivery, this trend should be setting off alarm bells. Our profession is built on a bedrock of verifiable facts, credible sources, and clear data provenance. We use these to build business cases, define scope, and manage risk. What happens when the primary gateway to that information—the search engine—actively discourages us from visiting the source?


We are being trained to trust the output of a black box. The AI summary gives an illusion of authority, but it obscures the nuance, the debates, and the citations that are the lifeblood of a well-researched Wikipedia article. It removes the “why” and just gives you the “what.” This is a dangerous path.


Consider the risk to your own projects. A team member, under pressure, googles a critical technical standard or a piece of market data. They get an AI-generated answer and plug it into their report. But the AI has subtly misinterpreted the source, or scraped from an outdated version of the page. Without the cultural norm of clicking through to the source, that error gets baked into your project’s assumptions. The consequences could range from minor rework to catastrophic failure.
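One practical defence is to record not just a source URL but the exact version of the page you consulted. As a minimal sketch, assuming Python with the requests library, the snippet below queries the public MediaWiki Action API for an article’s latest revision and flags a claim for re-verification if the page has changed since it was cited. The function name and the recorded revision ID are illustrative, not part of any tool mentioned in this article.

```python
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def latest_revision(title: str) -> dict:
    """Fetch the ID and timestamp of a Wikipedia article's newest revision."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "ids|timestamp",
        "rvlimit": 1,
        "titles": title,
        "format": "json",
        "formatversion": 2,
    }
    resp = requests.get(API_URL, params=params, timeout=10)
    resp.raise_for_status()
    page = resp.json()["query"]["pages"][0]
    rev = page["revisions"][0]
    return {"revid": rev["revid"], "timestamp": rev["timestamp"]}

# Hypothetical revision ID recorded when the claim was first verified.
cited_revid = 1234567890

current = latest_revision("Project management")
if current["revid"] != cited_revid:
    print(f"Source changed since citation (now rev {current['revid']} "
          f"at {current['timestamp']}); re-verify before reuse.")
```

The point is not the code itself but the habit it encodes: a claim is tied to a specific, checkable version of its source, rather than to whatever an AI summary happened to say on the day.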


"Google users are more likely to end their browsing session entirely after visiting a search page with an AI summary than on pages without a summary. This happened on 26% of pages with an AI summary, compared with 16% of pages with only traditional search results."


— Pew Research Center, July 2025 [2]


This isn’t just about Wikipedia. It’s about the health of the entire information ecosystem. As AI companies race to build more powerful models, they are engaged in a massive, often uncredited, harvesting of the internet’s intellectual commons [3, 4]. If we allow the sources to decay, the AI models of the future will be training on the echoes of echoes, a phenomenon researchers are already calling “model collapse” [5].


Rebuilding the Culture of Verification


At Project Flux, we see this not as a technology problem, but as a project management and leadership challenge. The integrity of your project’s knowledge base is a critical asset that must be actively managed. The rise of AI search requires us to instill a new, more rigorous culture of information verification within our teams.


We must treat AI-generated summaries with the same professional skepticism we would an unverified claim from any stakeholder. We need to train our teams to ask the hard questions: Where did this information come from? Can I see the original source? What context am I missing? We must champion the act of clicking the link.
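What does that discipline look like in practice? One lightweight option, sketched below as an illustration rather than an established framework, is a “claim register”: a simple structure that forces every fact in a project document to carry its original source, an access date, and a named human verifier.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    """One entry in a project claim register: a fact and its provenance."""
    statement: str    # the fact as it appears in the project document
    source_url: str   # the original source, not an AI summary of it
    accessed: date    # when the source was last checked
    verified_by: str  # who clicked through and confirmed the claim
    notes: str = ""   # caveats, context, or known limitations

register = [
    Claim(
        statement="AI summaries cut click-throughs on Google results from 15% to 8%.",
        source_url="https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/",
        accessed=date(2025, 10, 25),
        verified_by="J. Garner",
        notes="Pew Research Center, July 2025.",
    ),
]
```

A register like this is trivial to keep in a spreadsheet instead; the format matters far less than the rule that no claim enters a deliverable without a source, a date, and a name attached.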


The internet was designed to be a resilient, interconnected web. By concentrating information into the hands of a few AI gatekeepers, we are creating single points of failure, not just technically, but epistemologically [6]. We are trading the vibrant, chaotic, and ultimately verifiable library of the open web for a tidy, convenient, but dangerously opaque oracle.


It’s time to fight back. Not by rejecting AI, but by refusing to let it make us intellectually lazy. The future of our projects, and the health of our shared knowledge, depends on it.


Is your team equipped to handle the information integrity challenges of the AI era? Subscribe to Project Flux to get the frameworks and tools you need to build robust, reliable, and risk-aware project teams.


References


[1] TechCrunch. (2025, October 18). Wikipedia says traffic is falling due to AI search summaries and social video. https://techcrunch.com/2025/10/18/wikipedia-says-traffic-is-falling-due-to-ai-search-summaries-and-social-video/


[2] Pew Research Center. (2025, July 22). Google users are less likely to click on links when an AI summary appears in the results. https://www.pewresearch.org/short-reads/2025/07/22/google-users-are-less-likely-to-click-on-links-when-an-ai-summary-appears-in-the-results/


[3] CNET. (2025, October 22). Wikipedia Says It's Losing Traffic Due to AI Summaries, Social Media Videos. https://www.cnet.com/tech/services-and-software/wikipedia-says-its-losing-traffic-due-to-ai-summaries-social-media-videos/


[4] Search Engine Journal. (2025, October 20). Wikipedia Traffic Down As AI Answers Rise. https://www.searchenginejournal.com/wikipedia-traffic-down-as-ai-answers-rise/558803/


[5] Ars Technica. (2024, May 15). Internet-trained AI models risk ‘model collapse,’ researchers warn. https://arstechnica.com/science/2024/05/internet-trained-ai-models-risk-model-collapse-researchers-warn/


[6] Observer. (2025, October 20). Wikipedia's Traffic Falls 8% as A.I. Alters How Users Seek Information. https://observer.com/2025/10/wikipedia-ai-eating-traffic/
