Did AI Create New Knowledge Silos? Voices from Around the World
As the world marks Open Access Week, the research community once again turns its attention to the free and equitable exchange of knowledge. This year, the conversation extends beyond journals and paywalls to a newer frontier: artificial intelligence (AI). Once hailed as the ultimate democratizer of information, AI now shapes how we discover, interpret, and share research. But in doing so, it also raises a pressing question: has AI created new knowledge silos, even as we strive for openness?
Artificial intelligence was supposed to be the great equalizer: an invisible force that would make knowledge truly borderless. Yet as AI models grow more complex and more concentrated, that promise is under strain.
Across disciplines and geographies, thought leaders are reflecting on how AI reshapes the global information ecosystem. Their perspectives reveal a paradox: AI connects us, but it can also quietly divide. Below, we have curated some of those voices.
- The Algorithmic Gatekeepers
AI ethicist Timnit Gebru has long argued that large language models are not neutral tools. They mirror the inequalities of the data they are built on. When models are trained mostly on English-language and Western datasets, entire bodies of knowledge from other parts of the world remain invisible. (Harvard Business School)
Gebru’s advocacy for transparency and community-led AI development reminds us that democratizing AI isn’t about open APIs alone—it’s about who gets represented in the data to begin with.
- The Echo Chamber Effect
Social scientist Sinan Aral highlights how algorithmic personalization narrows our information diets. His “hype machine loop” concept explains how social-media algorithms tend to show us more of what we already like, reducing exposure to diverse viewpoints. (CHM)
While AI helps us find content that feels relevant, it also narrows what we see. The same personalization that makes a feed feel smart can quietly silo us from global perspectives—even in academic and research spaces.
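To see why this loop is self-reinforcing, consider a minimal sketch in Python. It is purely illustrative; the topic list and the reinforcement rule are invented for this example and are not taken from Aral’s work.

```python
import random

# Toy feedback loop: engagement on a topic increases the odds that the
# recommender shows that topic again, so the feed slowly concentrates.
TOPICS = ["open science", "AI ethics", "linguistics", "climate", "economics"]

def recommend(weights, k=3):
    """Sample k topics, biased toward those with higher engagement weight."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS], k=k)

weights = {t: 1.0 for t in TOPICS}   # a neutral starting feed
for _ in range(20):
    for topic in recommend(weights):
        weights[topic] += 1.0        # each impression reinforces the bias

print("Topic weights after 20 rounds (higher = shown more often):")
for topic, w in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {topic}: {w:.0f}")
```

Even starting from a uniform feed, topics that happen to be shown early accumulate weight and crowd out the rest. That is the siloing dynamic in miniature.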
- The Great Data Divide
Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute, warns of a widening “data divide.” Only a handful of corporations and elite institutions have the resources to collect, label, and train data at scale. That means innovation—once the hallmark of open collaboration—now depends on who controls the pipelines of information. (McKinsey & Company)
Li’s vision of human-centered AI stresses inclusivity and shared governance to prevent knowledge monopolies.
- Open Science at Risk
Open-access champion Johan Rooryck, Executive Director of cOAlition S, believes AI could undermine open science if it locks insights behind proprietary systems. If AI-generated insights cannot be audited or cited, we risk losing transparency—one of the foundations of scientific knowledge sharing.
- Missing Voices
Cognitive scientist Abeba Birhane describes this as “epistemic exclusion”—a phenomenon where local knowledge from the Global South rarely enters AI datasets or benchmarks. (WIRED)
When these perspectives are missing, entire communities remain invisible in the world that AI depicts.
- Building Bridges in Asia
Across Asia, a quiet counter-current is forming. In Japan and Korea, language-specific AI models (such as rinna and HyperCLOVA) are helping researchers work in their native languages while still connecting to global networks. These efforts aim to democratize AI knowledge for non-English-speaking researchers and break the linguistic silos that global AI has often reinforced.
- Editorial AI as Equalizer
According to Shilpi Mehra at Paperpal, “AI can either deepen divides or close them—depending on how we use it.” By supporting multilingual research writing and improving accessibility, editorial AI tools can help level the playing field for researchers who lack institutional support or English fluency. Rather than replacing editors, such tools amplify accessibility and inclusion.
The Way Forward
AI didn’t set out to build new silos, but by concentrating data, code, and computation in a few hands, it risks doing just that. Breaking these barriers will require collaboration between technologists, publishers, and research communities to ensure transparency, diversity, and openness. The future of knowledge should depend not on who owns the algorithms, but on who contributes to them.