In 1956, the U.S. passed the Federal Highway Act, promising widespread public benefits. However, these highways tore through thriving Black neighborhoods, devastating generations of economic and cultural vitality. In Durham, North Carolina, where I live, Highway 147 separated the historic Black Wall Street from the vibrant Hayti neighborhood, severing the community from its cultural and economic heart. A thriving four-block district of businesses, nationally recognized by W.E.B. Du Bois, has since been reduced to one block and a handful of Black-owned businesses.
This history shows how infrastructure, despite intentions to connect, can fracture and harm communities.
Today, we are at a similarly critical juncture. Hundreds of millions in philanthropic and governmental dollars are flowing into educational AI infrastructure, coinciding with a political climate pushing so-called “colorblind” policies and an executive order mandating the use of generative AI in schools. This combination is all but certain to ensure that biased datasets are processed by biased computations, producing predictably biased predictions and causing real harm to Black learners.
My own research into the question reveals that more than $100 million will be invested in artificial intelligence (AI) infrastructure projects in education within the next year. The investments primarily focus on building robust knowledge bases or creating large open-source student data sets. For example, a funder collaborative, which includes the Gates Foundation, has announced a forthcoming $25+ million investment in AI infrastructure.
This raises a critical question for the philanthropic field: Have those leading these philanthropic initiatives looked back at history carefully enough to inform an equitable path forward, ensuring we avoid repeating the harms of past infrastructure investments? I raise this question because about a month ago, a funder nearly turned down an AI project my organization is working on because they couldn’t see how our tool builds infrastructure from the ground up. History reminds us that infrastructure built without meaningful input from impacted communities often results not in collective progress but in displacement, exclusion, and lasting harm.
In recent history, Facebook and Instagram content moderation policies provide a clear example of algorithmic bias and its tangible consequences. These platforms frequently mislabel content from Black creators as harmful or dangerous, significantly limiting their reach, engagement, and ability to build audiences.
Now, imagine this bias replicated in educational AI systems: Black students could become systematically less likely to see educational content created by Black educators and creators — content more likely to affirm their cultural identity, reflect their experiences, and engage them meaningfully. This would undermine a well-documented strategy for improving academic outcomes for all students, and especially Black students: the more engagement they have with Black educators, the better their performance.
Even well-meaning efforts like open-source educational data can echo painful histories, such as the story of Henrietta Lacks. Henrietta’s cells were collected and used without her consent, denying her economic benefit and recognition. Her family learned of the cells’ use only when a scientist reached out to them for more data (i.e., blood samples), and only after decades of advocacy did they gain $10 million in financial restitution — while her cells generated billions in revenue. Similarly, open-source educational data initiatives could extract valuable insights from unnamed individuals without fair compensation, perpetuating inequities. Ensuring ethical practices in data collection and use, including fair economic compensation, must be a priority to avoid repeating such historical harms.
To avoid these harms, we must intentionally invest in what I call ‘Cultural Infrastructure’ — an approach grounded in three core commitments:
- Community-Generated Datasets: Actively capturing authentic dialogues and aspirations about education from Black families and communities, ensuring data reflects genuine cultural values, linguistic nuances, and diverse experiences.
- Ethical Data-Sharing Protocols: Transparent and respectful practices that provide clear ownership rights, informed consent, and meaningful monetary benefits to the communities whose data inform AI models.
- Continuous Iterative Updates: Ongoing, deliberate processes to regularly refresh and refine datasets, ensuring AI systems dynamically adapt to cultural evolution and community feedback.
To grasp why this matters, consider how AI systems function. AI models designed for broad use lose accuracy when applied to specialized fields like education. Thus, funders are smartly investing in infrastructure to curate content and standards and to collect large amounts of student data, allowing AI models to generate more specialized recommendations. Who decides what goes into these datasets, and by what criteria, will determine whether AI repeats old biases or supports fairer systems.
Several initiatives already demonstrate promising approaches toward culturally responsive AI. Stanford’s Trustworthy AI Research (STAIR) lab is developing methods to audit and reshape language models so they don’t reinforce harmful racial and gendered stereotypes. At Village of Wisdom, we plan to launch a “Dreams Assessment,” a tested, community-driven process that will capture families’ educational aspirations to guide AI tool development.
Yet, to realize AI’s full benefit for all, funders must explicitly integrate cultural infrastructure standards into their funding evaluation criteria. At a minimum, every infrastructure investment should include authentic community data creation, fair economic compensation, and iterative community-driven refinement. Beyond these criteria, funders would be wise to fund initiatives that engage earnestly with communities to develop culturally affirming benchmarks for success.
Moreover, as scholar Ruha Benjamin reminds us, perhaps all of us should spend more time looking back, leveraging our collective “Ancestral Intelligence,” before rushing into a technological future we haven’t fully prepared for. Our ancestor W.E.B. Du Bois offers us exactly this kind of AI: he documented how Black Durham — like many Black Wall Streets — built infrastructure not for individual gain but for collective care and cultural transmission, through entities rooted in local ownership, community investment, and shared accountability.
This example stands in stark contrast to the Highway Act and the story of Henrietta Lacks, both of which reveal a core truth: When we sever communities from contributing fully and with agency, we rob everyone of the richness, wisdom, and possibility that could be built. As AI becomes the next great infrastructure project, we must decide: will Black communities finally lead, shape, and benefit from this future, or will we once again be the mined data and paved-over community underneath someone else’s “progress”?
William Jackson is the Founder and Chief Dreamer of Village of Wisdom. Find him on LinkedIn.
Editor’s Note: CEP publishes a range of perspectives. The views expressed here are those of the authors, not necessarily those of CEP.