Breakthroughs in artificial intelligence (AI) have shown promise for addressing some of society’s most complex challenges: enabling early diagnosis and treatment of cancer, forecasting wildfire paths, reducing carbon emissions through environmental monitoring, and improving student learning outcomes. Yet the rapid advancement of AI also carries potential for considerable harm: the displacement of entry-level workers, the impact of AI chatbots on mental health, the environmental toll of data centers, the proliferation of mis- and disinformation, online safety and privacy concerns, increased surveillance of vulnerable communities, and more.
The power to define AI’s impact on our future is currently consolidated among a select few, who exercise outsized influence over its direction. This year, Big Tech has invested $325B in AI infrastructure, venture capital firms have poured $149B into AI startups, and hundreds of millions of dollars have been spent lobbying at the state and federal levels to oppose AI regulation. Meanwhile, broad swaths of society are excluded from designing AI solutions, contributing to debates on ethical deployment, advocating for responsible legislation, and benefiting economically from AI’s success.
As AI continues to evolve and becomes ubiquitous in our daily lives, these innovations must be designed, developed, and deployed in ways that center humans, incorporate prosocial values, and do not cause irreparable harm to our communities. For responsible AI to become a reality, we must articulate an affirmative vision that identifies specific opportunities for transformational use while simultaneously addressing potential risks and harms to both individuals and society.
This is why, now more than ever, the philanthropic sector has an opportunity to be a strong leading voice: articulating a vision for civil society and making strategic investments in initiatives that build a more equitable AI future. Yet a recent report from the Center for Effective Philanthropy (CEP), “AI With Purpose,” highlights the current gap in our sector. Only 10 percent of foundations report supporting grantees on AI implementation, and only half of those focus specifically on ethical AI. Few foundations, or the nonprofits they support, report a solid understanding of AI and its applications in their field, and 85 percent of nonprofits report no current efforts to advance equitable AI.
As a philanthropic organization committed to reimagining and rebuilding a more equitable technology ecosystem, we at the Kapor Foundation see this as a critical moment for philanthropy to take a stand. Through strategic investments like HumanityAI, a new $500M, five-year initiative in partnership with other philanthropic leaders on AI, we believe the collective influence of philanthropy can begin to shift capital, knowledge, and power toward AI innovation and usage that works for the common good and promotes racial, economic, and environmental justice.
Principles of Responsible AI
To provide guidance to other funders newly entering the AI landscape, the Kapor Foundation recently released our responsible AI principles for shaping the future of innovation. These principles are meant to guide strategic grantmaking across the philanthropic sector, in areas including education, health, climate, immigration, poverty, civil and human rights, and democracy, toward AI design, development, deployment, and regulation that supports a more just and equitable future.
Principle #1: Utilize a sociotechnical framework to identify challenges and meaningful solutions.
AI tools are far from neutral: they are designed with specific frameworks, biases, and perspectives embedded within them that shape how they act on and affect the world. We view AI in application as a sociotechnical system, one in which it is impossible to separate the technology from its impact on society.
As funders, we should identify the types of societal problems we aim to solve, evaluate whether AI is the appropriate solution, and determine which AI-driven solutions are worth supporting given their potential for positive social impact.
Principle #2: Incorporate prosocial design principles and continually assess broader societal impacts.
The intention of positive social impact is insufficient if AI solutions deploy problematic data collection strategies or compensation models and produce unintended harmful consequences. We must employ prosocial and design justice principles: ensuring that solutions are designed with societal benefit at the forefront, that the communities most impacted by AI are centered in design, that regular audits of impact are conducted, and that the entire lifecycle of AI development can achieve its intended social good.
Philanthropy is already built upon prosocial values that seek broader societal impact across sectors, making it uniquely positioned to invest in programs, research, initiatives, and organizations that pursue positive social impact rather than prioritizing financial returns. Especially as concerns mount around the environmental cost of model training, the exploitation of international data workers, and the lack of diversity on AI teams, we must be willing to support efforts that put environmental, civil, and human rights at the center of their solutions.
Principle #3: Support AI initiatives that shift power.
With massive resources being funneled into the AI ecosystem by U.S.-based tech companies, venture capital firms, and the federal government, the power to shape and benefit from AI remains narrowly concentrated in the hands of a few. Broad swaths of society have little input into decisions about design and deployment, and companies are benefiting economically from the intellectual and cultural property of artists, journalists, and citizens without compensation.
To shift that power imbalance, funders should support AI innovations built through more inclusive development processes that represent a diversity of backgrounds, expertise, and communities. We must also support efforts that expand data collection and ownership models and compensation structures so that economic benefits are shared more broadly. Beyond the technology itself, philanthropy must double down on building power across sectors to raise concerns and address the harms of AI innovation, supporting researchers, academic institutions, unions, and other worker-led organizations, as well as nonprofits, grassroots organizations, and policy advocates.
Principle #4: Promote critical AI literacy and education across society.
Critical AI literacy will be required of all of us as we navigate this new world: to understand how to use AI tools and to ask critical questions about their development, usage, and impact. CEP’s report found that nearly two-thirds of nonprofits and foundations say that none or only a few of their staff have a solid understanding of AI and its applications. Given AI’s growing role in high-stakes decisions such as home lending, employment, incarceration, and surveillance, it is imperative that we support efforts to build AI knowledge among workers, consumers, and advocates so they stay informed about how the deployment of these tools can disproportionately impact the communities they serve.
At the same time, we must expand access to computing education and critical AI literacy in K-12 schools, ensuring all students gain foundational technical skills and the ability to critically interrogate the development, deployment, and impacts of AI in society.
Principle #5: Build collective mechanisms for governance and accountability.
We are facing proactive, large-scale corporate lobbying efforts to eliminate guardrails on AI development, efforts that would limit protections for children and consumers, ignore climate impacts, and put our communities at risk. For-profit companies and their financial interests should not be the only voice in the conversation; civil society should have power in shaping the future of the technologies that will impact all of our lives.
Philanthropy has an important role to play, and we must deploy strategic tactics and allocate resources accordingly to influence the responsible design, development, and deployment of AI. We must support journalists, researchers, and activists who document and identify the harms of AI and raise awareness among the general public. We must also support grassroots, advocacy, and movement organizations that shape and advance policies establishing guardrails and other accountability mechanisms.
Call to Action
AI will reshape modern life, and philanthropy has a responsibility to play a decisive role in guiding its development with intention and with principles that consider the needs of all of society. We can steer AI toward the common good by galvanizing philanthropic capital toward responsible AI solutions that identify specific challenges, shift power, expand critical literacy, and drive greater accountability. In the face of industry resistance and uneven legislative oversight, this moment demands collective funding to mobilize resources and cross-sector efforts to design responsible AI.
Existing research and scholarship across AI ethics and tech justice point to an undeniable reality — the win-at-all-costs AI race within the for-profit sector cannot ultimately benefit society. And we, as philanthropic leaders, have a unique opportunity to shape a more equitable AI future.
Allison Scott, Ph.D., is the CEO of the Kapor Foundation.
