Humanity AI has launched: a new $500 million alliance to wrest control of AI's trajectory from a handful of companies.
Ten major foundations are pooling resources to ensure AI prioritizes human flourishing, not profits.
Key Takeaways
- $500M over five years to support human-centered AI development
- Coalition includes MacArthur, Ford, Mozilla, Mellon, Omidyar, and more
- Focus on democracy, education, labor, culture, security
- Rockefeller Philanthropy Advisors to manage grants starting 2026
- Initiative aims to rebalance AI power toward communities
Humanity AI is a philanthropic coalition launched in October 2025 that commits $500 million over five years to support organizations and frameworks ensuring AI development centers human, community, and societal needs.
Why Humanity AI Matters
As AI systems—large models, automated decision systems, generative tools—become embedded in work, education, governance, and art, the benefits and risks are increasingly distributed unevenly. The voices shaping AI often belong to a narrow set of corporate or elite actors. Humanity AI is an intentional counter to that.
John Palfrey, president of the MacArthur Foundation, put it bluntly: “This technology often feels like it’s happening to us rather than with us.”
By pooling philanthropic capital, the coalition hopes to shore up civil society, fund technical innovation aligned with human values, and seed governance models that give communities more say.
Coalition & Commitments
The founding members include Doris Duke Foundation, Ford Foundation, Lumina, Kapor, MacArthur, Mellon, Mozilla, Omidyar Network, Packard, and Siegel Family Endowment.
Notably, the Siegel Family Endowment publicly committed $1 million toward the initiative, emphasizing the need to protect creative work from being overwritten by AI systems.
The coalition plans a two-phase rollout:
- Fall 2025: aligned grantmaking begins across participating foundations.
- 2026 onward: pooled fund managed by Rockefeller Philanthropy Advisors will support open calls.
Focus Areas: Where the Money Will Go
Each foundation will commit to one or more priority focus areas.
- Democracy: Promote accountability, guard against bias and misinformation, strengthen transparency in systems using AI.
- Education: Shape AI’s use in learning so it expands access, rather than exacerbating inequality.
- Humanities & Culture: Protect creators’ rights, ensure AI augments rather than erases human expression.
- Labor & Economy: Help workers adapt, build tools that enhance human roles instead of simply automating them.
- Security: Press for high standards in AI-driven systems (e.g., autonomous vehicles, credit decisions) to preserve safety and fairness.
One coalition member told Fast Company that they want to support projects that help artists maintain control over their likeness and work, even as generative AI proliferates.
Governance & Accountability
Rockefeller Philanthropy Advisors (RPA) will serve as fiscal sponsor, coordinating pooled funds and staffing a small core team.
The coalition also plans to expand, adding more funders focused on social, cultural, and equity issues.
MacArthur is concurrently hiring a Director of AI Opportunity to manage its “Big Bet” program, ensuring broader participation in AI development.
The Bigger Picture: Philanthropy Meets AI Governance
This move is part of a broader trend: philanthropic actors increasingly seeking influence over AI’s direction. Earlier this year, the Gates Foundation and Ballmer Group unveiled a $1 billion initiative in AI for social good.
Yet the scale and ambition of Humanity AI stand out. Some observers see this as a way to counterbalance the outsized control that tech giants currently wield over AI research, infrastructure, and deployment.
Still, questions remain: how will the coalition handle transparency, performance benchmarks, conflict-of-interest safeguards, and genuine community engagement? These will be the tests ahead.
What Happens Next — What to Watch
- Grant calls in 2026: Early funding rounds and selection criteria will offer a signal of priorities in practice.
- New funders join: The coalition’s ability to scale depends on adding voices—particularly from the Global South, marginalized communities, and non-US philanthropy.
- Impact metrics: Will grantees be measured by real shifts in power, inclusion, and governance, not just outputs?
- Policy bridging: The initiative may link with governance, regulation, or public-sector actors to codify human-first AI norms.
- Public discourse: Watch how this shapes debates in AI policy forums, international bodies, and tech regulation.
Impact on Readers & Communities
For everyday people — workers, students, creators — Humanity AI offers potential guardrails.
- A gig worker might gain access to AI tools that amplify rather than replace her role.
- A student in underserved areas might benefit from AI tutoring shaped around equity, not data bias.
- An artist might better control how their work is reused or remixed by AI systems.
However, ultimate success depends on whether those most affected are treated as partners—not passive beneficiaries—in shaping AI’s future.
Conclusion
Humanity AI is a bold philanthropic push to reclaim AI’s narrative and direction. If it succeeds, the balance of power in AI may shift—toward communities, not just corporations. The coming years will test whether it can translate ambition into tangible influence.