AI, Equity & the Risk of Getting it Wrong: Who Designs the Future?

When Wyle Baoween, CEO of Inclusivity and Co-Founder of VitalAI, was invited to speak at a federal government conference about AI and social equity, one question stood out: How do we make sense of this moment? A year ago, many of us viewed AI with skepticism, unsure whether it was real, useful, or dangerous. Now we’re holding it in our hands daily. The challenge isn’t whether to use it but how to use it wisely, equitably, and responsibly.

In our recent webinar, AI, Equity & the Risk of Getting it Wrong, Wyle shared a powerful perspective on the opportunities and risks AI presents, especially for organizations committed to building inclusive and high-performing cultures. Below, we’ve captured the key insights and ideas from that conversation.

AI: More Than Hype, Less Than Understood

There’s no question that AI is a powerful tool. It can help us streamline tasks, reduce duplication, and work more efficiently. Yet, in many of the leadership circles Wyle engages with, conversations about AI often focus only on risk—bias, misuse, privacy, surveillance. These are critical concerns, but they are just one side of the equation.

“If your organization isn’t thinking about AI, planning for it, or exploring how to use it, you’re not just standing still—you’re moving backward.”

Wyle reminded us that we’re currently using just a fraction of AI’s potential. Beyond language models like ChatGPT, AI is reshaping how decisions are made, how services are delivered, and how entire industries operate.

But here’s the catch: adopting AI without thoughtful design and governance can deepen inequities, marginalize communities, and undermine productivity rather than enhance it.

The Productivity Pyramid

Wyle introduced a framework he developed to think about AI’s role in the workplace—The Productivity Pyramid. It includes three essential layers:

  1. Inclusive Culture – The foundation. Without a culture of respect, trust, and collaboration, no amount of technology or process improvement will deliver lasting impact.

  2. Efficient Processes – The middle layer. AI can’t fix inefficiency; it must be layered onto systems that already function well.

  3. Empowering Technology – The top. AI and other tools should make work easier, faster, and more meaningful—not more complex.


“If people are leaving your organization, being excluded, or not working well together, what’s the point of powerful technology? You’ll spend all your time managing people instead of moving forward.”

A Closer Look at AI’s Risks

To understand the real-world impact of AI, Wyle shared some sobering examples:

  1. The Robert Williams case: In 2020, Williams—a Black man in Michigan—was wrongfully arrested due to facial recognition errors. It was one of several similar incidents.

  2. Impacts on youth and 2SLGBTQIA+ communities: AI-generated content, including deepfake images and videos, is already being used to bully and harass young people online.

  3. Environmental concerns: AI models require immense energy and water to operate, with waste often offloaded to under-regulated regions.

  4. Economic disruption: Experts predict AI will affect 40–70% of jobs, depending on the country. Some workers will benefit; others, particularly those already marginalized, may be left behind entirely.


“Who benefits from AI—and who is harmed? That’s the question we need to keep asking.”

The AI Ecosystem: Three Key Players

Central to this conversation is the AI ecosystem: the people and institutions shaping how AI is developed, used, and regulated. Understanding this ecosystem is essential to addressing the inequities baked into current systems. It consists of three groups:

  1. Designers: These are the engineers, data scientists, and developers creating the algorithms and infrastructure behind AI tools. They decide how the systems are built, what data is used, and what outcomes are prioritized. Yet this group remains overwhelmingly homogeneous—typically lacking the lived experience or diverse perspectives of the broader population their tools will affect. This lack of representation increases the risk of biased or exclusionary systems being developed without challenge.

  2. Users: This is the most diverse group in the ecosystem—individuals, educators, public servants, business leaders, and community members who interact with AI-powered tools. Users often have little control over how the technology was built but are deeply impacted by how it performs. They may adopt AI to streamline tasks or improve service delivery, but they’re often unaware of the data behind it or the potential for unintended consequences.

  3. Governing Bodies: These are the actors responsible for oversight: policy-makers, regulators, corporate executives, and nonprofit watchdogs, along with the governance frameworks that aim to ensure AI is used responsibly. While this group has grown in recent years, prompted in part by the public visibility of tools like ChatGPT, it still lags behind the rapid pace of AI innovation. The risk is that without proactive and inclusive governance, AI will continue to evolve faster than our ability to hold it accountable.

“We have a very diverse group of users, a somewhat diverse group of governors, and a very homogeneous group of designers—and that’s a structural problem.”

The lack of diversity among designers, in particular, has real-world consequences: when the people building the technology don’t reflect the people using it, the tools can unintentionally (or sometimes overtly) reinforce existing biases. And with minimal oversight or feedback from impacted communities, those harms can go unaddressed for far too long.

That’s why representation across all three groups—especially at the design and governance levels—is not just a matter of fairness. It’s a matter of safety, relevance, and trust.

The Inclusive AI Framework

To address these gaps, Wyle outlined a framework for designing AI systems that work for everyone:

  1. Inclusive Governance:
    • Clear accountability structures.
    • Oversight from diverse governing bodies.
    • Transparent standards and reporting mechanisms.
  2. Inclusive Design:
    • AI must be built with empathy and fairness—not just efficiency.
    • Data must be diverse and representative.
    • Impacted communities should be part of the design process.
    • Transparency around how decisions are made and what data is used.
  3. Diverse Usage:
    • AI must protect users from harm—especially youth and marginalized groups.
    • Feedback and correction mechanisms must be robust.
    • Ongoing review to ensure systems remain equitable over time.

Where Do We Go From Here?

AI will continue to shape our lives in profound ways. The question is whether we’ll shape it back—with ethics, equity, and care. That starts with education, dialogue, and pressure for better design, stronger governance, and deeper community engagement.

If your organization is exploring AI implementation or looking to evaluate equitable AI practices, reach out to Inclusivity’s partner organization, VitalAI, for a free consultation. 


This article is based on the webinar “AI, Equity & the Risk of Getting it Wrong,” hosted by Inclusivity in June 2025.

Interested in joining future conversations? Sign up to get notified of free, upcoming webinars. If you have questions or feedback, please reach out to us at [email protected]

