Episode 4
Runtime: 53:23
January 23rd, 2026
As artificial intelligence accelerates across industries, the risks associated with AI are no longer hypothetical—they’re operational, ethical, and societal. In this episode of Crushing It, Jonathan Trimble sits down with Xiaochen Zhang, Chief Responsible AI Officer and Executive Director at AI 2030, to unpack what responsible AI really means and why the window for getting it right is rapidly closing.
The conversation goes beyond high-level principles to explore how bias, lack of transparency, and weak governance enter AI systems at every stage of the lifecycle—from data collection and model design to deployment and real-world use. Zhang explains why AI’s ability to operate at massive scale makes these issues more dangerous than human bias alone, and why organizations must rethink who is at the table when AI systems are built.
This episode also examines the often-overlooked sustainability challenges of AI, including energy consumption, carbon impact, and resource constraints tied to data centers and model training. Zhang shares how responsible AI must balance innovation with environmental and social realities, and why sustainable AI design is becoming a strategic—not optional—consideration.
The discussion closes with a look toward 2030, highlighting both the extraordinary potential of AI to improve human life and the serious risks posed by fragmented governance, unchecked speed, and misplaced trust. It’s a candid, forward-looking conversation on how leaders can embed responsibility into AI today—without slowing innovation tomorrow.
What responsible AI really means in practice
How bias and risk enter AI systems
Why governance and accountability matter
The sustainability impact of AI at scale
Why action on AI risk is urgent now
Xiaochen Zhang is a global leader in responsible artificial intelligence and emerging technology governance. He is the Chief Responsible AI Officer and Executive Director of AI 2030, a global initiative focused on mainstreaming transparent, accountable, and sustainable AI by the end of the decade.
Through his work at AI 2030, Zhang collaborates with governments, enterprises, technologists, and policymakers to address the ethical, social, and environmental challenges created by AI at scale. His work spans responsible AI frameworks, talent development, sustainability, and global governance, with a focus on ensuring AI advances human well-being rather than undermining it.
Zhang regularly advises public- and private-sector organizations on AI strategy and risk, contributes to international AI policy discussions, and leads a growing global community committed to shaping the future of responsible AI.
Jon is a former FBI Special Agent and cybersecurity executive whose career focused on intelligence, analytics, and technology development. A graduate of the U.S. Coast Guard Academy, he brings a strategic, systems-level perspective to how leaders understand risk and make decisions in complex environments.
Rob is a former FBI Special Agent who led and supported extensive international cyber investigations involving complex threat actors and cross-border risk. A graduate of the U.S. Military Academy at West Point, he brings an operational, real-world lens to conversations about resilience, accountability, and leadership under pressure.
Together, Jon and Rob bring FBI-honed lessons about risk and resilience to every conversation.
Interested in being a guest on Crushing It?
Listen on Apple Podcasts | Spotify | YouTube
©2026 Bawn, Inc. All rights reserved.