Engineering Trust in AI: A Human-Centric Approach

Artificial intelligence isn't just about deploying technology; it also requires a concerted effort to understand what people need and what impact that technology will have.

Pamela Passman, Chair of Corporate, APCO

April 30, 2024


Artificial intelligence’s challenges have less to do with trusting technology and more to do with trusting humans.  

The tech world needs a better playbook for engineering trust. Standards like those to be developed by the US government’s newly formed AI Safety Institute Consortium are necessary but not sufficient. Trust is social and cultural and involves complex interactions among humans. Unless we build high-trust human cultures, we will not get high-trust AI systems.  

We’ve seen this before. Companies become successful by being relentless competitors. They move aggressively to disrupt entrenched incumbents and prove naysayers wrong. Their phenomenal success sows the seeds of backlash, often in the form of reputational and regulatory broadsides. Any remaining trust dissolves. For all the good it does, the company feels embattled and friendless.  

Let’s learn from that oft-repeated experience. Real change takes time, but it is possible to engineer trust. It is not easy: leaders must make a conscious choice to enshrine trust in their "cultural operating system" and to rewire their business and its relationship with the outside world. Here are lessons for culture change in the age of disruption.

First, as Dale Carnegie said, you make friends and influence people by paying attention to what’s important to them. We must recognize that combativeness gets in the way of understanding people and what matters to them. Instead of putting the company at the center of the ecosystem, we must become better listeners and use multiple sensors to analyze that ecosystem from different viewpoints: the concerns of regulators, partners, customers and others, and why they believe what they believe.


That simple truth can inform corporate strategy and technological development, and it provides a feedback loop from stakeholders at the nexus of business, policy, media and civil society. These signals might not show up in sales metrics and stock prices, but listening to them can reduce the enterprise value at risk or unlock opportunities to lead the next wave of innovation.

Second, we all have to do a better job of explaining our technologies, helping policymakers to better understand key issues and challenges, while also listening to their concerns. With any emergent technology there are unknown risks, but good dialogue and partnership can help to anticipate them and spur new innovations that increase resilience. One of the best examples is cybersecurity. The threat environment is constantly evolving, and combating bad actors requires ongoing cooperation among public- and private-sector stakeholders: the tech industry, regulators and law enforcement.


Third, the old playbook that made you successful also creates social risks and backlash. You will try to keep on doing what’s worked so well in the past, but eventually you will realize you’re playing a new game with new rules, and you’ll need a new playbook, too. It takes more than refreshing the message or going on a charm offensive. It takes a leadership commitment to engineering trust throughout the organization, from business strategy to product design, brand, culture and especially governance.

Finally, corporate governance can only take us so far. AI does need rules of the road. What’s notable is the willingness of policymakers around the world to regulate AI at an early stage, having learned from the development of the internet and social media not to just wait and see what happens. Rather than put blind faith in innovation, they want to “trust but verify,” as Ronald Reagan might have put it. 

President Biden’s sweeping executive order on artificial intelligence is an important step toward building AI systems we can trust. The AI Safety Institute Consortium that flows from it is a public-private initiative to set standards for red-teaming, risk management, safety and security, and the watermarking of AI-generated content. The clear objective is to strengthen American leadership while safeguarding consumers and national security at home and extending AI rules and standards globally.


There is a tension here we should acknowledge: No country wants to fall behind in the race for AI dominance. Yet for global rules to be sustainable, countries on the leading edge of AI will need to collaborate in ways that are hard for economic competitors and geopolitical rivals to do.  

My experience tells me we have the power to engineer trust; the revolution in artificial intelligence makes it imperative. If leaders across the technology and policy landscape commit to a genuine process of engaging with legitimate concerns, developing globally accepted rules and building high-trust cultures, we can engineer trust into artificial intelligence to advance human progress.  

About the Author

Pamela Passman

Chair of Corporate, APCO

Pamela Passman is APCO's Chair of Corporate and was formerly Microsoft’s deputy general counsel for global corporate and regulatory affairs.
