Commentary

Investors are pouring billions into artificial intelligence. It’s time for a commensurate investment in A.I. governance

By Beena Ammananth
January 16, 2023, 12:26 PM ET
Pope Francis holds an audience with signatories of The Rome Call For A.I. Ethics on Jan. 10. The document was signed by the Pontifical Academy for Life, Microsoft, IBM, the FAO, and the Italian Ministry of Innovation. (Vatican Media - Vatican Pool - Getty Images)

In this heyday of A.I. innovation, organizations are pouring tens of billions of dollars into A.I. development. However, for all the money invested in capabilities, there has not been a commensurate investment in A.I. governance.

Some companies may take the position that when world governments release A.I. regulations, that will be the appropriate time to wrestle A.I. programs into a governance structure that can address complex topics like privacy, transparency, accountability, and fairness. In the meantime, the business can focus solely on A.I. performance.

Regulatory wheels are already in motion. However, regulations move at the speed of bureaucracy, and A.I. innovation is only accelerating. A.I. is already deployed at scale, and we are rapidly approaching a point after which A.I. capabilities will outpace effective rulemaking, putting responsibility for self-regulation squarely in the hands of business leaders.

The solution to this puzzle is for organizations to find the balance between following existing rules and self-regulation. Some companies are rising to the responsible A.I. challenge: Microsoft has an Office of Responsible A.I., Walmart a Digital Citizenship team, and Salesforce an Office of Ethical and Humane Use of Technology. However, more organizations need to quickly embrace a new era of A.I. self-regulation.

The business value in self-regulation

Government bodies cannot look into every enterprise, understand at a technical level what A.I. programs are emerging, forecast the potential issues that may result, and then rapidly create rules to prevent problems before they occur. That’s an unreachable regulatory scenario–and not one business would want in any case. Instead, every enterprise has an incisive view of its own A.I. endeavors, putting it in the best position to address A.I. issues as they are identified.

While government regulations are enforced with fines and litigation, the consequences of failing to self-regulate are potentially much more impactful.

Imagine an A.I. tool deployed in a retail setting that uses CCTV feeds, customer data, real-time behavior analysis, and other data to predict what a shopper may be most likely to buy if an employee uses a particular sales technique. The A.I. also shapes customer personas that are stored and updated for targeted advertising campaigns. The A.I. tool itself was purchased from a third-party vendor and is one of dozens of A.I. systems deployed throughout the retailer’s operations.

Emerging regulations may dictate how the customer data is stored and transferred, whether consent is needed before the data is collected, and whether the tool is provably fair in its predictions. Those considerations are valid, but they are not comprehensive from the business perspective. For example, were the A.I. vendor and its tools vetted for security gaps that could imperil the enterprise’s connected technologies? Do staff have the necessary training and documented responsibilities needed to use the tool correctly? Are customers aware that A.I. is being used to build a detailed persona that is stored in another location? Should they be aware?

The answers to these kinds of questions can significantly impact the enterprise in terms of security, efficiency, ROI on technology investments, and brand reputation, among other things. This hypothetical case reveals how failing to self-regulate A.I. programs exposes the organization to myriad potential problems–many of which likely fall outside of a government’s regulatory purview anyway. The best path forward with A.I. is shaped by governance.

Governance for trust in A.I.

No two companies and A.I. use cases are the same, and in the era of self-regulation, the enterprise is called to assess whether the tools it uses can be deployed safely, ethically, and in line with company values and existing or tangential rules. In short, businesses need to know if the A.I. can be trusted.

Trust as a lens for governance impacts more than just the commonly cited A.I. concerns, such as the potential for discrimination and threats to personal data security. As I discuss in my book, Trustworthy AI, trust also applies to things like reliability over time, transparency to all stakeholders, and accountability baked into the entire A.I. lifecycle.

Not all of these factors are relevant to every organization. An A.I. that automates trade reconciliation likely does not pose a threat of discrimination, but the security of the model and the underlying data is critical. Conversely, data security is somewhat less concerning for predictive A.I. used to anticipate food and housing insecurity, but unfairness and discrimination are priority considerations for a tool that relies on historical data that is potentially rife with latent bias.

Effective self-regulation in A.I. requires a whole-of-lifecycle approach, where attention to trust, ethics, and outcomes is embedded at every stage of the project. Processes must be amended to set clear waypoints for decision-making. Employees must be educated and trained to contribute to A.I. governance, with a solid understanding of the tools, their impact, and the employee’s individual accountability in the lifecycle. And the technology ecosystem of edge devices, cloud platforms, sensors, and other tools must all be aligned to promote the qualities of trust most important in a given deployment.

Self-regulation fills the gap between innovation and government-made rules. Not only does it set the enterprise on a path to meeting whatever regulations emerge in the future, but it also delivers significant enterprise value by maximizing investment and minimizing negative outcomes.

For all we have spent on building A.I. capabilities, we should also look toward investing in how we manage and use these tools to their full potential in a trustworthy way–and we should not wait for governments to tell us how.

Beena Ammananth is the executive director of the Global Deloitte A.I. Institute.

The opinions expressed in Coins2Day.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Coins2Day.
