
I’ve read over 100 AI requests for proposals from major companies. Here’s the matrix of guardrails and obligations that is emerging

By May Habib
May 29, 2024, 8:22 AM ET
In October, President Joe Biden issued the executive order directing his administration to create a new chief AI officer, track companies developing the most powerful AI systems, adopt stronger privacy policies, and "both deploy AI and guard against its possible bias." Chip Somodevilla—Getty Images

Enterprises are adopting generative AI in a big way. We’re elevating work and transforming business processes from sales enablement to security operations. And we’re getting massive benefits: increasing productivity, improving quality, and accelerating time to market.

With this advancement comes an equal need for consideration of the risks. These include software vulnerabilities, cyberattacks, improper system access, and sensitive data exposure. There are also ethical and legal considerations, such as copyright or data privacy law violations, bias or toxicity in the generated output, the propagation of disinformation and deep fakes, and a furthering of the digital divide. We’re seeing the worst of it in public life right now, with algorithms used to spread false information, manipulate public opinion, and undermine trust in institutions. All of this highlights the importance of security, transparency, and accountability in how we create and use AI systems.

There is good work afoot! In the U.S., President Biden’s Executive Order on AI aims to promote the responsible use of AI and address issues such as bias and discrimination. The National Institute of Standards and Technology (NIST) has developed a comprehensive framework for AI systems’ trustworthiness. The European Union has proposed the AI Act, a regulatory framework to ensure the ethical and responsible use of AI. And the U.K.’s AI Safety Institute is working to develop safety standards and best practices for AI deployment.

The responsibility for establishing a common set of AI guardrails ultimately lies with the government, but we’re not there yet. Today, we have a rough patchwork of guidelines that are regionally inconsistent and unable to keep up with the rapid pace of AI innovation. In the meantime, the onus for its safe and responsible use will be on us: AI vendors and our enterprise customers. Indeed, we need a set of guardrails.

A new matrix of obligations

Forward-thinking companies are getting proactive. They’re creating internal steering committees and oversight groups to define and enforce policies according to their legal obligations and ethical standards. I’ve read more than a hundred requests for proposals (RFPs) from these organizations, and they’re good. They’ve informed our framework here at Writer for building our own trust and safety programs.

One way to organize our thinking is a matrix: four areas of obligation (data, models, systems, and operations) plotted across three responsible parties (vendors, enterprises, and governments).

Guardrails within the “data” category include data integrity, provenance, privacy, storage, and legal and regulatory compliance. In “models,” they’re transparency, accuracy, bias, toxicity, and misuse. In “systems,” they’re security, reliability, customization, and configuration. And in “operations,” they’re the software development lifecycle, testing and validation, access and other policies (human and machine), and ethics.

Within each guardrail category, I recommend enumerating your key obligations, articulating what’s at stake, defining what “good” looks like, and establishing a measurement system. Each area will look different across vendors, enterprises, and government entities, but ultimately they should dovetail with and support each other.
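For readers who want to operationalize this, the matrix above can be sketched as a simple data structure. The category and guardrail names come directly from this article; the field names (obligations, stakes, definition of good, metrics) are illustrative labels of my own, not a prescribed schema.

```python
# Illustrative sketch only: the guardrail matrix as nested Python data.
# Categories and guardrails are taken from the article; field names are
# hypothetical placeholders for each organization to fill in.
GUARDRAIL_CATEGORIES = {
    "data": ["integrity", "provenance", "privacy", "storage", "compliance"],
    "models": ["transparency", "accuracy", "bias", "toxicity", "misuse"],
    "systems": ["security", "reliability", "customization", "configuration"],
    "operations": ["sdlc", "testing_and_validation", "access_policies", "ethics"],
}
RESPONSIBLE_PARTIES = ["vendor", "enterprise", "government"]

def build_matrix():
    """Create one empty entry per (category, guardrail, party) combination."""
    return {
        (category, guardrail, party): {
            "obligations": [],          # key obligations to enumerate
            "stakes": "",               # what's at risk if this fails
            "definition_of_good": "",   # what "good" looks like
            "metrics": [],              # how success is measured over time
        }
        for category, guardrails in GUARDRAIL_CATEGORIES.items()
        for guardrail in guardrails
        for party in RESPONSIBLE_PARTIES
    }

matrix = build_matrix()
print(len(matrix))  # 18 guardrails x 3 parties = 54 entries
```

Filling in each cell forces the conversation the article recommends: for every guardrail, each party names its obligations, what’s at stake, what “good” looks like, and how it will be measured.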

Below, I’ve chosen sample questions from our customers’ RFPs to demonstrate how each AI guardrail might work in practice: the enterprise’s key questions on one side, and the RFP language posed to the vendor on the other.

Data → Privacy
Enterprise key questions: Which data are sensitive? Where are they located? How might they become exposed? What’s the downside of exposing them? What’s the best way to protect them?
Vendor RFP language: Do you anonymize, encrypt, and control access to sensitive data?

Models → Bias
Enterprise key questions: Where are our areas of bias? Which AI systems impact our decisions or output? What’s at stake if we get it wrong? What does “good” look like? What’s our tolerance for error? How do we measure ourselves? How do we test our systems over time?
Vendor RFP language: Describe the mechanisms and methodologies you employ to detect and mitigate biases. Describe your bias/fairness testing method over time.

Systems → Reliability
Enterprise key questions: What does our AI system reliability need to be? What’s the impact if we do not meet our uptime SLA? How do we measure downtime and assess our system’s reliability over time?
Vendor RFP language: Do you document, practice, and measure response plans for AI system downtime incidents, including measuring response and downtime?

Operations → Ethics
Enterprise key questions: What role do humans play in our AI programs? Do we have a framework or formula to inform our roles and responsibilities?
Vendor RFP language: Does the organization define policies and procedures that define and differentiate the various human roles and responsibilities when interacting with or monitoring the AI system?

As we transform business with generative AI, it’s crucial to recognize and address the risks associated with its implementation. While government initiatives are underway, today the responsibility for safe and responsible AI use is on our shoulders. By proactively implementing AI guardrails across data, models, systems, and operations, we can gain the benefits of AI while minimizing harm.

May Habib is CEO and co-founder of Writer.


The opinions expressed in Coins2Day.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Coins2Day.
