Responsible AI guidelines: One example

Every business should adopt guidelines for responsible AI that align with its core values. At ServiceNow, guidelines for developing and using AI responsibly are based on four core principles.


“Responsible AI is not about being the AI police.”

Linda Leopold, Head of AI Strategy
H&M Group

“All data is biased, but not all models built on biased data are biased.”

Scott Zoldi, Chief Analytics Officer
FICO


“AI governance is a team sport.”

John Castelly, Senior Vice President and Chief Ethics & Compliance Officer
ServiceNow

Best practices for responsible AI: Insights from industry leaders

Keeping AI systems unbiased, ethical, and law-abiding takes a coordinated effort across the organization

By Dan Tynan, Workflow contributor

Here’s a thought experiment:

You are running an e-commerce clothing site. You’ve asked your data scientists to develop a system for personalized recommendations, and they’ve created a customer-facing AI chatbot named Mauricio.

It is deeply knowledgeable about the latest fashion trends and offers insightful, personalized product advice. Mauricio ends up boosting sales significantly and is especially popular with your teenage customers.

But these same customers routinely overshare sensitive personal information with the bot.

Do you keep Mauricio or do you shut it down?

Linda Leopold from H&M Group spun up this hypothetical scenario in 2019, as part of a series of exercises she dubbed the “Ethical AI Debate Club.” It was three years before ChatGPT launched, and virtually no one outside of a handful of data scientists had ever heard the term “generative AI.”


“Back then, this scenario was science fiction,” says Leopold, head of AI strategy for the $22 billion global fashion brand based in Stockholm. “Now it’s reality.”

The questions the Ethical AI Debate Club raised are more relevant than ever.

How can you ensure that a model’s predictions are not biased against a particular population?

What data uses are fair game, and which ones cross an ethical or legal line?

What can you do to protect the privacy and security of your data?

How do you realize the potential of this amazing technology while avoiding the pitfalls?

In short, how can you use AI responsibly?

Five years ago, H&M was one of the first companies to establish a set of principles for the ethical use of AI. Today, virtually every enterprise hoping to deploy an intelligent chatbot or use AI to drive key business decisions has joined the debate. According to Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI), four out of five organizations have adopted at least one commonly accepted measure for enabling responsible AI, but fewer than 1% have completed the job.

Fewer than 1% of organizations have completed their responsible AI projects

That’s because the answers to ethical questions aren’t always obvious and the process of arriving at answers is complex and time-consuming. In many cases, the hardest part of coming up with guidelines is persuading key decision-makers that they are not only necessary, but also beneficial to the business.

“Responsible AI is not about being the AI police,” Leopold says. “It’s about making sure we use AI in the best ways possible, both for the business and for our stakeholders.”

Bryan McGowan, Principal, Global Trusted AI Leader, KPMG: AI will reduce repetitive work and encourage creativity.

Generative AI risks:

  • Bias that harms a segment of the population

  • Intellectual property violations from foundation models

  • “Hallucinations” causing reputational damage

  • Data poisoning attacks that degrade model accuracy

What does responsible AI look like?

John Castelly, Senior Vice President and Chief Ethics & Compliance Officer, ServiceNow: Transparency is the key to AI trust.

Responsible AI should be a natural extension of an organization’s existing data governance frameworks, says John Castelly, senior vice president and chief ethics and compliance officer for ServiceNow.

“I believe that AI governance is data governance,” he says. “You likely already have a governance model somewhere in your shop. There’s no need to completely reinvent the wheel for AI.” 

Yet generative AI models also pose unique challenges and introduce risks that don’t arise in non-AI enterprise data use. For example: 


  • The materials used to train a large language model (LLM) may violate other organizations’ intellectual property rights, causing potential liability issues for enterprises using the model. 

  • Public-facing generative AI chatbots can be induced to reveal sensitive or proprietary material that was used in their training. 

  • Data poisoning attacks can degrade the accuracy of the model’s predictions. 

  • Chatbots also occasionally generate plausible-sounding but fictional answers to queries (hallucinations), which could lead to reputational damage for companies that deploy them. 

More than half of enterprises surveyed by Deloitte in 2024 say AI presents greater ethical risks than other emerging technologies, but fewer than one in four believe their risk management and governance operations are ready to address these risks. ServiceNow’s Enterprise AI Maturity Index reveals that the companies that are furthest along in their generative AI journeys have devoted more resources to AI governance.

More than half of enterprises surveyed say AI presents greater ethical risks than other emerging technologies

Fewer than 1 in 4 organizations surveyed believe they are highly prepared for generative AI risk management and governance

While responsible AI will look slightly different for every organization, some universal principles and goals apply: 

Fairness. Organizations need to mitigate inherent bias in data used to train AI to ensure that a model’s predictions don’t end up harming or unfairly benefiting a particular segment of the population.  

Transparency. Deep-learning AI models are self-taught and can generate outputs that surprise even their creators. The ability to explain how a model reaches a particular decision is increasingly becoming a regulatory requirement.   

Security. Using AI expands an organization’s attack surface. Organizations must take additional measures to protect the privacy and security of their proprietary data, as well as that of their customers, partners, and employees.  

Sustainability. Training and using LLMs requires huge amounts of computing power and electricity. Enterprises will need to find ways to mitigate any negative environmental impacts of their AI initiatives.

Accountability. Establishing responsible AI policies is only the first step. Organizations need systems in place to ensure these guidelines are followed across the enterprise. 

“Responsible AI means being mindful of the privacy, regulatory, and ethical questions that come with the territory,” adds Castelly. “It takes a lot of work, requires leadership buy-in from the top, and isn’t something that will happen overnight.”


Developing responsible AI takes time

A company’s responsible AI program often begins by looking at how other companies have handled these issues. OMRON, a $5.7 billion electronics manufacturer headquartered in Kyoto, Japan, uses machine learning to optimize its factory automation and home healthcare products, such as blood pressure monitors. Though the company had already established policies regarding technology and human rights, it felt the need to update them with additional guidelines for AI, especially about the handling of sensitive healthcare information, says Ryota Yamada, distinguished specialist of technology for OMRON.

Yamada says his team looked at nearly 20 other organizations’ policies and consulted with analysts and researchers before drafting its own.

It then shared the draft with internal AI stakeholders in Japan, followed by subsidiaries in the U.S., Europe, and China. After integrating everyone’s feedback, the team sought approval from top leadership. 

The whole process took about two years, he adds.

2 years: Time for OMRON to create responsible AI policies

“The underlying ideas in all of these guidelines were mostly the same,” says Yamada. “We were able to learn what was important and the kinds of things we needed to integrate into our own AI policy.” 

The idea that AI should be responsible is one of five core principles in Hewlett Packard Enterprise’s (HPE) ethical AI guidelines, which it began to formulate in 2019 as an outgrowth of its policies related to human rights, says Kirk Bresniker, HPE fellow and chief architect for HPE Labs. Unlike OMRON, the $29 billion technology giant decided to create its guidelines from scratch. 

“I started with a blank page because we wanted something that was authentic to HPE,” says Bresniker, who led the working groups charged with formulating the principles. “I figured we’d end up with five or six of them and it would take us about a week to work out each one. A year later, we were still talking.”

HPE published its AI ethics guidelines in 2020. After ChatGPT emerged in November 2022, the company realized it needed to augment them.

“It was a tectonic shift in technology,” he says. “In January of 2022, we were at the World Forum in Davos, offering toolkits to the C-suite so they could talk about the potential and risks of AI without sounding like crazy people. After November 2022, it was like, ‘Aha.’ Everyone realized they needed to talk more about responsible use.”

Stakeholder engagement and education: AI takes a village

Establishing responsible AI guidelines is not something you can simply delegate to data scientists or governance professionals, notes Castelly. Every stakeholder in the organization has a role to play.

“AI governance is most definitely a team sport,” he says. “You want people who understand the technology, and you want folks a little higher up who understand the bigger picture. You want someone in that room who doesn’t have a vested interest in the development of the product but is only thinking about what’s best for the company at large.”

For ServiceNow, that meant involving everyone from business unit leaders and engineers to legal, marketing, security, sales, and HR. H&M took a similar multidisciplinary approach, bringing in experts on ethics, law, data science, human rights, sustainability, and diversity, says Leopold.

“H&M’s digital ethics principles were created in collaboration with multiple teams across the company,” she adds. “Responsible AI is about maintaining the well-being of people, the business, and the planet, so you need people who can provide different perspectives. Our ethical discussions go much better when there’s a diverse set of people participating.”

Mitigating bias, with humans in the loop

Many of the underlying principles of responsible AI—mitigating bias, ensuring transparency, and enabling human control—are interdependent. In order to determine whether a model produces answers that unfairly disadvantage a particular group, you need to be able to understand how it reaches these decisions. You then need the ability to step in and manually adjust the model so that it delivers fairer outcomes.

Rumman Chowdhury, Head of Parity Consulting and Responsible AI fellow, Harvard University: To make AI work for people, start with a problem to solve, not technology.

“All data is biased, but not all models built on biased data are biased,” says Scott Zoldi, chief analytics officer for FICO, the $1.7 billion analytics software company best known for its FICO score used to measure credit risk. “Responsible AI requires human beings to step in, look at the latent features of that data, and test whether they discriminate on gender, race, geography, or other demographic factors. The human in the loop is critically important.”

But that also requires AI models to be transparent in how they operate—and the overwhelming majority of them are not, adds Zoldi.

“If a model is going to make a decision about someone, you need to be able to explain how it reached that decision so that person can contest that decision or that score or at least be informed as to why,” he says. “At FICO, we need the ability to say with absolute clarity, ‘This is the reason the model scored this high or that low.’”

For Zoldi, that meant telling the company’s data scientists they could not use deep-learning AI models and had to instead build simpler, more interpretable AI. 

“It took more time and care to build those models,” he says. “But we were comfortable with that because we understood the problems and the behavior of the models better. We gained more insight while ensuring responsible use.”
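Zoldi’s point about humans in the loop can be made concrete with a simple outcome test: compare how a model treats different groups and flag large gaps for review. The sketch below is a minimal illustration in pandas; the column names, the toy data, and the four-fifths threshold are assumptions for the example, not FICO’s actual methodology.

```python
# A minimal sketch of a group-wise outcome check, assuming a pandas DataFrame of
# model decisions. Column names, sample data, and the four-fifths threshold are
# hypothetical illustrations, not FICO's method.
import pandas as pd

def disparate_impact(decisions: pd.DataFrame, group_col: str, approved_col: str) -> dict:
    """Return each group's approval rate and its ratio to the best-treated group."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    best = rates.max()
    return {group: (rate, rate / best) for group, rate in rates.items()}

# Hypothetical scored population: 1 = approved, 0 = declined
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

for group, (rate, ratio) in disparate_impact(decisions, "group", "approved").items():
    flag = "flag for human review" if ratio < 0.8 else "ok"  # common four-fifths rule of thumb
    print(f"Group {group}: approval rate {rate:.0%}, ratio to best {ratio:.2f} -> {flag}")
```

A failing ratio doesn’t prove the model is biased; it tells the human in the loop where to look next, which is exactly the role Zoldi describes.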

Auditing and compliance

It’s not enough to create guidelines; you also need to make sure your people are following them. That’s often a manual process, says Castelly. 

“You have to pressure test,” he says. “You have to spot-check your models, do surprise audits, and make sure the risk machinery of your company has AI compliance on its roadmap. But giving that enforcement the teeth it needs to create movement requires buy-in from the very top of the organization.”

At HPE, any team member who wants to launch an AI initiative fills out a form describing their project and what they’re hoping to achieve. Within two weeks, they’re having a conversation with the appropriate people in ethics and compliance, cybersecurity, IT operations, and/or data privacy about the risks involved, says Bresniker. It’s up to the team member to decide whether to move forward with the project. And sometimes they push back. 

“One of the other HPE fellows once asked me, ‘Are you going to stand between me and the business?’” he says. “I said, ‘No. I’m going to stand beside you, talking into your ear, giving you guidance and advising you on the risks. You’re the decider, and I want you to be able to own your decisions about any AI-driven outcomes, with confidence in our principles.’” 
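As an illustration of that kind of intake-and-review flow, the sketch below models a simple register: a proposed AI initiative is logged, reviewers attach risk notes, and the requester keeps the final call. The field names, review functions, and two-week window are assumptions drawn from the anecdote, not HPE’s actual tooling.

```python
# Illustrative sketch of an AI-initiative intake and risk-review flow (not HPE's
# actual tooling): the requester logs the project, reviewers add risk notes, and
# the requester still owns the go/no-go decision.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIInitiative:
    owner: str
    description: str
    intended_outcome: str
    submitted: date = field(default_factory=date.today)
    risk_notes: dict = field(default_factory=dict)

    @property
    def review_due(self) -> date:
        # Conversation with the risk functions expected within two weeks of submission
        return self.submitted + timedelta(weeks=2)

    def record_review(self, function: str, notes: str) -> None:
        self.risk_notes[function] = notes

    def decide(self, proceed: bool) -> str:
        # Reviewers advise; the requester owns the outcome
        status = "proceeding" if proceed else "withdrawn"
        return f"{self.description}: {status} after {len(self.risk_notes)} risk review(s)"

request = AIInitiative("team_member", "AI chatbot for internal IT support", "cut ticket resolution time")
request.record_review("data privacy", "employee data minimized; no customer data used")
request.record_review("cybersecurity", "model hosted inside existing security boundary")
print(request.review_due, "|", request.decide(proceed=True))
```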

FICO uses the blockchain to ensure the AI models it creates comply with its internal policies, says Zoldi. Every time a developer creates a new model or modifies an existing one, that event is recorded on the blockchain. A second person tests the new code to determine whether it meets FICO’s responsibility criteria and records their findings. A third person then looks at their work and confirms that neither of them made a mistake. 

“No model gets released from FICO until it meets all of our requirements,” Zoldi says.
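The sketch below shows, in miniature, how such an append-only record with separate sign-offs might work: each event is hash-chained to the one before it, and a model only becomes release-ready when development, testing, and verification were performed by three different people. The data structures are an assumption for illustration, not FICO’s actual blockchain implementation.

```python
# Illustrative hash-chained audit trail for model-development events (an assumed
# structure, not FICO's implementation): every event links to the previous one,
# and release requires three distinct people across create/test/verify.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class AuditEntry:
    model_id: str
    event: str        # "created", "tested", or "verified"
    actor: str        # who performed the step
    details: dict
    prev_hash: str
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[AuditEntry] = []

    def record(self, model_id: str, event: str, actor: str, **details) -> str:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = AuditEntry(model_id, event, actor, details, prev)
        self.entries.append(entry)
        return entry.digest()

    def release_ready(self, model_id: str) -> bool:
        # Require development, testing, and verification by three different people
        actors = {e.event: e.actor for e in self.entries if e.model_id == model_id}
        steps = {"created", "tested", "verified"}
        return steps <= actors.keys() and len({actors[s] for s in steps}) == 3

trail = AuditTrail()
trail.record("risk-model-7", "created", "developer_1", change="new gradient-boosted model")
trail.record("risk-model-7", "tested", "reviewer_2", bias_checks="passed")
trail.record("risk-model-7", "verified", "auditor_3", notes="test evidence confirmed")
print(trail.release_ready("risk-model-7"))  # True only once all three roles have signed off
```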

Regulatory considerations 

There’s no shortage of guidance for what constitutes best practices in a responsible AI program, from the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST), to Anthropic’s thoughts about constitutional AI. Multiple governments, agencies, and public and private companies (including ServiceNow) have published their own ethical principles. (In January, the Trump administration rescinded President Biden’s 2023 executive order regulating the development and use of AI in the United States. New guidance from the Trump administration is pending.) 


Key AI regulations and frameworks

  • European Union, AI Act: Prohibits AI use for deceptive practices, social scoring, and most involuntary biometric applications

  • Utah, USA, Artificial Intelligence Policy Act (2024): Requires companies to disclose the use of generative AI tools to customers

  • Colorado, USA, Colorado AI Act (effective 2026): Restricts the use of automated decision-making systems

  • National Institute of Standards and Technology (NIST), AI Risk Management Framework: Provides guidelines for managing risks associated with AI systems

  • United Nations Educational, Scientific and Cultural Organization (UNESCO), Recommendation on the Ethics of AI: Offers ethical guidelines for AI development and use

  • Anthropic, Constitutional AI: Proposes principles for aligning AI behavior with human values


But regulation is still in its nascent phase. In the U.S., many state laws are still pending approval or have not yet gone into effect. More regulations are coming, and businesses should be ready for them, warns Castelly. Companies that fail to install proper safeguards now may suffer consequences when their AI model hallucinates or is trained with data it isn’t supposed to have access to, he adds.

“A lot of business people tend to say, ‘I’ll do this thing now and worry about getting permission later,’” says Castelly. “We saw that with ESG and crypto. Companies that build their platforms with the idea that regulation will eventually come will still be here in five or 10 years. Those that say we’ll figure it out later probably won’t be.”

Fostering a culture of responsibility

Margaret Mitchell, Researcher and Chief Ethics Scientist, Hugging Face: You can’t create a “fair” system without considering the population it serves and the context in which it’s used.

Establishing rules and setting up enforcement mechanisms will only go so far, cautions Leopold. Business leaders need to create a culture of responsibility that permeates the organization.

“It’s so easy to go down the governance route and think that you will solve the problem by just creating great principles and guidelines,” she says. “You need to combine them with cultural activities focusing on awareness, literacy, and engagement. You need to make people actually see the value of using the governance tools and following the guidelines.”

Ultimately, says HPE’s Bresniker, you want to create responsible AI guidelines that reflect your organization’s core values. And that means getting everyone on the same page.

“I would really encourage everyone—team members, shareholders, your governance operations—to start this conversation,” he says. “You want to make sure the values of your company are represented in how you use AI tools. Otherwise, someone else’s values will be.”

Author Dan Tynan is an award-winning journalist whose work has appeared in more than 100 publications. He has been writing about technology since before Google was a verb. 
