Embracing Responsible AI: How Product Managers Can Champion Ethics and Responsibility

July 25, 2023

Generative artificial intelligence (AI) is spreading rapidly across many industries. This field of AI is transforming existing products and services and driving the development of novel AI-powered products. It’s exciting to note that AI is not just for tech giants like Google, Amazon, and Meta. The application of AI technologies is open to a wide variety of players, from individual developers and startups to small and large companies. New research shows 67% of senior IT leaders are prioritizing generative AI for their business within the next 18 months. As organizations figure out how to achieve business success with AI, its adoption brings new ethical risks and responsibilities. It isn’t enough to just apply AI or bolt it onto a product.

Product management is at the epicenter of this exciting revolution with product managers taking on a greater role to meet these new responsibilities. Successful AI product management requires prioritizing the responsible use of AI while being mindful of the ethical implications and the necessary steps to reduce risks.


The AI Product Manager

An AI product manager focuses on the application of artificial intelligence and its subfields, such as deep learning, machine learning, and generative AI, to enhance existing products or develop new ones. The best PMs know how to uncover the right data and apply the insights gained to the design of a product that delights customers and delivers business value.


Skillsets of an AI Product Manager

The key difference between AI product managers and software product managers, who may have a background in UX and marketing, is that the AI PM will have an understanding of data processing and statistics. While both share common responsibilities, the skillset of an AI PM includes the following:

  • Data literacy: Knowing the right data questions to ask, the hypotheses to formulate, and how to interpret data and models is central to the role of the AI PM.
  • Tailoring specs and acceptance criteria: AI PMs need to deliver AI specifications to data science teams effectively. They also need to pay attention to AI accuracy in addition to the traditional acceptance criteria. 
  • Communication and evangelism: The AI PM must effectively bridge the language of data science and product development, being able to communicate beyond the product. The AI PM has to understand and promote the merits of AI adoption across the organization.
  • AI transparency and trust: As stewards of trust in the product, the AI PM must ensure that explainable AI gives customers and stakeholders insight into AI decision-making.
  • AI ethics: Product decisions must consider the ongoing ethical application of AI and the principles of responsible AI.


The Unintended Consequences of AI

It’s important that product managers remain customer-centric given the tantalizing opportunities of AI, and that they stay aware of and vigilant to the risks that AI products and services may introduce. There have been several high-profile examples of the unintended consequences of deploying models and AI systems: word embeddings that mischaracterized African-American names as unpleasant, WestJet’s AI service bot that misdirected customers to a suicide prevention line (in response to positive reviews), and Amazon’s AI recruitment bot that filtered out women as job candidates.

These risk factors can be summarized within the following themes:

  • Reproducing and amplifying biases: AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can lead to discrimination against certain groups of people.
  • Unexpected and/or inadequate behaviors: AI systems might yield results that are outside of the original intent and design. AI-powered products and services can be used to make decisions that affect people’s jobs, health, and even their freedom. It is therefore essential that AI is used in a way that is fair, ethical, and transparent.
  • Inability to detect and rectify unfair outcomes: AI systems are complex and often operate in ways that are not fully understood. This can create new risks, such as exploitation of the AI system or its use to spread misinformation.
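The bias theme above can be made concrete with a simple check that a PM and data science team might run together. Here is a minimal sketch (the group labels and loan-approval outcomes are invented for illustration) that computes a demographic-parity gap, i.e. the difference in positive-outcome rates between two groups:

```python
# Minimal demographic-parity check on hypothetical binary outcomes.
# A large gap between groups is a signal to investigate, not proof of bias.
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(records, group):
    """Fraction of positive outcomes for one group."""
    vals = [y for g, y in records if g == group]
    return sum(vals) / len(vals)

rate_a = approval_rate(outcomes, "group_a")  # 3/4 = 0.75
rate_b = approval_rate(outcomes, "group_b")  # 1/4 = 0.25
parity_gap = abs(rate_a - rate_b)            # 0.50 — worth investigating
```

Real fairness audits use richer metrics and statistical tests, but even a quick check like this can surface a disparity early enough to fix the training data or the model.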

About 79% of senior IT leaders are concerned about potential security risks, unexpected behaviors, and biased outcomes. As organizations figure out AI, product management must work with stakeholders and teams across the organization to ensure that models and AI systems are deployed responsibly.


What is Responsible AI?

There is no universal definition or standard practice that defines and guides the implementation of responsible AI. Instead, organizations have been developing their own principles and practices that reflect their mission and values. Still, we can identify a consistent set of common themes that can be applied to any organization:

  • Transparency
  • Fairness
  • Accountability
  • Privacy
  • Accuracy and Reliability


Principles of Responsible AI

These principles can only go so far if the organization lacks an ethical AI practice to operationalize them. Such a practice combines the organization’s AI principles, governance processes, and tools into a coherent whole that embodies its values and guides its approach to product development and deployment. It should also create more cohesion among product management, data science, engineering, privacy, legal, user research, design, and accessibility.

For example, Google defines its approach to responsible AI as a commitment that AI is built for everyone, is accountable and safe, respects privacy, and is driven by scientific excellence. In June 2018, Google published its 7 principles for responsible AI, providing a framework to guide decision-making and the incorporation of responsibility by design into AI systems and products. Here is an overview of Google’s 7 Principles of Responsible AI:

  1. AI should be socially beneficial: Any AI product should take into account a broad range of social and economic factors and can only proceed if the overall benefits substantially exceed the risks or downsides.
  2. AI should avoid creating or reinforcing unfair bias: Care should be taken with the data that’s selected for training models and systems to avoid unjust effects on people, particularly those related to sensitive characteristics.
  3. AI should be built and tested for safety: AI systems need constant oversight; you can’t just “set it and forget it.” Certain automation can help the review process by collecting and analyzing metadata from the AI system. Ultimately, humans need to be in the testing loop, checking the output for accuracy, bias, and unintended behaviors.
  4. AI should be accountable to people: The AI system provides opportunities for feedback, relevant explanations, and appeal.
  5. AI should incorporate privacy design principles: The system should have an architecture with privacy safeguards such as providing notice, consent, transparency and control over the use of data.
  6. Scientific excellence: The highest standards of scientific rigor should be used to deliver verifiable results that balance accuracy, precision, and recall (the model’s ability to correctly identify positive cases within a given dataset). The system should enable people to validate the results and verify the sources the model is pulling information from, while providing the ability to highlight any uncertainties.
  7. AI should be made available for uses that accord with these principles, in order to limit potentially harmful or abusive applications.
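Principle 6 mentions balancing accuracy, precision, and recall. As a quick illustration of what those three metrics measure, here is a minimal sketch on invented binary predictions and labels:

```python
# Accuracy, precision, and recall computed from scratch on invented labels.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # ground truth
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]  # model predictions

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)  # of predicted positives, how many were correct
recall = tp / (tp + fn)     # of actual positives, how many were found
```

The balance matters because the metrics trade off against each other: a spam filter tuned for high precision (few false alarms) may miss more spam, i.e. have lower recall, and vice versa.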

As you can see, the core themes (shown earlier) are present within the 7 principles of responsible AI. Over the years, other institutions have refined these principles, and the core ideas can serve as a framework for leaders who are establishing an ethical AI practice within their organizations.


The Role of the Responsible AI Product Manager

Product managers will need to take on a greater role to ensure that AI is used responsibly. Though there are very good job frameworks for AI PMs, the responsible AI product manager will prioritize the ethical use of AI and ensure that AI products are robust, fair, explainable, accountable, and trustworthy.

Here are some additional ideas that can help PMs get started on the practice of responsible AI: 

  • Operationalizing an ethical AI practice: Product managers should involve stakeholders, such as data scientists, engineers, and ethicists, in the development of AI products. This will help to ensure that the products are designed in accordance with the ethical principles of the organization.
  • Ensure that data is used responsibly: This includes the selection of training data, model evaluation, and system testing to ensure that bias is mitigated and system limitations are documented.
  • Use responsible AI principles: Product managers should use responsible AI principles, such as fairness, transparency, and accountability, when developing AI products. This also includes data privacy and controls.
  • Monitor and evaluate: Product managers should monitor and evaluate the performance of AI products to ensure that they are working as intended. They should also be prepared to make changes to the products if necessary.
  • Educate users about AI: Product managers should educate users about how AI works and how it is used in their products. This will help to build trust between users and product managers.
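The “monitor and evaluate” point above can be sketched as a simple post-deployment check. This is a hedged illustration, not a production monitoring system; the baseline accuracy, alert threshold, and labeled batch are all assumed:

```python
# Sketch of a post-deployment accuracy monitor with a hypothetical alert rule.
BASELINE_ACCURACY = 0.90   # accuracy observed at launch (assumed)
ALERT_DROP = 0.05          # alert if live accuracy falls more than 5 points

def check_drift(y_true, y_pred):
    """Return (live_accuracy, needs_alert) for a batch of labeled predictions."""
    live = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return live, live < BASELINE_ACCURACY - ALERT_DROP

# Example batch: 8 of 10 predictions match, so live accuracy is 0.80,
# which falls below the 0.85 alert line and should trigger a review.
acc, alert = check_drift([1, 0, 1, 1, 0] * 2, [1, 0, 1, 0, 0] * 2)
```

In practice the same loop would also track fairness metrics and unexpected behaviors, and an alert would route to the humans responsible for reviewing and, if needed, rolling back the model.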


Final Thoughts

The scope, required skills, and responsibilities of product managers have significantly increased when it comes to ensuring that AI is used responsibly and that ethical practices deliver products that are safe for customers and intended users. AI continues to offer exciting opportunities, especially within the practice of product management, and it’s affording new opportunities for PMs to be influential leaders within their organizations. As we remain customer-centric in our approach to product, we’re not only delivering products that change the way people think, work, and live – we must also uphold the highest ethical standards, scientific rigor, and responsibility in our use of artificial intelligence.