Responsible AI: A Framework for Making Ethical AI Systems

Why does AI need to be responsible?

A few fundamental questions need to be answered:

  1. Who is accountable for the results produced by an AI system?
  2. How does the model ensure that its output is fair and holistic?
  3. Is the AI model reliable, such that its results are consistent?
  4. How does one ensure that privacy laws were followed?
  5. What are the characteristics of a responsible AI system?
  6. What is the framework for implementing responsible AI?

The Framework

Microsoft has long championed the cause of responsible AI. It has created a framework that ensures AI is used responsibly across the organization and that its core principles are followed in every product Microsoft creates. The six principles in this framework are as follows:

  1. Transparency: AI systems should be understandable.
  2. Accountability: People should be accountable for AI systems.
  3. Fairness: AI systems should treat people fairly.
  4. Reliability and Safety: AI systems should perform reliably and safely.
  5. Privacy and Security: AI systems should be secure and respect privacy.
  6. Inclusiveness: AI systems should empower everyone and engage people.

1. Transparency

Transparency is one of the foundational principles on which all the other principles rely. AI systems are complex: they fuse mathematics, statistics, programming, and the business domain together. Such a system can be challenging to explain to a general audience, and AI models that rely on neural networks are more complex and challenging still. However, it is pivotal for an AI system to demonstrate its decision-making process; one needs to understand the rationale behind its decisions. A few questions to ponder to ensure that the AI system is transparent are as follows:

How can the output churned out by an AI system be explained?

Is the explanation logical and simple for a general audience to understand?

Does the explanation align with other principles of a responsible AI system?
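One lightweight way to approach the first question is a sensitivity probe: perturb each input feature and observe how the output shifts. The sketch below illustrates the idea on a hypothetical linear scoring function (the model, feature names, and weights are all illustrative assumptions, not any real system):

```python
def score(applicant):
    """Hypothetical stand-in for a real model (illustrative only)."""
    return (0.5 * applicant["income"]
            + 0.3 * applicant["years_employed"]
            - 0.2 * applicant["debt"])

def sensitivity(model, example, delta=1.0):
    """Nudge each feature by `delta` and record how the output moves."""
    base = model(example)
    impact = {}
    for feature in example:
        perturbed = dict(example)
        perturbed[feature] += delta
        impact[feature] = model(perturbed) - base
    return impact

applicant = {"income": 50.0, "years_employed": 5.0, "debt": 10.0}
print(sensitivity(score, applicant))
```

For a linear model this recovers the weights directly; for an opaque model, the same per-feature impacts give a first, explainable approximation of which inputs drove a particular decision.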

2. Accountability

Another foundational principle for a responsible AI system is accountability. Where does the buck stop? Accountability implies that whoever develops the AI system is accountable for its actions. Without accountability, the holistic framework of a responsible AI system cannot be created. Imagine that a self-driving car, directed by its algorithm, meets with an accident. Who is responsible for it? In a world where AI takes over more human tasks, accountability for the outcome is pivotal. A few questions to ponder to infuse accountability into an AI system are:

What is the potential impact of the outcome of an AI system?

Who is governing the accountability of the AI system developed?

How is the accountability measured and deployed?

3. Fairness

When it comes to fairness, the fundamental question to ask is: what is fair? The answer has multiple layers of social and cultural wrappings. What is acceptable to me may not be acceptable from your perspective. Fairness depends on the vantage point, and there is no universal yardstick to measure it; it is a relative concept. A few questions to ponder to ensure that the AI system is fair are:

Are the attributes used to train the model prone to data biases?

Can the data that is creating bias be eliminated?

Can the data collection methods be investigated further to reduce bias?
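Even though fairness is relative, simple quantitative checks can flag outcomes worth investigating. The sketch below computes per-group selection rates and their ratio (sometimes called disparate impact); the records, group names, and any threshold you compare the ratio against are illustrative assumptions, not a definition of fairness:

```python
def selection_rates(records, group_key, outcome_key):
    """Rate of positive outcomes for each group in `records`."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates = selection_rates(records, "group", "approved")
print(rates)                    # A: 0.75, B: 0.25
print(disparate_impact(rates))  # 0.25 / 0.75 ≈ 0.33 — worth investigating
```

A low ratio does not prove unfairness, but it tells you which attribute and which data-collection step deserve the scrutiny the questions above call for.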

4. Reliability and Safety

Any trustworthy system needs to be reliable and safe, and AI systems are no different. Being reliable and safe is even more critical for AI systems as they permeate the very fabric of human experience. Again, a few questions need to be answered:

Are the model performance metrics consistent in multiple scenarios, especially in outlier cases?

What will be the impact on the stakeholders if the AI system doesn’t behave as expected?

Is the model output repeatable and reproducible?
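The repeatability question can be turned into an automated check: run the same stochastic routine twice with a fixed seed and confirm the outputs match. The `train_model` function below is a hypothetical stand-in for a real training pipeline; the point is the pattern of seeding all randomness explicitly and asserting on the result:

```python
import random

def train_model(data, seed):
    """Hypothetical training routine; all randomness flows from `seed`."""
    rng = random.Random(seed)  # isolated RNG, no hidden global state
    weights = [rng.uniform(-1, 1) for _ in data]
    return [w * x for w, x in zip(weights, data)]

data = [1.0, 2.0, 3.0]
run1 = train_model(data, seed=42)
run2 = train_model(data, seed=42)
assert run1 == run2  # identical seeds must yield identical results
```

In a real pipeline the same idea extends to library-level seeds and hardware nondeterminism, but a check of this shape in CI is a cheap first line of defense for reproducibility.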

5. Privacy and Security

Privacy and security have become controversial topics. The recent backlash against WhatsApp's changes to its privacy policy, and the unprecedented switch to alternatives like Signal and Telegram, reinforces that privacy needs to be taken seriously. AI systems like face recognition or voice tagging can be used to infringe on an individual's privacy and threaten their security. How an individual's online footprint is used to trace, deduce, and influence their preferences or perspective is a serious concern that needs to be addressed. How fake news or deep fakes influence public opinion also poses a threat to individual and societal safety. AI systems are increasingly misused in this domain. There is a pertinent need to establish a framework that protects an individual's privacy and security. A few questions to ponder are:

Was the data used to train the AI model acquired legally and with transparency?

Will the outcomes delivered by the AI system compromise the privacy of an individual or of groups of individuals?

Is the AI system using data securely?
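One common technique for using data more securely is pseudonymization: replace raw identifiers with salted hashes so records can still be joined during training without exposing the original values. This is a minimal sketch, not a complete privacy solution; the salt handling is illustrative, and real deployments need proper key management (and salted hashing alone does not defeat a determined adversary):

```python
import hashlib

def pseudonymize(value, salt):
    """Replace an identifier with a one-way salted SHA-256 digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

record = {"user_id": "alice@example.com", "age": 34}
safe_record = {
    **record,
    "user_id": pseudonymize(record["user_id"], salt="example-salt"),
}
print(safe_record["user_id"][:12])  # stable token, original value not exposed
```

The same input with the same salt always maps to the same token, so joins and aggregations keep working on the pseudonymized column.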

6. Inclusiveness

The world around us is diverse, with people from all walks of life. People with disabilities, nonprofit organizations, government agencies, and many others need AI systems as much as any other individual or enterprise. An AI system should be inclusive and attuned to the needs of this diverse ecosystem. Inclusiveness is often a tough question to ask, with little obvious return on investment, but it is an important one. When designing AI systems for inclusiveness, the following questions need to be answered:

Is the AI system developed to ensure that it includes different categories of individuals or organizations in its specific context?

Are there any categories of data that need to be handled exceptionally to ensure that they are included?

Does the experience that the AI system provides exclude any specific categories? If yes, is there anything that can be done about it?
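A starting point for the second question is a simple representation check: count how often each category appears in the training data and flag groups below a minimum share so they can be handled deliberately. The categories and the 10% floor below are illustrative assumptions; what counts as "underrepresented" is a judgment call for each context:

```python
from collections import Counter

def underrepresented(labels, min_share=0.10):
    """Return categories whose share of the data falls below `min_share`."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(g for g, c in counts.items() if c / total < min_share)

# Illustrative dataset: 85% urban, 10% rural, 5% remote users
labels = ["urban"] * 85 + ["rural"] * 10 + ["remote"] * 5
print(underrepresented(labels))  # ['remote'] — flag for targeted data collection
```

Flagged groups can then drive targeted data collection or special handling, rather than being silently averaged away by the majority classes.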

Bringing it all together

The questions listed under each principle above summarize what needs to be asked to create a responsible AI system.

WITH GREAT POWER COMES GREAT RESPONSIBILITY.

