Responsible AI: A Framework for Making Ethical AI Systems

Artificial Intelligence (AI) is set to change the way the world works. It is the engine that fuels digital transformation. Organizations and societies are optimistic about AI, its potential, and its ability to transform the world we live in.

Yet, just as splitting the atom can be harnessed to light a city or to destroy one, AI’s promise is marred by its perils.

It is time to harness AI technologies for good and to create a framework that makes them responsible. This article elucidates a practical framework for responsible AI based on the principles laid down by Microsoft.

Why does AI need to be responsible?

Let me share a story that illustrates the need for responsible AI. In 2020, the world was caught in the stranglehold of a global pandemic, and high school exams were canceled. Students still had to be graded, and the International Baccalaureate Organization (IBO) decided to be a little experimental. It entrusted an AI to determine students’ final grades based on current and historical data. As one can imagine, the outcome of this experiment was not palatable. Thousands of students received grades that deviated substantially from what they expected, and it opened a Pandora’s box.

Can AI be trusted with such important decisions?

This also raised questions on the responsible use of AI:

  1. How does one explain the model outcomes?
  2. Who is accountable for the results churned out by AI?
  3. How did the model ensure that the output is fair and holistic?
  4. Is the AI model reliable such that its results are consistent?
  5. How does one ensure that the laws of privacy were followed?

AI systems are here to stay and will become increasingly ubiquitous in our daily lives. They need to be managed. Hence, an important question to ask is the following:

How do we ensure that AI systems are responsible?

This blog attempts to think through the following questions:

  1. What does a responsible AI system mean?
  2. What are its characteristics?
  3. What is the framework for implementing a responsible AI?

Responsible AI is an evolving topic; the answers to these questions are subjective and skirt philosophy. Hence, the modus operandi I will employ is a question-based framework.

The Framework

  1. Transparency: AI systems should be understandable.
  2. Accountability: AI systems should have algorithmic accountability.
  3. Fairness: AI systems should treat people fairly.
  4. Reliability and Safety: AI systems should perform reliably and safely.
  5. Privacy and Security: AI systems should be secure and respect privacy.
  6. Inclusiveness: AI systems should empower everyone and engage people.

Let us take a deeper look at each of these tenets.

1. Transparency

How can the output churned out by an AI system be explained?

Is the explanation logical and simple enough for a general audience to understand?

Does the explanation align with other principles of a responsible AI system?
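To make the first question concrete, here is a minimal sketch of one way an output can be explained: for a simple linear model, each feature’s contribution to a prediction is just its weight times its value. The feature names and weights below are purely illustrative assumptions, not drawn from any real grading system.

```python
def explain_prediction(weights, feature_values, feature_names):
    """Return a linear model's score and per-feature contributions."""
    # Each contribution is weight * value; rounding keeps the output readable.
    contributions = {
        name: round(w * x, 3)
        for name, w, x in zip(feature_names, weights, feature_values)
    }
    score = sum(contributions.values())
    return score, contributions

# Hypothetical grading model: coursework and mock exams raise the score,
# a "school adjustment" factor lowers it.
score, parts = explain_prediction(
    weights=[0.6, 0.3, -0.2],
    feature_values=[0.8, 0.5, 1.0],
    feature_names=["coursework", "mock_exam", "school_adjustment"],
)
print(score)   # total score
print(parts)   # contribution of each feature
```

An explanation of this shape ("your mock exam added 0.15; the school adjustment removed 0.2") is the kind a general audience can follow; for complex models, surrogate techniques aim to produce something similar.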

2. Accountability

What is the potential impact of the outcome of an AI system?

Who is governing the accountability of the AI system developed?

How is the accountability measured and deployed?

3. Fairness

Just as a human’s concept of fairness is a manifestation of nature and nurture, fairness in AI depends on the building blocks used to create it. That building block is data. One needs to look at fairness in the data used to train the AI system and ensure that it is as free of bias as possible. Typically, biases creep into data when sensitive features are used to train the model. These sensitive features include, but are not limited to, data points directly or indirectly linked with attributes like race, gender, social status, and age.

Whenever an AI system is trained, processes should be in place to systematically examine data that is prone to bias. The critical questions to ask for a fair AI system are:

Are the attributes used to train the model prone to data biases?

Can the data that is creating bias be eliminated?

Can the data collection methods be investigated further to reduce bias?
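As a hedged illustration of what such an examination could look like, the sketch below computes one common fairness measure, the demographic parity difference: the gap in favourable-outcome rates between groups defined by a sensitive attribute. The groups and outcomes are invented for the example.

```python
def positive_rate(outcomes):
    """Share of favourable outcomes (1s) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in favourable-outcome rate across groups (0 = parity)."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# 1 = favourable outcome (e.g. a passing grade), grouped by a
# hypothetical sensitive attribute such as school region.
outcomes = {
    "group_a": [1, 1, 1, 0, 1],   # 80% favourable
    "group_b": [1, 0, 0, 0, 1],   # 40% favourable
}
print(demographic_parity_difference(outcomes))  # 0.4 — a gap worth investigating
```

A large gap does not by itself prove unfairness, but it is exactly the kind of signal that should trigger the questions above about eliminating biased data or revisiting collection methods.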

4. Reliability and Safety

What do reliability and safety mean in an AI system?

Just as a reliable human being answers complex questions rationally, reliability in an AI system means that it behaves rationally in most circumstances. It is essential to verify the results produced by an AI system and to understand why it produces them. The AI system also needs to ensure that outlier cases are catered to.

An AI system’s safety ensures that its output does not negatively impact any process that affects humans.

How does one measure reliability and safety in an AI system?

The measure of reliability is consistency. A metric capturing how consistently an AI model performs in both standard and outlier cases is a good indicator of its reliability.

Safety is another vital aspect of a responsible AI system. A prudent measurement of safety can be the following:

The quantifiable or perceived impact on stakeholders if the AI system deviates from what it is supposed to do.

Critical questions to ask when it comes to reliability and safety are:

Are the model performance metrics consistent in multiple scenarios, especially in outlier cases?

What will be the impact on the stakeholders if the AI system doesn’t behave as expected?

Is the model output repeatable and reproducible?
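One possible way to operationalize the consistency question is sketched below: compute the same performance metric on standard and outlier cases separately and flag a large gap between the two. The data, labels, and the standard/outlier split are illustrative assumptions.

```python
def accuracy(predictions, labels):
    """Fraction of predictions matching the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def consistency_gap(preds, labels, is_outlier):
    """Accuracy on standard vs. outlier cases, and the gap between them."""
    standard = [(p, y) for p, y, o in zip(preds, labels, is_outlier) if not o]
    outliers = [(p, y) for p, y, o in zip(preds, labels, is_outlier) if o]
    acc_std = accuracy(*zip(*standard))
    acc_out = accuracy(*zip(*outliers))
    return acc_std, acc_out, acc_std - acc_out

# Invented test cases: the model is perfect on standard cases
# but struggles on outliers — a reliability red flag.
preds      = [1, 0, 1, 1, 0, 1]
labels     = [1, 0, 1, 0, 1, 1]
is_outlier = [False, False, False, True, True, True]

acc_std, acc_out, gap = consistency_gap(preds, labels, is_outlier)
print(acc_std, acc_out, gap)
```

Running the same check across repeated training runs also speaks to the repeatability question: a reliable system should produce similar metrics each time.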

5. Privacy and Security

Private data is any data that can identify an individual and/or their whereabouts, activities, and interests. Such data is generally subject to strict privacy and compliance laws, e.g., the GDPR in Europe. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data, and must ensure that consumers have appropriate controls over how their data is used.

Security is another crucial aspect of AI systems. Hackers are on the prowl, and the data used by AI systems is a treasure trove for them. The Cambridge Analytica scandal and the security flaw that exposed the accounts of 50 million Facebook users are well known. They are just examples of how security breaches can damage trust and reputation and compromise user data.

When such data is used to create AI models, the following questions need to be answered:

Is the data used to train the AI model acquired legally and with transparency?

Will the outcomes delivered by the AI system compromise the privacy of an individual or groups of individuals?

Is the AI system using data securely?
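As one illustrative precaution (not a complete compliance solution), the sketch below pseudonymises direct identifiers with a salted hash before the data enters a training pipeline. The field names and salt are assumptions for the example; a real system needs proper key management and a legal review.

```python
import hashlib

SALT = b"replace-with-a-secret-salt"  # illustrative; keep real salts out of code

def pseudonymise(record, id_fields=("name", "email")):
    """Return a copy of the record with identifier fields replaced by hash tokens."""
    safe = dict(record)
    for field in id_fields:
        if field in safe:
            digest = hashlib.sha256(SALT + safe[field].encode()).hexdigest()
            safe[field] = digest[:16]   # stable token; the original is not stored
    return safe

student = {"name": "A. Student", "email": "a@example.com", "grade": 6}
print(pseudonymise(student))  # grade survives; name and email become tokens
```

Because the tokens are deterministic, records for the same person still link together for training, while the raw identifiers never reach the model.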

6. Inclusiveness

Is the AI system developed to ensure that it includes different categories of individuals or organizations in the given context?

Are there any categories of data that need special handling to ensure that they are included?

Does the experience that the AI system provides exclude any specific categories? If yes, is there anything that can be done about it?

Bringing it all together

Making a responsible AI system should be central for all organizations embarking on their AI journey.

The Peter Parker principle, a proverb popularized by Spider-Man, aptly summarizes the need for responsible AI: “With great power comes great responsibility.”



References

  1. What Happens When AI is Used to Set Grades?
  2. What happens when AI is used to set students’ grades?
  3. AI Fairness Isn’t Just an Ethical Issue
  4. Identify guiding principles for responsible AI
  5. The Future Computed
  6. Privacy International on AI
  7. Facebook Security Breach Exposes Accounts of 50 Million Users
  8. GDPR Info
  9. Facebook sued over Cambridge Analytica data scandal



Pradeep Menon

Creating impact through Technology | #CTO at #Microsoft| Data & AI Strategy | Cloud Computing | Design Thinking | Blogger | Public Speaker | Published Author