The Responsible AI: A Framework for making ethical AI systems

Pradeep Menon
8 min read · Feb 11, 2021

Artificial Intelligence (AI) is set to change the way the world works. It is the engine that fuels the digital transformations. Organizations and societies are optimistic about AI, its potential, and its ability to transform the world we live in.

Just as splitting the atom can be harnessed to light cities or to destroy them, AI’s promise is marred by its perils.

It is time to harness AI technologies for good and to create a framework that makes them responsible. This article elucidates a practical framework for responsible AI based on the principles laid down by Microsoft.

Why does AI need to be responsible?

A fundamental question needs to be answered:

Why does AI need to be responsible?

Let me share a story that emphasizes the need for responsible AI. In 2020, the world was in the grip of a global pandemic, and high school exams were canceled as a result. Students still had to be graded, and the International Baccalaureate Organization (IBO) decided to be a little more experimental. It entrusted an AI to determine students’ final grades based on current and historical data. As one can imagine, the outcome of this experiment was not palatable. Thousands of students received grades that deviated substantially from what they expected, and it opened a Pandora’s box.

Can AI be trusted with such important decisions?

It also raised questions about the responsible use of AI:

  1. How does one explain the model outcomes?
  2. Who is accountable for the results churned out by AI?
  3. How did the model ensure that the output is fair and holistic?
  4. Is the AI model reliable such that its results are consistent?
  5. How does one ensure that the laws of privacy were followed?

AI systems are here to stay. AI will become increasingly ubiquitous in our daily lives, and it needs to be managed. Hence, an important question to ask is the following:

How do we ensure that AI systems are responsible?

This blog attempts to think through the following questions:

  1. What does a responsible AI system mean?
  2. What are its characteristics?
  3. What is the framework for implementing a responsible AI?

Responsible AI is an evolving topic, and the answers to these questions are subjective and skirt philosophy. Hence, the modus operandi I will employ is a question-based framework.

The Framework

Microsoft has been championing the cause of responsible AI for a long time. It has created a framework that ensures AI is used responsibly across the organization and that its core principles are followed in every product Microsoft creates. The six principles in this framework are as follows:

  1. Transparency: AI systems should be understandable.
  2. Accountability: AI systems should have algorithmic accountability.
  3. Fairness: AI systems should treat people fairly.
  4. Reliability and Safety: AI systems should perform reliably and safely.
  5. Privacy and Security: AI systems should be secure and respect privacy.
  6. Inclusiveness: AI systems should empower everyone and engage people.

Let us take a deeper look at each of these tenets.

1. Transparency

Transparency is one of the foundational principles on which all other principles rely. AI systems are complex. They fuse the domains of mathematics, statistics, programming, and business together. Such a system can be challenging to explain to a general audience, and AI models that rely on neural networks are even more complex and harder to explain. However, it is pivotal for an AI system to demonstrate its decision-making process: one needs to understand the rationale behind its decisions. A few questions to ponder to ensure that an AI system is transparent are as follows:

How can the output churned out by an AI system be explained?

Is the explanation logical and simple for a general audience to understand?

Does the explanation align with other principles of a responsible AI system?
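One simple way to make a model’s output explainable is to report each feature’s contribution alongside the prediction. The sketch below does this for a linear scoring model, in the spirit of the grading example above; the feature names and weights are hypothetical, and real systems would use dedicated explainability tooling.

```python
# A minimal sketch of an explainable prediction: for a linear scoring
# model, each feature's contribution to the score can be reported
# alongside the prediction itself. Weights and features are hypothetical.

def explain_score(weights, features):
    """Return the score and a per-feature breakdown of contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Sort so the most influential features are listed first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

weights = {"coursework": 0.6, "mock_exam": 0.3, "attendance": 0.1}
student = {"coursework": 85, "mock_exam": 70, "attendance": 95}

score, ranked = explain_score(weights, student)
# ranked tells the student *why* they got this score, not just what it is.
```

For non-linear models, libraries built for this purpose (e.g., SHAP-style attribution) serve the same role: pairing every output with a human-readable rationale.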

2. Accountability

Another foundational principle for a responsible AI system is accountability. Where does the buck stop? Accountability implies that those who develop the AI system are accountable for its actions. Without accountability, a holistic framework for a responsible AI system can’t be created. Imagine that a self-driving car, directed by its algorithm, meets with an accident. Who is responsible for it? In a world where AI takes over more human tasks, accountability for the outcome is pivotal. A few questions to ponder to infuse accountability into an AI system are:

What is the potential impact of the outcome of an AI system?

Who is governing the accountability of the AI system developed?

How is accountability measured and enforced?

3. Fairness

When it comes to fairness, the fundamental question to ask is: what is fair? The answer has multiple layers of social and cultural wrappings. What is acceptable to me may not be acceptable from your perspective. It depends on the vantage point, and there is no universal yardstick to measure fairness. Fairness is a relative concept.

Just as a human’s concept of fairness is a manifestation of nature and nurture, fairness in AI depends on the building blocks used to create it. That building block is data. One needs to examine fairness in the data used to train the AI system and ensure that it is as free of bias as possible. Typically, biases tend to creep into data when sensitive features are used to train the AI model. These sensitive features include, but are not limited to, data points directly or indirectly linked with attributes like race, gender, social status, and age.

Whenever an AI system is trained, processes should be in place to systematically examine data that is prone to bias. The critical questions to ask for a fair AI system are:

Are the attributes used to train the model prone to data biases?

Can the data that creates bias be eliminated?

Can the data collection methods be investigated further to reduce bias?
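A first, concrete fairness check is to compare the rate of positive outcomes across groups defined by a sensitive attribute. The sketch below computes per-group selection rates and a demographic-parity gap; the records and attribute names are hypothetical, and a real audit would use a dedicated library such as Fairlearn and several complementary metrics.

```python
# A minimal sketch of a fairness check: compare positive-outcome rates
# across groups of a sensitive attribute. Records are hypothetical.

from collections import defaultdict

def selection_rates(records, sensitive_key, outcome_key):
    """Positive-outcome rate per group of the sensitive attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[sensitive_key]
        totals[group] += 1
        positives[group] += 1 if r[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

records = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": False},
    {"gender": "M", "approved": False},
]

rates = selection_rates(records, "gender", "approved")
parity_gap = max(rates.values()) - min(rates.values())  # 0 means parity
```

A large gap does not prove the model is unfair on its own, but it flags exactly the kind of sensitive-attribute skew that the questions above are meant to surface.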

4. Reliability and Safety

Any trustworthy system needs to be reliable and safe. AI systems are no different. Being reliable and safe is even more critical for AI systems, as they permeate the very fabric of human experience. However, a few questions need to be clarified:

What do reliability and safety mean in an AI system?

Just as a reliable human being can answer complex questions rationally, reliability in an AI system means that it behaves rationally in most circumstances. It is essential to verify the results produced by an AI system and understand why it produces them. The AI system also needs to ensure that outlier cases are catered to.

An AI system’s safety means that its output does not negatively impact any process that affects humans.

How does one measure reliability and safety in an AI system?

The measure of reliability is consistency. A metric of how consistent an AI model is in both standard and outlier cases is a good indicator of its reliability.

Safety is another vital aspect of a responsible AI system. A prudent measure of safety is the quantifiable or perceived impact on stakeholders if the AI system deviates from what it is supposed to do.

Critical questions to ask when it comes to reliability and safety are:

Are the model performance metrics consistent in multiple scenarios, especially in outlier cases?

What will be the impact on the stakeholders if the AI system doesn’t behave as expected?

Is the model output repeatable and reproducible?
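The consistency and reproducibility questions above can be turned into simple automated checks: evaluate the same model on a standard batch and an outlier batch, and rerun it with a fixed seed to confirm identical outputs. The threshold-based "model" below is a hypothetical stand-in for a real one.

```python
# A minimal sketch of reliability checks: accuracy on standard vs.
# outlier inputs, plus a reproducibility check with a fixed seed.
# The toy threshold model is hypothetical.

import random

def model(x, noise_rng=None):
    """Toy classifier: positive if x exceeds 0.5 (plus optional noise)."""
    jitter = noise_rng.uniform(-0.01, 0.01) if noise_rng else 0.0
    return x + jitter > 0.5

def accuracy(batch):
    return sum(model(x) == label for x, label in batch) / len(batch)

standard = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
outliers = [(1000.0, True), (-1000.0, False), (0.0, False)]

acc_standard = accuracy(standard)   # consistency on typical cases
acc_outliers = accuracy(outliers)   # consistency on extreme cases

# Reproducibility: identical seeds must yield identical outputs.
rng1, rng2 = random.Random(42), random.Random(42)
run1 = [model(x, rng1) for x in (0.49, 0.50, 0.51)]
run2 = [model(x, rng2) for x in (0.49, 0.50, 0.51)]
```

A large gap between `acc_standard` and `acc_outliers`, or a mismatch between `run1` and `run2`, is exactly the kind of inconsistency these questions are probing for.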

5. Privacy and Security

Privacy and security have become controversial topics. The recent backlash against WhatsApp’s changes to its privacy policy, and the unprecedented switch to alternatives like Signal and Telegram, reinforce that they need to be taken seriously. AI systems such as face recognition or voice tagging can be used to infringe on an individual’s privacy and threaten their security. How an individual’s online footprint is used to trace, deduce, and influence their preferences or perspective is a serious concern that needs to be addressed. How fake news or deepfakes influence public opinion also poses a threat to individual and societal safety. AI systems are increasingly misused in this domain. There is a pertinent need for a framework that protects an individual’s privacy and security.

Private data is any data that can identify an individual and/or their whereabouts, activities, and interests. Such data is generally subject to strict privacy and compliance laws, e.g., GDPR in Europe. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data, and that mandate appropriate controls for consumers to choose how their data is used.

Security is another crucial aspect of AI systems. Hackers are on the prowl, and the data used by AI systems is a treasure trove for them. The Cambridge Analytica scandal and the security flaw that exposed the data of 50 million Facebook users are well known. They are just examples of how security breaches can damage trust and reputation and compromise user data.

When such data is used to create AI models, the following questions need to be answered:

Was the data used to train the AI model acquired legally and with transparency?

Will the outcomes delivered by the AI system compromise the privacy of an individual or groups of individuals?

Is the AI system using data securely?
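One practical precaution behind these questions is to pseudonymize direct identifiers before data ever reaches the training pipeline. The sketch below replaces identifying fields with salted one-way hashes; the field names are hypothetical, and salted hashing is only one layer of protection, not by itself sufficient for compliance with laws such as GDPR.

```python
# A minimal sketch of pseudonymization: replace direct identifiers with
# salted one-way hashes before training data leaves the secure boundary.
# Field names are hypothetical; this is one layer, not full compliance.

import hashlib

SALT = b"replace-with-a-secret-salt"  # assumed secret, stored separately

def pseudonymize(record, id_fields):
    """Return a copy of the record with identifying fields hashed."""
    cleaned = dict(record)
    for field in id_fields:
        digest = hashlib.sha256(SALT + str(record[field]).encode()).hexdigest()
        cleaned[field] = digest[:16]  # stable token; original is not kept
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com", "score": 0.87}
cleaned = pseudonymize(record, ["name", "email"])
# Non-identifying fields (e.g., "score") pass through untouched.
```

Because the same input always maps to the same token, records can still be joined for training while the raw identifiers stay out of the model’s reach.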

6. Inclusiveness

The world around us is diverse, with people from all walks of life. People with disabilities, not-for-profit organizations, government agencies, and many others need AI systems as much as any other individuals or enterprises. AI systems should be inclusive and attuned to the needs of this diverse ecosystem. Inclusiveness is often a tough question to ask, with little apparent return on investment, but it is an important one. When designing AI systems for inclusiveness, the following questions need to be answered:

Is the AI system developed to ensure that it includes the different categories of individuals or organizations in its specific context?

Are there any categories of data that need to be handled exceptionally to ensure that they are included?

Does the experience that the AI system provides exclude any specific categories? If yes, is there anything that can be done about it?

Bringing it all together

The questions outlined in the sections above form the backbone of a responsible AI system. Making a responsible AI system should be central to every organization embarking on its AI journey.

The Peter Parker principle, a proverb popularized by Spider-Man, aptly summarizes the need for responsible AI:

“With great power comes great responsibility.”

References
  1. Meet the Secret Algorithm That’s Keeping Students Out of College
  2. What Happens When AI is Used to Set Grades?
  3. What happens when AI is used to set students’ grades?
  4. AI Fairness Isn’t Just an Ethical Issue
  5. Identify guiding principles for responsible AI
  6. The Future Computed
  7. Privacy International on AI
  8. Facebook Security Breach Exposes Accounts of 50 Million Users
  9. GDPR Info
  10. Facebook sued over Cambridge Analytica data scandal




Creating impact through Technology | #CTO at #Microsoft| Data & AI Strategy | Cloud Computing | Design Thinking | Blogger | Public Speaker | Published Author