Morals, Machines and Practical Mitigations

Bala Madhusoodhanan - Jun 19 '23 - Dev Community

Intro:
The current wave of Artificial Intelligence aims to enable computers to think and make decisions like humans, based on the observations we provide as input to the decision-making process. Just as humans have rules and values that guide their behavior, AI also needs rules to make sure it does things in a fair and good way. We call these rules 'morals' or 'ethics'. We want AI to help people and make the world a better place, so we need to make sure it follows these rules and treats everyone fairly and equally.

Moral Machine in action:
To emphasize the importance of factoring the moral / ethical element into augmented human decision making, have a play with the Moral Machine platform. The platform is designed to explore the moral dilemmas faced by autonomous vehicles and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behavior.


Some of the key research headlines, based on responses collected across 150+ countries, are as follows:

  1. Respondents morally believe the vehicles should operate under utilitarian principles, yet prefer to buy vehicles that prioritize their own lives as passengers.
  2. The risk of overreaction might slow adoption, so the challenge is striking the right balance.
  3. Lack of transparency into the underlying decision-making processes can make it difficult for people to trust autonomous systems.

It's clear that to automate decision making we need collective consent. By bringing together diverse perspectives, we can ensure that AI technologies are developed and deployed in a responsible and inclusive manner.

Key Principles while designing AI systems:


Core Pillars:


Approach:
Trustworthy AI is a socio-technological challenge, so it needs to be approached holistically.


Governance Framework:


Example: Automating a Hiring System

| Principle | Project Risks | Mitigations and Requirements |
| --- | --- | --- |
| Fairness | 1) Historic data set may be biased towards privileged groups and against protected classes 2) Disparate impacts and discriminatory outcomes | 1) Design-thinking scenarios and training provided to the development team (e.g. on equity vs equality) 2) Supplement training data with synthetic data 3) Iteratively monitor outcomes and KPIs of models with MRM (model risk management) tooling (see the fairness-metric sketch below) |
| Transparency | 1) Stakeholders do not have access to key information throughout the model lifecycle 2) Business users are not aware of what the AI model is optimised for | 1) Facts recorded and accessible throughout the model lifecycle (factsheets and governance tooling) 2) Audit framework and standards applied, with information on their use made accessible |
| Explainability | 1) Model owners cannot explain outcomes; inferences are not monitored and the system is a black box 2) Users do not understand why hiring decisions were made 3) Candidates are not provided appropriate feedback | 1) Ensure the development team knows the relevant tools and standards, e.g. explainability toolkits and SHAP values (see the SHAP sketch below) 2) Mandatory process: applicants are notified of the model's use and provided feedback 3) Incorporate functionality into the user interface to surface explanations |
| Privacy | 1) Breaking privacy policies and regulation when training the model 2) Damage to external reputation and internal trust from employees | 1) Data must be effectively masked, obfuscated or pseudonymised in training and when accessed (see the pseudonymisation sketch below) 2) Permission is requested from data owners for use in model training |
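To make the Fairness mitigations concrete, here is a minimal sketch of the kind of outcome KPI that could be monitored iteratively. The column names, the toy data and the four-fifths (0.8) threshold are illustrative assumptions rather than details from the session; MRM tooling would normally wrap a check like this.

```python
# Minimal sketch: monitoring a fairness KPI (disparate impact ratio) on hiring outcomes.
# Column names and the 0.8 "four-fifths rule" threshold are illustrative assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           privileged: str, unprivileged: str) -> float:
    """Ratio of favourable-outcome rates: unprivileged group vs privileged group."""
    rate_priv = df.loc[df[group_col] == privileged, outcome_col].mean()
    rate_unpriv = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    return rate_unpriv / rate_priv

# Synthetic outcomes (1 = shortlisted, 0 = rejected)
outcomes = pd.DataFrame({
    "gender":      ["M", "M", "M", "F", "F", "F", "F", "M"],
    "shortlisted": [ 1,   1,   0,   1,   0,   0,   1,   1 ],
})

ratio = disparate_impact_ratio(outcomes, "gender", "shortlisted",
                               privileged="M", unprivileged="F")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common red-flag threshold; investigate before deployment
    print("Potential adverse impact detected")
```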
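For the Explainability row, the sketch below shows one way SHAP values could surface per-feature contributions for an individual hiring decision. The model, features and data are toy placeholders, and it assumes the shap and scikit-learn libraries are installed; it is an illustration, not the framework's prescribed tooling.

```python
# Hedged sketch: explaining individual predictions with SHAP values.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy features standing in for historic hiring data (e.g. experience, test score)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "hired" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# shap.Explainer dispatches to a tree explainer for tree ensembles and returns
# per-feature contributions (SHAP values) for each prediction
explainer = shap.Explainer(model)
explanation = explainer(X[:5])

# Contributions for the first candidate: one value per feature (and per class)
print(explanation.values[0])
```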
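For the Privacy row, a simple way to pseudonymise direct identifiers before training is a keyed hash. The field names and the salted-HMAC approach below are assumptions for illustration; a real deployment would follow its own data-protection guidance and key management.

```python
# Hedged sketch: pseudonymising applicant identifiers before model training.
import hashlib
import hmac

SECRET_SALT = b"rotate-and-store-this-in-a-vault"  # assumption: a managed secret

def pseudonymise(identifier: str) -> str:
    """Deterministically map a direct identifier to an opaque token."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

applicant = {"name": "Jane Doe", "email": "jane@example.com", "score": 87}

training_record = {
    "applicant_id": pseudonymise(applicant["email"]),  # opaque token replaces email
    "score": applicant["score"],                        # keep only model features
}
print(training_record)
```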

Conclusion:
The content above is based on a session at the London AI Summit on why large enterprises need an AI ethics framework. The key takeaway is that stakeholders from various backgrounds, including experts, policymakers, industry leaders, and the public, need to come together and actively participate in shaping the guidelines and principles that govern AI development.
