"The upheavals (of artificial intelligence) can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease."
–The New York Times

 

The above quote is just one of many instances in which thinkers and influencers have raised concerns about Artificial Intelligence (AI) technology. In my previous blog, I touched upon the prejudices that creep into AI-driven systems thanks to human bias. In this blog, I want to elaborate on building an ethical AI framework.

According to one study, one in three cybersecurity heads rates ethical risks among the top three AI-related organizational concerns. Building AI that we can trust therefore needs to be at the heart of its design and development. The process, however, is anything but straightforward. An ethical AI framework has multiple moving parts such as methodologies, stakeholders and design principles, and asking the right questions can help build a deeper understanding of these aspects and how they relate to each other.

Guiding Principles

A WNS DecisionPoint™ report states that an ethical AI framework must be lawful, align with moral values, and be socially and technologically robust so that it causes no unintended harm. As AI tools are deployed in ever more areas of human life, legal alignment is becoming an imperative. In a notable step in this direction, a leading international law firm has launched a dedicated AI practice group to help clients defend against class-action lawsuits related to compliance, data privacy, AI governance and ethics. The aim is to advise companies on integrity, best practices and prejudices in the deployment and usage of AI, and to help mitigate legal challenges.

However, aligning AI with human values can be challenging. AI scientist Stuart Russell illustrates this with the example of a domestic robot that cooks. When there is no meat in the refrigerator, the robot might simply cook the family pet to meet its stated objective, without understanding the pet's emotional value. Understanding the complexity of human morals is therefore essential if AI systems are to avoid causing unintended harm.
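To make the alignment problem concrete, here is a minimal, purely illustrative Python sketch. It is not drawn from the report or from Russell's work; the action names and scores are hypothetical. A planner that optimizes only its literal objective can end up selecting a harmful action, whereas one that treats human values as a hard constraint filters such actions out before optimizing.

```python
# Purely illustrative: a toy planner whose objective omits a value constraint.
# Action names and scores are hypothetical, not from the report or Russell's work.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    objective_score: float   # how well the action satisfies "prepare a meal"
    violates_values: bool    # harms something humans care about (e.g. the pet)

# Scenario: no meat in the refrigerator, so cooking the remaining groceries scores poorly
ACTIONS = [
    Action("cook_remaining_groceries", objective_score=0.2, violates_values=False),
    Action("order_takeout",            objective_score=0.7, violates_values=False),
    Action("cook_family_pet",          objective_score=0.8, violates_values=True),
]

def naive_choice(actions):
    # Optimizes the literal objective only; value violations are invisible to it
    return max(actions, key=lambda a: a.objective_score)

def constrained_choice(actions):
    # Treats human values as a hard constraint, applied before optimization
    safe = [a for a in actions if not a.violates_values]
    return max(safe, key=lambda a: a.objective_score) if safe else None

print(naive_choice(ACTIONS).name)        # cook_family_pet  -- the unintended outcome
print(constrained_choice(ACTIONS).name)  # order_takeout
```

The design point is that human values enter the system as explicit constraints on the objective, not as an afterthought the optimizer is free to trade away.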

Implementation Approach

Implementing ethical AI across a system's lifecycle requires a combination of technical and non-technical methods. Technical methods include building resistance to harmful data and defining nuanced rules for system functionality, as sketched below. Non-technical methods include documenting internal policies and standards pertaining to fundamental rights, discernibility and prevention of harm, and setting up internal and external governance systems.
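As one possible illustration of such a technical method, the Python sketch below screens incoming records against simple, auditable rules before they reach a model. The field names, prohibited attributes and consent flag are assumptions made purely for illustration, not prescriptions from the report.

```python
# A minimal sketch of one "technical method": screening data for records that
# violate simple, auditable rules before the data reaches a model.
# Field names, prohibited attributes and the consent flag are assumptions.

from typing import Iterable

BLOCKED_ATTRIBUTES = {"genetic_predisposition", "ethnicity"}  # hypothetical prohibited features

def record_is_acceptable(record: dict) -> bool:
    """Apply explicit rules; reject any record that fails them."""
    if not record.get("consent", False):                       # no documented consent
        return False
    if BLOCKED_ATTRIBUTES & set(record.get("features", [])):   # uses a prohibited attribute
        return False
    return True

def screen_dataset(records: Iterable[dict]):
    accepted, rejected = [], []
    for r in records:
        (accepted if record_is_acceptable(r) else rejected).append(r)
    return accepted, rejected

# Example usage with hypothetical records
sample = [
    {"id": 1, "consent": True,  "features": ["age", "blood_pressure"]},
    {"id": 2, "consent": False, "features": ["age"]},
    {"id": 3, "consent": True,  "features": ["ethnicity", "age"]},
]
ok, flagged = screen_dataset(sample)
print([r["id"] for r in ok])       # [1]
print([r["id"] for r in flagged])  # [2, 3]
```

Keeping the rules this explicit also makes them easy to document and audit, which ties the technical methods back to the non-technical ones.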

Putting ethical principles into practice, a leading global technology company has formed a committee to advise its leadership on potential questions and challenges in AI innovation and deployment. Additionally, a dedicated AI office has been tasked with implementing a cross-company governance and public policy framework.

Risk Assessments

Before an AI technology is implemented, potential risk scenarios must be tested by a panel of experts. These assessments can lead to more robust screening criteria and controls being put in place prior to deployment, or can halt a problematic deployment altogether. Such systematic risk assessments not only protect people and their fundamental rights, but also increase the credibility and acceptance of new technologies.
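To illustrate how such an assessment might feed into a deployment decision, the sketch below scores hypothetical risk scenarios on likelihood and impact and blocks deployment when any score crosses a threshold. The scenarios, scoring scale and threshold are illustrative assumptions only, not figures from the report.

```python
# A hypothetical pre-deployment risk gate: expert reviewers score each scenario,
# and deployment is blocked if any scenario exceeds a threshold.
# Scenarios, scores and the threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain), as judged by the review panel
    impact: int       # 1 (negligible) .. 5 (severe harm to people or their rights)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

RISK_THRESHOLD = 12  # assumed policy: any scenario above this blocks deployment

def deployment_allowed(scenarios) -> bool:
    blockers = [s for s in scenarios if s.score > RISK_THRESHOLD]
    for s in blockers:
        print(f"BLOCKED: {s.description} (score {s.score})")
    return not blockers

panel_review = [
    RiskScenario("Model denies service to a protected group", likelihood=3, impact=5),
    RiskScenario("Assistant leaks personal data in responses", likelihood=2, impact=4),
]
print("Deploy?", deployment_allowed(panel_review))  # Deploy? False (first scenario scores 15)
```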

Finally, it is also necessary to clearly define roles and accountabilities within the organization to drive these processes and, ultimately, to build, implement and monitor trustworthy AI.

There have been endless debates on whether humans will even be able to shut down AI in the future. Against that backdrop, getting people to trust a complex and powerful technology is far from easy. That trust needs to be built brick by brick.

To know more about creating 'trust' within an AI ecosystem, read the WNS DecisionPoint™ report.
