AI and Ethics

 

 

Introduction

 

Artificial intelligence (AI; for a primer, see our five-minute read: What is artificial intelligence?) is a heavily fantasized field of research. Its capabilities are often exaggerated to the point where people and companies compare them to those of the human brain. While we are still far from that stage of "general AI", it is undeniable that recent key achievements in machine learning have made real commercial applications possible.

With this enthusiasm and the wave of experiments it has triggered around the world, a crucial question needs to be asked: what are the main ethical risks* posed by AI? This article covers the opportunities AI brings, the main pitfalls machine learning systems can run into, and the solutions we recommend and implement today.

* By ethics, we mean conformity to the moral principles and values shared by the members of a society. A set of such values and principles was agreed upon in the Universal Declaration of Human Rights, adopted by the United Nations in Paris in 1948, though it is contested today by a number of states and entities. We set out our own ethical commitments at the end of this article.

I. Opportunities

 

AI offers incredible opportunities for players large and small around the world. Here are just a few sectors where AI can make a huge difference.

AI can help radiologists detect cancer more safely

I.1 For the agricultural sector

AI can be used in agriculture to identify with great precision where farmers need to water their crops, and where they need to intervene to protect them from an environmental threat. This is achieved using deep learning models trained on satellite images and images taken by tractor-mounted cameras.

 

I.2 For the transport sector

Car accidents claim an estimated 1.25 million lives worldwide every year. They are also extremely costly, causing hundreds of billions of dollars in damage annually in the United States alone. What's more, the National Highway Traffic Safety Administration (NHTSA) estimates that human error is the critical factor in 94% of serious crashes.

Reducing human input can lead to fewer deaths and injuries, and less economic damage, if automated driving systems are up to the task of taking over. Autonomous vehicle technology relies heavily on Artificial Intelligence models combined with rule engines (deterministic computing).
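To make that division of labor concrete, here is a minimal sketch, in Python, of how a deterministic rule layer can sit on top of a learned model and override it. The function names, sensor fields and thresholds are all hypothetical illustrations, not a real driving stack.

```python
# Minimal sketch (hypothetical interfaces): a deterministic safety layer
# that always takes precedence over a learned driving model's suggestion.

def learned_policy(sensor_frame: dict) -> str:
    """Stand-in for a neural network's suggested action."""
    # In a real stack this would be a deep model's inference call.
    return sensor_frame.get("model_suggestion", "cruise")

def rule_engine(sensor_frame: dict, suggestion: str) -> str:
    """Deterministic rules override the model when safety is at stake."""
    if sensor_frame["obstacle_distance_m"] < 5.0:
        return "emergency_brake"          # hard safety rule
    if sensor_frame["speed_kmh"] > sensor_frame["speed_limit_kmh"]:
        return "decelerate"               # legal constraint
    return suggestion                     # otherwise trust the model

frame = {"obstacle_distance_m": 3.2, "speed_kmh": 48,
         "speed_limit_kmh": 50, "model_suggestion": "cruise"}
print(rule_engine(frame, learned_policy(frame)))  # -> "emergency_brake"
```

The design choice matters: safety-critical constraints stay in auditable, deterministic code, while the learned model only proposes actions within those bounds.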

I.3 For the healthcare sector

AI has proved extremely useful in helping doctors make the right decisions for their patients. Recently, an artificial intelligence developed by MIT proved better at spotting breast cancer in mammograms than expert radiologists.

The AI outperformed the specialists by detecting cancers that radiologists missed in the images, while ignoring features they falsely reported as possible tumors.

 

I.4 For internal security

Police and secret services can benefit from AI by saving valuable time in identifying known criminal faces or vehicle license plates in CCTV footage and photos published online.

AI can also improve weapons detection at airports during baggage scanning by automatically flagging suspicious luggage. This could prove very useful in preventing attacks.

 

I.5 For financial services

The banking industry has undergone a large-scale transformation since the banking crisis of 2008, with a focus on compliance. AI can help compliance officers meet the many challenges they face: automatically extracting and synthesizing huge amounts of information to verify that transactions and new account openings comply with regulatory requirements.

AI can also process internal records to ensure that no protocol violations have occurred, with the ultimate aim of preventing new risks to the bank and to the financial system as a whole.

 

II. Risks

 

As is often the case with powerful new technologies, artificial intelligence entails risks at various levels. After defining the notion of bias, we'll describe these risks.

II.1 Bias

Bias is generally defined as an inclination or prejudice for or against a person, group or thing.

  • It may be conscious: for example, choosing a banana for its yellow color. We may have a conscious bias toward yellow bananas because experience or scientific knowledge tells us that bananas are ripe when yellow.
  • It may be unconscious: associating negative feelings with a segment of the population even while being openly inclusive (as demonstrated by Harvard's famous Implicit Association Test).

"Two quite opposite qualities equally bias our minds - habit and novelty." - Jean de la Bruyère (Les Caractères ou les Moeurs de ce siècle, 1688)

Bias is a widespread reality. As Jean de La Bruyère noted, bias can be forged by habit, by repeating the same choices and refusing to change, while novelty can trigger a positive or negative bias simply by being new. Cognitive bias remains an extremely active field of research, with much still to reveal about inherent human biases.

Statistics and machine learning have their own mathematical definition of bias. Bias and variance are the main sources of error between the predictions of a machine learning model and reality:

  • The bias error is due to oversimplified models that fail to capture the complexity of reality.
  • The variance error is due to overly complex models that fit the training data perfectly but fail to generalize (the toy sketch below illustrates both failure modes).
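A toy experiment makes the two failure modes tangible. The sketch below (our own illustration, using NumPy's polyfit) fits noisy samples of a sine curve with a polynomial that is too simple and one that is too flexible, then compares training and test errors:

```python
# Toy bias/variance demonstration: fit noisy samples of sin(3x) with a
# degree-1 polynomial (too simple -> high bias) and a degree-12 polynomial
# (too flexible -> high variance), then compare train and test errors.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)  # noise-free ground truth

for degree in (1, 12):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The high-bias model shows similar, large errors on both sets; the high-variance model nearly memorizes the training points, and the gap between its training and test errors is the variance.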

 

II.2 Input bias becomes output bias

Education, employment, health, access to credit: even today, these important components of our society are subject to biased behaviors and decisions.

In the field of law enforcement, a number of studies have demonstrated the biases that exist:

  • A widely cited 2011 study showed that judges on Israeli parole boards were more lenient after lunch than at other times of the day.
  • The Open Justice initiative in California demonstrated the LAPD's negative bias toward Black people: 28% of those arrested by the police are Black, even though Black residents represent only 9% of the local population.

Judges are more forgiving on average after lunchtime

If past data is treated as fundamental truth, AI models trained on that data will reproduce existing social biases. This has already been verified on several occasions.

Key problem no. 1:

Whenever an AI is developed with human-related data, biases are necessarily present in the training data.

Our recommendation:

Recognizing existing biases is the first step toward a goal of zero bias in the selection or development of an AI system. For an AI system to be fair, a bias analysis must be carried out on historical data so that gender or ethnic biases can be balanced out before training.

Given that our societies' biases are undergoing long-term change (for example, the elimination of the gender pay gap), it is also essential that AI systems have a continuous bias mitigation function.
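As a concrete illustration, here is a minimal sketch of such a pre-training bias analysis, with hypothetical column names. The second step applies the "reweighing" idea of Kamiran & Calders (2012), one simple mitigation technique among many:

```python
# Minimal pre-training bias analysis on historical decisions
# (column names are hypothetical).
import pandas as pd

history = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [ 0,   0,   1,   0,   1,   1,   0,   1 ],
})

# Step 1: surface the disparity in past decisions.
print(history.groupby("gender")["approved"].mean())
# F: 0.25 vs M: 0.75 -> a gap worth investigating before training.

# Step 2: reweighing -- weight each record by
# P(group) * P(label) / P(group, label), so that the protected attribute
# is statistically independent of the label in the weighted sample.
p_g = history["gender"].value_counts(normalize=True)
p_y = history["approved"].value_counts(normalize=True)
p_gy = history.groupby(["gender", "approved"]).size() / len(history)
history["weight"] = history.apply(
    lambda r: p_g[r["gender"]] * p_y[r["approved"]]
              / p_gy[(r["gender"], r["approved"])],
    axis=1,
)
print(history)
```

With these weights, each group contributes equally to each outcome during training, so the model cannot simply replay the historical approval gap.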

 

II.3 An intrusive AI

Today's AI systems are based on calculation, a sum of binary operations performed by our computers: they lack emotions and are incapable of genuine empathy or of truly understanding our emotions. They can only simulate and reproduce the patterns they have learned (with varying degrees of randomness).

These limitations make systems claiming to recognize emotions, mental health, personality or other inner states inherently flawed. A call to ban these technologies has recently emerged.

However, automatically identifying the content that triggers a given reaction (e.g. a hateful emoji on Facebook) in a given segment of the population (e.g. suburban single mothers in Ohio), and then using this information to create similar content aimed at triggering those emotions again, already poses a huge problem.

US citizens affected by the Cambridge Analytica scandal, by state (source: Business Insider)

The Cambridge Analytica scandal revealed how targeted advertising attempted (with some success) to influence voters on a massive scale. In the case of the 2016 US presidential campaign, the company aggregated over 80 million Facebook profiles and processed a huge amount of personal information to detect trends among voters and tailor messages to provoke the desired emotions and reactions.

Key problem no. 2:

Accumulation and aggregation of highly personal data (political opinion, family situation, sexual orientation, etc.)

Our recommendation:

Regulators should strengthen data protection rules. Companies must be careful about the data they process, and use as little data as possible to carry out their processes.

On May 25, 2018, the European Union finally gave European citizens' data a regulatory framework with the General Data Protection Regulation (GDPR), and the State of California recently passed its own California Consumer Privacy Act (CCPA).

These frameworks are essential to encourage good practice in terms of AI development within companies and to help prevent another Cambridge Analytica from happening.
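To show what data minimization can look like in practice, here is a minimal sketch with hypothetical field names: it keeps only the fields a process actually needs and masks direct identifiers (emails, phone numbers) in free text before any model sees the record.

```python
# A minimal data-minimization sketch (field names are hypothetical):
# keep only the columns a given process actually needs, then mask
# direct identifiers in free text before any model ever sees the data.
import re

REQUIRED_FIELDS = {"transaction_id", "amount", "timestamp", "notes"}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(record: dict) -> dict:
    """Drop fields the process does not need, then mask PII in free text."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if "notes" in kept:
        kept["notes"] = PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", kept["notes"]))
    return kept

raw = {"transaction_id": 42, "amount": 129.9, "timestamp": "2019-05-02",
       "customer_name": "Jane Doe",              # not needed -> dropped
       "notes": "Call +33 6 12 34 56 78 or mail jane.doe@example.com"}
print(minimize(raw))
```

Real anonymization pipelines go much further (named-entity recognition, k-anonymity checks), but the principle is the same: the model never receives data it does not need.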

II.4 AI for the common good?

The third ethical problem with AI relates to the very application it is intended to improve.

Elon Musk and other prominent AI figures have called for a ban on autonomous killer robots. The initiative is more than commendable, but it is important to note that autonomous killer robots are unfortunately just one more step in the development of life-destroying machines that already exist and are in use today.

Killer robots illustrate the AI-driven augmentation of a product or purpose that is already unethical (killing one's fellow man).

The "Metalhead" killer robot (Black Mirror, Season 4, Episode 5)

The same applies to mass surveillance or to the genetic selection and modification of living animal cells. While artificial intelligence can help perform these actions, the resulting models are unethical by design, because the task they automate is already unethical when performed by a human.

To be perfectly clear: this argument does not mean that it is permissible to develop killer robots, quite the contrary. It shows that AI models can never be ethical if the underlying goal itself is unethical.

Key problem no. 3:

Wanting to "augment" unethical approaches (e.g. genetic selection) or unethical products (e.g. lethal weapons) with AI.

Our recommendation:

The development of AI on a large scale is an ideal opportunity to reconsider some of our activities, particularly those that are questionable in terms of our set of values and moral principles. AI investment and development should be devoted to activities that create sustainable value for the human community, and should in no way cause damage or destruction.

 

 

III. Our commitment

 

As we saw in the first section, AI can be extremely valuable to society and to workers, legislators, bankers, teachers and many other professionals. However, it is important to remain highly vigilant to ensure that AI is ethical from the outset.

As an AI company, we are committed to applying these key precautionary measures when introducing AI into our customers' existing processes:

  • We perform a bias analysis: we run descriptive statistics on demographics to see how past decisions varied with attributes such as gender or ethnicity. If the study reveals an existing bias, we correct the data from which the AI would otherwise learn past trends. We also recommend that our customers be proactive and launch an internal communications campaign to highlight past biases and promote better human decisions.
  • We keep only the data strictly relevant to the given process: we determine carefully with the company which data is mandatory. If we need to process unstructured data such as images, videos or text, we anonymize it as much as possible. In this way we ensure that our processing complies with the GDPR, and we also prevent our models from adopting undesirable biases when a supposedly neutral variable (e.g. gender) actually carries a positive or negative weight.
  • We warn our customers about the risks of total automation: machine learning (which powers the vast majority of AI systems) is based on probabilities. You can attach a confidence score to a prediction, but it will only ever be a statistic based on past observations. Think of it like a weather forecast: it can be extremely accurate and extremely useful, but it is still a forecast, with a (low) probability of failure. For decisions as critical as lending money or granting access to property, education or health care, human validation is mandatory (a minimal sketch of this routing follows the list).
  • Before working on a project and deploying our solution, we ask ourselves and the customer: will the system serve a good purpose? Is there a chance of a very negative outcome? We carefully weigh the pros and cons and make sure safety measures are in place to easily identify and prevent negative effects.
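Here is a minimal sketch of the human-in-the-loop routing mentioned above. The threshold value and the triage logic are illustrative choices to be tuned per use case and per risk level, not a universal rule.

```python
# Minimal human-in-the-loop sketch: predictions below a confidence
# threshold are routed to a human reviewer instead of being automated.
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.90  # illustrative; tune per use case and risk

def triage(prediction: str, confidence: float) -> Tuple[str, str]:
    """Return (decision, route) for one model output."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction, "automated"
    return prediction, "human_review"  # critical decisions need a person

for pred, conf in [("approve_loan", 0.97), ("deny_loan", 0.72)]:
    decision, route = triage(pred, conf)
    print(f"{decision:>12s} (confidence {conf:.2f}) -> {route}")
```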

 

In conclusion

At Datakeen, we're focused on producing machine learning technology that is free from existing biases and that does no harm to any group of people or community.

We work hand in hand with our customers to ensure, through preliminary analysis, that their input data is unbiased and that their use of AI does not cross any ethical lines. This is part of our DNA and makes us proud as an AI company.

We hope you've enjoyed reading this article and look forward to sharing our future publications with you.

 

About the author

Gaël Bonnardot, Co-founder and CTO at Datakeen

A passionate practitioner of Machine Learning and its deployment on business use cases, Gaël leads the design of AI solutions at Datakeen and is determined to make AI as ethical as it can be effective.

Further information

Here is some further reading on this important subject: