Artificial Intelligence (AI) opens up new privacy and ethical challenges that must be addressed through policy and careful solution design. AI and its subsets, machine learning (ML) and deep learning (DL), are generating ideas and innovative products at pace: the market for AI-enabled solutions is forecast to reach $733.7 billion by 2027, growing at a CAGR of over 42%.

These AI solutions feed on data: they consume and analyze it to provide the intelligence that AI-enabled solutions need to function. These data are often personal and behavioral, and can be highly sensitive, such as health and biometric information. Where there is data, there are potential privacy and ethical implications. With so much personal data being used to train algorithms, where does privacy sit in an AI-enabled future? In this follow-up article, we explore how to achieve harmony between AI, ethics, and privacy.

Areas where AI, Privacy and Ethics intersect

The intersection of artificial intelligence with ethics and privacy is coming to a head as the technology becomes more commonplace in IT infrastructures. There are numerous examples where the power of AI algorithms results in privacy issues and ethical dilemmas. Below are just a few of the areas in which AI-enabled services, apps, and systems present privacy and ethical issues.

Facial recognition systems

Facial recognition is being used in several applications, including:

  1. To verify a person’s identity, cross-checking that a person is who they say they are. Amazon’s Rekognition system uses AI to recognize a person from an image such as a passport photo.
  2. To check a person’s age. Mobile apps such as Yoti use machine learning to identify a person’s age from their face.
  3. In the prevention of retail crime, AI is used to identify known thieves.
  4. As a method of authentication, e.g., to log into a mobile device using a facial biometric.

Facial recognition works by capturing facial features, analyzing patterns, and making a match. The algorithm behind a facial recognition system is trained against sample faces, and training continues as data from a user’s face is captured and analyzed during use. When a user’s face is presented during a process, e.g., to verify that someone is who they say they are, it is checked against a database of enrolled users to find a match; a minimal sketch of this matching step is shown below. Facial recognition is becoming very popular: a survey from Georgetown University found that about half of American adults have facial images stored in databases that law enforcement agencies can search.
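To make the matching step concrete, the sketch below shows, in Python, how a probe face could be compared against a database of enrolled users. It assumes faces have already been converted into fixed-length embedding vectors by a trained model; the database contents, threshold, and vector size are illustrative placeholders, not any vendor’s actual implementation.

```python
import numpy as np

# Hypothetical enrollment database: user ID -> face embedding produced by a
# trained model. Random vectors stand in for real embeddings here.
ENROLLED = {
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}

# Illustrative threshold; real systems tune this against false match/non-match rates.
MATCH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Similarity between two embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe):
    """Return the best-matching enrolled user, or None if no score clears the threshold."""
    best_user, best_score = None, -1.0
    for user, embedding in ENROLLED.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= MATCH_THRESHOLD else None

print(identify(np.random.rand(128)))  # a real probe would come from a face image
```

Note that every entry in such a database is biometric personal data: the privacy exposure scales with enrollment, which is why the oversight and data minimization discussed next matter.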

Capturing facial biometrics as part of a system dependent on facial recognition requires extreme care. Our face is an intrinsic part of our privacy. Georgetown University states that “Face recognition is a powerful technology that requires strict oversight”. In terms of what this oversight consists of, important principles of ‘Privacy by Design’ can help in maintaining a privacy-enhanced facial recognition service. These include data minimization, i.e., only using facial biometrics when necessary.

Bias

Human beings have ‘cognitive blind spots’ known as biases; around 180 have been catalogued in human societies, among them racial and sex stereotypes. When AI-enabled algorithms are designed, these biases can unknowingly be built in. The algorithms are then often used to inform important decisions, including in government and healthcare.

An example of AI-enabled bias in medicine was brought to the attention of the British Medical Journal (The BMJ). In an article, The BMJ discusses the case of Avery Smith and his wife, LeToya, who died of melanoma. Smith, a software developer, found out that algorithms used in skin cancer detection were trained predominantly on images of white skin. Smith’s wife, a black woman, was negatively impacted by AI algorithms that only understood white skin.

Addressing bias in AI-enabled systems requires technology, society, and policy to come together. The problem of inherent bias in AI, as applied to healthcare solutions, is evaluated in the recent paper “The ethics of AI in healthcare: A mapping review”. The paper identifies six key layers that require focus to resolve AI-enabled bias in healthcare: individual, interpersonal, group, institutional, sectoral, and societal.
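At the technology layer, one common first step is to audit a trained model’s outputs per demographic group before deployment. The sketch below is not taken from the paper; it is a simple, hypothetical demographic parity check that compares positive-prediction rates across groups.

```python
from collections import defaultdict

# Hypothetical audit data: (demographic group, model prediction) pairs
# gathered by running the trained model over a held-out test set.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def positive_rate_by_group(pairs):
    """Rate of positive predictions per group (a demographic parity check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, prediction in pairs:
        totals[group] += 1
        positives[group] += prediction
    return {group: positives[group] / totals[group] for group in totals}

rates = positive_rate_by_group(predictions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap is a red flag that the training data or model may be biased.
print(f"Demographic parity gap: {max(rates.values()) - min(rates.values()):.2f}")
```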

Differential Privacy and Consumer Data

Target Corp was behind an infamous case of big data revealing more than expected. The company created a “pregnancy score” from shopping behavior and used it to target customers with pregnancy-related vouchers. In one unfortunate event, a father allegedly found out his teenage daughter was pregnant when he saw pregnancy vouchers for her in an email. This story may or may not be true, but overreaching with data is a serious concern in AI-enabled/machine-learning solutions that predict behavior, such as shopping preferences. According to IBM, 79% of retail and consumer product companies planned to use AI-enabled automation for customer intelligence by 2021. AI-enabled marketing solutions must place privacy at the core of their design to avoid a loss of consumer trust and to reduce the likelihood of privacy violations.

Finding the happy balance that allows companies to improve products and understand customer needs, whilst ensuring customer privacy is upheld, is being addressed by both technology and regulation. The concept of “differential privacy” (DP) attempts to reconcile the collection and sharing of the personal and behavioral data an AI system needs with preserving the privacy of the individuals in those data. In practice, DP is a mathematical framework that adds carefully calibrated noise to query results so that the presence or absence of any single individual has a negligible effect on the output, helping to mitigate certain privacy-related attacks; a minimal sketch is shown below. Other technological approaches include certain pseudonymization methods that can be used with AI and ML. Regulations, too, can help enforce the measures needed to preserve individual privacy whilst collecting and analyzing large datasets representing consumer behavior.
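As an illustration of the DP idea, below is a minimal sketch of the classic Laplace mechanism applied to a count query. The dataset, query, and epsilon value are hypothetical, and production systems use hardened DP libraries rather than hand-rolled noise.

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical shopper data: 1 = bought baby products, 0 = did not.
purchases = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])

def dp_count(data, epsilon):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1: adding or removing one person changes
    the true answer by at most 1, so noise is drawn from Laplace(0, 1/epsilon).
    Smaller epsilon means more noise and stronger privacy.
    """
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(data.sum()) + noise

print(f"True count: {purchases.sum()}, DP count: {dp_count(purchases, epsilon=0.5):.1f}")
```

The analyst still gets an answer that is useful in aggregate, while any individual shopper can plausibly deny being in the dataset, which is what makes DP attractive for the customer-intelligence use cases above.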

AI and Data Privacy Governance

Several regulations that cover areas of data protection also have clauses that relate to the use of AI. The EU’s GDPR is one such law: whilst it does not specifically name artificial intelligence, it sets requirements that encompass its use, e.g., GDPR Article 22 covers “automated individual decision-making, including profiling.”

In a recent policy briefing, “AI Governance Post-GDPR: Lessons Learned and the Road Ahead”, a number of key areas were identified for tackling AI and privacy:

  1. Incentivize compliance-centered innovation in AI
  2. Empower civil society through AI
  3. Enhance the interoperability of AI-governance structures

The report encourages dialog at both local and international levels to resolve the challenges of AI governance within a privacy-respectful context, pointing out that “privacy-by-design (in AI-enabled systems) should be treated as an opportunity to innovate in rights-centered products”.

AI and Privacy: the road ahead

AI is powering many processes and will continue to do so, especially as manufacturing embraces the technology and smart cities arise. But this technology, by its nature, must consume ever more data to become smarter and more efficient. In doing so, it opens up new privacy and ethical challenges that must be dealt with through policy and careful solution design. The Centre for Data Ethics and Innovation states that to reduce barriers to AI acceptance in society, “businesses, citizens and public sector need clear rules and structures that enable safe and ethical innovation in data and AI”. Artificial intelligence-enabled solutions may well become ubiquitous in the coming years. We need to act now to ensure that they behave in ethical and privacy-preserving ways.
