In October, Amazon had to discontinue an artificial intelligence–powered recruiting tool after it discovered the system was biased against female applicants. In 2016, a ProPublica investigation revealed that a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns.
Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years. There's a six-decade history behind AI research, but recent advances in machine learning and neural networks have pushed artificial intelligence into sensitive domains such as hiring, criminal justice and health care.
In tandem with advances in artificial intelligence, there's growing interest in establishing criteria and standards to weigh the robustness and trustworthiness of the AI algorithms that are helping or replacing humans in making critical decisions.
Because the field is still nascent, there's little consensus over the definition of ethical and trustworthy AI, and the topic has become the focus of many organizations, tech companies and government institutions.
In a recently published document titled "Ethics Guidelines for Trustworthy AI," the European Commission has laid out seven essential requirements for developing ethical and trustworthy artificial intelligence. While we still have a lot to learn as AI takes a more prominent role in our daily lives, the EC's guidelines, unpacked below, provide a nice roundup of the kinds of issues the AI industry faces today.
Human agency and oversight
"AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user's agency and foster fundamental rights, and allow for human oversight," the EC document states.
Human agency means that users should have a choice not to become subject to an automated decision "when this produces legal effects on users or similarly significantly affects them," according to the guidelines.
AI systems can invisibly threaten the autonomy of humans who interact with them by influencing their behavior. One of the best-known examples in this regard is Facebook's Cambridge Analytica scandal, in which a research firm used the social media giant's advertising platform to send personalized content to millions of users with the aim of affecting their vote in the 2016 U.S. presidential election.
The challenge of this requirement is that we're already interacting with hundreds of AI systems every day: the algorithms that curate our social media feeds, surface trends on Twitter, rank Google search results, recommend YouTube videos, and more.
The companies that run these systems provide very few controls over the AI algorithms. In some cases, such as Google's search engine, companies explicitly refrain from publishing the inner workings of their AI algorithms to prevent manipulation and gaming. Meanwhile, various studies have shown that search results can have a dramatic influence on the behavior of users.
Human oversight means that no AI system should be able to perform its functions without some level of control by humans. This means that humans should either be directly involved in the decision-making process or have the option to review and override decisions made by an AI model.
In 2016, Facebook had to shut down the AI that ran its "Trending Topics" section because it pushed out false stories and obscene material. It then put humans back in the loop to review and validate the content the module flagged as trending topics.
Technical robustness and safety
The EC experts state that AI systems must "reliably behave as intended while minimizing unintentional and unexpected harm, and preventing unacceptable harm" to humans and their environment.
One of the greatest concerns of current artificial intelligence technologies is the threat of adversarial examples. Adversarial examples manipulate the behavior of AI systems by making small changes to their input data that are mostly invisible to humans. This happens mainly because AI algorithms work in ways that are fundamentally different from the human brain.
Adversarial examples can happen by accident, such as an AI system that mistakes sand dunes for nudes. But they can also be weaponized into harmful adversarial attacks against critical AI systems. For instance, a malicious actor can change the coloring and appearance of a stop sign in a way that will go unnoticed by a human but will cause a self-driving car to ignore it, creating a safety threat.
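The mechanics can be illustrated with a toy sketch. The linear "classifier" and every number below are made up for illustration; real attacks operate on deep networks, but the core trick of nudging each input value slightly against the model's gradient is the same:

```python
import numpy as np

# A toy linear "classifier": positive score means class A, negative means class B.
# Everything here is illustrative; real adversarial attacks target deep networks.
rng = np.random.default_rng(0)
w = rng.normal(size=100)                 # model weights ("one weight per pixel")

x = 0.05 * w / np.linalg.norm(w)         # an input the model scores as class A
score = float(w @ x)                     # small but clearly positive

# Fast-gradient-style perturbation: move every "pixel" a tiny step against
# the gradient of the score (for a linear model, the gradient is just w).
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)
adv_score = float(w @ x_adv)             # now negative: the label has flipped

print(score > 0, adv_score < 0)          # True True
print(float(np.max(np.abs(x_adv - x))))  # 0.02 -- no pixel moved by more than epsilon
```

Even though no individual value changed by more than 0.02, the classification flips, which is exactly why such perturbations are hard for humans to spot.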
Adversarial attacks are especially a concern with deep learning, a popular branch of AI that develops its behavior by examining thousands or millions of examples.
There have already been several efforts to build robust AI systems that are resilient to adversarial attacks. AutoZOOM, a method developed by researchers at the MIT-IBM Watson AI Lab, helps detect adversarial vulnerabilities in AI systems.
The EC document also recommends that AI systems should be able to fall back from machine learning to rule-based systems or ask for a human to intervene.
Since machine learning models are based on statistics, it should be clear how accurate a system is. "When occasional inaccurate predictions cannot be avoided, it is important that the system can indicate how likely these errors are," the EC's ethical guidelines state. This means that the end user should know about the confidence level and the general reliability of the AI system they're using.
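One way to make that concrete is for a system to report the probability behind each prediction and hand low-confidence cases to a human reviewer. A minimal sketch, with a hypothetical threshold and return format that are not taken from the EC document:

```python
import numpy as np

def softmax(z):
    """Convert raw model scores into probabilities that sum to 1."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def decide(logits, threshold=0.9):
    """Return the model's decision, or defer to a human when confidence is low."""
    probs = softmax(np.asarray(logits, dtype=float))
    label = int(np.argmax(probs))
    confidence = float(probs[label])
    if confidence < threshold:
        return {"decision": None, "confidence": confidence, "action": "refer_to_human"}
    return {"decision": label, "confidence": confidence, "action": "auto"}

print(decide([4.0, 0.5, -1.0]))  # high-margin case: decided automatically
print(decide([1.1, 1.0, 0.9]))   # low-margin case: referred to a human reviewer
```

Exposing the confidence value alongside the decision also gives end users the kind of reliability information the guidelines call for.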
Privacy and data governance
"AI systems must guarantee privacy and data protection throughout a system's entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system," according to the EC document.
Machine learning systems are data-hungry. The more quality data they have, the more accurate they become. That's why companies have a tendency to collect more and more data from their users. Companies like Facebook and Google have built economic empires by building and monetizing comprehensive digital profiles of their users. They use this data to train their AI models to provide personalized content and ads and to keep users glued to their apps to maximize profits.
But how responsible are these companies in maintaining the security and privacy of this data? Not very, as it turns out. They're also not very explicit about the amount of data they collect and the ways they use it.
In recent years, general awareness about privacy and new rules such as the European Union's General Data Protection Regulation (GDPR) and California's Consumer Privacy Act (CCPA) are forcing organizations to be more transparent about their data collection and processing practices. In the past year, many companies have offered users the option to download their data or to ask the company to delete it from its servers.
However, more needs to be done. Many companies share sensitive user information with their employees or third-party contractors to label data and train their AI algorithms. In many cases, users don't know that human operators review their information, and they falsely believe that only algorithms process their data.
Very recently, Bloomberg revealed that thousands of Amazon employees across the world access the voice recordings of users of its Echo smart speakers to help improve the company's AI-powered digital assistant, Alexa. The practice does not sit well with users, who expect to enjoy privacy in their homes.
Transparency

The European Commission experts define AI transparency in three components: traceability, explainability and communication.
AI systems based on machine learning and deep learning are highly complex. They develop their behavior based on correlations and patterns found in thousands or millions of training examples. Often, the creators of these algorithms don't know the logical steps behind the decisions their AI models make. This makes it very hard to find the reasons behind the errors these algorithms make.
The EC specifically recommends that developers of AI systems document the development process and the data they use to train their algorithms, and explain their automated decisions in ways that are understandable to humans.
Explainable AI has become the focus of several initiatives by the private and public sectors. This includes a widespread effort by the Defense Advanced Research Projects Agency (DARPA) to create AI models that are open to investigation and methods that can explain AI decisions.
Another important point raised in the EC document is communication. "AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system," the document reads.
Last year, Google introduced Duplex, an AI service that could place calls on behalf of users and make restaurant and salon reservations. Controversy ensued because the assistant refrained from presenting itself as an AI agent and duped its interlocutors into thinking they were speaking to a real human. The company later updated the service to present itself as Google Assistant.
Diversity, non-discrimination, and fairness
Algorithmic bias is one of the best-known controversies of contemporary AI technology. For a long time, we believed that AI would not make subjective decisions based on bias. But machine learning algorithms develop their behavior from their training data, and they reflect and amplify any bias contained in those data sets.
There have been numerous examples of algorithmic bias rearing its ugly head, such as the examples listed at the beginning of this article. Other cases include a study that showed popular AI-based facial analysis services being more accurate on men with light skin and making more errors on women with dark skin.
To prevent unfair bias against certain groups, the EC's guidelines recommend that AI developers make sure their AI systems' data sets are inclusive.
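One practical way to check for the kind of gap the facial analysis study found is to disaggregate a model's accuracy by demographic group and look for large differences. A minimal sketch, using made-up audit data rather than any real system's output:

```python
import numpy as np

# Hypothetical audit data: model predictions vs. ground truth, with a
# demographic group label recorded for each example (all values made up).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def accuracy_by_group(y_true, y_pred, group):
    """Compute accuracy separately per group to expose error-rate gaps."""
    return {g: float(np.mean(y_pred[group == g] == y_true[group == g]))
            for g in np.unique(group)}

rates = accuracy_by_group(y_true, y_pred, group)
print(rates)  # a large gap between groups is a red flag worth investigating
```

Overall accuracy can look acceptable while one group bears most of the errors, which is exactly what disaggregated metrics are meant to surface.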
The problem is, AI models often train on data that is publicly available, and this data often contains hidden biases that already exist in society.
For instance, a group of researchers at Boston University discovered that word embedding algorithms (AI models used in tasks such as machine translation and online text search) trained on online articles had developed hidden biases, such as associating programming with men and homemaker with women. Likewise, if a company trains its AI-based hiring tools with the profiles of its current employees, it might be unintentionally pushing its AI toward replicating the hidden biases and preferences of its current recruiters.
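The kind of association the researchers measured can be sketched with hand-made toy vectors. Real word embeddings such as word2vec are learned from large text corpora; the four-dimensional vectors below are fabricated to demonstrate the measurement, not to report real results:

```python
import numpy as np

# Toy 4-dimensional "embeddings" constructed by hand to mimic the kind of
# gender association found in real word vectors. These are NOT real embeddings.
vectors = {
    "he":         np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":        np.array([-1.0, 0.1, 0.0, 0.2]),
    "programmer": np.array([ 0.6, 0.8, 0.1, 0.0]),
    "homemaker":  np.array([-0.6, 0.8, 0.1, 0.0]),
}

def cosine(a, b):
    """Cosine similarity: +1 means same direction, -1 means opposite."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Project each occupation onto the "he" - "she" direction: a positive value
# means the word leans male in this space, a negative one means it leans female.
gender_axis = vectors["he"] - vectors["she"]
for word in ("programmer", "homemaker"):
    print(word, round(cosine(vectors[word], gender_axis), 3))
```

The same vector arithmetic applied to real embeddings is how such studies quantify the biases a model has silently absorbed from its training text.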
To address hidden biases, the EC recommends that companies developing AI systems hire people from diverse backgrounds, cultures and disciplines.
One consideration to note, however, is that fairness and discrimination often depend on the domain. For instance, in hiring, organizations must make sure that their AI systems don't make decisions based on characteristics like gender and race. But in a field like health care, parameters like gender and ethnicity must be factored in when diagnosing patients.
Societal and environmental well-being
"[The] broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system's life cycle," the EC's guidelines state.
The social impact of AI has been deeply studied. A notable example is social media companies, which use AI to study the behavior of their users and provide them with personalized content. This makes social media applications addictive and profitable, but it also has a negative impact on users, making them less social, less happy and less tolerant of opposing views and opinions.
Some companies have started to acknowledge this and correct the situation. In 2018, Facebook declared that it would be making changes to its News Feed algorithm to provide users with more posts from friends and family and fewer from brands and publishers. The move was aimed at making the experience more social.
The environmental impact of AI is less discussed, but is equally important. Training and running AI systems in the cloud consumes a lot of electricity and leaves a huge carbon footprint. This is a problem that will grow worse as more and more companies use AI algorithms in their applications.
One of the solutions is to use lightweight edge AI solutions that require very little power and can run on renewable energy. Another solution is to use AI itself to help improve the environment. For instance, machine learning algorithms can help manage traffic and public transport to reduce congestion and carbon emissions.
Accountability

Finally, the EC calls for mechanisms "to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use." Basically, this means there should be legal safeguards to make sure companies keep their AI systems conformant with ethical principles.
U.S. lawmakers recently introduced the Algorithmic Accountability Act which, if passed, would require companies to have their AI algorithms evaluated by the Federal Trade Commission for known problems such as algorithmic bias as well as privacy and security concerns.
Other countries, including the UK, France and Australia, have passed similar legislation to hold tech companies to account for the behavior of their AI models.
In most cases, ethical guidelines are not in line with the business model and interests of tech companies. That's why there should be oversight and accountability. "When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensure trust," the EC document states.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.