“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking
The warning from Stephen Hawking is probably a bit excessive, isn’t it? It dates back to 2014. Seven years later, the need for ethics in artificial intelligence (AI) is still a hot topic, and much remains to be done. A recent event shed light on this, as reported in the Financial Times a few days ago (March 2, 2021). In the last year, two top ethics researchers were fired after criticizing Google’s lack of diversity and warning of the risks of some of its AI systems. They had expressed concerns that those systems relied too much on unrepresentative data sets and consumed excessive amounts of electricity.
For many – or at least for me 🙄 – it is difficult to grasp the extent of such an event. AI is an intricate world, and it is difficult to enter. Below, you will find an outline of AI’s capabilities and its potential impact on our society. Yes, it may bring great benefits, but it may also cause harm if not used carefully 🤔. Setting up an AI ethics framework is as indispensable as it is challenging.
The potential of AI
AI has been considered for some years now as a quest for the Holy Grail, with a myriad of great applications that could change our lives: avoiding traffic, self-driving cars, instant language translators, some predictive capabilities (🤨 I am still curious about this one). Many factors made this quick evolution possible: the growing amount of available datasets, improvements in hardware, and advances in algorithm design.
Today, we see a surge of AI everywhere, available to everyone, with easy access to open-source libraries, public datasets, online open courses, and numerous blog articles. On my side, I completed the deep learning specialization from Coursera. I enjoyed it and I recommend it. Deep learning is a subfield of AI focused on the use of artificial neural networks. The specialization allowed me to understand the basics and how to implement them. My impression of these advanced algorithms, as seen here, is that there is a (long) path to follow before using them efficiently. Nor are they a magic tool to “rule them all”, solving every problem we want to tackle.
A recent breakthrough
Nonetheless, we can see that things are moving forward, as seen in the latest post from Facebook research (March 4, 2021). Their paradigm of self-supervised learning seems to be a major breakthrough. After being trained on an enormous amount of data, the algorithm shows some kind of “common sense” and can achieve very good results in new situations without much extra training data. As they say, “it clears the path for more flexible, accurate, and adaptable computer vision models in the future”.
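To make the idea concrete, here is a minimal toy sketch of what “self-supervised” means (a hypothetical illustration of the general principle, not Facebook’s actual method): the training labels are derived from the unlabeled data itself, so no human annotation is needed.

```python
# Self-supervision in its simplest form: hide part of the data and
# ask the model to predict it back. The (input, target) pairs below
# are generated entirely from an unlabeled sequence.

def make_masked_pairs(sequence, mask_token="<MASK>"):
    """Turn an unlabeled sequence into (input, target) training pairs
    by hiding one element at a time."""
    pairs = []
    for i, target in enumerate(sequence):
        masked = list(sequence)
        masked[i] = mask_token  # the hidden element becomes the label
        pairs.append((masked, target))
    return pairs

sentence = ["the", "cat", "sat"]
for inp, tgt in make_masked_pairs(sentence):
    print(inp, "->", tgt)
```

This is the same trick, at toy scale, that lets large models learn from raw text or images without anyone labeling the data by hand.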
However, I understand that the algorithm’s scope remains constrained to its domain, i.e., computer vision here. Similar algorithms are used for natural language processing (NLP), but each remains confined to its own field. Hence, even if this is a major breakthrough, I suppose we are probably still far from the concept of artificial general intelligence.
But fair enough, with self-supervised learning, AI almost understands the visual world. What consequences can AI have for us? Let’s take a broader look at the impact that AI may have on our society.
What impact may AI have on our society?
This article from Nature is very interesting. It proposes an assessment of AI’s effect on the achievement of the Sustainable Development Goals (SDGs). The paper indicates that while AI can enable the accomplishment of many targets across all the goals, it may also inhibit a significant number of them (it could positively affect 80% of the targets, and negatively affect 35%).
Yes, AI-enabled technology can act as a catalyst to achieve the 2030 Agenda: it is associated with increased productivity, it can support low-carbon energy systems with high integration of renewable energy and energy efficiency, it can help prevent and significantly reduce marine pollution of all kinds, and it can help identify desertification trends over large areas.
On the other hand, it may trigger inequalities that act as inhibitors. It may, for example, lead to additional qualification requirements for jobs, increasing inequalities between workers, both between and within countries. It may also be used maliciously to exploit psychological weaknesses and steer decisions, damaging social cohesion. It can inadvertently learn and reproduce the societal biases against women and girls that are embedded in current languages. And by showing users content specifically suited to their preconceived ideas, it may foster political polarization and further erode social cohesion.
Great, so a priori there are more positive than negative impacts (see the “Questioning the questioner” section for why “a priori”). Now, a large part of AI acting as an inhibitor is due to biases. Let’s focus on them.
Biases are inherent to the nature of a machine learning algorithm, which seeks only to reproduce patterns from datasets. Having very large datasets does not necessarily mean they will be sufficiently representative of every situation. They can even worsen existing situations: biases present in the data used to train AI algorithms may be exacerbated, eventually leading to increased discrimination. Thankfully, identifying and avoiding biases is possible. But it is challenging, and it is one of today’s essential debates among AI practitioners. Consider the following for a deeper understanding:
- A long and acrimonious dispute on AI bias between Facebook Chief AI Scientist Yann LeCun and one of the ethics researchers who was fired, see this post (June 2020)
- A nice and pedagogical explanation of the challenge of bias identification and avoidance, see this post (Feb 2021)
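The mechanics of bias reproduction can be shown in a few lines. Below is a minimal sketch with hypothetical data (the groups, labels, and counts are invented for illustration): a model that only memorises the most frequent outcome per group, which is exactly the kind of pattern-matching a real classifier converges to, will faithfully bake the historical bias into its predictions.

```python
# Hypothetical, deliberately imbalanced "historical hiring" data:
# group B was rarely hired in the past.
from collections import Counter

training_data = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20
    + [("B", "hired")] * 10 + [("B", "rejected")] * 90
)

def fit_majority_per_group(data):
    """'Train' by memorising the most frequent outcome for each group.
    No malice is needed: reproducing the data IS the objective."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = fit_majority_per_group(training_data)
print(model)  # the historical bias now drives every future prediction
```

A bigger dataset drawn from the same skewed history would only make the model more confident in the same disparity, which is why dataset size alone is no guarantee of fairness.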
Setting up a policy is not an easy task
Neither individuals nor governments seem able to keep pace with these technological developments. Some efforts are being made, though. On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published a set of guidelines, “Policy and investment recommendations for trustworthy Artificial Intelligence”. We haven’t heard much since then, as far as I understand.
Nature’s article details some of the problems this raises. Decidedly, complex concepts and politics do not combine easily:
- The topic being complex, policymakers probably have an insufficient understanding of AI challenges to formulate sound policy. A policy formulated without understanding is likely to be ineffective at best and counterproductive at worst.
- AI also raises the issue of the inherent dilemma of collective vs. individual benefit. AI applications that have positive societal welfare implications may not always benefit each individual separately.
- The dynamic nature of contexts and the level of abstraction at which human values are described imply that no single ethical theory holds all the time in all situations. Consequently, applying a single set of utilitarian ethical principles to AI would not be advisable, given the high complexity of our societies. It is also essential to be aware of the potential complexity of the interaction between human and AI agents.
I hope this analysis helped you better understand the status of AI today and its implications for our society. The topic is challenging and arduous. I leave the last sentences of the article as a concise conclusion.
All actors in all nations should be represented in this dialogue, to ensure that no one is left behind. On the other hand, postponing or not having such a conversation could result in an unequal and unsustainable AI-fueled future.
Questioning the questioner
Reviews are – somehow – easy to make… I like the questioning the authors then apply to the validity of their own statements. I provide here a long but worth-reading extract that I prefer to leave as it is. In short, it tells us that we need to move ahead cautiously: we hardly get certitudes on this AI topic. Later on, it also mentions the existence of publication bias and the need for longer-term studies to uncover the detrimental aspects of AI.
Furthermore, although we were able to find numerous studies suggesting that AI can potentially serve as an enabler for many SDG targets and indicators, a significant fraction of these studies have been conducted in controlled laboratory environments, based on limited datasets or using prototypes. Hence, extrapolating this information to evaluate the real-world effects often remains a challenge. This is particularly true when measuring the impact of AI across broader scales, both temporally and spatially. We acknowledge that conducting controlled experimental trials for evaluating real-world impacts of AI can result in depicting a snapshot situation, where AI tools are tailored towards that specific environment.