There’s widespread consensus that we are on the brink of the fourth industrial revolution.

Artificial Intelligence (AI) and related technologies have brought about probably the most immersive, influential and invasive transformation since the introduction of the computer, radically changing how we live, work and do business.

By now, few retain doubts about the enormous transformative power of AI. But as the cliché goes, with great power comes great responsibility.

While its widespread use as an enabling technology is helping businesses gain productivity and improve their customer experience, there’s also a long list of questions about its potential risks.

How do you make sure that the AI you’re using is fair and unbiased? What are the ethical implications? What about its potential for job displacement?

Yet the perception of AI as a revolutionary tool has also meant the emergence of boundless possibilities and opportunities, driving innovation that can play a pivotal role in combating challenges such as the pandemic we are currently facing.

We spoke to Isabelle Lunven, Managing Director, Transformation and Innovation at PwC Luxembourg, and Andreas Braun, Senior Manager, to learn more about the possibilities and implications of AI, as well as its potential dangers.

1 - What are the main perceptions around AI today? Are they the same for consumers and businesses?

Isabelle Lunven (IL): For both consumers and businesses, AI can seem like a buzzword to which a vast array of misconceptions is attached. These misconceptions pit AI as a magical, all-encompassing solution against the reality of what it can actually accomplish in practice. In truth, AI’s impact is limited by its reliance on clean and relevant data, in both quantitative and qualitative terms. It takes a combination of hard work and qualified skills to make the data talk.


Artificial Intelligence as such is not a new concept: the term “artificial intelligence” was coined in the mid-1950s, for the 1956 Dartmouth College conference in Hanover, New Hampshire. The concept has gone through its fair share of ups and downs over the years but has seen its most dramatic evolution in the last few decades, benefiting from increased computing power that fuelled the acceleration of AI applications across diverse industries. It is now increasingly used in sectors such as security (face recognition, identity), health (detecting malignant cancers like melanoma) and prediction (natural disasters, pandemics...), as well as in behavioural analysis and the digitisation of repetitive tasks, especially when combined with the Internet of Things (IoT).

Despite this vast array of applications, the perception of AI differs amongst consumers, businesses and governments. Among consumers and workers, the concerns are individual, starting with the obvious one: personal data protection. Then come fears of obsolescence and job loss, driven by the rising prospect of competing with robots and automated systems. Finally, AI has also raised questions for higher education about what subjects to learn and what skills to teach in order to future-proof coming generations of students and academics.

For businesses, AI can represent a great opportunity to improve efficiency and operational excellence. But AI remains a disruption to the status quo, one that threatens current models of conducting business (accounting, notarial services, etc.). The survival of these companies consequently depends on reinventing and redefining their business models in order to weather the transformations brought about by AI. Furthermore, the diffusion of AI amongst businesses has raised key questions of data privacy and cybersecurity that companies must address in order to apply AI efficiently to their activities.

Meanwhile, the adoption of AI by governments has produced challenges on a more systemic level, affecting both the societal structures over which they preside and the citizens for whom they are responsible. For example, a major concern within the public sector is employing government personnel adequately skilled in AI and knowledgeable enough about its legal and ethical implications. In the pursuit of individuals with AI expertise, however, governments face the added challenge of competing with a private sector just as eager to acquire and retain AI talent. Additionally, dominant AI players have begun to position themselves globally, led by the likes of GAFAM (Google, Apple, Facebook, Amazon, Microsoft) in the US and the Chinese BAT (Baidu, Alibaba, Tencent). In the face of these rising AI players, governments must ensure that they can protect their sovereignty while building a balanced collaboration with the private sector on artificial intelligence that benefits both citizens and the local economy.

2 - What is responsible AI? What’s the need to make artificial intelligence “responsible”?

IL: The creation of ‘responsible’ AI requires multilateral participation in directing its evolution. As AI is a complex topic rooted in technical knowledge, the expertise relating to its applications and implications has been concentrated within scientific and technological communities. Due to this technical complexity, governments, businesses and ordinary individuals generally hold limited knowledge about AI and its implications for human society.

Consequently, a dependence emerges: decision-makers in government and business must rely on AI experts to provide them with accurate information in order to make educated and responsible decisions. This makes trust a key foundation in the pursuit of responsible AI, requiring transparency in how algorithms are defined and created.

Furthermore, with the AI ecosystem developing at rapid speed, there is a risk of overlooking human rights, safety, equality of treatment and non-discrimination, to name a few. It is thus crucial to handle questions of bias in algorithm creation and ethical issues such as the use of AI-supported biotechnology. Only through ethical design that emerges from multilateral discussion will we see the creation of responsible AI, ensuring that the role and sovereignty of governments are not challenged and that individual freedom is not put at risk.

3 - What are the premises or foundations of responsible AI? What are its opportunities?

Andreas Braun (AB): AI already has a great impact on our lives and economy today, and that impact is sure to grow even more significant in the future. As with all technologies, there is a risk of abuse or failure, by nation states, companies, or even individuals. Think of the recent case where a credit card company’s algorithm extended less credit to women than to men, everything else being equal. Responsible AI is built around the key principle that when we use this technology, we should do it properly and in line with our values. We need to create systems that are free of bias, that are explainable, and that are robust and secure, particularly when human lives are at stake. This process is not only relevant for the data scientist developing the algorithm; it has to be embedded in the whole organisation to ensure appropriate governance.
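To make the bias example concrete, here is a minimal sketch of the kind of group-fairness check a data science team might run on a credit model’s decisions. The data, column names and the 80% “four-fifths” threshold are illustrative assumptions, not taken from the interview or from any PwC methodology:

```python
# Minimal sketch of a group-fairness check on a credit model's decisions.
# The toy data and the 80% rule-of-thumb threshold are illustrative only.
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate per group, e.g. the share of applicants granted credit."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return rates.min() / rates.max()

# Hypothetical decisions from a model: 1 = credit granted, 0 = denied.
decisions = pd.DataFrame({
    "gender":  ["f", "f", "f", "f", "m", "m", "m", "m"],
    "granted": [1,   0,   0,   1,   1,   1,   1,   0],
})

rates = approval_rates(decisions, "gender", "granted")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print("Warning: outcomes differ substantially between groups; investigate.")
```

A check like this is only a first signal; it flags unequal outcomes but does not by itself explain whether a legitimate factor or a biased feature is driving them.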


The opportunity of Responsible AI is significant. As a company, I can be sure that my product won’t harm my customers or my reputation; as an individual, I can be sure that no data is shared without my consent; as a country, we can maximise the benefit of AI for our citizens while keeping the risks in check; as an economy, we can leverage the growth opportunities of AI in a sustainable way. The key is to balance innovation and regulation.

4 - Can responsible AI also be used in the fight against pandemics like the one we are experiencing today?

AB: While it might pale in comparison to the efforts of essential workers and of individuals adhering to lockdown measures, analytics, machine learning and AI have been used from the first day of the response to the pandemic: analysing x-ray images of potential COVID-19 cases, tracing contacts with big-data applications in Singapore, or assessing risk in real time from health data and travel profiles in Taiwan. A key factor in the success of these technologies is the availability of a significant amount of data. What will happen to this infrastructure after the pandemic? Is there a risk of abuse, or of moving towards a surveillance state? Responsible AI governance would minimise such risks by only allowing access to what is needed, ensuring independent oversight, and deleting data once its purpose has been fulfilled.
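As an illustration of that last point, purpose-bound retention, here is a minimal sketch of deleting records once their retention window has elapsed. All names and the 21-day window are hypothetical assumptions, not drawn from any real contact-tracing system:

```python
# Minimal sketch of purpose-bound data retention: records collected for a
# stated purpose are deleted once their retention window has elapsed.
# All names and the 21-day window are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ContactRecord:
    person_id: str
    collected_at: datetime
    purpose: str  # e.g. "contact-tracing"

# Retention policy per purpose; unknown purposes default to zero retention.
RETENTION = {"contact-tracing": timedelta(days=21)}

def purge_expired(records: list[ContactRecord], now: datetime) -> list[ContactRecord]:
    """Keep only records still within their purpose's retention window."""
    return [
        r for r in records
        if now - r.collected_at <= RETENTION.get(r.purpose, timedelta(0))
    ]

records = [
    ContactRecord("a1", datetime(2020, 4, 1), "contact-tracing"),
    ContactRecord("b2", datetime(2020, 5, 1), "contact-tracing"),
]
remaining = purge_expired(records, now=datetime(2020, 5, 10))
print([r.person_id for r in remaining])  # only "b2" is still within 21 days
```

Defaulting unknown purposes to zero retention makes the sketch fail safe: data collected without a declared purpose is deleted rather than kept indefinitely.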

5 - What should businesses and governments do concretely in order to progress towards responsible AI? And what has been done until now?

AB: Businesses and governments that want to fully leverage AI need to have values and governance in mind from the outset. This does not mean that innovation has to be stifled, but rather that it should be embedded in a framework. We showcase how this works in practice in our PwC AI Lab, which has been built around the idea of Responsible AI and where one can look at a number of practical business cases. Organisations can also take a swift self-check to see where they stand.

6 - Will responsible AI solve all the general concerns that shroud the world of artificial intelligence today? And how to ensure that we don’t lose sight of our most powerful asset: human beings?

IL: Responsible AI is surely a step forward towards making artificial intelligence beneficial for people and society. But there’s still a long way to go, both in the ethics and the evolution of AI.

This is especially relevant to the questions surrounding cognitive robotics and artificial consciousness, which raise a multitude of concerns, especially among the general public. At the centre of these questions lie issues pertaining to the infamous concept of the ‘singularity’, which spells out the end of humanity as we know it and heralds the reign of an omniscient AI, or of cyborgs and AI-enhanced humans. Such possibilities require a response that is not purely economic or scientific but that includes perspectives anchored in philosophy, theory and reality. It therefore becomes important to have clear rules and regulations that permit such a focus on ethics, so that we can not only explore what constitutes responsible and irresponsible AI but also delve into what it means to be human.

Whether AI manages to become a true problem-solving tool for the current global challenges will depend on its application. In the short term, AI already has a very real and concrete impact in taking over many repetitive, low-value-added tasks, especially where huge amounts of data need to be processed. As algorithms improve, we can expect more complex tasks to be covered. The idea of AI becoming more intelligent is not so much about it overtaking humans as about it working alongside them, with the technology integrated ever more closely into our lives. It is this balanced integration of AI into society that will produce the best results for the well-being of individuals.

Humans have always been able to evolve and adapt; that’s in our nature. The key is to anticipate, do our homework and be prepared and relevant. No matter how far technology reaches, humans will still be needed to make life-and-death decisions or perform highly complex, atypical tasks (medical surgery, for example). Humans are and will remain essential in people-to-people businesses and interactions. We’ll be the ones needed to build policies that safeguard the role of states, governments and other governance structures.

Ultimately, it’ll depend on us. Making AI responsible, prioritising transparency and explainability, and minimising bias will determine whether artificial intelligence serves us as a force for good or becomes an existential threat to humanity.