
Tuesday, January 1, 2019

2019 — The Year of Managing Artificial Intelligence


by Dr Ruwantissa Abeyratne
The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. ~Elon Musk
The way things are going, there is no room for doubt that 2019 will bring plenty of political upheavals and surprises. No one has a clue as to where Brexit is headed or, for that matter, where the entire Western world is going, to say nothing of the prevailing uncertainty in the United States and the onward march of China. The pundits are divided and the jury is still out on these issues; only a retrospective look at the end of the year will clear the fog of rhetoric swirling at present.
Speaking of pundits, The Economist, in its The World in 2019, opines that “Red lights are flashing – not everywhere and not all at once, but enough to signal economic trouble in 2019…emerging markets will be particularly unsettled…the underlying weakness, as ever, is debt…a global downturn in 2019 is not inevitable…banks are better capitalized than in 2007 and companies are better at managing risks”.
It is the last part of that prognosis which warrants particular focus in 2019. Are companies better at managing risks, especially in the context of their use of Artificial Intelligence (AI)? Tom Standage, in the same journal, says of AI: “as it is applied in a growing number of areas, there are legitimate concerns about possible unintended consequences…the immediate concern is that the scramble to amass the data needed to train AI systems is infringing on people’s privacy”. He cites the European Union’s General Data Protection Regulation as a positive step in handing control of personal data back to its owner, along with the owner’s right to demand from the companies that use the data an account of how it is used. However, Standage argues that the answer to regulating AI is not to introduce new AI-specific legislation but rather to adapt existing privacy and discrimination legislation to take AI into account and address the issues that might emerge.
Garry Kasparov, the former world chess champion who defeated IBM’s Deep Blue in 1996 but lost to the machine in their 1997 rematch, writes in the Encyclopaedia Britannica Yearbook of 2018: “Humans will still set the goals and establish the priorities. We must ensure that our agnostic machines represent the best of our human morality. If we succeed, our new tools will make us smarter, enabling us to better understand our world and ourselves. Our real challenge is to avoid complacency, to keep thinking up new directions for AI to explore. And that’s one job that can never be done by a machine”.
Thomas H. Davenport and Rajeev Ronanki, writing in the Definitive Management Ideas of the Year from the Harvard Business Review 2019, recommend that companies shift their focus from AI “moonshots”, such as systems like IBM’s Watson intended to diagnose and recommend treatment for cancer, to less ambitious projects such as resolving staff IT problems and handling hotel reservations, concentrating in particular on three main areas: automating business processes; gaining insight through data analysis; and engaging with customers and employees. The authors further suggest that companies use AI to enhance products; make better decisions; create new products; optimize internal business operations; pursue new markets (in other words, engage in disruptive innovation); capture and apply scarce knowledge as the need arises; optimize external processes such as marketing and sales; and reduce head count through automation.
Managing AI will be a critical issue in 2019, if only so that companies keep a check on it. The Harvard Business Review cites three concerns that arise when humans cannot comprehend how a machine reached a conclusion: hidden biases cultivated by the machine through the learning process; the fact that, because most of these machines are neural networks working on statistical data, their solutions cannot be expected to hold in every case, particularly where variables and random circumstances intervene; and the difficulty of correcting a machine error once it occurs, for the very first reason cited, namely that humans may not understand how the machine came to its conclusion.
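As a rough illustration of the first of those concerns, consider the minimal Python sketch below. The dataset, feature names and numbers are invented for this example and are not drawn from any of the sources cited here; the point is simply that a model trained on historically skewed decisions will quietly reproduce that skew, even though no one programmed the bias in.

```python
# Hypothetical sketch: a model trained on biased historical decisions
# quietly learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features: a genuinely relevant skill score, and a group attribute
# (0 or 1) that should play no role in the decision.
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# The historical labels were biased: at the same skill level, group 1
# was approved far less often. The model is trained on those labels.
logits = 2.0 * skill - 2.5 * group
approved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), approved)

# Same skill, different group: the learned model now scores them
# differently, a hidden bias absorbed through the learning process.
print(model.predict_proba(np.array([[1.0, 0.0], [1.0, 1.0]]))[:, 1])
```

Nothing in the training code names the bias explicitly; it is inherited from the labels, which is precisely why such effects are hard to detect and, as noted above, hard to correct after the fact.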
Sutapa Amornvivat, who runs an AI-driven company in Thailand, cautions that AI has to be managed well: “with the right tools and technology, crucial insights can be unlocked from data. At the same time, we should be aware that the blind spots and biases within can lead us to the wrong conclusions. Real limitations to data-driven approaches exist and necessitate human oversight to ensure that they are utilized correctly and to their fullest protection”.
Eleonore Pauwels, Research Fellow on Emerging Cybertechnologies at United Nations University (UNU), says about AI: “AI is already ubiquitous, but will affect people differently, depending on where they live, how much they earn, and what they do for a living. Scholars from civil society have started raising concerns about how algorithmic tools could increasingly profile, police, and even punish the poor. On the global and political stage, where corporations and states interact, AI will influence how these actors set the rules of the game. It will shape how they administer and exert power on our societies’ collective body. These new forms of control raise urgent policy challenges for the international community”.
At a United Nations conference on AI in 2017, U.N. Secretary-General António Guterres said: “Artificial Intelligence has the potential to accelerate progress towards a dignified life, in peace and prosperity, for all people…the time has arrived for all of us – governments, industry and civil society – to consider how artificial intelligence will affect our future.”
2019 may be a good time to start composing a new “civilizational story line”, as suggested by Julie Friedman Steele, Board Chair and CEO of The World Future Society. She says: “We must be socially, psychologically and existentially prepared. We must consciously evolve and be able to see outside of ourselves. We must, in other words, cultivate a futurist mindset and become futurist citizens. This will be our greatest achievement”.
There is no better time than the present to think about this.