Hiding from Karma in an AI World

Recently, an artificial intelligence system in China passed a medical exam for the first time. This is a significant advance in healthcare. Potentially, AI could soon provide high-quality medical diagnoses remotely anywhere in the world. Another significant step in AI and robotics happened a couple of years ago in Saudi Arabia, where a robot named Sophia was granted citizenship. I wonder if that robot will be forced to wear a burka? With all these rapid advancements, I think it is time we explore the spiritual life of robots and artificial intelligence.

Until recently, human programmers coded and configured algorithms, AI, automation, and machine-learning systems, and took personal responsibility for all of their own code. Today, however, AI has escaped the confines of human oversight and has been empowered and employed to self-program, self-optimize, self-test, self-configure, and self-learn. David Gunning writes, "Continued advances [in AI] promise to produce autonomous systems that will perceive, learn, decide, and act on their own." That's potentially a big problem for karma.

A simplistic definition of karma is a spiritual principle that teaches that good actions and good intent lead to good things now and in the future, while bad actions and bad intent lead to bad things now and in the future. What happens to a human programmer who transfers responsibility for future decisions and actions to a robot - an autonomous machine with artificial intelligence? Will karma eventually seek out the original human programmer of the autonomous system, long since retired and fishing on a mountain lake, to exact retribution, or will it direct the bad karma to the machine? It's a problem.



The US Department of Defense is also concerned enough with karma that it is investing in the development of XAI (explainable artificial intelligence). The unspoken intent is to point bad karma in the right direction. No one wants to become collateral damage to misguided bad karma. XAI can be described as a set of techniques and models that help humans (and potentially karma) understand why an autonomous system did what it did. It is a way for machines to explain why they shut down your bank account, rejected your transaction, raised your insurance rate, gave you a bad credit score, rejected your university or job application, matched you with an awful date, and forever ruined your life. Karma has to go somewhere, and the XAI folks want it going to the right human programmer or the right machine - karma's decision.

More from Gunning, "New machine-learning systems [XAI] will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models." I am very concerned about what "moral and ethical code" is being embedded in these autonomous thinking and acting systems. I know karma is as well.
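For the technically curious, here is a minimal sketch of what one common flavor of explainability looks like in practice: permutation importance, which asks how much a model's accuracy suffers when each input is scrambled. The loan-approval scenario, feature names, and data below are all invented for illustration; this is a sketch of the general technique, not the DoD's XAI program.

```python
# A toy illustration of one explainability technique: permutation
# importance, which measures how much a model's accuracy drops when
# each input feature is shuffled. The loan-approval scenario, feature
# names, and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)

# Fake applicant data: income, credit history length, existing debt.
X = rng.normal(size=(500, 3))
# Fake approvals that depend mostly on income and credit history.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Ask the model to "explain" itself: which features drove its decisions?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["income", "credit_history", "debt"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Whether karma will accept "income mattered most" as an adequate explanation is, of course, above my pay grade.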

What about unintended consequences? Karma views intent as a very important consideration. Will karma give a pass to human programmers who didn't intend for the battlebot they programmed to attack and destroy innocent people?

Google is also concerned with bad karma. They are so concerned, in fact, that they are building a layer of deniability to defend themselves from it. The layer of deniability is called AutoML. To hide their true intent from karma, they use business terms to describe their motivations: "[Today] our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10^10 candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise [humans]." So to avoid the time and expense, Google is outsourcing the development of AI to other machines - their layer of deniability. Methinks karma will not be so easily fooled.
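To make the scale of that search concrete, here is a toy sketch of the random-search spirit behind neural architecture search: sample candidate layer configurations, score each one, and keep the best. The search space and the scoring function below are simplified stand-ins I invented for illustration; Google's actual AutoML is far more sophisticated, and its real search spaces are far larger.

```python
# A toy stand-in for AutoML-style architecture search: randomly sample
# candidate network configurations, "score" each one, and keep the
# best. The search space and scoring are invented for illustration;
# real systems search far larger spaces (Google cites ~10^10
# candidates for a typical 10-layer network).
import random

WIDTHS = [32, 64, 128, 256]   # hypothetical per-layer width choices
LAYERS = 10                   # a "typical 10-layer network"

# 4 choices per layer across 10 layers: 4**10, about a million candidates.
print(f"Toy search space: {len(WIDTHS) ** LAYERS:,} candidate networks")

def score(arch):
    # Stand-in for "train this network and return validation accuracy".
    return random.Random(hash(tuple(arch))).random()

sampler = random.Random(0)
candidates = [[sampler.choice(WIDTHS) for _ in range(LAYERS)]
              for _ in range(1000)]           # sample 1,000 candidates
best = max(candidates, key=score)
print("Best sampled architecture:", best)
```

Even this toy version shows why humans are happy to hand the job to a machine - and why karma may have some forwarding addresses to sort out.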

************************************************************************
Kevin Benedict
Futurist/Digital Strategist
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Join the LinkedIn Group Digital Intelligence

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I work with and have worked with many of the companies mentioned in my articles.
