AI, Autonomous Programming and Karma

Autonomous Programming
Recently an artificial intelligence system in China passed a medical exam for the first time. AI may soon provide high-quality medical diagnoses remotely anywhere in the world, though I don't know about its bedside manner. Another significant step in AI and robotics happened a couple of years ago in Saudi Arabia, which granted citizenship to a robot named Sophia. I wonder if that robot will be forced to wear a burka? With all these rapid advancements, I think it is time we explore the spiritual life of robots and artificial intelligence.

Until recently, human programmers coded and configured algorithms, AI, automation and machine-learning systems, and took personal responsibility for all of their own code. Today, however, AI has escaped the confines of human oversight and has been empowered and employed to self-program, self-optimize, self-test, self-configure and self-learn.

There are many emerging AI self-programming projects underway. Bayou is an AI application, sponsored by Google and DARPA, that uses deep learning to generate code by itself. DeepCoder is a joint project between Microsoft and Cambridge University. SketchAdapt is an AI environment that learns how to compose short, high-level programs, while letting a second set of algorithms find the right sub-programs to fill in the details. SketchAdapt is a collaboration between Armando Solar-Lezama, a professor at MIT's CSAIL, and Josh Tenenbaum, a professor at MIT's Center for Brains, Minds and Machines.
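
To make that two-stage idea concrete, here is a toy sketch in Python. It is only my own illustration of the general "neural sketch plus symbolic search" approach the MIT team describes, not SketchAdapt's actual code: a fixed program skeleton has a hole in it, and a brute-force search fills the hole with compositions of primitive functions until the program matches a few input/output examples. In the real system, a neural model proposes the skeleton as well.

```python
import itertools

# Toy illustration of "sketch + search" program synthesis. The sketch
# (outer program structure) is fixed here; a brute-force search fills
# the hole with compositions of primitive functions until every
# input/output example passes.

PRIMITIVES = {
    "double": lambda x: x * 2,
    "increment": lambda x: x + 1,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def run_sketch(hole_fns, x):
    """The sketch: apply the hole's functions, then a fixed '+ 3' step."""
    for fn in hole_fns:
        x = fn(x)
    return x + 3

def fill_hole(examples, max_depth=3):
    """Enumerate compositions of primitives until all examples pass."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            fns = [PRIMITIVES[n] for n in names]
            if all(run_sketch(fns, x) == y for x, y in examples):
                return names
    return None

# The search discovers that "double, then increment" fits these examples.
print(fill_hole([(1, 6), (3, 10)]))  # ('double', 'increment')
```

Note that no human wrote the winning program; a search procedure found it. That is the essence of self-programming, and the beginning of our karma problem.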

David Gunning, Technical Program Manager at Facebook writes, "Continued advances [in AI] promise to produce autonomous systems that will perceive, learn, decide, and act on their own."  That's potentially a big problem for both programmers and karma.

A simplistic definition of karma is a spiritual principle that teaches that good actions and good intent lead to good things now and in the future, while bad actions and bad intent lead to bad things now and in the future. What happens to a human programmer who empowers robots and autonomous machines with artificial intelligence, or transfers responsibility for future decisions and actions to them? Will karma eventually seek out the original human programmer of the autonomous system, long since retired and fishing on a mountain lake, to exact retribution, or direct the bad karma to the machine? It's a problem.

The US Department of Defense is also concerned enough with karma that it is investing in the development of XAI (explainable artificial intelligence). The unspoken intent is to point bad karma in the right direction. No one wants to become collateral damage to misguided bad karma. XAI can be described as a set of techniques and models that help humans (and potentially karma) understand why an autonomous system did what it did. It is a way for machines to explain why they shut down your bank account, rejected your transaction, raised your insurance rate, gave you a bad credit score, rejected your university or job application, matched you with an awful date, and forever ruined your life. Karma has to go somewhere, and the XAI folks want it going to the right human programmer or the right machine - karma's decision.
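
What might one of those explanations look like? Here is a minimal sketch in Python of one of the simplest XAI techniques: for a linear scoring model, each feature's contribution to a decision is just its weight times the applicant's deviation from the average applicant. The credit-scoring model, weights and feature values below are all invented for illustration; real systems are far messier.

```python
import math

# Minimal XAI sketch: for a linear model, each feature's contribution
# to a decision is its weight times its deviation from the average
# applicant. All weights and values here are invented for illustration.

WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "missed_payments": -2.0}
BASELINE = {"income": 0.5, "debt_ratio": 0.3, "missed_payments": 0.1}
BIAS = 1.0

def score(applicant):
    """Logistic credit score: probability of approval."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contributions, most damaging first."""
    contribs = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: kv[1])

applicant = {"income": 0.4, "debt_ratio": 0.6, "missed_payments": 0.5}
print(f"approval probability: {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

An explanation such as "your missed payments contributed -0.80 to your approval odds" is exactly the kind of rationale XAI wants machines to hand over before karma comes asking.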

More from Gunning: "New machine-learning systems [XAI] will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models." I am very concerned about what "just, moral and ethical code" is being embedded in these autonomous thinking and acting systems. I know karma is as well.

What about unintended consequences? Karma views intent as a very important consideration. Will karma give a pass to human programmers who didn't intend for the battlebot they programmed to attack and destroy innocent people?

Google is also concerned with bad karma. They are so concerned, in fact, that they are building a layer of deniability to defend themselves against it. The layer of deniability is called AutoML. To hide their true intent from karma, they use business terms to describe their motivations: "[Today] Our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large. For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise [humans]." So to avoid the time and expense, Google is outsourcing the development of AI to other machines - their layer of deniability. Methinks karma will not be so easily fooled.
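
For what it is worth, the heart of the AutoML idea fits in a few lines: replace the engineer who hand-picks a model configuration with a loop that proposes and evaluates candidate configurations automatically. The search space and scoring function below are toy stand-ins of my own invention, not Google's actual system, which searches over neural network architectures with reinforcement learning and evolutionary methods rather than blind random sampling:

```python
import random

# Toy AutoML sketch: a search loop proposes and evaluates model
# configurations automatically instead of an engineer hand-picking one.
# The search space and "validation score" are stand-ins for real ones.

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "width": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def validation_score(config):
    """Stand-in for training a model and measuring validation accuracy."""
    target = {"layers": 2, "width": 64, "learning_rate": 0.01}
    match = sum(config[k] == target[k] for k in target)
    return match / len(target) + random.uniform(-0.05, 0.05)

def random_search(trials=50):
    """Sample configurations at random and keep the best one found."""
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = validation_score(config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config, best_score

config, s = random_search()
print(f"best config found: {config} (score {s:.2f})")
```

The karmic structure, though, is the same at any scale: no human chose the winning design.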

************************************************************************
Kevin Benedict
Partner | Futurist | Leadership Strategies at TCS
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Join the LinkedIn Group Digital Intelligence

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I work with and have worked with many of the companies mentioned in my articles.
