Do Robots Have Karma?

This month, an AI (artificial intelligence) system passed a medical exam in China for the first time.  I wonder what its bedside manner will be like.  In addition, Saudi Arabia granted citizenship to a robot named Sophia.  I wonder if the robot will be granted the rights of males or females.  With all these rapid advancements, I think it is time we explore the spiritual life of robots.

Until recently, programmers coded and configured algorithms, AI, automation, and machine learning systems, and took personal responsibility for all the code.  Today, however, AI has escaped the confines of human oversight and has been empowered and employed to self-program, self-optimize, self-test, self-configure, and self-learn.  David Gunning writes, "Continued advances [in AI] promise to produce autonomous systems that will perceive, learn, decide, and act on their own."  That's a problem, not only for me, but for Karma.

A simplistic definition of Karma is a spiritual principle that teaches good actions and good intent lead to good things now and in the future, while bad actions and bad intent lead to bad things now and in the future.  What happens to a programmer who empowers or transfers responsibility for future decisions and actions to a robot - an autonomous object with artificial intelligence?  Will Karma eventually seek out the original programmer of the autonomous system, long since retired and fishing on a mountain lake, to exact retribution, or direct bad Karma to the machine?  It's worth considering, especially if you are that programmer.

The US Department of Defense is also concerned enough with Karma that it is investing in the development of XAI (explainable artificial intelligence).  The unspoken intent is to point bad Karma in the right direction.  No one wants to become collateral damage to misguided bad Karma.  XAI can be described as a set of techniques and models to help humans understand why an autonomous system did what it did.  It is a way for machines to explain why they shut down your bank account, rejected your transaction, raised your insurance rate, gave you a bad credit score, rejected your university or job application, matched you with an awful date and forever ruined your life.  Karma has to go somewhere, and the XAI folks want it going to the right programmer.
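
To make that concrete, here is a minimal sketch of the XAI idea in Python, assuming scikit-learn is installed.  The loan features, numbers, and approve/reject labels are all invented for illustration, and a decision tree is only one of many techniques that can yield an explanation; the point is simply a model that can show a human the rules behind a rejection.

    # A minimal sketch of explainability: an interpretable model that can
    # report the rules behind a rejection.  All data here is invented.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    features = ["income_k", "debt_ratio", "late_payments"]
    X = np.array([[85, 0.20, 0],    # hypothetical loan applicants
                  [30, 0.65, 4],
                  [60, 0.35, 1],
                  [25, 0.70, 6],
                  [95, 0.15, 0],
                  [40, 0.55, 3]])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approve, 0 = reject

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The "explanation": human-readable rules the model actually applies,
    # rather than an opaque score with no rationale attached.
    print(export_text(model, feature_names=features))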

More from Gunning: "New machine-learning systems [XAI] will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models."  Am I the only person concerned about what "moral code" is being embedded in these autonomous thinking and acting systems?  I know Karma is.  I recently explored this subject in an article titled What Artificial Intelligence Can Teach Us.

What about unintended consequences?  Karma views intent as a very important consideration.  Will Karma give a pass to programmers who didn't intend for the battlebot they programmed to attack and destroy innocent people?  See this very interesting video clip on the subject.

Google is also concerned with bad Karma.  They are so concerned, in fact, that they are building a layer of deniability to protect themselves from bad Karma.  The layer of deniability is called AutoML.  To hide their true intent from Karma, they use business terms to describe their motivations: "[Today] Our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10^10 candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise [humans]."  So to avoid the time and expense, Google has outsourced the development of AI to other machines - their layer of deniability.  Methinks Karma will not be so easily fooled.
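
To see how quickly that search space explodes, here is a back-of-the-envelope sketch in Python.  The layer-width and activation choices are invented for illustration, and the random scoring function merely stands in for the expensive train-and-validate step; this is the naive search that AutoML-style systems automate, not Google's actual method.

    # A rough sketch of the combinatorial explosion behind architecture
    # search.  The choice lists and scoring are illustrative stand-ins.
    import random

    rng = random.Random(0)
    layer_widths = [32, 64, 128, 256]
    activations = ["relu", "tanh", "sigmoid"]
    num_layers = 10

    # 12 choices per layer across 10 layers: 12**10, roughly 6.2 x 10^10
    # candidates - the same ballpark as the ~10^10 figure quoted above.
    size = (len(layer_widths) * len(activations)) ** num_layers
    print(f"candidate 10-layer networks: {size:,}")

    def sample_architecture():
        return [(rng.choice(layer_widths), rng.choice(activations))
                for _ in range(num_layers)]

    def evaluate(arch):
        # Stand-in for training and validating a real network - the
        # expensive step that makes exhaustive search hopeless.
        return random.Random(repr(arch)).random()

    best = max((sample_architecture() for _ in range(1000)), key=evaluate)
    print("best of 1,000 random candidates starts with:", best[:2])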

To read more articles on AI, please visit my Artificial Intelligence site.

************************************************************************
Kevin Benedict
Principal Analyst | Consultant | Digital Technologies and Strategies - Center for Digital Intelligence™
Website C4DIGI.com
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin's YouTube Channel
Join the LinkedIn Group Digital Intelligence
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I work with and have worked with many of the companies mentioned in my articles.
