Recently, an artificial intelligence system in China passed a medical licensing exam for the first time. This is a significant advance in healthcare: AI may soon be able to provide high-quality medical diagnoses remotely, anywhere in the world. Another significant step in AI and robotics happened a couple of years ago in Saudi Arabia, where citizenship was granted to a robot named Sophia. I wonder if that robot will be forced to wear a burka? With all these rapid advancements, I think it is time we explore the spiritual life of robots and artificial intelligence.
Until recently, human programmers coded and configured algorithms, AI, automation, and machine learning systems, and took personal responsibility for all of their own code. Today, however, AI has escaped the confines of human oversight and has been empowered to self-program, self-optimize, self-test, self-configure, and self-learn. David Gunning writes, "Continued advances [in AI] promise to produce autonomous systems that will perceive, learn, decide, and act on their own." That's potentially a big problem for karma.
A simple definition of karma is a spiritual principle that teaches that good actions and good intent lead to good things now and in the future, while bad actions and bad intent lead to bad things now and in the future. What happens to a human programmer who empowers, or transfers responsibility for, future decisions and actions to a robot, an autonomous machine with artificial intelligence? Will karma eventually seek out the original human programmer of the autonomous system, long since retired and fishing on a mountain lake, to exact retribution, or will it direct the bad karma to the machine? It's a problem.