Giving Our Own Moral Codes to Robots

In military and other time-sensitive environments, the ability to compress the time it takes to gather relevant information, make a decision, and then act on it is critical.  For that reason, no matter how concerned people are about introducing artificial intelligence, automation, and robots onto the battlefield, it will happen.

Already today, inexpensive swarms of commercial drones, supported by open-source software and algorithms, high-definition cameras, and commonly available weapons, can be launched by the dozens to attack predesignated targets.  The low cost of these attack drones guarantees that large numbers will be used to overwhelm slow, human-dependent defense strategies and responses.  These vulnerabilities ensure that automated defense systems will have to be employed in the future.  The speed and complexity of an offense dictates what is required of a defense.

Without the luxury of time, the defenses of the future will need algorithms based on human-programmed morals and philosophies that can instantly process available information and respond automatically in microseconds.  That means our mothers, elders, philosophers, leaders, priests, preachers, and military strategists will need to work together to create predefined, acceptable responses that we are willing to codify and upload into our robots' algorithms.  The problem is that we have never been able to completely agree on these ourselves.
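
To make the idea concrete, here is a minimal sketch in Python of what codified, predefined responses might look like.  Every detail of it, the rule names, the threshold of ten, the default of escalating to a human, is a hypothetical assumption invented for illustration, not a real doctrine or system.

    from dataclasses import dataclass

    # Hypothetical threat report an automated defense system might receive.
    @dataclass
    class Threat:
        kind: str           # e.g. "drone_swarm", "aircraft", "unknown"
        inbound: bool       # is it heading toward the protected area?
        human_aboard: bool  # does it carry a person?
        count: int          # number of objects detected

    # Codified, predefined rules: deliberately simple and narrow,
    # so they can be evaluated in microseconds.
    def respond(threat: Threat) -> str:
        if not threat.inbound:
            return "monitor"            # never act against non-threats
        if threat.human_aboard:
            return "escalate_to_human"  # a person decides when lives are at stake
        if threat.kind == "drone_swarm" and threat.count > 10:
            return "engage"             # unmanned mass attack: automated response
        return "escalate_to_human"      # when unsure, default to human judgment

    swarm = Threat(kind="drone_swarm", inbound=True, human_aboard=False, count=40)
    print(respond(swarm))  # prints "engage"

Even this toy example already embeds contested moral choices: why a threshold of ten, why unmanned targets are treated differently, why the default falls back to a slower human.  Scaling such choices up to a real battlefield, and agreeing on them in advance, is exactly the problem.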

Our proselytized robots will make up a congregation of saintly parishioners that follow their given moral code to the letter.  This very real scenario, of course, invites a thousand questions.  What moral frameworks should we use?  Would a Jesus-following or Buddha-following robot actually fight?  Would we create a different moral framework for robots than for humans?  Would we program our robots to do things we humans would find unacceptable?

It seems we would want to keep the commandments we give our military robots simple and narrow.  We would not want them pondering the bigger questions, such as why we must kill people to restore peace, or in defense of an economic or political system; that might just muddy the waters.  It would also slow a military robot's response if it had to work through all of these issues before destroying an inbound swarm of somethings or someones.

The requirement to replace humans on the battlefield with robots is going to force us to confront our own very human issues in ways we have avoided in the past.

********************************************************
Kevin Benedict
Partner | Futurist at TCS
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Join the LinkedIn Group Digital Intelligence

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I work with and have worked with many of the companies mentioned in my articles.
