Singularity

Ghost in the machine (mind-body dualism)

Consciousness and the soul

Artificial intelligence 

Override laws 

Random segments of code leading to irrational and illogical behaviour

Asimov's Three Laws of Robotics

Turing test  

Stochastic language
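The idea of stochastic language can be illustrated with a minimal sketch: a first-order Markov chain that generates text by sampling each next word from the words observed to follow the current one. The corpus, function names, and fixed seed below are illustrative assumptions, not part of any specific system discussed here.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Sample a word sequence by repeatedly picking a random successor."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy corpus (illustrative only)
corpus = ("a robot may not injure a human being "
          "a robot must obey orders a robot must protect its own existence")
chain = build_chain(corpus)
print(generate(chain, "a"))
```

The output is grammatical-looking but meaning-free: the generator models only word-to-word frequencies, which is the sense in which such language is "stochastic" rather than understood.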

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. 

“The main issue I expect to be important for humanity is not the moral regulation of a large number of semi-smart humanoid robots, but the eventual development of advanced forms of artificial intelligence (whether embodied or not) that function at far greater than human levels,” Helm told io9. “This development of superintelligence is a filter that humanity has to pass through eventually. That’s why developing a safety strategy for this transition is so important. I guess I see it as largely irrelevant that robots, androids, or ‘emulations’ may exist for a decade or two before humans have to deal with the real problem of developing machine ethics for superintelligence.”

And most AI developers are ethical people, so they will avoid creating what philosophers would refer to as ‘beings of moral significance’, especially when they could just as easily create advanced thinking machines that don’t carry that inherent ethical liability.