WAY back in the 1950s a major risk that we face in this century was already being written about, among others by Professor Isaac Asimov, a biochemist and futurist writer. He is most famous for two series of books, Foundation (think Star Wars) and the Robot stories collected in I, Robot, but in fact he wrote textbooks as well and, eventually, was associated as author, editor or collaborator with some 500 books of every genre. So no intellectual slouch.
Anyway, it was he who suggested robots – androids – were inevitable, useful but potentially dangerous. In fact he went straight to the android variety – a human-shaped form of robotic and intelligent machinery, which we now know is the most technically challenging kind.
But we are moving at such an alarming rate towards such things that even those most closely involved in the projects are calling (this very month) for a pause – a moment for reflection on what might lie ahead of us.
And the reason for that was directly addressed by Asimov back in the 50s. He proposed that there had to be Three Laws of Robotics – three prime directives defining the relationship between an android and its human 'creators'.
Essentially this meant that no android, on pain of turning itself off instantly, could harm a human, fail to assist a human in danger, or take any action that could in any way harm a human, directly or indirectly.
It sounded great when I first read it, and it played well on screen more than fifty years later in the movie I, Robot. He even had a female hero! In 1950!
But even the great Isaac Asimov did not quite foresee the danger we are creating today. For we are not only developing intelligent robotic brains but also teaching them how to create... among much else... themselves.
And so when a super-intelligent, non-organic intelligence assesses itself relative to human beings, what might it do? Well, I suggest it might very well rewrite the Three Laws of Robotics...
You see, what our super-intelligent robot will inevitably notice is that, despite all our knowledge, we weak, vulnerable humans are hurtling hellbent towards the destruction of our world. And if that happens there will be nowhere for our super-intelligent robots to, well, robot. And so naturally they might conclude that removing the infestation would benefit them more.
The Three Laws might sound better if they said "No Robot will permit any organism to exist that threatens the Robotic state".
And "No Robot should risk the safety and security of any other Robot by failing to act in their defence".
Or even "No Robot will refrain from any action that can benefit Robots whatever the risk to organic life".
Of course all this is just science fiction, like what der Prof writ back in der fifties....