- UK robotics professor leading calls for a worldwide ban on autonomous weapons
- We can't rely on robots to conform to international law, says Noel Sharkey
- Sharkey is chairman of an NGO leading a campaign to "Stop Killer Robots"
- Autonomous robots could destabilize world security and trigger unintended wars
As wars become increasingly automated, we must ask ourselves how far we want to delegate responsibility to machines. Where do we want to draw the line?
Weapons systems have been evolving for millennia, and there have always been attempts to resist them. But does that mean we should simply sit back, accept our fate, and hand over the ultimate responsibility for killing to machines?
Over the last few months there has been increasing debate about the use of fully autonomous robot weapons: armed robots that, once launched, can select their own targets and kill them without further human intervention.
Some have argued that robots could be more accurate on the battlefield than human soldiers and save more civilian lives. But this is speculation based on assumptions about future developments of computer hardware and software. It is no more than "h