ADF investigating AI weapons
The Australian Defence Force is spending more than $5 million on studies of artificially intelligent weaponry.
The ADF says it wants to design ethical killing machines, shifting the decision to kill from soldiers to programmers and engineers.
Experts have welcomed the investment in AI ethics, which will fund several years of research into the values that could help decide when a machine kills.
The Navy already uses a limited form of automated weaponry: the Phalanx close-in weapon system mounted on its destroyers, which automatically detects and fires on incoming missiles.
UNSW Canberra lead researcher Dr Jai Galliott says that just as driverless cars will become better drivers than humans, AI weaponry will become more ethical.
“The ideal is to achieve an equal or better morality as we would find in human decision-making,” Dr Galliott said.
“So part of it is understanding how human military members go about making their decisions, and where they maybe go wrong sometimes.”
He said drones could, for example, be programmed not to fire on objects bearing the Red Cross emblem, or not to shoot at children.
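To make the idea concrete, a constraint of that kind can be expressed as a hard rule that vetoes firing whenever a protected category is detected. The sketch below is purely illustrative; every class name, field and rule in it is hypothetical and is not drawn from the ADF research program.

```python
# Illustrative sketch only: a hypothetical rule-based engagement filter of the
# kind Dr Galliott describes. All names and classifier outputs are invented.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str            # e.g. "vehicle", "person" (hypothetical)
    is_child: bool        # hypothetical classifier output
    has_red_cross: bool   # hypothetical emblem-detection output

def engagement_permitted(obj: DetectedObject) -> bool:
    """Return False whenever a hard ethical constraint applies."""
    if obj.has_red_cross:   # never fire on protected emblems
        return False
    if obj.is_child:        # never fire on children
        return False
    return True             # all other decisions deferred to further checks

# Example: a marked ambulance is always filtered out before any other logic runs.
ambulance = DetectedObject(label="vehicle", is_child=False, has_red_cross=True)
assert engagement_permitted(ambulance) is False
```

The point of structuring the rules this way is that the prohibitions act as absolute vetoes evaluated before any targeting logic, rather than as one factor weighed among others.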
The money comes in part from the Defence Department's $730 million technology fund for future intelligence, space and automation technologies.