People First Media program archive

‘Roboethics’ isn’t science fiction anymore

June 23rd, 2014 | Posted by pfmarchive in uncategorized

Robotics experts are investigating the ethical implications of direct interactions between humans and robots


More than 20 Nobel Peace Prize Laureates have endorsed a joint statement calling for a ban on weapons that would be able to select and attack targets without meaningful human control. The group is warning that “lethal robots” could completely and forever change the face of war and likely spawn a new arms race. The Laureates are encouraging a public debate about the ethics and morality of autonomous weapons systems. They’ve also welcomed the establishment of a Campaign to Stop Killer Robots.

Fully autonomous weapons do not yet exist, but several robotic systems with various degrees of autonomy and lethality are currently in use by the US, China, Russia, Israel, South Korea, and the UK, and these and other nations are moving toward ever-greater autonomy in weapons systems. But will it be possible to halt or ban their further development and use?

Research into artificial intelligence is proceeding rapidly, writes Stephen Hawking (with Stuart Russell, Max Tegmark, and Frank Wilczek). The potential benefits are huge, the group says, but humanity must learn to avoid the risks that come with it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. (The Independent, May 1, 2014)

Killer, caregiving, or industrial robots

The Tyee’s Emi Sasagawa has been looking into roboethics and the issues that are emerging. She writes that “at the heart of the debate is the question of where to draw the line. Whether we’re talking about killer, caregiving (“assistive”) or industrial robots, the key issues are the same: How far are we willing to delegate human tasks to a machine? And is it possible to create machines that think and behave in ways that minimize harm to humans?”

Emi’s grandmother lives with an assistive robot in Japan—the Kabochan looks like a stuffed toy, but its appearance is deceptive. “Kabochan is highly intelligent. It talks, sings and moves in response to touch and sound. It is pre-programmed to address its owner in eight different ways, and contains several exercise modes.” Unazuki Kabochan was designed to improve the cognitive skills of seniors.

“My grandmother’s robot announces when it’s time to get up, when it’s time to eat and when it’s time to sleep. It also asks her for hugs and invites her to play games that train her motor and cognitive functions,” Emi writes.

Assistive (or welfare) robots are big business in Japan. According to Emi’s research, success in the industry has been so widespread that earlier this year the Japanese Ministry of Economy, Trade and Industry announced it expects the market for robots in nursing care, security and other services to grow from around 60 billion yen (about C$650 million) at present to 4.9 trillion yen (about C$53 billion) in 2035.

We speak with Emi Sasagawa.


RELATED | ‘Roboethics’ Not Science Fiction Anymore, The Tyee (June 14, 2014) | Thanks to My Grandma’s Robot, The Tyee (June 9, 2014) | Talking robot boy keeps lonely elderly people company, The Asahi Shimbun (June 27, 2012) | Transcendence looks at the implications of artificial intelligence—but are we taking AI seriously enough?, The Independent (May 1, 2014) | Nobel peace laureates call for preemptive ban on “killer robots”, Nobel Women’s Initiative (May 12, 2014) |

video

YouTube: Smile Supplement Robot “KABO-chan” Full (image courtesy of Pip)

You can follow any responses to this entry through the RSS 2.0 feed. Both comments and pings are currently closed.