Story highlights

As technology gets more advanced, our relationship with it is changing

Scientists found people think automatic doors that pause before opening are smarter than ones that don't

Some people fear giving up control to self-driving cars, but others trust them too much

San Jose, California (CNN) — 

Computers are evolving. We have voice-controlled assistants on our phones, telepresence robots for when we can’t make it to a meeting in person, and self-driving cars that are headed to a road near you.

These machines aren’t just taking over human tasks. Computerized systems are also taking on more human characteristics. As technology gets more advanced, how will our relationships with it change?

People are funny when it comes to automated devices, whether they’re automatic doors or humanoid robots. We’ll give them names and personalities, see them as cute or creepy, trust them with our lives and even get mad at them.

This was a prevailing theme at The Atlantic’s recent Big Science Summit in San Jose, California. For starters, take the example of a smart thermostat that was a little too smart. The popular Nest thermostat, created by the designer behind the iPod, uses sensors and information about your behaviors over time to maximize energy efficiency in the home.

During beta testing, the engineers tried having the Nest automatically set a heating and cooling schedule out of the box, based on the pattern it knew would be most energy efficient, according to Yoky Matsuoka, Nest’s vice president of technology. The device imposed this efficient schedule on users under the assumption that they would learn to adapt to it.

People hated it.

If they felt a touch too cold, they would rebel against the thermostat, manually turning it up even higher than they normally would, wasting energy. The homeowners wanted to feel that they were in control, and were unhappy having something make the decisions for them.

The feature was changed and the final version of the Nest starts with a blank slate, waiting to learn the user’s patterns and preferences.

Smarter doors

Wendy Ju, a researcher at the Stanford Human-Computer Interaction Group, studies how people perceive and react to computers.

“Robots seem like weirdos,” said Ju at the Big Science Summit. She’s experimented with tweaking computerized systems so that they seem less like “strange creatures.”

In one experiment, Ju’s group rigged automatic doors to open in different ways: Some would open slowly, then pause before fully opening; others would immediately jerk all the way open. The people walking by the doors assigned them different levels of intelligence, and thought the doors that opened in two steps just seemed smarter.

It turned out that adding the pause gave the illusion of forethought, even though it was just an extra programming step. People thought the door was more intelligent because it appeared to think before carrying out an action.

In another experiment, Ju’s group found people were twice as likely to use a public information kiosk when it had a waving robot hand attached to it. The physical movement made the kiosks seem more approachable.

Understanding these little human quirks is the key to making better computers and robots in the future, and to getting people to embrace using them, scientists say.

Science fiction is filled with conflicting depictions of smart computers. For every benign system, like “Star Trek’s” tea-making computer, there is a more nefarious example like “The Terminator’s” Skynet. It’s no surprise people are wary of computers becoming smarter than us, taking power from us and doing tasks we do perfectly well already, thank you very much.

Self-driving cars

Like driving. Self-driving cars have been a concept for many decades, but in recent years they finally jumped from idea to reality, pushed by a contest started by the Defense Advanced Research Projects Agency in 2004. Search giant Google has been testing automated vehicles on the roads in Nevada and California, but it's not the only company working on self-driving cars.

The Center for Automotive Research at Stanford (yes, CARS) is also experimenting with autonomous vehicles. The group started testing an autonomous Audi TTS on the wide-open Bonneville Salt Flats in western Utah, where there are no pesky obstacles. Next the car went up Colorado's Pikes Peak, a famously twisty mountain road that zigs and zags unpredictably for 12.4 miles. Finally the researchers took it to northern California's Thunderhill Raceway, where it hit 115 mph without a driver, and without crashing.

Every successful test builds hope that these cars can eliminate human driving errors, which are responsible for 90% of auto accidents. But part of selling the idea is making it clear that even a street of all automated cars, communicating with each other, won’t be absolutely safe.

“It’s hard to make software that is perfect. If your software has an error, that could be fatal in a self-driving car,” said Chris Gerdes, director of the center, at the science summit. For example, he said, the vehicles’ pedestrian-detection system must analyze a huge variety of human shapes – when driving through San Francisco on Halloween, for example.

It’s generally thought that one of the biggest hurdles for self-driving cars is convincing people to trust them. However, in their experiments, CARS researchers have also seen a surprisingly different reaction: people putting too much confidence in the cars.

When the self-driving technology moves from testing into daily use, it will likely start small, appearing in safety features in cars. By 2015, we could have automated traffic-jam assistants that help with driving in stop-and-start traffic, freeing the driver up to text or read. But if drivers are 100% comfortable turning over control to the car, how do you get their attention back?

“I’m worried there’s not much of a gray area,” said Gerdes.

Life-like robots

One researcher who is moving far beyond just trying to make a system seem smart or trustworthy is David Hanson, CEO of Hanson Robotics. Hanson wants to make a lifelike robot that has human-level intelligence.

“It benefits people to humanize our technology,” said Hanson at the Big Science Summit. “We discover things about ourselves.”

Visually, he's very close to reaching that goal. Using a substance called Frubber and his own background in animation, Hanson has created incredibly (some might say creepily) lifelike heads that can mimic subtle facial movements and expressions.

The robots are loaded with personality profiles and can hold real-time conversations by drawing on a database of dialogue produced by creative writers.

Ideally, Hanson would like to make the robots look and act so human that people would be able to form relationships with them.

But that level of intimacy with a robot isn’t for everyone.

“I think that the number of people I want to have that deep relationship with is small, maybe 10,” said Ju.