Visions of robot wars and an apocalyptic future dominate the popular imagination.
Billionaire futurist Elon Musk tweeted last weekend that artificial intelligence could be more dangerous than nuclear weapons.
While it's a scary thought, Vlad Sejnoha, the chief technology officer with Nuance Communications, a company that works with artificial intelligence, said he's an "optimist."
"I am skeptical this amping up of the fear factor of when it comes to artificial intelligence," he told ABC News.
"If you look at the history of technology, we have been subject to immense changes," Sejnoha said. "Life would be unrecognizable to a lot of folks who came before us."
Sejnoha shared three ways artificial intelligence is already working to help, not hurt, humans.
Get What You Want, Now
Whether it's driving directions, movie reviews or scheduling a time for the cable guy to come by -- getting the information you want takes several steps.
Why not leave it to an artificially intelligent machine?
"One simple idea is you cut through that clutter with simple spoken requests," Sejnoha said.
"You should be able to do that across lots of different devices," he said. "You could look up a sports score while driving in the car and when you drive home you can tell the TV to turn that exact game on. It could also become your agent to controlling other smart things."
The bottom line:
"I think we will have the ability to transact with our surroundings but also get information in very seamless ways," he said.
They Care About Your Health
There are artificially intelligent systems already being used by doctors, Sejnoha said.
"We’re making it possible for doctors to take notes or document the patient encounters, putting those facts into electronic health records and then helping the doctor provide quality care by noticing discrepancies."
An Artificially Intelligent Friend
The movie "Her," showed the deep emotional bond a man developed with Samantha, his personal operating system. It's a scenario that isn't inconceivable in real life, to an extent, Sejnoha said.
"A lot of users form a bond with their virtual assistants. I think its entirely plausible some folks might view them as a companion," he said. "As we build more and more intelligent systems that can perform more sophisticated tasks, the question is: Should we make them human-like or neutral software tools? There is an argument being made to emulate some human characteristics."