A.I. can do almost anything now, but here’s 6 things machines still suck at


Predicting what A.I. and computers aren’t capable of is a fool’s errand. Throughout A.I.’s 60-year history, skeptics have attempted to single out tasks that they think machines will never be able to achieve. Such tasks have ranged from playing a game of chess to generating pieces of music to driving a car. In almost every instance, they have been proved wrong — sometimes profoundly so.

But as amazing as A.I. is here in 2018, there are still things that it is most assuredly not able to do. While some are more frivolous than others, they all showcase some part of machine intelligence that’s currently lacking. Here are six examples which highlight how much more there is to do.

Writing funny jokes

If you’re an A.I. researcher reading this, consider this one the (possibly) low-hanging fruit, tantalizingly within your reach. After all, writing a decent joke should be easy, right? Tell that to the creators of every joke-generating A.I. attempted so far.

Earlier this year, one intrepid coder trained a neural network on more than 43,000 jokes and asked it to invent new jokes. A representative, laughter-defying sample goes: “What do you get when you cross a cow with a rhino? A bungee with a dog.”


Whether a joke is funny or not is hugely subjective, but even the biggest bungee enthusiast is unlikely to find too much to chuckle about there. IBM’s Jeopardy!-playing A.I. showed that machines can be made to understand linguistic complexities such as multiple meanings of the same word. But so far, not to purposely humorous effect.

So here’s a challenge: Get an A.I. to write and then deliver a 3-minute set of comedic material that makes 50 percent of its (non-coder) human audience laugh. And no joke stealing allowed, either. That means you should probably throw out your Carlos Mencia training data.

Writing good novels

The rise of companies like Narrative Science and the use of algorithms for sports reporting shows that writing is not out of reach for today’s computers. But we don’t expect to see a machine write a novel yet, regardless of whether that’s chart-topping popular genre fiction or highfalutin literary fiction.


Writing either one requires more than generating text to relay fragmentary scraps of information, such as the score in a local football game. It means composing a narrative (or willfully subverting that idea) that resonates with readers, and then figuring out the best way to tell it.

There are some fascinating demonstrations of A.I. used to write prose. There are some very silly ones as well. But we’re not holding our breath for either a computational Jane Austen or J.K. Rowling any time soon. If ever.

Formulating creative strategies

On one level, this simply isn’t true. As Google DeepMind’s game-playing A.I. demonstrated, when it comes to things like playing Atari video games, intelligent agents versed in reinforcement learning can indeed formulate optimal strategies. I’m also of the belief that creativity isn’t an untouchable area for artificial intelligence.
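DeepMind’s Atari agents used deep neural networks, but the core reinforcement-learning loop — act, observe a reward, update your value estimates — can be shown in miniature. Here is a hedged, minimal sketch using tabular Q-learning on a made-up five-cell corridor where the agent is rewarded only for reaching the far end (every name and number here is illustrative, not DeepMind’s setup):

```python
import random

N_STATES = 5
ACTIONS = [-1, +1]                        # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration
rng = random.Random(0)

for _ in range(500):                      # training episodes
    s = 0
    while s != N_STATES - 1:
        if rng.random() < epsilon:        # explore occasionally...
            a = rng.choice(ACTIONS)
        else:                             # ...otherwise act greedily (random tie-break)
            best = max(Q[(s, b)] for b in ACTIONS)
            a = rng.choice([b for b in ACTIONS if Q[(s, b)] == best])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy heads right from every cell.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)}
print(policy)
```

Nobody told the agent that “go right” was the strategy; it discovered that from reward signals alone. That’s the sense in which these systems formulate strategies — within a narrow, fully specified game, which is precisely the limitation this section is about.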

What I’m talking about here, however, is the ability to formulate the kind of creative strategies that, for instance, define a great lawyer’s ability to form unique arguments or a top CEO’s ability to lead his or her company in bold new directions.

This isn’t just about analyzing data; it’s about venturing into unstructured problem-solving tasks, and deciding which pieces of information are relevant and which can be safely ignored.

Excelling at these tasks also frequently requires the ability to…

Being human

Tough goal, right? No, we don’t mean this literally: If a machine needed to literally be a human to be considered intelligent then it would never happen. Instead, this refers to traits like compassion and an ability to tap into the things which drive us as human beings.

Machines are getting very good at identifying individual users’ emotional states through things like facial expressions and vocal patterns. They can then use this insight to modify how they interact with us, such as recommending us certain playlists when we feel sad or happy.


But as good as computers might be getting at identifying diseases like cancer, would you choose one, instead of a human doctor, to tell you that you are dying of a terminal illness? On the lighter side, books and movies like Moneyball show us how data analytics can pick out winning sports teams. But could an A.I. be a top-level sports coach? These are important human roles, and they’re ones that are going to remain human for the foreseeable future.

If machines can’t adopt these skills, it’s going to limit the scope of what they can achieve in the workplace. (On the plus side, maybe that throws a lifeline to humans!)

Making a cup of coffee

Bear with us for a second here. Yes, there are plenty of smart coffee machines out there, but that’s not what we’re referring to. The Coffee Test was put forward by Apple co-founder Steve Wozniak as a measure of multiple aspects of machine intelligence and robot dexterity. The test Wozniak describes involves a machine entering a random American home, finding the coffee machine, adding water, finding a mug, and brewing a coffee by pushing the correct buttons.


What I like about this test is how measurable it is. Other attempts to quantify an artificial general intelligence either focus on philosophical abstractions (the Turing Test) or have already arguably been met (Nils John Nilsson’s proposed Employment Test). Wozniak’s test requires high performance in areas like image recognition, but it also needs a generalized, multi-purpose intelligence. This hasn’t been achieved yet.

Beating human sports teams

Consider this a bonus one. That’s because, like making a cup of coffee, it’s not just about A.I., but also about its related field of robotics: the hardware yang to software’s yin.

In the same way that A.I. must be generalized if it’s ever going to be considered intelligent, then robots must also be multi-purpose if they’re going to fully live up to their potential. We’re starting to see this with the likes of Boston Dynamics’ Atlas robot, which is as happy performing backflips as it is jogging or carrying out a bit of parkour.

But while A.I. has performed intellectual feats like defeating grandmasters at chess or winning games of Go against the best players in the world, the same hasn’t proven true for robots. Will we ever see a team of robots beat a human soccer team? Considering the speed and myriad skills that game entails, it seems a long way away.

Are we wrong?

It’s not often that I reach the end of an article and think, “Boy, I really hope I’m wrong about this one.” In this case, however, I really mean it. These are not, in my view, universal truths that can never change. The extraordinary progress of A.I. shows us how rapidly things are moving. More to the point, much of this progress has specifically been a response to statements like “a computer will never do X.”

If you think there is evidence that one or more of these statements is untrue, let me know. Because, as we move forward in A.I. research, these are some of the hurdles that will need to be dealt with, especially as we begin working with A.I. and robots in the workplace on a regular basis.
