Machine Learning: Making Sense of a Messy World

You ready to start right now?

Oh yeah, yeah. Yeah, yeah. Thank you.

[MUSIC]

There have been a number of shifts in the way we think about computing over the past few decades.

The term "artificial intelligence" has come in and out of favor in the scientific community.

Sometimes it's called machine learning.

We tend to call it machine intelligence these days.

I just call it intelligence.

And sometimes it's just the effort to build machines that are better.

So in the early days, everything was built on logic.

Doing mathematical integration problems. Playing chess.

But we realized that the real challenges were the things that people do every day.

The real world is actually very messy. Hard logical rules are not the way to solve really interesting real-world problems.

You have to have a system that will learn to get the knowledge in. You can't just program it all in.

Artificial intelligence is an effort to build machines that can learn from their environment, from mistakes, and from people.

And we're still at the stage where we don't know what the right path or the right breakthrough will be.

So I mean there's certainly a whole raft of different approaches.

One of the subfields is what we call pattern recognition.

Artificial neural networks.

Reinforcement learning, for example.

Statistical inference and probabilistic machine learning.

Supervised learning. Unsupervised learning. And we're not quite sure what technique is going to lead to better systems. And, in fact, it's probably not one technique for everything; it's probably a bunch of different techniques and combinations of those techniques.
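
To make the supervised/unsupervised distinction above concrete, here is a minimal sketch in Python. It assumes scikit-learn and NumPy are available (neither is mentioned in the video), and the toy data, labels, and printed examples are invented purely for illustration.

# Minimal sketch: supervised vs. unsupervised learning on toy 2-D points.
# Assumes scikit-learn and NumPy are installed; the data is invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Two loose groups of 2-D points: group A near (0, 0), group B near (3, 3).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2)),
])

# Supervised learning: the labels (the "answers") are given, and the model
# learns a rule that maps features to labels.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print("supervised prediction for (2.8, 3.1):", clf.predict([[2.8, 3.1]])[0])

# Unsupervised learning: no labels at all; the model has to discover the
# two-cluster structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print("unsupervised cluster sizes:", np.bincount(clusters))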

Any progress we make in building truly intelligent systems is going to depend on progress in technology generally.

And until recently, we didn't have computers that were fast enough or data sets that were big enough to do that.

And so being able to take a particular problem and spread it out over lots and lots of machines is a very important approach, because it makes our research faster.
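
The "spread it out over lots of machines" idea can be sketched on a single machine with Python's standard-library multiprocessing module; real systems distribute the chunks across many machines, but the scatter/gather pattern is the same. The corpus, keyword, and chunking below are invented for illustration.

# Minimal sketch of splitting one big job into chunks processed in parallel.
# A toy stand-in for distributed computation, run with local worker processes.
from multiprocessing import Pool

def count_keyword(chunk):
    """One unit of work: count a keyword in one chunk of text."""
    return chunk.count("learning")

if __name__ == "__main__":
    # An invented corpus, split into chunks that could live on different machines.
    corpus = ["machine learning means learning from data"] * 8
    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_keyword, corpus)  # scatter the work
    print("total occurrences:", sum(partial_counts))      # gather the results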

So there are applications of artificial intelligence around us all the time.

When it begins to work, it's all of a sudden given another name.

We're all already using it and very comfortable with it.

Things that we now regard as routine would have been regarded 30 years ago as amazing examples of artificial intelligence.

Antilock braking.

Autopilot systems for planes.

Search.

Recommendations.

Maps.

Deciding whether a particular email is spam or not.

The ability to translate from one language to another with your phone.

Ten years ago, if you tried to talk to your computer or to your phone, you know, that would just be hopeless.

We are seeing a steady torrent of these tricks getting figured out right now, one after the other.

I think a lot of people who are close to the field do have that kind of breathless sense that things are moving quickly.

It's a progressive thing. It's about building things that are slightly better, slightly better, slightly better.

Intelligence is really not going to be something that we ever succeed in defining in a succinct and singular way. It's really this whole constellation of different capabilities that are all beautifully orchestrated and working together.

Predicting the long-term future is very difficult.

Nobody can really do it.

And the bad thing to do is take whatever's working best now and assume the future's going to be like that forever.

[MUSIC]