And that doesn’t mean better, just greater than.  Not many people realize there’s actually a difference, since the words are used mostly interchangeably.  Artificial Intelligence, as a field of study, is larger than Machine Learning.  In fact, ML is a specific subset of AI.  I wasn’t too clear on where exactly the line was drawn, but it turns out the accepted industry answer is that you can have AI without ML, but whenever you’re doing ML, that’s a form of AI.  Machine Learning can be broken down further into sub-fields like Deep Learning, but that’s just getting fancy. This is a short blurb about wtf those buzzwords actually mean, with examples of how they’re applied today.

Artificial Intelligence – Intelligence displayed by machines, in contrast to “natural” intelligence as displayed by humans and other animals

Machine Learning – Using statistics and other fields of mathematics to make computers “learn” how to better perform a task

So what do these actually mean?  We could go way down the rabbit hole on the first one, but I don’t think this is the time or place.

To understand what AI means, let’s start with a dictionary definition of intelligence:

1) The ability to learn or understand or to deal with new or trying situations – also, the skilled use of reason

This seems pretty standard.  The point about dealing with new or trying situations is an important one.  As I’ve written in past articles, computers are just dumb machines: they do exactly what we tell them to do, whether or not we told them the right thing.  One consequence of this is that computers are extremely bad at doing things they’re not explicitly told to do. AI is the art of getting a computer to make these kinds of inferences and deal with these new situations without having to hard-code every single algorithm.  Emergent behavior is a frequently studied topic in mathematics and computer science, and AI has a lot to do with it. (And while we’re on the topic, here’s a plug for dynamics, my favorite branch of math, which encompasses chaos theory and some more good stuff – go read about it!)

2) The ability to apply knowledge … to think abstractly as measured by objective criteria (such as tests)

Ok, here’s where it gets trippy.  What does it mean for a computer to think abstractly?  What does it mean for humans to think abstractly? Is this abstract thinking right now?  Whoa, dude.

In my opinion, thinking abstractly as measured by tests is best exemplified by AI playing games against humans at top competitive levels.  Any nerd can code up a checkers AI or a bot that plays poker, but it’s extremely hard to build one that’s the best at games like Chess or Go.  These games require not just knowledge of the current board state, but foresight into what the opponent is currently trying to do and what they could start to do as the game develops.  Go in particular has so many board states (~2 × 10^170, or a 2 with 170 zeros after it) that there would be no way to articulate to the computer all of the possible strategies it could use, let alone defend against.  As a solution, the ML nerds over at DeepMind built an AI called AlphaZero that they trained solely by having it play games against itself, seeing what worked and what didn’t.  This way, humans didn’t need to pre-program every strategy; the computer came up with its own.  This is the aforementioned emergent behavior: the AI discovered strategies that humans hadn’t thought up in thousands of years of playing the game.  People were actually learning new things from a machine that spent weeks to months in an incubation stage, training itself and developing a base of “knowledge” about a game.  This is the very essence of artificial intelligence.
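That self-play loop can be sketched in a few lines of Python. To be clear, this is not AlphaZero (no neural networks, no tree search); it’s a toy, hypothetical tabular learner for the game of Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins). But the core idea is the same one described above: the program plays games against itself, then nudges the value of every move toward the game’s eventual win or loss.

```python
import random

def train_self_play(pile_size=10, episodes=30000, alpha=0.1, eps=0.2):
    """Self-play on Nim: two copies of the same policy play each other,
    and each move's value is nudged toward the game's final outcome."""
    # Q[(stones_left, move)] = estimated value of that move for the player to act
    Q = {(s, m): 0.0 for s in range(1, pile_size + 1) for m in (1, 2) if m <= s}
    for _ in range(episodes):
        stones, history = pile_size, []
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if random.random() < eps:                    # explore: try a random move
                move = random.choice(moves)
            else:                                        # exploit: play the best-known move
                move = max(moves, key=lambda m: Q[(stones, m)])
            history.append((stones, move))
            stones -= move
        # The player who took the last stone won.  Walk backwards through the
        # game, rewarding the winner's moves and penalizing the loser's.
        outcome = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += alpha * (outcome - Q[(state, move)])
            outcome = -outcome                           # players alternate each ply
    return Q

def best_move(Q, stones):
    """Greedy move from the learned table."""
    return max((m for m in (1, 2) if m <= stones), key=lambda m: Q[(stones, m)])
```

Nobody tells the program that leaving the opponent a multiple of 3 stones is the winning strategy; after enough self-play games, that strategy emerges from the value table on its own.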

Now, where does Machine Learning come into this?  The Go AI was certainly an example of machine learning: the researchers defined a reward system so that when the machine did things correctly, it was more likely to remember that and do the same thing again in another game.  Similarly, in applications like computer-assisted medical screening, AIs are taught using what’s called reinforcement learning.  Computers make predictions based on their current knowledge base; if they’re correct, they’re “rewarded”, and if they’re incorrect, they’re “penalized”.  Of course, the researchers aren’t stuffing dog treats into CD drives, but you get the point. The algorithms are built to learn from their failures, as defined by humans or by arbitrary criteria like winning or losing a game.  This shows how ML is just one way to train an AI.

At this point, I would love to start diving into all the different forms of machine learning, but that’s enough material for its own post (or even a series of posts).  Hopefully this helped demystify what people mean when they talk about AI or ML. Stay tuned, and we’ll actually go down the rabbit hole and see what’s going on.
