People often say "we don't even know what intelligence is" when talking about Artificial Intelligence, AGI, Deep Learning, and so on. They're right! We haven't agreed yet on what intelligence is. But that doesn't matter because, like gravity, we can measure what it does.

Despite not agreeing on what gravity is, we know what it does, and we've learned how to measure it. Precisely measuring gravity is the next best thing to knowing exactly what it is.

By knowing exactly what intelligence does, we also discover how to measure it. With a reliable measure of intelligence, we gain a reliable guide toward how to get more of it. And what is AGI if not a quest to get a lot more intelligence?

In 2007, Shane Legg, who is now primarily known as a co-founder of DeepMind, published a definition of intelligence that DeepMind still references, so it's probably a pretty good one.

Informally, Legg defines intelligence like this:

Intelligence measures an agent's ability to achieve goals in a wide range of environments.

This is a fine enough definition, but not especially actionable. He goes on to formalize this into a measure of machine intelligence that's not especially accessible:

It's machine intelligence, capiche?
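For the curious, the formal measure (from Legg and Hutter's paper "Universal Intelligence: A Definition of Machine Intelligence") looks like this:

```latex
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here \pi is the agent, E is the set of all computable reward-bearing environments, K(\mu) is the Kolmogorov complexity of environment \mu (so simpler environments count for more), and V_\mu^\pi is the expected reward the agent earns in \mu. In plain terms: average performance across all environments, weighted toward the simple ones.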

We can do better than this.

The one thing intelligence does

While we can each think of a few things that intelligence does, most of them are fairly difficult to measure.

Back in 1994, Linda Gottfredson surveyed 131 intelligence experts and managed to get 52 of them to agree on this phrasing:

Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.

As you might expect, it's generic enough to tell us nothing we didn't already know, listing seven abilities we can expect from an intelligence. When we're discussing AGI, seven seems like too many to start with.

If we can boil Legg's definition of intelligence down to an action, it's "achieving goals". Unfortunately, that's overly general and difficult to measure. After all, how can we measure progress toward difficult or unlikely goals like achieving AGI? We have no point of reference.

I believe that to get to AGI, we must answer this question: what single, specific thing does intelligence do that encapsulates all the general things it does, yet remains measurable?

Here's how I phrase the one thing that intelligence does:

Intelligence alters the future choices of other intelligences to favor its own interests.

Put another way:

The more choices in other people that you're able to alter in your favor, the more intelligent you prove yourself.

Now, I bet you already disagree with me. Altering choices in other people? Is that all? Why that?

What about achieving goals? What about reasoning, planning, solving problems, and thinking abstractly?

Surely intelligence does these things!

What about figuring out how to survive in a complex, fast-changing environment? What about solving hard problems? What about delayed response to stimulus? What about detecting patterns in noisy environments and making accurate predictions?

Surely intelligence does these things too!

Yes, intelligence does all that and more, but those abilities are precursors to, and consequences of, its core function: altering the choices of other intelligences.

The nice thing about settling on "altering choices" is that it applies equally to human and artificial intelligences. It's surprisingly measurable for both, and it allows meaningful comparisons among humans, among different AIs, and between versions of the same AI over time.

The building block of survival

As everyone knows, we humans are social creatures. We depend on other people for our sustenance, shelter, and technology. We need help, support, or at least cooperation from other people to accomplish our goals in life, whether small or large.

So if there's one thing we must do well, it's learning to make choices that smooth our own way. That means making more of the choices that cause other people to choose in our favor rather than someone else's, however slightly. It's the bottom line for everyone.

Want to get that good job? You'll have to say and do the right thing, sometimes years in advance and certainly during the interview process, to influence the hiring manager's choice in your favor.

Naturally, you figured out how to survive adulthood to that point, solving many hard problems (with help) along the way. You achieved goals, reasoned, thought abstractly, learned from experience, and all the rest. But that doesn't get you the job unless you also arranged it all to result in a favorable choice by the hiring manager, which means that you persuaded better than your fellow job-seekers.

Despite all else we tend to think intelligence does, it comes down to altering choices in other people in your favor in a competitive environment. You cannot thrive and may not survive unless you alter enough choices in enough other people, especially the right people.

Measuring the one thing intelligence does

"But is choice alteration measurable?" you might be wondering. Indeed it is, both directly and by proxy. Furthermore, like gravity, the measurement begins at zero and can be as fine as we wish.

Direct measurement

Direct measurement is possible in a digital environment.

One example is online advertising: if you put an ad in front of someone and they click through to buy what you're selling, you've altered their choice. Otherwise they would have gone on with their day, bought from a competitor, or made any number of other choices in that moment. Because the process is digital, it's all recorded for later analysis.

The more choices you alter this way, the more revenue you receive and the longer your business can survive. With all the details recorded, you get great feedback about what worked and what didn't, so you can adjust your ad, keyword, and target audience as needed.
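As a toy sketch (with hypothetical numbers, not anyone's real ad data), "choices altered" by an ad can be estimated as the lift of the group that saw the ad over a control group that didn't:

```python
# Toy illustration: estimating "choices altered" by an ad as the
# lift of a treatment group (saw the ad) over a control group.

def altered_choices(treated_buyers, treated_total, control_buyers, control_total):
    """Estimate how many choices the ad altered, via lift over the control rate."""
    treated_rate = treated_buyers / treated_total    # buy rate with the ad
    baseline_rate = control_buyers / control_total   # buy rate without it
    lift = max(treated_rate - baseline_rate, 0.0)    # extra buys attributable to the ad
    return lift * treated_total                      # expected number of altered choices

# 10,000 people saw the ad and 300 bought; 10,000 didn't see it and 120 bought.
print(round(altered_choices(300, 10_000, 120, 10_000)))  # → 180
```

The control group is what makes this a measure of alteration rather than mere correlation: it approximates the choices people would have made anyway.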

Another example is social media: if you post a beauty of a pic on Instagram and get a thousand likes and a hundred thousand views, that's at least a thousand people (barring bots) whose future choices have been slightly altered in your favor. All of it is on record. Now more people are likely to send you a DM, ask you out on a date, buy your product, or recommend that their friends follow you too.

People are more likely to talk to you at a party, be friendly toward you, help you if you ask, and maybe recognize you on the street (and help you there too). None of these impacts are necessarily large or likely, but they represent a real difference to your prospects.

Proxy measurement

Measurement by proxy is what we used to do before we went digital. We still do, but we used to too. We use dozens of proxy measures, including:

  • How many friends do you have?
  • How many people laugh when you tell a joke?
  • How many people can you expect at your funeral?
  • How many people voted for you?
  • What's your net worth?
  • How many employees report to you?
  • How many goals do you score a game?
  • How often do you win?

And many more. Each of these measurements represents our ability to alter the choices of other people in our favor.

With measurement comes guidance and growth

When working toward stronger AI, and especially AGI, I believe it's critically important to have a clear measure of intelligence. Otherwise, we have no idea whether we're even making progress.

With a clear measure, we no longer need to stick with what we know and shy away from making mistakes.

With a clear measure that begins at zero, we gain the impetus to perform vast multitudes of small and safe experiments because we can now differentiate the good results from the bad. When vast multitudes of small and safe experiments are on the table, we are no longer blind to the idea that an AGI can grow from the ground up via evolution.

It's a huge problem that we can currently imagine getting to AGI only by the cathedral method, not the bazaar method. I believe it's a major cause of our current logjam: all acceptable approaches to AGI are so complex that only extremely well-funded organizations have a chance of solving the problem. Then we see these same organizations frantically working in secret like villains, desperately hoping to surprise the rest of us with a world-dominating AGI that captures all the rewards.

Thankfully, as we know from our experience with gravity, we don't have to know exactly what intelligence is before we can measure it. With the right measure, we give ourselves the chance to discover what intelligence is and whether it even matters.

With the right measure, we also open ourselves up to a vastly safer approach to AGI: the bazaar method, where millions of individuals can grow and improve their own tiny kernels in parallel, one small step at a time, each always easily replaced by an improved version.

So again, here is the view of intelligence that I believe gives us that measure:

The more choices in other people that you're able to alter in your favor, the more intelligent you prove yourself.