The Quest to Quantify the Unquantifiable


For centuries, humanity has attempted to measure and categorize intelligence, driven by our innate desire to understand what makes us unique as a species. This pursuit has taken on new urgency in the age of artificial intelligence, as we grapple with fundamental questions about machine cognition and consciousness. But our historical approaches to measuring intelligence may be leading us down the wrong path entirely.

The Western philosophical tradition, dating back to ancient Greece, has long equated intelligence with rational thought. Aristotle treated reason as the defining human capacity, and later thinkers built on that framework to construct hierarchies of intelligence that placed educated European men at the apex. This paradigm still shapes how we think about and measure intelligence today.

The Limitations of Traditional Testing

Consider how we currently assess intelligence in different domains. For humans, we rely heavily on standardized measures like IQ tests and academic examinations. For animals, we devise elaborate experiments to test their problem-solving abilities. And for AI systems, we create benchmarks based largely on human performance metrics. But do any of these approaches truly capture the essence of intelligence?

According to cognitive scientist Erica Cartmill, our human-centric view of intelligence often blinds us to other forms of cognitive capability. “We are very good at particular things and we tend to place more value on the things that we’re good at,” she explains. This bias leads us to overlook or undervalue different types of intelligence that don’t fit our preconceptions.

Take elephants, for instance. For years, scientists were puzzled by elephants’ ability to coordinate their movements across vast distances, until they discovered that the animals communicate through low-frequency rumbles that propagate through the ground and are detected by specialized cells in their feet. This represents a form of intelligence perfectly adapted to their environment and needs, yet entirely different from human capabilities.

The AI Assessment Challenge

When it comes to artificial intelligence, our measurement challenges become even more complex. Modern AI systems like large language models can pass sophisticated tests designed for humans, including bar exams and medical licensing tests. But as Brown University professor Ellie Pavlick points out, “What it means when a person passes the MCAT or scores well on the SAT is not the same thing as what it might mean when a neural network does that.”

The fundamental issue is that we don’t understand the mechanisms behind AI success on these tests. While human performance on standardized tests generally correlates with broader cognitive capabilities, we have no such assurance for AI. These systems may be pattern-matching their way to correct answers without any deeper understanding of the concepts involved.

Looking Under the Hood

This has led researchers to develop new approaches for assessing AI systems, focusing more on understanding their internal operations rather than just their outputs. One promising technique is “path patching,” which Pavlick describes as “kind of like an fMRI” for neural networks. These methods allow researchers to identify which components of the network are most active in different types of decisions.

But even these more sophisticated analytical tools bump up against fundamental questions about the nature of intelligence itself. As Linda Smith of Indiana University notes, “What does it mean to be innate? I once had a discussion with Susan Carey… in which the claim was made that as long as something was nearly universal and pretty hard to stop from happening, it was innate.” This definition, while seemingly straightforward, breaks down under closer examination.

The Need for New Frameworks

The challenge ahead lies not just in developing better ways to measure intelligence, but in reconceptualizing what we mean by intelligence in the first place. Current approaches often treat intelligence as a single, quantifiable characteristic that can be ranked on a linear scale. The reality is far more complex.

Intelligence might better be understood as a constellation of capabilities, each evolved or developed to serve particular needs in particular contexts. This perspective helps explain why different species and different artificial systems can display remarkable capabilities in some areas while appearing limited in others.

Looking Forward

As we continue to develop more sophisticated AI systems, we need equally sophisticated ways of understanding and measuring their capabilities. This might mean moving away from human-centric benchmarks toward more nuanced frameworks that can account for different types of intelligence and different ways of processing information.

The goal shouldn’t be to determine whether machines are “more” or “less” intelligent than humans or other animals, but to understand the unique characteristics and capabilities of each type of intelligence. This understanding will be crucial as we work to develop AI systems that can complement human intelligence rather than merely trying to replicate it.

As Pavlick suggests, we might need to be comfortable with uncertainty for a while: “I wish we could all just be like, let’s all just wait a decade… I get the world wouldn’t allow that, but I wish we could just do that.” In the meantime, we would do well to approach the measurement of intelligence, whether human, animal, or artificial, with more humility and nuance than we have in the past.

The future of intelligence assessment likely lies not in creating more sophisticated tests, but in developing better frameworks for understanding the diverse ways that intelligence can manifest. Only then can we move beyond our historical biases and begin to appreciate the full spectrum of cognitive capabilities that exist in both natural and artificial systems.


Read more of Joseph Byrum’s work.
