What Machine Learning Is Teaching Us About Human Learning


Researchers have known since 1943 that “artificial neurons” can carry out logical functions, an early hint that machines might one day learn the way humans do. The term “artificial intelligence” has been around since its introduction at a 1956 conference at Dartmouth College. But only in the past several years have we seen theory put into practice the way those researchers imagined: we now have machines that can translate languages, compose music, write novels, and drive vehicles.

So what might the implications of these developments be for educators and students?

The primary goal of AI research may be to teach machines how to learn, automating some of the tasks that complicate our everyday lives. But brain scientists say the insight flows both ways: we now know more about human learning as a result of machine learning, and that knowledge has some exciting implications for the classroom.

Here are four especially intriguing insights from the field:

1. Our Bodies Aid Our Memories

One major difference between humans and machines, one that’s often overlooked when we talk about artificial intelligence, is the way we make sense of our physical environment. When humans enter a room, for example, we subconsciously build our understanding of that room around how we can navigate it physically. If we see chairs, we understand that we can sit in them. If we see a ladder, we understand that we can climb it. A robot processes space differently because it doesn’t have a human body: if it enters a room and sees chairs and ladders, it may not consider them useful or defining aspects of the room because it can’t sit or climb.

Noticing or not noticing chairs and ladders may not seem relevant to educational research, but brain scientists are finding that, actually, it is highly relevant.

Linda Smith, a psychologist and brain scientist at Indiana University, Bloomington, has discovered that our ability to retain new information depends, in part, on our physical relationship with it.

Studying children between 16 and 24 months of age, Smith found that “the consistency of the body’s posture and spatial relationship to an object as an object’s name was shown and spoken aloud were critical to successfully connecting the name to the object.” The children could build a mental association between an object and its name in working memory, but only if the object stayed put; once it was moved around, they lost the association.

“People focus their attention like a spotlight in space,” she explains. “Objects in the same spotlight are linked together, or bound, in working (or short-term) memory.”

Other research has shown that the same thing happens when the subject moves rather than the object, suggesting it’s all about the consistency of that spatial relationship. Tony Morse, a researcher at Plymouth University in the United Kingdom, found that a robot trained to crawl and walk like an infant was “unable to maintain the association between an object and its name when it changed its posture and looked back at an object for a second time.” When it could no longer rely on the same physical signposting to make sense of its environment, it lost the memory.

When Smith learned of Morse’s research, she re-ran her experiment with children and found the same effect. “It was a novel prediction and it was absolutely correct,” she said. “It led me to a whole line of research about the role of body movement in disrupting children’s memory.”

If you think about it, we use body posture in educational environments all the time.

When a teacher tells a student to “sit down and pay attention,” for instance, she’s appealing, whether she knows it or not, to the way the brain works. That student is going to remember what she writes on the board, in part, because he’s engaging his body in the learning process. It does make a difference, as it turns out, to direct your body toward the object of your learning.

Results from a recent study out of the University of Copenhagen add another dimension to the relationship between learning and the body: children improve at math when their bodies are engaged during instruction.

One hundred and sixty-five Danish first-grade students at three Copenhagen schools participated in a six-week study to determine whether movement made a difference in maths performance. They were divided into three groups:

  • Group 1: Students used their whole bodies to solve math problems—for example, “making a triangle or shaping numerals with their bodies, or using one another when being asked to add or subtract.”
  • Group 2: Students used fine motor skills to solve math problems—e.g. “they used bricks for arithmetic or to build models for solving geometry tasks.”
  • Group 3: Students “engaged in regular mathematics instruction, using pencils, paper, rulers and the like.”

Students wore special hoods to record brain activity while solving problems.

At the end of the six weeks, the children took a standardised fifty-question national test. Students from the whole-body group performed best, with “nearly four more correct responses than the baseline, and twice as much improvement as the sedentary fine motor skills group.”

“The children learn more if they move and use the whole body to learn,” according to head researcher and Associate Professor Jacob Wienecke of the University of Copenhagen’s Department of Nutrition, Exercise and Sports. “Compared to previous studies which demonstrated that intense physical activity could improve learning outcomes, we have been able to show that lower intensity activities are just as effective, or even more effective, as long as movement is integrated into the topic at hand.”

Research pairings like Smith’s and Morse’s, in which machine learning experiments inform developmental psychology, are beginning to provide explanations for why physical activity improves learning in situations like these.

2. Metaphors Are Powerful Learning Tools

Machines struggle with metaphors. One way to explain this is to point out that metaphors rely heavily on shared human experience. Here’s an example:

The Russian violin teacher Pyotr Stolyarsky was rumoured to have whispered metaphors into the ears of his best students once he’d exhausted his instructional capacity: “He might lean over and explain how his mother cooked Sabbath dinner,” writes Alan Brown for Nautilus. “His advice gave no specific information on what angle the bow should describe, or how to move the fingers across the frets to create vibrato. Instead, it distilled his experience of the music into metaphors his students could understand.”

A machine, on the other hand (assuming it could play the violin), would have no clue what to do with this information. That’s because it’s a very specific kind of information, something AI researchers are calling “privileged information.” It’s what makes a joke funny to so many people and a poem moving to so many readers. It’s what makes personal experience so universal. And machines don’t seem to have it. But we can give it to them, and when we do, they learn much faster.

Vladimir Vapnik, an AI specialist currently teaching machines to recognise handwriting, has found that if he works privileged information into an algorithm, the machines learn more efficiently. For instance, if he’s trying to teach a machine to recognise the number 5, a standard algorithm might describe the shape of it or record a thousand idiosyncratic variations of it until the machine can reliably recognise any “5” it sees. But the best algorithm he’s found so far relies on more qualitative information.

Teaming up with Natalia Pavlovich, a professor of Russian poetry, Vapnik began feeding the algorithm poems that described the way different handwritten numbers looked. Pavlovich wrote 100 poems total, each on a different example of a handwritten “5” or “8.” Here’s one poem describing “5”s:

He is running. He is flying. He is looking ahead. He is swift. He is throwing a spear ahead. He is dangerous. It is slanted to the right. Good snaked-ness. The snake is attacking. It is going to jump and bite. It is free and absolutely open to anything. It shows itself, no kidding.

After updating the algorithm with this privileged information, the machine “was able to recognise handwritten numbers with far less training than is conventionally required. A learning process that might have required 100,000 samples might now require only 300. The speedup was also independent of the style of the poetry used.”
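Vapnik’s actual system is more mathematically involved (his “learning using privileged information” framework extends support vector machines), but the core mechanism can be loosely sketched in plain Python. Everything in the toy below is illustrative, not taken from Vapnik’s handwriting work: the privileged value `z` stands in for the extra description a poem supplies, and it is converted into soft training targets that carry more information per example than a bare 0/1 label, in the spirit of Vapnik’s later knowledge-transfer formulation. A sketch under those assumptions, not his algorithm:

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def make_data(n):
    # Each sample: base features x, a hard label y, and a privileged value z
    # (the true margin) available only at training time -- a stand-in for the
    # richer qualitative description a poem provides.
    data = []
    for _ in range(n):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        margin = x[0] + 0.5 * x[1]      # hidden ground-truth rule
        data.append((x, 1 if margin > 0 else 0, margin))
    return data

def train(samples, targets, lr=0.5, epochs=200):
    # Plain logistic regression trained by stochastic gradient descent.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for (x, _, _), t in zip(samples, targets):
            p = sigmoid(w[0] * x[0] + w[1] * x[1])
            grad = p - t                 # cross-entropy gradient
            w[0] -= lr * grad * x[0]
            w[1] -= lr * grad * x[1]
    return w

train_set = make_data(30)                # deliberately few examples

# Student 1: ordinary supervision -- hard 0/1 labels only.
w_hard = train(train_set, [y for (_, y, _) in train_set])

# Student 2: privileged supervision -- the margin z becomes a soft target,
# so each example says how confidently it belongs, not just which class.
w_soft = train(train_set, [sigmoid(5.0 * z) for (_, _, z) in train_set])

def accuracy(w, samples):
    hits = sum(1 for (x, y, _) in samples
               if (w[0] * x[0] + w[1] * x[1] > 0) == (y == 1))
    return hits / len(samples)

test_set = make_data(1000)
print("hard-label student:", accuracy(w_hard, test_set))
print("privileged student:", accuracy(w_soft, test_set))
```

Both students classify well on this easy toy; the point is the mechanism. The soft targets grade each training example, which is one concrete way extra qualitative information can let a learner recover a rule from fewer samples.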

Vapnik describes privileged information as “a second kind of language with which to instruct computers.” A language of metaphors.

Brown adds: “Where the language of brute force learning consists of technical measurements, such as shapes, colors, forces, and the amount you spent on groceries, privileged information relies on metaphor. And metaphor can make the difference between smart science and brute force science.”

What’s interesting is that, despite being so clearly important in simulating human cognition, metaphor is rarely used as a deliberate learning tool for humans. AI research suggests it probably should be. Findings like Vapnik’s make a powerful argument for interdisciplinary curricula, for instance: Can poetry help us better understand history, physics, government, economics?

3. There’s No Substitute For Learning-by-Doing

Another lesson from machine learning’s relationship to privileged information is that machines, like humans, often need to perform a task themselves before they can recognise and understand it in other contexts, such as when watching someone else perform the same task.

“Nearly 30 years ago, George Reeke and Nobel Prize winner Gerald Edelman showed that AI systems that traced letters with robotic arms had an easier time recognising diverse styles of handwriting and letters than visual-only systems,” Brown writes. Giorgio Metta, who built the iCub robot at the Italian Institute of Technology, tells him, “In many cases, humans use their own knowledge about actions to recognise those actions in others. If a robot knows how to grasp, it has better chance of recognising grasping actions of a person.”

Then there are those sensitive categories of knowledge that can be gained from experience but are difficult to explain to others:

“There is a difference in how you sing a G# note that leads to an A, and a G# that leads to an F,” says Patrice Michaels, who directs vocal studies at The University of Chicago. “It’s the same pitch, but depends on where you are going—how you aim it.” Called singing “vertically into the harmony,” it’s a method that has a reputation for being “unteachable” but is nonetheless a regular part of singing instruction. To employ it, singers need to be “attuned to the harmonic intention of the composer.”

Anticipating the composer’s intentions, of course, comes more easily when you’ve been trained musically and already have experience singing both types of G#s.

4. Good Teaching Draws On Shared Experience

When you think about how much information we’re bombarded with when we enter even a simple environment such as a classroom, it’s amazing we can make meaning out of the world around us at all. Efficient meaning-making is one huge difference between humans and machines, and machines fall quickly behind when it comes to organising and representing the world the way humans do.

An example again from Brown: “Is that flat surface a table? A chair? The floor? What if it’s partly in shadow, or partly obscured? After years searching for simple ways to answer these questions, the AI community is finding that the complexity of the real world is, in some ways, irreducible.”

This is where real, live teachers come in handy—and always will.

“The thousands of points of raw data describing the room collapse into a few simple ideas when subject to the constraints and demands of a physical body. If a teacher knows what it’s like to have a body, he, she, or it can pass these simple ideas to a student as privileged information, creating an efficient description of a complex environment.”

But this goes beyond the physical, as we saw with the metaphor example. Teachers can draw on shared experience in countless ways, ways machines may never be able to.

“There is a special, valuable communication that occurs between teacher and student, which goes beyond what can be found in any textbook or raw data stream,” Brown continues. “If there was any doubt that good teachers are important, machine learning is helping put it to rest.”


Saga Briggs is an author at InformED. You can follow her on Twitter @sagamilena or read more of her writing here.
