Imagine you are on your first visit to a foreign city—let’s say Istanbul. You find your way to the metro station and stand bewildered before the ticket machine. After puzzling out how to pay your fare, you thread your way through the noisy throng and search for the train that will take you to your hotel. You move tentatively, in fits and starts, with many changes of direction. Yet after a few days of commuting by subway, you breeze through the system effortlessly. Simply by experiencing the new environment, you quickly master its complexities. How was that learning possible? The truth is, neuroscientists do not know.

 

Studies using electroencephalographic (EEG) recordings of people as they explore a virtual-reality world are showing how the brain learns about an unfamiliar place. In the VR laboratory at the University of California, San Diego, the author (left) pops a green sphere containing a hidden object inside a computer-generated storeroom, much like the scene the avatar is exploring (center). At the controls, neuroscientist Joseph Snider (right) monitors what the author sees as he moves about the room. Credit: Howard Poizner, University of California, San Diego, and R. Douglas Fields (left and right); from “Human Cortical θ during Free Exploration Encodes Space and Predicts Subsequent Memory,” by Joseph Snider et al., in Journal of Neuroscience, Vol. 33, No. 38; September 18, 2013 (center)


Learning theory as we know it today still rests largely on the century-old experiments of Ivan Pavlov and his dogs salivating at the sound of a bell. His theory has yielded plenty of knowledge about how we acquire behaviors through the pairing of stimulus and reward (or punishment) and the strengthening of connections between neurons that fire together. It is the kind of training we do with our pets and, to some degree, our children, but it explains little about most human learning. In fact, whether getting to know a stranger, negotiating a new setting or picking up slang, our brain absorbs enormous volumes of information constantly and effortlessly as we go about everyday life, without treats or praise or electric shocks to motivate us.

Until recently, if you asked neuroscientists like me how this process worked, we would shrug our shoulders. But a number of researchers have begun to use technology, including virtual reality, in innovative ways to explore how the human brain operates in complex, real-world environments—a process known as unsupervised learning. What they are finding, as I learned by visiting several pioneering laboratories, is that this type of cognition entails more than building up pathways that link localized neurons. Instead unsupervised learning engages broad swaths of the brain and involves wholesale changes in how neural circuits process information. Moreover, by studying the shifting electrical patterns of brain waves as we learn, researchers can reliably guess what we are thinking about (yes, rudimentary mind reading is possible!), and they can predict our aptitude for learning certain subjects. As these scientists confront the complexity of unsupervised learning, they find themselves grappling with one of the deepest mysteries of being human: how the brain creates the mind.

Onboard a Virtual Ship

The walls and ceiling of the cavernous room are painted black. Twenty-four digital cameras arrayed around the space detect infrared diodes on my body to track my movements, feeding them into a computer as I walk about. I am in a virtual-reality room in the supercomputer center at the University of California, San Diego—probably the closest thing on Earth to the holodeck on Star Trek’s USS Enterprise. Neuroscientist Howard Poizner uses this facility to study unsupervised learning—in this case, how we learn to master an unfamiliar environment.

The diodes are not the only gizmos I am wearing. On my head is a rubber cap studded with 70 electrodes that send electrical signals generated by my brain to instruments inside a specialized backpack I am toting. I also wear large goggles equipped with 12 miniature video projectors and high-resolution screens.

The day before my visit here, I toured the U.S. Navy aircraft carrier Midway at its anchorage in San Diego Harbor. Little did I know what a happy coincidence that would turn out to be: Poizner and his colleagues had modeled their virtual-reality sequences on the carrier’s layout. When they turn on the projectors inside my goggles, I am instantly transported back to the ship. What I see is an utterly convincing 120-degree vista of a storeroom inside the aircraft carrier. Looking up, I see triangular steel trusses reinforcing the ceiling that supports the flight deck. Looking down, I see hideous blue government-issued linoleum. High-fidelity speakers all around the lab create a three-dimensional sonic space to complete the illusion.

Verisimilitude is critical, Poizner explains, both for immersion and for helping the brain organize the rich sensory information available to it. “If you are just moving a joystick or hitting a button, you are not activating the brain circuits that construct spatial maps,” he says. “Here you are walking out in the environment. You are learning how to move in it, how to interact with it. Your brain is always predicting.”

 

 

The fact that I can walk through the virtual environment while my brain waves are being recorded is a breakthrough in itself. Usually people must keep still during electroencephalographic (EEG) recordings to eliminate electrical signals generated by their muscles as they contract, which would obscure the feeble brain waves. Poizner’s group devised hardware and software to eliminate this noise as subjects move about freely. “We’re putting you in the video game,” Poizner says.
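
For the technically curious, the flavor of that noise problem can be sketched in a few lines of code. Poizner’s actual hardware and software are far more sophisticated and are not described here; the minimal sketch below only illustrates the standard first idea, that muscle (EMG) activity is broadband whereas the slow brain rhythms of interest sit below roughly 15 hertz, so a zero-phase band-pass filter can recover them. The sampling rate and amplitudes are invented for illustration.

```python
# A minimal sketch, NOT Poizner's pipeline: suppress broadband muscle (EMG)
# noise in a simulated EEG channel with a zero-phase band-pass filter that
# keeps the low-frequency band containing theta. All numbers are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # assumed sampling rate, in Hz

def bandpass(signal, low=1.0, high=15.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass; filtfilt avoids phase distortion."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

# Fake data: a 6 Hz theta rhythm buried in strong broadband "muscle" noise.
t = np.arange(0, 10, 1 / FS)
theta = np.sin(2 * np.pi * 6 * t)
noisy = theta + 2.0 * np.random.randn(t.size)

cleaned = bandpass(noisy)  # removes noise power outside the 1-15 Hz band
print(np.corrcoef(theta, cleaned)[0, 1])  # correlation with the true rhythm
```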

I wander over to an oval hatch and peer out onto the hangar deck where fighter jets are stationed in rows. I raise my leg to step over the high threshold leading to the deck. “Don’t go out there,” Poizner says. “You must stay inside the storage room.” I quickly retract my leg. From his perspective, it must look as if I am pantomiming in an empty room.

I see gray bubbles the size of beach balls resting on storage racks inside the room. “You are looking for a green bubble,” Poizner says. I search the room. Turning to my left, I see it sitting on the shelf next to the other gray spheres. I reach out and touch the green bubble. It pops! An object hidden inside appears—a red fire extinguisher. I turn, find and probe another green bubble in the opposite corner of the room. I pop it and see that it contains a wrench.

As I explore the novel environment, Poizner can tell from changes in my brain-wave activity that I am forming a mental map of the storeroom space. Neurons communicate by generating brief electrical impulses of about a tenth of a volt in flashes that last a thousandth of a second—a signal so faint that to detect the firing of a single neuron, you would have to open the skull and place a microelectrode into direct contact with the nerve cell. Still, when large groups of neurons fire together, the ensuing fluctuations in the electrical field of the tissue surrounding them are sufficiently strong that electrodes on the scalp can detect them. These EEG recordings are much like the roar of a crowd, which is audible in the stadium parking lot while conversations of individual spectators are not.
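
The crowd analogy is easy to make concrete. The toy simulation below, with made-up amplitudes rather than real biophysics, shows why scalp electrodes can pick up a population rhythm even though any single neuron’s contribution is lost in the noise.

```python
# A toy model of the "roar of the crowd": summing many synchronized sources.
# Amplitudes are invented; this is the analogy in code, not biophysics.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                # samples per second
t = np.arange(0, 1, 1 / fs)
rhythm = np.sin(2 * np.pi * 8 * t)       # neurons oscillating together at 8 Hz

one_neuron = 1e-3 * rhythm               # a single cell's scalp contribution
noise = rng.standard_normal(t.size)      # everything else the electrode sees

alone = one_neuron + noise               # single neuron: buried in the noise
crowd = 10_000 * one_neuron + noise      # synchronized ensemble: a clear wave

print(np.std(alone), np.std(crowd))      # ~1 vs. ~7: the crowd is audible
```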

Building Maps with Brain Waves

The brain’s electrical activity takes the form of waves of different frequencies that sweep across the brain. Some brain waves crash in a high-frequency tempest, while others roll by in slow oscillations like ocean swells. Brain waves change dramatically with different cognitive functions. Poizner’s experiments have found that low-frequency theta waves—which oscillate at about three to eight hertz—increase in the parietal lobe as subjects move through the room and build spatial maps. (The parietal lobe is at the top back of the brain, roughly below the part of the head covered by a skullcap.)
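
Quantifying “power at the theta frequency” is a routine computation. Here is a minimal sketch of one common approach, estimating the power spectral density with Welch’s method and integrating it over 3 to 8 hertz; the band edges come from the article, but the method and sampling rate are conventional choices, not Poizner’s published pipeline.

```python
# A minimal sketch of measuring theta-band (3-8 Hz) power in one EEG channel:
# Welch's method for the power spectrum, then integrate over the band.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def theta_power(eeg, fs, low=3.0, high=8.0):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second windows
    band = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[band], freqs[band])             # integrated power

# Example with fake data: a parietal channel carrying a 6 Hz rhythm.
fs = 500
t = np.arange(0, 30, 1 / fs)
channel = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)
print(theta_power(channel, fs))
```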

Scientists are not sure why brain-wave power at the theta frequency changes during spatial learning. But they do know that theta waves are important in strengthening synapses as we form memories. In fact, in my own research on the cellular mechanisms of memory, I stimulate neurons at the theta frequency to strengthen synapses in slices of rat brain that I keep alive in a dish. Joseph Snider, the research scientist who was operating the computer as I explored the virtual Midway, suggests that because of their low frequency, theta waves could be responsible for long-range communication within brain networks, much as lower-frequency AM radio signals propagate farther than high-frequency FM broadcasts.

In that model, the role of brain waves in learning would be to combine large groups of neurons into functional assemblies so that they can fire together and ride the peaks and troughs of electrical waves as they traverse the brain—which is exactly what must happen to form a spatial map of our environment or to encode any complex recollection. Consider all the sensory elements, cognitive processes and emotional sensations that must converge to give us a vivid memory: the green color of the sphere, the surprise and sound of the pop, the location in the storeroom, the recognition of the fire extinguisher hidden inside. Each aspect of that experience is coded in circuits in different parts of the brain specialized for sound, color and other sensations. Yet to learn and remember this array as a coherent experience, all these elements must coalesce. From Poizner’s eavesdropping on people’s brain waves as they encounter the virtual-reality environment, we now know that theta waves are crucial to this synthesis and learning.

In addition to their role in the formation of spatial maps, brain waves are key to cognitive function in the wake of a specific stimulus. Such evoked responses are like ripples from a stone cast into a pond, in contrast to the random, ever present movements of the water. Poizner analyzed the brain-wave response at the instant I popped the green bubble and discovered the object hidden inside. He found that a characteristic ripple in my evoked brain wave erupted 160 milliseconds after I popped the green bubble. “This is amazingly fast,” Poizner observes. “It takes 200 milliseconds just to make an eye movement. It is preconscious perception that the brain is detecting something amiss.”

When Poizner brought subjects in his VR study back for a second day, he found that they had clearly memorized the storeroom in detail without any instruction, forewarning or effort. The evoked brain wave revealed this fact in a surprising way. Poizner and his colleagues deliberately misplaced some of the objects that were concealed in the green bubbles. So when a person popped a green bubble that had held a fire extinguisher the previous day but now contained a wrench, the evoked brain-wave response was much larger than when subjects found objects in the same location as before.
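
The logic of an evoked-response analysis is simple enough to sketch: cut out a short window of EEG around each event, then average the windows so that activity not locked to the event cancels out. In the sketch below the data, event times and effect size are all simulated; only the roughly 160-millisecond latency comes from the article.

```python
# A schematic evoked-response (ERP) analysis on simulated data: epoch around
# each "bubble pop" and average, so only event-locked activity survives.
import numpy as np

rng = np.random.default_rng(1)
fs = 500
eeg = rng.standard_normal(60 * fs)       # one minute of fake EEG
pops = np.arange(2, 58, 2) * fs          # fake event times, every 2 seconds

# Simulate a brain response: a deflection starting 160 ms after each event.
onset = int(0.160 * fs)
for s in pops:
    eeg[s + onset : s + onset + 25] += 1.5

def erp(signal, events, fs, pre=0.1, post=0.6):
    """Average a [-pre, +post] second window around each event."""
    w0, w1 = int(pre * fs), int(post * fs)
    return np.stack([signal[s - w0 : s + w1] for s in events]).mean(axis=0)

evoked = erp(eeg, pops, fs)
peak_ms = (np.argmax(evoked) / fs - 0.1) * 1000
print(f"evoked deflection peaks ~{peak_ms:.0f} ms after the event")
```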

Faster than the blink of an eye, our brain knows something has changed in our environment, and our brain knows it before our mind can comprehend it. The U.S. Navy, which funds Poizner’s research, is interested in tapping into these rapid preconscious brain signals. Reading a pilot’s brain waves could let a computer take action even before the pilot is consciously aware of the threat. The quickest draw in such a gunfight would not even know he had pulled the trigger.

Poizner’s research reveals another ripple in the evoked brain wave about half a second later, the result of the brain cogitating on the anomaly and putting it into context. “We think this represents a second pass [of neural processing],” he says. “The first pass is, Something is wrong. The second is, Oh! Okay, I’ve now incorporated the new information into my reconstruction of the environment.” Researchers have reported similar results in very different experiments. When a subject hears an unexpected remark—“I take my coffee with cream and dog,” for example—a similar brain-wave response erupts at about the same time.

Finding the Way to Speech

Learning our native language through everyday experience is very much like unsupervised learning of a new space. Despite the complexity of language, we all master our spoken tongue as children, simply by experiencing it. “We know that in utero, fetuses are already starting to learn about the properties of their language,” says Chantel S. Prat, an associate professor of psychology at the University of Washington and a leading researcher on changes in the brain during language learning. According to a 2011 study led by psychologist Lillian May, then at the University of British Columbia, newborns can recognize their mother’s voice and prefer their native language. Psychologist Barbara Kisilevsky and her colleagues at Queen’s University in Ontario found that even fetuses at 33 to 41 weeks of gestation show startle responses to their mother’s voice and to a novel foreign language, which means that these sounds capture their attention amid the surrounding buzz.

We often fail to appreciate the complexities of language because we use it constantly every day in conversation and in our thoughts. But when we try to learn a second language, the challenges become obvious.

Prat and her colleagues have been monitoring brain-wave activity of subjects learning a second language to see how we meet these challenges. Remarkably, they have found that the brain-wave patterns themselves indicate how well the students are doing. As in Poizner’s research, the changes Prat observed during this learning were in specific frequencies of brain-wave activity in particular regions of the brain. After eight weeks of foreign-language training, brain-wave power increased not only in Broca’s area, the language region of the brain in the left hemisphere, but also in the beta band (12 to 30 hertz) over the right hemisphere—a surprise because language is not typically associated with that side of the brain. “The bigger the change, the better they learned,” she says. It was a surprise that would prove to be significant.

Reading Minds

If thoughts are the essence of being, some scientists are preparing to peer into our souls. That is, they can now tell a great deal about what someone is thinking by observing their brain activity, which has intriguing implications for how unsupervised learning works. Marcel Just and his colleagues at the Center for Cognitive Brain Imaging at Carnegie Mellon University can reliably say whether a person is thinking of a chair or a door, or which number from 1 to 7 a person has in mind, or even what emotion the person may be feeling—anger or disgust, fear or happiness, lust or shame—simply by looking at a functional MRI scan. Specific clusters of neurons throughout the brain increase activity with each of these concepts or emotions, and these clusters appear in the same places from one person to the next.

In research to be published this year, Just is demonstrating that he can read minds even when people are learning abstract concepts. As students review material from a college physics course, the researchers are able to identify which of 30 concepts a person is focusing on from fMRI scans of the student’s brain. What is more, the data show that different abstract scientific concepts map onto brain regions that control what might be considered analogous, though more concrete, functions. Learning or thinking about the way waves propagate, for example, engages the same brain regions activated in dancing—essentially a metaphor for rhythmic patterns. And concepts related to the physics of motion, centripetal force, gravity and torque activate brain regions that respond when people watch objects collide. It seems that abstract concepts are anchored to discrete physical actions controlled by specific circuits in the brain.
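
For a sense of how such decoding works mechanically, here is a minimal sketch on simulated data: each of 30 “concepts” gets its own spatial activity signature, and a standard classifier learns to identify the concept from a noisy scan. Just’s actual analyses use real fMRI data and more elaborate machine-learning methods; every number and name below is an assumption for illustration.

```python
# A conceptual sketch of multivoxel pattern classification ("mind reading"):
# train a classifier on activity patterns labeled with the studied concept,
# then test on held-out scans. Data are synthetic, not real fMRI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_concepts, scans_per_concept, n_voxels = 30, 10, 2000
concepts = np.repeat(np.arange(n_concepts), scans_per_concept)  # labels

# Each concept has its own spatial signature; scans are noisy copies of it.
signatures = rng.standard_normal((n_concepts, n_voxels))
scans = signatures[concepts] + 2.0 * rng.standard_normal((concepts.size, n_voxels))

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, scans, concepts, cv=5).mean()
print(f"decoding accuracy: {accuracy:.0%} (chance: {1 / n_concepts:.0%})")
```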

These investigators are beginning to unravel the secret of how the human brain represents and retains information. And this insight is helping scientists transmit information from brains to machines. For instance, researchers in many labs around the world are developing prosthetic limbs controlled by a person’s thoughts. Computers detect and analyze brain waves associated with limb movements and then activate electric motors in a robotic limb to produce the intended motion.

The next step sounds a little like induced telepathy or Vulcan mind melding. “We’ve found that you can use brain signals from one person to communicate with another,” Prat says. “We can encode information into a human brain.” In a fascinating study published in 2014, she used a technique called transcranial magnetic stimulation to modify a subject’s brain waves so that they took the shape of the brain waves she had observed in a different person—in effect downloading information from one brain into another.

Prat’s motive in this futuristic research is not to figure out how to transmit the contents of my mind into yours; we already have very effective means for accomplishing that goal. In fact, I am doing so right now as you read these patterns of type and reproduce my thoughts in your brain. Rather she is trying to test her findings about how the brain learns and encodes information.

“If I stimulate your visual cortex and you see,” Prat says, “you are seeing with your brain, not with your eyes.” That achievement will prove she has indeed cracked the brain’s coding of visual information. And she will have written part of a new chapter in our neuroscience textbooks, alongside the one about Pavlov and his dogs.

Predicting Your Future

In her latest research, Prat has used EEG analysis to an even more exceptional end: to accurately forecast which students will be able to learn a new language rapidly and which ones will struggle. What our brain does at rest tells researchers a great deal about how it is wired and how it operates as a system. Mirroring her discovery of beta-wave activity in the right hemisphere during language learning, Prat found that the higher the power of beta waves in a person’s resting-state EEG in the right temporal and parietal regions, the faster the student will be able to learn a second language. The reasons are not clear, but one possibility is that if most neural circuits in the region were fully engaged in a variety of other tasks, many small groups of neurons would be oscillating at their own slightly different frequencies; high power at any one frequency therefore suggests a large pool of neurons oscillating in step, an untapped reserve. “They are sort of waiting to learn a language,” Prat theorizes. That propensity is significant because mastering a new language is associated with many cognitive benefits, including improved skill in mathematics and multitasking. But, she warns, our brain cannot be good at everything: “When you get better at one thing, it comes at a cost to something else.”
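
The statistical core of such a prediction can be sketched simply: measure each subject’s resting beta power over the right temporal and parietal electrodes, then ask whether it correlates with how fast the subject later learns. The simulation below invents both quantities; Prat’s actual features and models are richer.

```python
# A minimal sketch of the analysis style behind such predictions: correlate
# resting-state beta power with later language-learning speed. All data here
# are simulated under an assumed positive relationship.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_subjects = 40
beta_power = rng.gamma(shape=2.0, scale=1.0, size=n_subjects)  # resting EEG
learning_rate = 0.5 * beta_power + rng.standard_normal(n_subjects)  # outcome

r, p = pearsonr(beta_power, learning_rate)
print(f"r = {r:.2f}, p = {p:.3f}")  # higher beta power, faster learning
```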

I challenge Prat to measure my brain waves to see if she can predict how quickly I can learn a second language. She eagerly agrees. Prat and her graduate student Brianna Yamasaki apply electrodes to my head, moistening each one with a salt solution to improve conduction of the tiny signals from my brain. As she tests each electrode, it appears on a computer monitor, changing color from red to green when the signal strength is strong. Once they are all green, Prat says, “Close your eyes. It’ll be five minutes. Remain still.” As she dims the lights and slips out the door, she says, “Just relax. Clear your mind.”

I try, but my mind is racing. Can this contraption really tell Prat how easily I could learn a new language while I sit here doing nothing? I recall a similar boast Poizner had made to me in his VR lab—that he could predict how well people would perform in his spatial-learning experiment from an fMRI scan of their brain activity as they sat and let their mind wander. This so-called resting-state fMRI, which records the brain’s activity while people are doing nothing but letting their mind drift, is different from the familiar fMRI studies of the brain’s response to a specific stimulus. Indeed, months after taking such readings of a group of people, Poizner brought them in for a VR trial and found that those who learned the layout of the virtual storeroom faster had resting-state fMRI recordings that showed tighter functional integration of the brain networks responsible for visuospatial processing.

The five minutes pass. Prat and Yamasaki return. “Did you get good data?” I ask.

“This is a little lower than average,” Prat says, looking at my feeble beta waves. She then pulls up a recording of her own brain waves, which shows a sharp peak in the alpha-frequency band. It looks something like a spike in a stock-market chart. My brain instead shows a power shift to higher frequencies, characteristic of information processing in the cerebral cortex. I clearly was not able to zone out and let my mind rest.

“Am I a good second-language learner?” I ask.

“No,” Prat says. “Your slope is about 0.5, and the average is about 0.7.”

It’s true. I took Spanish in high school and German in college, but they didn’t really stick. This is creepier than tarot cards. “There must be something good about it,” I say.

“Sure … plenty of things.”

“Tell me one.”

“You are very entrenched in your first language.”

I groan. Then she adds, “The relation of beta power to reading is the opposite. You are probably an excellent reader.”

A few days after I return to my lab, a new paper by Tomas Folke of the University of Cambridge and his colleagues reports that monolinguals are superior to bilinguals at metacognition, or thinking about thinking, and that they excel at correcting their performance after making errors.

I feel a little better. Thinking about thinking and learning from failed experiments: that is exactly what I do as a neuroscientist. You could have read that in my bio—and in my brain waves, too.