Donald Hebb was a famed Canadian scientist who produced key findings that ranged across the field of psychology, providing insights into perception, intelligence and emotion. He is perhaps best known, though, for his theory of learning and memory, which appears as an entry in most basic texts on neuroscience. But now an alternative theory—along with accompanying experimental evidence—fundamentally challenges some central tenets of Hebb’s thinking. It provides a detailed account of how cells and the electrical and molecular signals that activate them are involved in forming memories of a series of related events.
Put forward in 1949, Hebb’s theory holds that when electrical activity in one neuron—perhaps triggered by observing one’s surroundings—repeatedly induces a neighboring “target cell” to fire electrical impulses, a process of conditioning occurs and strengthens the connection between the two neurons. This is a bit like doing arm curls with a weight; after repeated lifts the arm muscle grows stronger and the barbell gets easier to hoist. At the cellular level, repeated stimulation of one neuron by another enables the target cell to respond more readily the next time it is activated. In basic textbooks, this boils down to a simple adage to describe the physiology of learning and memory: “Cells that fire together, wire together.”
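In computational terms, Hebb’s adage is often expressed as a weight update proportional to the product of presynaptic and postsynaptic activity. The short Python sketch below is only a toy illustration of that general idea; the function name, learning rate and activity values are arbitrary choices made here for demonstration, not details drawn from Hebb’s work or from the new study.

# Toy sketch of Hebb's rule: the connection strengthens whenever the
# presynaptic and postsynaptic cells are active at the same time
# ("cells that fire together, wire together"). All names and numbers
# here are illustrative, not taken from Hebb's work or the new study.
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    # pre_active and post_active are 1.0 when a cell fires, 0.0 otherwise
    return weight + learning_rate * pre_active * post_active

weight = 0.2
for _ in range(10):                 # ten paired activations
    weight = hebbian_update(weight, pre_active=1.0, post_active=1.0)
print(round(weight, 2))             # repeated co-activity strengthens the synapse: 1.2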
Every theory requires experimental evidence, and scientists have toiled for years to validate Hebb’s idea in the laboratory. Many research findings have shown that when a neuron repeatedly fires off an electrical impulse (called an “action potential”) at virtually the same time as an adjacent neuron, their connection does indeed grow more efficient. The target cell fires more easily, and the signal transmitted is stronger. This process—known as long-term potentiation (LTP)—apparently induces physiological change, or “plasticity,” in target cells. LTP is routinely cited as a possible explanation for how the brain learns and forms memories at the cellular level.
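A crude way to picture LTP’s tight timing requirement is a rule that strengthens a connection only when two cells fire within a few milliseconds of each other. The sketch below assumes a 20-millisecond coincidence window and a fixed strengthening step; both numbers are illustrative placeholders, not measured values from the literature.

# Schematic of the tight timing window usually associated with LTP:
# only near-coincident pre- and postsynaptic spikes strengthen the synapse.
# The 20-millisecond window and the step size are assumed for illustration.
def ltp_update(weight, t_pre_ms, t_post_ms, window_ms=20.0, step=0.05):
    if abs(t_post_ms - t_pre_ms) <= window_ms:
        return weight + step        # spikes nearly coincide: connection strengthens
    return weight                   # spikes too far apart: nothing changes

w = 0.5
w = ltp_update(w, t_pre_ms=0.0, t_post_ms=5.0)     # 5 ms apart -> strengthened
w = ltp_update(w, t_pre_ms=0.0, t_post_ms=3000.0)  # three seconds apart -> unchanged
print(round(w, 2))                                 # 0.55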
But long-term potentiation leaves a few open questions. When we encounter something new, the experience often occurs as a sequence of events over at least a matter of seconds—not the tiny fractions of a second postulated for LTP—and somehow a memory still forms. Nor are many repeated exposures to an event necessarily needed for learning to occur: A child sees the alluring blue and yellow flame on the stove a few feet away. She approaches the stove, slowly raises a finger and then quickly pulls her hand away. Once is enough to learn this lesson for life.
A new paper published in Science on September 8 provides evidence for what Jeffrey Magee and other researchers at Howard Hughes Medical Institute’s Janelia Research Campus contend is a more plausible explanation for how a sequence of events may form a memory of a place. In their experiments, a mouse running down a track created a memory of a particular spot along the track—a “place field,” in neurospeak—over a period of five seconds. The place field was implanted in an area of the brain called the hippocampus after as little as a single traversal of the track. The action took place in synapses, the tiny clefts between neurons where a signal passes from one cell to another. Visual, tactile or other inputs from another part of the mouse’s brain passed through long neuron fibers called axons, crossing over to a target cell in the hippocampus. The inputs triggered the production of a set of signals that persisted for several seconds in tiny protrusions, called dendrites, on the hippocampal target cell.
In this form of plasticity, the key signal in the hippocampal cell was not a sub-millisecond action potential; rather, it was an electrical signal called a “plateau potential” in the dendrites of the target cell that lasted up to hundreds of times longer. The plateau potential caused a relatively large burst of calcium to enter the target neuron’s membrane, and this set off a chain of events that led to molecular and structural changes within the cell itself. After a mouse made just a few runs of the track—sometimes only one—the hippocampal neuron underwent this biochemical learning process, and a place field formed that became active when the mouse passed over the spot again. Thus the animal now “knew” this defined location along the track when the place field activated.
This newly discovered learning process differs in basic ways from the LTP concept long found in textbooks. LTP requires (as Hebb had predicted) that one neuron repeatedly send an input signal that causes a nearby neuron to fire off sub-millisecond pulses. Magee and colleagues’ discovery—dubbed “behavioral timescale synaptic plasticity”—does not require such a cause-and-effect relationship. One neuron does not induce the firing of another.
Instead, input signals from elsewhere in the brain arrive at the hippocampal neuron several seconds before the calcium spike (the plateau potential) begins in the dendrites. These same input signals persist for several seconds after the plateau potential has ended. The entire five-second time course—the initial inputs followed by a plateau potential and then the inputs that continue afterward—corresponds to the same interval over which a set of actions may occur: the child sees the stove, approaches it, touches the flame and pulls back her hand.
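One way to make this seconds-long timing relationship concrete is a toy rule in which each input leaves a slowly decaying trace, and a later plateau potential converts whatever trace remains into a change in synaptic strength. The Python sketch below assumes a simple exponential trace with a two-second time constant and an arbitrary gain; these are stand-in values chosen for illustration, not parameters reported by Magee and colleagues.

import math

# Minimal sketch of a behavioral-timescale rule: each input leaves a slowly
# decaying "eligibility trace," and a later plateau potential converts the
# trace that remains into a weight change, even though the input never
# directly drove the target cell to spike. The two-second time constant and
# the gain are assumptions for illustration, not values from the study.
def btsp_weight_change(input_times_s, plateau_time_s, tau_s=2.0, gain=0.3):
    change = 0.0
    for t in input_times_s:
        dt = abs(plateau_time_s - t)            # inputs before or after the plateau both count
        change += gain * math.exp(-dt / tau_s)  # closer in time -> larger contribution
    return change

# Inputs arriving a few seconds on either side of a plateau at t = 5 s still contribute.
print(round(btsp_weight_change([2.0, 4.0, 6.5, 8.0], plateau_time_s=5.0), 2))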
What’s more, a new memory of what happened at a particular place is cemented in the brain after a mouse makes one or only a few runs along the track. The researchers also found that when a mouse goes back to the track after this learning process has taken place, a now-trained neuron fires before the animal actually arrives at the spot it has learned—suggesting the memory helps the brain anticipate what lies in the physical path ahead.
Magee, the senior author on the study who is now at Baylor College of Medicine, says this new type of plasticity probably will not supplant long-term potentiation in the textbooks. But it may provide a more suitable explanation of how memories are formed from a connected series of events. It may also account for how the brain remembers important places: where a squirrel stores acorns for winter or where a hiker saw a snake on a trail, for instance. “There was always this nagging suspicion that something wasn’t quite right about long-term potentiation, and that something was the timing requirement,” Magee says. “When you use it to evoke synaptic plasticity, you had to have this really tight timing window. But behavior actually occurs on these much longer timescales—even very simple behaviors.” Magee says his group’s findings still need to be replicated. And key questions remain, such as where in the brain the signals that serve as inputs to the dendrites originate.
If the work from Magee and his team is further confirmed, LTP may come to be thought of as a process that helps keep intact the memories formed by the new type of plasticity discovered by Magee’s group—or it may be found to be involved in simpler sensory detection processes that do not require the piecing-together of multiple events. Alcino Silva, a neuroscientist at the University of California, Los Angeles, who was not involved with the research, calls the work “a groundbreaking study” and says it “promises to change the way we think about how space is learned and remembered.” He adds that the study “is just a provocative beginning.” He notes the need for further research to ensure that this finding is “actually key to learning and memory. For example, it will be important to explore this form of plasticity, and then show that manipulating it can both interfere with and enhance specific forms of learning.”
Another researcher, György Buzsáki, a neuroscientist at New York University who was also not involved in the study, says: “Overall, this is a significant step forward in our understanding of the mechanisms involved in place field generation in the hippocampus.” He adds that the neuroscience literature includes examples of various mechanisms for creating such place markers in an animal’s brain, including a study from his own laboratory that conforms more closely to Hebb’s model.
The hippocampus, he says, can also store an internal sequence of events without any sensory inputs from the physical surroundings—mental imagery of moving about a place one has never visited, for example—a situation that the behavioral timescale plasticity model discovered by Magee and his team may not account for. Whichever model prevails, the new Science study provides another example of the constant flux in the brain sciences. A close look at the details of any process assumed to underlie a long-standing theory can call the theory itself into question and open up an entirely new avenue of research.