If you’ve spent more than about 5 minutes surfing the web, listening to the radio, or watching TV in the past few years, you will know that cognitive training—better known as “brain training”—is one of the hottest new trends in self-improvement. Lumosity, which offers web-based tasks designed to improve cognitive abilities such as memory and attention, boasts 50 million subscribers and advertises on National Public Radio. Cogmed claims to be “a computer-based solution for attention problems caused by poor working memory,” and BrainHQ will help you “make the most of your unique brain.” The promise of all of these products, implied or explicit, is that brain training can make you smarter—and make your life better.

Yet, according to a statement released by the Stanford University Center on Longevity and the Max Planck Institute for Human Development in Berlin, there is no solid scientific evidence to back up this promise. Signed by 70 of the world’s leading cognitive psychologists and neuroscientists, the statement minces no words:

“The strong consensus of this group is that the scientific literature does not support claims that the use of software-based ‘brain games’ alters neural functioning in ways that improve general cognitive performance in everyday life, or prevent cognitive slowing and brain disease.”

The statement also cautions that although some brain training companies “present lists of credentialed scientific consultants and keep registries of scientific studies pertinent to cognitive training…the cited research is [often] only tangentially related to the scientific claims of the company, and to the games they sell.”

This is bad news for the brain training industry, but it isn’t surprising. Little more than a decade ago, the consensus in psychology was that a person’s intelligence, though not fixed like height, isn’t easily increased. This consensus reflected a long history of failure. Psychologists had been trying to come up with ways to increase intelligence for more than a century, with little success. The consistent finding from this research was that when people practice some task, they get better on that task, and maybe on very similar tasks, but not on other tasks. Play a videogame and you’ll get better at that videogame, and maybe at very similar videogames, the research said, but you won’t get better at real-world tasks like doing your job, driving a car, or filling out your tax return.

What’s more, whenever intelligence gains were reported, they were modest, especially given how much training it took to produce them. In a University of North Carolina study known as the Abecedarian Early Intervention Project, low-income children received intensive intervention from infancy to age 5 that included educational games, while children in a control group received social services, health care, and nutritional supplements. At the end of the study, all of the children were given an IQ test, and the average was about 6 points higher for the treatment group than the control group—in statistical terms, a medium-sized effect.

Thinking about the modifiability of intelligence started to change in the 2000s. A major impetus was a 2008 study led by Susanne Jaeggi—then a postdoctoral researcher at the University of Michigan and now a professor at the University of California Irvine—and published in the Proceedings of the National Academy of Sciences. Jaeggi and colleagues had a sample of young adults complete a test of reasoning ability to assess “fluid” intelligence—the ability to solve novel problems. The participants were then assigned to either a control group, or to a treatment group in which they practiced a computerized task called “dual n-back,” which requires a person to monitor two streams of information—one auditory and one visual. (The task is challenging, to put it mildly.) Finally, all of the participants took a different version of the reasoning test to see whether the training had any impact on fluid intelligence.
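To make the task concrete, here is a minimal Python sketch of the dual n-back logic. The grid size, letter set, and trial count are illustrative assumptions, not the parameters Jaeggi and colleagues used; the point is only to show why tracking two streams at once is so demanding.

```python
import random

def dual_n_back_trials(n=2, num_trials=20, grid_positions=9, letters="CHKLQRST"):
    """Generate one block of a simplified dual n-back task.

    Each trial pairs a visual stimulus (a position in a grid) with an
    auditory stimulus (a spoken letter). For each stream, independently,
    the player must say whether the current stimulus matches the one
    presented n trials earlier.
    """
    visual = [random.randrange(grid_positions) for _ in range(num_trials)]
    auditory = [random.choice(letters) for _ in range(num_trials)]
    trials = []
    for i in range(num_trials):
        trials.append({
            "position": visual[i],
            "letter": auditory[i],
            # Matches are undefined until n trials have elapsed.
            "visual_match": i >= n and visual[i] == visual[i - n],
            "auditory_match": i >= n and auditory[i] == auditory[i - n],
        })
    return trials

# Print a 2-back sequence, flagging the target trials in each stream.
for i, t in enumerate(dual_n_back_trials()):
    print(i, t["position"], t["letter"], t["visual_match"], t["auditory_match"])
```

In the adaptive versions used in training studies, n rises when performance is high and falls when it drops, keeping the task pinned near the limit of the player’s working memory.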

The results were striking. Not only did the training group show more improvement on the reasoning test than the control group, but the gain was large enough to have an impact on people’s lives. As John Jonides, the senior University of Michigan scientist on the research team, explained, there was also a dose-dependent relationship: “Our discovery is that 4 weeks or so of training will produce a noticeable difference in fluid intelligence…We’ve also shown that the longer you train short-term memory, the more improvement you get in IQ.”

The study made a splash. Most studies, once published, are cited in the scientific literature no more than a few times, if they are cited at all. The Jaeggi study has now been cited over 800 times—an astonishing number for a study published just six years ago. Discover magazine called the findings one of the top 100 scientific discoveries of 2008, and the psychologist Robert Sternberg—author of more than 1,500 publications on intelligence—declared that the study seemed “to resolve the debate over whether fluid intelligence is, in at least some meaningful measure, trainable.”

Not everyone was convinced. In fact, not long after it was published, the Jaeggi study was No. 1 on a list of the top twenty studies that psychologists would like to see replicated. Above all, what gave the skeptics pause was the magnitude of the reported gain in intelligence—it seemed implausibly large. Studies like the Abecedarian Early Intervention Project had shown that it takes years of intensive intervention to increase IQ by a few points. Jaeggi and colleagues’ findings implied a 6-point increase in just a few hours.

The study had serious flaws, too, making the results difficult to interpret. One problem was that there was no placebo control group—no group that received training on a task that was not expected to increase intelligence (analogous to the placebo group of a drug study taking a sugar pill). Instead, the control group was a “no-contact” group: participants simply took the reasoning test twice, with no contact with the researchers in between. The possibility that the treatment group improved on the reasoning test merely because they expected to improve therefore could not be ruled out. Further complicating matters, the reasoning test differed across training groups; some of the participants got a 10-minute test, while others got a 20-minute test. Finally, Jaeggi and colleagues used only one test to see whether intelligence improved. Showing that people are better on one reasoning test after training doesn’t mean they’re smarter—it means they’re better at one reasoning test.

With all of this in mind, my colleagues and I set out to replicate Jaeggi and colleagues’ findings. First, we gave people 17 different cognitive ability tests, including 8 tests of fluid intelligence. We then assigned a third of the participants to a treatment group in which they practiced the dual n-back task, a third to a placebo control group in which they practiced another task, and the remaining third to a no-contact control group. Finally, at the end of the study, we gave everyone different versions of the cognitive ability tests. The results were clear: the dual n-back group showed no greater gain in fluid intelligence than the control groups. Not long after we published these results, another group of researchers published a second failure to replicate Jaeggi and colleagues’ findings.
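The logic of this kind of design can be summarized in a few lines of Python. What matters is not post-test scores by themselves but the pre-to-post gain in each group; the scores below are invented for illustration and are not data from any of the studies discussed here.

```python
import statistics

def mean_gain(pre, post):
    """Average pre-to-post change in a composite fluid-intelligence score."""
    return statistics.mean(b - a for a, b in zip(pre, post))

# Hypothetical composite scores (e.g., z-scores averaged over several
# fluid-intelligence tests), for illustration only.
groups = {
    "dual n-back": ([0.10, -0.20, 0.00, 0.30], [0.20, -0.10, 0.10, 0.30]),
    "placebo":     ([0.00, 0.10, -0.10, 0.20], [0.10, 0.20, 0.00, 0.20]),
    "no-contact":  ([-0.10, 0.00, 0.20, 0.10], [0.00, 0.10, 0.20, 0.20]),
}

for name, (pre, post) in groups.items():
    print(f"{name:12s} mean gain = {mean_gain(pre, post):+.3f}")
```

Training “works” only if the treatment group’s gain reliably exceeds the gain in both control groups; roughly equal gains across groups point to retest practice or expectation effects, not transfer.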

A meta-analysis cast further doubt on the effectiveness of brain training. Synthesizing the results of 23 studies, researchers Monica Melby-Lervåg and Charles Hulme found no evidence that brain training improves fluid intelligence. (A meta-analysis aggregates the results of multiple studies to arrive at more precise estimates of statistical relationships—in this case, the relationship between training and improvement in intelligence.) Jaeggi and colleagues have since published their own meta-analysis, and have come to the slightly more optimistic conclusion that brain training can increase IQ by 3 to 4 points. However, in the best studies in this meta-analysis—those that included a placebo control group—the effect of training was negligible.
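For readers unfamiliar with the machinery, here is a minimal fixed-effect meta-analysis in Python: each study’s effect size is weighted by its precision (the inverse of its variance), so larger, more precise studies count for more. The numbers are invented for illustration and are not drawn from either meta-analysis.

```python
def fixed_effect_meta(effects, variances):
    """Inverse-variance-weighted pooled effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Hypothetical standardized mean differences from five imaginary studies.
effects = [0.40, 0.10, -0.05, 0.20, 0.00]
variances = [0.04, 0.02, 0.03, 0.05, 0.01]

pooled, se = fixed_effect_meta(effects, variances)
print(f"pooled d = {pooled:.2f}, "
      f"95% CI = [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```

Because pooling weights the most precise studies most heavily, a handful of small, poorly controlled studies with big effects cannot rescue an overall null result—which is why the quality of the control groups in the underlying studies matters so much.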

In another highly publicized study, published last year in Nature, a team of researchers led by University of California San Francisco professor and entrepreneur Adam Gazzaley gave a sample of older adults training in a custom videogame called Neuroracer. The theory behind Neuroracer was originally proposed by the cognitive psychologists Lynn Hasher and Rose Zacks. In a series of articles, Hasher and Zacks argued that a major cause of what we now call “senior moments”—forgetfulness, inattentiveness, and other mental lapses—is mental “clutter.” That is, as we get older, we are more easily distracted by things in the outside world, and by irrelevant thoughts. Neuroracer is designed to strengthen the ability to filter out distraction. The player’s goal is to steer a car on a winding road with one hand, while using the other hand to shoot down signs of a particular color and shape, ignoring other signs.
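The filtering demand is easy to see in miniature. The sketch below is a toy version of the sign task only; the target color and shape are made up, and the real game adds the concurrent driving task that makes the filtering hard.

```python
# Toy version of the Neuroracer sign task: respond only when both the
# color and the shape match the target; everything else is a distractor.
# The target below is an invented example, not the game's actual stimuli.
TARGET = ("green", "circle")

def should_respond(color: str, shape: str) -> bool:
    """True only for signs matching the target on both features."""
    return (color, shape) == TARGET

signs = [("green", "circle"), ("red", "circle"), ("green", "square")]
print([should_respond(c, s) for c, s in signs])  # [True, False, False]
```

On Hasher and Zacks’s account, the older player’s difficulty is not pressing the button but suppressing responses to near-miss distractors that share one feature with the target.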

Gazzaley and colleagues gave older adults tests of memory, attention, and other cognitive abilities before and after they practiced either Neuroracer or a control task over a 4-week period to assess transfer of training—in other words, to see whether there were any benefits of playing Neuroracer. Not surprisingly, people got better at Neuroracer. In fact, after practicing, the older adults improved to the level of a 20-year-old in the game. Moreover, the researchers claimed that there was evidence that playing Neuroracer mitigated effects of aging on certain cognitive functions. But there were problems with this study, too. One critique raised no fewer than 19 concerns about the results and methods. Compared to the placebo group, the training group showed more improvement from pre-test to post-test on only 3 of the 11 transfer measures. Also, the sample size was small, meaning that even these hints of effectiveness may not replicate, and nearly a quarter of the people in the study were dropped from the statistical analyses. Finally, there was no demonstration that the Neuroracer training made people better at real-world tasks. These concerns notwithstanding, with investment from the pharmaceutical companies Pfizer and Shire, Gazzaley and colleagues have applied for FDA approval of a new game based on Neuroracer. The goal, as Gazzaley explained in a recent presentation, is for the game to “become the world’s first prescribed videogame.”

The bottom line is that there is no solid evidence that commercial brain games improve general cognitive abilities. But isn’t it better to keep doing brain training in the hope, if not the expectation, that scientists will someday discover that it has far-reaching benefits? The answer is no. Scientists have already identified activities that improve cognitive functioning, and time spent on brain training is time that you could spend on these other things. One is physical exercise. In a long series of studies, University of Illinois psychologist Arthur Kramer has convincingly demonstrated that aerobic exercise improves cognitive functioning. The other is simply learning new things. Fluid intelligence is hard to change, but “crystallized” intelligence—a person’s knowledge and skills—is not. Learn to play the piano or cook a new dish, and you have increased your crystallized intelligence. Of course, brain training isn’t free, either. According to one projection, people will spend $1.3 billion on brain training in 2014.

It is too soon to tell whether there are any benefits of brain training. Perhaps there are certain skills that people can learn through brain training that are useful in real life. For example, University of Alabama at Birmingham psychologist Karlene Ball and her colleagues have shown that a measure called “useful field of view”—the region of space over which a person can attend to information—can be improved through training and correlates with driving performance. What is clear, though, is that brain training is no magic bullet, and that extraordinary claims of quick gains in intelligence are almost certainly wrong. As the statement from the scientific community on the brain training industry concluded, “much more research is needed before firm conclusions [on brain training] can be drawn.” Until then, time and money spent on brain training is, as likely as not, time and money wasted.