
Can Brain Training Increase Intelligence (g)? A Scientific Primer

Can brain training increase intelligence?

This question has sparked an animated debate among scientists, journalists, and social media pundits. I’ve recently been involved in an extended Quora debate on the topic myself. Who hasn’t been drawn into this debate at some point?

I want to take the opportunity to clarify some key definitions that help us evaluate the claims, and then take a closer look at the evidence.

This article is aimed at readers who want to invest some effort in understanding the science behind brain training, and to use that understanding to guide their decision-making when choosing health, resilience or performance interventions. The evidence is reviewed in even more detail here.

This evidence concerns working memory training at its most basic level (e.g. the dual n-back). This kind of training can be further augmented with, for example, interference control training and capacity-strategy training (as implemented in i3 Mindware), but we are not reviewing the evidence for those enhanced interventions here.

 

What kind of brain training?

First off, almost all the studies focus on just two types of brain training exercise: working memory training (e.g. dual n-back) and executive control (e.g. attention control) training.

The more scientifically credible companies also focus on this kind of training (e.g. i3 Mindware, CogMed, Nintendo, Fitbrains, Memento, Jungle Memory). Over time there has been a clear convergence of companies towards these two types of exercise, and products like i9 are designed to implement their most comprehensive variations.
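For readers unfamiliar with the dual n-back, here is a minimal sketch of the task logic (illustrative Python only, not any particular product’s implementation; the grid size, letter set and match rate are assumptions): on each trial a grid position and a letter are presented together, and you respond whenever the current position and/or letter matches the one presented n trials back.

```python
import random

POSITIONS = list(range(9))   # 3x3 grid locations
LETTERS = list("CHKLQRST")   # an assumed consonant set

def generate_trials(num_trials, n=2, match_rate=0.3):
    """Generate a dual n-back sequence, deliberately seeding some n-back matches."""
    positions, letters = [], []
    for t in range(num_trials):
        if t >= n and random.random() < match_rate:
            positions.append(positions[t - n])        # planted visual match
        else:
            positions.append(random.choice(POSITIONS))
        if t >= n and random.random() < match_rate:
            letters.append(letters[t - n])            # planted auditory match
        else:
            letters.append(random.choice(LETTERS))
    return positions, letters

def is_target(stream, t, n):
    """A trial is a target if its stimulus equals the one presented n trials back."""
    return t >= n and stream[t] == stream[t - n]

# Example: list the target trials in a simulated 20-trial block at n = 2
positions, letters = generate_trials(20, n=2)
visual_targets = [t for t in range(20) if is_target(positions, t, 2)]
audio_targets = [t for t in range(20) if is_target(letters, t, 2)]
print(f"visual targets: {visual_targets}; audio targets: {audio_targets}")
```

Adaptive versions of the task raise or lower n between blocks depending on accuracy, which keeps the working memory load near the trainee’s limit.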

 

What kind of intelligence?

First off, it’s important to pin down what ‘intelligence’ actually means. There is plenty of controversy about whether brain training improves intelligence – but what exactly is being improved? What is IQ, and what is it that IQ tests measure?

The Jaeggi n-back PNAS study in 2008 that set off many of these debates looked at just one component of intelligence – what is called fluid intelligence or fluid reasoning, also known as Gf. Gf is interpreted as your ability to reason in novel, unlearned situations. It is measured by a subset of the tests you’ll get in a full-scale IQ test – the so-called ‘culture fair’ matrix reasoning tests (e.g. Raven’s).

Almost all the subsequent debates on whether brain training improves IQ – including the latest study on placebo effects – equate Gf with general intelligence.

But there is much more to intelligence than Gf!

 

What do professionally administered IQ tests measure?

Perhaps the most established full-scale IQ test is currently the WAIS-IV. You can see in the diagram below that the Gf matrix reasoning test is just one of three ‘PRI’ (Perceptual Reasoning Index) measures.

WAIS-IV IQ Test
Note that the ‘Working Memory Index’ is another key element of intelligence in this test. It is working memory that is trained by the dual n-back and subsequent developments such as those found in i3 Mindware.

How do research scientists define general intelligence?

Perhaps the most authoritative classification of the sub-factors of intelligence is the CHC theory of cognitive abilities. The CHC model draws on the Cattell-Horn (Gf-Gc) and Carroll (Three-Stratum) theories of intelligence, which had previously established themselves as the strongest candidates for understanding general intelligence (g) scientifically.

Structure of Intelligence - CHC theory
The 7 subfactors of intelligence are:
  • Gf (fluid reasoning)
  • Gc (crystallized intelligence – knowledge and know-how)
  • Gwm (working memory)
  • Gv (visual processing)
  • Ga (auditory processing)
  • Glr (long term storage and retrieval)
  • Gs (processing speed)

So these 7 factors of IQ should inform how we interpret any claims about whether brain training increases intelligence.

But the scientific controversy about whether ‘brain training increases intelligence’ is narrowly confined to the ‘fluid reasoning’ factor of IQ as measured by matrices tests.

It turns out that while the evidence for the effectiveness of brain training is arguably controversial for fluid reasoning – although see below – the evidence is strong and conclusive for working memory (Gwm), another critical factor of IQ. Working memory is in fact the factor that Kevin McGrew regards as more critical to general intelligence than fluid reasoning.

So let’s look at the evidence.

What kind of evidence?

A ‘gold standard’ for scientific research is a double-blind, placebo-controlled experiment. You take a group of people and randomly assign them to two groups: an experimental/treatment group that gets the brain training, and a control group that gets some other activity (e.g. a reading task or crosswords). After the training period, you measure everyone’s IQ and compare the group averages to see whether the brain training has had an effect relative to the control. That’s the ‘placebo-controlled’ bit.
placebo controlled experiment
You also make sure that neither group knows whether they are in the ‘special’ brain training group, so that you don’t get one-sided expectation effects – that’s the ‘double-blind’ bit.
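To make this design concrete, here is a minimal simulation sketch (the group size, gain values and noise level are hypothetical, purely for illustration): participants are randomly assigned to a training group or an active control group, and the analysis compares the average pre-to-post change between the groups.

```python
import random
import statistics

def run_trial(n_participants=60, training_effect=3.0, placebo_effect=1.0, noise_sd=5.0):
    """Randomly assign participants to training vs. active control and
    compare mean pre-to-post IQ change (all effect values are hypothetical)."""
    ids = list(range(n_participants))
    random.shuffle(ids)                       # random assignment
    half = n_participants // 2
    training, control = ids[:half], ids[half:]

    # Simulated change scores: training adds a real gain on top of the shared placebo effect
    training_gains = [training_effect + placebo_effect + random.gauss(0, noise_sd)
                      for _ in training]
    control_gains = [placebo_effect + random.gauss(0, noise_sd) for _ in control]

    return statistics.mean(training_gains) - statistics.mean(control_gains)

print(f"observed group difference: {run_trial():.1f} IQ points")
```

Because both groups share the placebo component, the group difference estimates only the genuine training effect; this is exactly why the control condition matters.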

But a single report using this gold standard is not enough.

  • Sometimes there aren’t enough participants in the study to reach a firm conclusion.
  • And researchers tend to publish only when there is a positive result.
  • Different labs have different biases – some are pro brain training, some are against.
  • You may even get data-manipulation to support an existing bias.
  • Sometimes, purely by chance, you get a ‘false positive’ result – the data suggest a real effect of the brain training when in reality there isn’t one.
  • And sometimes you get a ‘false negative’ result – the data suggest there is no effect of the brain training when in reality there is one.

So you need replications – repeats of the experiment – in different labs. And the more the better. And all these replications need to be analysed statistically to draw a more sound conclusion. That’s what a ‘meta-analysis’ gives you.

 

Meta-analyses

A meta-analysis systematically assesses all the well-designed, peer-reviewed studies of a particular intervention, sifting through them in an attempt to find reliable, valid estimates of the size of the effect, if there is one at all.
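As a rough illustration of what that statistical pooling involves, the sketch below computes a simple fixed-effect (inverse-variance weighted) average of study effect sizes; the three studies and their numbers are invented for illustration, and published meta-analyses typically also use random-effects models, moderator analyses and publication-bias corrections.

```python
# Fixed-effect meta-analysis: pool effect sizes weighted by inverse variance,
# so larger, more precise studies count for more. Values are illustrative only.
studies = [
    {"d": 0.45, "se": 0.20},   # Cohen's d and its standard error
    {"d": 0.30, "se": 0.15},
    {"d": 0.60, "se": 0.25},
]

weights = [1 / s["se"] ** 2 for s in studies]   # precision = 1 / variance
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect size d = {pooled_d:.2f} (SE = {pooled_se:.2f})")
```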

It’s easy to identify the meta-analyses that have been published in recent years on the effectiveness of working memory or executive control brain training, and the conclusions clearly support the claim that brain training improves intelligence.

Karbach and Verhaeghen published a meta-analysis of brain training studies in 2014, looking at broad cognitive abilities, and they found evidence for training gains in:

  • Executive control & attention
  • Fluid reasoning (Gf)
  • Episodic memory – a type of long term memory (Glr)
  • Working memory (Gwm)
  • Processing speed (Gs)

To convert the standardized ‘effect size’ of the brain training into IQ points, you multiply it by 15 (one standard deviation on the IQ scale), so the gains are typically in the 5-6 point range on standardized tests.
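As a quick worked conversion (assuming the standard IQ scale with a standard deviation of 15; the example effect sizes are illustrative):

```python
IQ_SD = 15  # standard deviation of the standardized IQ scale

def effect_size_to_iq_points(d, sd=IQ_SD):
    """Convert a standardized effect size (in SD units) into IQ points."""
    return d * sd

# e.g. effect sizes of roughly 0.35-0.40 SD correspond to about 5-6 IQ points
for d in (0.35, 0.40, 0.63):
    print(f"d = {d:.2f} -> {effect_size_to_iq_points(d):.1f} IQ points")
```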

Schwaighofer and colleagues’ 2015 meta-analysis reports substantial verbal and spatial short-term memory gains and verbal and spatial working memory gains following brain training – both within a few days of training and at 6-12 month follow-up.

 

The visuospatial working memory gain of 0.63 is equivalent to 0.63 × 15 ≈ 9.5 points. Long-term visuospatial working memory gains from training equate to 6.5 points. The fact that there are still Gwm gains after 6 months tells us we’re looking at long-term neuroplasticity change, not short-lived ‘placebo bursts’!

Melby-Lervåg & Hulme’s 2013 meta-analysis also found that working memory training resulted in real visuo-spatial and verbal working memory (Gwm) gains, even though it did not find fluid reasoning (Gf) gains.

 

What about fluid reasoning (Gf) gains?

Au and colleagues’ 2015 meta-analysis looked specifically at dual n-back training effects on fluid reasoning (Gf). They reported a 2-3 point increase in experiments with active controls and a 7-8 point Gf increase in experiments with passive controls. ‘Active control’ groups did another cognitive task while ‘passive control’ groups did nothing until follow-up testing.

They concluded:

“We urge that future studies move beyond attempts to answer the simple question of whether or not there is transfer [from training to increases in fluid reasoning] and, instead, seek to explore the nature and extent of how these improved test scores may reflect true improvements in intelligence that can translate into practical, real-world settings.” Jacky Au and colleagues, University of California, April 2015

After this meta-analysis came out, the skeptics Melby-Lervåg & Hulme (2015) argued that this conclusion wasn’t justified. They argued that the apparent fluid reasoning gains could be better explained as a placebo (expectation) effect – as seen in the dramatic difference between active vs passive controls.

But Au and colleagues then countered this criticism, pointing out that if the difference were due to placebo expectations, you’d expect the active control groups to always outperform the passive control groups – but the opposite seems to be the case, and they concluded for a second time:

“We demonstrate that there is in fact no evidence that the type of control group per se moderates the effects of working memory training on measures of fluid intelligence and reaffirm the original conclusions.”

 

Brain imaging studies

What is often missing from skeptical scientists’ arguments is the work that neuroscientists are doing.

There are now many brain imaging studies showing consistent Fronto-Parietal Network (FPN) neuroplasticity effects from working memory brain training, such as the dual n-back (e.g. Thompson et al., 2016; Metzler-Baddeley et al. 2016; Kundu et al., 2015). The FPN is a key network underlying intelligent, goal directed action and learning.

In a recent neuroimaging study from MIT, Harvard and Stanford, Thompson and colleagues (2016) found:

“[Dual n-back] training differentially affected activations in two large-scale frontoparietal networks thought to underlie working memory: the executive control network and the dorsal attention network. …Load-dependent functional connectivity both within and between these two networks increased following training, and the magnitudes of increased connectivity were positively correlated with improvements in task performance. These results provide insight into the adaptive neural systems that underlie large gains in working memory capacity through training.”

For another recent example, Metzler-Baddeley and colleagues (2016) found that working memory training resulted in increases of cortical thickness in right frontal and parietal cortex.

The consensus from this cognitive neuroscience evidence (reviewed in detail here) favours the claim that programs of working memory brain training result in long-term neuroplasticity change in the key intelligence-related brain network – the Fronto-Parietal Network.

 

Figure: Left – brain regions activated during the 2-back task. Right – resting-state functional connectivity change from working memory training. Below – resting-state functional connectivity change (DMN) from working memory training (Takeuchi et al., 2013).

 

Summary

The objective of this article was to clarify some key definitions that help us evaluate the claims in the ‘does brain training increase intelligence’ debate, and then take a closer look at the evidence.

We’ve clarified that intelligence – measured as a full-scale IQ score – is a broader concept than just fluid reasoning (Gf). Working memory is a key component of general intelligence – it is not simply another cognitive ability. It is core to IQ. And the evidence clearly supports the claim that working memory brain training (e.g. i3 Mindware, Brain Workshop or the CogMed program) results in long-term gains in short-term and working memory.

Most of the recent meta-analyses also favour the claim that brain training results in fluid reasoning (Gf) gains, above and beyond placebo effects.

So can brain training – specifically working memory or executive control training such as i3 – increase intelligence? Looking at all the evidence to date, the scientific answer is affirmative.

 

I am a cognitive scientist specializing in health, resilience and performance (HRP) brain training interventions and self-quantification. I have a joint Ph.D. in cognitive psychology and neuroscience from the Center for the Neural Basis of Cognition (Carnegie Mellon/Pittsburgh), and for a number of years I was a researcher and lecturer at Cambridge University.

i3 Mindware IQ App