A few months ago, several readers alerted me to a short article
from The Economist (14 Oct 2010), titled “Learning difficulties”.
Reporting on research by Daniel Oppenheimer et al. (Princeton),
the article claimed that “making something hard to read means
it is more likely to be remembered”. Being someone who goes
to great lengths to make every piece of text easy to read, I had
reasons to be distressed by such a shocking “scientific” finding,
so it was with some apprehension that I started to read the article.
Alas, the only bad news to me was how it exemplifies yet again
all that is wrong with empirical research into learning and
communication, and with the reporting of this research in the media.
As stated in The Economist, 28 volunteers were asked to learn,
from written texts, about three species of extraterrestrial alien,
each with seven features. Half of the volunteers received texts
set in “difficult-to-read fonts” (Comic Sans MS and Bodoni MT,
both at 12 pt and in 75% gray), while the rest got them in Arial
at 16 pt, “which tests have shown is one of the easiest to read”.
In the end, both groups were asked questions about the aliens,
such as What is the diet of the Pangerish? and What color eyes
does the Norgletti have? As you can guess, the Comic/Bodoni
folks did better on the test (86.5%) than the Arial ones (72.8%).
So what is wrong with this research? At least three aspects of it.
First, there is reasonable doubt about the assumed readability
of the selected fonts. Personally, I would not dream of reading
a printed text at 16 pt in any font (and while I have never been
exactly fond of Comic Sans MS, I have no problem with Bodoni).
Ah, but “tests have shown” that Arial was “one of the easiest”,
have they not? Well, I demand to see. Like many text features,
readability cannot be dissociated from its context. What text
was used for the tests? Who were the readers (and how many)?
How was readability assessed? And above all else, how exactly
was the text set: which font size, line length, and line spacing?
The numerous confounding factors make it almost impossible
to draw general conclusions about the relative readability of fonts.
Second, the Economist article blurs the line between correlation
and causality, and ignores variability—two classic shortcomings.
So 14 people reading a text in one font score better on average
on a test than 14 other people reading the same text in another font.
Can we conclude that the supposed lower readability of the font
is responsible for the superior average score? Unfortunately not.
The effect might be attributable to any of a number of artifacts,
one of them being the novelty of reading a less familiar font.
The article's conclusion that “The lesson … is to make textbooks
harder to read, not easier” is indefensible from the data shown.
Third, and in my eyes most important, the very task under test
reflects a sad view of what learning is all about. Having to memorize
features of unknown species out of context and for no reason:
is this what a biology lesson is supposed to be? Is this a model
of instructional text, one underpinning advice for all textbooks?
Is this what The Economist means when it mentions “think hard
about what [people] are shown” in order to “remember it better”?
I would not want to write texts like that anyway, so I can safely
discard this research and its overstated conclusions as irrelevant.
I was very reassured by the common-sense reactions of readers
on the website of The Economist and in other online forums.
In contrast, I am worried about the many other readers who will
have happily accepted the misleading lesson as an exciting truth.
“Use a hard-to-read font” may thus soon join the broad collection
of our contemporary legends and other conventional absurdities,
along with the many Eskimo words for snow and the 7±2 myth.
I hate to think of how such an ill-conceived idea will be applied.
For example, is it a wish to make its site hard to read that drove
the Princeton Department of Psychology to display its menu bar
at an angle, with items in all-uppercase on tightly spaced lines?
Whatever the motivation, and no matter what empirical research
might claim about it, I certainly do not find it an inviting set of links.