My knowledge as anti-knowledge

During my more pessimistic moments, I grow increasingly skeptical about our ability to know anything.

Take science. Academia is supposed to be our most reliable source of knowledge, right? And yet, a number of fields seem to be failing us. No result should really be believed before it has been replicated several times. Yet of the 45 most highly regarded studies within medicine suggesting effective interventions, 11 haven’t been retested, and 14 have been shown to be convincingly wrong or exaggerated. John Ioannidis suggests that up to 90 percent of the published medical information that doctors rely on is flawed – and the medical community has for the most part accepted his findings. His most cited paper, “Why Most Published Research Findings Are False”, has been cited almost a thousand times.

Psychology doesn’t seem to be doing much better. Last May, the Journal of Personality and Social Psychology refused to publish a failed replication of a parapsychology paper it had published earlier.

The reason Smith gives is that JPSP is not in the business of publishing mere replications – it prioritises novel results, and he suggests the authors take their work to other (presumably lesser) journals. This is nothing new – flagship journals like JPSP all have policies in place like this. […] …major journals simply won’t publish replications. This is a real problem: in this age of Research Excellence Frameworks and other assessments, the pressure is on people to publish in high impact journals. Careful replication of controversial results is therefore good science but bad research strategy under these pressures, so these replications are unlikely to ever get run. Even when they do get run, they don’t get published, further reducing the incentive to run these studies next time. The field is left with a series of “exciting” results dangling in mid-air, connected only to other studies run in the same lab.

This problem is not unique to psychology – all fields suffer from it. But while we are on the subject of psychology, the majority of its results are from studies conducted on Western college students, who have been presumed to be representative of humanity.

A recent survey by Arnett (2008) of the top journals in six sub-disciplines of psychology revealed that 68% of subjects were from the US and fully 96% from ‘Western’ industrialized nations (European, North American, Australian or Israeli). That works out to a 96% concentration on 12% of the world’s population (Henrich et al. 2010: 63). Or, to put it another way, you’re 4,000 times more likely to be studied by a psychologist if you’re an undergraduate at a Western university than a randomly selected individual strolling around outside the ivory tower. Yet cross-cultural studies indicate a number of differences between industrialized and “small-scale” societies, in areas such as “visual perception, fairness, cooperation, folkbiology, and spatial cognition”. There are also a number of contrasts between “Western” and “non-Western” populations “on measures such as social behaviour, self-concepts, self-esteem, agency (a sense of having free choice), conformity, patterns of reasoning (holistic v. analytic), and morality”.

Many supposedly “universal” psychological results may actually only be “universal” to US college students.

In any field, quantitative studies require intricate knowledge of statistics and a lot of care to get right. Academics are pressured to publish at a fast pace, and the reviewers of scientific journals often have relatively low standards. The net result is that researchers have neither the time nor the incentive to conduct their research with the necessary care.

Qualitative research doesn’t suffer from this problem, but it does suffer from the obvious problems of limited sample groups and difficult-to-generalize findings. Many social sciences that rely heavily on qualitative methods outright state that carrying out an objective analysis, where the researcher’s personal attributes and opinions don’t influence the results, is not just difficult but impossible in principle. At least in the quantitative sciences, it may be possible to convincingly prove results wrong. In the qualitative sciences, there is much more wiggle room.

And there’s plenty of room for the wiggling to do a lot of damage even in the quantitative sciences. From the previously mentioned article on John Ioannidis:

Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process, in which journals ask researchers to help decide which studies to publish, to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.
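The core of this argument can be made concrete with the positive-predictive-value calculation from Ioannidis’s 2005 paper. A minimal sketch, simplified to ignore bias and multiple competing teams, and with illustrative input numbers of my own choosing rather than figures from the paper:

```python
def ppv(prior_odds, power, alpha=0.05):
    """Probability that a claimed (statistically significant) finding
    is actually true, per the basic model in Ioannidis (2005).

    prior_odds: ratio of true to false hypotheses tested in the field
    power:      probability of detecting a true effect (1 - beta)
    alpha:      false positive rate of the significance test
    """
    true_positives = power * prior_odds
    false_positives = alpha  # per unit of false hypotheses tested
    return true_positives / (true_positives + false_positives)

# A well-powered trial in a field where 1 in 11 tested hypotheses is true:
print(round(ppv(prior_odds=0.1, power=0.8), 2))   # 0.62
# An underpowered exploratory study testing long-shot hypotheses:
print(round(ppv(prior_odds=0.02, power=0.2), 2))  # 0.07
```

The point of the model is that even flawless statistics cannot rescue a field that tests mostly unlikely hypotheses with underpowered studies – the majority of its “positive” findings will be false.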

Of course, none of this is to say that science is good for nothing. I’m typing this on a computer that obviously works, in an apartment built by human hands, surrounded by countless technological widgets. The more closely related a science is to a branch of engineering, the more likely it is to be basically right: its ideas are constantly and rigorously tested in a way that actually incentivizes being right, not just publishing impressive-looking studies. The farther a science is from engineering and from practical applications that can be tested at once, the more likely it is to be full of nonsense.

Take governmental institutions. Academia, at least, still has some incentive to seek the truth. Politicians, meanwhile, have an incentive to look good to voters, who by and large do not care about the truth. The issues that citizens feel most strongly about tend to be the issues they know the least about, and often they do not even know the political agendas of the parties or politicians they vote for. For the average voter, who has very little influence on actual decisions but who can take a lot of pleasure in believing pleasant things, remaining ignorant is actually a rational course of action. Statements that sound superficially good or that appeal to the prejudices of a certain segment of the population matter far more to politicians than the truth does. Often, even considering a politically unpopular opinion to be possibly true is thought to be immoral and suggestive of a suspicious character.

And various governmental institutions, from academics funded by government grants to supposedly neutral public bodies, are all subject to pressures from above to sound good and produce pleasing results. The official recommendations of any number of government agencies can be the result of political compromise as much as anything else, and researchers are routinely hired to act as the politicians’ warriors. Even seemingly apolitical institutions like schools and the police may fall victim to the pressure to produce good results and start reporting statistics that do not reflect reality. (For a particularly good illustration of this, watch all five seasons of The Wire, possibly the best television series ever made.)

Take the media. Is there any reason to expect the media to do much better? I don’t see why there would be. Compared to academics, journalists are under even more time pressure to produce articles, have even less in the way of rigorous controls on truthfulness, and have even more of an incentive to focus on big eye-catching headlines. Even for journalists who follow strict codes of ethics, the incentives for sloppy work are strong. Anybody with expertise in pretty much any field that’s been reported on will know that what’s written often bears very little resemblance to reality.

Some time ago, there were big claims about how Twitter was powering revolutions and protests in a number of authoritarian countries. Many of us have probably accepted those claims as fact. But how true are they, really?

In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. ‘It is time to get Twitter’s role in the events in Iran right,’ Golnaz Esfandiari wrote, this past summer, in Foreign Policy. ‘Simply put: There was no Twitter Revolution inside Iran.’ The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. ‘Western journalists who couldn’t reach – or didn’t bother reaching? – people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,’ she wrote. ‘Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.’

Take the Internet. Online, we are increasingly living in filter bubbles, where the services we use attempt to personalize the information we read to what they think we want to see. Maybe you’ve specifically gone to the effort of including both liberals and conservatives as your Facebook friends, as you want to be exposed to the opinions of both. But if you predominantly click on the liberal links, then eventually the conservative updates will be invisibly edited out by Facebook’s algorithms, and you will only see liberal updates in your feed. Various sites are increasingly using personalization techniques, trying to only offer us content they think we want to see – which is often the content most likely to appeal to our existing opinions.
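The feedback loop described above is easy to sketch. The following toy model is purely an illustrative assumption – it is not Facebook’s actual ranking algorithm – but it shows how a modest difference in click rates compounds until one side effectively vanishes from the feed:

```python
def feed_shares(click_rates, rounds=50, learning_rate=0.3):
    """Toy personalization loop: each round, a source's ranking score
    grows in proportion to how often this particular reader clicks it.
    Relative differences between sources compound exponentially."""
    scores = {source: 1.0 for source in click_rates}
    for _ in range(rounds):
        for source, rate in click_rates.items():
            scores[source] *= 1 + learning_rate * rate
    total = sum(scores.values())
    return {source: score / total for source, score in scores.items()}

# A reader who clicks liberal links 70% of the time and conservative
# links 30% of the time ends up with an almost purely liberal feed:
shares = feed_shares({"liberal": 0.7, "conservative": 0.3})
# after 50 rounds, the conservative share is well under 1% of the feed
```

Note that the reader never asked to have conservative content removed; the asymmetry emerges from the compounding of their own click behavior.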

Take yourself. Depressed by all of the above? Think you should only trust yourself? Unfortunately, that might well produce even worse results than trusting science. We are systematically biased to misremember events in our own favor, to seek only evidence confirming our existing beliefs, and to interpret everything to our own advantage. Our conscious minds may not have evolved to look for the truth at all, but to choose, out of the various defensible positions, the one that most favors ourselves.

Our minds run on corrupted hardware: even as we think we are impartially looking for the truth, other parts of our brains are working hard to give us that impression while hiding the actual biased thought processes we engage in. We have conscious access to only a small part of our thinking, and have to rely on vast amounts of information prepared by cognitive mechanisms whose accuracy we have no way of verifying directly. Science, at least, has some safeguards in place that attempt to counter such mechanisms – in most cases, we will still do best by relying on expert opinion.

But if you plan to mostly ignore the experts and base your beliefs on your own analysis, you need to not only assume that ideological bias has so polluted the experts as to make them nearly worthless, but also that you are mostly immune to such problems! (Robin Hanson: Against DIY Academics)


Most of the things I know are probably wrong: with each thing I think I learn, I might be learning falsehoods instead. Because the criteria for an idea catching on and for an idea being true are different, the ideas a person is most likely to hear about are also the ones more likely to be wrong. Thus most of the things I run across in my life (and accept as facts) will be wrong.
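This selection effect is just Bayes’ rule applied to idea transmission. A small worked example, with the specific numbers being illustrative assumptions: suppose half of all ideas are true, but a false idea – free to be optimized for appeal rather than accuracy – spreads three times as readily as a true one.

```python
def fraction_true_among_heard(p_true, spread_true, spread_false):
    """Bayes' rule for idea transmission: the share of the ideas you
    actually hear about that happen to be true, given the base rate of
    true ideas and how easily true vs. false ideas spread."""
    heard_true = p_true * spread_true
    heard_false = (1 - p_true) * spread_false
    return heard_true / (heard_true + heard_false)

# Half of all ideas are true, but false ideas spread 3x as readily:
print(fraction_true_among_heard(p_true=0.5, spread_true=0.1,
                                spread_false=0.3))  # 0.25
```

Even with a 50/50 base rate, three quarters of what reaches you is false – the filtering happens entirely in the transmission step.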

And of course, I’m quite aware of the irony in that I have here appealed to a number of sources, all of which might very well be wrong. I hope I’m wrong about being wrong, but I can’t count on it.

(Essay also cross-posted to Google Plus.)

One comment

  1. Tom Arbuz

    Great article!
    The general idea is indeed depressing; it makes you feel that maybe your lifetime effort of accumulating knowledge was wasted, and the only reason you didn’t notice it thus far is that this knowledge is what everyone believes.

