Cognitive Dissonance in Science and Medicine – An Anecdote About How to Detect Pseudoscience, and Why Smart People Fall For It

It has been argued that critical, scientific thinking is not a natural characteristic of humans.(Achenbach 2015) The prevalence of superstitions in every human culture is a testament to this fact. Modern societies are plagued by the superstitions of religions, astrology, new age medicine, and Deepak Chopra (as a master of vacuous nonsense, I thought he deserved his own category). Despite the human propensity for magical thinking, science has risen to phenomenal heights, revealing to us amazing facts about the universe and its history. In conjunction with human imagination and ingenuity, science has given rise to mind-blowing technologies, like computers, cell phones, jetliners, and spacecraft. And yet, even in the twenty-first century – the technological age – “smart” people are not immune to the lure of superstition. Francis Collins, the director of the National Institutes of Health and former director of the National Human Genome Research Institute, is an evangelical Christian. William Newsome, neuroscientist and member of the National Academy of Sciences, is a “serious Christian.”(Newsome 2006) People who consider themselves to be wholly rational might ask themselves, “Okay, how are these people, who are undeniably smarter than I am, able to reconcile their science with their superstition?” It is a good question. Science and superstition seem to require thinking patterns that are polar opposites. And yet there is no shortage of scientists who exhibit irrational thinking patterns. Michael Shermer addresses this phenomenon in his book, Why People Believe Weird Things, specifically in the last chapter, entitled “Why Smart People Believe Weird Things.”(Shermer 2002)

An Anecdote

I have this former friend – a bright, beautiful young lady with aspirations to get her Ph.D. in neuroscience – who lost her shit one day when I went on a rant about naturopathy. Considering how smart she seemed, I was caught off guard. I later found that she worked for Brain State Technologies, a company that sells biofeedback materials, specifically “brainwave entrainment software.” Turns out that this… product… is apparently marketed to and used by naturopaths to treat all sorts of disorders, from insomnia to PTSD – all without any convincing evidence or acceptance by the science-based medicine community.

After a little research into the company via its website and publications, the red flags started popping up. The website claims that the BRAINtellect® technology has helped clients get better sleep, find relief from and improved adaptability to stress, achieve significant improvement in memory and recall, and increase performance at work, in athletics, in learning, and more. The technology itself bears all of the hallmarks of an elaborate placebo, but even more revealing was the published research by Brain State researchers and their collaborators at Wake Forest. As always, when something sounds too good to be true, it probably is.

I uncovered six published research articles and one review. There was also one letter penned by a few of the Brain State and Wake Forest researchers, namely Charles Tegeler, Sung Lee, and Hossam Shaltout, to the Journal of the American Medical Association Psychology (JAMA Psychology). The six research articles were all published in open access journals, all of which have 2014 impact factors between 2.0 and 3.7, and SCImago Journal and Country Ranks (SJRs) ranging from 0.95 to 1.7. Impact factors and SJRs are indicators of the relative contribution of a journal to its field and to science in general. A journal’s impact factor for a given year is the number of citations received that year by the articles it published in the previous two years, divided by the number of those articles. When assessing the importance of a journal to its respective field, you can compare the journal’s impact factor with those of the top journals in the field. Since the articles published by the Brain State Tech/Wake Forest research group are arguably related to the fields of neuroscience and psychology, I compared the impact factors of the journals in which the researchers published with those of the top journals in neuroscience and psychology. The three top research journals (not review journals) in neuroscience were Nature Neuroscience, Neuron, and Biology of Mood and Anxiety Disorders, with 2014 impact factors of 13.8, 12.0, and 7.0, respectively. The 2014 SJRs for these respective journals were 10.5, 10.2, and 7.1. The top three research journals in experimental psychology were the Journal of Experimental Psychology, Cognitive Psychology, and the Journal of Memory and Language; the respective 2014 impact factors were 5.3, 4.9, and 4.4; the respective 2014 SJRs were 3.4, 2.9, and 2.4. For perspective, this means that in the four-year span in which these articles were published, the Brain State/Wake Forest researchers could not manage to get even one article into a second-tier journal, let alone a top-tier one.
In addition, open access journals like these may be less selective than their subscription counterparts.
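The impact factor arithmetic described above is simple enough to sketch. Here is a minimal illustration using hypothetical numbers, not actual data for any journal mentioned here:

```python
# Illustrative sketch of a two-year impact factor calculation.
# All figures are hypothetical, not real journal data.

def impact_factor(citations_this_year, articles_prev_two_years):
    """Citations received this year to articles the journal published
    in the previous two years, divided by the count of those articles."""
    return citations_this_year / articles_prev_two_years

# A journal whose 2012-2013 articles drew 450 citations in 2014,
# having published 200 articles in that window:
print(impact_factor(450, 200))  # 2.25 -- in the 2.0-3.7 range noted above
```

This makes the comparison above concrete: a journal like Nature Neuroscience, with an impact factor near 13.8, averages roughly six times as many recent citations per article as the journals these researchers published in.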

Okay, so these guys published in so-so journals that aren’t known for articles that make a splash. But what really tells you how impactful individual papers are is the citation count – how many citations, and by whom. I noticed that none of the articles by the Brain State group is well cited; the highest citation count for any single article is only six.(Gerdes, Gerdes et al. 2013) Just for comparison, one of the postdocs at my last job had papers with anywhere from 20 to 42 citations, some of which had been published for less than a year.

There’s a good article in Popular Mechanics that explains how to determine if an article or research group is bullshit. It is entitled, “6 Warning Signs That a Scientific Study is Bogus.”(Fecht 2014) I pulled the following paragraph from it:

“Do the Researchers Cite Their Own Papers? If so, this is a red flag that they are promoting views that fall outside the scientific consensus.”(Fecht 2014)

Let’s go back to Brain State’s/Wake Forest’s publications on brainwave entrainment. I checked all of the citations those articles received and found that all but one came from within the collaborative team itself. In other words, the only people citing those papers were the authors themselves; every paper but one was written by some combination of the same 8-10 people. The one outside paper cited a Brain State article only in passing, as one of several papers acknowledging the need for research into interventions for PTSD.(Lee, Gerdes et al. 2014, Ginsberg, Pietrabissa et al. 2015) It did not engage with the content of the Brain State paper at all, suggesting that the citing authors either didn’t read it or didn’t care what it said.
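The check I did by hand amounts to a simple author-overlap test: a citation counts as a self-citation if the citing paper shares any author with the cited one. A minimal sketch, with hypothetical author sets standing in for real citation data:

```python
# Self-citation check: does the citing paper's author list overlap
# the cited paper's author list? Data below is hypothetical.

def is_self_citation(cited_authors, citing_authors):
    """True if the two papers share at least one author."""
    return bool(set(cited_authors) & set(citing_authors))

cited = {"Tegeler", "Lee", "Gerdes"}   # authors of the cited paper
citing_papers = [
    {"Lee", "Shaltout"},               # shares an author -> self-citation
    {"Ginsberg", "Pietrabissa"},       # no overlap -> independent citation
]

self_cites = sum(is_self_citation(cited, c) for c in citing_papers)
print(f"{self_cites} of {len(citing_papers)} citations are self-citations")
# prints "1 of 2 citations are self-citations"
```

Run over the Brain State papers, this test flags every citation but one – exactly the pattern the Popular Mechanics piece warns about.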

Takeaway

The neurologist Steven Novella has written articles on Neurologica about neurofeedback and vendors of brainwave entrainment software, and his conclusion is basically that it’s all pseudoscientific quackery – an elaborate marketing ploy to con scientifically illiterate naturopaths into purchasing training and materials to treat placebo-susceptible patients.(Novella 2007, Novella 2008)


This is a lesson about the process of science. First of all, just because there is literature on something doesn’t mean there is true EVIDENCE that it works. Second, if something is not widely accepted by the medical community, then more likely than not it just doesn’t work, or it relies on premises that run counter to fundamental scientific principles, or it contradicts robust evidence in the field. In this case, the evidence for the efficacy of biofeedback treatments is so sparse that their accepted applications remain very limited.

And finally…

Knowing how science works can protect you, as a potential client or patient who might be throwing away precious time and money on garbage that doesn’t work. It can also protect you as an ambitious young scientist from being drawn into the world of snake oil and pseudoscience, effectively alienating you from the scientific community.

Science Literacy

While science is supposed to be an impartial process of gaining knowledge, humans who conduct science are fallible. That’s why we have peer review. Just remember that peer review isn’t over once something is published. Either ideas stand on their merit, or they fall, sometimes grasping desperately, and persistently, for purchase.

What does this mean for the people in your life who fall prey to irrational thinking? It’s hard to tell whether it makes a huge difference how “smart” or educated a person is; when someone is in the grip of irrational ideas, they tend to hold to them, evidence be damned. When it comes to ideas, I believe that most people are unable or unwilling to let go, clinging to them while vehemently condemning any and all criticism. The “sunk cost bias,” or sunk cost fallacy, may go some way toward explaining this: it occurs when someone has invested so much in an effort or idea that he or she cannot abandon it, no matter how much evidence points to its inevitable failure. Abandoning the idea means acknowledging a significant lost investment.


My friend has invested in neurofeedback. She has invested years of her life, and a significant amount of intellectual effort. I ask myself what she would think, how she would react, if she read my blog, and my conclusion is this: she would cast the evidence aside and brand me the biased, irrational one. When it comes to resolving cognitive dissonance, only the most intellectually honest succeed.



Achenbach, J. (2015). “Why Do Many Reasonable People Doubt Science?” National Geographic.

Fecht, S. (2014). “6 Warning Signs That a Scientific Study is Bogus: The power of peer review, correlation vs. causation, and more ways to tell that a study isn’t what it appears to be.” Popular Mechanics.

Gerdes, L., et al. (2013). “HIRREM™: a noninvasive, allostatic methodology for relaxation and auto‐calibration of neural oscillations.” Brain and Behavior 3(2): 193-205.

Ginsberg, J., et al. (2015). “Treating the mind to improve the heart: the summon to cardiac psychology.” Frontiers in Psychology 6.

Lee, S. W., et al. (2014). “A bihemispheric autonomic model for traumatic stress effects on health and behavior.” Frontiers in Psychology 5.

Newsome, W. (2006). “Of two minds: A neuroscientist balances science and faith.” Stanford Medicine Magazine.

Novella, S. (2007). “Neurofeedback and the Need for Science-Based Medicine.” Neurologica.

Novella, S. (2008). “Brainwave Entrainment and Marketing Pseudoscience.” Neurologica.

Shermer, M. (2002). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time, Macmillan.





Will humans ever develop a science-based society?

I used to have this crazy notion that people paid deference to science and scientists, as a general rule. I was a kid then. Little did I know that there was a whole spectrum of belief systems with which science was incompatible. Little did I know about cognitive impediments like congruence, confirmation, and optimism biases. Little did I know that not just a few people, but most people, spend their entire lives under misapprehensions; and not only do they spend their lives believing things that are demonstrably wrong in the face of clear evidence, they are comfortable with it, even self-assured. I learned these facts about people over many years, and I was slow to learn. I had somehow developed the notion that everyone’s thinking was governed by a drive toward intellectual integrity. When you’re wrong, you’re wrong, and you learn and move on.

The process by which we sort out what’s true and what’s not is the same process by which we conduct science. The knowledge schemas in my brain are constantly being rewritten as I learn new things, especially as my preconceptions are challenged and I have to reconcile ideas that are causing cognitive dissonance. But I’ve learned that this process of developing ever-evolving knowledge schemas is not something that humans are really any good at. In fact, it seems that human knowledge schemas become hard-wired over time. Since the scientific process of acquiring knowledge depends on testing ideas, throwing out or modifying wrong ideas, and embracing new ideas that are consistent with observation, it occurred to me that human brains might be ill-equipped to do science. Furthermore, developing entire science-based societies might be an unfortunate impossibility.

Let’s examine a few specific examples where cognitive biases are impenetrable barriers to scientific thought:

As an environmental scientist, I’m naturally concerned with issues such as global warming, environmental pollution, and so forth. Over the past few decades, thousands of scientific articles have been published on climate change; there are numerous papers elucidating the link between anthropogenic emissions and global temperature anomalies, and since the existence of man-made “global warming” has been well established, there are thousands more papers examining the current and future effects of climate change. In the environmental science community, among people who make a living studying climate, global warming isn’t a controversy. It’s a given. And yet, ask an American conservative about global warming and you’ll likely hear something like, “it’s a hoax,” or “it isn’t man-made,” or “it’s a liberal conspiracy,” or something else along those lines. Why is this? Now I’m not going to entertain for a second that they could be right, because there is no debate among scientists on this topic. Why, if there is overwhelming scientific consensus on the topic, do conservatives overwhelmingly come down on the side that scientists are not on? Well, it clearly comes down to cognitive biases. One study into public skepticism about climate change suggested that the correlation between conservatism and climate change denial was rooted in people’s “core values” and “worldviews.”1 This is nothing new. One social cognition theory suggests that affinity for a particular group (e.g., conservatives, Republicans, etc.) influences motivated cognition, such that people develop a kind of information filter that prevents the processing or consideration of evidence or ideas that contradict the beliefs of the group.2 In other words, if you identify as a conservative because of social and familial influences, then you are less likely to consider the evidence of climate change, because there is a narrative within your social group that climate change is a hoax, or that it isn’t man-made.

It might seem like I’m picking on conservatives disproportionately with respect to their denial of science and lack of intellectual curiosity or integrity. It’s true that self-identified liberals might fare no better on cognitive reflection, or at objectively challenging their own predispositions when confronted with evidence.2 It is undeniable that the people who comprise 9/11 conspiracy theorist groups, anti-vaccine groups, anti-GMO groups, and alternative medicine proponent groups are often very liberal. Yet it’s also true that anti-environmentalists (climate change deniers, ozone hole deniers, acid rain deniers), and opponents of the Big Bang and the theory of evolution, are overwhelmingly conservative. Neither liberals nor conservatives have a monopoly on science illiteracy or motivated reasoning.

So what does this mean in terms of working toward a science-based society? Science has always pushed back against groups of people whose judgments are hindered by motivated reasoning, and science will continue to push back against them in the future. People are slow to change; that’s just the way humans are. I believe we may be wired to some degree for authoritarianism,3 and this fact alone is enough to undermine our ability as a species to embrace scientific thinking. So will humans ever live in the kind of technological, enlightened society portrayed in the Star Trek universe? I think it’s unlikely. Perhaps even impossible. If humans ever do reach for the stars and develop the ability for interplanetary travel, I think it’s much more likely that the future will closely resemble the present, with future space-traveling humans exhibiting the same tribal, authoritarian influences in their motivated reasoning. Some people will still deny stellar fusion, embracing instead the debunked alternative explanations of plasma cosmology; some will still think vaccines are evil or dangerous; some will embrace conspiracy theories that have no evidential basis at all; yet others will likely attend Jedi church services, earnestly believing that one day Master Yoda will return and restore order to the galaxy, as foretold by the prophet George Lucas in days of old.


1          Poortinga, W., Spence, A., Whitmarsh, L., Capstick, S. & Pidgeon, N. F. Uncertain climate: An investigation into public scepticism about anthropogenic climate change. Global Environmental Change 21, 1015-1024 (2011).

2          Kahan, D. M. Ideology, motivated reasoning, and cognitive reflection: an experimental study. Judgment and Decision Making 8, 407-424 (2012).

3          Kessler, T. & Cohrs, J. C. The evolution of authoritarian processes: Fostering cooperation in large-scale groups. Group Dynamics: Theory, Research, and Practice 12, 73 (2008).