The complexities of modern life can be overwhelming. A constant bombardment of information is the new norm.
Parsing through seemingly divergent pieces of information about topics of interest is a necessary skill, but most Americans haven’t had any formal education in how to do this. In my experience, many people are turned off by even hearing the term research (#totallyanecdotal).
Well, I am not a research methodologist or statistician, but I have a reasonably strong background in research (see this amazing dissertation). Seriously, as I talk to people in my family, community, and especially online in parenting groups, I realize there are fundamental misunderstandings about science. The topics are very complicated, but the basics can go a long way to helping you become a better consumer of information.
Consider two potential “research” headlines you may see on any given day:
1. “Coffee Is Going to Kill You”
2. “Drinking 5 Cups of Joe Daily Will Increase Your Longevity”
What gives? Well, not all research is created equal. There is no Declaration of Independence: Research Edition. In fact, we hold these truths to be self-evident: that we need to take many "research findings" with a grain of salt. So, let's do a quick lesson in research methods. Yep, grab the coffee to stay awake. (Hopefully, it won't kill you.)
Case Studies, Naturalistic Observations, and Surveys
These are known as descriptive methods, because they simply describe behavior. Interesting? Often… yes. Can we ever conclude causality? Nope. They often help initiate a new field of study, which can be fascinating. They are also helpful as a starting point for future research that may dive deeply into understanding the relationships between variables.
Some of you have likely heard the statement: correlation does not equal causation. If not, please remember it now. It is a vital component of our ability to comprehend research at the most basic level.
Most research reported in the popular press is correlational. What does this mean? If two variables, such as coffee and learning, are correlated, it means they vary together. So, if drinking more coffee is said to be related to better learning, the researchers first and foremost must define "more coffee" (three cups or 30 cups?) and "better learning" (a 10-question quiz on Facebook or SAT results?).
Then, if the researchers do find a relationship, all we know is that there is a relationship between coffee and learning. It could be that coffee drinking increases learning, or that better students drink more coffee, or it could be that some third variable (e.g., education, income level, gender, etc.) is causing both. A relationship driven by such a lurking variable is known as a spurious correlation. (Click here for many examples of spurious correlations, such as divorce rates in Maine correlating with per capita consumption of margarine.)
So, how do we understand if a variable indeed “causes” another variable? Well, it is a bit challenging to understand, and books are written on the topic, but here is a quick explanation: Scientists must conduct an experiment to infer causality.
Two things are necessary for a study to be called an experiment:
1. Random assignment of participants to either an experimental or control group
2. Manipulation of an independent variable
So, continuing our example from above, researchers would need to sample people from a given population under study, then use a statistical method to randomly assign them to either the experimental or control group (coin flipping works). Then, the experimental group would be the group to receive the independent variable, which would be “more coffee” in the previous example. There are many complexities to conducting good experiments, but this is the gist.
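The assignment step above can be sketched in a few lines of Python. The participant names are placeholders, and shuffling then splitting the pool in half is just one simple way to randomize (it is equivalent to repeated coin flips, but guarantees equal group sizes):

```python
import random

# Hypothetical participant pool; the names are placeholders.
participants = [f"person_{i}" for i in range(1, 21)]

# Shuffle the pool, then split it in half: each person lands in
# exactly one group, and chance alone decides which.
random.shuffle(participants)
half = len(participants) // 2
experimental = participants[:half]  # receives the manipulation ("more coffee")
control = participants[half:]       # receives no extra coffee (or a placebo)

print(len(experimental), len(control))
```

Because chance alone decides group membership, the two groups should be similar on average in every other respect (age, sleep habits, prior coffee intake), so a difference in the outcome can be attributed to the manipulated variable.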
It is often extraordinarily challenging to find the original research to determine what type of design was used. We are presented with soundbites and snippets of information that usually just confirm our biases, and our brains tune out opposing viewpoints. In addition, we have to ensure there aren't glaring conflicts of interest within the studies themselves. Even more challenging is determining whether we are seeing all of the research on a particular topic. (Looking at you, Big Sugar.)
Another issue is the need for replication in science. Often, the popular media takes a small finding and blows it up to a vastly exaggerated idea. See neuroscientist Molly Crockett’s excellent TED Talk on this problem.
Currently, the field of psychology is going through a bit of a crisis. There are many thoughts on why this may be, but much of it has to do with a lack of replication — many well-known findings have failed to reproduce when the studies were rerun.
How can we combat being taken in by clickbait headlines and outlandish claims in the popular media? Be skeptical. If it sounds too good (or bad) to be true, take it with a grain of salt (which is apparently good for you again). And, please, please, please remember that correlation does not equal causation.
This is a simplified view of the topic, but I am hopeful it will encourage people to stop and think about research being reported in the mass media. If you would like to learn more, check out the following TED Talks: Battling Bad Science, Can You Spot the Problem with These Headlines?, and Not All Scientific Studies Are Created Equal.
Originally published at https://www.psychologytoday.com.