Determining Causality: Research and Resources


Identifying where influence exists and the direction of that influence

Website

Correlation or Causation? - Jon Mueller, North Central College.

Provides several useful activities for teaching the difference between correlation and causation, a link to a description of the two concepts, and popular press articles that give students practice assessing data presented in the mainstream media.

Books

Browne, M.N., & Keeley, S.M. (2010). Are there rival causes? In Asking the right questions: A guide to critical thinking (9th ed., pp. 122-136). Upper Saddle River, NJ: Prentice Hall.

A clearly written overview of the process of determining causation and of sorting through the many possible causes of an event.

Halpern, D.F. (2003). Thinking as hypothesis testing. In Thought and knowledge: An introduction to critical thinking (4th ed., pp. 231-262). Mahwah, NJ: Lawrence Erlbaum.

This chapter situates the act of determining causation within the larger framework of thinking as hypothesis testing. It offers a plainly written description of basic statistical terms and experimental methods (e.g., validity and reliability, dependent and independent variables, populations and samples) that is appropriate for beginners.

Nelson, J. (2005). Cultivating judgment: A sourcebook for teaching critical thinking across the curriculum. Stillwater, OK: New Forums Press.

This sourcebook contains activities that are either applicable in any discipline or specific to a few similar disciplines. For activities relevant to determining causality, see "Activity #8: Weighing the Evidence" and "Activity #22: Just Because."

Saville, B.K., Zinn, T.E., Lawrence, N.K., Barron, K.E., & Andre, J. (2008). Teaching critical thinking in statistics and research methods. In D.S. Dunn, J.S. Halonen, & R.A. Smith (Eds.), Teaching critical thinking in psychology (pp. 149-160). West Sussex, United Kingdom: Wiley-Blackwell.

This chapter focuses on the relationship between critical thinking and statistics in undergraduate education, examining the barriers to integrating the two and how those barriers can be addressed through course activities, course format, and instructional methods.

Articles

Adams, D.S. (2003). Teaching critical thinking in a developmental biology course at an American liberal arts college. International Journal of Developmental Biology, 47, 145-151.

Teaching students what an experimental design must include to be necessary and sufficient support for the scientific claims it generates helps them evaluate the material they learn throughout the course. This method of instruction also teaches students to evaluate the reliability and limits of the evidence for theories and concepts. By actively applying the scientific method to course material, students increase their comprehension, learn to think critically, and learn how to draw valid conclusions.

Connor-Greene, P.A. (1993). From the laboratory to the headlines: Teaching critical evaluation of press reports of research. Teaching of Psychology, 20(3), 167-169.

This article describes an active learning exercise that helps undergraduate students differentiate between correlation and causation using media reports of research findings. Students are assigned to small groups, and each group is given a popular news article describing a research finding. The author provides questions each group must answer about its article (e.g., "What conclusion is implied?" and "Can this study prove its conclusion?"). The class then reconvenes to discuss the evaluations, and the instructor presents the research findings as they were originally reported in a scientific journal. Students are encouraged to compare the press report with the original research findings to identify omissions or misrepresentations made by the media, and to explore issues of causality within the framework of research design. Discussion can also cover recommendations for how media reporting of scientific research might avoid confusion or misrepresentation.

Haig, B.D. (2003). What is a spurious correlation? Understanding Statistics, 2(2), 125-132.

The author proposes a typology that organizes correlations by their presumed causes. At the first level are accidental and genuine correlations: accidental correlations have no causal framework, whereas genuine correlations have an underlying causal interpretation. Accidental correlations can be further described as either nonsense or spurious. Nonsense correlations are accidental correlations for which no sensible causal link can be provided; the author's example is that at one point in Britain's history, a statistician found a positive correlation between the birth rate and the number of storks in the country. Spurious correlations are accidental correlations that are not the result of the claimed cause; they can arise from sample selection bias, errors in measurement, or the choice of an improper correlation coefficient. Genuine correlations can be subdivided into direct and indirect correlations. When variable A directly causes variable B, the correlation is direct; one example is that heavy trucks cause damage to roads. Indirect correlations "are produced by common or intervening variables" (p. 129). The author points out that indirect correlations are often misleadingly labeled "spurious," but they are in fact genuine correlations and thus, by definition, not spurious. For example, vocabulary and math subtests are often correlated because both are causally linked to the individual's overall intelligence; the correlation between the subtests is indirect, but it is still genuine.
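To make the indirect-correlation case concrete, the following minimal Python sketch simulates a latent common cause driving two subtest scores. It is not from Haig's article; the latent-ability model and all parameter values are illustrative assumptions. The two scores correlate even though neither causes the other, and the correlation largely disappears once the common cause is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Latent common cause: a hypothetical "general ability" factor standing
# in for Haig's example of overall intelligence.
g = rng.normal(0, 1, n)

# Neither subtest score causes the other; both are driven by g plus noise.
vocab = 0.8 * g + rng.normal(0, 0.6, n)
math_ = 0.8 * g + rng.normal(0, 0.6, n)

# The raw correlation is genuine but indirect (common-cause)...
print("r(vocab, math):", np.corrcoef(vocab, math_)[0, 1])  # ~0.6

# ...and it largely vanishes after controlling for the common cause:
# correlate the residuals from regressing each score on g.
res_v = vocab - np.polyval(np.polyfit(g, vocab, 1), g)
res_m = math_ - np.polyval(np.polyfit(g, math_, 1), g)
print("r(vocab, math | g):", np.corrcoef(res_v, res_m)[0, 1])  # ~0
```

The residual check mirrors Haig's distinction: the raw correlation is real, but its causal interpretation runs through the common variable rather than between the two scores directly.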

Harcum, E.R. (1988). A classroom demonstration of the difference between correlation and causality. Perceptual and Motor Skills, 66, 801-802.

To demonstrate the difference between correlation and causality, the author constructed an apparatus with a hand crank and a flag. Hidden from the students, the instructor could simultaneously turn the crank and pull a string under the apparatus that caused the flag to wave, so that turning the crank appeared to make the flag wave. The instructor then asked the students to hypothesize about the relationship between the hand crank and the waving flag, which usually resulted in students describing a causal relationship. The instructor would then turn the apparatus around to expose the trick, pointing out that what the students had actually seen was a correlation, not a causal link, between the two variables. This demonstration serves as a starting point for a discussion of correlation and causality and of the need to evaluate the relationship between variables carefully before drawing conclusions in research.

Hatfield, J. (2006). Avoiding confusion surrounding the phrase "correlation does not imply causation." Teaching of Psychology, 33(1), 49-51.

The authors suggest that when instructors use the simple phrase "correlation does not imply causation," students infer that the limitation stems from the type of statistical analysis used. Instead, the authors argue, instruction should focus on the type of research design that allows conclusions to be drawn from statistical analysis. Understood this way, the phrase "correlation does not imply causation" points to the weakness of nonexperimental, as opposed to experimental, research designs: in an experimental design, a correlation coefficient can indeed imply causation if the independent variable has been manipulated. The authors make several teaching recommendations. First, use the term correlation in a statistical context, delineating it as correlation analysis or as the use of a correlation coefficient. Second, use the term association to describe the relationship between two variables. Third, use the term nonexperimental research, rather than correlational research, when variables have not been manipulated. Fourth, instead of "correlation does not imply causation," use the phrase "without manipulation, association does not imply causation" (p. 51).
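As a quick illustration of this point, here is a minimal Python sketch (my construction, not from the article; the variable names, confounder, and effect sizes are assumptions) contrasting a nonexperimental design, where a confounder produces association without causation, with a randomized design, where manipulation makes the correlation causally interpretable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Shared confounder: a background variable driving both X and Y.
z = rng.normal(0, 1, n)

# Nonexperimental setting: X is not manipulated and shares the
# confounder with Y, so X and Y correlate even though X has NO
# causal effect on Y here.
x_obs = z + rng.normal(0, 1, n)
y_obs = 2 * z + rng.normal(0, 1, n)
print("nonexperimental r(X, Y):", np.corrcoef(x_obs, y_obs)[0, 1])  # ~0.6

# Experimental setting: X is randomly assigned, so it is independent
# of z; we give it a genuine causal effect of +1 on Y. The observed
# correlation now reflects that manipulation.
x_exp = rng.integers(0, 2, n).astype(float)
y_exp = 2 * z + 1.0 * x_exp + rng.normal(0, 1, n)
print("experimental r(X, Y):", np.corrcoef(x_exp, y_exp)[0, 1])  # ~0.2
```

Because random assignment makes the treatment independent of the confounder, the nonzero correlation in the experimental setting reflects the causal effect, which is exactly the "without manipulation, association does not imply causation" framing.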

Jungwirth, E., & Dreyfus, A. (1992). After this, therefore because of this: One way of jumping to conclusions. Journal of Biological Education, 26(2), 139-142.

This article explores "post hoc" thinking: the common mistake of attributing causality between events simply because they occur in temporal sequence. When one event precedes another and the two could plausibly be connected, many people spontaneously assume a causal relationship without questioning alternative causes. For example, if a high school football team won all of its games one year and then lost most of its games the following year after hiring a new coach, most people would conclude that the new coach caused the losses. The authors tested this post hoc thinking in a sample of high school students, undergraduates, student teachers, and teachers under two conditions: (1) subjects read a text, were presented with conclusions, and were asked to accept or reject them; or (2) subjects read the same text but were explicitly asked to evaluate the validity of the conclusions. A majority of high school students, undergraduates, and student teachers spontaneously accepted the conclusions when not given explicit instructions to evaluate their validity; when explicitly asked to evaluate validity, the rate of spontaneous acceptance fell by a factor of two to three, depending on the sample. These results suggest that instructors should teach students about the post hoc fallacy, challenging them to consider both whether the sequence of events is causally connected and whether a third variable might account for the connection between the events.

Spears, R., Eiser, J.R., & Van Der Pligt, J. (1987). Further evidence for expectation-based illusory correlations. European Journal of Social Psychology, 17, 253-258.

In 1967, Loren J. Chapman coined the term "illusory correlation" for the erroneous connection individuals perceive between two variables on the basis of preconceived notions or beliefs, and many studies have since built on the phenomenon. This article offers an interesting look at how beliefs influence inferences, and it is valuable for understanding how underlying assumptions and beliefs can affect the way objective data are interpreted. Spears, Eiser, and Van Der Pligt presented 37 subjects with the scenario that a small city and a large city were each considering building a nuclear plant. Each subject was given a list of opinion statements made by residents of both towns, containing an equal number of pro- and anti-nuclear statements for each city, and was asked to decide which city had the higher proportion of residents opposed to the plant. As predicted, a significantly higher proportion of subjects stated that the small city had the higher proportion of residents who did not want the nuclear plant. This finding supports the concept of expectancy-based illusory correlation: subjects' preconceived ideas about how small-city residents would feel, compared with large-city residents, were erroneous given the presented facts. Instructors should keep in mind that every student brings assumptions, beliefs, and attitudes to the interpretation of experience, and should therefore work to help students evaluate the data presented objectively.
