[As research director at Human Impact Partners, Holly Avey spends a lot of time not just looking at our findings but thinking about how we conduct and use research. This is one in a series of blogs about the role of research in HIA.]
In my research blog published back in 2013, I asked: How far should we go with qualitative research in HIA? Is it just used when we don’t have enough quantitative data to answer our research question, or are there other reasons to consider incorporating qualitative research into your HIA work?
A national evaluation of HIAs conducted by the Environmental Protection Agency states that “stakeholder and community input lend themselves to qualitative analysis”, and beyond that, qualitative analysis is warranted in HIAs in the following circumstances: “lack of available scientific research, unavailability of local data, time limitations, limited resources, etc.” (p. 39). The implication is that qualitative data is warranted as a means of stakeholder input, but from a data perspective, you might only pursue qualitative data if you don’t have and/or can’t get quantitative data.
The authors further state, “most HIAs qualitatively characterized impacts; the use of quantitative analysis was lacking.” (p. 80). This statement implies that qualitative characterization of impacts is not sufficient or appropriate when quantitative data is available and the process allows it to be obtained.
This perspective is not unique to the EPA, or to the field of HIA. As Margarete J. Sandelowski states in her editorial Justifying Qualitative Research, quantitative research is often the default modality for the health sciences and is therefore introduced first. As a result, many health researchers are trained to think of the ways qualitative research is different from, less than, or deficient in comparison to quantitative research. For example, qualitative research may be described as “less mathematically precise and as producing findings that are not generalizable” when compared to quantitative research. By contrast, one never sees a comparison that assumes the qualitative research perspective and describes quantitative research as “less descriptively precise and attentive to context” and limited to generalizations based on objective (nomothetic) phenomena (p. 193).
Thus it is no surprise that one of the EPA’s evaluation review criteria assumed the quantitative default perspective. The criterion was originally labeled “quantification of impact” but was changed to “characterization of impact” after the full-scale review had been completed, to reflect the fact that impacts can be characterized both qualitatively and quantitatively (p. 12). Although the authors were trying to accommodate the multitude of research approaches that can be used in HIA, their quantitative default perspective still resulted in the summary statement that “quantification of impacts was lacking” (p. 80). How often might we similarly challenge health researchers by saying “qualitative analysis was lacking”?
There may be two underlying assumptions here. One, that quantitative research is more rigorous and defensible in comparison to qualitative research, and two, that quantitative data is more compelling to decision-makers (note how both use the quantitative default perspective). To the first point, I would reiterate what I mentioned in my last blog, which is that qualitative and quantitative research are designed to answer different research questions. They are often based on different research philosophies (see my first research blog). They can both be executed in a manner that is rigorous or a manner that is sloppy. Rigor and defensibility are not the domains of one over the other, but many health researchers who are trained with the quantitative default perspective may assume a higher level of rigor with their default approach.
To the second point, what kind of data is more compelling to decision-makers? Well, in an interesting article published in the American Journal of Public Health titled Understanding Evidence-Based Public Health, the authors argue that “there is no single, ‘best’ type of evidence” (p. 1578). They also note that “studies from the communication field have shown that the combination of [both qualitative and quantitative] evidence appears to have a stronger persuasive impact than either type of evidence alone” (p. 1577).
The authors go on to state, “Qualitative evidence can make use of the narrative form as a powerful means of influencing policy deliberations, setting priorities, and proposing policy solutions by telling persuasive stories that have an emotional hook and intuitive appeal. This often provides an anchor for statistical evidence…” (p. 1577). They suggest that quantitative evidence be embedded within a compelling story created from the qualitative data, to maximize the potential use of the data in the policy process. They also report that “in a survey of 292 US state policymakers, respondents expressed a strong preference for short, easy-to-digest data” (p. 1577). This finding may contradict the assumption of many quantitatively focused HIA researchers that the more thorough and specific the data, the better.
While quantitative research can provide powerful data that informs our predictions with numerical specificity, choosing qualitative research does not mean sacrificing research rigor. Qualitative research can inform new theories about connections to health that have not yet been studied. It can provide the localized context and community-specific perspectives that create a compelling narrative and lend relevance and meaning. And qualitative data collection and analysis processes can be powerful experiences for stakeholders when they are offered in a participatory fashion.
So, returning to my original question and the title of this blog – when is qualitative research warranted for HIAs? Hmmm. Now isn’t that a question you’d only ask if you were coming from the quantitative default perspective? We should stop dismissing qualitative research as less-than or only-if-needed. We need both in HIA.