Tag Archives: Research

HIA Research: When is Qualitative Research Warranted?

[As research director at Human Impact Partners, Holly Avey spends a lot of time not just looking at our findings but thinking about how we conduct and use research. This is one in a series of blogs about the role of research in HIA.]

In my research blog published back in 2013, I asked: How far should we go with qualitative research in HIA? Is it just used when we don’t have enough quantitative data to answer our research question, or are there other reasons to consider incorporating qualitative research into your HIA work?

A national evaluation of HIAs conducted by the Environmental Protection Agency states that “stakeholder and community input lend themselves to qualitative analysis”, and beyond that, qualitative analysis is warranted in HIAs in the following circumstances: “lack of available scientific research, unavailability of local data, time limitations, limited resources, etc.” (p. 39). The implication is that qualitative data is warranted as a means of stakeholder input, but from a data perspective, you might only pursue qualitative data if you don’t have and/or can’t get quantitative data.

The authors further state, “most HIAs qualitatively characterized impacts; the use of quantitative analysis was lacking.” (p. 80). This statement implies that qualitative characterization of impacts is not sufficient or appropriate when quantitative data is available and the process allows it to be obtained.

This perspective is not unique to the EPA, or to the field of HIA. As Margarete J. Sandelowski states in her editorial Justifying Qualitative Research, quantitative research is often the default modality for the health sciences and is therefore introduced first. As a result, many health researchers are trained to think of the ways qualitative research is different from, less than, or deficient in comparison to quantitative research. For example, qualitative research may be described as “less mathematically precise and as producing findings that are not generalizable” when compared to quantitative research. Yet one never sees a comparison that assumes the qualitative research perspective and describes quantitative research as “less descriptively precise and attentive to context” and limited to generalizations based on objective (nomothetic) phenomena (p. 193).

Thus it is no surprise that one of the EPA’s evaluation review criteria assumed the quantitative default perspective: it was originally labeled “quantification of impact” and was changed to “characterization of impact” only after the full-scale review had been completed, to reflect the fact that impacts can be characterized both qualitatively and quantitatively (p. 12). Although the authors were trying to accommodate the multitude of research approaches that can be used in HIA, their quantitative default perspective still resulted in the summary statement that “quantification of impacts was lacking” (p. 80). How often might we similarly challenge health researchers to say “qualitative analysis was lacking”?

There may be two underlying assumptions here. One, that quantitative research is more rigorous and defensible in comparison to qualitative research, and two, that quantitative data is more compelling to decision-makers (note how both use the quantitative default perspective). To the first point, I would reiterate what I mentioned in my last blog, which is that qualitative and quantitative research are designed to answer different research questions. They are often based on different research philosophies (see my first research blog). They can both be executed in a manner that is rigorous or a manner that is sloppy. Rigor and defensibility are not the domains of one over the other, but many health researchers who are trained with the quantitative default perspective may assume a higher level of rigor with their default approach.

To the second point, what kind of data is more compelling to decision-makers? In an interesting article published in the American Journal of Public Health titled Understanding Evidence-Based Public Health, the authors argue that “there is no single, ‘best’ type of evidence” (p. 1578). They also note that “studies from the communication field have shown that the combination of [both qualitative and quantitative] evidence appears to have a stronger persuasive impact than either type of evidence alone” (p. 1577).

The authors go on to state, “Qualitative evidence can make use of the narrative form as a powerful means of influencing policy deliberations, setting priorities, and proposing policy solutions by telling persuasive stories that have an emotional hook and intuitive appeal. This often provides an anchor for statistical evidence…” (p. 1577). They suggest that quantitative evidence be incorporated within a compelling story created from the qualitative data to maximize the potential use of the data in the policy process. They also report that “in a survey of 292 US state policymakers, respondents expressed a strong preference for short, easy-to-digest data” (p. 1577). This finding may contradict what many quantitatively focused HIA researchers assume: that the more thorough and specific the data, the better.

While quantitative research can provide powerful data that informs our predictions with numerical specificity, turning to qualitative research does not mean sacrificing rigor. Qualitative research can inform new theories about connections to health that have not yet been studied. It can provide the localized context and community-specific perspectives that create a compelling narrative and lend relevance and meaning. And qualitative data collection and analysis processes can be powerful experiences for stakeholders when they are offered in a participatory fashion.

So, returning to my original question and the title of this blog – when is qualitative research warranted for HIAs? Hmmm. Now isn’t that a question you’d only ask if you were coming from the quantitative default perspective? We should stop dismissing qualitative research as less-than or if-needed. We need both in HIA.

Making a List, Checking it Twice

Although Health Impact Assessments are great tools for analyzing the health impacts of development and other urban planning initiatives, they can be long and resource-intensive. HIA is not always the best tool, especially when project proponents and public health practitioners become involved very early, or arrive very late, in the planning process. So among planning departments there has been a lot of recent interest in healthy development checklists as alternative approaches to data collection and analysis that ensure health and equity are considered in decision-making.

A healthy development checklist includes a list of indicators of health and well-being tied to development, and a set of associated criteria meant to evaluate proposed policies, plans, and projects. Many jurisdictions have created indicator systems – measures that can be used to capture the status of social and environmental conditions – but not all of these have criteria against which specific development proposals can be evaluated. So a checklist is an indicator system, but not all indicator systems are checklists.

The San Francisco Department of Public Health has applied a healthy development checklist to planning activities such as public housing redevelopment, pedestrian and bicycle planning, and several specific area plans. Examples of outcomes of these checklist applications include greater community involvement in plan development, potential mitigations and design strategies, and policy and implementation recommendations to better account for health.

Before jumping in, jurisdictions considering developing a checklist should also consider the process, benefits, and challenges of creating an indicator system. HIP produced this resource for the San Diego Association of Governments. It provides a review of several jurisdictions’ experiences with indicator systems and offers some approaches that may prove useful for those considering developing a healthy development checklist.

There are, however, additional considerations that checklist developers and users need to be aware of. In theory, a checklist can be a useful collaboration tool for public health and planning practitioners to ensure health goals are included in development, but keep the following questions in mind:

  • Who develops the checklist? Is the process collaborative? Which priorities are reflected in the checklist? The development of a checklist involves selecting domains of interest, ways of measuring these domains via indicators, determining the health and equity objectives that the indicators reflect, and criteria to gauge whether an indicator will meet stated objectives. Who is involved in the checklist decision-making process will influence the objectives and criteria expressed by the checklist – and ultimately, what value they have to the larger community.
  • Which domains and indicators should be included? To be inclusive, a range of perspectives should be sought. But ultimately, the priorities should reflect human needs – an underlying set of values determined by collaborators. Resist the urge to include the easiest indicators, or all indicators you can think of, in the checklist. Some of the most important things to include relate to what people need to live and be productive members of society – a living wage, education, and freedom from injustice and violence.
  • Will data be available for all the important indicators? There is a good chance that for at least some indicators, data will be hard to come by, which will affect your budget, process, and analysis or interpretation. A collaborative process can help to overcome this challenge because affected communities can be included in data collection and interpretation. Be creative and, wherever possible, make plans to accommodate additional data collection efforts for hard-to-reach but important indicators.
  • What is the process for applying the checklist to proposals? Who will be included? Will the community have input into the process of interpreting the data, deciding whether criteria and objectives are met, and determining what should be done if they are not? Make these decisions up front and include them in instructions that accompany the checklist – otherwise, its value as a tool will be limited.

Most importantly, uphold the values of HIA – equity, democracy, sustainable development, ethical use of evidence, and a comprehensive approach to health – in developing and applying a healthy development checklist. Using these values will help ensure that the checklist and its application advance not just the technical goal of considering health, but the ethical and just goal of creating healthy communities.

It’s Time for a Feminine Perspective

Last year I wrote a blog about the stress response, explaining how chronically stimulating the fight-or-flight response to stress can have a host of impacts on health.

But there’s another, less well known, response to stress. In the animal world, females are often responsible for caring for the young. When threatened, they may not be strong enough to fight off the aggressor, and fleeing would mean leaving their young vulnerable to attack. So the females will often group together to surround the young, creating power in numbers to overcome the threat. Researchers have labeled this strategy tend-and-befriend.

This concept should be considered when we assess the impacts of policies. Whether you respond with fight-or-flight, or with tend-and-befriend, each option is a response to what your brain perceives as a threat. If you think about it, many policies are created in response to what some groups consider to be threatening situations or conditions.

Consider school discipline. Disruptions in the classroom, fights between students, bullying, and other threats of violence are considered threats by many students and teachers. “Zero tolerance” policies that mandate suspension or expulsion of students who engage in these activities might be considered a fight-or-flight response: fighting back.

On the other hand, restorative justice policies, which focus on repairing the harm caused by misbehavior and getting students to take responsibility for their actions, might be considered a tend-and-befriend response. These policies suggest that the threat of a lack of discipline (and potential violence) should be addressed by tending to those who are perpetrating the violence, as well as those who have experienced it, and encouraging them to befriend each other. Research shows that this approach, and other trauma-informed approaches, are more effective, both in reducing the threats and in improving health and education outcomes.

Let’s look at another example. Human Impact Partners recently assessed the potential health impacts of California Proposition 47, which proposed reducing six low-level, non-serious offenses of drug possession and various forms of petty theft from felonies to misdemeanors and redirecting resources to services to treat the mental health and substance abuse problems underlying many of these offenses. Labeling these behaviors as felonies is often seen as “tough on crime” – fighting the threat of criminal activity.

But providing treatment instead of incarceration tends to the needs of those with mental health and substance abuse problems rather than harshly criminalizing them. Again, research shows that providing mental health and substance abuse services is more effective in reducing crime, as well as improving physical and mental health outcomes.

There are many other examples. For instance, it often costs less and is more effective to take care of people by providing paid sick days, protecting against wage theft, and keeping families intact – tending to their needs – than to deny access to resources or enforce harsh immigration policies and then deal with the domino effect of more expensive public resources required afterward.

Tend-and-befriend policies, reflecting a traditionally feminine perspective, can be as effective as, if not more effective than, the fight-or-flight approach. If we’re truly interested in improving health outcomes, we should look to them more often.

Public Health Research and Advocacy, Not Mutually Exclusive

This week’s post was originally published in Scienceblogs.com and was written by Celeste Monforton, DrPH, MPH of George Washington University School of Public Health & Health Services on November 21, 2013.

My public health colleague, Adam Finkel, ScD, MPP, this month received the 2013 Alumni Leadership award from the Harvard School of Public Health (HSPH), as part of the school’s 100th birthday celebration. Finkel and I were co-workers in the mid-1990s at the Occupational Safety and Health Administration, where he was the Director of the Office of Health Standards. I learned more from him about risk analysis and risk assessment than in any semester-long course. Why? Because agency risk assessments are not academic exercises when they are used to inform regulatory decisions…. Click here to read the full post

HIA Research: What’s the Right Approach for Your Question?

[As research director at Human Impact Partners, Holly Avey spends a lot of time not just looking at our findings but thinking about how we conduct and use research. This is one in a series of blogs about the role of research in HIA.]

Last week I discussed philosophies of research, and how different people might see the same information as either an appropriate source of data or a source of bias. This week, let’s think about different approaches to answering research questions. While your philosophy influences how you think about research, the questions you ask influence how you collect and analyze your data.

A document from the National Institutes of Health (NIH) explains the difference between quantitative and qualitative approaches to research. When people have strong reactions about the pros and cons of these approaches, I believe it stems from a difference in their underlying philosophy of research.

Quantitative research uses numeric data that can be analyzed statistically to assess relationships among variables and understand cause and effect.

Qualitative research uses interviews, observations, and reviews of documents (among other methods) to understand the context and meaning of a situation.

So which is right for HIA? Our personal philosophy of research will guide how we think about this initially, but the next question should be: What kinds of questions do we want our research to answer?

First, what is the purpose of HIA? In 2001, the Merseyside Guidelines for Health Impact Assessment were published for HIA practitioners in the UK. They state that the aims of HIA are:

  • “to assess the potential health impacts, both positive and negative, of projects, programmes and policies
  • to improve the quality of public policy decision making through recommendations to enhance predicted positive health impacts and minimise negative ones”

Based on this thinking, your overarching research questions might be: “What are the relationships between the pending decision and any potential health impacts?” “Is the pending decision likely to cause any health effects?” The quantitative approach is good for assessing relationships among variables and cause and effect, so you should use a quantitative approach, right? But what happens when you don’t have the quantitative data to answer those questions? HIAs often focus on neighborhood or local-level decisions, where the available quantitative data is significantly limited. In these cases, a combination of methods may be the best bet.

Let’s look back at that NIH document, which defines this combination of methods in this way:

Mixed methods research “involves the intentional collection of both quantitative and qualitative data and the combination of the strengths of each to answer research questions.” (p. 4-5).

One example of combining quantitative and qualitative data is a story often told by Aaron Wernham of the Health Impact Project. He tells of a small community where a natural resource extraction processing facility was operating. Quantitative air quality data for the area did not show any significant violations of air quality standards after the facility began operating, and asthma rates tracked by the state also didn’t show an increase. But community members consistently reported that they perceived asthma rates to be higher. During the HIA, community members offered testimony at public meetings, which was tracked by the HIA team. One community member specified that asthma got worse when certain conditions aligned: when the facility flared gas under certain weather conditions, with the wind directed toward the village. Community members also testified that the air quality data would be unlikely to detect emissions under these conditions because of the location of the area’s air quality monitor.

In this case, quantitative data was available but came from a single monitor, which provided a limited perspective on conditions in the area. Qualitative data from initial discussions with the community was also limited: it captured general perceptions without specificity. Additional qualitative data from the testimony provided the specific context that allowed the HIA team to interpret some of the quantitative data from a new perspective and to understand the discrepancies between the two types of data. Combining the two approaches let the HIA team explore a new causal pathway for potential health impacts, build a more complete and accurate picture, and identify data gaps that were limiting the ability to address community concerns. Ultimately, this contributed to a recommendation, adopted by the decision-makers as a formal requirement, for more specific air quality modeling, including modeling near potentially affected communities.

How far should we go with qualitative research in HIA? Is it just used when we don’t have enough quantitative data to answer our research question, or are there other reasons to consider incorporating qualitative research into your HIA work? That’s the next research blog topic.

Bias in HIA Research – What is Your Research Philosophy?

[As research director at Human Impact Partners, Holly Avey spends a lot of time not just looking at our findings but thinking about how we conduct and use research. This is the first in a series of blogs about the role of research in HIA.]

A persistent discussion in the HIA world is bias: Are findings biased if they are too heavily influenced by the participation of members of the community being studied? Although HIA practitioners in North America have concluded that input from stakeholders is an essential part of the process – guides have been written about how to engage stakeholders – there is still a tension in the field about how to do this and how it might impact the quality of the research. The National Collaborating Centre for Healthy Public Policy has summarized this tension in two fact sheets that discuss the risks and obstacles of citizen participation and the principal reasons to support it for HIA.

A core argument about bias is that if you involve community members in research, you’re just getting their opinions, and opinions aren’t the same as scientific fact (as Celia Harris discussed in her blog about the recent National HIA Meeting). You’re muddying the waters; you’re diluting or contaminating the scientific validity of the process if you include unsubstantiated opinion as part of the data in your final report. Interviews aren’t the same as air quality data.

That’s clearly true – interviews aren’t the same as air quality data. But is one a source of data and the other a source of bias? Michael Crotty, author of The Foundations of Social Research: Meaning and Perspective in the Research Process, says your answer might depend on your philosophy about research as a whole. I’ve summarized his argument to show that the perceived difference between interviews and statistical analysis of data might really be a reflection of how different researchers see the world in very different ways.

Four Basic Elements of Research1

Your research philosophy: what knowledge is and how to get it
  • Example 1 (often associated with quantitative research) – Objectivism: things exist in an objective reality. The way they exist does not have anything to do with the way we think about them or experience them. Good research can measure this objective truth.
  • Example 2 (often associated with qualitative research) – Constructionism: everything is relative and depends on context. The way things are is just a construct of the way we make sense of them. It’s just our own personal theory. Research needs to capture this context and personal meaning.

Your theoretical perspective: how you explain what things mean through research
  • Example 1 – Positivism: information is not scientific unless it can be proven right or wrong by observation and experiment.
  • Example 2 – Critical inquiry: reality is constantly changing. Every action changes the context. We must constantly be critical of our assumptions when we do research.

Methodology: how you design your research
  • Example 1 – Experimental research: start with a general scientific theory of how things work, then propose an explanation for how something more specific works, then try to prove your idea wrong (if you can’t prove it wrong, we’ll assume it’s right).
  • Example 2 – Action research: design your research so that the data that is collected and analyzed can be used for problem-solving actions. This should be a collaborative process that allows you to understand the context of the information collected and how it can be used.

Methods: the toolbox of tools you use to ask and answer your questions
  • Example 1 – Statistical analysis
  • Example 2 – Interviews

1 Adapted from Crotty, M. 1998. The Foundations of Social Research: Meaning and Perspective in the Research Process. Sage Publications, London.

If you think things exist in an objective reality and the purpose of research is to measure this objective truth, information from interviews might indeed seem biased. But if you believe that everything is relative and depends on the context and the meaning of the events and experiences, interviews might seem like very valuable data.

My personal research philosophy – and HIP’s – leans more toward constructionism – the context and personal meaning influence the realities people experience. Does this mean that we don’t see the value in experimental research and quantitative data like air quality monitoring data? Not at all. We agree that this information is important as well. What it does mean is that the research we do will likely be a combination of these types of data, whenever possible. It means we think context is extremely important, that we need to pay close attention to our assumptions, and that our role as practitioners is to use research and the research process to inform decisions in a way that improves health and reduces inequities.

Tune in next week for another blog on research, where I’ll muse about how to come up with the right research questions to match the purpose of your HIA.