In your career as an evaluator, programmer, or community change leader, you may have faced a question similar to one I grappled with time and again as an evaluator using a mixed methods approach to support our partners. Put simply, that question has often been some variation of: Can I do better than traditional statistical techniques when working in complex settings?

Specifically, can I find an analysis method that promotes utility in formative and developmental evaluation in a way that traditional methods do not? Is there a method that better attends to nuance and embraces complexity than the ones I have been using? In fact, in formative or developmental work, shouldn’t my method be designed to uncover the causes of future effects, rather than to prove impacts — that is, the effects of past causes?

Further, our values drive us to tactically engage stakeholders through our collection methods, to elevate the voices of those who are often not heard, and to make visible those who go unseen. In a typical mixed methods approach, however, our data analysis is predestined to take those voices and reduce them to means or regress them to averages, discarding the very nuance we strove to elevate.

In his post “Using Puzzling to Facilitate Exploration of Narrative,” hosted on his website, Bob Williams argued that we should approach “outlying data with the possibility of it being there for a reason” rather than by chance. I contend here that we should mirror this spirit in our quantitative analysis if we are to provide the data necessary to inform conversations that can ultimately change hearts and minds at multiple levels of a system.

We recently uncovered one possible answer to these questions while supporting a large statewide professional development initiative. There, we were purposefully investigating nuance, asking in what ways, and to what extent, multiple pathways influenced desired implementation across school buildings in different regions of the state.

We set out from the beginning knowing that each of our cases (school buildings) was a complex system. In particular, three major considerations were a sticking point for us as we contemplated methodology, and probably have been for you in the past as well:

  1. Equifinality: We expect that there will be more than one pathway to implementation.
  2. Conjuncturality: We expect that variables exert influence in combination rather than in isolation.
  3. Asymmetry: We do not expect that the simple absence of the variable(s) responsible for high implementation will lead to low implementation.

In our estimation, a parametric statistical analysis of our dataset would not have attended to equifinality, conjuncturality, or asymmetry. By applying a nonparametric, set-theoretic approach, however, we were able to investigate causal complexity and identify multiple pathways to outcomes. The solution we identified was Qualitative Comparative Analysis (QCA), a method based on set theory and logic rather than statistics (Thygeson, Peikes & Zutshi, 2013).

QCA is a case-oriented method that allows systematic, scientific comparison of cases as configurations of attributes and set memberships (Warren, Wistow & Bambra, 2014). It formalizes comparison as a means of interpreting data from a larger sample while retaining the “integrity of individual cases” (Krook, 2010). Perhaps most importantly for any utilization-focused evaluation, QCA allows the testing of theories of change to answer the question “What works best, why and under what circumstances” (Baptist & Befani, 2015) using replicable empirical analysis (Ragin, 2008).

QCA has traditionally been of the crisp-set variety, in which every condition is judged to be present or absent. In our experience evaluating educator change and education systems, fully “in” or “out” decisions are rarely advisable. Enter fuzzy-set QCA (fsQCA), developed by Ragin (2000, 2007) to allow sets in which elements are not limited to member or non-member status, but instead hold different degrees of calibrated membership (Wagemann & Schneider, 2007).
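To make “calibrated membership” concrete, here is a minimal Python sketch of Ragin’s direct method of calibration, which maps a raw score onto a fuzzy membership value using three researcher-chosen qualitative anchors. The anchor values and the 1–9 scale below are hypothetical illustrations, not data from our study.

```python
import math

def calibrate(raw, full_non, crossover, full):
    """Direct method of fuzzy-set calibration (after Ragin, 2008).

    Maps a raw score onto a membership value in [0, 1] using three
    qualitative anchors: full non-membership, the crossover point
    (maximum ambiguity, membership 0.5), and full membership.
    """
    # Scale the deviation from the crossover so the full-membership
    # anchor lands at log-odds +3 (membership ~0.95) and the full
    # non-membership anchor at log-odds -3 (membership ~0.05).
    if raw >= crossover:
        log_odds = 3.0 * (raw - crossover) / (full - crossover)
    else:
        log_odds = -3.0 * (crossover - raw) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical example: a 1-9 survey scale where 1 is clearly "out",
# 5 is maximally ambiguous, and 9 is clearly "in".
print(calibrate(5, 1, 5, 9))  # crossover -> 0.5
print(calibrate(8, 1, 5, 9))  # strongly, but not fully, "in"
```

The key design point is that the anchors are substantive judgments by the researcher — calibration encodes case knowledge, not just the distribution of the data.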

Through its mechanics, fsQCA attempts to decipher how a number of different causal configurations influence implementation fidelity. In this way, the technique accommodated extensive causal diversity (Krook, 2010) and attended to the equifinality among our cases.
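One way to see how fsQCA enumerates causal configurations: each case is assigned to the truth-table row (the corner of the property space) in which its membership exceeds 0.5, and equifinality appears as more than one populated row leading to the same outcome. A minimal sketch, using hypothetical condition names and membership scores rather than our study data:

```python
def truth_table_row(memberships):
    """Code a case into its truth-table configuration: a condition is
    coded 1 if the case's membership in it exceeds 0.5, else 0."""
    return tuple(1 if m > 0.5 else 0 for m in memberships)

# Hypothetical fuzzy memberships in three conditions
# (engagement, leadership, data use) for four buildings.
cases = {
    "building_A": (0.9, 0.7, 0.8),
    "building_B": (0.6, 0.2, 0.9),
    "building_C": (0.1, 0.3, 0.4),
    "building_D": (0.8, 0.6, 0.7),
}

# Group cases by configuration.
rows = {}
for name, memberships in cases.items():
    rows.setdefault(truth_table_row(memberships), []).append(name)

print(rows)
```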


fsQCA also considers causation using qualitative data and modal logic, through statements about what is necessarily the case and what is possibly the case (Stevenson, 2013). As a result, fuzzy-set QCA explores “conjunctural causation” across observed cases. In the words of Rihoux & Ragin (2008), constellations of factors influence results rather than variables acting alone.

At the same time, QCA logic addresses the asymmetry we expected around the absence of a positive outcome, because the negated outcome is analyzed separately. In fact, our results indicated conditions present in both high-implementation and low-implementation schools whose effects could only be explained by other intersecting conditions. These findings would have been missed had we regressed all variables to a mean and investigated them in isolation, or had we merely examined correlations between variables and outcomes.
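The asymmetry point can be made concrete. In fuzzy-set analysis, how well a condition fits as sufficient for an outcome is scored by a consistency measure, and the negated outcome (membership 1 − x) gets its own separate analysis: high consistency of a condition for the outcome does not imply high consistency of the condition’s absence for the outcome’s absence. A sketch with hypothetical membership scores:

```python
def sufficiency_consistency(condition, outcome):
    """Fuzzy-set consistency of 'condition is sufficient for outcome':
    the sum of min(x, y) over cases, divided by the sum of x
    (Ragin, 2008)."""
    return sum(min(x, y) for x, y in zip(condition, outcome)) / sum(condition)

def negate(memberships):
    """Fuzzy negation: membership in the complement set."""
    return [1.0 - m for m in memberships]

# Hypothetical memberships for five buildings.
leadership = [0.9, 0.8, 0.7, 0.2, 0.1]
high_impl  = [0.95, 0.7, 0.8, 0.4, 0.3]

# Sufficiency of leadership for high implementation...
print(sufficiency_consistency(leadership, high_impl))
# ...is a separate question from sufficiency of its absence for
# low implementation (the negated outcome) -- the scores differ.
print(sufficiency_consistency(negate(leadership), negate(high_impl)))
```

With these illustrative numbers the first consistency is noticeably higher than the second, which is exactly the asymmetry a symmetric correlational analysis would assume away.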

In the end, our analysis of a medium-sized set of 21 buildings, across 6 districts in the state, allowed us to bring an important message into conversations with our programming partners. In short, we found that while no conditions (variables are termed conditions in QCA) were necessary for either high or low levels of implementation, there was a union (the logical OR) of sufficient causal conditions for high and low levels of implementation that depended on the intersection (the logical AND) of membership in multiple conditions. One of the most important findings for our conversation was a pathway toward high implementation that relied on ALL 3 of the following factors:

  1. Project engagement
  2. Leadership and infrastructure
  3. Data collection and use
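In set-theoretic terms, a case’s membership in the three-condition pathway above is the fuzzy intersection (minimum) of its memberships in the individual conditions, and membership in the overall solution is the union (maximum) across pathways. A minimal sketch with hypothetical scores, not our study data:

```python
def intersection(*condition_sets):
    """Fuzzy AND: membership in a conjunction of conditions is the
    minimum of a case's memberships in each condition."""
    return [min(vals) for vals in zip(*condition_sets)]

def union(*condition_sets):
    """Fuzzy OR: membership in a union of pathways is the maximum."""
    return [max(vals) for vals in zip(*condition_sets)]

# Hypothetical memberships for three buildings in the three conditions.
engagement = [0.8, 0.9, 0.3]
leadership = [0.7, 0.4, 0.6]
data_use   = [0.9, 0.8, 0.2]

# Membership in the conjunctural pathway
# engagement AND leadership AND data_use:
pathway = intersection(engagement, leadership, data_use)
print(pathway)  # [0.7, 0.4, 0.2]
```

Note how the second building’s weak leadership score (0.4) caps its pathway membership despite strong engagement and data use — the conjunction, not any single condition, carries the explanation.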

The number of QCA applications has increased dramatically in the past few years (Rihoux et al., 2013), though there are still relatively few within the education sector. Since its introduction by Charles Ragin (1987), however, QCA has been modified, extended, and improved, making it increasingly applicable to empirical social science research (Wagemann & Schneider, 2007) and, we argue, to evaluation. Here are a few places to start.

  1. Better Evaluation has a great overview:
  2. Charles Ragin, who pioneered QCA and then fsQCA, houses information that he finds pertinent to the technique, and tools that he has developed to complete analysis at:
  3. Compasss hosts a bibliographical database where the user can sort through previous applications of fsQCA by areas (such as evaluation, education, or social sciences):


By Jason Altman, radical advocate for social change through evaluation in support of those aggressively pursuing equity, justice, and human rights.

Jason Altman is a Board Member of the TerraLuna Collaborative. He began his career in 2002, providing evaluation support to entities engaged in advocacy, policy support, and programming in K-12 public schools, extension services, and community development. In all project work, he emphasizes elevating the voices of the most affected, and often least empowered, stakeholders, as well as deeply considering nuance and context at the local level. His desire is that evaluative support can contribute to changing hearts and minds about what is important in local communities, and about the responsibility to serve local communities in the way that they choose to be served.

Baptist, C., & Befani, B. (2015, June). Qualitative Comparative Analysis – A Rigorous Qualitative Method for Assessing Impact. Coffey.

Krook, M. L. (2010). Women’s Representation in Parliament: A Qualitative Comparative Analysis. Political Studies, 58(5), 886–908.

Ragin, Charles C. (1987). The Comparative Method. Moving Beyond Qualitative and Quantitative Strategies. Berkeley/Los Angeles/London: University of California Press.

Ragin, Charles C. (2000). Fuzzy-Set Social Science. Chicago: University of Chicago Press.

Ragin, C. C. (2007). From fuzzy sets to crisp truth tables. Compasss Working Paper. Retrieved from

Ragin, C. C. (2008). Redesigning social inquiry: Fuzzy sets and beyond (Vol. 240). Chicago: University of Chicago Press.

Rihoux, B., Alamos-Concha, P., Bol, D., Marx, A., & Rezsohazy, I. (2013). From Niche to Mainstream Method? A Comprehensive Mapping of QCA Applications in Journal Articles from 1984 to 2011. Political Research Quarterly, 66(1), 175–184.

Rihoux, B., & Ragin, C. C. (2008). Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques (Vol. 51). Sage Publications.

Thygeson, N., Peikes, D., & Zutshi, A. (2013, February). Fuzzy-Set Qualitative Comparative Analysis: A Configurational Comparative Method to Identify Multiple Pathways to Improve Patient-Centered Medical Home Models. Rockville, MD: Agency for Healthcare Research and Quality.

Wagemann, C., & Schneider, C. Q. (2007). Standards of Good Practice in Qualitative Comparative Analysis (QCA) and Fuzzy-Sets.

Warren, J., Wistow, J., & Bambra, C. (2014). Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. Journal of Public Health, 36(1), 126–133.