The arts are humanity’s developmental evaluation. Arts-Based Evaluation (ABE) collects, analyzes, and reports data through artistic methods such as photovoice, the visual matrix, verbatim theatre, and gesture harvesting. Coupled with applied neuroaesthetics, ABE creates ongoing, dynamic emotional connections to data that inspire positive change.
Developmental evaluation (DE) informs and supports the innovative and adaptive development of programs, projects, strategies, and initiatives in complex, dynamic environments. DE brings to innovation and adaptation the processes of asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data to support project, program, product, and organizational development with timely feedback.
Evaluation in the Education Sector
TerraLuna Collaborative frequently focuses on young people in school settings. We have worked with urban school districts and parochial schools striving to close the achievement gap, with state-wide educational initiatives focused on student learning and educator professional development, and with educational partners in Minnesota, Nebraska, New York, Louisiana, and Missouri.
Evaluation Capacity Building
Our team will train your staff in data collection, analysis, and reporting, and can develop evaluative training and programs tailored to produce results for your organization. Evaluation Capacity Building is part of every evaluation project TerraLuna leads.
Principles-Focused Evaluation
Principles provide guidance and direction for collaboration and systems change. They offer a framework for how people and organizations engage, interact, and make decisions in the pursuit of change, and they anchor the evaluative inquiry and change process. Organizations collaborating for large-scale systems change on complex social issues, such as economic inequality, public safety, health, and education, often benefit from principles-focused evaluation.
A Realist Perspective
The TerraLuna evaluation team engages clients and partners in a theory-driven approach to evaluation, grounded in realist philosophy, with specific attention to nuance and context. We live the argument that, to be useful for decision makers, evaluations need to identify ‘what works in which circumstances and for whom?’ rather than merely ‘does it work?’ (Pawson and Tilley, 1997). To this end, our evaluations answer questions like, "What works? For whom? In what respects? To what extent? In what contexts? And how?" As a result, our evaluations purposefully explore local complexity and unanticipated variables, the unpredictability of systemic implementation, and a 360-degree view of systems.
We believe that all evaluations should be developed for a specific use by specific users (Patton, 2008, 2011). Our team includes the users of the evaluation in every step of the process, from developing questions to selecting methods to deciding how and when to communicate findings, so that the evaluation is more likely to be both useful and used.