Did law school teach me something about evaluation?
My teenage sons enjoy making fun of me for spending three years in law school and then getting a job that does not involve being a lawyer. I try to explain that you learn a lot in law school that is useful for other professions (how to write, think, and argue, for example), but they remain skeptical.
I was thinking about this recently because I was engaged in several conversations about program evaluation and how we can better understand whether our efforts are having the desired impact. As I wrote in my last article, our field is often wrongly and harshly judged because poverty rates in America have not declined in the last fifty years. For some, this is evidence that what we are doing is not working. What if poverty rates are the wrong benchmark to measure our field’s success? If so, how do we measure our efficacy and overall performance?
I certainly don’t have a simple or complete answer to these questions, nor does anyone else. At some level, they are impossible to answer, given that the community development field seeks to serve many different constituencies with different needs and goals. And many actions and programs may benefit some at the expense of others.
Given the complexity of the questions and the murkiness of the data, I would like to suggest that we consider applying some legal thinking to the problem. Perhaps the way our legal system resolves conflicts – in particular, civil lawsuits – offers some guidance.
First, in a civil case, the jury must make a decision based on a preponderance of the evidence, unlike a criminal case, in which the burden of proof is “beyond a reasonable doubt.” This strikes me as the right standard for community developers because the complexity of human life means there will always be some doubt about what caused certain outcomes. But that does not mean we can’t make judgments and decisions about what is or is not working, or about which programs should be funded or not funded. Most of the time, a preponderance of the evidence will point in one direction or the other; at a minimum, it can significantly reduce the guesswork.
Second, in a civil case, the attorney assembles as much evidence and data as possible to support his or her version of the case. This evidence can take many forms – physical evidence, testimony, statistical data, photos, and other sources of information. Evidence that is contradictory, confusing or incomplete does not automatically negate the case; what matters is the overall weight of the evidence. The same approach can apply in our context, where community developers and others can use stories, anecdotes, output data, surveys, population-level data, pictures, and other forms of information that begin to paint a picture of what is happening.
Third, and most important, a good trial attorney puts the evidence into a story that the jury can understand. Like all good stories, a lawyer’s story must have a beginning, middle and end, with a logical flow throughout that allows a jury to conclude, “yeah, that makes sense.” Without a story, the evidence is just noise and is likely to be unconvincing. Without a story, jurors will have a hard time reconciling contradictory evidence, and they will be unable to fill in the blanks if the evidence is incomplete. With a clear and logical story, jurors can make reasonable assumptions and inferences based on the inevitably incomplete evidentiary record. Jurors can also rely on scholarly research to help them understand the evidence and how it fits into the story. In fact, jurors can even apply common sense! All of this holds true in our context as well. We need a story (in evaluation jargon, a “theory of change” or “logic model”) about how we think the world operates and how we can change or alter its course. Data and evidence can then be used to see whether things are playing out in a manner consistent with the story, theory, or logic model we have developed – or not.
As we work with DHCD and our members to devise an evaluation system for the newly enacted Community Investment Tax Credit (CITC) program, I hope we can apply some of these ideas. CDCs should be able to articulate a theory or story about why they think their efforts are creating the desired change. They should have both quantitative and qualitative evidence that can shed light on whether they are having the desired impact.
We should not expect proof beyond a reasonable doubt, but rather a preponderance of the evidence as to whether our efforts are working. Gaps in data or knowledge should not be seen as evidence that programs are failing, nor should the complexity of human and community behavior deter us from seeking better understanding about the impact of our work.
In future articles, I will offer some thoughts on an overall theory of change for the CITC program. I will also reflect on what I have learned from collecting data for the GOALs initiative over the past 10 years and how those lessons might be applied in the CITC context. With a clear story or theory about our work and with more accurate and complete data, I am confident we will be able to effectively evaluate the work of our member organizations and the CITC program itself. Of course, this model does leave one major challenge: aggregating and comparing the work of different CDCs, given that each group may have a slightly different approach and set of goals.
I’m not sure I needed three years of law school to figure this out – most people know the basic elements of a civil trial. And I suspect my kids will always think that I wasted my time and money going to law school. But I continue to believe, based on a preponderance of the evidence, that I made the right decision. If nothing else, I met my wife during law school, and my kids ought to see the value in that – even if meeting someone was not part of the logic model that motivated me to enroll in the first place.