navigating the data swamp

Last week, I joined a session of Charles Martin-Shields’ course on technology and conflict response. The course is offered by TechChange, a Washington, DC-based start-up that develops education resources for peacebuilding practitioners, technologists, and policymakers. The company’s competitive edge is its distance-learning platform, and the course boasted participants from several countries, working on the ground, at organizational headquarters, and somewhere in between. The discussion built on my recent back-and-forth with Charles and Christopher Neu, a TechChange operations guru, about navigating data on violent conflict and mass atrocities.

The participants’ questions provoked a constructive discussion about the mass atrocity data “swamp,” its information and security risks, and how practitioners can navigate both. Participants agreed that the best information exists between big, computed data and small, user-generated data. This agreement, however, opens new dilemmas: how peacebuilding organizations balance the moral act of “bearing witness” with the no-less-moral act of protecting their local officers and sources; and how analysts assess conflict amid small amounts of low-quality information. Below, I summarize my initial thoughts on these two dilemmas, based on the TechChange discussion and relevant reading since.

Proprietary data aren’t private, and open data aren’t public: Better data, big or small, can only emerge from stronger computer and human information networks. For big data practitioners, this means expanding systems like the recently suspended Global Database of Events, Language, and Tone (GDELT), which I discussed in my last post as an example of useful machine-coded datasets. Before the platform’s suspension, GDELT’s data scientists viewed–and may still view–its future in these terms: the platform’s value grows as it acquires more reliable and diverse news sources.

For small data practitioners, a “network” refers to the human relationships, bolstered by communication technologies, that transfer information from local sources to global headquarters. This information, about where a conflict occurs, which populations are vulnerable, and what their needs are, often informs the distribution of peacebuilding resources. Additionally, organizations carry the amorphous public responsibility of “bearing witness” to ongoing abuses. These dual hats create internal contradictions between an organization’s public face and its private needs.

Many practitioners, however, treat this dilemma as an unresolvable dichotomy when it is, more accurately, a give-and-take. Peacebuilding data are effective when an organization shares them–within the organization, but also with others. Small data are the property of an organization and its sources, and not the private confidence of a tiny group of people. Peacebuilding organizations should weigh the burden of risky information, but also grant their local sources the agency to shape, if not determine, how the data are used.

The best analysis is transparent, not definitive: Analysis is never an independent affair. An analyst’s client may be a practitioner, a policymaker, another, more senior analyst, or the general public, but the relationship is consistent: the client clarifies expectations that an analyst uses to determine priorities, hone datasets, and frame conclusions. In our discussion, several practitioners lamented their clients’ demand for certain assessments amid uncertain data. I cited a scene from Zero Dark Thirty, Kathryn Bigelow’s dramatic rendering of the manhunt for Al Qaeda chief Osama bin Laden, that resounds with my own brief exposure to the U.S. intelligence community. In the scene, then-CIA director Leon Panetta asks a cohort of senior analysts whether Osama bin Laden is located in a compound in Abbottabad, Pakistan, where Navy SEAL Team 6 later killed him. The CIA’s deputy director, presumably for intelligence, suggests that bin Laden is more than likely located at the compound; Panetta, visibly disgruntled, presses his junior colleague for a more confident response. The deputy director sighs, “We don’t deal in certainty, we deal in probability. I’d say there’s a sixty percent probability he’s there.”

As with the information-security problem, peacebuilding’s experience mirrors its national-security counterpart. If an analyst says, “according to qualitative and quantitative tools, a conflict in a local village in northern Kenya will probably emerge over the next six months,” the client–a UN agency or a private foundation or a humanitarian aid group–may request a more definitive response. In these circumstances, given poor-quality data, the analyst’s best option is transparency–about the quantity, quality, and limited reach of the data and the conclusions drawn from them. Better peacebuilding emerges from an acceptance of uncertainty, rather than the creation of certainty where it cannot exist.
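To make that concrete, here is a minimal sketch, with entirely hypothetical incident counts, of what “transparent” probabilistic reporting might look like in Python: a forecast expressed as a probability with an explicit uncertainty interval, rather than a point claim.

```python
# A minimal sketch of transparent forecasting: report a probability
# together with an uncertainty interval. The incident counts below
# are hypothetical, not real conflict data.
from scipy import stats

# Hypothetical base rate: in 9 of 24 comparable six-month windows,
# observers recorded an outbreak of localized violence.
outbreaks, windows = 9, 24

# Beta posterior under a uniform Beta(1, 1) prior.
posterior = stats.beta(1 + outbreaks, 1 + (windows - outbreaks))

mean = posterior.mean()
lo, hi = posterior.interval(0.9)  # central 90% credible interval
print(f"P(outbreak in next window) ~ {mean:.0%} "
      f"(90% credible interval: {lo:.0%}-{hi:.0%})")
```

The exact model matters less than the habit: the interval tells the client how little the data support the headline number.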

the murky swamp of mass atrocity data

Evangelists of “big data,” the possibility of computed knowledge at unprecedented scale, often describe our contemporary world as a “sea” of information. Data scientists have more and better knowledge of how humans behave, how they interact, how they cooperate, and how they conflict, generated as much by our own actions–through the Internet, mostly–as by those who surveil us. For some problems, the dataset is a near-perfect match. Commercial airlines use “frequent flyer” programs to track when their customers fly, and to where; electoral strategists manipulate marketing information to infer norms, cultural preferences, and political opinions among likely voters. Amid an unfathomable sea, these data are intimate and human. Sgt. Pepper’s “day in the life,” once framed by a cup of coffee, is now an ever-present data-stream. We wake up, we create data; we go to the bodega, we create data; we set up shop in a six-by-six cubicle. We create data.

Violent conflict, especially on a mass scale, is never so neat. Acts of violence don’t create data, but rather destroy them. Both local and global information economies suffer during conflict, as warring rumors proliferate and trickle into the exchange of information–knowledge, data–beyond a community’s borders. Observers create complex categories to simplify events, and to (barely) fathom violence as it scales and fragments and coheres and collapses. A “mass atrocity” is a fiction; an analytically and morally useful one, but a fiction nonetheless. We expect system to follow scale, but it rarely does. So rarely, in fact, that observers identify little more than one hundred mass atrocity events since the end of the cataclysmic Second World War. One hundred is a large number, but it’s a negligible fraction of the individual acts of violence that constitute those events.

Mass atrocity data have improved in fits and starts. The Global Database of Events, Language, and Tone (GDELT), a massive open-source computing effort, uses an automated, iterative data-stream to collect events. GDELT ingests information, imperfectly, to create a more perfect portrait of where events, including violence, globally occur. John Beieler, a political science PhD student at Penn State, recently experimented with the GDELT dataset of violent events in the Central African Republic (CAR) and South Sudan, both of which are embroiled in ongoing mass atrocities. Beieler used the dataset to assess the likelihood of future mass atrocities in either country, but came up short. Local and international media sources feature both conflicts–gruesome portraits grace A1, and prominent global officials publish opinion pieces to “bear witness” to CAR and South Sudan’s respective horrors. But media publications cover these events as “mass atrocities,” and not as a sequential series of individual violent events. In a coda, Beieler contrasts this with Egypt, which, because of a glut of foreign journalism, the availability of citizen reporting tools like Twitter, and robust foreign diplomatic engagement, appears as both “mass repression” and a sequential series. Our understanding of a conflict’s progression through time–what it is, as a global event–determines its media coverage, and therefore its usefulness as a big data subject.
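For the mechanically curious, here is a rough sketch of the kind of query Beieler’s experiment implies, run against a single GDELT 1.0 daily export with pandas. The column positions, CAMEO root codes, and FIPS country codes are my readings of the public GDELT codebook, not details from Beieler’s code, and the file name is a placeholder.

```python
import pandas as pd

# Assumed column positions per the GDELT 1.0 daily-export codebook:
# 1 = SQLDATE, 28 = EventRootCode, 51 = ActionGeo_CountryCode (FIPS).
events = pd.read_csv(
    "20140101.export.CSV",   # placeholder: one daily export file
    sep="\t", header=None, dtype=str, usecols=[1, 28, 51],
)
events.columns = ["date", "root_code", "country"]

# CAMEO root codes 18 (assault), 19 (fight), and 20 (unconventional
# mass violence) are a common shorthand for violent events.
violent = events[events["root_code"].isin(["18", "19", "20"])]

# FIPS codes, per the codebook: "CT" = Central African Republic,
# "OD" = South Sudan.
daily = (violent[violent["country"].isin(["CT", "OD"])]
         .groupby(["country", "date"])
         .size())
print(daily.head())
```

The counts this produces are only as good as the underlying coverage, which is exactly Beieler’s point: sparse reporting in, sparse event series out.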

The convergence of scarce media, knowledge, and data is not unique to massive datasets, nor to time-bounded events. The information that local aid groups use to assist conflict-affected communities is small by comparison. Small data are complementary, not subordinate, to their massive counterparts. Humanitarian networks, mediators, and civil society organizations want to know where violence occurred and, consequently, where vulnerabilities persist. While time is a useful data point, location is essential. Without location, aid groups won’t know where to go or how far to extend their operations. As Christopher Neu, a peacebuilding technologist, observes, the usefulness of public small data rests on an ethical quandary: In a live conflict, do humanitarian small data expose the same vulnerabilities they aim to fix? Where GDELT’s big data are open-source, small data are inherently proprietary–they’re generated by a user, one who sometimes risks physical safety to report a violent event’s location. Proximity, so often praised among peacebuilders as the nuance big data lack, also muddies the data pond it aspires to clarify.

the toolbox’s dilemma, or why mass atrocity response advocates should think more about power

After many trials, tribulations, tabulations, and Chavez-esque levels of coffee consumption, I’ve finished my undergraduate thesis. I’ll make a couple of tweaks to the typological analysis tonight, in order to prepare for my 15 April due date, but I feel comfortable sharing the nearly-completed product. I’ve spent plenty of time blogging about the thesis, as well as distracting myself from it, in this space, so I figured I’d share:

Why do some responses to mass atrocities succeed in mitigating violence against civilians, while others fail? The nascent academic and policy literature on mass atrocity response emphasizes the varied functions of particular policy tools, stressing the dividends for mass atrocity response of economic sanctions, preventive diplomacy, or third-party military intervention. In this paper, I argue for a greater role for power relationships, as opposed to tools of statecraft, in determining response outcomes. I posit a middle-range theory of relational power—an actor’s ability to influence organizational preferences, processes, and decision-making through social, interactive means—in mass atrocity response, through which I identify a taxonomy of social relationships between respondent actors and mass atrocity perpetrators. Using this theoretical framework, I conduct a typological analysis of mass atrocity response events between 1991 and 2011, and apply the social taxonomy to two case studies, in Sri Lanka and Darfur, Sudan. The case study analysis offers some support for the hypothesis that a perpetrator’s patron—an external actor, with overlapping social preferences—is the primary determinant of mass atrocity response outcomes.

If this seems like your run-of-the-mill, “undergrad-discovers-Wendt” analysis, I view it a bit differently: the “logic of consequences,” which undergirds realist interpretations of power in the international system, is important, but its importance emerges from the social implications of consequentialism, as well as their practical effects. tl;dr: tools of statecraft are important, but their social context bears further explication.

Far from a finished product, my thesis represents the contemporary consolidation of my thoughts about mass atrocity response, which I fully expect to evolve, in keeping with the field’s evolution. I enjoyed the process of qualitative research design, but it certainly has its limitations, as I discuss in the paper–with forty-eight mass atrocity events, limited capacities for statistical inference, and limited expertise in anything but a handful of case studies, my relational-power theory carries marginal weight. Over the next few years, I’d like to expand my research, and to develop the theory. How? Here are some ideas:

  • Add a bit of quantitative grist: My history of “successful” Stata regression analyses amounts to a bivariate assessment of ethnic fractionalization and conflict, and it wasn’t statistically significant. Needless to say, my quantitative skills aren’t what they should be. My typological table, in progress until twenty-four hours from now, should underscore a few avenues for quantitative measurement, which would disaggregate my measurement of relational power in mass atrocity environments. Take, for example, the friendly exercise of compulsory power, which might occur in the context of trade incentives for mass atrocity mitigation. Without statistical inference, it’s difficult to identify the correlation between a friendly actor’s trade patterns, a proxy for compulsory power, and response outcomes. As far as I’m aware, the quantitative models needn’t be complex, but a basic, applied understanding might be useful (see the regression sketch after this list).
  • Visualize the qualitative data: As I’ve noted in a previous post, the blogosphere’s preference for visually appealing, easily digestible data poses a (surmountable) challenge to qualitative researchers. Some platforms, like the World Peace Foundation’s Reinventing Peace blog, provide excellent case-study analysis, but three-thousand-word, no-visual posts about mass atrocity response have limited reach. In addition to the quantitative skills, a bit of coding might be useful, at least enough to design a rudimentary, quasi-interactive map of global mass atrocity events and the vectors of local, national, international, and non-governmental response. I would scale up my two case studies, Sri Lanka’s civil war and Darfur’s conflict, to forty-eight, in keeping with my typological analysis. In the meantime, a Google Maps visualization, sans chronological analysis, might do the trick (see the map sketch after this list).
  • Disaggregate the mass atrocity events: As I note in my paper’s conclusion, my rudimentary, middle-range theory of relational power in mass atrocity response opens several additional dilemmas: within relational power, broadly speaking, do different forms of power yield different response outcomes, as a result of different kinds of social interaction? Probably. Does the exercise of military force, which I describe as an idiosyncratic tool of foreign-policy statecraft, differ in its impact from other applications of relational power? Again, probably. But the most interesting question to me, which I didn’t delve into in much detail, concerns the sub-event implications of relational power. As I observe, most discussions of mass atrocity response refer to meta-events, or aggregations of political conflict between various parties: consider, for example, the “Rwandan genocide,” which we remember as the genocide per se, rather than the escalation of local grievances, local organization, and local violence. In some contexts, this tendency is understandable, but in others, as Severine Autesserre observes, it’s less applicable: the Democratic Republic of the Congo, Kenya’s post-election violence, and Pakistan’s micro-insurgencies, in particular, come to mind. In keeping with my emphasis on mass atrocity events as evolutionary, adaptive phenomena, I’d like to get a better sense of how we might analyze the atrocities’ component parts through a similar framework.
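On the first idea, here is a minimal sketch of the bivariate check described above, in Python rather than Stata. The CSV, the column names (mitigation_success, patron_trade_share), and the coding scheme are hypothetical stand-ins, not part of my actual dataset.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per mass atrocity response event, with a
# binary outcome and a trade-based proxy for friendly compulsory power.
df = pd.read_csv("response_events.csv")   # placeholder file

y = df["mitigation_success"]              # hypothetical 0/1 outcome
X = sm.add_constant(df[["patron_trade_share"]])  # hypothetical proxy

# With ~48 events, expect wide confidence intervals; the point is to
# check the sign and rough size of the association, not to "prove" it.
model = sm.Logit(y, X).fit()
print(model.summary())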
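And on the second, one way to get the rudimentary map is folium, a Python wrapper for Leaflet, rather than Google Maps. The two event rows are illustrative placeholders; a real version would load all forty-eight cases.

```python
import folium

# Illustrative placeholder events: (label, latitude, longitude).
events = [
    ("Sri Lanka: LTTE counterinsurgency", 8.0, 80.7),
    ("Darfur, Sudan", 13.5, 24.9),
]

# Center roughly between the two regions; zoom out to a world view.
m = folium.Map(location=[10, 50], zoom_start=3)
for label, lat, lon in events:
    folium.CircleMarker(location=[lat, lon], radius=6,
                        popup=label).add_to(m)

m.save("atrocity_response_map.html")  # open in any browser
```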

What else should I be looking at? Keep me posted, and I’ll try to do the same.

wherefore qualitative blogging?

This being my final semester, I’m currently in the throes of my undergraduate thesis. While I’ve previously described my project, it’s changed enough to merit a revised summary. By my count, one of the central failings of the mass atrocity response literature is its failure to integrate power as a characteristic of qualitative analysis. Tools-based analyses are nearly ubiquitous among both academic and policy studies of mass atrocity response, while power gets short shrift. Consider, for example, the interpretive framework of Samantha Power’s A Problem from Hell, which is widely perceived as the literature’s formative text: Power dismisses the U.S. foreign policy bureaucracy’s willful negligence towards postwar atrocities as in keeping with Albert Hirschman’s “futility” thesis, despite the inconclusiveness of Power’s mass atrocity response counterfactual. Tools-based analyses often portray response mechanisms as inherently fungible (see, e.g., Rory Stewart and Gerard Knaus’ Can Intervention Work?), while underemphasizing the context in which mass atrocity response occurs.

For my thesis, I’m re-centering the varied functions of power in select case studies of mass atrocity response. As Michael Barnett and Raymond Duvall’s taxonomy of power demonstrates, power operates in different ways, through different actors, and with different consequences. I’ve adopted a “relational” approach to power, which synthesizes Barnett and Duvall’s power taxonomy with Joseph Nye’s three “faces”. I’ve defined relational power as “an actor’s ability to influence organizational preferences, processes, and decision-making through interactive means.” tl;dr: power is a social phenomenon, and operates through relationships between local, national, international, and non-governmental actors.

Using Jay Ulfelder’s 110-n dataset of state-sponsored mass killing between 1945 and 2011, and Kate Cronin-Furman’s 80-n dataset of mass atrocities between 1970 and 2010, I’ve extracted a case population of mass atrocity response events between 1991 and 2011 (at time of writing, forthcoming). Due to time constraints, I’m only writing two case studies: currently, Sudan’s Darfur conflict, and Sri Lanka’s counterinsurgency against the Liberation Tigers of Tamil Eelam (LTTE), although these selections are subject to change. With that said, I’m enjoying my interpretive framework, and it appears to bear out as a descriptive mechanism, at least in my initial observations. And, more relevantly, I’m enjoying the process of qualitative case research. To leave Ulfelder and Cronin-Furman’s respective datasets to the perils of logistic regression analysis, therefore, seems unfortunate.
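Mechanically, the extraction step is simple enough to sketch. The file names and column labels below are hypothetical stand-ins for Ulfelder’s and Cronin-Furman’s actual codebooks, which differ in coverage and coding rules.

```python
import pandas as pd

# Placeholder paths; the real datasets use their own codebooks.
ulfelder = pd.read_csv("ulfelder_mass_killing.csv")
cronin_furman = pd.read_csv("cronin_furman_atrocities.csv")

# Harmonize both to a shared (country, start_year) key, take the
# union, and keep episodes that fall in the 1991-2011 window.
cols = ["country", "start_year"]
episodes = (pd.concat([ulfelder[cols], cronin_furman[cols]])
            .drop_duplicates()
            .query("1991 <= start_year <= 2011"))
print(len(episodes), "candidate mass atrocity response events")
```

The real work, of course, is reconciling the two datasets’ divergent definitions of a “mass atrocity,” which no merge operation can do for you.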

This enjoyment, however, is extracurricular–at this point, it’s difficult to fathom the case studies’ application beyond a blogging platform. To riff on Adam Elkus’ Abu Muqawama piece on policy relevance, it’s not unreasonable to argue that blogging platforms have a diffuse impact, at least in shaping how students, practitioners, and scholars of foreign policy, comparative politics, and international relations understand the world. There’s a “logic of presentation” here, I think, which Stephen Walt implies in his widely discussed post on why academics write poorly: scholars may “discover” a particular political phenomenon, but the blog’s primary value is its wide distribution capability–it costs five minutes of iPhone data to read this blog post, whereas your run-of-the-mill International Security article might cost fifteen dollars. And, as Jay Ulfelder hinted, academic journals have not adapted well to the creative possibilities of the “information age”.

The past decade has witnessed a broad proliferation of quantitatively-oriented blogs, as well as those by regional experts. The Duck of Minerva, the Disorder of Things, the Monkey Cage, and Political Violence at a Glance each diffuse political-science research into blogosphere discourse, often in idiosyncratic ways. With rare exceptions, however, there appear to be few blogs in the political-science space that embrace qualitative research as a defining methodology, and which apply case-based empirical analysis to cross-regional, transhistorical trends. This is not to construct a Waltian/Mearsheimerian fallacy about the poverty of theory, of course–plenty of bloggers theorize well, and frequently–but to make an empirical observation about the blogosphere’s discrete components.

It’s easy enough to speculate about why this is the case: qualitative methods are, to my understanding, less than “in vogue”; large-n visualizations are sexier than, say, 3,000-word essays; and, as Andrew Moravcsik has recently implied (hat-tip: @stratbuzz), qualitative citations are a challenging beast. Harder, of course, is the process of identifying solutions. As I suggested above, I’m interested in expanding my thesis’ emphasis on relational power’s qualitative applications for mass atrocity response, although professional restrictions will continue to limit the scope and frequency of my analysis. Regardless, I’m interested in exploring platforms for engaging, effective qualitative research. In that vein, dear reader, a few questions:

  • What would an accessible qualitative blog look like, aesthetically speaking?
  • Assuming space limitations (<1500 words, as a standard practice), how would a qualitative blog apply theory to its component case studies? Policy applications?
  • What kind of case studies would a qualitative blog include? Theoretical variations on the same event? Event variations on the same theory?

Let me know if you think of any more, and I look forward to your comments.

lessons learned: the case for a human rights impact assessment

The institutionalization of human rights policy throughout the past two decades has led to the sweeping operationalization of liberal norms, through human rights policy mechanisms, new legal bodies, and burgeoning non-governmental organizations. The causal impact of Steven Pinker’s “humanitarian revolution” may be slightly overstated, but the heightened prioritization of human rights in various local, national, regional, and international political spheres is apparent. The political consolidation of human rights advocacy has allowed a second normative stage to emerge: the Critical Backlash. Political resistance was one thing, but the evolution of human rights advocacy’s moral critique is quite another. As any active tweeter will tell you, “moral hazard” and “unintended consequences” have become essential characteristics of human rights discourse. The basic concept: rather than maintaining a siloed, moral distance from policy, human rights operate within a political sphere; accordingly, the implementation of rights has unanticipated repercussions and moral ambiguities. Because they occur within an amoral context, human rights policies can lead to immoral consequences, furthering the disenfranchisement and dislocation they intended to address.

The notion of public policy’s nth-order effects is not new. As a recent NYT analysis notes, “moral hazard” emerged as an “obscure insurance term,” rooted in microeconomic principles. In social-service parlance, the hazard is often discussed as self-reliance’s counterpart, underscored by the ethically corrosive implications of welfare, communitarian support, and fiscal bailout. In an international human rights context, the norm’s critics perceive moral hazards within human rights and humanitarian interventions, due to the muddled politics of policy implementation. See, for example, Sarah Lischer’s classic, controversial study on the role of humanitarian assistance in fueling civil conflict (ungated), or Nathan Nunn and Nancy Qian’s more recent, comprehensive analysis of food aid and conflict onset (ungated). Due to their reliance on politics, human rights and humanitarian initiatives often fall subject to exploitation and manipulation by rights-restricting conflict actors.

“Unintended consequences” pose a similar challenge to moral policy implementation. For kinetic operations, including covert action, no-fly zones, and ground deployments, the unintended consequences are readily apparent: collateral civilian deaths, infrastructural damage, restricted aid routes, and the escalation of targeted violence against civilian populations. As policy interventions become less coercive, the unintended consequences become less apparent and, in most circumstances, less severe: economic and social dislocation, lagging human development, and public health challenges (see, for example, the oft-cited case of mineral extraction and violence in the Democratic Republic of the Congo). In theory, advocates perceive human rights policy as internally and externally consistent–that is, consistent with itself, as well as with related policy frameworks, including development, public health, and peacebuilding. Frequently, however, the consequences of amoral politics undermine moral consistency, pitching human rights policy into an ambiguous sphere.

Public policy institutions, including both domestic and foreign bodies, have recognized the presence of moral hazards and unintended consequences within the policy implementation process, and have sought to mitigate their occurrence. In the early 1960s, the Johnson administration established the Research, Programming, Planning, and Evaluation division of the Office of Economic Opportunity, the administration’s clearinghouse for “war on poverty” policy. By the end of the 1980s, national agencies and international bodies had initiated social and environmental impact assessment programs, seeking to apply social-scientific methodologies to the understanding of political institutions, economic and social interactions, and ecological phenomena. (See the McKinsey Social Sector Office for a concise timeline.)

From a public policy perspective, impact evaluation serves two roles: first, evaluative processes provide an opportunity to improve organizational inefficiencies, redundancies, and resource gaps; second, evaluation serves a moral function, allowing organizations to conduct scalable initiatives more responsibly, with an eye towards constituent, rather than organizational, needs. As impact evaluation became a prominent component of international public policymaking, the non-governmental, international development community followed suit, applying medical randomization practices to existing impact evaluation frameworks. Randomized controlled trials, as they’re called, test the micro-level implementation of a development project’s “theory of change.” Development impact evaluations, according to Chris Blattman, seek to challenge basic assumptions about institutional behavior, organizational cultures, and, on a fundamental level, human nature. Innovations for Poverty Action, a development implementation, research, and evaluation organization, has championed the RCT approach, using the framework to assess the impact of peace education in Liberia, reconciliation projects in Sierra Leone, and youth demobilization and reintegration campaigns in Uganda. Most recently, the peacebuilding community–including donor bodies, like the OECD-DAC, and non-governmental organizations, like the U.S. Institute of Peace–has matured slowly towards an evaluative model. While research limitations, a vicious normative cycle, and accountability restraints continue to limit effective peacebuilding evaluation, recent USIP-guided gatherings suggest an emerging enthusiasm for the evaluative process.
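At its arithmetic core, an RCT compares randomized treatment and control groups. Here is a minimal sketch with simulated data; the outcome measure, effect size, and sample sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcome index (e.g., a reconciliation-attitude score),
# with an invented +0.06 treatment effect.
control = rng.normal(loc=0.50, scale=0.15, size=200)
treated = rng.normal(loc=0.56, scale=0.15, size=200)

ate = treated.mean() - control.mean()     # average treatment effect
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"ATE = {ate:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

The expensive part of a real RCT is not this calculation but everything upstream of it: randomization, field surveys, and attrition tracking.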

Despite their relative youthfulness, human rights advocacy organizations have a great deal to learn from selective cultures of self-criticism within the public policy, development, and peacebuilding communities. As I’ve written before, cognitive biases, closed feedback loops, and simplified narratives within human rights organizations disincentivize due consideration of human rights policies’ unintended consequences and moral hazards. Policy interventions are perceived through a problematic “act first, ask later” lens; from a moral perspective, nth-order implications and political side-effects receive much less attention from organizational leadership than the burden of urgent reaction. Social impact evaluation would encompass sustained, repeated reflections on organizational theories of change–not just domestically, but within conflict zones, as well. As a formalized, institutionalized mechanism for self-criticism, communicative transparency, and responsible leadership, evaluative procedures would create more credible opportunities for responsible analyses of the long-term moral and political characteristics of rights-based initiatives.

What would a human rights impact assessment look like? McKinsey’s “Learning Driven Assessment” concept provides an important first-principles framework for assessments: ensure local, constituent-based transparency; allow constant, unfettered interaction between evaluative processes and organizational strategy; and, lastly, utilize the evaluative process to foster a learning, self-critical culture within the human rights organization. An effective evaluative process would conduct on-the-ground, systematic surveys among affected communities, allowing international advocates to move beyond politically-motivated diaspora networks as vessels for human rights evaluations. Evaluations would function as a key component of future human rights policy conversations, allowing advocacy-oriented policy analysts to consider more nuanced, fine-tuned, and less hazardous mechanisms for policy intervention. Evaluations would be conducted by third-party consultants, conceivably unaffiliated with the cognitive biases, ideological orientations, and organizational motivations of internal advocates. Darfurian Voices, a multi-organization, multi-sector survey of Darfurian perspectives on conflict resolution and human rights in Sudan, functions as a valuable model, but its organizational-level impact remains unclear.

An evaluative framework is not without its challenges, of course. As Bec Hamilton conveys in her study of the Darfur advocacy movement, human rights advocates were effective at mobilizing attention towards Darfur, but their results stopped short of comprehensive conflict resolution. Amid the myriad political motivations for government action, it’s difficult to differentiate between the circumstances where advocates tip the scale, and those in which advocates merely support a predetermined initiative. Additionally, there’s the financial element: in an international development context, RCTs require hundreds of thousands of dollars to implement, due to the logistical, human, and organizational burdens of field research. In a human rights context, effective evaluation would likely require coordination with on-the-ground actors, or, ideally, the deployment of a field research team to the affected area. Between the fragile circumstances, the communication challenges, and the development of local implementation networks, evaluative projects would require a significant investment. However, advocacy organizations could work with the public sector to ensure the effective, well-resourced, and successful fulfillment of evaluative pilot projects.

Related Reading: Last September, Justice in Conflict’s Patrick Wegner offered a compelling case for the creation of a “Department of Impact Assessment” at the International Criminal Court, which would evaluate the political and moral repercussions of international criminal justice interventions.

the intervention ratchet’s lexicon: confronting the teleology of mass atrocities prevention

This is the second post in a series on the lexicon of intervention’s slippery slope. The series is intended to educate human rights advocates about the opportunities, costs, and opportunity costs of coercive responses to mass atrocities.

Alex de Waal, Jens Meierhenrich, and Bridget Conley-Zilkic, three genocide scholars, have penned an exceptional essay on the analytical shortcomings of the present discourse on mass atrocities prevention. Disaggregating historical models of atrocities termination, de Waal, Meierhenrich, and Conley-Zilkic complicate popular trends in atrocities scholarship. The authors outline three dominant characteristics of the “genocide and mass atrocities” narrative: the teleological sliding scale of genocide’s emergence, the epistemological assumption of military intervention’s effectiveness, and the subsequent ethical imperative underlying our cognitive perceptions of mass atrocities. For the authors, the policy-based, moral, and analytical fixation on the Holocaust and Rwanda as historical atrocity models lays the foundation for a deterministic, static paradigm for prevention:

In its simplest form this [“essentialist logic of violence”] [is] seen as a graduated scale of warnings of genocide that corral the full complexity of conflict and inter-ethnic relations into a one-dimensional slippery slope that leads inexorably to genocide, and reduce the varied instrumental political logics of violence to evil motive alone. These cases model only two possible outcomes: either a completed extermination of the target group or an external military intervention to bring an end to the killing.

The essay is worth reading in full. Given this blog’s focus on mass atrocities prevention and policy, I’m planning over the next week to address each component of de Waal, Meierhenrich, and Conley-Zilkic’s analysis, starting with an assessment of the “teleology of mass atrocities.” The authors’ conclusions are apt, but generally unattributed, and I’d like to expand on their analysis of literature trends, cognitive narratives, and these narratives’ implications for policy formation and implementation.

So, to the teleology. It’s worth starting our assessment with James Young’s “texture of memory”–that is, the ways in which public discourse, memorial institutions, and narratives shape our collective understanding of the Holocaust, in particular. During high school, I spent two summers working as an education intern at New York’s Museum of Jewish Heritage, the city’s relatively nascent Holocaust memorial museum. Compared to its counterparts in Los Angeles, Washington, DC, and Jerusalem, MoJH sits squarely in the middle of the “Jewish particularism vs. Holocaust universalism” spectrum. The core exhibition progresses chronologically, but also thematically: the first floor emphasizes the cultural origins of Eastern European Jewry, while the third floor focuses on the moral universalization of the Holocaust. The Simon Wiesenthal Center’s interactive, cinematic, and LA-style Museum of Tolerance, on the other hand, features a “Tolerance Center,” transferring the Holocaust’s moral lessons to postwar and contemporary civil rights, human rights, and anti-bigotry struggles; similarly, DC’s US Holocaust Memorial Museum hosts “From Memory to Action,” a semi-permanent exhibit on post-Holocaust mobilization surrounding mass atrocities prevention and international human rights.

The moral project of Holocaust remembrance underlines public perceptions of subsequent crises, united under the essential ethics of common human dignity and justice. See, for example, President Obama’s 2009 Holocaust Remembrance Day address at the DC Holocaust museum, which articulates the post-Holocaust moral stain of mass atrocities: “[W]e have the opportunity to make a habit of empathy, to recognize ourselves in each other, to commit ourselves to resisting injustice and intolerance and indifference…[by] doing everything we can to prevent and end atrocities like those that took place in Rwanda, those taking place in Darfur.” The moral narrative of mass atrocities demonstrates de Waal et al.’s “graduated scale” of early warning and preventive opportunity; the “lessons-learned” understanding of humanity’s universal, collective responsibility offers little distinction within the hierarchy of rights. Under this ethical logic, hate-crime prevention and anti-bigotry education are the natural, fluid counterparts to atrocities prevention–as de Waal et al. observe, the resulting narrative is “one-dimensional,” defined by genocide’s inevitable emergence. Thus, the teleology: if we perceive genocide or large-scale atrocities as the unavoidable end-point of political violence, our cognitive approach to policy formation and implementation becomes maximalist. Resolving localized outbreaks, internal political disputes, and regional divisions becomes a moot point, because the perpetrator’s underlying immorality transcends the political power of short-term, non-coercive interventions.

Under the moral narrative of mass atrocities, conscientious policymakers bear overwhelming responsibility for the prevention of the world’s worst crimes; transgressions against said responsibility are redeemable through decisive displays of courageous, moral leadership. Again, the teleology of mass atrocities prevention is in play. Political leadership emerges as a rough approximation of Godwin’s Law: as atrocity events escalate, the probability of an ahistorical, misappropriated comparison to past atrocities approaches 1. The penchant for non-rigorous, comparative analysis undermines responsible discourse and, at the formation level, intervention: conflict resolution approaches become “intervention by analogy,” an inexcusably shoddy model for public policy. Rwanda 1994 is no longer Rwanda 1994, but an unhappy synergy of Somalia 1993 and Rwanda 1994; Libya 2011 is no longer Libya 2011, but a misplaced moral reflection on Rwanda 1994, Darfur 2004, and Libya 2011; etcetera.

Public textures of atrocity memory carry significant relevance for academic and policy understandings of mass atrocities, not least because high-level policymakers perceive and depict the common policymaking discourse on mass atrocities through a moral lens. The field of anthropology has long fixated on the social origins of inter-communal conflict, violence, and atrocities memory (for an excellent example, see Liisa Malkki’s Purity and Exile, a field study of ethnic politics in the aftermath of genocide, constructed through the lens of Burundian Hutu refugees in Tanzania). Over the past decade, sociocultural anthropologists have proposed an “anthropology of genocide,” which probes the social foundations of dehumanization, “Otherization,” and inter-communal animosity. Similarly, two decades of experimental research on the collective and individual psychology of mass atrocities, victimization, and perpetration have extended academia’s perceptions of atrocities’ social origins.

Disaggregated, socially-oriented research is important, particularly for policy and programmatic approaches to trauma relief, post-conflict reconciliation, and restorative justice. But, for public perspectives on mass atrocities, the “socialization” of genocide research possesses an unfortunate side-effect: an over-emphasis on social dynamics, perceptions of the “Other,” and the “psychology of evil” de-politicizes mass atrocities, reducing the social phenomena to easily replicable models of “eliminationism” (to use Daniel Goldhagen’s uniquely unhelpful term). Anthropological and psychological frameworks for genocide and mass atrocities explain how individuals and groups mobilize against civilians, and how basic, human goodness declines into the world’s worst crime. But they don’t explain why. Crucial questions remain: Why do political institutions perpetrate atrocities? How do atrocities expand, limit, and perpetuate national, regional, and local political priorities? Justice, empathy, and human dignity are important, but the moral narrative of mass atrocities doesn’t begin to address the incentives and disincentives that transform institutional actors into perpetrators, third-party bystanders into interveners, and targeted communities into victims.

In carving a path forward for non-teleological research, de Waal et al. reference Meierhenrich’s disaggregated framework for atrocities termination, presumably present in his forthcoming Oxford introductory surveys. Meierhenrich differentiates between three characteristics of genocide’s emergence: genocidal acts, which are one-off instances of massacre (periodic outbreaks of ethnicized violence in northern Nigeria, for example); genocidal campaigns, which may include instrumentalist forms of genocide-by-counterinsurgency, genocide-by-resistance, and genocide-by-occupation (de Waal et al. cite the Ethiopian Red Terror as one such example); and genocidal regimes, whose existence, survival, and political legitimacy rely on a genocidal ideology (Hutu Power in Rwanda, Germany’s Nazi regime). In some sense, Meierhenrich’s model represents a confluence of trends within the larger research literature on conflict emergence and political violence. Large-scale genocide studies, such as Ben Kiernan’s Blood and Soil, have disaggregated historical models of mass atrocity throughout time, rather than along Meierhenrich’s institutional distinctions. Meanwhile, advances in data collection technology (geographic information systems, especially) have allowed civil war researchers to prioritize the spatial disaggregation of conflict onset, duration, and termination (see, in particular, Cederman and Gleditsch’s 2009 JCR issue on “disaggregating civil war” (ungated), including excellent papers on ethnic marginalization, absolute/relative economic disparity, and geographic terrain, all gated). Atrocities analysis might apply a similar research model, using GIS data to trace complex local, regional, and national overlays of violence to determine the trajectory of and interaction between genocidal episodes.
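As a first approximation of what that research model might look like in code: bin geocoded violent events into grid cells and track each cell over time. The event records below are invented placeholders; the 0.5-degree resolution follows conventions like PRIO-GRID’s.

```python
import numpy as np
import pandas as pd

# Invented placeholder events: latitude, longitude, year.
events = pd.DataFrame({
    "lat":  [4.36, 4.40, 7.86, 6.80],
    "lon":  [18.56, 18.55, 29.69, 29.70],
    "year": [2013, 2014, 2013, 2014],
})

# Assign each event to a 0.5-degree grid cell, then count events
# per cell per year -- a crude spatial disaggregation of "violence."
events["cell"] = list(zip(np.floor(events["lat"] / 0.5),
                          np.floor(events["lon"] / 0.5)))
per_cell_year = events.groupby(["cell", "year"]).size()
print(per_cell_year)
```

Cell-level series like this are what would let analysts ask whether a “genocidal act” in one cell escalates into a “campaign” across neighboring cells, rather than treating the episode as one undifferentiated event.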

In addition to the Intervention Ratchet’s Lexicon series, this post is the first in a three-part assessment of contemporary narratives of mass atrocities prevention and genocide termination, sparked by de Waal et al.’s essay. Check back in a couple of days for the second installment, which will address the “epistemological assumption” and its implications for policy formation.

Hat-tips: to AIPR’s Alex Zucker, for the essay recommendation; and, to Holger Schmidt, my Georgetown professor, for the “disaggregated civil war” literature.