what is america’s foremost public intellectual?

Earlier this week, following an all-too-familiar media scuffle over Mitt Romney, the American family, and race, Ta-Nehisi Coates published an article that described MSNBC anchor Melissa Harris-Perry as America’s “foremost public intellectual.” In Coates’ telling, Harris-Perry is a brave and unusual public voice, in large part because of her identity: an educated black woman in an overwhelmingly white, male public discourse. Coates’ hyperbole–an intellectual’s “foremost”-ness ranks that which defies measurement–provoked a predictable firestorm. According to his critics, Harris-Perry cannot be a “foremost” public intellectual because she is not Noam Chomsky, the profession’s prototype, or because she is not often cited in the halls of power, or because she does not shape the common structure of our public sphere.

These objections contain subtle half-truths, and little more. What a public intellectual is–professionally, socially, politically, and otherwise–rests entirely on two questions, awkwardly combined: what a “public” is, and whose ideas that public accepts. These questions are as fresh in our contemporary discourse as they are age-old–in fact, they underpin the very contours of our political conversation.

Over the past few years, I have been engaged in a slow, plodding biographical study of Tony Judt, the late historian whose profound prose Coates and I both admire. For the short two decades between 1992, when Judt published a controversial study of postwar French intellectuals, and 2010, when he died, very slowly, of ALS, Judt was a “public intellectual” by many definitions. His biography, a very interesting one, offers three preliminary insights into what a public intellectual is, and how we should think about the pillars of such a person’s identity.

Ideology: An intellectual’s ideas are not independent–they exist in an ideology, the society of ideas with which they publicly and privately cohere and differ. An intellectual can proclaim themselves “non-ideological,” as a politician proclaims themselves “non-partisan,” but the social act of discussing ideas requires that those ideas align with others, whether in structure, logic, or outcome. Even the discussions of Socrates, Western history’s most storied gadfly, created ideologies–modes of thought with which his students disagreed, and against which they defined the substance of their own ideas. Judt was also ideological, despite Coates’ insistence to the contrary: in refuting the totalitarian-lite preferences of postwar French intellectuals in Past Imperfect, he associated himself with a particular (Isaiah Berlin-inspired) definition of “freedom”; in dismissing the Zionism of the hawkish American left, he adopted a particular concept of political Jewishness. Judt’s intellectual milieu made much of Vaclav Havel’s trope, “speaking truth to power.” But his ideas, like those of all public intellectuals, were just another power, crafting another truth, however much more righteous.

Access: When Coates described Harris-Perry’s qualifications, he was quick to cite her credentials: degrees from top universities, a prominent job. Of course, these are metrics of influence, but they also indicate access, a feature not necessarily driven by the quality of an intellectual’s ideas. As social justice advocates often observe, access is more reliably an indication of privilege than of inherent intellectual value. Harris-Perry may be morally correct on some topics, and wrong on others, but it usually doesn’t matter: her influence rests on her credentials, the currency of our current meritocracy. Like Harris-Perry, who brushed against a public racial taboo, Judt ruffled the delicate pro-Israel opinions of a liberal intelligentsia with his post-Zionism. He faced significant objections, and some alleged efforts to restrict his public commentary, but remained tied to NYU, his home institution, and the New York Review of Books, which first published his critical commentary on Israel in 2003. Judt could afford to be a public intellectual because his public imposed few sustained consequences for his dissidence.

Audience: When Coates asserted that Harris-Perry was the foremost public intellectual in America, he took two statements for granted: that “America” is a unitary public, and that its publicness carries social and moral meaning. Survey the American media landscape, the most reliable proxy for our national public sphere, and both statements appear patently, unavoidably false. “America” is surely a salient concept and identity for many, including those who live within the borders of the United States, but its salience, symbols, and virtues diverge along that stretch of highway between Chicago’s South Side and the plains of Oklahoma. Logically, its publics must also split. There are those who view a stretch of 57th Street as their common ground, and others who view morning-time Fox News talk shows as a vessel for their cultural norms. Judt also faced these divided publics. Left-wing French intellectuals incited literary riots over his damning historical portrait of their predecessors; the New York Times Book Review, in contrast, featured a generous front-page essay on Past Imperfect, but few would describe its American reception as “publicly prominent.”


the murky swamp of mass atrocity data

Evangelists of “big data,” the possibility of computed knowledge at unprecedented scale, often describe our contemporary world as a “sea” of information. Data scientists have more and better knowledge of how humans behave, how they interact, how they cooperate, and how they conflict, generated as much by our own actions–through the Internet, mostly–as by those who surveil us. For some problems, the dataset is a near-perfect match. Commercial airlines use “frequent flyer” programs to track when their customers fly, and to where; electoral strategists manipulate marketing information to infer norms, cultural preferences, and political opinions among likely voters. Amid an unfathomable sea, these data are intimate and human. Sgt. Pepper’s “day in the life,” once framed by a cup of coffee, is now an ever-present data stream. We wake up, we create data; we go to the bodega, we create data; we set up shop in a six-by-six cubicle. We create data.

Violent conflict, especially on a mass scale, is never so neat. Acts of violence don’t create data, but rather destroy them. Both local and global information economies suffer during conflict, as warring rumors proliferate and trickle into the exchange of information–knowledge, data–beyond a community’s borders. Observers create complex categories to simplify events, and to (barely) fathom violence as it scales and fragments and coheres and collapses. A “mass atrocity” is a fiction; an analytically and morally useful one, but a fiction nonetheless. We expect system to follow scale, but it rarely does. So rarely, in fact, that observers identify little more than one hundred mass atrocity events since the end of the cataclysmic Second World War. One hundred is a large number, but it’s a negligible fraction of the individual acts of violence that those events comprise.

Mass atrocity data have improved in fits and starts. The Global Database of Events, Language, and Tone (GDELT), a massive open-source computing effort, uses an automated, iterative data stream to collect events. GDELT ingests information, imperfectly, to create a more perfect portrait of where events, including violence, occur around the globe. John Beieler, a political science PhD student at Penn State, recently experimented with the GDELT dataset of violent events in the Central African Republic (CAR) and South Sudan, both of which are embroiled in ongoing mass atrocities. Beieler used the dataset to assess the likelihood of future mass atrocities in either country, but came up short. Local and international media sources feature both conflicts–gruesome portraits grace A1, and prominent global officials publish opinion pieces to “bear witness” to CAR and South Sudan’s respective horrors. But media publications cover these events as “mass atrocities,” and not as a sequential series of individual violent events. In a coda, Beieler contrasts this with Egypt, which, because of a glut of foreign journalism, the availability of citizen reporting tools like Twitter, and robust foreign diplomatic engagement, appears as both “mass repression” and a sequential series. Our understanding of a conflict’s progression through time–what it is, as a global event–determines its media coverage, and therefore its usefulness as a big data subject.
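To make the “sequential series” point concrete, here is a minimal sketch of the kind of counting exercise Beieler’s experiment implies: tallying daily violent events per country from a GDELT-style events file. The column names and FIPS country codes follow GDELT’s published event schema, but treat them, along with the file name, as assumptions to verify against the data you actually download; this is an illustration, not Beieler’s code.

```python
# Minimal sketch: count daily "material conflict" events per country from a
# GDELT-style events export. Column names (SQLDATE, QuadClass,
# ActionGeo_CountryCode) and the FIPS codes below are assumptions drawn from
# GDELT's published schema, not verified against any particular file.
import pandas as pd

EVENTS_FILE = "gdelt_events.csv"  # hypothetical local export, tab-delimited
COUNTRIES = {"CT": "Central African Republic", "OD": "South Sudan"}  # assumed FIPS codes

events = pd.read_csv(EVENTS_FILE, sep="\t", dtype={"SQLDATE": str})

# Keep only "material conflict" events (QuadClass 4) in the two countries.
violent = events[
    (events["QuadClass"] == 4)
    & (events["ActionGeo_CountryCode"].isin(COUNTRIES))
]

# Count events per country per day -- the sequential series that, per the post,
# media coverage of CAR and South Sudan rarely supports.
daily_counts = (
    violent.groupby(["ActionGeo_CountryCode", "SQLDATE"])
    .size()
    .rename("event_count")
    .reset_index()
)

print(daily_counts.head())
```

If the underlying media record treats a conflict only as an undifferentiated “mass atrocity,” the daily counts such a script produces will be sparse or flat, which is precisely the gap described above.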

The convergence of scarce media, knowledge, and data is not unique to massive datasets, nor to time-bounded events. The information that local aid groups use to assist conflict-affected communities is small, in comparison. Small data are complementary, not subordinate, to their massive counterparts. Humanitarian networks, mediators, and civil society organizations want to know where violence occurred and, consequently, where vulnerabilities persist. While time is a useful data point, location is essential. Without location, aid groups won’t know where to go or how far to extend their operations. As Christopher Neu, a peacebuilding technologist, observes, the usefulness of public small data rests on an ethical quandary: in a live conflict, do humanitarian small data expose the same vulnerabilities they aim to fix? Where GDELT’s big data are open-source, small data are inherently proprietary–they’re generated by a user, one who sometimes risks physical safety to report a violent event’s location. Proximity, so often praised among peacebuilders as the nuance big data lack, also muddies the data pond it aspires to clarify.

why did mass killing increase in 2013?

For Joseph Brodsky, mass violence was a close, unwanted companion. The Russian poet’s career began in earnest under repression’s shadow, in the exiled cold of the northern Arkhangelsk region. In 1964, Brodsky wrote “Spring Season of Muddy Roads,” a quaint and subtly tragic pastoral. The verse portrays a weary road, recently muddied by a spring rain. The road’s new season, so often refreshing, is now uncertain: “It’s not quite spring, but some- / thing like it. / The world is scattered now, / and crooked. / The ragged villages / are limping. / There’s straightness only in / bored glances.”

The year 2013 was scattered, too, and crooked. For all that went well, much also went poorly. In the wake of South Sudan’s horrific violence, Jay Ulfelder reports, “2013 may become worst year for onsets of state-led mass killing since early 1990s.” By Ulfelder’s count, mass killing began, nationally, in South Sudan, the Central African Republic, Nigeria, and Egypt, and continued in Syria and Iraq. That these events–“mass killing in South Sudan,” for example–are statistical inventions does not lessen the human significance of their politics. Mass killing is a very particular form of conflict, with particularly grave consequences for its victims. Its persistence–in those four new events, and in the thousands of microscopic conflicts that comprise them–may explain a tragic feature of our present politics.

Since the regional Arab uprisings of 2011, the “global upsurge” in protest has become an accepted proxy for our present era of instability. Global civil society appears, often in tandem, to carve new political space against a backdrop of regime repression. These civil society actions represent an increasing fraction of anti-state activity, writ large. Beyond them, the Central African Republic’s anti-balaka militias now violently contest the Seleka movement’s revolutionary rule; until last spring, when a heavy-handed counterinsurgency pushed them out, Boko Haram’s entrenched cells secured near-complete authority over Nigeria’s northeastern Borno State. In this context, mass killing is often the state’s last barricade–an extreme measure in the most desperate times.

Anti-state activity was not necessarily more frequent in 2013, nor will it always precipitate mass killing. But as protest and insurgency reemerge, the type of extraordinary politics that mass killing represents may prove more frequent. As Brodsky writes, there’s straightness only in bored glances.

Update: In the comments, Jay Ulfelder offered the following clarification on the above-mentioned data:

One point of clarification: I don’t see *state-led* mass killing in all of the cases you list, and I do see state-led and non-state mass killing in some cases not listed here. Here’s a quick list that will probably turn into a blog post before year’s end:

* Ongoing episodes of state-led mass killing: Syria (opposition), Sudan (multiple), Egypt (Islamists), North Korea (gulags), Myanmar (Kachin); now maybe also Nigeria (anti-Boko Haram), South Sudan (anti-Machar faction/Nuer); and maybe still DRC (eastern)

* Other conflicts producing episodes of mass killing that aren’t state-led: Iraq, Pakistan, Nigeria (Boko Haram), Mexico, now also CAR, and surely others I’m forgetting

I look forward to Ulfelder’s post.

finding meaning in genocide

The legend of Raphael Lemkin is by now well-trodden. The Polish Jew witnessed Turkey’s Armenian massacres as a bystanding linguistics student in Lvov, in newly-independent Poland, and later experienced the Holocaust from afar, as an American émigré. In the telling of Samantha Power, now the U.S. ambassador to the UN, Lemkin was taken by Winston Churchill’s description of Nazi violence, a “crime without a name.” Until he coined the term “genocide” in Axis Rule in Occupied Europe, a dry, heavy legal tome, Lemkin remembered Armenia’s suffering–and described the Holocaust of his own time–as an uncommon horror, forever nameless.

Whatever the eventual goodness of Lemkin’s innovation, both Churchill and his admiring jurist were wrong. The moral language of Lemkin’s era amply accommodated the war’s profound terror, often at the scale Lemkin’s term implies. Destruction, atrocity–in 1945, Hannah Arendt used both terms to describe the recent annihilation of European Jewry. Whatever her later controversial understanding of Nazi power, Arendt’s early postwar essays made clear that killing at the unforeseen scale of Nazi concentration camps and roving Einsatzgruppen could and did have many names.

Given this moral context, the importance of Lemkin’s term cannot lie in its mass, nor in its systematic procedure. It is a consequence of our imperfect memory of Nazi power that genocide implies an expansive “order, deliberateness,” as Philip Gourevitch suggests. Nazi, Hutu Power, and Khmer Rouge politics were often as disordered as their murderous counterparts in the Central African Republic, with which they are now contrasted. The political process by which Hitler’s regime pursued Mein Kampf’s totalitarian goals often failed, though not often enough–in Denmark, where fascism’s collaborative reach collapsed, and in eastern Poland, where a popular resistance welcomed Stalin’s alternative violent rule. American soldiers had no certain power as they marched the Cherokee nation towards an unknown frontier, nor did Khartoum’s janjaweed militias as they scorched Darfuri villages on Sudan’s western margins. For Andrew Jackson, as for Omar al-Bashir, genocide was a power-seeking enterprise–authority was taken, not affirmed. Genocide’s systematic order, the alleged smoking gun of mass killing, exists only in the terror’s successful conclusion. In the thick of it, the mass violence is anarchic, meaningless.

Instead, the power of Lemkin’s genocide comes from an idea now so obvious as to approach cliché: that identity matters. Lemkin was a Polish-American Jew, but he was also a committed globalist in an era of dismal chauvinism. He wrote Axis Rule as the last paltry legacies of the interwar League of Nations crumbled, and before Dumbarton Oaks recast the postwar international consensus. Identity shaped the moral imagination of interwar Europe in the ugliest of ways, and so many innovations of the late Forties–institutional, literary, political–sought to redeem Europe’s collective wrongs. These innovations were often contradictory: the cosmopolitanism of the Universal Declaration of Human Rights affirmed humanity’s common dignity as the UN Security Council secured the predominance of great power politics. But they were also complementary. That the Genocide Convention was the UN’s first human rights treaty is no mere trivia–for the UN’s framers, the symbolic protection of the provincial was both the prerequisite for and the partner of the universal. Lemkin’s term codified identity, previously a privilege of particular groups such as the Levant’s persecuted Christians, as a meaningful pillar of an international rights pantheon. In his design, the future’s proverbial European Jews would no longer fear annihilation for who they were or who they saw themselves to be.

That’s a very different thing than a universal right to life. To suggest that identity matters is to suggest that society is part of life’s meaning–that what makes us human is not simply that we exist, but that we interact in common, collectively. Lemkin’s anthropology of violence rebuts the interwar era’s eugenic consensus: it implies that, whatever ethnic, racial, religious, or national group one might belong to, that group and its members deserve an equal opportunity to thrive and, more importantly, to live.

The consequences of Lemkin’s term are profound and, as such, controversial. Being social, human identity cannot exist in a vacuum–it is defined as much by those who oppose it as by those who claim it. In eastern Poland, this means that an Orthodox Jewish congregation in Bialystok that has never associated with a Reform Jewish congregation in Berlin becomes “Jewish.” In the Central African Republic, this means that disparate Christian groups become “Christian,” because the Muslim-led government and the international community say so, and not because they’ve received the blood, body, and spirit of Christ. These identities are often false, or insufficiently nuanced; but that they are constructed, either internally or externally, does not make them any less real, politically, morally, and anthropologically.

will south sudan’s recent fighting kill more people?

It’s difficult to tell what has happened over the last week in Juba, South Sudan’s capital city. Early reports suggested a military coup, and President Salva Kiir, dressed in a general’s uniform, did his best to convince both South Sudanese citizens and his international patrons of as much. The U.S. State Department, among other diplomatic bodies, has issued a travel advisory for U.S. citizens–say what you will about Benghazi, diplomatic security, and threat inflation, but U.S. advisories are often a good indicator that an international crisis is getting bad, quickly. However we describe this week’s events–as a coup, as the early stages of civil conflict, as ethnic infighting–it’s clear that people continue to die, and that disparate security factions are among the perpetrators. Will South Sudan’s current violence kill more people in the future, and if so, why?

A recent Economist dispatch places the root of the fighting in South Sudan’s tumultuous ethnic politics. This is not to say that “ethnicity” qua identity is responsible, but that there is a particular group of people (in this case, Salva Kiir’s Dinka affiliates) that controversially holds more power than another particular group of people (Riek Machar’s Nuer group), and which often excludes the latter from political decision-making. Inasmuch as South Sudan is a state, it is a state because these groups, as well as dozens of others, have decided that this arrangement works–its elites have enough money, and their supporters have enough services to support their livelihoods. That arrangement has been crumbling since 9 July 2011, and probably since before. It dates to South Sudan’s contested transition from rebel nation, during the Sudanese civil war, to international trustee, in the aftermath of the 2005 Comprehensive Peace Agreement, to sovereign state. It’s hard to identify the proximate spark, but this context is, as always, important.

The “meaning” of this week’s violence depends in part on your dataset’s timestamp: if you look to 2011, it’s likely a symptom of a decaying proto-state; on the other hand, it may be a longer-term consequence of Kiir’s heavy-handed consolidation. Of course, it may be both: perceiving its own weakness, Kiir’s coterie attempts to seize authority by violently repressing potential insecurity. This is the explanation Jay Ulfelder prefers, and I think he’s right: civilians always suffer as elites scuffle. As I write, UN officials report that several hundred South Sudanese civilians have died in Juba’s clashes. It appears that South Sudan is well on its way to simultaneous mass atrocities–one in Juba, and one in Jonglei, an eastern state on the capital’s margins. As South Sudan’s crisis deepens, it’s worth thinking about how these crises both overlap and don’t; how the weakness of a corrosive regime and its efforts to shore up authority may cause–both directly and not–greater suffering.

2013 may have been the best year in human history, but 2014 might not be

Zack Beauchamp published a very fine essay yesterday, provocatively titled, “5 Reasons Why 2013 Was the Best Year in Human History.” Beauchamp’s human history is not a natural force, as it was for Hegel or, to a lesser extent, Marx–instead, 2013 was humanity’s best year because we made it so, and because the systems, technologies, and societies humans create allow us, collectively, to prosper better than before. Others–Steven Pinker, whose work Beauchamp cites, comes to mind–have written as much before, alternately leaving more and less space for human decisions. Beauchamp’s is a soft, purposeful history, and one which, he acknowledges, is easily reversible.

Beauchamp describes war’s aggregate decline, one widely used proxy for our common prosperity, as “[not] accidental…by design.” This turn of phrase is a helpful metaphor, as design implies intent, decision, and agency. But as much as the design metaphor suggests human improvement, it also illustrates our prosperity’s present and future fragility. Here are two reasons why:

1. The human ecosystem is fragile: When we discuss ecology, we usually refer to our natural environment: the atmosphere, oceans, forests, and mountain ranges that human actions often destroy. Indeed, climate change–and, more specifically, its humanitarian consequences–is the greatest data point against Beauchamp’s argument, and one which he anticipates. But our human ecosystem comprises more than the natural environment, though the two forces often intersect and conflict. Humanity is distinguished by our ecological complexity–by the institutions, systems, and structures that shape how our species interacts. The UN, for example, is an ecological mainstay–where it operates, it informs how human societies live, how they prosper, how they die, and how they are remembered. But these systems are fragile, as they have been throughout human history. They may be strong one year, weak the next, and strong again the year after. The “life-saving technologies” Beauchamp describes–anti-retrovirals, genetically modified foods–are symptoms of the systems that work: when they don’t, there are fewer technologies, and fewer lives saved. When the Global Fund, among the largest grant pools for disease prevention and treatment, announced massive grant cuts in 2011, anti-HIV/AIDS research took a hit. Funding has since returned in the aftermath of the global economic crisis, and anti-retroviral medical innovations continue apace, but the Fund’s momentary shock underscores the ever-present risk of systemic–and, consequently, technological–failure.

2. Whose human history?: Human prosperity is an aggregate measure, which necessarily and perhaps defensibly glosses over its own outliers. Beauchamp’s humanitarianism is cosmopolitan, as all total human histories must be. It views human progress as a global dilemma, one which rises and falls in toto. But Beauchamp’s is scarcely the only humanitarianism, because this cosmopolitan imagination fails where it succeeds: in representing human prosperity as “positive, on the balance.” The rebuttal, “It probably wasn’t the best year if you’re Syrian,” is cheeky and unhelpful, but there is some validity to it. Human prosperity can only be “positive, on the balance” if all accept cosmopolitanism–that is, common humanity–as a shared identity. Shades of tragedy emerge for those who reject cosmopolitanism, or, as is more often the case, simultaneously accept more proximate identities–an interfaith minister who is also a Jew; a black African nationalist who is also Xhosa. The tragedies these groups experience are morally and politically meaningful, and they define histories equal to humanity’s–of the self, of the family, of the community, of the nation. So no, it probably wasn’t the best year if you’re Syrian, and that is also the human history of 2013.

the ethnic politics of love actually


Over the past week, Atlantic film critic Christopher Orr has waged a noble yet foolish battle against equally foolish evangelists of Love Actually, the now-notorious 2003 British Christmas-movie-qua-romantic-comedy. Unintentionally and (perhaps too) whimsically, The Atlantic’s Great Love Actually Conflict of December 2013 illustrates the evolution of ethnic identity during political conflict:

1. First, a belligerent force asserts an identity, and seeks to impose that identity on others through coercive means.

2. Second, a marginalized group asserts a counter-identity, which exists simultaneously in itself and in opposition to the “original” identity.

3. Third, the belligerent force, placed on the political–and, often, military–defensive by its counter-identity, wages attritional conflict against the marginalized group and its supporters.

4. Civilians are often harmed.