March 9, 2016

Computers don’t care about your MFA

by Simon Reichley


Richard Jean So and Andrew Piper—“two professors of language and literature who regularly use computation to test common assumptions about culture”—have just published an essay in The Atlantic claiming to have finally settled the age-old debate over the usefulness of an MFA, using—predictably—MACHINE LEARNING! *smash cut to footage of the robot apocalypse from Terminator 2*

The TL;DR version of the article is that So and Piper jammed almost 400 novels published in the last 15 years through a (presumably thorough) series of heuristics, algorithms, and computational text analyzers to see if there was any empirical or statistical difference between those that were the product of an MFA program and those that were not. Their conclusion: nope.

We began by looking at writers’ diction: whether the words used by MFA writers are noticeably different than those of their non-MFA counterparts…there are some words that are different, but given that we’re talking about over 200,000 unique words, this is hardly surprising. For example, MFA novels tend to focus more on lawns, lakes, counters, stomachs, and wrists. They prefer names like Ruth, Pete, Bobby, Charlotte, and Pearl (while non-MFA novels seem to like Anna, Tom, John, and Bill)…while MFA novels do tend to slightly favor certain themes like “family” or “home,” overall there’s no predictable way these topics appear with any regularity in novels written by creative writing graduates more than other people who write novels…

…MFA novels tend to use pairs of adjectives or adverbs less often, or avoid the more straightforward structure of a noun followed by a verb in the present tense. But other than that, there’s nothing detectably unique about the so-called “MFA style.”
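So and Piper haven’t published their code, but the diction comparison they describe is, at bottom, a word-frequency comparison between two corpora. For readers curious what that kind of analysis even looks like in practice, here is a minimal sketch in Python; the folder names, the crude tokenizer, and the log-ratio scoring are stand-ins of my own, not the authors’ actual method.

```python
# Rough illustration of a diction comparison between two corpora:
# count word frequencies in each set of texts, then rank words by how
# disproportionately often they appear in corpus A versus corpus B.
# Paths and smoothing constant are placeholders, not So and Piper's setup.
import math
import re
from collections import Counter
from pathlib import Path

def word_counts(paths):
    """Crudely tokenize each text file and tally lowercase word counts."""
    counts = Counter()
    for path in paths:
        text = Path(path).read_text(encoding="utf-8", errors="ignore")
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def distinctive_words(counts_a, counts_b, top_n=25, smoothing=1.0):
    """Rank words by the log ratio of their relative frequencies in A vs. B."""
    total_a = sum(counts_a.values()) or 1
    total_b = sum(counts_b.values()) or 1
    scores = {}
    for word in set(counts_a) | set(counts_b):
        rate_a = (counts_a[word] + smoothing) / total_a
        rate_b = (counts_b[word] + smoothing) / total_b
        scores[word] = math.log(rate_a / rate_b)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical usage: two folders of plain-text novels.
mfa = word_counts(Path("corpus/mfa").glob("*.txt"))
non_mfa = word_counts(Path("corpus/non_mfa").glob("*.txt"))
print(distinctive_words(mfa, non_mfa))
```

Even a toy version like this makes the point that everything hinges on which books end up in which folder—which is exactly where the trouble starts.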

According to So and Piper, MFA novels seem to suffer from exactly the same demographic narrowness that plagues American fiction in general, despite the fact that the MFA “promises to make the distinction of race come alive, take on literary heft, through learning how to write and the work of writing.” The cast of MFA fiction, in other words, is just as white and male as the rest of the field.

These bold, authoritative statements are made with cool statistical conviction. But one of the most problematic things about the piece is that the authors use the language and authority of statistical and computational analysis to mask a shaky premise (that we should be able to find an empirical, statistical difference between authors with MFAs and New York Times-reviewed authors without them) and a questionable methodology (that the best way to discover those differences is to let computers read books for us).

Lincoln Michel at Electric Literature has a pretty excellent take on So and Piper’s questionable premise:

Who argues that MFA grads write differently from their mainstream literary fiction peers? Most aspiring novelists go to MFAs precisely to be able to write the kind of work that gets published by big houses and reviewed in major papers—i.e., mainstream literary fiction. So and Piper might have found very different results if they compared the works of MFA grads to, say, small press horror novels or self-published romance ebooks.

Moreover, the authority that the authors are attempting to leverage doesn’t rest in the numbers themselves, or in the computers that generate them, but in the institutional structures and practices that constitute the scientific method. So and Piper’s data and methods aren’t available for cross-examination, although they have promised to release “more details and findings about our experiment” in the coming weeks. Until then, we’re stuck with a very unscientific lack of transparency, which is problematic when you’re positioning yourself as an above-the-fray empiricist.

Again, Michel is spot on, and points out exactly how a lack of transparency completely discredits this sort of data-driven analysis:

…one of the three examples The Atlantic gives for a non-MFA writer they analyzed is Akhil Sharma. Sharma studied under writers like Joyce Carol Oates and Paul Auster in undergrad, then was awarded a prestigious Stegner creative writing fellowship, and has taught in the MFA program at Rutgers. It is only a technicality that Sharma doesn’t count as an MFA author (the Stegner is an MFA-style creative writing program at Stanford that is largely awarded to people who already hold MFAs). The authors don’t make their data public, but there’s little doubt that their “non-MFA” data set is filled with writers who similarly either studied creative writing in undergrad or teach in MFA programs.

It may be that the authors have submitted their findings for peer review, that the tools they used are industry-standard, and that their methods are beyond reproach. But again, we’re unable to say for sure, because none of that information is available, either in The Atlantic piece or on their blog, at least not yet. And until it is, this kind of analysis is really just a sophisticated form of intellectual techno-bullying, which attempts to bluster its way into a position of authority by faking its credentials.

Admittedly, we’re not talking about gravitational waves here; we’re talking about a culture piece in The Atlantic. But bullshit is bullshit, and we shouldn’t let it work its way into a debate that, as So and Piper correctly assert, has a very real, very serious impact on writers and writing.

Simon Reichley is the rights and operations manager at Melville House.
