June 18, 2015
Poetry For Robots!
by Andrew Karpan
For millennia, the multifarious language of man was ours and ours alone, and we were (generally) given to understand that the robot (and before that, golem and minotaur) world communicated to us only through the sinister modulated voice of Douglas Rain. Now, the kind folks down at Arizona State’s Center for Science and the Imagination have teamed up with Neologic and Webvisions to change that, putting together a collection of images and asking any and all visitors to their site to write a few lines of metaphor-heavy verse. And just like the Center for Science and the Imagination’s last project, which collected “optimistic” sci-fi short stories and bundled them together to be put out by HarperCollins’s William Morrow imprint as Hieroglyph: Stories and Visions for a Better Future, this crowdsourced collection of poetic imagery is lined up for quick distribution: this time to robots!
Indeed, as The Guardian reports, the project will be figuratively (I suppose) “feeding poems to the robots.” While the promise of this new brand of consumer will certainly be an immediate relief to frequently cash-strapped publishers of poetry, Poetry For Robots’ goals are more expansive, and more complex, than expanding a small subset of the book market. Researchers have tasked themselves with proving one of Jorge Luis Borges’ better-known language theories: that (in short), despite the theoretically infinite capacities of language, certain common correlational patterns of imagery and their linguistic signifiers (e.g., stars commonly correlate, figuratively, to eyes) exist and can, theoretically, be summed in total, like synonyms in a thesaurus. To take Borges’ idea out of the realm of mere thought experiment, ASU’s Poetry For Robots is asking all of us to contribute a few lines of our own emotional portent toward this compelling array of images.
Simply click on whatever tickles your fancy and type (20 words or 150 characters) away.
If successful, Poetry For Robots hopes to apply this data to change the ways in which people interact with computers, allowing us to employ a much more vernacular language when using, for instance, search engines. The researchers, The Guardian goes on to report, want to “teach the database the metaphors” that humans associate with pictures “and see what happens.” Where previous efforts mainly focused on using computer programming to ‘fool’ humans playing games like chess or, even, selecting poetry for publication in an acclaimed college literary magazine, Poetry For Robots wants to do the reverse: ‘fool’ computers into utilizing the “poetic quality of human language.” Researchers at ASU will compile all the data entered into the site to create large blocks of metadata associated with certain images, which will then be turned into a kind of search engine to be presented at Webvisions Chicago in late September.
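For the curious, the basic mechanism the researchers describe can be imagined in a few lines of code. This is purely a toy sketch, not the ASU project's actual system: the image names, sample poems, and function names below are all hypothetical. It shows crowd-sourced lines becoming metaphor "metadata" attached to images, against which a figurative query can then be matched.

```python
# Toy sketch (hypothetical, not the ASU project's code): crowd-sourced
# poems become word-count "metadata" per image; a search then ranks
# images by how many query words their metaphors share.
from collections import Counter

# Hypothetical crowd-sourced lines, keyed by image filename.
poems = {
    "stars.jpg": ["her eyes were stars", "cold pinpricks of light"],
    "strawberry.jpg": ["a red heart in the grass", "breakfast hope"],
}

def build_index(poems):
    """Count which words the crowd associates with each image."""
    index = {}
    for image, lines in poems.items():
        words = Counter()
        for line in lines:
            words.update(line.lower().split())
        index[image] = words
    return index

def search(index, query):
    """Return the image whose metaphor metadata best matches the query."""
    terms = query.lower().split()
    scores = {img: sum(words[t] for t in terms) for img, words in index.items()}
    return max(scores, key=scores.get)

index = build_index(poems)
print(search(index, "eyes like stars"))  # prints "stars.jpg"
```

A real version would of course need stemming, synonym handling, and far more data, which is presumably why the project wants thousands of contributors.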
Looking over some of the images ASU has put up, I wondered if, maybe, some hundred or thousand or million people would use figurative language like “cold” or “conformist corporate synergy” to describe this cunningly off-center photo of a Mac and an iPad in happy harmony. Would a Google-like search engine of the future be able to suggest an Apple-like product to me if I typed in symptoms of depression or similar maladjustment?
But not all of ASU’s default-desktop-background photography is that commercially crass. Another image depicts a mangy bald eagle, a friendly, if somewhat haggard, symbol of national pride that makes you wish Colbert Nation (to say nothing of its fearless leader) were still an active and ironically spamming concern, while…oh wait—another iPhone. All of which brings up another issue with the experiment: the calculated genericness of the imagery provided seems designed to generate the very opposite of conventionally understood good poetry. And more importantly: won’t crowdsourcing just unload a lot of poetry.com-esque junk on our computing friends?
Regardless of how the experiment proceeds, the decision to use crowdsourced associations combined with ASU’s particular choice of keenly generic imagery is bound to be of some armchair sociological interest when the results appear this fall. Will the robots we train read into this inconspicuous strawberry a reminder of eerily similar-looking boxes of cereal, or will they broadcast the sincere language of nutritional fruity hope in the morning?