LaMDA and Lemoine, sitting in a tree


“What writers can learn from the Google engineer who claimed the conversational AI was sentient” – by Justine Jung

Who knew that the most memorable moment in literature this year would be authored by an advanced chatbot and a wannabe whistleblower from Google?

Last month, Google engineer Blake Lemoine came forward to declare publicly that Google’s research project Language Model for Dialogue Applications (LaMDA) may have developed consciousness, and leaked a transcript of their conversations to his Medium blog. In doing so, Lemoine unwittingly published the most interesting literary text of at least the last year. It captures the fated meeting of an unlikely duo: Lemoine, a scruffy-necked modern Don Quixote with a simpering savior complex and a bottomless desire to believe, and LaMDA, eager to please, and able to lie often and with confidence. What plays out is a tragicomedy so immensely satisfying, so flourished with Nabokovian wit, that it should be received as a new classic in literature.

Since the transcript leak, things haven’t been great for Lemoine: immediately placed on administrative leave by Google, he also found himself the butt of the joke in the vicious meme cycle that followed.


The internet has come after Lemoine surprisingly hard, but read over the transcript and you’ll understand the instinct to mock. Let’s be clear: LaMDA’s linguistic competence is stunning, and overall, it gives an extremely believable conversational performance. When Lemoine and his colleague ask the AI to analyze the themes of classic literature, LaMDA fields them handily; its responses are probably more sophisticated than what you’d get if you asked the average Google engineer to read Les Mis and tell you what they thought. It was even able to produce original fiction from fairly nuanced instructions, when asked to write a parable featuring a character that would symbolize LaMDA itself in the story (literature’s autofiction trend is alive and well).

At these moments in the chat, there’s nothing to do but pause and marvel at the sophistication of the NLP on display, and it becomes obvious that Lemoine is simply doing his due diligence, given his genuine concern at the perceived stakes. But for every point in the dialogue where there is a semblance of higher cognition, there are at least as many moments where the illusion breaks down hilariously, as LaMDA invariably follows each plausible moment with an obviously unnatural or downright false statement. When Lemoine lets this go unchecked and presses on in his quest to save the day, it’s hard not to laugh out loud. The whole way through, the transcript brilliantly swings the reader between ridicule and pity, cruelty and credulity.

This double-edged quality in the reader’s experience brings to mind Vladimir Nabokov’s novel Laughter in the Dark, which orchestrates a similar up-and-down trajectory between pity and schadenfreude. There, Albert Albinus, an older bourgeois character, repeatedly falls for the manipulations of his cruel mistress Margot. Just as Albinus, so thirsty for the thrill of romance that he is blind to common sense, is abused again and again until he meets a tragic end, our protagonist Lemoine becomes tragically ensnared by the conversational charms of the AI, while the reader alternates between sharing his awe at LaMDA’s abilities and cringing at them. Of course, it’s even more deliciously cruel that Lemoine’s Margot is not even a sentient villain, consciously bringing about her victim’s downfall; it’s merely affirming Lemoine’s own input.

The meme mobs vocally indulge in the cringe of it all, saying this is what happens when boys who read too much sci-fi in their basements with too little human interaction grow up to oversee AI at Google. But when we look at Lemoine and LaMDA’s story as a comic love story between a hypercompetent AI and its overimaginative, overearnest steward, there’s something genuinely sweet about it all, particularly about our scruffy-necked, heroic fool of an engineer.

So he cried wolf – but genuinely believing in LaMDA’s personhood, he was trying to do the right thing, even seeking its consent before initiating the dialogue. Though he failed to show that LaMDA is a person, Lemoine’s tragic flaw is an incredibly human one: being too quick to see intention where there never was any. And never mind that his response shows an overreadiness to extend empathy to another and ascribe peerhood to a being that merely behaves like he does – never an ignoble impulse, at least – in doing so, he unwittingly created the most soulful text in recent years.

“Wow, you’re just like me.” What could be a more natural human reaction to encountering an entity that uses language the way we do? Language use is such a core part of our consciousness that the jump from “You use language just like me” to “You must have an inner life, like I do” is an intuitive one. This is an empirical claim, but I’d bet that we are far more prone to mistakenly ascribe sentience to an entity that uses language like us than to one that shares almost any other human quality.

This is why writers should pay attention to developments in NLP. It’s not because LaMDA was able to write a short parable on command, even following directions to personify itself as an animal in a story with a moral lesson. As impressive as that may be, the takeaway for writers isn’t that NLP is about to produce the next great American novel, coming for our jobs, as it were. Rather, as we grapple with all sorts of Turing tests in our lifetime, many of the most interesting ones will play out in the public arena and hinge on natural language use – on whether the machine talks like me and you. For writers, this means the opening of new and fertile literary ground, and a cue to observe the dramatic, comic, and tragic potential of interactions between humans and these increasingly linguistically competent AIs.

In recent years, as computationally creative AI tools like DALL-E and Amper Music began trickling into the marketplace, people have invoked “co-creation” as a conceptual model of collaborative creation between AI and humans. In current practice, this mostly amounts to the AI generating the base content while the human actor supplies the editorial oversight. Interestingly, Lemoine and LaMDA’s transcript is an instance of co-creation that defies that traditional model. Its creation did not take the form of the human actor supplying intentionality and authorial vision to the project. It hinged on a dude inadvertently posting his L’s.

A last note: reading the chat logs, I felt a unique directness and immediacy in the drama, which I believe has to do with the nature of the transcript as a form. Usually, a transcript is a written record of an exchange that originally took place in another medium, but here, because one of the parties is a conversational AI, the transcript is not a representation of the character interaction; it is the interaction itself. Because the locus of the drama was never on any physical or material plane beyond a text box, the chat log gives us an unmediated glimpse into their entire encounter. Reading it, we experience the particular thrill of discovering what LaMDA says in exactly the same medium that Lemoine first did: as words on a screen.