
Becoming Human


This brief presentation was part of a panel at MLA 2020 in Seattle entitled “Being Human, Seeming Human.” The panel brought together researchers from Microsoft with a couple of DH folks (me and Mark Sample) to talk about the history of research into artificial intelligence and conversational agents, some current experiments and challenges in the field, and the possibilities this work creates for literary artists today. The role I took on, as the last speaker in the session, was to raise some questions about how our engagements with these conversational agents might be affecting us.

Becoming Human

My presentation required me to open with a mildly mortifying revelation: When I was young, I took a lot of unconscious cues for how relationships were supposed to work from the ways they were represented on television.

Screenshot from Dynasty

This, perhaps needless to say, was a terrible mistake, which I discovered full-force the first time I ended an argument with an incisive, cutting one-liner and stormed out of the room. The person with whom I was arguing did not chase after me; there was no stirring emotional reunion. There was no sense in which I got to feel like I’d won. There was only a deep breach of trust, leading eventually to the loss of a relationship and the realization that so much of what I’d ingested as a child had been utterly wrong, that real connections between actual humans could not survive the kinds of dramatic behavior I’d been encouraged to think I was supposed to emulate.

This of course seems like a no-brainer now. Perhaps it’s just one of those things you shed in the process of maturing, but it’s hard for me today to imagine taking the relationships I see enacted onscreen to have much to do with my actual relationships in the world.

Screenshot from Friends

I know I’m not alone in my prior mistake, though; I have a close family member who once confided in me that she had been sorely disappointed to discover that as an adult she did not develop a cluster of relationships like those portrayed in “Friends.” I understand her disappointment; I was similarly saddened to discover that the world was not inclined to serve as a receptive backdrop for my self-dramatization.

What does this have to do with the current state of the development and deployment of artificial intelligences and conversational agents in online environments?

Screenshot from Her

Only this: as we engage with more and more non-human actors in technological environments, we may be prone to think of one another — and indeed ourselves — as less than human.

I want to be clear, though: just as with my failure to see how relationships on television distorted and misrepresented actual emotional interactions among actually existing humans, the fault is not in the quality of the writing. “Better” television would not have produced a better understanding of human engagement.

Siri

Similarly, “better” conversational agents will not lead to more humane interactions online. The problem lies rather in a prior category error that makes it difficult for us to separate selves from self-representations. And it’s this category error that has led to what I increasingly think of as the failed sociality of social media.

That argument, in very brief, points to the ways that social media has promoted and benefited from a misunderstanding that mistakes connected individualism for real sociality.

Facebook

Yes, we engage with one another’s self-representations on these platforms, but the engagements are not real sociality, any more than the self-representations are our actual selves. We are cardboard characters in a poorly imagined drama, often behaving toward one another in ways that real relationships cannot survive — in no small part because social media platforms are heavily based around, and in turn feed, our cultural tendency toward competitive individualism, a tendency that slides all too easily, inexorably, into the cruel.

This argument — that social media as we participate in it has never been and in fact could never be social — requires me in this presentation not only to acknowledge my somewhat mortifying childhood failures to discriminate between representations of relationships and actual relationships, but also to acknowledge my much more recent failures to think all the way through the potentials of the proliferating platforms we use for online interaction and the ways they might transform scholarly communication. My assumption in my earlier arguments was that such two-way, many-to-many communication would open up channels for new, better ways of working together. Today, I am far less sure. This is not to say that I want to abandon those platforms or the possibilities they present for communication, but it is to say that I now recognize the extent to which our networked interactions with one another are not going to transform the academy, much less our society, for the better until we become better humans. To the point of today’s conversation: a huge part of becoming better humans is bound up in how we recognize the humanity of others, and the representations we create of that humanity — whether dramatized on television or functionalized as conversational agents — not only draw heavily on our most unspoken assumptions about one another but also set the course for how we’ll treat one another in the future.

Code over a cyborg face

Here’s the thing: what we’re producing in more human-seeming agents is in fact more human-representation-seeming agents, which is to say portrayals of our ideas about what “humans” are. In the case of conversational agents and other kinds of AIs, the emphasis is on intelligence — and intelligence, at least in the ways it can be modeled, is not the same thing as humanity. And perhaps that’s all fine as long as those agents remain tools. But countless examples, from adorable kids talking to Siri and Alexa, to trolls online tormenting bots like Tay, demonstrate the ways we all blur the lines in our interactions with these agents. And I don’t think there’s that much of a leap between trolls tormenting Tay and Gamergate, or revenge porn, or swatting, or any of the other innumerable ways that new technologies have facilitated the violent, racist, misogynist, dehumanizing treatment of people online.

Shadows of people on a beach

So we have to ask some hard questions not just about the AIs and conversational agents being developed, and not just about the algorithms that allow us to interact with them, but also about the ways that we interact with one another on equally technologically mediated platforms. For what definitions of “human” are we building human-seeming agents, and why? If our models for the human mistakenly substitute intelligence for humanity, what becomes of emotion, of kindness, of generosity, of empathy? How do those absences in models for the human pave the way for similar absences in actual human interactions? And how does the consequence-free inhumane treatment of conversational agents encourage the continued disintegration of the possibilities for real sociality online?

