
Thesis exhibition, Art Museum of Toronto

Written by Jayne Wilkinson

The human face is often described metaphorically: it can be a mask, a mystery or, in opposition, a revelatory truth-teller. The tendencies to see the face as a window—the eyes a direct route to the soul, the skin a veil that conceals an interior—are profound and persistent. But hovering between metaphors of transparency and opacity, a face both reveals and conceals its representational subject. We instinctually read feelings and emotions into the minute muscular changes that are ever-present in a face, but the reasons we do so are far from obvious. As Deleuze and Guattari tell us, a face is a story, but that story is a horror story.1

What does it mean then if one cannot read the narratives, emotions, and intentions that the face communicates to the world? And in our contemporary computational society, what does it mean to have a machine read the nuances of such faciality instead of a human? Can machines actually determine the basic emotions of human experience by scanning the surface of an image or the textures of a 3D print? The newest body of work by installation artist Lea Grantham-Charbonneau considers these questions by overlapping personal narratives, medical diagnoses, emotional legibility and the perceptual role of technology in determining the boundaries of what it is to be human.

Entering the dark, dramatically lit gallery, we are confronted by a group of faces. They don’t seem human, but neither are they robotic or non-human. In high-contrast black and white, with smooth but pixelated skins and not-quite-life-sized features, the repetitive, generic, even artificial facial images don’t immediately read as self-portraits, but in fact that’s what they are. The artist worked with engineers at a Toronto-based gaming company to produce detailed computer renderings of her own face as she attempted expressions that represent the six “universal” emotions: anger, disgust, fear, happiness, sadness, and surprise. She centred herself in a ring of thirty-five cameras, which automatically recorded her likeness and rendered the images of her face into a three-dimensional, computer-generated file. Mimicking a process of automation currently used by psychiatrists and psychologists who study emotional recognition in patients on the autism spectrum, the artist pushed the process further by printing the files as fully round, three-dimensional objects. The results are a literal reduction, or flattening, of a performed emotion into a typecast mask. A self-image that is not quite a likeness, these uncanny portraits serve to universalize and generalize the wide range of human emotions.

This type of flattening is at work elsewhere in the exhibition through the use of automated, un-nuanced, computer-generated language. Computers can read information and scan data to recognize patterns and form conclusions, using increasingly complex software to interpret human inputs and translate information into diagnosis. Take, for example, the automated tests that are self-administered online to diagnose a range of conditions, anything from minor physical ailments to psychological trauma. In the video piece, a series of questions scrolls insistently across the screen. These questions are drawn from tests that determine if one is suffering from loneliness, interspersed with questions asked of patients who may be at risk of developing Asperger’s syndrome. This unrelenting form of address demands a response from the viewer by positioning him or her as a de facto medical subject. One may already know the fate of one’s health, or at least have an expected answer, by the time such a machine analytic is delivered, but the repetitive nature of the screen’s questions reinforces the isolation, anxiety and even paranoia that both medical and computational technologies produce in their subjects.

In her work on the politics of the face, political theorist Jenny Edkins argues that, “The face, rather than being an expression of humanity, is something that stultifies human being, that makes it submit to particular regimes of power, that produces it as the object or subject of a gaze.”2 Following Edkins, if the face is what indexes subjectivity in a submission to regimes of power, then the face itself produces the human as the subject of a gaze. Understood as a production, or a construct, what then can any face really tell us about a subject? Do we even need to be able to recognize standard emotions in order to develop an intersubjective relation with a person? How contingent upon our own subjectivities are the nuances of facial expression—the reading of an arched eyebrow, an open mouth, a dilated pupil? How wide is the gulf between recognizing happiness and sadness?

In Grantham-Charbonneau’s work, the gaze that produces power by de-humanizing its subjects is the gaze of medical science which, through its continual acts of interpellation, flattens the diversity of experiences, beings, histories, thoughts and ideas that comprise the human subject into a single binary system that recognizes only normal or abnormal. This is the violence of diagnosis, writ large in low light. With reference to funeral masks, robotics, simulated human emotions, and artificial intelligence, Grantham-Charbonneau’s body of work, “When put into a specific category, the subject might (will) lose its colour and specificity,” intertwines questions of recognition and subject identity with the production of images through machine vision, an impulse that runs through the work in both its process and its resolution. We may be fearful of the de-humanization of the world brought about by the acceleration of technologies and the increasing power of non-human or artificial intelligence, but the bodies that we hold as precious signifiers of humanity are themselves not unified, not contained, not controllable. The face is merely the most obvious contradiction: “The inhuman in human beings: that is what the face is from the start. It is by nature a close-up, with its inanimate white surfaces, its shining black holes, its emptiness and boredom.”3

Against the impulse to anthropomorphize all other forms of intelligence, perhaps what Grantham-Charbonneau’s work reminds us of is the treachery in doing so: the possibility that intelligence is contingent, multiply determined, and as alien as the script of faciality humans are expected to recognize. Indeed, as theorist Benjamin Bratton states, “The real philosophical lessons of [artificial intelligence] will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be.”4 Instead of reading muscular twitches and facial cues, the machine may yet provide a more accurate scan of the emotional traits and revelatory clues that are integral to the human subject, through the smooth simulacra of computational intelligence.

1 Gilles Deleuze and Félix Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (Minneapolis: University of Minnesota Press, 1987), 168.

2 Jenny Edkins, Face Politics (New York: Routledge, 2015), 5. Emphasis mine.

3 Deleuze and Guattari, A Thousand Plateaus, 171.

4 Benjamin Bratton, “Outing Artificial Intelligence: Reckoning with Turing Tests,” in Alleys of Your Mind: Augmented Intelligence and Its Traumas, ed. Matteo Pasquinelli (Lüneburg, Germany: meson press, 2015), 72.