Engineered into the back of the forthcoming Apple Watch will be something called an “Opto-Electronic Sensor Array.” This mechanism will shoot light into the wearer’s wrist and, by measuring the bounce-back, discern their pulse and, combined with readings from the enclosed accelerometer and gyroscope, the motions of their hands. The watch will thus “read” gestures and use them as an interface for understanding user intention. (Imagine accepting a phone call by making a thumbs-up gesture, or declining one with a thumbs-down.) The Apple Watch is, in short, a sensor smashup in the shape of a timepiece: a wearable computer deploying a combination of algorithmic intelligences to understand what a user wants through the movement of their body in space and time. In other words, the Apple Watch and its wearer will communicate through the medium of choreography.
For better and for worse, the Apple Watch is not unique in its gestural prowess. Optoelectronic sensors, like all sensing and computing components, have grown smaller and cheaper over time. In the coming year, gesture sensors will be built into Volkswagens, Samsung TVs, thermostats and sex toys, each passively monitoring its users for commands spoken or kinesthetically issued. Consumer electronics companies argue that gestural interfaces are more intuitive and efficient than older user-experience modalities, but one can easily imagine how a home filled with a lattice of Internet-connected sensing devices could go awry, fracking up user intentionality by dint of sheer device density. Imagine accepting a phone call by giving your watch a thumbs-up and inadvertently telling your TV to “like” an Olive Garden commercial on Facebook. Or deploying a marital aid that inadvertently opens and closes, and opens and closes, and opens and closes, your garage door.
There are no shared choreographic standards for these devices. They don’t share data, and they tend to define any given gesture differently. The significance of a thumbs-up, for example, will vary depending on which applications your Samsung TV is running in the background, and it may mean something else entirely in your car, where the same gesture can tell the car to accept a phone call, or confirm that it has interpreted a prior gesture correctly. Gestural confusion between devices will no doubt be compounded by geographic peculiarities: in the U.S., a thumbs-up usually means, roughly, “Yes,” or signals approval; in Iran, the same gesture means, roughly, “Sit on my thumb”; in Greece, it means “Up yours.” In the words of Stephan Moore, there is no “MIDI for movement,” and the communicative difficulties between bodies and devices are only beginning to emerge. As a choreographer, I see the billions of wearable computing devices to be manufactured over the next five years, hundreds of millions of which will understand gesture, as the largest, most complex, most expensive and most radically expansive choreographic problem set of all time.
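The fragmentation described above can be made concrete with a toy sketch. Every device name and gesture-to-action mapping below is hypothetical, chosen only to mirror the examples in this essay; no real device exposes such an API:

```python
# Toy illustration of gestural fragmentation: the same physical movement
# resolves to a different action depending on which device interprets it,
# because no shared standard ("MIDI for movement") exists.

GESTURE_MAPS = {
    "watch": {"thumbs_up": "accept_call",          "thumbs_down": "decline_call"},
    "tv":    {"thumbs_up": "facebook_like",        "thumbs_down": "skip_ad"},
    "car":   {"thumbs_up": "confirm_last_command", "thumbs_down": "cancel"},
}

def interpret(device: str, gesture: str) -> str:
    """Resolve a gesture to an action for one device; unknown inputs fall through."""
    return GESTURE_MAPS.get(device, {}).get(gesture, "unrecognized")

# One movement, three different meanings:
for device in ("watch", "tv", "car"):
    print(device, "->", interpret(device, "thumbs_up"))
```

A shared standard would amount to agreeing on a single `GESTURE_MAPS`-like vocabulary across manufacturers; absent that, every vendor ships its own.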
While the real-world utility of gesture-enabled devices is largely unknown, these emerging interfaces could scarcely be worse than those we already have. Yet how we interact with these devices will have a prodigious effect on, among other things, our health, and it doesn’t seem to me that our interactions with technology are getting healthier. A few decades into the personal computing revolution, we look at screens an average of seven and a half hours a day, and we own five times as many mobile computing devices as we did just a decade ago. We typically sit to interact with our devices, and as the Annals of Internal Medicine has noted, the engineering of society toward sedentary lifestyles is linked to everything from diabetes to cancer to heart disease and then some. These interfaces matter because the very technologies that will increasingly augment our bodies have been shown to shave years off our lives.
I have a hunch that bringing dancers, choreographers and performers into consumer product engineering cycles can help. Who better to confront problems of kinesthetic communication than those whose expertise lies precisely in such matters? Despite our culture’s pervasive discomfort with the body (and its demotion of artists working in the physically expressive realm), the future of computing interfaces seems increasingly focused on the body’s expressivity. I confess to hoping that the meaningful incorporation of somatic intelligence into engineering could ignite a broader cultural shift, in which artistic experience becomes increasingly valued, alongside the sciences and humanities, as a key to prosperity and accomplishment.
These questions are no doubt driven, on some level, by my own nagging insecurity about the future of the performing arts in this country. Perhaps performative interface technologies could be, for those of us in the field, a gesture (if you will) toward re-valuing dance and dancerly expertise, and thus toward building a cultural ecology less hobbled by undercapitalization, underemployment and the suspicion that technology is a harbinger of doom. Perhaps it would be nice, for once, to be too preoccupied with making the future to fear it.
Endnote: The ideas in this post are the result of discussions with generous colleagues. I want to shout out to Catie Cuan, Ian Garrett, Jennifer Edwards, Jordan Isadore, Katherine Helen Fisher, Shimmy Boyle, Kristen Bell, Murray McMillan, Ranjit Bhatnagar, Sheiva Rezvani, Keira Heu-Jwyn Chang, Stephan Moore, Victoria Nece, Jamie Jewett, Jessica Modrall, Ken Tabachnick and Chad Herzog for their generosity of ideation and interest in the artistic possibilities of emerging technologies.