Before we can understand the choreography of the Internet of Things, we would do well to ponder the history of simple, single technologies. Consider the thermostat. Invented by the 17th-century Dutch artist and engineer Cornelis Drebbel, the device consisted of a cylinder of alcohol topped off with a splash of liquid mercury. Through the expansion and contraction of the fluid, in combination with a series of switches connected to a furnace, this Rube Goldberg antecedent effectively regulated the temperature of a chicken incubator. The device required careful mixing of mercuric alcohol (which was, of course, incredibly toxic) and constant calibration of weights and levers that, on the basis of temperature feedback from the poisonous slurry, opened and closed a flue that regulated a fire that warmed some eggs. The system provided a roughly constant temperature, provided it didn’t catch fire or accidentally kill one’s family.
In 1830, Andrew Ure patented a bi-metallic thermostat that used the heat-induced expansion of a metal coil to trigger a circuit that turned on a furnace. In contrast to the mercury thermostat, this device included an intuitive interface through which a user could indicate the desired level of heat. Adjusting a lever within a panel (marked with various temperature levels) moved a contact closer to or further from the metal coil, changing the amount of heat required for the furnace to activate. This interface provided a direct, intuitive, and non-fatal means of interacting with mechanical complexity.
Today’s avant-garde in thermostat technology is the Nest. The device leverages an always-on Internet connection to tap a sensory surveillance mesh that includes your iPhone, your home security system, your washer and dryer, ceiling fans, light bulbs, electrical outlets, lawn sprinklers, and your car. The Nest positions itself at the center of this complex lattice of data and, on the basis of signals from a slew of Internet-connected devices, adjusts a room’s temperature by algorithmically intuiting a user’s intentions. The Nest is notable not only for its use of state-of-the-art artificial intelligence but for the near absence of a user interface. It actively learns a household’s habits and preferred temperature levels. The user interacts with the technology ambiently; it works as though by magic.
The evolution of the thermostat illustrates how user interfaces diminish as technologies advance over time. “Smart” devices like the Nest rely on ambient cues (such as the presence or absence of the user), in combination with data garnered from an ever-widening matrix of devices, to algorithmically guess intended user behaviors. The success of such devices is contingent on their observation of patterns in the movement of the human body over time. The technologies of the Internet of Things work when they successfully create meaning from the choreographies of a user’s life.
Computers, perhaps unsurprisingly to anyone who has seen The Terminator, are getting very good at understanding the mechanics of the human body. The most recent version of the Microsoft Kinect deploys multiple, simultaneous skeletal models to understand the dynamic impulses and interface intentions of its users. The Leap Motion, a gestural controller for computers, builds models of the bones of the hand into its software and predictively infers the location of digits it cannot directly observe. Google’s recently announced “Project Soli” uses a radar chip shrunk to the size of a pin to perceive micro-movements of the fingertips. The latest in affective computing allows Internet-connected systems to modulate their behavior on the basis of micro-facial expressions a subject makes unconsciously. The sensors and software that power this kinetic machine learning are so small as to be nearly unobservable. It follows that the only interface to remain after such miniaturization is our own bodies.
The aim is to make devices so “smart” at reading human intentionality that they seem to just work. But as any choreographer can tell you, divining the meaning of any particular movement is no easy feat. For example, an early version of the Nest smoke detector included a function (ostensibly for the customer’s ease of use) that allowed a beeping alarm to be silenced by waving one’s hand in front of the device. The alarm was programmed to read a gestural motion — a wave of the hand — as having singular significance: turn off. But this is a reductive choreographic understanding of a gesture that, for humans, could signify a great deal, including, but not limited to, “my house is on fire and we’re all going to die.” The Nest couldn’t differentiate between the intentional gesture of “please turn off” and the inadvertent gesture of “we’re all going to die.” The functionality was eventually disabled via a software update, with apologies from the company’s CEO.
Over the next decade, devices once administered by mechanical interfaces — light switches, keys and locks, etc. — will be replaced by digital counterparts. In the absence of mechanical contrivances — and of interfaces requiring mechanical interactions to create desirable effects — these new utilities will rely on ambient and gestural cues. The absence of a tangible user interface poses profound questions of design and user experience, and will, I believe, place new value on somatic and choreographic intelligences. Bodies may never lie, as Agnes de Mille famously said, but that doesn’t mean they’re easy to read.