I have always found emulation, especially of computer systems, one of the hardest concepts to explain to people with little knowledge of computing (that is, of the nitty-gritty details of what goes on under that plastic or aluminium chassis). In part, that is because emulation is such a subterranean process. Businesses prefer their customers not to be aware of the nuts and bolts that make their services seamless; anything less than a “seamless” experience is considered a UX failure, a let-down in service.
Reading The Sixth Stage of Grief a few months ago (yes, this post is way overdue), I found an interesting description of OS emulation.
You typically need four things to emulate an old computer:
An operating system. Once you have the emulator and the ROM it’s like you actually own a new, old, computer—but it lacks for an operating system. Want to experience System 6.08 for your Mac? Workbench 2 for the Amiga? Microsoft DOS 6.22? You’ll likely make a fake hard drive. Then you actually install the real, authentic operating system onto the fake hard drive. Sometimes you will need to “insert” fake “floppy disks” into the fake “floppy drive” in order to install the real operating system onto the fake “hard drive” on the fake “computer.” (This is accomplished by clicking buttons.) Then you’ll “reboot.” It’s all very weird.
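Ford's fake “hard drive” is, in most emulators, literally just an ordinary file on the host machine, read and written in fixed-size sectors. Here is a minimal sketch of that idea (the class and method names are my own illustration, not any particular emulator's API):

```python
SECTOR_SIZE = 512

class DiskImage:
    """A "fake hard drive": a flat file addressed in 512-byte sectors."""

    def __init__(self, path, num_sectors):
        self.path = path
        # Create the backing file filled with zeros -- the software
        # equivalent of buying a blank drive.
        with open(path, "wb") as f:
            f.write(b"\x00" * SECTOR_SIZE * num_sectors)

    def write_sector(self, n, data):
        assert len(data) == SECTOR_SIZE
        with open(self.path, "r+b") as f:
            f.seek(n * SECTOR_SIZE)
            f.write(data)

    def read_sector(self, n):
        with open(self.path, "rb") as f:
            f.seek(n * SECTOR_SIZE)
            return f.read(SECTOR_SIZE)

# The "real" operating system's installer would write real boot code
# and real files into this fake drive, sector by sector.
disk = DiskImage("fake_hdd.img", num_sectors=64)
disk.write_sector(0, b"BOOT".ljust(SECTOR_SIZE, b"\x00"))
print(disk.read_sector(0)[:4])  # b'BOOT'
```

From the guest operating system's point of view, sector reads and writes on this file are indistinguishable from a spinning platter.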
The word “fake” appears six times; the word “real”, twice. “Fake” is used for hardware (hard drive, floppies, computer), “real” for software (the operating system). Paul Ford knew what he was writing, and this usage of the two terms reflects the nature of emulation more than it does our perceptions of either word.
Fake hardware, real experience
It is a curious irony that the physical, existing hardware is spoken of as “fake”, and the abstract, pliable information as “real” (I will dispense with the quote marks from this point on, having hopefully dispelled the usual connotations of the two words). The Cartesian model of reality (a la The Matrix) presents all sensation as perception: we don't actually know what our senses tell us, only how our mind interprets them.
Is this thing in my hand a fake thumb drive, or a real one? We could X-ray it and try to work out what hardware sits inside the chassis, but we would risk forgetting what a thumb drive is really about: a thing to put data into, and get data out of. A thumb drive is a user experience, not a strictly defined bundle of hardware. The ultimate test of a thumb drive is not an X-ray, but actually trying to use it as one and seeing whether it stands up to the task.
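That “use it and see” test has a direct analogue in programming: duck typing. Anything that lets you put data in and get the same data out passes for a drive, whatever is behind it. A hedged sketch, with purely illustrative names:

```python
class RealThumbDrive:
    """Stands in for the genuine article: bytes stored locally."""
    def __init__(self):
        self._files = {}
    def put(self, name, data):
        self._files[name] = data
    def get(self, name):
        return self._files[name]

class CloudBackedDrive:
    """Looks like a thumb drive; is "really" something else entirely."""
    def __init__(self):
        self._remote = {}  # stand-in for a network store
    def put(self, name, data):
        self._remote[name] = data
    def get(self, name):
        return self._remote[name]

def behaves_like_a_drive(drive):
    """The X-ray-free test: put data in, get the same data out."""
    drive.put("note.txt", b"hello")
    return drive.get("note.txt") == b"hello"

print(behaves_like_a_drive(RealThumbDrive()))    # True
print(behaves_like_a_drive(CloudBackedDrive()))  # True
```

Both pass; by the only test that matters to the user, both are real.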
Fake backend, real service
In The Lean Startup, Eric Ries gives an example of emulating a business product (which can be a service) manually, before going to the effort of automating the workflow.
Example: a P2P payments platform (the name is confidential). The founding team created a peer-to-peer payment workflow as a responsive web application, where users could request payments from friends and family. The workflow was presented to the user as a functional website. In reality, the founding team processed every request by hand: they manually sent email notifications, tracked payment requests, and captured billing information.
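This pattern is sometimes called a “Wizard of Oz” backend: the public interface looks automated, while requests are quietly queued for a human to handle. A hedged sketch of how it might look in code (all names hypothetical, not the startup's actual system):

```python
import itertools

class PaymentService:
    """Front of house looks automated; back of house is a to-do list."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.manual_queue = []  # the founders' work queue, not a processor

    def request_payment(self, sender, recipient, amount):
        """What the user sees: an instant, "automated" confirmation."""
        request = {
            "id": next(self._ids),
            "from": sender,
            "to": recipient,
            "amount": amount,
            "status": "pending",
        }
        # Behind the curtain: a human will send the email, track the
        # request, and capture billing information by hand.
        self.manual_queue.append(request)
        return {"id": request["id"], "status": "accepted"}

service = PaymentService()
receipt = service.request_payment("alice", "bob", 20.00)
print(receipt["status"])          # accepted
print(len(service.manual_queue))  # 1
```

Nothing about the receipt reveals whether a cron job or a co-founder fulfils it; the service contract is upheld either way.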
When we cash a cheque, apply for a visa, or sign up for a digitised workflow like the one above, we don't ask many questions about how the service works; that's why we're paying for it, after all, to avoid sweating the details. Whether it is done by hand or by automated systems, as long as it behaves as we expect, the service is real, no matter how fake the backend may be. Conversely, no matter how complete the backend, if the service fails at even one step it is considered a failure, and customers may even claim that it is a scam; a fake service.
Fake person, real interaction
Let's get into the creepy end of things. Highly ritualised interactions are already being emulated by non-humans; I trust that this example from telemarketing is sufficient illustration. It is not fully automated yet, because a human is still needed to interpret accents and mumbling, but a fully automated version is not hard to imagine in the near future. This kind of emulation has been with us for a long time in lower-tech forms, such as out-of-office autoreplies and I'm-busy-now-talk-later autoresponders.
Are these response systems real, or fake? That hinges on what you consider personhood to be. If we think of personhood along Kevin Simler's paradigm of personhood-as-interface, then the closer they come to presenting human-like responses, the closer we come to considering them real “people” (at least, in a Turing-test manner). Ritualised interactions that do not involve hard-to-fake information (e.g. body language, gestures) are particularly susceptible to this real-or-fake confusion.
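The lower-tech forms are easy to make concrete. A hypothetical rule-based autoresponder, a few lines long, is enough to hold up one end of a sufficiently ritualised exchange (the rules and replies below are my own invented examples):

```python
# Rules are checked in order; the empty keyword matches anything,
# so the last entry is the default reply.
RULES = [
    ("meeting", "I'm in a meeting right now, can I get back to you later?"),
    ("urgent",  "I'm away, but for urgent matters please call the office."),
    ("",        "I'm out of office and will reply when I return."),
]

def auto_reply(message):
    """Emulate a person within one narrow, ritualised channel."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply

print(auto_reply("Are you free? It's urgent."))
# -> I'm away, but for urgent matters please call the office.
```

Within that narrow channel, the canned reply carries exactly the information a human one would; outside it, the illusion collapses at the first unscripted question.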
This is as far as I intend to go with this post, until I figure out in further detail what I'm trying to say. But I think the three examples above point toward a developmental trend in which technology, “the fake”, increasingly tries to emulate “the real”, which (I don't have a better phrase for it) seems to be our expectations of reality and how it should behave toward us. Or, in other words, magic.