Revisiting the ghost’s shell: adjusting to new bodies and minds

[Caution] This post is going to be a potentially confusing sequence of vignettes. I’ll connect them as much as I can, but I may not be on to anything here at all; the human mind’s capacity to rationalise anything is formidable, and I am in its grip here. Sometimes I think it’s better that I not try to explain anything, and just describe the confusion as it is.


I have never been much of an exercising person, so it was with some surprise (and relief) that I got my Silver fitness award and realised that I wouldn’t have to enlist early and join the two-month preparatory fitness programme. I felt anything but fit, let alone ready for military training.

I got a Gold award once, at some point during my two years of compulsory service, a feat I never repeated. I don’t remember how I did it, but it didn’t feel harder than all the other fitness tests I had gone through.

Now, ten years on, I am still not an exercising person, and fitness tests feel as difficult as before. It’s no surprise to me now that I keep failing a particular section of the test: the 2.4 km run. What surprises me, rather, is how I sometimes manage a better timing than I thought I would, going by how ready I feel. I hear some exercising folks are able to predict their timings down to the minute. I would really like to know how they do it, though I suspect they don’t work it out so much as feel it.

How does one know how far one will jump at a given instant? How does one know how quickly one can cover a distance?


Inhabiting a desktop

I once tried a MOOC, edX’s Future Cities. It was everything I expected, and I “dropped out” after the third lesson. I had what I wanted: a new term for a new discipline, “Information Architecture”, and examples of how not to do it (the MOOC itself was one such example).

Information Architecture is a discipline that looks at how information is structured. If physical architecture is the partitioning and ordering of space and material, information architecture is the sorting and hierarchical organisation of information and the ways in which it interacts. Databases, contacts, calendars: these are forms of structured information with which we are familiar. They come in a certain expected format: an event invitation would be very strange if it did not mention date, time, and venue, at the very least. But it can quickly get complicated as well. A huge event, such as a conference with multiple breakout seminars and sessions, can itself contain multiple overlapping events in multiple venues with multiple people. How is this information to be organised and presented? As we walk through the conference hallways and foyers, how do we see this information arrayed around us?
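To make the organising question a little more concrete, here is a minimal sketch of my own (not from the MOOC or from the post; the class names, fields, and data are all invented for illustration) of one way a conference’s overlapping events might be structured:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical structures, for illustration only: one way to nest a
# conference's overlapping sessions, venues, and people.

@dataclass
class Session:
    title: str
    venue: str                  # e.g. "Hall 2, Breakout Room B"
    start: datetime
    end: datetime
    speakers: list[str] = field(default_factory=list)

@dataclass
class Conference:
    name: str
    sessions: list[Session] = field(default_factory=list)

    def happening_at(self, moment: datetime) -> list[Session]:
        """All sessions overlapping a given moment -- roughly the
        information we would want arrayed around us in the hallways."""
        return [s for s in self.sessions if s.start <= moment < s.end]

# Example usage (invented data):
conf = Conference("Example Summit", [
    Session("Opening keynote", "Main Hall",
            datetime(2016, 3, 1, 9, 0), datetime(2016, 3, 1, 10, 0),
            ["A. Speaker"]),
    Session("Breakout: city data", "Room B",
            datetime(2016, 3, 1, 9, 30), datetime(2016, 3, 1, 10, 30)),
])
print([s.title for s in conf.happening_at(datetime(2016, 3, 1, 9, 45))])
# -> ['Opening keynote', 'Breakout: city data']
```

Even this toy structure forces the decisions the paragraph hints at: what counts as a single event, and whether venues and people hang off sessions or stand as entities of their own.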

I’ve been paying more attention to coworkers’ desks lately. I don’t mean the physical desk, the structure of wood and steel, but the tangible desk, the way things are laid out: which photographs and mementos take pride of place, the way paper is stacked and spills over into adjacent spaces, the arrangement of the tiny paper-flanked cubbyhole where the laptop sits… We all have the same desk and cabinet and shelves, but over time we come to identify each coworker’s desk by this unique arrangement of personal effects. We inhabit our desks.

Revisiting the ghost’s shell: the possessor and the puppeteer

Previously, we touched on the affordances of a game controller (and a game’s control scheme) as an extension of the body. And we examined the process of getting into a game proprioceptively, in some depth: this is a mediation of two interfaces, the body–controller interface, and the controller–game interface. Successful proprioception involves compounding the two interfaces into one, such that bodily movements get mapped directly to on-screen outcomes. For example, instead of subconsciously thinking “right index finger presses trigger, trigger causes main weapon to fire”, the mapping becomes “right finger → main weapon fire”, so that one is soon able to fire the main weapon without grappling consciously with the controller interface.
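As a toy sketch of that compounding (my own illustration, not anything from the original post; the specific buttons and actions are invented), one can treat the two interfaces as lookup tables that practice collapses into one:

```python
# Toy model: the two interfaces as lookup tables (all entries hypothetical).
body_to_controller = {"right index finger": "right trigger"}   # body-controller interface
controller_to_game = {"right trigger": "fire main weapon"}     # controller-game interface

def novice_action(movement: str) -> str:
    """The unpractised route: body movement -> controller input -> game outcome."""
    return controller_to_game[body_to_controller[movement]]

# With practice, the two tables compound into a single direct mapping,
# so the intermediate controller step stops demanding conscious attention.
body_to_game = {
    movement: controller_to_game[button]
    for movement, button in body_to_controller.items()
}

print(novice_action("right index finger"))   # fire main weapon
print(body_to_game["right index finger"])    # fire main weapon
```

The point of the sketch is only the shape of the change: the intermediate lookup does not disappear, it simply falls out of conscious attention.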

This calls to mind a sporting analogy: to make the racket/stick/sporting-instrument “an extension of your body”. This certainly is not an overnight process. One starts out getting used to the various sensations, of catching a ball at the wrong part of the swing, of angling the racket in various positions, of reaching with the racket while mid-stride, … then one starts compiling an experiential library of various scenarios—overhead balls, flat shots, smashes, and so on. (These stages often do not separate cleanly, but we can think of them as separate processes.) What follows then is a conscious analysis of one’s technique, thinking about rational responses to each situation, and then training the body to respond to these situations subconsciously. We can say this is a mapping of proprioceptive responses to sensed scenarios.

It readily follows, then, that each game has its own set of strategies, which calls for its own set of mappings. A response–scenario mapping that works for squash would not work for tennis or badminton. And it is a ready leap from this analogy to video gaming: different mappings for different games.

But there is a key difference here. The sportsman has “direct access” to his sporting instrument, while the video gamer’s experience is mediated through their perception of their avatar. How exactly does that work? How does a video gamer come to make their avatar “an extension of their body”?

Revisiting the ghost’s shell: movement maps and the gameplay tell

In a brief rant on the future of interaction design, Bret Victor talks about human capabilities. Specifically, the capabilities of human hands.

We live in a three-dimensional world. Our hands are designed for moving and rotating objects in three dimensions, for picking up objects and placing them over, under, beside, and inside each other. No creature on earth has a dexterity that compares to ours.
— Bret Victor, A Brief Rant on the Future of Interaction Design

If you’ve been following this series of posts, you know where this is headed. Aside from recent developments in VR, almost all game controllers released to date have relied on hands.

On Bret’s recommendation, I picked up John Napier’s book on hands. It is a comprehensive book comparing the capabilities of different hands, particularly the differences between those of humans and those of other primates. It’s guaranteed to change the way you look at your hands.

It also changed the way I look at controllers; particularly, the way controllers change the way we play games. I’m not going to open up the keyboard-and-mouse versus controller can of worms here, because frankly that’s just not very interesting to talk about. Here’s a more interesting question: how does your game controller change the way you play a game? And another: what does this change look like?

Revisiting the ghost’s shell: proprioception in gaming

[Featured image from PS4Fans.net]

Anyone who has played a local-multiplayer game (multiple players sharing one screen) knows how messy the initial conditioning is. You wiggle your controller’s thumbstick, press a few buttons, determine which of the viewports shown is your own (if split-screen first-person) or which of the on-screen characters running around is yours (if third-person). And even then, at some point in the game massive explosions happen, or you need a toilet break, and when you’re finally back with full attention you have to re-spot your character all over again.

This can get really messy with games like Assault Android Cactus. It is not uncommon to mistake another player’s character for your own, especially when there is a lot of on-screen movement.

Assault Android Cactus [Comicbuzz]

There are even games that exploit this difficulty of matching intention to movement (proprioception, in medical parlance). The player who recognises his character first is much more likely to win in such games.

Revisiting the ghost’s shell: Prologue

I recall once watching my cousin play Deus Ex: Human Revolution. At some point in the intro cutscene, he entered an elevator where a female non-playable character (NPC) was waiting. While they conversed, my attention was pulled away by a strange observation: I could see the NPC in the mirrored walls of the elevator, but not my own reflection.

I don’t particularly care for the rationalisations of the effect: low graphics-quality settings, the difficulties of the uncanny valley, whatever other technical explanations there may be. How did I come to recognise, however vaguely, that perspective as my own, and what led me to expect a reflection in the same mirrored surface?

I think some very interesting questions, and hopefully answers as well, lie at this intersection of perception, cognition, and gaming, and I’m going to try to post at least fortnightly while keeping up my output on small-form-factor computing (of which not very much is left). If you know me from somewhere, give me a poke if I haven’t been keeping up as I promised.