Revisiting the ghost’s shell: movement maps and the gameplay tell

In a brief rant on the future of interaction design, Bret Victor talks about human capabilities. Specifically, the capabilities of human hands.

We live in a three-dimensional world. Our hands are designed for moving and rotating objects in three dimensions, for picking up objects and placing them over, under, beside, and inside each other. No creature on earth has a dexterity that compares to ours.
— Bret Victor, A Brief Rant on the Future of Interaction Design

If you’ve been following this series of posts, you know where this is headed. Aside from recent developments in VR, almost every game controller released to date has relied on hands.

On Bret’s recommendation, I picked up John Napier’s book on hands. It is a comprehensive comparison of the capabilities of different hands, particularly the differences between those of humans and those of other primates. It’s guaranteed to change the way you look at your hands.

It also changed the way I look at controllers, particularly the way controllers change the way we play games. I’m not going to open the keyboard-and-mouse versus controller can of worms here, because frankly that debate just isn’t very interesting. Here’s a more interesting question: how does your game controller change the way you play a game? And another: what does this change look like?

Revisiting the ghost’s shell: proprioception in gaming

[Featured image from PS4Fans.net]

Anyone who has played a local-multiplayer game (multiple players sharing one screen) knows how messy the initial orientation is. You wiggle your controller’s thumbstick, press a few buttons, and work out which of the viewports shown is yours (if split-screen first-person) or which of the on-screen characters running around is yours (if third-person). And even then, at some point in the game massive explosions happen, or you need a toilet break, and when you’re finally back with full attention you have to re-spot your character all over again.

This can get really messy with games like Assault Android Cactus. It is not uncommon to mistake another player’s character for your own, especially when there is lots of on-screen movement.

Assault Android Cactus [Comicbuzz]

There are even games that exploit this difficulty of matching intention to movement (proprioception, in medical parlance): the player who recognises their own character first is much more likely to win.

Revisiting the ghost’s shell: Prologue

I recall once watching my cousin play Deus Ex: Human Revolution. At some point in the intro cutscene, he entered an elevator where a female non-playable character (NPC) was waiting. While they conversed, my attention was pulled away by a strange observation: I could see the NPC in the mirrored walls of the elevator, but not my own reflection.

I don’t particularly care for rationalisations of the effect: low graphics quality settings, the perils of the uncanny valley, or whatever other technical explanation applies. What interests me is this: how did I come to recognise, however vaguely, that perspective as my own, and what led me to expect a reflection in the same mirrored surface?

I think some very interesting questions, and hopefully answers as well, lie at this intersection of perception, cognition, and gaming, and I’m going to aim for at least fortnightly posts while keeping up my output on small-form-factor computing (of which not very much is left). If you know me from somewhere, give me a poke if I haven’t been keeping up as promised.