In a brief rant on the future of interaction design, Bret Victor talks about human capabilities. Specifically, the capabilities of human hands.
We live in a three-dimensional world. Our hands are designed for moving and rotating objects in three dimensions, for picking up objects and placing them over, under, beside, and inside each other. No creature on earth has a dexterity that compares to ours.
Bret Victor, A Brief Rant on the Future of Interaction Design
If you’ve been following this series of posts, you know where this is headed. Recent developments in VR aside, almost every game controller released to date has relied on hands.
On Bret’s recommendation, I picked up John Napier’s book on hands. It is a comprehensive book comparing capabilities of different hands, particularly differences between those of humans and other primates. It’s guaranteed to change the way you look at your hands.
It also changed the way I look at controllers, particularly the way controllers change how we play games. I’m not going to reopen the keyboard-and-mouse versus controller debate here, because frankly that’s just not very interesting to talk about. Here are two more interesting questions: how does your game controller change the way you play a game, and what does that change look like?
[Featured image from PS4Fans.net]
Anyone who has played a local-multiplayer game (multiple players sharing one screen) knows how messy the initial sorting-out is. You wiggle your controller’s thumbstick, press a few buttons, and work out which of the viewports is your own (if split-screen first-person) or which of the on-screen characters running around is yours (if third-person). And even then, at some point in the game massive explosions happen, or you take a toilet break, and when you’re finally back at full attention you have to re-spot your character all over again.
This can get really messy in games like Assault Android Cactus, where it is not uncommon to mistake another player’s character for your own, especially when there is lots of on-screen movement.
Assault Android Cactus [Comicbuzz]
There are even games that exploit this difficulty of matching intention to movement (proprioception, in medical parlance): the player who recognises his character first is much more likely to win such a game.
(Part 10 in a series of posts on small-form-factor computing) In my previous posts on Thin-ITX, I made two complaints:
- There is still some cabling required in Thin-ITX, to connect the front panel ports (USB, audio) and power button to the motherboard.
- A Thin-ITX system can still be further shrunk, since nothing really limits it to a minimum size of 17×17cm.
- A Thin-ITX system still has more sources of heat than it really needs.
The first two issues have been resolved by another (unofficial) form factor: Intel’s Next Unit of Computing (NUC). NUCs are barebones small-form-factor systems (just add CPU, RAM, SSD, and water) with motherboards measuring 10×10cm. They carry the same kinds of connectivity (USB, audio, HDMI/DisplayPort) enjoyed by most PCs, though perhaps not in the same numbers.
These things are deliciously small. In fact, at 12×11×3.5cm, they are barely larger than the motherboards they house. This is an approach to cabling elimination that we’ve seen before in ITX: if you can’t bring the board closer to the sides, bring the sides closer to the board.
I recall once watching my cousin play Deus Ex: Human Revolution. At some point in the intro cutscene, his character entered an elevator where a female non-player character (NPC) was waiting. While they conversed, my attention was pulled away by a strange observation: I could see the NPC in the mirrored walls of the elevator, but not my own reflection.
I don’t particularly care about the rationalisations for the effect: low graphics quality settings, the difficulties of the uncanny valley, whatever else the technical constraints may be. How did I come to recognise, however vaguely, that perspective as my own, and what led me to expect a reflection in the same mirrored surface?
I think some very interesting questions—and hopefully answers as well—lie at this intersection of perception, cognition, and gaming, and I’m going to try at least fortnightly posts while keeping up my output on small-form-factor computing (of which not very much is left). If you know me from somewhere, give me a poke if I haven’t been keeping up like I promised.
(Part 9 in a series of posts on small-form-factor computing)
I wish I had the equipment to make the kind of heatmaps that Puget Systems makes in their comparison of horizontal versus vertical cooling. But I don’t, so we’ll just have to make do with overlays again.
Heat sources in a passively cooled system
(Part 8 in a series of posts on small-form-factor computing)
When Intel released the first Thin-ITX motherboards at Computex in 2011, many folks were left scratching their heads: what use is Thin-ITX when we already have ITX? It quickly seemed that, aside from use in all-in-one (AIO) systems, Thin-ITX was a stillborn idea, consigned to a quiet fizzle-out once Intel had a more robust strategy figured out.

Today it is still hard to tell whether Thin-ITX is really going to take off. But what I noticed, assembling my own Thin-ITX system, is that it’s not about the “thin” at all.
Despite being startup-less, I attended a session on Social Media for Startups on a whim*, and walked away with a very tangible gut feeling that some real civilising is afoot in social media and advertising.