For my Physical Computing midterm project, I worked with Lirong Liu on a glove-based game controller.
Our initial discussions revolved around a variety of gestural control schemes. In particular, we discussed various ways that hand gestures could be tracked in space, using gloves, computer vision, and other approaches, from pointing and moving in 3D space to gestures for instrument control. After much discussion, we settled on a controller for a Mario-style side-scrolling game, where the player makes their character run by physically “running” their index and middle fingers along a tabletop. This gesture is attractive for a number of reasons: it has a very clear correspondence with the action performed on screen, and although the controller gives little physical feedback, running in place on a tabletop helps ground the player’s actions. Also, it seemed like a lot of fun.
From there, I began working on a physical prototype. I took a pair of flex sensors (shown at left, with a plug we added for better interfacing) and attached them to my fingers using strips of elastic. From this prototype, it was clear that the sensor response was best when the tip of the flex sensor was firmly attached to the fingertip while the rest of the sensor could slide forward and back along the finger as it bends.
Reading this sensor data into Processing, I was able to quickly map the movement to a pair of “legs” and get a sense of the motion in the context of a running character. For a standing character, we found that just two changing bend values (one for each sensor) could produce some very sophisticated, lifelike motion. Meanwhile, as I worked on the physical prototype, Lirong set up a basic game engine in JavaFX with a scrolling world and three different obstacle types.
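Our actual Processing sketch isn’t shown here, but the core idea of driving a leg from a single bend value can be sketched roughly like this (the class, angle ranges, and function names are my own illustrative choices, not our real code):

```java
// Illustrative sketch: each flex sensor's normalized bend value
// (0.0 = straight finger, 1.0 = fully bent) directly drives one leg.
public class LegMapper {
    // Interpolate the hip swing angle (degrees) between two extremes.
    public static double hipAngle(double bend) {
        double back = -30.0, forward = 40.0; // assumed swing limits
        return back + (forward - back) * bend;
    }

    // The knee bends most mid-stride; a simple parabola approximates
    // that, peaking at bend = 0.5 and returning to 0 at the extremes.
    public static double kneeAngle(double bend) {
        return 70.0 * 4.0 * bend * (1.0 - bend);
    }

    public static void main(String[] args) {
        for (double b : new double[] {0.0, 0.5, 1.0}) {
            System.out.printf("bend=%.1f hip=%.1f knee=%.1f%n",
                    b, hipAngle(b), kneeAngle(b));
        }
    }
}
```

With just these two smooth mappings per leg, raw sensor noise and hesitation come through directly in the character’s pose, which is part of what made the standing motion feel lifelike.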
At this point, we both worked on the software for a while: Lirong set up a system for converting finger movements into discrete events (steps, jumps, etc.), while I worked on graphics and animation. In the end, we used our sensor input at two different levels of abstraction: the high level (specific running and jumping events) controls the actual game logic, while the low level (the raw bend values of the sensors) controls the character’s animation.
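Lirong’s event system isn’t reproduced here, but the basic idea of turning a continuous bend signal into discrete step events can be sketched with simple hysteresis thresholds (the thresholds and names below are my own assumptions for illustration):

```java
// Hedged sketch: emit a discrete "step" event from one sensor's
// continuous bend signal, using hysteresis so noise near a single
// threshold can't fire a burst of spurious events.
public class StepDetector {
    private final double pressThreshold;   // bend level that counts as a step
    private final double releaseThreshold; // must drop below this to re-arm
    private boolean armed = true;

    public StepDetector(double press, double release) {
        this.pressThreshold = press;
        this.releaseThreshold = release;
    }

    // Feed one sample; returns true exactly once per press/release cycle.
    public boolean update(double bend) {
        if (armed && bend > pressThreshold) {
            armed = false;
            return true; // step event fires on the rising edge
        }
        if (!armed && bend < releaseThreshold) {
            armed = true; // re-arm once the finger lifts again
        }
        return false;
    }
}
```

One detector per finger yields alternating step events for running, while the untouched bend values keep flowing to the animation layer.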
After that, I sewed up the two pairs of gloves shown in the video above, allowing the flex sensors to slide back and forth along the fingers. As we worked on the glove design, we tested with various users to identify potential sizing issues. From there, we built a simple, self-contained system for doing basic user control, and wired everything up.
Code can be found here: https://github.com/mindofmatthew/pcomp_midterm
A few challenges we faced:
- With only one dimension of data per sensor, there’s only so much gesture tracking that can be done. We also wanted to include a kicking mechanic in the game, but that gesture was often mistaken for running and vice versa.
- Walking is much more difficult to animate than standing: after the initial Processing-based tests, I was excited about driving the leg animation directly from the finger motion. I’ve got some animation background, so I knew roughly what the animation should look like, but the logic required to map a walking animation onto the user’s movement was too complex to really get right by the deadline.
A few things I think we did well:
- Our initial player-choice screen let the user join the game, play around with the control scheme a bit, and secretly calibrate the glove’s response to their specific range of motion.
- The note above about the complexity of gesture detection aside, the few gestures we did detect worked quite well. Jumping, in particular, felt quite satisfying.
- Our glove works equally well on the left or right hand, which I’m particularly pleased with.
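The secret calibration mentioned above boils down to a standard trick: while the player messes around on the join screen, track the minimum and maximum raw readings they produce, then normalize all later input against that range. A minimal sketch of the idea (names and details are mine, not our exact code):

```java
// Hypothetical sketch of the join-screen calibration: observe the
// user's raw sensor range while they play around, then rescale
// later readings to a clamped 0..1 range.
public class GloveCalibrator {
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;

    // Called for every raw sample during the player-choice screen.
    public void observe(double raw) {
        if (raw < min) min = raw;
        if (raw > max) max = raw;
    }

    // Normalize a raw reading to the user's observed range.
    public double normalize(double raw) {
        if (max <= min) return 0.0; // not enough samples yet
        double t = (raw - min) / (max - min);
        return Math.max(0.0, Math.min(1.0, t)); // clamp to [0, 1]
    }
}
```

Because every downstream consumer (the step detection and the animation) sees only normalized values, differences in hand size and finger flexibility mostly disappear before they can affect gameplay.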