Creating a UI feedback and interaction paradigm
In part 1, I discussed how the mobile platform factors into the world design for touch applications. To give players the best experience with Infestation, I made a movable camera that can not only pan around the map, but also zoom in and out. This presents its own set of challenges from a design perspective, since it's easy for players to focus only on what is on the screen and miss out on what isn't. My solution is to use UI design to remind players that there is a wide world around them!
The touch interface also presents some interesting challenges. Currently, players use a two-finger drag to pan the map, and the familiar pinch/stretch gesture to zoom in and out. Since so much of the player interaction centers around the gesture mechanics of drawing shapes to activate special moves, I want players to draw gestures with the tip of an index finger, like drawing a shape on a steamed window. Early play testing, however, revealed an interesting fact: the game is played in landscape mode, and many players consequently hold the phone in both hands, using their thumbs to interact with the screen, like a console controller! Not only does this highlight the need for game developers to have others play test their creations, but it also reveals a need for the game to anticipate that style of play.
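For the pinch/stretch gesture, the usual trick is to compare the distance between the two fingers now against the distance when the gesture began. Here's a minimal sketch in Python (illustrative only — the function name and touch representation are my own, not from the actual Infestation code):

```python
import math

def pinch_zoom_factor(start_touches, current_touches):
    """Return the zoom multiplier implied by a two-finger pinch/stretch.

    Each argument is a pair of (x, y) touch points. A factor > 1 means
    the fingers spread apart (zoom in); < 1 means they pinched together
    (zoom out).
    """
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = distance(*start_touches)
    current = distance(*current_touches)
    if start == 0:
        return 1.0  # degenerate touch data; leave the zoom unchanged
    return current / start
```

So fingers stretching from 100px apart to 200px apart yields a factor of 2.0, and the camera zooms in accordingly.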
I've iterated over a few different approaches to satisfy these requirements. My first attempt was to provide pulsing, semi-transparent arrows centered on the edges of the screen in the directions the player can scroll. Once the player scrolls to an edge of the map, that arrow disappears.
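Deciding which arrows to show boils down to comparing the camera's rectangle against the world's bounds. A quick sketch of that check (my own illustrative names and conventions, with rectangles as `(left, top, right, bottom)` and y growing downward):

```python
def scrollable_directions(camera, world):
    """Return the set of directions the camera can still pan.

    Both camera and world are (left, top, right, bottom) rectangles in
    world coordinates. An arrow is shown only for directions in the
    returned set.
    """
    directions = set()
    if camera[0] > world[0]:
        directions.add("left")   # room to pan left
    if camera[1] > world[1]:
        directions.add("up")     # room to pan up
    if camera[2] < world[2]:
        directions.add("right")  # room to pan right
    if camera[3] < world[3]:
        directions.add("down")   # room to pan down
    return directions
```

When the camera sits flush against a map edge, that direction drops out of the set and its arrow fades away.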
While this sure looks perty, it has a couple of problems. First, although the transparency doesn't obscure the action too much, there's already plenty going on. The player shouldn't have to shoulder the burden of visually parsing yet another moving element. Since human vision is geared toward drawing attention to movement in the visual field, the pulsing arrows tend to attract attention – a mixed blessing. Second, it's a bit redundant. Experienced gamers only need to be told they can pan and zoom once or twice, something easily accomplished with tips on a loading or opening screen. Casual gamers, the young, the old, or those inexperienced with touch interfaces might take a bit longer to grasp the concept, but ultimately they will. Once they've gotten it, the arrows become like the appendix – once useful, now irrelevant, and sometimes painful.
Here's a concept I'm working on to address this: instead of pulsing the arrows continuously at the edges of the screen, I've gathered them into a software d-pad near the lower-right corner of the screen. Why lower right? Even though I'm left-handed, I have to face the fact that most people aren't; thus the right side. When nothing's going on, the arrows will be almost completely transparent and non-interactive. Directions that are scrollable will be shaded slightly darker (or perhaps just given a border a couple of pixels wide), and when the user presses and holds one of those arrows, the camera moves in that direction. This display could serve more uses than just scrolling, though. If something interesting is happening off-screen, the corresponding arrow could begin flashing an appropriate color – red for danger, yellow for warning, green for FYI. Tapping or pressing a flashing arrow could jump the camera to the point of interest.
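Mapping an off-screen point of interest to the arrow(s) that should flash is just a matter of comparing its position against the camera rectangle. A small sketch, using the same illustrative `(left, top, right, bottom)` convention as before (names are mine, not from the game's code):

```python
def poi_arrows(camera, poi):
    """Return which d-pad arrow(s) should flash for a point of interest.

    camera is (left, top, right, bottom); poi is (x, y), both in world
    coordinates. A POI above and to the right yields {"up", "right"},
    so a diagonal event lights two arrows. An on-screen POI yields the
    empty set, meaning nothing flashes.
    """
    arrows = set()
    x, y = poi
    if x < camera[0]:
        arrows.add("left")
    elif x > camera[2]:
        arrows.add("right")
    if y < camera[1]:
        arrows.add("up")
    elif y > camera[3]:
        arrows.add("down")
    return arrows
```

The severity color (red/yellow/green) would ride along with whatever event generated the POI; this function only decides direction.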
Something else came up in the same conversation that spawned this idea: there's an added bonus to implementing it this way. The code that hooks up to the software d-pad could just as easily respond to an actual physical one on phones with keyboards or d-pads.
Because the two-finger drag is already in place, and because I believe it's still a good way to navigate, I'll leave that system as it is. This gives players more than one means of doing the same thing, whether they're playing with their thumbs or with an index finger. The one drawback I can see is that it detracts somewhat from the gesture mechanics.
I know I promised in my last post that I’d be sharing some reusable code to go along with this post, but since there’s so much to cover, I suppose I’ll have to do a part 3 to this series!
There is a whole world of game developers out there who have dealt with this issue (better than I, no doubt!) in some way or another in their games, so if any of you have insight into this topic, feel free to leave a comment or throw me a tweet @liquid_electron with your thoughts. I'd love to hear how other people approach and think about these things.