Low-cost Head Mounted Displays (LC-HMDs) have become a popular way to explore and experiment with Virtual Reality (VR) interfaces. This is mainly due to inexpensive cardboard construction, wide availability, the ability to make one yourself, scalability, and reduced dependency on external hardware platforms. LC-HMDs most often take the form of Google Cardboard or open-source third-party cardboard viewers.
A common challenge with LC-HMDs is the limited choice of input interactions, which are restricted to (a) gaze-based and (b) fuse-button (press) based input methods. Both are primarily used to select a virtual object/element and thereby activate another state of the designed interface. Despite being used mainly for the simple task of “selection”, both methods present significant challenges in complex interfaces. In this blog post, I will walk through some common challenges of these two methods – especially in the “selection” of a virtual object/element.
Fuse-button-based input interactions
- Difficulty in selecting among more than 4 options. Ever wondered how to select the 4th training module out of 8 demonstrated modules in a virtual medical training application? Fuse-button input can frustrate users when it becomes the primary method of selecting a 3rd or 4th option, especially when the number of button presses represents the chosen option (e.g. press the button three times to choose the 3rd option). A better approach is to skip the fuse-button method altogether or to adopt magical user interactions.
- Difficulty in (physically) locating the fuse button. Novice users find it difficult to locate the fuse button because HMDs disconnect them from the real-world environment. Moreover, the current state of the art does not provide any visual feedback on the button’s location, which could prove beneficial to novice users.
- High dependency on cardboard hardware. Although most cardboards do provide a fuse button, the method still depends on a physical component. This causes problems in cases of broken fuse buttons or low-quality cardboards.
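To make the first problem concrete, here is a minimal sketch of press-count selection – the scheme where the number of button presses picks the option. Everything here (the function, the 0.6-second press window) is a hypothetical illustration, not code from any real Cardboard SDK:

```python
PRESS_WINDOW = 0.6  # assumed max gap (seconds) between presses in one burst

def count_presses(press_times):
    """Group button-press timestamps into bursts and return the size of
    the first burst, i.e. the option index the user selected."""
    if not press_times:
        return 0
    count = 1
    for prev, cur in zip(press_times, press_times[1:]):
        if cur - prev <= PRESS_WINDOW:
            count += 1
        else:
            break  # gap too long: the burst (and the selection) ended
    return count

# Three quick presses select option 3; a late fourth press is ignored.
print(count_presses([0.0, 0.4, 0.8, 2.5]))  # -> 3
```

Even this toy version shows the fragility: one press that lands slightly outside the window silently changes which option gets selected, which is exactly why repeated pressing frustrates users.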
Gaze-based input interactions
- Inaccurate and error-prone selection. As with the first fuse-button problem, the gaze-based method is error-prone when there are more than 4 selection options. The problem is amplified when the interface is completely new and the user needs time to explore its contents. If you have used a VR interface with gaze-based input, selection is typically confirmed after a 1-second dwell. However, 1 second is far too little time to explore all the options. The user must either keep track of her head position (ensuring she moves her head within 1 second) to avoid an accidental selection, or keep her head pointed at a location where no options are presented. In both cases, this hurts readability and engagement.
- Inability to locate the visual feedback. A small circle is shown (often constantly) on the screen to indicate the user’s head position. However, it becomes difficult to locate when (a) the visual feedback is highly transparent, (b) the feedback blends into the background, or (c) the users have low technology literacy. Either the user must be trained, or we must rely on the user’s capability to learn the interaction method – not an ideal approach in this competitive world.
- Occlusion. Although this method uses only a small visual indicator (often a small circle showing head position), constant visual feedback on a small screen may occlude content in space-critical VR interfaces.
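The accidental-selection problem above comes from the dwell timer itself. A minimal sketch of dwell-based selection, assuming a fixed 1-second threshold and a stream of (timestamp, target) gaze samples (all names here are illustrative, not a real API):

```python
DWELL_TIME = 1.0  # seconds the gaze must rest on one target

def dwell_select(samples):
    """samples: (timestamp, target_id) pairs, sorted by time.
    Returns the first target gazed at continuously for >= DWELL_TIME."""
    start_time = None
    current = None
    for t, target in samples:
        if target != current:        # gaze moved: restart the dwell timer
            current, start_time = target, t
        elif target is not None and t - start_time >= DWELL_TIME:
            return target            # dwell completed: selection fires
    return None

# Browsing past option "A" for 0.9 s does not select it, but lingering
# on "B" does -- there is no way to merely *look* at "B" for longer.
samples = [(0.0, "A"), (0.5, "A"), (0.9, "B"), (1.5, "B"), (2.0, "B")]
print(dwell_select(samples))  # -> "B"
```

Note that the timer offers no distinction between “exploring” and “choosing” – the only escape is moving the head before the timer expires, which is precisely the burden described above.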
The question then remains: which method offers accurate, reliable, and user-friendly “selection” in VR interfaces?
One approach is to combine both methods and adopt the strengths of each. The combined method follows these steps: (a) mentally choose the desired object/element, (b) move the gaze/head to the desired object, (c) tap the fuse button, and (d) the tap triggers the selection, which completes within the 1-second gaze-based selection window. Combining the fuse button with gaze-based input can offer the following advantages:
- Controlled and accurate selection. Confirming the selection through an active trigger gives the user significant control over accidental selections in error-prone and complex VR interfaces. Selecting an object/element in an interface that presents more than 4 objects/elements in close proximity is accomplished by mentally mapping the object/element of choice, locating it through gaze/head position, tapping the fuse button to activate the gaze-based trigger, and finally selecting the object/element.
Although this method demands a higher task completion time, tasks, activities, or applications that prioritize accuracy over completion time can explore it.
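The combined steps above can be sketched as follows. This is a hedged toy model, not an implementation from any SDK: the function name, the sample format, and the 1-second confirmation window are all assumptions.

```python
CONFIRM_WINDOW = 1.0  # the 1-second gaze window opened by the tap

def combined_select(tap_time, gaze_samples):
    """gaze_samples: (timestamp, target_id) pairs, sorted by time.
    The fuse-button tap at tap_time opens a confirmation window;
    selection succeeds only if the gaze stays on one non-empty target
    for the whole window, otherwise nothing is selected."""
    window = [(t, tgt) for t, tgt in gaze_samples
              if tap_time <= t <= tap_time + CONFIRM_WINDOW]
    if not window:
        return None
    targets = {tgt for _, tgt in window}
    if len(targets) == 1 and None not in targets:
        return window[0][1]         # gaze held steady: selection fires
    return None                     # gaze drifted: no accidental selection

# The user taps at t=2.0 while fixating option 4 and holds the gaze.
gaze = [(1.8, 4), (2.1, 4), (2.6, 4), (3.0, 4)]
print(combined_select(2.0, gaze))  # -> 4
```

The design point is the explicit trigger: dwelling alone never selects anything, so the user can browse freely, and a tap followed by gaze drift is rejected rather than landing on the wrong option.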
Coming from an academic background, this disclaimer is the most important one for me: I do not have scientific proof of a significant accuracy increase, but our current explorations in VR point in this direction. My lab will conduct a scientific study, and I will share the results with all of you soon.