Eyes-Free Interaction

Traditional mobile interaction requires users’ visual attention. In many mobile scenarios, however, such as walking, running, or driving, visual attention and visual feedback are not always available, so interaction techniques that rely on visual feedback often perform poorly. To address this, a variety of eyes-free interactive systems have been developed that provide alternative solutions by leveraging non-visual input (e.g., gestures) or output (e.g., audio feedback) modalities. However, two fundamental problems of eyes-free interaction remain largely unsolved:

1) how eyes-free interaction should be defined, measured, and evaluated; and
2) how to evaluate the effect of different scenarios.

As a result, current systems that claim to be eyes-free often cannot actually be operated eyes-free from the start of an interaction. In addition, neglecting the constraints of specific scenarios often reduces the usability of these systems: they frequently fail to perform well even in the very scenarios for which they were intended. In other words, current research tends to overlook users themselves and the characteristics of the scenarios in which they interact.

Therefore, our research focuses on solving these two fundamental problems to support future work in this field. The premise is to understand users: in particular, users’ basic capabilities in different scenarios determine whether eyes-free interaction is feasible and how it should be implemented. By conceptualizing the term “eyes-free” in mobile contexts and by exploring and understanding users’ capabilities under different circumstances, we hope to provide common design considerations for eyes-free interaction techniques and, at the same time, to establish common principles for evaluating them in mobile contexts.