Three Papers to Appear at CHI 2020


The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference of Human-Computer Interaction. CHI – pronounced ‘kai’ – is a place where researchers and practitioners gather from across the world to discuss the latest in interactive technology. - chi2020.acm.org

Three papers from the NUS-HCI Lab have been accepted to this prestigious conference! Here they are:



1. EYEditor: Towards On-the-Go Heads-up Text Editing Using Voice and Manual Input

Authors: Debjyoti Ghosh, Pin Sym Foong, Shengdong Zhao, Can Liu, Nuwan Janaka, Vinitha Erusu

Abstract:

On-the-go text-editing is difficult, yet frequently done in everyday life. Using smartphones for editing text forces users into a heads-down posture, which can be undesirable and unsafe. We present EYEditor, a heads-up smartglass-based solution that displays the text on a see-through peripheral display and allows text-editing with voice and manual input. The choices of output modality (visual and/or audio) and content presentation were made after a controlled experiment, which showed that sentence-by-sentence visual-only presentation is best for optimizing users’ editing and path-navigation capabilities. A second experiment formally evaluated EYEditor against the standard smartphone-based solution for tasks with varied editing complexities and navigation difficulties. The results showed that EYEditor outperformed smartphones as either the path OR the task became more difficult. Yet the advantage of EYEditor became less salient when both the editing and the navigation were difficult. We discuss trade-offs and insights gained for future heads-up text-editing solutions.



2. Learning with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles

Authors: Smitha Sheshadri, Shengdong Zhao, Yang Chen, Morten Fjeld

Abstract:

Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors that influence recall in haptic-integrated vocabulary interfaces: annotation mode, presentation sequence, and vibrotactile feedback. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as the baseline. Results using an 84-item vocabulary showed that the ‘whole word’ annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in the 7-day delayed scores. The effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of the immediate tests, but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications.



3. Virtually-Extended Proprioception: Providing Spatial Reference in VR through an Appended Virtual Limb

Authors: Yang Tian, Yuming Bai, Shengdong Zhao, Chi-Wing Fu, Tianpei Yang, Pheng Ann Heng

Abstract:

Selecting targets directly in the virtual world is difficult due to the lack of haptic feedback and inaccurate estimation of egocentric distances. Proprioception, the sense of self-movement and body position, can be utilized to improve virtual target selection by placing targets on or around one’s body. However, its effective scope is limited to the region closely around one’s body. We explore the concept of virtually-extended proprioception by appending virtual body parts that mimic real body parts to users’ avatars, providing spatial reference for virtual targets. Our studies suggest that our approach facilitates more efficient target selection in VR compared to no reference or using an everyday object as reference. In addition, by cultivating users’ sense of ownership of the appended virtual body part, we can further enhance target selection performance. The effects of transparency and granularity of the virtual body part on target selection performance are also discussed.


