Hui-Shyong Yeo

is a 2nd-year PhD student at the University of St Andrews,
advised by Aaron Quigley.
He was previously a researcher at the UVR Lab, KAIST.

hsy@st-andrews.ac.uk

About Yeo

I am particularly interested in exploring and developing novel interaction techniques that transcend the barrier between humans and computers, enabling more natural and intuitive interaction. I am interested in topics such as Gestural/Mid-air Interaction, Mobile/Wearable Interaction, Augmented/Virtual Reality and Text Entry.

For my PhD thesis, I currently focus on single-handed interaction techniques for mobile and wearable devices.

Beyond HCI, I am also interested in cloud computing and cloud storage. When I procrastinate, I post interesting things to my HCI Research Fan page and blog.

Selected Projects

Mirror Mirror: An On-Body Clothing Design System [CHI16 Note]

We contribute the Mirror Mirror system, which supports not only mixing and matching existing fashion items but also designing new items in front of the mirror and exporting the designs to fabrication devices. Mirror Mirror makes use of spatial augmented reality and a mirror: virtual garments are visible both on the body, for precise manipulation, and in the reflection, to obtain a third-person perspective. While much previous work deals with re-texturing and registering virtual garments to live user data, we focus on collaborative design and show various ways of designing using real bodies as mannequins.

WatchMI: Pressure Touch, Twist and Pan Gesture Input on Unmodified Smartwatches [MobileHCI16 Honorable Mention Award]

We present WatchMI (Watch Movement Input), which enhances touch interaction on a smartwatch to support continuous pressure touch, twist and pan gestures, and their combinations. Our approach relies on software that analyzes, in real time, data from the built-in inertial measurement unit (IMU) to determine, with high accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or any modification of the watch.
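
A minimal sketch of the idea, assuming a stream of gyroscope and accelerometer samples from the watch's IMU; the thresholds, axis conventions and helper names below are illustrative, not the published WatchMI implementation:

    import math

    # Illustrative thresholds; a real system would calibrate these per device.
    TWIST_RATE_THRESHOLD = 1.0   # rad/s around the axis normal to the watch face
    TILT_PAN_THRESHOLD = 0.15    # radians of tilt treated as a pan
    PRESSURE_GAIN = 5.0          # maps tilt magnitude during a touch to a pressure level

    def classify_imu_sample(gyro, accel, touching):
        """Map one IMU sample to a WatchMI-style action (hypothetical logic).

        gyro  -- (x, y, z) angular rate in rad/s, z = axis normal to the watch face
        accel -- (x, y, z) acceleration in m/s^2, used to estimate tilt from gravity
        touching -- True while a finger rests on the touchscreen
        """
        gx, gy, gz = gyro
        ax, ay, az = accel
        tilt = math.atan2(math.hypot(ax, ay), az)   # tilt of the face w.r.t. gravity

        if touching and abs(gz) > TWIST_RATE_THRESHOLD:
            return ("twist", gz)                    # rotation around the face normal
        if touching and tilt > TILT_PAN_THRESHOLD:
            return ("pan", math.atan2(ay, ax))      # direction of the tilt
        if touching:
            return ("pressure", min(1.0, tilt * PRESSURE_GAIN))
        return ("idle", 0.0)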

RadarCat: Radar Categorization for Input & Interaction [UIST17 Paper]

In RadarCat we present a small, versatile radar-based system for material and object classification, which enables new forms of everyday proximate interaction with digital devices. We further demonstrate four working examples built on RadarCat: a physical object dictionary, a painting and photo-editing application, body shortcuts, and automatic refill.
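
A minimal sketch of the classification step only, assuming each radar frame has already been flattened into a fixed-length feature vector; the random-forest choice, file names and array shapes here are assumptions for illustration, not the published pipeline:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training data: one flattened radar frame per row,
    # with a material/object label ("wood", "glass", "steel", ...) per frame.
    X_train = np.load("radar_frames_train.npy")   # shape: (n_frames, n_features)
    y_train = np.load("labels_train.npy")         # shape: (n_frames,)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    def classify_frame(frame):
        """Predict the material or object under the sensor for one radar frame."""
        return clf.predict(frame.reshape(1, -1))[0]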

SWiM: Shape Writing in Motion [CHI17 Paper]

We propose and evaluate a novel design point around a tilt-based text entry technique that supports single-handed usage. Our technique is based on the gesture keyboard (shape writing); however, instead of drawing gestures with a finger or stylus, users articulate a gesture by tilting the device. This can be especially useful when the user's other hand is otherwise encumbered or unavailable.
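
A rough sketch of the tilt-to-pointer mapping at the core of such a technique, assuming pitch and roll angles from the phone's IMU drive a pointer over the keyboard area; the gain, geometry and decoder interface are illustrative assumptions rather than the SWiM implementation:

    # Illustrative tilt-driven pointer for a gesture (shape-writing) keyboard.
    SCREEN_W, SCREEN_H = 1080, 540   # keyboard area in pixels (assumed)
    GAIN = 900                       # pixels per radian of tilt per second (assumed)

    class TiltCursor:
        def __init__(self):
            self.x, self.y = SCREEN_W / 2, SCREEN_H / 2
            self.trace = []          # gesture trace fed to a shape-writing decoder

        def update(self, pitch, roll, dt):
            """Move the pointer according to device tilt (pitch/roll in radians)."""
            self.x = min(max(self.x + roll * GAIN * dt, 0), SCREEN_W)
            self.y = min(max(self.y + pitch * GAIN * dt, 0), SCREEN_H)
            self.trace.append((self.x, self.y))

        def finish_word(self, decode):
            """Hand the accumulated trace to a shape-writing decoder and reset."""
            word = decode(self.trace)
            self.trace = []
            return word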

Sidetap and Slingshot Gestures on Unmodified Smartwatches [UIST17 Best Poster]

We present a technique for detecting gestures on the edge of an unmodified smartwatch. We demonstrate two exemplary gestures: i) SideTap, tapping on any side, and ii) Slingshot, pressing on the edge and then releasing quickly. Our technique is lightweight, as it relies only on data from the internal inertial measurement unit (IMU). With these two gestures, we expand the input expressiveness of a smartwatch, allowing users to perform intuitive gestures with natural tactile feedback, instead of limiting interaction to the small touchscreen.
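
One way such a detector could look, assuming a stream of pre-computed lateral acceleration, tilt and tilt-rate values from the IMU; the thresholds and structure are purely illustrative, not the published detector:

    from collections import deque

    TAP_SPIKE = 4.0      # m/s^2 of sudden lateral acceleration -> SideTap (assumed)
    PRESS_TILT = 0.12    # radians of sustained tilt from an edge press (assumed)
    RELEASE_RATE = 1.5   # rad/s snap-back on release -> Slingshot (assumed)

    class EdgeGestureDetector:
        def __init__(self):
            self.recent_tilt = deque(maxlen=20)   # short history of tilt estimates

        def on_sample(self, lateral_accel, tilt, tilt_rate):
            """Classify one IMU sample as 'sidetap', 'slingshot' or None."""
            self.recent_tilt.append(tilt)
            # SideTap: a sharp lateral spike with no sustained tilt beforehand.
            if abs(lateral_accel) > TAP_SPIKE and max(self.recent_tilt) < PRESS_TILT:
                return "sidetap"
            # Slingshot: the case was tilted by an edge press, then snapped back quickly.
            if max(self.recent_tilt) > PRESS_TILT and tilt_rate < -RELEASE_RATE:
                return "slingshot"
            return None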

TiTAN: Typing in Thin Air Naturally [CHI17 LBW, to appear]

We present a virtual keyboard system that enables freehand mid-air text entry for distant displays while requiring only a low-cost depth sensor. Leveraging users' spatial familiarity with the QWERTY layout, our system allows them to input text in thin air by mimicking the typing actions they usually perform on a physical keyboard or touchscreen device. Both hands and all ten fingers are tracked individually, along with click detection, to enable a wide variety of interactions. We propose three mid-air text entry techniques: bi-manual hunt-and-peck, ten-finger touch typing and one-handed shape writing.
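
A minimal sketch of the click-detection step, assuming the depth sensor already yields a per-frame 3D position for each fingertip; the thresholds are illustrative, not the system's actual values:

    CLICK_DEPTH = 0.03   # metres a fingertip must travel towards the display (assumed)
    CLICK_TIME = 0.25    # seconds within which that travel must happen (assumed)

    def detect_click(trajectory):
        """Return True if a fingertip trajectory ends with a typing 'click'.

        trajectory -- list of (timestamp, z) pairs, where z is the fingertip's
                      distance from the display-mounted sensor, newest sample last.
        """
        if len(trajectory) < 2:
            return False
        t_end, z_end = trajectory[-1]
        for t, z in reversed(trajectory[:-1]):
            if t_end - t > CLICK_TIME:
                break
            if z - z_end > CLICK_DEPTH:   # finger moved towards the display fast enough
                return True
        return False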

An HMD-based Mixed Reality System for Avatar-Mediated Remote Collaboration with Bare-hand Interaction [ICAT-EGVE15]

We present a novel framework for a mixed reality based remote collaboration system, which enables a local user to interact and collaborate with another user in a remote space using natural hand motion. Unlike conventional systems, where the remote user appears only on a screen, our system can summon the remote user into the local space, where they appear as a virtual avatar in the real-world view seen by the local user.

SpeCam [MobileHCI17, to appear]

Material sensing using the front-facing camera of a mobile device.

Hand Tracking and Gesture Recognition System for Human-Computer Interaction using Low-cost Hardware [MTAP Journal]

We present a robust, marker-less hand/finger tracking and gesture recognition system using low-cost hardware. We propose a simple but efficient method that allows robust and fast hand tracking despite complex backgrounds and motion blur. Our system is able to translate the detected hands or gestures into different functional inputs and interfaces with other applications via several methods. We also developed sample applications that utilize the inputs from the hand tracking system.
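
As an illustration only (the paper's own method is not detailed above), a common low-cost route is skin-colour segmentation plus contour analysis with OpenCV; the colour range and the "largest blob is the hand" shortcut below are simplifying assumptions:

    import cv2

    # Illustrative HSV skin-colour range; real systems calibrate per user and lighting.
    SKIN_LOW, SKIN_HIGH = (0, 30, 60), (20, 150, 255)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    cap = cv2.VideoCapture(0)   # any low-cost webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)   # assume the largest blob is the hand
            x, y, w, h = cv2.boundingRect(hand)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow("hand tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()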