
HDR Defense



Ubiquitous displays: mobile, multi-display and freeform interfaces


Thursday, 15 November 2018, 9:30 am
UT3 Paul Sabatier, IRIT, Auditorium J. Herbrand


Jury:

- Michel Beaudouin-Lafon, Professor, Université Paris-Sud, France
- Stephen Brewster, Professor, University of Glasgow, UK
- Andy Cockburn, Professor, University of Canterbury, New Zealand
- Caroline Appert, CNRS Research Director, LRI - Université Paris-Sud, France
- Antonio Krüger, Professor, Saarland University, Germany
- Emmanuel Dubois, Professor, Université Paul Sabatier, France


My research activities have focused on solving some of the main challenges of three types of ubiquitous displays: mobile, multi-display, and freeform interfaces.

By 2017, the number of smartphone users worldwide had passed 2 billion, i.e. one third of the world's population. Not only has the number of smartphones increased dramatically, but so have their computing capabilities, and hence the number of available applications and the complexity of the data to manipulate. This raises numerous challenges on how to interact easily and rapidly with such a growing quantity of data on mobile platforms. We presented four major contributions in this context: we created a novel type of gesture, Bezel-Tap Gestures, to solve the problem of rapidly launching commands from sleep mode; we proposed using on-body gestures on the face, i.e. hand-to-face input, to facilitate navigating spatial data on head-worn displays; we explored mobile true-3D displays to facilitate mobile interaction with volumetric data; and finally we studied using the output capabilities of smartwatches to facilitate non-visual exploration of spatial data by visually impaired people.

Multi-display environments (MDEs) are common in desktop settings and increasingly common in professional and public settings. However, they also bring new challenges due to the complexity of the overall system, which can be composed of multiple, heterogeneous devices arranged in dynamic spatial topologies. We addressed two major challenges in MDEs: the need for fluid interaction across displays, and the need for tools to explore complex data on multiple monitors. To tackle the first challenge we adopted two approaches: either using an existing device, in our case a head-worn display interface that we named Gluey, or creating a novel dedicated device, TDome. To tackle the second challenge, i.e. facilitating the exploration of complex data in MDEs, we studied the use of around-device gestures to manipulate 3D data on public displays and extended the overview+detail interface paradigm with multiple detail views to explore several regions of the data simultaneously.
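As an aside, the overview+detail paradigm rests on a simple coordinate mapping between a global overview and one or more magnified detail views, and extending it to multiple detail views amounts to maintaining one such mapping per view. A minimal sketch of that mapping (all class and field names are hypothetical, not taken from the work described above):

```python
# Illustrative sketch of overview+detail coordinate mapping with multiple
# detail views: each detail view magnifies one region of the overview by
# its own zoom factor. Names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class DetailView:
    region_x: float  # top-left corner of the magnified region, in overview coordinates
    region_y: float
    zoom: float      # magnification factor of this detail view

    def to_detail(self, ox: float, oy: float) -> tuple[float, float]:
        """Map a point from overview coordinates into this detail view."""
        return ((ox - self.region_x) * self.zoom,
                (oy - self.region_y) * self.zoom)

    def to_overview(self, dx: float, dy: float) -> tuple[float, float]:
        """Inverse mapping: detail-view coordinates back to the overview."""
        return (dx / self.zoom + self.region_x,
                dy / self.zoom + self.region_y)

# Two detail views exploring two regions of the same overview simultaneously.
left = DetailView(region_x=10, region_y=10, zoom=4.0)
right = DetailView(region_x=200, region_y=50, zoom=8.0)

print(left.to_detail(15, 20))      # (20.0, 40.0)
print(right.to_overview(80, 80))   # (210.0, 60.0)
```

Keeping the two mappings independent is what lets each detail view pan and zoom on its own while staying anchored to the shared overview.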

Finally, to foster the adoption of pervasive displays, we need to facilitate their seamless integration into existing environments, ranging from in-vehicle and wearable displays to public displays. Traditional rectangular displays are not well suited for these applications, as they cannot be easily integrated. Emerging technologies allow the creation of non-rectangular displays with virtually no constraints on shape. With this imminent adoption comes the urgent challenge of rethinking the way we present content on non-rectangular displays. Our approach was threefold: first, we carried out focus groups to gather both concrete usage scenarios and display shapes; then, we studied text content alone, as the fundamental building block of any UI; finally, we explored more complex content layouts combining text and icons.