"Text+ Eye" on Autonomous Taxi to Provide Geospatial Instructions to Passenger¶
Research Problem: While text-based external human-machine interfaces (eHMIs) are widely accepted, one limitation is their inability to communicate spatial information, such as referring to a specific person or location.
Solution: We built a mixed eHMI that uses "eyes" as a target specifier while "text" conveys a clear intention to communication partners.
Content: We conducted a pre-experimental observation to develop two testbed scenarios, followed by a video-based user study using a life-size projection of a real-car prototype equipped with a text display and a set of robotic eyes.
Result: The results suggest that our proposed "text + eye" combination can convey geospatial information, as reflected in an increased rate of successful pick-ups.
Please check our paper [pdf] for more details!
Authors
- Xinyue Gui, The University of Tokyo
- Ehsan Javanmardi, The University of Tokyo
- Stela H. Seo, Kyoto University
- Vishal Chauhan, The University of Tokyo
- Chia-Ming Chang, National Taiwan University of Arts
- Manabu Tsukada, The University of Tokyo
- Takeo Igarashi, The University of Tokyo
Publication
Xinyue Gui, Ehsan Javanmardi, Stela Hanbyeol Seo, Vishal Chauhan, Chia-Ming Chang, Manabu Tsukada, and Takeo Igarashi. 2024. "Text + Eye" on Autonomous Taxi to Provide Geospatial Instructions to Passenger. In Proceedings of the 12th International Conference on Human-Agent Interaction (HAI '24). Association for Computing Machinery, New York, NY, USA, 429–431. https://doi.org/10.1145/3687272.3690906