Date of Completion
8-10-2020
Embargo Period
2-6-2021
Advisors
Ashwin Dani, Yaakov Bar-Shalom, Liang Zhang
Field of Study
Electrical Engineering
Degree
Master of Science
Open Access
Open Access
Abstract
Enabling robots to quickly and accurately determine the intention of their human counterparts is an important problem in Human-Robot Collaboration (HRC). The focus of this work is to provide a framework wherein multiple modalities of information, available to the robot through different sensors, are fused to estimate a human's action intent. In this thesis, two human intention estimation schemes are presented. In both cases, human intention is defined as a motion profile associated with a single goal location. The first scheme presents the first human intention estimator to fuse pupil tracking data with skeletal tracking data during each iteration of an Interacting Multiple Model (IMM) filter in order to predict the goal location of a reaching motion. In the second scheme, two variable structure IMM (VS-IMM) filters, which track gaze and skeletal motion, respectively, are run in parallel and their associated model probabilities are fused. This method has advantages over the first: it scales easily to larger model sets and yields greater separation between the probability of the most likely model and those of the remaining models. For each VS-IMM filter, a model selection algorithm is proposed that chooses the most likely models at each iteration based on physical constraints of the human body. Experimental results are provided to validate the proposed human intention estimation schemes.
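To make the fusion idea concrete, below is a minimal Python sketch of combining per-model probabilities from two IMM filters (one driven by gaze, one by skeletal motion), not the thesis's implementation. The function names, the normalized-product fusion rule, and the toy numbers are illustrative assumptions; the thesis's actual fusion rule and motion models are not reproduced here.

    import numpy as np

    def imm_model_probs(likelihoods, prior_probs, transition):
        """One IMM model-probability update.

        likelihoods : per-model measurement likelihoods at this iteration
        prior_probs : model probabilities from the previous iteration
        transition  : Markov model-transition matrix (rows sum to 1)
        """
        predicted = transition.T @ prior_probs   # mixing/prediction step
        posterior = likelihoods * predicted      # Bayes update per model
        return posterior / posterior.sum()       # normalize to a distribution

    def fuse_model_probs(mu_gaze, mu_skeleton):
        """Fuse two filters' model probabilities by a normalized product
        (an illustrative choice, not necessarily the rule in the thesis)."""
        fused = mu_gaze * mu_skeleton
        return fused / fused.sum()

    # Toy example with three goal-location models.
    T = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])
    mu_g = imm_model_probs(np.array([0.8, 0.3, 0.1]), np.ones(3) / 3, T)
    mu_s = imm_model_probs(np.array([0.6, 0.5, 0.2]), np.ones(3) / 3, T)
    print(fuse_model_probs(mu_g, mu_s))  # fused intent estimate over goals

A product rule of this kind sharpens the fused distribution when the two modalities agree on a goal and flattens it when they disagree, which is consistent with the abstract's claim of greater separation between the most likely model and the rest.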
Recommended Citation
Trombetta, Daniel, "Human Intention Inference using Fusion of Gaze and Motion Information" (2020). Master's Theses. 1549.
https://digitalcommons.lib.uconn.edu/gs_theses/1549
Major Advisor
Ashwin Dani