Date of Completion


Embargo Period


Major Advisor

Dr. Ashwin Dani

Associate Advisor

Dr. Krishna Pattipati

Associate Advisor

Dr. Liang Zhang

Field of Study

Electrical Engineering


Doctor of Philosophy

Open Access


Robots are becoming integral parts of our environments, from factory floors to hospitals and even our homes. Unlike robots enclosed in cages performing repetitive tasks with high precision, there is an ever-increasing need for robots that can seamlessly interact and collaborate with humans in close proximity. Hence, it is imperative that robots be provided with the tools necessary to collaborate both safely and efficiently with their human partners. Achieving safe and efficient human-robot collaboration requires methods to infer human intentions as well as methods for rapid robot programming. To this end, this dissertation presents methods that fall into two categories. The first category consists of methods to infer human intentions (modeled as goal locations of reaching motions) from noisy observations of human motion. First, a maximum likelihood estimator for the early prediction of reaching goal locations is presented. Second, a maximum a posteriori estimator, which uses information about the human's eye gaze to construct the prior distribution, is presented. The second category consists of imitation learning methods to learn movement primitives from demonstrations. These methods are particularly useful for teaching robots new tasks by showing them examples. In the proposed methods, movement primitives are represented as statistical dynamical systems, and the corresponding parameters are learned under constraints derived from contraction analysis. Enforcing these constraints provides theoretical guarantees on the learned model, such as convergence to the desired end-effector position and orientation, and robustness to sudden perturbations and target changes. The methods presented in this dissertation are rigorously validated through experiments involving observations of human motion and a 7-DOF dual-arm Baxter robot.