Bayesian Eigenobjects
Robot-object interaction requires several key perceptual building blocks, including object pose estimation, object classification, and partial-object completion. These tasks form the perceptual foundation for many higher-level operations, such as object manipulation and world-state estimation.
In real-world settings, robots will inevitably be required to interact with previously unseen objects; new approaches are needed to generalize across highly variable objects.
Bayesian Eigenobjects (BEOs) are a novel object representation for robots, designed to facilitate this kind of generalization. They allow a robot to observe a previously unseen object from a single viewpoint and jointly estimate that object's class, pose, and hidden geometry. BEOs significantly outperform competing approaches to joint classification and completion, and they are the first representation to enable joint estimation of class, pose, and 3D geometry.
For more information, please see the RSS 2017 BEO paper and the IROS 2018 HBEO paper.
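As a rough illustration of the eigen-decomposition idea behind BEOs, the sketch below learns a low-dimensional basis from voxelized training objects and completes a partially observed object by projecting only its visible voxels onto that basis. This is a minimal, self-contained toy: plain PCA stands in for the subspace learning described in the papers, the data is random, and all dimensions and variable names are illustrative assumptions rather than the actual method or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake training data: N voxelized objects of one class, each flattened to a D-vector.
N, D, K = 200, 30 * 30 * 30, 10           # objects, voxels per object, basis size
X = rng.random((N, D))

# Learn the basis: mean object plus the top-K principal directions.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:K].T                              # D x K "eigenobject" basis

# Partial observation: only a subset of voxels (e.g. the visible surface) is known.
observed = rng.random(D) < 0.3            # mask of observed voxels
x_partial = X[0]                          # pretend this object is novel

# Estimate the low-dimensional coefficients from observed voxels only
# (least squares on the observed rows of the basis), then reconstruct everything.
coeffs, *_ = np.linalg.lstsq(W[observed], x_partial[observed] - mu[observed], rcond=None)
x_completed = mu + W @ coeffs             # full voxel grid, including hidden geometry

print("error on hidden voxels:",
      np.mean((x_completed[~observed] - x_partial[~observed]) ** 2))
```

The same projection coefficients that fill in hidden geometry can also serve as a compact descriptor for downstream estimates such as class and pose, which is the intuition behind performing these tasks jointly.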