2 research outputs found

    An energy saving approach to active object recognition and localization

    We propose an Active Object Recognition (AOR) strategy explicitly suited to work with robotic arms in human-robot cooperation scenarios. So far, AOR policies on robotic arms have focused on heterogeneous constraints, most of them related to classification accuracy, classification confidence, number of moves, etc., while discarding the physical and energetic constraints a real robot has to fulfill. Our strategy overcomes this weakness by exploiting a POMDP-based AOR algorithm that explicitly considers manipulability and energetic terms in the planning optimization. The manipulability term prevents the robotic arm from getting close to singularities, which require expensive and straining backtracking steps; the energetic term deals with the arm's gravity compensation in static conditions, which is crucial in AOR policies where time is spent updating the classifier belief before the next movement. Several experiments have been carried out on a redundant, 7-DoF Panda arm manipulator, on a multi-object recognition task. This highlights the improvement of our solution with respect to other competitors evaluated only in simulation.
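
    A minimal sketch of how such a planning objective might combine recognition utility with manipulability and gravity-compensation terms is given below. The weights, the Jacobian and gravity-torque inputs, and the candidate-view representation are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def manipulability(jacobian: np.ndarray) -> float:
        """Yoshikawa manipulability index sqrt(det(J J^T)); drops toward zero near singularities."""
        return float(np.sqrt(np.linalg.det(jacobian @ jacobian.T)))

    def gravity_energy(gravity_torques: np.ndarray, dwell_time: float) -> float:
        """Rough proxy for the effort of holding the arm still while the classifier belief updates."""
        return float(np.sum(np.abs(gravity_torques)) * dwell_time)

    def score_view(info_gain: float,
                   jacobian: np.ndarray,
                   gravity_torques: np.ndarray,
                   dwell_time: float,
                   w_m: float = 1.0,     # assumed weight on the manipulability term
                   w_e: float = 0.01):   # assumed weight on the energetic term
        """Higher is better: reward expected information gain, stay away from
        singularities, and penalise static gravity-compensation effort."""
        return (info_gain
                + w_m * manipulability(jacobian)
                - w_e * gravity_energy(gravity_torques, dwell_time))

    # Example: pick the best next view among hypothetical candidates.
    candidates = [
        {"info_gain": 0.8, "J": np.eye(6, 7),       "tau_g": np.ones(7) * 2.0, "t": 1.5},
        {"info_gain": 0.6, "J": np.eye(6, 7) * 0.9, "tau_g": np.ones(7) * 0.5, "t": 1.5},
    ]
    best = max(candidates, key=lambda c: score_view(c["info_gain"], c["J"], c["tau_g"], c["t"]))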

    Recognition self-awareness for active object recognition on depth images

    We propose an active object recognition framework that introduces recognition self-awareness, an intermediate level of reasoning used to decide which views to cover during object exploration. This is built by first learning a multi-view deep 3D object classifier; subsequently, a 3D dense saliency volume is generated by fusing together single-view visualization maps, the latter obtained by computing the gradient map of the class label on different image planes. The saliency volume indicates which object parts the classifier considers most important for deciding a class. Finally, the volume is injected into the observation model of a Partially Observable Markov Decision Process (POMDP). In practice, the robot decides which views to cover depending on the expected ability of the classifier to discriminate an object class by observing a specific part. For example, the robot will look for the engine to discriminate between a bicycle and a motorbike, since the classifier has found that part to be highly discriminative. Experiments are carried out on depth images with both simulated and real data, showing that our framework predicts the object class with higher accuracy and lower energy consumption than a set of alternatives.
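
    The saliency construction can be sketched as follows, assuming a PyTorch depth-image classifier and a hypothetical pixel_to_voxel back-projection helper; both are illustrative assumptions rather than the paper's code.

    import torch

    def view_saliency(model: torch.nn.Module, depth_image: torch.Tensor, class_idx: int) -> torch.Tensor:
        """Gradient map of the class logit w.r.t. one depth image (shape 1 x 1 x H x W)."""
        depth_image = depth_image.clone().requires_grad_(True)
        logits = model(depth_image)
        logits[0, class_idx].backward()
        return depth_image.grad.abs().squeeze()   # H x W single-view saliency map

    def fuse_into_volume(saliency_maps, pixel_to_voxel, grid_size=(64, 64, 64)) -> torch.Tensor:
        """Accumulate per-view saliency maps into a dense voxel grid.

        pixel_to_voxel(view_idx, u, v) is a hypothetical camera-model helper that
        returns the (x, y, z) voxel hit by pixel (u, v) of that view, or None.
        """
        volume = torch.zeros(grid_size)
        for view_idx, smap in enumerate(saliency_maps):
            h, w = smap.shape
            for u in range(h):
                for v in range(w):
                    voxel = pixel_to_voxel(view_idx, u, v)
                    if voxel is not None:
                        volume[voxel] += smap[u, v]
        # Normalise so the volume can act as a weighting term in a POMDP observation model.
        return volume / volume.max().clamp(min=1e-8)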