183 research outputs found

    A Comparison Study of Saliency Models for Fixation Prediction on Infants and Adults

    Various saliency models have been developed over the years. The performance of saliency models is typically evaluated on databases of experimentally recorded adult eye fixations. Although studies of infant gaze patterns have attracted much attention recently, saliency-based models have not been widely applied to the prediction of infant gaze patterns. In this study, we conduct a comprehensive comparison of eight state-of-the-art saliency models on predicting experimentally captured fixations from infants and adults. Seven evaluation metrics are used to evaluate and compare the performance of the saliency models. The results demonstrate that the saliency models consistently predict adult fixations better than infant fixations in terms of overlap, center fitting, intersection, information loss of approximation, and the spatial distance between the distributions of the saliency map and the fixation map. In the performance ranking of the saliency and baseline models, the results show that the GBVS and Itti models are among the top three contenders, that both infants and adults are biased toward the centers of images, and that all saliency models and the center baseline model outperform the chance baseline model.
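
    The abstract does not name the seven metrics, but the wording (overlap, intersection, information loss of approximation) matches standard distribution-based fixation-prediction metrics such as similarity (histogram intersection) and KL divergence. Below is a minimal sketch of these two, assuming the saliency map and a blurred fixation map are given as 2-D NumPy arrays; it illustrates this class of metric and is not the study's exact evaluation code.

        import numpy as np

        def _normalize(m):
            """Treat a 2-D map as a probability distribution."""
            m = m.astype(np.float64)
            return m / (m.sum() + 1e-12)

        def similarity(saliency_map, fixation_map):
            """SIM / histogram intersection: overlap between the two distributions."""
            return float(np.minimum(_normalize(saliency_map),
                                    _normalize(fixation_map)).sum())

        def kl_divergence(saliency_map, fixation_map, eps=1e-12):
            """Information lost when the saliency map approximates the fixation map."""
            p, q = _normalize(fixation_map), _normalize(saliency_map)
            return float(np.sum(p * np.log(eps + p / (q + eps))))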

    Performance evaluation for tracker-level fusion in video tracking

    PhD thesis. Tracker-level fusion for video tracking combines outputs (state estimations) from multiple trackers to address the shortcomings of individual trackers. Furthermore, performance evaluation of trackers at run time (online) can identify low-performing trackers so that they can be removed from the fusion. This thesis presents a tracker-level fusion framework that performs online tracking-performance evaluation for fusion. We first introduce a two-step method to determine the time instants of tracker failure. First, we evaluate tracking performance by comparing the distributions of the tracker state and of a region around the state. We use Distribution Fields to generate the distributions of both regions and compute a tracking performance score by comparing the distributions with the L1 distance. Then, we model this score as a time series and employ the Auto-Regressive Moving Average (ARMA) method to forecast future values of the performance score. The difference between the original and forecast values yields a forecast error signal that we use to detect tracking failure. We test the method on different datasets and then demonstrate its flexibility using tracking results and sequences from the Visual Object Tracking (VOT) challenge. The second part presents a tracker-level fusion method that combines the outputs of multiple trackers. The method is divided into three steps. First, we group trackers into clusters based on the spatio-temporal pairwise relationships of their outputs. Then, we evaluate tracking performance based on reverse-time analysis with an adaptive reference frame and define the cluster of trackers that appear to be successfully following the target as the on-target cluster. Finally, we fuse the outputs of the trackers in the on-target cluster to obtain the final target state. The fusion approach uses standard tracker outputs and can therefore combine various types of trackers. We test the method with several combinations of state-of-the-art trackers and compare it with individual trackers and other fusion approaches. Funded by the EACEA under the EMJD ICE Project.
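
    As a rough sketch of the failure-detection step described above (a hedged illustration, not the thesis implementation), the snippet below scores a frame by the L1 distance between two Distribution Field histograms, then forecasts the score with statsmodels' ARIMA class (ARMA is the special case with d = 0) and flags a failure when the forecast error is large; the model order and threshold are illustrative assumptions.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA  # ARMA == ARIMA with d = 0

        def l1_score(state_df, region_df):
            """Per-frame performance score: L1 distance between the normalised
            Distribution Field of the tracker state and of the surrounding region."""
            p = state_df.ravel() / state_df.sum()
            q = region_df.ravel() / region_df.sum()
            return float(np.abs(p - q).sum())

        def failure_detected(score_history, order=(2, 0, 2), threshold=0.2):
            """Forecast the latest score from its past values with an ARMA model
            and flag a tracking failure when the forecast error exceeds a threshold."""
            past, observed = score_history[:-1], score_history[-1]
            forecast = ARIMA(past, order=order).fit().forecast(steps=1)[0]
            return abs(observed - forecast) > threshold

    In practice the score would be recomputed every frame and the ARMA model refitted over a sliding window of recent scores.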

    Planning Algorithms for Multi-Robot Active Perception

    A fundamental task of robotic systems is to use on-board sensors and perception algorithms to understand high-level semantic properties of an environment. These semantic properties may include a map of the environment, the presence of objects, or the parameters of a dynamic field. Observations are highly viewpoint dependent and, thus, the performance of perception algorithms can be improved by planning the motion of the robots to obtain high-value observations. This motivates the problem of active perception, where the goal is to plan the motion of robots to improve perception performance. This fundamental problem is central to many robotics applications, including environmental monitoring, planetary exploration, and precision agriculture. The core contribution of this thesis is a suite of planning algorithms for multi-robot active perception. These algorithms are designed to improve system-level performance on many fronts: online and anytime planning, addressing uncertainty, optimising over a long time horizon, decentralised coordination, robustness to unreliable communication, predicting the plans of other agents, and exploiting characteristics of perception models. We first propose the decentralised Monte Carlo tree search algorithm as a generally applicable, decentralised algorithm for multi-robot planning. We then present a self-organising map algorithm designed to find paths that maximally observe points of interest. Finally, we consider the problem of mission monitoring, where a team of robots monitors the progress of a robotic mission. A spatiotemporal optimal stopping algorithm is proposed, along with a generalisation for decentralised monitoring. Experimental results are presented for a range of scenarios, such as marine operations and object recognition. Our analytical and empirical results demonstrate theoretically interesting and practically relevant properties that support the use of the approaches in practice.
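
    As a rough, single-robot illustration of the tree-search component (not the decentralised variant contributed in the thesis), the sketch below runs a generic Monte Carlo tree search over short viewpoint sequences; the step (motion model) and reward (perception value) functions, the action set, the horizon, and the iteration budget are all assumed to be supplied by the application.

        import math
        import random

        class Node:
            """One search-tree node: the state reached by `action` from `parent`."""
            def __init__(self, state, parent=None, action=None):
                self.state, self.parent, self.action = state, parent, action
                self.children, self.visits, self.value = [], 0, 0.0

        def ucb(node, c=1.4):
            """Upper confidence bound used to trade off exploration and exploitation."""
            return (node.value / node.visits
                    + c * math.sqrt(math.log(node.parent.visits) / node.visits))

        def mcts_plan(root_state, actions, step, reward, horizon=3, iterations=500):
            """Choose the next viewpoint by Monte Carlo tree search: select with UCB,
            expand one child, roll out randomly to the horizon, back-propagate."""
            root = Node(root_state)
            for _ in range(iterations):
                node, depth = root, 0
                # Selection: descend while the current node is fully expanded.
                while node.children and len(node.children) == len(actions):
                    node = max(node.children, key=ucb)
                    depth += 1
                # Expansion: add one untried action if the horizon allows it.
                if depth < horizon:
                    tried = {child.action for child in node.children}
                    action = random.choice([a for a in actions if a not in tried])
                    node = Node(step(node.state, action), parent=node, action=action)
                    node.parent.children.append(node)
                    depth += 1
                # Rollout: random actions to the horizon, accumulating perception reward.
                state, value = node.state, reward(node.state)
                for _ in range(horizon - depth):
                    state = step(state, random.choice(actions))
                    value += reward(state)
                # Back-propagation: update statistics along the path to the root.
                while node is not None:
                    node.visits += 1
                    node.value += value
                    node = node.parent
            return max(root.children, key=lambda child: child.visits).action

    In a decentralised setting, each robot would run a search of this kind over its own action space and periodically exchange information about its likely plans with its neighbours, which is roughly the role the decentralised Monte Carlo tree search algorithm plays in the thesis.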