
    Full Reference Objective Quality Assessment for Reconstructed Background Images

    Full text link
    With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Although these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with the human subjective ratings. The correlation results show that the proposed RBQI outperforms all existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating the performance of future metrics developed to assess the perceived quality of reconstructed background images.
    Comment: Associated source code: https://github.com/ashrotre/RBQI; Associated database: https://drive.google.com/drive/folders/1bg8YRPIBcxpKIF9BIPisULPBPcA5x-Bk?usp=sharing (email for permissions: ashrotre at asu dot edu)
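
    The probability summation pooling named in the abstract can be illustrated with a toy example. The sketch below is not the released RBQI implementation (see the linked GitHub repository for that); it only shows, under assumed parameter values, how per-scale distortion maps might be combined with a probability-summation model. The function names, the exponent `beta`, and the single-channel difference map are all illustrative simplifications.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def probability_summation_quality(ref, rec, n_scales=3, beta=3.5):
    """Toy multi-scale quality score pooled with a probability-summation model.

    ref, rec : grayscale float arrays in [0, 1], shapes divisible by 2**n_scales.
    beta     : assumed exponent of the psychometric function.
    Returns a score in (0, 1]; 1 means no visible difference is predicted.
    """
    ref = ref.astype(float)
    rec = rec.astype(float)
    p_visible = []                                   # per-scale detection probabilities
    for _ in range(n_scales):
        d = np.abs(ref - rec)                        # per-pixel distortion map at this scale
        # Minkowski pooling of per-pixel evidence, normalised by image size
        # so this toy score stays in a usable range.
        p_visible.append(1.0 - np.exp(-np.mean(d ** beta)))
        ref, rec = downsample(ref), downsample(rec)
    # Probability summation across scales: a distortion is "seen" if it is
    # detected at any scale.
    p_any = 1.0 - np.prod([1.0 - p for p in p_visible])
    return 1.0 - p_any
```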

    Monitoring young associations and open clusters with Kepler in two-wheel mode

    Full text link
    We outline a proposal to use the Kepler spacecraft in two-wheel mode to monitor a handful of young associations and open clusters, for a few weeks each. Judging from the experience of similar projects using ground-based telescopes and the CoRoT spacecraft, this program would transform our understanding of early stellar evolution through the study of pulsations, rotation, activity, the detection and characterisation of eclipsing binaries, and the possible detection of transiting exoplanets. Importantly, Kepler's wide field-of-view would enable key spatially extended, nearby regions to be monitored in their entirety for the first time, and the proposed observations would exploit unique synergies with the GAIA ESO spectroscopic survey and, in the longer term, the GAIA mission itself. We also outline possible strategies for optimising the photometric performance of Kepler in two-wheel mode by modelling pixel sensitivity variations and other systematics.
    Comment: 10 pages, 6 figures, white paper submitted in response to NASA call for community input for alternative science investigations for the Kepler spacecraft
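
    As one illustration of what "modelling pixel sensitivity variations and other systematics" can look like in practice, the sketch below detrends a light curve against the measured centroid position. It is not the strategy proposed in this white paper, only a generic polynomial-decorrelation stand-in; the function name, inputs, and polynomial order are assumptions.

```python
import numpy as np

def decorrelate_pointing(flux, xc, yc, order=2):
    """Illustrative detrending of a light curve against centroid motion.

    flux   : raw photometric time series (1-D array).
    xc, yc : detector centroid positions per cadence (1-D arrays).
    order  : polynomial order of the assumed flux-vs-position model.

    Fits flux as a low-order polynomial in (xc, yc) -- a stand-in for the
    pixel sensitivity variations induced by pointing drift -- and returns
    the residual (corrected) light curve at the original flux level.
    """
    cols = [np.ones_like(flux)]
    for p in range(1, order + 1):
        for q in range(p + 1):
            cols.append((xc ** (p - q)) * (yc ** q))   # terms x^a * y^b with a + b = p
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    model = design @ coeffs
    return flux - model + np.median(flux)
```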

    Human mobility monitoring in very low resolution visual sensor network

    Get PDF
    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics such as the total distance traveled and average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
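
    The mobility statistics mentioned above (total distance traveled and average speed) can be derived from a trajectory in a few lines. The sketch below is a generic illustration rather than the paper's implementation; the function name and input conventions are assumptions.

```python
import numpy as np

def mobility_statistics(times, positions):
    """Derive simple mobility statistics from a single track.

    times     : 1-D array of timestamps in seconds.
    positions : (N, 2) array of (x, y) coordinates in metres.

    Returns (total_distance_m, average_speed_m_per_s). The paper compares
    such statistics against Ultra-Wide Band ground truth; this only shows
    how they can be computed from a trajectory.
    """
    steps = np.diff(positions, axis=0)             # per-frame displacement vectors
    step_lengths = np.linalg.norm(steps, axis=1)   # metres moved between frames
    total_distance = float(step_lengths.sum())
    duration = float(times[-1] - times[0])
    average_speed = total_distance / duration if duration > 0 else 0.0
    return total_distance, average_speed
```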

    A self-growing Bayesian network classifier for online learning of human motion patterns

    Get PDF
    This paper proposes a new self-growing Bayesian network classifier for online learning of human motion patterns (HMPs) in dynamically changing environments. The proposed classifier is designed to represent HMP classes based on a set of historical trajectories labeled by unsupervised clustering. It then assigns HMP class labels to current trajectories. Parameters of the proposed classifier are recalculated based on the augmented dataset of labeled trajectories and all HMP classes are updated accordingly. As such, the proposed classifier allows current trajectories to form new HMP classes when they are sufficiently different from existing HMP classes. The performance of the proposed classifier is evaluated on a set of real-world data. The results show that the proposed classifier effectively learns new HMP classes from current trajectories in an online manner.
    © 2010 IEEE. In Proceedings of the 2010 International Conference on Soft Computing and Pattern Recognition (SoCPaR 2010), Paris, France, 7-10 December 2010, p. 182-18
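
    A minimal sketch of the "self-growing" idea, assuming fixed-length trajectory feature vectors and a simple distance threshold, is given below. It is not the paper's Bayesian network classifier; the class name, threshold, and update rule are illustrative.

```python
import numpy as np

class SelfGrowingClassifier:
    """Minimal sketch of an online, self-growing class model.

    Each class keeps a running mean of trajectory feature vectors; a new
    class is created when a trajectory is farther than `new_class_threshold`
    from every existing class mean. The paper uses a Bayesian network over
    clustered trajectories -- this only illustrates growing a class when
    nothing fits.
    """

    def __init__(self, new_class_threshold=2.0):
        self.threshold = new_class_threshold
        self.means = []    # one feature-mean per class
        self.counts = []   # number of trajectories absorbed per class

    def update(self, features):
        """Assign `features` to a class (possibly a new one) and return its label."""
        features = np.asarray(features, dtype=float)
        if self.means:
            dists = [np.linalg.norm(features - m) for m in self.means]
            label = int(np.argmin(dists))
            if dists[label] <= self.threshold:
                # Online update of the winning class mean.
                self.counts[label] += 1
                self.means[label] += (features - self.means[label]) / self.counts[label]
                return label
        # Nothing fits well enough: grow a new class.
        self.means.append(features.copy())
        self.counts.append(1)
        return len(self.means) - 1
```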

    Hierarchical modelling and adaptive clustering for real-time summarization of rush videos

    Get PDF
    In this paper, we provide a detailed description of our proposed new algorithm for video summarization, which is also included in our submission to TRECVID'08 on BBC rush summarization. Firstly, rush videos are hierarchically modeled using the formal language technique. Secondly, shot detection is applied to introduce a new concept, the V-unit, for structuring videos in line with the hierarchical model, so that junk frames within the model are effectively removed. Thirdly, adaptive clustering is employed to group shots into clusters to determine retakes for redundancy removal. Finally, the most representative shot selected from each cluster is ranked according to its length and the sum of its activity level for summarization. Competitive results have been achieved, demonstrating the effectiveness and efficiency of our techniques, which are fully implemented in the compressed domain. Our work does not require high-level semantics such as object detection or speech/audio analysis, which makes it a more flexible and general solution for this topic.
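
    The final selection-and-ranking stage described above can be illustrated with a short sketch. The code below assumes hypothetical shot records with `length` and `activity` fields and a summary time budget; it is not the TRECVID'08 system, only the general idea of keeping one representative shot per retake cluster and ranking by length plus activity.

```python
def summarize(shots, clusters, budget_s=60.0):
    """Toy selection step for a rush summary.

    shots    : list of dicts with 'id', 'length' (seconds) and 'activity'
               (e.g. summed motion-vector magnitude) -- hypothetical fields.
    clusters : list of lists of shot indices, one list per retake cluster.
    budget_s : target summary duration in seconds.
    """
    representatives = []
    for members in clusters:
        # The longest, most active shot stands in for all retakes in the cluster.
        best = max(members, key=lambda i: shots[i]['length'] + shots[i]['activity'])
        representatives.append(best)

    ranked = sorted(representatives,
                    key=lambda i: shots[i]['length'] + shots[i]['activity'],
                    reverse=True)

    summary, used = [], 0.0
    for i in ranked:
        if used + shots[i]['length'] <= budget_s:
            summary.append(shots[i]['id'])
            used += shots[i]['length']
    return summary
```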

    Scanpath modeling and classification with Hidden Markov Models

    Get PDF
    How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that the scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., task at hand) and stimuli-related (e.g., image semantic category) information. However, eye movements are complex signals and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. Firstly, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Secondly, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
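
    A simplified version of this classification pipeline can be sketched with the third-party hmmlearn package: fit one HMM per class of scanpaths, then assign a new scanpath to the class whose model scores it highest. The released SMAC with HMM toolbox uses variational HMMs and discriminant analysis in Matlab; the Python sketch below is a plain maximum-likelihood stand-in, and all names in it are illustrative.

```python
import numpy as np
from hmmlearn import hmm   # third-party package, assumed available

def fit_class_hmms(scanpaths_by_class, n_states=3):
    """Fit one Gaussian HMM per class of scanpaths.

    scanpaths_by_class : dict mapping class label -> list of (T_i, 2) arrays
                         of fixation coordinates.
    """
    models = {}
    for label, paths in scanpaths_by_class.items():
        X = np.vstack(paths)                 # concatenate all sequences
        lengths = [len(p) for p in paths]    # per-sequence lengths for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="full", n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, scanpath):
    """Return the class whose HMM gives the new scanpath the highest log-likelihood."""
    scores = {label: m.score(scanpath) for label, m in models.items()}
    return max(scores, key=scores.get)
```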

    Recent Trends in Computational Intelligence

    Get PDF
    Traditional models struggle to cope with complexity, noise, and changing environments, while Computational Intelligence (CI) offers solutions to complicated problems as well as inverse problems. The main feature of CI is adaptability, spanning the fields of machine learning and computational neuroscience. CI also comprises biologically inspired technologies such as swarm intelligence as part of evolutionary computation, and encompasses wider areas such as image processing, data collection, and natural language processing. This book aims to discuss the use of CI for the optimal solving of various applications, proving its wide reach and relevance. Combining optimization methods and data mining strategies makes for a strong and reliable prediction tool for handling real-life applications.

    Research on a modified RANSAC and its applications to ellipse detection from a static image and motion detection from active stereo video sequences

    Get PDF
    Degree system: new; Report number: Kou 3091; Degree type: Doctor (International Information and Telecommunication Studies); Date conferred: 2010/2/24; Waseda University diploma number: Shin 535

    A Genetic Bayesian Approach for Texture-Aided Urban Land-Use/Land-Cover Classification

    Get PDF
    Urban land-use/land-cover classification is entering a new era with the increased availability of high-resolution satellite imagery and new methods such as texture analysis and artificial intelligence classifiers. Recent research demonstrated exciting improvements from using fractal dimension, lacunarity, and Moran’s I in classification, but the integration of these spatial metrics has seldom been investigated. Also, previous research focuses more on developing new classifiers than on improving the robust, simple, and fast maximum likelihood classifier. The goal of this dissertation research is to develop a new approach that utilizes a texture vector (fractal dimension, lacunarity, and Moran’s I), combined with a new genetic Bayesian classifier, to improve urban land-use/land-cover classification accuracy. Examples of different land-use/land-cover types were demonstrated using post-Katrina IKONOS imagery of New Orleans. Because previous geometric-step and arithmetic-step implementations of the triangular prism algorithm can leave a significant number of pixels unutilized when measuring local fractal dimension, the divisor-step method was developed and found to yield more accurate estimation. In addition, a new lacunarity estimator based on the triangular prism method and the gliding-box algorithm was developed and found to be better than existing gray-scale estimators for classifying land-use/land-cover from IKONOS imagery. The accuracy of fractal dimension-aided classification was less sensitive to window size than lacunarity and Moran’s I. In general, the optimal window size for the texture vector-aided approach is 27x27 to 37x37 pixels (i.e., 108x108 to 148x148 meters). As expected, the texture vector-aided approach yielded 2-16% better accuracy than the individual texture index-aided approaches. Compared to per-pixel maximum likelihood classification, the proposed genetic Bayesian classifier yielded a 12% accuracy improvement by optimizing prior probabilities with the genetic algorithm, whereas the integrated approach with a texture vector and the genetic Bayesian classifier significantly improved classification accuracy by 17-21%. Compared to the neural network classifier and genetic algorithm-support vector machines, the genetic Bayesian classifier was slightly less accurate but more computationally efficient and required less human supervision. This research not only develops a new approach of integrating texture analysis with artificial intelligence for classification, but also reveals a promising avenue of using advanced texture analysis and classification methods to associate socioeconomic statuses with remote sensing image textures.
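
    The gliding-box algorithm mentioned above has a standard formulation that is easy to sketch. The code below computes classical gliding-box lacunarity for a gray-scale window; it is not the dissertation's new triangular prism-based estimator, and the function name and parameters are illustrative.

```python
import numpy as np

def gliding_box_lacunarity(window, box_size):
    """Classical gliding-box lacunarity for a gray-scale image window.

    window   : 2-D array of pixel values (e.g. a 27x27 texture window).
    box_size : side length r of the gliding box, r <= window side.

    Returns E[M^2] / E[M]^2, where M is the "mass" (sum of pixel values)
    inside each position of the gliding box.
    """
    rows, cols = window.shape
    masses = []
    for i in range(rows - box_size + 1):
        for j in range(cols - box_size + 1):
            masses.append(window[i:i + box_size, j:j + box_size].sum())
    masses = np.asarray(masses, dtype=float)
    mean = masses.mean()
    return float((masses ** 2).mean() / (mean ** 2)) if mean > 0 else float("nan")
```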