
    Projective rectification from the fundamental matrix

    This paper describes a direct, self-contained method for planar image rectification of stereo pairs, based solely on an examination of the Fundamental matrix. An improved procedure is given for deriving the two projective transformations that horizontally align all the epipolar projections, and a novel approach is proposed to uniquely optimise each transformation so as to minimise perspective distortion. This ensures that the rectified images resemble the original images as closely as possible. Detailed results show that the rectification precision exactly matches the estimation error of the Fundamental matrix, and in tests the residual perspective distortion averages less than one percent viewpoint distortion. Together, these factors offer superior robustness and performance compared with existing techniques.
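    The paper's own distortion-minimising transforms are not spelled out in the abstract, but OpenCV ships a standard Hartley-style uncalibrated rectification that derives the two homographies from the Fundamental matrix in the same spirit; a minimal baseline sketch (not the paper's improved method):

```python
import cv2
import numpy as np

def rectify_pair(pts1, pts2, img_size):
    """Derive rectifying homographies for a stereo pair from correspondences.

    pts1, pts2: Nx2 float arrays of matched points; img_size: (width, height).
    """
    # Robustly estimate the fundamental matrix from the correspondences.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inl1 = pts1[mask.ravel() == 1]
    inl2 = pts2[mask.ravel() == 1]
    # Homographies H1, H2 that map epipolar lines to horizontal scanlines.
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, img_size)
    return F, H1, H2
```

    Warping each image with `cv2.warpPerspective` and the corresponding homography then gives a row-aligned pair; the paper's contribution is, in effect, a better-conditioned choice of H1 and H2.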

    Gaussian states under coarse-grained continuous variable measurements

    The quantum-to-classical transition of a quantum state is a topic of great interest from both fundamental and practical perspectives. Coarse-graining in quantum measurement has recently been suggested as a possible account of this transition, in addition to the usual decoherence model. We here investigate the reconstruction of a Gaussian state (single-mode and two-mode) from coarse-grained homodyne measurements. To this aim, we employ two methods, direct reconstruction of the covariance matrix and maximum likelihood estimation (MLE), and compare the state reconstructed under each scheme with the state that results from interaction with a Gaussian (squeezed thermal) reservoir. We clearly demonstrate that the coarse-graining model, though applied equally to all quadrature amplitudes, is not compatible with the decoherence model of a thermal (phase-insensitive) reservoir. Furthermore, we compare the performance of the direct reconstruction and MLE methods by investigating the fidelity and the nonclassicality of the reconstructed states, and show that MLE generally yields a more reliable reconstruction, particularly when no information on a reference frame (the phase of the input state) is available.
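    As an illustration of the direct approach (my sketch, not the authors' code), a single-mode covariance matrix can be assembled from coarse-grained homodyne records of the x, p, and 45-degree quadratures; binning the outcomes with width delta is the coarse-graining step:

```python
import numpy as np

def coarse_grain(samples, delta):
    """Bin continuous homodyne outcomes into detector bins of width delta."""
    return delta * np.round(samples / delta)

def covariance_from_quadratures(x, p, q45, delta):
    """Directly reconstruct the 2x2 covariance matrix of a single mode from
    coarse-grained samples of x, p, and the rotated quadrature (x+p)/sqrt(2)."""
    xg, pg, qg = (coarse_grain(s, delta) for s in (x, p, q45))
    vx, vp, vq = np.var(xg), np.var(pg), np.var(qg)
    # Var((x+p)/sqrt(2)) = (Vx + Vp)/2 + Cov(x,p), so the covariance term is:
    cxp = vq - 0.5 * (vx + vp)
    return np.array([[vx, cxp],
                     [cxp, vp]])
```

    Note that binning inflates each measured variance by roughly delta**2/12 to leading order (Sheppard's correction).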

    Refractive Structure-From-Motion Through a Flat Refractive Interface

    Recovering 3D scene geometry from underwater images involves the Refractive Structure-from-Motion (RSfM) problem, where the image distortions caused by light refraction at the interface between different propagation media invalidate the single-viewpoint assumption. Direct use of the pinhole camera model in RSfM leads to inaccurate camera pose estimation and, consequently, to drift. RSfM methods have been thoroughly studied for the case of a thick glass interface, which introduces two refractive interfaces between the camera and the viewed scene. When the camera lens is in direct contact with the water, however, there is only one refractive interface. By explicitly considering this refractive interface, we develop a succinct derivation of the refractive fundamental matrix in the form of the generalised epipolar constraint for an axial camera. We use the refractive fundamental matrix to refine initial pose estimates obtained under the pinhole model. This strategy allows us to robustly estimate underwater camera poses where other methods suffer from high noise sensitivity. We also formulate a new four-view constraint enforcing camera pose consistency along a video, which leads to a novel RSfM framework. For validation, we use synthetic data to show the numerical properties of our method, and we provide results on real data to demonstrate performance within laboratory settings and for applications in endoscopy.
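    The geometric primitive behind all of this is a camera ray bending once at the flat interface; a minimal sketch of vector-form Snell's law (my illustration, not the paper's refractive-fundamental-matrix machinery):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract a unit ray direction d at a flat interface with unit normal n.

    n points back toward the camera side; n1, n2 are the refractive indices
    of the incident and transmitting media (e.g. 1.0 for air, 1.33 for water).
    """
    r = n1 / n2
    cos_i = -np.dot(n, d)                  # cosine of the angle of incidence
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                        # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# A forward-looking ray crossing an air/water interface with normal -z:
t = refract(np.array([0.1, 0.0, 0.995]), np.array([0.0, 0.0, -1.0]), 1.0, 1.33)
```

    Because every refracted back-projected ray intersects the interface normal through the camera centre, the system is an axial camera, which is exactly why the generalised epipolar constraint applies.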

    Homography-Based Positioning and Planar Motion Recovery

    Planar motion is an important and frequently occurring situation in mobile robotics applications. This thesis concerns estimation of the ego-motion and pose of a single downwards-oriented camera under the assumptions of planar motion and known internal camera parameters. The so-called essential matrix (or its uncalibrated counterpart, the fundamental matrix) is frequently used in computer vision applications to compute a 3D reconstruction of the camera locations and the observed scene. However, if the observed points are expected to lie on a plane, e.g. the ground plane, the determination of these matrices becomes an ill-posed problem. Instead, methods based on homographies are better suited to this situation. One section of this thesis is concerned with the extraction of the camera pose and ego-motion from such homographies. We present both a direct SVD-based method and an iterative method, both of which solve this problem. The iterative method is extended to allow simultaneous determination of the camera tilt from several homographies obeying the same planar motion model. This extension improves the robustness of the original method and provides consistent tilt estimates for the frames used in the estimation. The methods are evaluated in experiments on both real and synthetic data. Another part of the thesis deals with the problem of computing the homographies from point correspondences. With conventional homography estimation methods, the resulting homography belongs to too general a class and is not guaranteed to be compatible with the planar motion assumption. For this reason, we enforce the planar motion model at the homography estimation stage with the help of a new homography solver that uses a number of polynomial constraints on the entries of the homography matrix. In addition to giving a homography of the right type, this method uses only 2.5 point correspondences instead of the conventional four, which is advantageous, e.g., when used in a RANSAC framework for outlier removal.
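    The thesis's dedicated planar-motion solvers are not in standard libraries, but the generic pose-from-homography step it builds on is; a minimal sketch using OpenCV's stock decomposition (a stand-in for illustration, not the SVD-based or 2.5-point methods described above):

```python
import cv2
import numpy as np

def poses_from_homography(H, K):
    """Decompose homography H (given camera intrinsics K) into candidate
    (R, t, plane normal) solutions for the relative camera motion."""
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to four mathematically valid solutions are returned; planar-motion
    # and cheirality (points-in-front-of-camera) constraints are then used
    # to select the physically meaningful one.
    return list(zip(Rs, ts, normals))
```

    Enforcing planar motion up front, as the thesis does, shrinks this ambiguity at the estimation stage rather than filtering solutions after the fact.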

    High-resolution sinusoidal analysis for resolving harmonic collisions in music audio signal processing

    Many music signals can largely be considered an additive combination of multiple sources, such as musical instruments or voice. If the musical sources are pitched instruments, the spectra they produce are predominantly harmonic and are thus well suited to an additive sinusoidal model. However, due to resolution limits inherent in time-frequency analyses, when the harmonics of multiple sources occupy equivalent time-frequency regions, their individual properties are additively combined in the time-frequency representation of the mixed signal. Any such time-frequency point in a mixture where multiple harmonics overlap produces a single observation from which the contributions of the individual harmonics cannot be trivially deduced. These overlaps are referred to as overlapping partials or harmonic collisions. If one wishes to infer information about individual sources in music mixtures, the information carried in regions with collided harmonics becomes unreliable due to interference from other sources. This interference has ramifications in a variety of music signal processing applications, such as multiple fundamental frequency estimation, source separation, and instrumentation identification. This thesis addresses harmonic collisions in music signal processing applications. As a solution to the harmonic collision problem, a class of signal-subspace-based high-resolution sinusoidal parameter estimators is explored. Specifically, the direct matrix pencil method, or equivalently the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method, is used with the goal of estimating the salient parameters of individual harmonics that occupy equivalent time-frequency regions. This estimation method is adapted here to be applicable to time-varying signals such as musical audio. While high-resolution methods have been previously explored in the context of music signal processing, previous work has not addressed whether such methods truly produce high-resolution sinusoidal parameter estimates in real-world music audio signals; this thesis answers that question by directly probing the capability of this form of sinusoidal parameter estimation to resolve collided harmonics. The capabilities of the analysis method are also explored in the context of music signal processing applications: potential benefits of high-resolution sinusoidal analysis are examined in experiments involving multiple fundamental frequency estimation and audio source separation. This work shows that there are indeed benefits to high-resolution sinusoidal analysis in music signal processing applications, especially compared with methods that derive sinusoidal parameter estimates from more traditional time-frequency representations. The benefits are most evident in multiple fundamental frequency estimation, where substantial performance gains are seen; high-resolution analysis in the context of computational auditory scene analysis-based source separation performs similarly to existing comparable methods.
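    For concreteness, here is a minimal single-frame ESPRIT sketch of the kind of estimator the thesis builds on (my simplification: one static frame, no adaptation to time-varying signals):

```python
import numpy as np

def esprit_frequencies(x, K, fs):
    """Estimate the frequencies (Hz) of K complex exponentials in frame x.

    For a real signal, request 2*K components and keep the positive half.
    """
    N = len(x)
    L = N // 2                       # pencil parameter / subspace window
    # Hankel data matrix whose column space spans the signal subspace.
    X = np.array([x[i:i + L] for i in range(N - L + 1)]).T
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Us = U[:, :K]                    # K-dimensional signal subspace
    # Rotational invariance: shifting the subspace by one sample applies a
    # rotation Phi whose eigenvalues are the signal poles exp(j*omega).
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    poles = np.linalg.eigvals(Phi)
    return np.angle(poles) * fs / (2.0 * np.pi)

# Two real tones 4 Hz apart in a 46 ms frame at 44.1 kHz (closer than the
# frame's DFT resolution of about 21.5 Hz) are still separated:
fs, N = 44100.0, 2048
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 440.0 * t) + 0.5 * np.sin(2 * np.pi * 444.0 * t)
print(np.sort(esprit_frequencies(x, 4, fs)))  # ~ [-444, -440, 440, 444]
```

    The 440 Hz and 444 Hz tones collide in any short-frame Fourier representation, which is precisely the resolution limit the subspace method sidesteps.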

    Population dynamics

    Increases or decreases in the size of populations over space and time are, arguably, the motivation for much of pure and applied ecological research. The fundamental model for the dynamics of any population is straightforward: the net change over time in the abundance of some population is the simple difference between the number of additions (individuals entering the population) and the number of subtractions (individuals leaving the population). Of course, the precise nature of the pattern and process of these additions and subtractions is often complex, and population biology is replete with fairly dense mathematical representations of both processes. While there is no doubt that analysis of such abstract descriptions of populations has been of considerable value in advancing our understanding, there has often been a palpable discomfort when the ‘beautiful math’ is faced with the often ‘ugly realities’ of empirical data. In some cases, this attempted merger is abandoned altogether because of the paucity of ‘good empirical data’ with which the theoretician can modify and evaluate more conceptually–based models. In other cases, the lack of ‘data’ is more accurately described as a lack of robust estimates of one or more parameters. It is in this arena that methods developed to analyze multiple-encounter data from individually marked organisms have seen perhaps the greatest advances. These methods have rapidly evolved to facilitate not only estimation of one or more vital rates critical to population modeling and analysis, but also direct estimation of both the dynamics of populations (e.g., Pradel, 1996) and the factors influencing those dynamics (e.g., Nichols et al., 2000). The interconnections between the various vital rates, their estimation, and their incorporation into models were the general subject of the plenary presentation by Hal Caswell (Caswell & Fujiwara, 2004). Caswell notes that although interest has traditionally focused on estimation of survival rate (arguably, data from marked individuals have been used to estimate survival more than any other parameter, save perhaps abundance), it is only one of many transitions in the life cycle. Others discussed include transitions between age or size classes, breeding states, and physical locations. The demographic consequences of these transitions can be captured by matrix population models, and such models provide a natural link connecting multi–stage mark–recapture methods and population dynamics. The utility of the matrix approach for both prospective and retrospective analysis of variation in the dynamics of populations is well known; such comparisons of prospective and retrospective results are fundamental to considerations of conservation management (sensu Caswell, 2000). What is intriguing is the degree to which these methods can be combined, or contrasted, with more direct estimation of one or more measures of the trajectory of a population (e.g., Sandercock & Beissinger, 2002). The five additional papers presented in the population dynamics session clearly reflected these considerations. In particular, the three papers submitted for this volume indicate the various ways in which complex empirical data can be analyzed, and often combined with more classical modeling approaches, to provide more robust insights into the dynamics of the study population.
The paper by Francis & Saurola (2004) is an example of rigorous analysis and modeling applied to a large, carefully collected dataset from a long–term study of the biology of the Tawny Owl. Using a combination of live encounters and dead recoveries, the authors were able to separate the relative contributions of various processes (emigration, mortality) to variation in survival rates. These analyses were combined with periodic matrix models to compare direct estimates of changes in population size (based on both census and mark–recapture analysis) with model estimates. The utility of combining sources of information in analyses of populations was the explicit subject of the other two papers. Gauthier & Lebreton (2004) draw on a long–term study of an Arctic–breeding Goose population for which extensive mark–recapture, ring–recovery, and census data are available. Their primary goal is to use these various sources of information to evaluate the effect of increased harvests on the dynamics of the population. A number of methods are compared; most notably, they describe an approach based on the Kalman filter which allows different sources of information to be used in the same model, namely demographic data (i.e., the transition matrix) and census data (i.e., the annual survey). They note that one advantage of this approach is that it attempts to minimize the uncertainties associated with both the survey and the demographic parameters, based on the variance of each estimate. The final paper, by Brooks, King and Morgan (Brooks et al., 2004), extends the notion of combining information in a common model further. They present a Bayesian analysis of joint ring–recovery and census data using a state–space model that allows for the fact that not all members of the population are directly observable. They then impose a Leslie–matrix–based model on the true population counts, describing the natural birth–death and age-transition processes. Using a Markov chain Monte Carlo (MCMC) approach (which eliminates the need for some of the standard assumptions often invoked when using a Kalman filter), Brooks and colleagues describe methods to combine information, including potentially relevant covariates that might explain some of the variation, within a larger framework that allows for discrimination (selection) amongst alternative models. We submit that all of the papers presented in this session clearly indicate significant interest in approaches for combining data and modeling approaches. The Bayesian framework appears a natural one for this effort, since it not only provides a rigorous way to evaluate and integrate multiple sources of information, but also an explicit mechanism to accommodate various sources of uncertainty about the system. With the advent of numerical approaches to addressing some of the traditionally ‘tricky’ parts of Bayesian inference (e.g., MCMC), and relatively user–friendly software, we suspect that there will be a marked increase in the application of Bayesian inference to the analysis of population dynamics. We believe that the papers presented in this and other sessions are harbingers of this trend.
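    As a concrete anchor for the matrix population models referred to throughout, here is a minimal Leslie-matrix sketch (the three-stage rates are hypothetical, chosen for illustration only):

```python
import numpy as np

# Hypothetical 3-stage Leslie matrix: the top row holds per-stage
# fecundities, the sub-diagonal holds survival/transition probabilities.
A = np.array([[0.0, 1.2, 2.5],
              [0.4, 0.0, 0.0],
              [0.0, 0.7, 0.0]])

n = np.array([100.0, 30.0, 10.0])   # current abundances in each stage
n_next = A @ n                       # projected abundances one step ahead

# The asymptotic growth rate lambda is the dominant eigenvalue of A
# (real and positive for a non-negative, primitive matrix).
lam = np.max(np.abs(np.linalg.eigvals(A)))
print(n_next, lam)                   # lambda > 1: growth; lambda < 1: decline
```

    Prospective analyses perturb the entries of A to ask how lambda would respond; retrospective analyses ask which observed variation in those entries actually drove past changes.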

    A general and efficient method for estimating continuous IBD functions for use in genome scans for QTL

    Background: Identity by descent (IBD) matrix estimation is a central component in the mapping of Quantitative Trait Loci (QTL) using variance component models. A large number of algorithms have been developed to estimate IBD between individuals at discrete locations in the genome, for use in genome scans to detect QTL affecting various traits of interest in experimental animal, human and agricultural pedigrees. Here, we propose a new approach that estimates IBD as continuous functions rather than as discrete values.
    Results: Estimation of IBD functions improved the computational efficiency and memory usage of genome scanning for QTL. We explored two approaches to obtaining continuous marker-bracket IBD functions. By re-implementing an existing fast deterministic IBD-estimation method, we show that this approach yields IBD functions that produce exactly the same IBD as the original algorithm, but with a more than 2-fold improvement in computational efficiency and a considerably lower memory requirement for storing the resulting genome-wide IBD. By developing a general IBD-function approximation algorithm, we show that it is possible to estimate marker-bracket IBD functions from IBD matrices estimated at marker locations by any existing IBD estimation algorithm. The general algorithm provides approximations that lead to QTL variance component estimates that, even in worst-case scenarios, are very similar to the true values. Storing IBD as polynomial IBD functions was also shown to reduce the amount of memory required in genome scans for QTL.
    Conclusion: In addition to direct improvements in computational and memory efficiency, estimation of IBD functions is a fundamental step towards developing and implementing new efficient optimization algorithms for high-precision localization of QTL. Here, we discuss and test two approaches to estimating IBD functions based on existing IBD estimation algorithms. Our approaches provide immediately useful techniques for single-QTL analyses in the variance component QTL mapping framework. They will, however, be particularly useful in genome scans for multiple interacting QTL, where improvements in both computational and memory efficiency are key to the successful development of efficient optimization algorithms that will allow widespread use of this methodology.
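    The idea of a marker-bracket IBD function can be illustrated with the simplest member of the family, linear interpolation between the IBD matrices estimated at the two flanking markers (my sketch; the paper fits higher-order polynomials per bracket):

```python
import numpy as np

def ibd_at(pos, pos_left, pos_right, ibd_left, ibd_right):
    """Approximate the n x n IBD matrix at genome position pos inside one
    marker bracket by interpolating the flanking-marker IBD matrices."""
    w = (pos - pos_left) / (pos_right - pos_left)   # 0 at left, 1 at right
    return (1.0 - w) * ibd_left + w * ibd_right

# A genome scan then evaluates the function at any test position instead of
# storing a dense grid of precomputed IBD matrices:
left = np.array([[1.0, 0.5], [0.5, 1.0]])
right = np.array([[1.0, 0.25], [0.25, 1.0]])
ibd_mid = ibd_at(12.0, 10.0, 15.0, left, right)
```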