
    Parametric Regression on the Grassmannian

    We address the problem of fitting parametric curves on the Grassmann manifold for the purpose of intrinsic parametric regression. As customary in the literature, we start from the energy-minimization formulation of linear least-squares in Euclidean spaces and generalize this concept to general nonflat Riemannian manifolds, following an optimal-control point of view. We then specialize this idea to the Grassmann manifold and demonstrate that it yields a simple, extensible, and easy-to-implement solution to the parametric regression problem. In fact, it allows us to extend the basic geodesic model to (1) a time-warped variant and (2) cubic splines. We demonstrate the utility of the proposed solution on different vision problems, such as shape regression as a function of age, traffic-speed estimation, and crowd-counting from surveillance video clips. Most notably, these problems can be conveniently solved within the same framework without any specifically tailored steps along the processing pipeline.
    Comment: 14 pages, 11 figures
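    The geodesic primitive underlying the basic model can be made concrete. The sketch below is a generic construction from Grassmann geometry (principal angles via SVD), not the authors' implementation; the function name `grassmann_geodesic` is hypothetical:

    ```python
    import numpy as np

    def grassmann_geodesic(X, Y):
        """Geodesic on Gr(p, n) from span(X) to span(Y).
        X, Y: n x p matrices with orthonormal columns; assumes X^T Y invertible.
        Returns a function gamma(t) with gamma(0) spanning X, gamma(1) spanning Y."""
        M = X.T @ Y
        A = (Y - X @ M) @ np.linalg.inv(M)          # lifts Y into the tangent cone at X
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        theta = np.arctan(S)                        # principal angles between the subspaces
        def gamma(t):
            G = X @ Vt.T @ np.diag(np.cos(t * theta)) + U @ np.diag(np.sin(t * theta))
            Q, _ = np.linalg.qr(G)                  # re-orthonormalize the columns
            return Q
        return gamma
    ```

    Regression would then minimize a sum of squared geodesic distances from data points to such a curve; the snippet only supplies the geodesic building block.
    
    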

    A method for the microlensed flux variance of QSOs

    A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disk models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual QSOs, it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
    Comment: 10 pages, 6 figures
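    As a small illustration of one of the stated inputs (not the variance calculation itself), the snippet below numerically computes the absolute square of the Fourier transform of a unit-flux Gaussian source of size r and checks it against the closed form |S̃(k)|² = exp(−|k|²r²); the grid sizes are arbitrary choices:

    ```python
    import numpy as np

    r = 1.5                    # Gaussian source size (source-plane units)
    N, L = 256, 40.0           # grid points and box size (assumed, for illustration)

    # Unit-total-flux Gaussian surface brightness on a centered grid
    x = (np.arange(N) - N // 2) * (L / N)
    X, Y = np.meshgrid(x, x)
    S = np.exp(-(X**2 + Y**2) / (2 * r**2)) / (2 * np.pi * r**2)

    # Continuous-FT approximation: fft2 times the pixel area, origin handled by shifts
    St = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(S))) * (L / N) ** 2
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=L / N))
    KX, KY = np.meshgrid(k, k)

    power = np.abs(St) ** 2                     # |S~(k)|^2, the required input
    analytic = np.exp(-(KX**2 + KY**2) * r**2)  # closed form for a Gaussian
    ```

    The k = 0 value of `power` equals the squared total flux, here 1, which is a convenient normalization check.
    
    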

    The Third Gravitational Lensing Accuracy Testing (GREAT3) Challenge Handbook

    The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with a goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions, in the presence of far larger intrinsic galaxy shapes and distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects include realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and the combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information.
    Comment: 30 pages, 13 figures, submitted for publication, with minor edits (v2) to address comments from the anonymous referee. Simulated data are available for download and participants can find more information at http://great3.projects.phys.ucl.ac.uk/leaderboard
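    To make the shape-measurement task concrete, here is a textbook unweighted quadrupole-moment ellipticity estimator. This is not the challenge's measurement pipeline: real methods must additionally correct for the blurring kernel and noise, which is precisely what the challenge tests.

    ```python
    import numpy as np

    def ellipticity(img, x0, y0):
        """Unweighted quadrupole-moment ellipticity (e1, e2) about (x0, y0)."""
        ny, nx = img.shape
        y, x = np.mgrid[0:ny, 0:nx]
        f = img.sum()
        dx, dy = x - x0, y - y0
        qxx = (img * dx * dx).sum() / f
        qyy = (img * dy * dy).sum() / f
        qxy = (img * dx * dy).sum() / f
        denom = qxx + qyy
        return (qxx - qyy) / denom, 2 * qxy / denom

    # Elliptical Gaussian test image, sigma_x = 4, sigma_y = 2,
    # so the expected e1 is (16 - 4) / (16 + 4) = 0.6 and e2 is 0.
    n = 129
    y, x = np.mgrid[0:n, 0:n] - n // 2
    img = np.exp(-(x**2 / (2 * 4**2) + y**2 / (2 * 2**2)))
    e1, e2 = ellipticity(img, n // 2, n // 2)
    ```

    For a cosmic-shear signal the distortion of interest is of order one percent of this test value, which is why the challenge emphasizes precision.
    
    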

    Discovery and recognition of motion primitives in human activities

    We present a novel framework for the automatic discovery and recognition of motion primitives in videos of human activities. Given the 3D pose of a human in a video, human motion primitives are discovered by optimizing the `motion flux', a quantity which captures the motion variation of a group of skeletal joints. A normalization of the primitives is proposed in order to make them invariant to a subject's anatomical variations and to the data sampling rate. The discovered primitives are unknown and unlabeled and are grouped, without supervision, into classes via a hierarchical non-parametric Bayes mixture model. Once classes are determined and labeled, they are further analyzed to establish models for recognizing the discovered primitives. Each primitive model is defined by a set of learned parameters. Given new video data and the estimated pose of the subject in the video, the motion is segmented into primitives, which are recognized with a probability given by the parameters of the learned models. Using our framework we build a publicly available dataset of human motion primitives, using sequences taken from well-known motion capture datasets. We expect that our framework, by providing an objective way of discovering and categorizing human motion, will be a useful tool in numerous research fields including video analysis, human-inspired motion generation, learning by demonstration, intuitive human-robot interaction, and human behavior analysis.
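    The exact definition of the motion flux is the paper's. As a loose illustration of flux-based segmentation only, the proxy below sums joint speeds per frame and thresholds the result; both the proxy quantity and the threshold rule are assumptions, not the paper's method:

    ```python
    import numpy as np

    def motion_flux_proxy(poses, dt=1.0):
        """poses: (T, J, 3) joint positions over T frames.
        Returns a per-frame scalar: summed joint speed (a stand-in for motion flux)."""
        vel = np.diff(poses, axis=0) / dt               # (T-1, J, 3) finite-difference velocities
        return np.linalg.norm(vel, axis=2).sum(axis=1)  # sum of speeds over joints

    def segment(flux, thresh):
        """Split frames into contiguous segments where flux exceeds thresh."""
        active = flux > thresh
        edges = np.flatnonzero(np.diff(active.astype(int)))
        bounds = np.r_[0, edges + 1, active.size]
        return [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if active[a]]
    ```

    In the paper's framework the segments would then be normalized and clustered by the hierarchical non-parametric Bayes mixture; this sketch stops at segmentation.
    
    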

    3D velocity-depth model building using surface seismic and well data

    The objective of this work was to develop techniques that could be used to rapidly build a three-dimensional velocity-depth model of the subsurface, using the widest possible variety of data available from conventional seismic processing and allowing for moderate structural complexity. The result is a fully implemented inversion methodology that has been applied successfully to a large number of diverse case studies. A model-based inversion technique is presented and shown to be significantly more accurate than the analytical methods of velocity determination that dominate industrial practice. The inversion itself is based around two stages of ray-tracing. The first takes picked interpretations in migrated time and maps them into depth using a hypothetical interval velocity field; the second checks the validity of this field by simulating the full kinematics of seismic acquisition and processing as accurately as possible. Inconsistencies between the actual and the modelled data can then be used to update the interval velocity field using a conventional linear scheme. In order to produce a velocity-depth model that ties the wells, the inversion must include anisotropy. Moreover, a strong correlation between anisotropy and lithology is found. Unfortunately, surface seismic and well-tie data are not usually sufficient to uniquely resolve all the anisotropy parameters; however, the degree of non-uniqueness can be measured quantitatively by a resolution matrix, which demonstrates that the model parameter trade-offs are highly dependent on the model and the seismic acquisition. The model parameters are further constrained by introducing well seismic traveltimes into the inversion. These introduce a greater range of propagation angles and reduce the non-uniqueness.
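    The conventional linear update step can be sketched on a toy 1-D problem in which vertical one-way traveltimes are linear in the layer slownesses. This is a generic damped least-squares update, not the thesis' implementation, and the layer values are made up:

    ```python
    import numpy as np

    # Forward problem: t = G s, where G[i, j] = h[j] for layers j above receiver i
    h = np.array([500.0, 400.0, 600.0])                  # layer thicknesses (m), assumed
    s_true = 1.0 / np.array([2000.0, 2500.0, 3200.0])    # true slownesses (s/m)
    G = np.tril(np.ones((3, 3))) * h                     # cumulative-layer operator
    t_obs = G @ s_true                                   # "observed" traveltimes

    # One damped linear update from a smooth starting model:
    #   s <- s + (G^T G + lam I)^{-1} G^T r, with residual r = t_obs - G s
    s = np.full(3, 1.0 / 2500.0)
    lam = 1e-12                                          # tiny damping for stability
    r = t_obs - G @ s
    s = s + np.linalg.solve(G.T @ G + lam * np.eye(3), G.T @ r)
    v = 1.0 / s                                          # recovered interval velocities
    ```

    Because this toy problem is exactly linear, one update recovers the model; in the real anisotropic case the ray-traced forward model is nonlinear and such updates are iterated.
    
    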