
    A survey of smoothing techniques for ME models


    Robust Singular Smoothers For Tracking Using Low-Fidelity Data

    Tracking underwater autonomous platforms is often difficult because of noisy, biased, and discretized input data. Classic filters and smoothers based on standard assumptions of Gaussian white noise break down when presented with any of these challenges. Robust models (such as the Huber loss) and constraints (e.g. maximum velocity) are used to attenuate these issues. Here, we consider robust smoothing with singular covariance, which covers bias and correlated noise, as well as many specific model types, such as those used in navigation. In particular, we show how to combine singular covariance models with robust losses and state-space constraints in a unified framework that can handle very low-fidelity data. A noisy, biased, and discretized navigation dataset from a submerged, low-cost inertial measurement unit (IMU) package, with ultra-short baseline (USBL) data for ground truth, provides an opportunity to stress-test the proposed framework, with promising results. We show how robust modeling elements improve our ability to analyze the data, and present batch processing results for 10 minutes of data with three different frequencies of available USBL position fixes (gaps of 30 seconds, 1 minute, and 2 minutes). The results suggest that the framework can be extended to real-time tracking using robust windowed estimation.
    Comment: 9 pages, 9 figures, to be included in Robotics: Science and Systems 201
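
    The robust-loss idea in this abstract can be illustrated with a minimal sketch: smoothing a 1-D track under a Huber loss on the measurement residuals, solved by iteratively reweighted least squares. This is a generic illustration of Huber-robust smoothing, not the paper's singular-covariance framework; the function names, the lam and delta values, and the toy track with one gross outlier are all invented here.

```python
import numpy as np

def huber_weights(r, delta=1.0):
    """IRLS weights for the Huber loss: 1 near zero, delta/|r| in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_smooth(y, lam=5.0, delta=1.0, iters=20):
    """Smooth a 1-D track y by approximately minimizing
       sum_i huber(x_i - y_i) + lam * sum_i (x_{i+1} - x_i)^2
    via iteratively reweighted least squares."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
    P = lam * D.T @ D                # smoothness penalty matrix
    x = y.copy()
    for _ in range(iters):
        w = huber_weights(x - y, delta)
        # weighted normal equations: (W + P) x = W y
        x = np.linalg.solve(np.diag(w) + P, w * y)
    return x

# a straight-line track corrupted by one gross outlier
y = np.linspace(0.0, 1.0, 21)
y[10] += 5.0
x = robust_smooth(y)
```

    Because the Huber loss bounds each measurement's influence, the outlier is largely ignored rather than dragging the smoothed track with it, which is the failure mode of a plain least-squares (Gaussian) smoother.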

    Nonparametric Methods in Astronomy: Think, Regress, Observe -- Pick Any Three

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide to help astronomers choose the best method for their specific problem, along with a python library containing both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
    Comment: 19 pages, PAS
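
    As a concrete instance of nonparametric regression, here is a Nadaraya-Watson kernel smoother in a few lines of numpy. This is a standard textbook method, not one of the two new algorithms the paper introduces, and the bandwidth and synthetic test data are arbitrary choices made for illustration.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    the prediction at each query point is a locally weighted
    average of the training y's."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# noisy samples of a sine curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + 0.1 * rng.normal(size=200)

xq = np.array([np.pi / 2, np.pi, 3 * np.pi / 2])
yq = nadaraya_watson(x, y, xq)   # should track sin(xq) closely
```

    The bandwidth plays the role of the sample-size/efficiency trade-off the abstract alludes to: too small and the fit chases noise, too large and real structure is smoothed away.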

    A Survey of Location Prediction on Twitter

    Locations, e.g., countries, states, cities, and points of interest, are central to news, emergency events, and people's daily lives. Automatic identification of locations associated with or mentioned in documents has been explored for decades. As one of the most popular online social network platforms, Twitter has attracted a large number of users who send millions of tweets on a daily basis. Due to the world-wide coverage of its users and the real-time freshness of tweets, location prediction on Twitter has gained significant attention in recent years. Research efforts have been devoted to the new challenges and opportunities brought by the noisy, short, and context-rich nature of tweets. In this survey, we aim to offer an overall picture of location prediction on Twitter. Specifically, we concentrate on the prediction of user home locations, tweet locations, and mentioned locations. We first define the three tasks and review the evaluation metrics. By summarizing Twitter network, tweet content, and tweet context as potential inputs, we then structurally highlight how the problems depend on these inputs. Each dependency is illustrated by a comprehensive review of the corresponding strategies adopted in state-of-the-art approaches. In addition, we also briefly review two related problems, i.e., semantic location prediction and point-of-interest recommendation. Finally, we list future research directions.
    Comment: Accepted to TKDE. 30 pages, 1 figure
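
    A toy sketch of the content-based side of this problem: a multinomial naive Bayes classifier over tweet words, with add-one smoothing. The cities, example tweets, and training scheme are invented for illustration; the systems surveyed combine content with network and context signals in far richer ways.

```python
from collections import Counter, defaultdict
import math

def train(tweets):
    """tweets: list of (city, text) pairs.
    Returns per-city word counts, city counts, and the vocabulary."""
    word_counts, city_counts, vocab = defaultdict(Counter), Counter(), set()
    for city, text in tweets:
        words = text.lower().split()
        word_counts[city].update(words)
        city_counts[city] += 1
        vocab.update(words)
    return word_counts, city_counts, vocab

def predict(text, word_counts, city_counts, vocab):
    """Pick the city maximizing the add-one-smoothed log posterior."""
    best, best_lp = None, -math.inf
    total = sum(city_counts.values())
    for city in city_counts:
        lp = math.log(city_counts[city] / total)
        denom = sum(word_counts[city].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[city][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = city, lp
    return best

tweets = [
    ("nyc", "stuck on the subway near times square"),
    ("nyc", "bagels in brooklyn this morning"),
    ("sf", "fog rolling over the golden gate again"),
    ("sf", "bart delayed at embarcadero"),
]
model = train(tweets)
city = predict("morning fog on the golden gate", *model)
```

    Location-indicative words ("fog", "golden", "gate") dominate the posterior, which is exactly the signal the content-based approaches in the survey try to exploit at scale.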

    Atmospheric PSF Interpolation for Weak Lensing in Short Exposure Imaging Data

    A main science goal for the Large Synoptic Survey Telescope (LSST) is to measure the cosmic shear signal from weak lensing to extreme accuracy. One difficulty, however, is that with the short exposure time (≃15 seconds) proposed, the spatial variation of the Point Spread Function (PSF) shapes may be dominated by the atmosphere, in addition to optics errors. While optics errors mainly cause the PSF to vary on angular scales similar to or larger than a single CCD sensor, the atmosphere generates stochastic structures on a wide range of angular scales. It thus becomes a challenge to infer the multi-scale, complex atmospheric PSF patterns by interpolating the sparsely sampled stars in the field. In this paper we present a new method, PSFent, for interpolating the PSF shape parameters, based on reconstructing underlying shape parameter maps with a multi-scale maximum entropy algorithm. We demonstrate, using images from the LSST Photon Simulator, the performance of our approach relative to a 5th-order polynomial fit (representing the current standard) and a simple boxcar smoothing technique. Quantitatively, PSFent predicts more accurate PSF models in all scenarios, and the residual PSF errors are spatially less correlated. This improvement in PSF interpolation leads to a factor of 3.5 lower systematic errors in the shear power spectrum on scales smaller than ∼13', compared to polynomial fitting. We estimate that with PSFent and for stellar densities greater than ≃1/arcmin^2, the spurious shear correlation from PSF interpolation, after combining a complete 10-year dataset from LSST, is lower than the corresponding statistical uncertainties on the cosmic shear power spectrum, even under a conservative scenario.
    Comment: 18 pages, 12 figures, accepted by MNRAS
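
    The 5th-order polynomial baseline mentioned in the abstract can be sketched as a least-squares fit of a 2-D polynomial surface to one PSF shape parameter sampled at star positions, then evaluated anywhere in the field. This is the baseline, not PSFent's maximum-entropy reconstruction; the smooth test field and noise level are made up.

```python
import numpy as np

def poly2d_design(x, y, order):
    """Design matrix of all monomials x^i * y^j with i + j <= order."""
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.stack(cols, axis=1)

def fit_psf_surface(x, y, e, order=5):
    """Least-squares polynomial surface for one PSF shape parameter e(x, y)."""
    A = poly2d_design(x, y, order)
    coef, *_ = np.linalg.lstsq(A, e, rcond=None)
    return coef

def eval_psf_surface(coef, x, y, order=5):
    return poly2d_design(x, y, order) @ coef

# stars at random field positions sampling a smooth, optics-like pattern
rng = np.random.default_rng(1)
xs, ys = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
e_true = 0.05 * xs**2 - 0.03 * xs * ys + 0.02 * ys
coef = fit_psf_surface(xs, ys, e_true + 0.001 * rng.normal(size=300))
e_hat = eval_psf_surface(coef, np.array([0.5]), np.array([-0.5]))
```

    A global polynomial like this captures the smooth optics component well, but, as the abstract argues, it cannot follow the stochastic multi-scale structure the atmosphere imprints in short exposures.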

    Context-aware person identification in personal photo collections

    Identifying the people in photos is an important need for users of photo management systems. We present MediAssist, one such system which facilitates browsing, searching and semi-automatic annotation of personal photos, using analysis of both image content and the context in which the photo is captured. This semi-automatic annotation includes annotation of the identity of people in photos. In this paper, we focus on such person annotation, and propose person identification techniques based on a combination of context and content. We propose language modelling and nearest neighbour approaches to context-based person identification, in addition to novel face colour and image colour content-based features (used alongside face recognition and body patch features). We conduct a comprehensive empirical study of these techniques using the real private photo collections of a number of users, and show that combining context- and content-based analysis improves performance over content or context alone.
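
    Combining context- and content-based evidence is often done by late fusion of per-person scores; a minimal sketch follows. The weight alpha, the score dictionaries, and the linear combination are invented for illustration and are not MediAssist's actual fusion scheme.

```python
def fuse(content_scores, context_scores, alpha=0.6):
    """Late fusion: pick the person maximizing a weighted sum of
    content-based (e.g. face recognition) and context-based
    (e.g. time/location co-occurrence) scores."""
    people = set(content_scores) | set(context_scores)
    return max(
        people,
        key=lambda p: alpha * content_scores.get(p, 0.0)
                      + (1 - alpha) * context_scores.get(p, 0.0),
    )

# content favors alice, context favors bob; alpha decides the winner
best = fuse({"alice": 0.9, "bob": 0.2}, {"alice": 0.1, "bob": 0.8})
```

    The empirical finding in the abstract (combined analysis beats either source alone) corresponds to tuning such a fusion on real collections rather than trusting one evidence stream.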

    Multi-Scale Morphological Analysis of SDSS DR5 Survey using the Metric Space Technique

    Following novel development and adaptation of the Metric Space Technique (MST), a multi-scale morphological analysis of the Sloan Digital Sky Survey (SDSS) Data Release 5 (DR5) was performed. The technique was adapted to perform a space-scale morphological analysis by filtering the galaxy point distributions with a smoothing Gaussian function, thus giving quantitative structural information on all size scales between 5 and 250 Mpc. The analysis was performed on a dozen slices of a volume of space containing many newly measured galaxies from the SDSS DR5 survey. Using the MST, observational data were compared to galaxy samples taken from N-body simulations with current best estimates of cosmological parameters and from random catalogs. Using the maximal ranking method among MST output functions, we also develop a way to quantify the overall similarity of the observed samples with the simulated samples.
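
    The space-scale filtering step (smoothing the galaxy point distribution with Gaussians of increasing width) can be sketched in 1-D. The cluster positions, counts, and scales below are toy values chosen to show the effect, not SDSS data or the MST's output functions.

```python
import numpy as np

def smoothed_density(points, grid, sigma):
    """Gaussian-smoothed density of a 1-D point set, evaluated on a grid."""
    d = grid[:, None] - points[None, :]
    k = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return k.sum(axis=1)

rng = np.random.default_rng(2)
# two "clusters" of galaxies on a 0-100 Mpc line
points = np.concatenate([rng.normal(30, 2, 100), rng.normal(70, 2, 100)])
grid = np.linspace(0.0, 100.0, 201)

fine = smoothed_density(points, grid, sigma=2.0)    # resolves both clusters
coarse = smoothed_density(points, grid, sigma=30.0) # blends them into one blob
```

    Sweeping sigma across the 5-250 Mpc range and measuring morphology at each scale is what turns a single point catalog into the multi-scale structural description the MST quantifies.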

    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
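
    The first of the three matrix classes, the term-document matrix, can be sketched directly: each column is a document's word-count vector, and cosine similarity between columns measures document similarity. The three-sentence corpus is invented; real VSMs add weighting (e.g. tf-idf) and dimensionality reduction on top of this raw matrix.

```python
import numpy as np

docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog sat on the log",
    "d3": "stocks fell on trade fears",
}
vocab = sorted({w for text in docs.values() for w in text.split()})

# term-document matrix: rows are terms, columns are documents
M = np.array([[text.split().count(w) for text in docs.values()] for w in vocab])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

sim_12 = cosine(M[:, 0], M[:, 1])  # two similar sentences
sim_13 = cosine(M[:, 0], M[:, 2])  # unrelated topics
```

    Even this raw count matrix already ranks the two pet sentences as far more similar to each other than to the finance sentence, which is the core intuition behind term-document VSMs for retrieval.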