6,079 research outputs found

    Algorithms for Colourful Simplicial Depth and Medians in the Plane

    The colourful simplicial depth of a point x in the plane relative to a configuration of n points in k colour classes is exactly the number of closed simplices (triangles) with vertices from 3 different colour classes that contain x in their convex hull. We consider the problems of efficiently computing the colourful simplicial depth of a point x, and of finding a point, called a median, that maximizes colourful simplicial depth. For computing the colourful simplicial depth of x, our algorithm runs in time O(n log(n) + kn) in general, and in O(kn) if the points are sorted around x. For finding the colourful median, we get a time of O(n^4). For comparison, the running times of the best known algorithms for the monochrome versions of these problems are O(n log(n)) in general, improving to O(n) if the points are sorted around x, for monochrome depth, and O(n^4) for finding a monochrome median. Comment: 17 pages, 8 figures
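    The paper's algorithms are not reproduced here, but the definition above can be checked directly. The sketch below counts, by brute force, the closed triangles with vertices from three distinct colour classes that contain x; the function names and the dict-of-colour-classes input format are illustrative assumptions, not from the paper.

```python
# Brute-force colourful simplicial depth in the plane (cubic time, not the
# paper's O(n log n + kn) algorithm). Point-in-triangle uses signed areas.
from itertools import combinations

def sign(o, a, b):
    """Twice the signed area of triangle (o, a, b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_closed_triangle(x, a, b, c):
    """True if x lies in the closed triangle abc (either orientation)."""
    d1, d2, d3 = sign(x, a, b), sign(x, b, c), sign(x, c, a)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)  # all signs agree, or x is on an edge

def colourful_simplicial_depth(x, coloured_points):
    """coloured_points: dict mapping a colour label to a list of (x, y) points."""
    depth = 0
    for c1, c2, c3 in combinations(coloured_points, 3):
        for a in coloured_points[c1]:
            for b in coloured_points[c2]:
                for c in coloured_points[c3]:
                    if in_closed_triangle(x, a, b, c):
                        depth += 1
    return depth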

    Photon-Efficient Computational 3D and Reflectivity Imaging with Single-Photon Detectors

    Capturing depth and reflectivity images at low light levels from active illumination of a scene has wide-ranging applications. Conventionally, even with single-photon detectors, hundreds of photon detections are needed at each pixel to mitigate Poisson noise. We develop a robust method for estimating depth and reflectivity using on the order of 1 detected photon per pixel, averaged over the scene. Our computational imager combines physically accurate single-photon counting statistics with exploitation of the spatial correlations present in real-world reflectivity and 3D structure. Experiments conducted in the presence of strong background light demonstrate that our computational imager accurately recovers scene depth and reflectivity, while traditional maximum-likelihood-based imaging methods yield highly noisy estimates. Our framework increases photon efficiency 100-fold over traditional processing and also modestly improves upon first-photon imaging under a total acquisition time constraint in raster-scanned operation. Thus, our new imager will be useful for rapid, low-power, and noise-tolerant active optical imaging, and its fixed dwell time will facilitate parallelization through use of a detector array. Comment: 11 pages, 8 figures
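    As a point of reference for the gains claimed above, the sketch below implements the naive pixelwise baseline: maximum-likelihood reflectivity from photon counts and depth from mean times of flight, followed by a crude median filter standing in for the paper's principled use of spatial correlations. All array names, shapes, and the filter choice are illustrative assumptions, not the paper's algorithm.

```python
# Pixelwise baseline for low-flux active imaging: reflectivity from detection
# rates, depth from mean photon time of flight, then crude spatial smoothing.
import numpy as np
from scipy.ndimage import median_filter

C = 3e8  # speed of light, m/s

def pixelwise_estimates(counts, time_sums, n_pulses):
    """counts: detections per pixel; time_sums: per-pixel sum of arrival times (s)."""
    # At low flux the ML reflectivity is roughly proportional to the detection rate.
    reflectivity = counts / n_pulses
    # ML depth from the mean time of flight (undefined where counts == 0).
    with np.errstate(invalid="ignore", divide="ignore"):
        depth = np.where(counts > 0, C * (time_sums / counts) / 2.0, np.nan)
    return reflectivity, depth

def smooth(depth):
    """Very crude stand-in for exploiting spatial correlations: a 3x3 median filter."""
    return median_filter(np.nan_to_num(depth), size=3)
```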

    A Geometric Approach to Covariance Matrix Estimation and its Applications to Radar Problems

    A new class of disturbance covariance matrix estimators for radar signal processing applications is introduced, following a geometric paradigm. Each estimator is associated with a given unitarily invariant norm and performs the projection of the sample covariance matrix onto a specific set of structured covariance matrices. Regardless of the considered norm, an efficient solution technique to handle the resulting constrained optimization problem is developed. Specifically, it is shown that the new family of distribution-free estimators shares a shrinkage-type form; moreover, the eigenvalue estimate requires only the solution of a one-dimensional convex problem whose objective function depends on the considered unitary norm. For the two most common norm instances, i.e., Frobenius and spectral, very efficient algorithms are developed to solve the aforementioned one-dimensional optimization, leading to almost-closed-form covariance estimates. At the analysis stage, the performance of the new estimators is assessed in terms of achievable Signal to Interference plus Noise Ratio (SINR) for both spatial and Doppler processing, assuming different data statistical characterizations. The results show that interesting SINR improvements over some counterparts available in the open literature can be achieved, especially in training-starved regimes. Comment: submitted for journal publication
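    To make the shrinkage-type structure concrete, the sketch below works out one plausible instance under the Frobenius norm: the sample covariance matrix is projected onto covariance matrices whose condition number is at most K_MAX by keeping the eigenvectors and clipping the eigenvalues, with the clipping level found by a one-dimensional convex search. The constraint set, the value of K_MAX, and all names are illustrative assumptions rather than the paper's exact formulation.

```python
# Frobenius-norm projection of a sample covariance matrix (SCM) onto
# covariances with bounded condition number: clip eigenvalues to [u, K_MAX*u],
# where u is found by a one-dimensional convex minimization.
import numpy as np
from scipy.optimize import minimize_scalar

K_MAX = 30.0  # illustrative condition-number bound

def project_scm(scm):
    lam, U = np.linalg.eigh(scm)            # eigendecomposition of the SCM

    def frob_cost(u):                       # squared Frobenius distance to SCM
        clipped = np.clip(lam, u, K_MAX * u)
        return np.sum((clipped - lam) ** 2)

    res = minimize_scalar(frob_cost, bounds=(1e-12, lam.max()), method="bounded")
    d = np.clip(lam, res.x, K_MAX * res.x)  # shrunk eigenvalues
    return (U * d) @ U.conj().T             # structured covariance estimate
```

    The one-dimensional search mirrors the structure described in the abstract: the eigenvectors come for free from the SCM, and only a scalar clipping level has to be optimized.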

    Robust Learning from Bites for Data Mining

    Some methods from statistical machine learning and from robust statistics have two drawbacks. Firstly, they are computer-intensive, so they can hardly be used for massive data sets, say with millions of data points. Secondly, robust and non-parametric confidence intervals for the predictions of the fitted models are often unknown. Here, we propose a simple but general method to overcome these problems in the context of huge data sets. The method is scalable to the memory of the computer, can be distributed over several processors if available, and can reduce the computation time substantially. Our main focus is on robust general support vector machines (SVMs) based on minimizing regularized risks. The method offers distribution-free confidence intervals for the median of the predictions. The approach can also help to fit robust estimators in parametric models for huge data sets. Keywords: breakdown point, convex risk minimization, data mining, distributed computing, influence function, logistic regression, robustness, scalability
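    A minimal sketch of the "bites" idea for regression, assuming scikit-learn's SVR as the base learner (the paper treats robust SVMs more generally): the data is split into B disjoint bites, one model is fitted per bite (which parallelizes trivially), and predictions are aggregated by their median. The quantile-based interval below is a simplified stand-in for the paper's distribution-free confidence intervals from order statistics; B and the SVR hyperparameters are illustrative.

```python
# Robust learning from bites: fit one SVR per disjoint data bite, then
# aggregate predictions by the median across bites.
import numpy as np
from sklearn.svm import SVR

def fit_bites(X, y, B=10, seed=None):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))           # shuffle before splitting
    models = []
    for bite in np.array_split(idx, B):     # B disjoint bites
        models.append(SVR(kernel="rbf", C=1.0).fit(X[bite], y[bite]))
    return models

def predict_with_interval(models, X_new, lo=0.1, hi=0.9):
    preds = np.stack([m.predict(X_new) for m in models])  # shape (B, n)
    return (np.median(preds, axis=0),       # aggregated prediction
            np.quantile(preds, lo, axis=0), # simplified lower band
            np.quantile(preds, hi, axis=0)) # simplified upper band
```

    Because each bite is fitted independently, the B fits can be sent to separate processors, which is the source of the scalability claimed above.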