
    Color Filter Array Demosaicking Using High-Order Interpolation Techniques With a Weighted Median Filter for Sharp Color Edge Preservation

    Demosaicking is the estimation process that determines the missing color values when a single-sensor digital camera is used for color image capture. In this paper, we propose a number of new methods based on the application of Taylor series and cubic spline interpolation for color filter array demosaicking. To avoid blurring edges, interpolants are first estimated in four opposite directions so that no interpolation is carried out across an edge. A weighted median filter, whose filter coefficients are determined by a classifier based on an edge orientation map, is then used to produce an output from the four interpolants that preserves edges. Using the proposed methods, the original colors can be faithfully reproduced with a minimal amount of color artifacts, even at edges.
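
    The two-stage idea above lends itself to a compact illustration. Below is a minimal sketch assuming a Bayer pattern, with nearest-neighbour directional estimates standing in for the paper's Taylor-series/cubic-spline interpolants and inverse-gradient weights standing in for its edge-orientation classifier; the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def green_at_rb(cfa, y, x):
    """Estimate the missing green value at a red/blue site (y, x) of a Bayer
    CFA from four one-directional interpolants, then combine them with a
    weighted median.  The inverse-gradient weights are a simple stand-in for
    the paper's edge-orientation-map classifier."""
    # Directional (one-sided) green estimates: up, down, left, right.
    est = np.array([cfa[y - 1, x], cfa[y + 1, x],
                    cfa[y, x - 1], cfa[y, x + 1]], dtype=float)
    # Directional gradients from same-color neighbours two pixels away;
    # a large gradient suggests an edge in that direction, so the
    # corresponding estimate gets a small weight.
    grad = np.array([abs(float(cfa[y, x]) - cfa[y - 2, x]),
                     abs(float(cfa[y, x]) - cfa[y + 2, x]),
                     abs(float(cfa[y, x]) - cfa[y, x - 2]),
                     abs(float(cfa[y, x]) - cfa[y, x + 2])])
    return weighted_median(est, 1.0 / (1.0 + grad))

# Toy usage on a random mosaic (borders excluded).
cfa = np.random.default_rng(0).integers(0, 256, size=(16, 16))
print(green_at_rb(cfa, 8, 8))
```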

    Time-ordered data simulation and map-making for the PIXIE Fourier transform spectrometer

    We develop a time-ordered data simulator and map-maker for the proposed PIXIE Fourier transform spectrometer and use them to investigate the impact of polarization leakage, imperfect collimation, elliptical beams, sub-pixel effects, correlated noise and spectrometer mirror jitter on the PIXIE data analysis. We find that PIXIE is robust to all of these effects, with the exception of mirror jitter, which could become the dominant source of noise in the experiment if the jitter is not kept significantly below $0.1\,\mu\mathrm{m}\sqrt{\mathrm{s}}$. Source code is available at https://github.com/amaurea/pixie. Comment: 27 pages, 15 figures. Accepted for publication in JCA
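
    For readers unfamiliar with map-making, the toy sketch below shows the simplest (binned, white-noise) version of the step such a pipeline performs; the actual simulator and map-maker live in the linked repository, and the sky, pointing, and noise here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a true sky of npix pixels, a scan that revisits
# pixels many times, and white noise added to the time-ordered data (TOD).
npix, nsamp = 128, 20000
true_map = rng.standard_normal(npix)
pointing = rng.integers(0, npix, size=nsamp)     # pixel seen by each sample
tod = true_map[pointing] + 0.3 * rng.standard_normal(nsamp)

# Binned map-maker: for white noise this solves (P^T P) m = P^T d,
# i.e. each map pixel is the average of the TOD samples that hit it.
hits = np.bincount(pointing, minlength=npix)
binned = np.bincount(pointing, weights=tod, minlength=npix) / np.maximum(hits, 1)

print("rms map error:", np.sqrt(np.mean((binned - true_map) ** 2)))
```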

    Phase History Decomposition for Efficient Scatterer Classification in SAR Imagery

    A new theory and algorithm for scatterer classification in SAR imagery are presented. The automated classification process is operationally efficient compared to existing image segmentation methods that require human supervision. The algorithm reconstructs coarse-resolution subimages from subdomains of the SAR phase history. It analyzes local peaks in the subimages to determine the locations and geometric shapes of scatterers in the scene. Scatterer locations are indicated by the presence of a stable peak in all subimages for a given subaperture, while scatterer shapes are indicated by changes in pixel intensity. A new multi-peak model is developed from physical models of electromagnetic scattering to predict how pixel intensities behave for different scatterer shapes. The algorithm uses a least squares classifier to match observed pixel behavior to the model. Classification accuracy improves with increasing fractional bandwidth and is subject to the high-frequency and wide-aperture approximations of the multi-peak model. For superior computational efficiency, an integrated fast SAR imaging technique is developed to combine the coarse-resolution subimages into a final SAR image with fine resolution. Finally, classification results are overlaid on the SAR image so that analysts can deduce the significance of the scatterer shape information within the image context.
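
    As a rough illustration of the subimage-formation step only (not the author's implementation), the sketch below partitions a 2-D phase history into rectangular subdomains and forms a coarse-resolution image from each by inverse FFT; the multi-peak model and least-squares classifier that analyze peak behavior across these subimages are not reproduced here, and the input data are synthetic.

```python
import numpy as np

def coarse_subimages(phase_history, nsub=4):
    """Split a 2-D phase history into an nsub x nsub grid of subdomains and
    image each one by inverse FFT.  Scatterer classification would then track
    how local peak intensities vary across these coarse-resolution subimages."""
    ny, nx = phase_history.shape
    sy, sx = ny // nsub, nx // nsub
    subimages = []
    for i in range(nsub):
        for j in range(nsub):
            block = phase_history[i * sy:(i + 1) * sy, j * sx:(j + 1) * sx]
            subimages.append(np.abs(np.fft.ifft2(block)))
    return subimages

# Toy usage: a random complex "phase history" in place of real SAR data.
rng = np.random.default_rng(0)
ph = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
subs = coarse_subimages(ph)
print(len(subs), subs[0].shape)   # 16 subimages, each 64 x 64
```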

    Towards low-latency real-time detection of gravitational waves from compact binary coalescences in the era of advanced detectors

    Electromagnetic (EM) follow-up observations of gravitational wave (GW) events will help shed light on the nature of the sources, and more can be learned if the EM follow-ups can start as soon as the GW event becomes observable. In this paper, we propose a computationally efficient time-domain algorithm capable of detecting gravitational waves (GWs) from coalescing binaries of compact objects with nearly zero time delay. When the signal is strong enough, our algorithm also has the flexibility to trigger EM observations before the merger. The key to the efficiency of our algorithm is the use of chains of so-called Infinite Impulse Response (IIR) filters, which filter time-series data recursively. Computational cost is further reduced by a template interpolation technique that requires filtering to be done only for a much coarser template bank than would otherwise be required to sufficiently recover the optimal signal-to-noise ratio. For future detectors with sensitivity extending to lower frequencies, the computational cost of our algorithm is shown to increase only modestly compared with the conventional time-domain correlation method. Moreover, at latencies of less than hundreds to thousands of seconds, this method is expected to be computationally more efficient than the straightforward frequency-domain method. Comment: 19 pages, 6 figures, for PR
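
    To make the recursive-filtering idea concrete, here is a minimal sketch of a bank of single-pole IIR filters applied to a data stream with scipy.signal.lfilter; the frequencies, decay time, and gains below are placeholders rather than a tuned approximation of an inspiral template, and no whitening or normalization is shown.

```python
import numpy as np
from scipy.signal import lfilter

def iir_bank_output(data, bank):
    """Apply a bank of single-pole complex IIR filters to a data stream and
    sum the outputs.  Each filter is the recursion y[n] = b0*x[n] + a1*y[n-1],
    so the whole bank runs sample by sample with O(len(bank)) work per sample."""
    x = np.asarray(data, dtype=complex)
    total = np.zeros(len(x), dtype=complex)
    for b0, a1 in bank:
        # lfilter with b=[b0], a=[1, -a1] implements y[n] = b0*x[n] + a1*y[n-1].
        total += lfilter([complex(b0)], [1.0 + 0j, -a1], x)
    return total

# Placeholder bank: damped complex oscillators spread over 40-400 Hz.
fs = 4096.0                                    # sample rate [Hz]
freqs = np.linspace(40.0, 400.0, 32)           # illustrative band centers
poles = np.exp(-1.0 / (fs * 1.0) + 2j * np.pi * freqs / fs)   # ~1 s decay time
bank = [(1.0 / freqs.size, p) for p in poles]

data = np.random.default_rng(1).standard_normal(int(4 * fs))  # 4 s of noise
out = iir_bank_output(data, bank)
```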

    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, the effective processing of raw LiDAR data and the generation of an efficient, high-quality DEM remain major challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground is the most critical and difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and the choice of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality; therefore, data reduction should be conducted in such a way that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be extracted directly from LiDAR data. The extraction of breaklines and their integration into DEM generation are presented.
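
    As a deliberately crude illustration of the filter-then-interpolate pipeline discussed above (and not any of the methods reviewed in the paper), the sketch below keeps the lowest return per coarse cell as "ground" and interpolates those points onto a regular grid; the cell size and resolution are arbitrary illustrative parameters.

```python
import numpy as np
from scipy.interpolate import griddata

def simple_dem(points, cell=2.0, resolution=1.0):
    """Crude ground filter + DEM interpolation sketch.
    `points` is an (N, 3) array of LiDAR returns (x, y, z).  Keeping the
    lowest return in each coarse cell is a stand-in for the far more careful
    ground/non-ground filters reviewed in the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # 1. Ground filter: lowest return per coarse grid cell.
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    key = ix * (iy.max() + 1) + iy
    order = np.lexsort((z, key))            # sort by cell, then elevation
    first = np.unique(key[order], return_index=True)[1]
    ground = points[order[first]]           # lowest point in each cell
    # 2. Interpolate the ground points onto a regular DEM grid.
    gx, gy = np.meshgrid(np.arange(x.min(), x.max(), resolution),
                         np.arange(y.min(), y.max(), resolution))
    return griddata(ground[:, :2], ground[:, 2], (gx, gy), method="linear")

# Toy usage: 10,000 synthetic returns over a 100 m x 100 m tile.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 100, 10000),
                       rng.uniform(0, 100, 10000),
                       rng.uniform(0, 30, 10000)])
dem = simple_dem(pts)
```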

    Video modeling via implicit motion representations

    Video modeling refers to the development of analytical representations for explaining the intensity distribution in video signals. Based on such an analytical representation, we can develop algorithms for accomplishing particular video-related tasks; video modeling therefore provides a foundation that bridges video data and related tasks. Although many video models have been proposed in the past decades, the rise of new applications calls for more efficient and accurate video modeling approaches.

    Most existing video modeling approaches are based on explicit motion representations, where motion information is explicitly expressed by correspondence-based representations (i.e., motion velocity or displacement). Although conceptually simple, the limitations of those representations and the suboptimality of motion estimation techniques can degrade such video modeling approaches, especially when handling complex motion or non-ideal observed video data. In this thesis, we propose to investigate video modeling without explicit motion representation. Motion information is implicitly embedded into the spatio-temporal dependency among pixels or patches instead of being explicitly described by motion vectors.

    First, we propose a parametric model based on spatio-temporal adaptive localized learning (STALL). We formulate video modeling as a linear regression problem in which motion information is embedded within the regression coefficients. The coefficients are adaptively learned within a local space-time window based on the LMMSE criterion. By incorporating spatio-temporal resampling and a Bayesian fusion scheme, we can enhance the modeling capability of STALL on more general videos. Under the STALL framework, we can develop video processing algorithms for a variety of applications by adjusting the model parameters (i.e., the size and topology of the model support and training window). We apply STALL to three video processing problems. The simulation results show that motion information can be efficiently exploited by our implicit motion representation and that the resampling and fusion do help to enhance the modeling capability of STALL.

    Second, we propose a nonparametric video modeling approach that does not depend on explicit motion estimation. Assuming the video sequence is composed of many overlapping space-time patches, we propose to embed motion-related information into the relationships among video patches and develop a generic sparsity-based prior for typical video sequences. First, we extend block matching to more general kNN-based patch clustering, which provides an implicit and distributed representation of motion information. We propose to enforce a sparsity constraint on a higher-dimensional data array generated by packing the patches in the similar-patch set. We then solve the inference problem by updating the kNN array and the desired signal iteratively. Finally, we present a Bayesian fusion approach to fuse multiple-hypothesis inferences. Simulation results for video error concealment, denoising, and deartifacting are reported to demonstrate the modeling capability of this approach.

    Finally, we summarize the two proposed video modeling approaches and point out prospects for implicit motion representations in applications ranging from low-level to high-level problems.
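
    The STALL formulation described above amounts to a locally fitted linear predictor. The sketch below is an illustrative, simplified version assuming grayscale frames, a 3x3 support in the previous frame, and a square training window; it is not the thesis' implementation, and the window sizes and helper names are arbitrary choices.

```python
import numpy as np

def stall_predict(frames, t, y, x, support=1, train=3):
    """Predict pixel (t, y, x) as a linear combination of its spatio-temporal
    neighbourhood in frame t-1, with coefficients fitted by least squares
    (an LMMSE-style estimate) over a local training window in frame t.
    Pixels near frame borders are not handled in this sketch."""
    prev, cur = frames[t - 1], frames[t]

    def nbhd(img, yy, xx):
        return img[yy - support:yy + support + 1,
                   xx - support:xx + support + 1].astype(float).ravel()

    # Local training set: every known pixel in the window around (y, x)
    # is regressed on its own neighbourhood in the previous frame.
    A, b = [], []
    for yy in range(y - train, y + train + 1):
        for xx in range(x - train, x + train + 1):
            if (yy, xx) == (y, x):          # the target pixel may be unknown
                continue
            A.append(nbhd(prev, yy, xx))
            b.append(float(cur[yy, xx]))
    coeffs, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

    # Apply the locally learned coefficients at the target location.
    return nbhd(prev, y, x) @ coeffs

# Toy usage on two random frames.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(2, 32, 32))
print(stall_predict(frames, 1, 16, 16))
```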