
    A Monte Carlo Template based analysis for Air-Cherenkov Arrays

    We present a high-performance event reconstruction algorithm: an Image Pixel-wise fit for Atmospheric Cherenkov Telescopes (ImPACT). The algorithm is based on a likelihood fit of camera pixel amplitudes to an expected image template; a maximum likelihood fit is performed to find the best-fit shower parameters. A related reconstruction algorithm has already been shown to provide significant improvements over traditional reconstruction for both the CAT and H.E.S.S. experiments. We demonstrate a significant improvement to the template generation step of the procedure by using a full Monte Carlo air shower simulation in combination with a ray-tracing optics simulation to model the expected camera images more accurately. This reconstruction step is combined with an MVA-based background rejection. Examples are shown of the performance of the ImPACT analysis on both simulated and measured gamma-ray data (from a strong VHE source) from the H.E.S.S. array, demonstrating an improvement in sensitivity of more than a factor of two in observation time over traditional image moment-fitting methods, with performance comparable to previous likelihood-fitting analyses. ImPACT is a particularly promising approach for future large arrays such as the Cherenkov Telescope Array (CTA) due to its improved high-energy performance and its suitability for arrays of mixed telescope types. (13 pages, 10 figures)
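A minimal sketch of a pixel-wise template likelihood fit of the kind the abstract describes. A toy 2-D Gaussian stands in for the Monte Carlo shower templates, and a Gaussian pixel likelihood stands in for the full photoelectron-statistics model; all names and parameters here are illustrative, not ImPACT's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def template(params, xy):
    # Toy image template: a 2-D Gaussian blob. A real analysis would
    # interpolate Monte Carlo generated templates in shower parameters.
    x0, y0, width, amplitude = params
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    return amplitude * np.exp(-r2 / (2.0 * width ** 2))

def neg_log_likelihood(params, xy, measured, noise_sigma):
    # Gaussian pixel likelihood; real pixel likelihoods also model
    # photoelectron counting statistics and single-p.e. resolution.
    expected = template(params, xy)
    return np.sum((measured - expected) ** 2 / (2.0 * noise_sigma ** 2))

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(500, 2))            # camera pixel positions
true_params = (0.2, -0.1, 0.3, 5.0)
measured = template(true_params, xy) + rng.normal(0, 0.1, 500)

# Maximum likelihood fit for the best-fit "shower" parameters
fit = minimize(neg_log_likelihood, x0=(0.0, 0.0, 0.5, 1.0),
               args=(xy, measured, 0.1), method="Nelder-Mead")
print(np.round(fit.x, 2))
```

The same structure carries over to a real array: one likelihood term per pixel per telescope, summed over telescopes before minimization.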

    Sampling and Reconstruction of Shapes with Algebraic Boundaries

    We present a sampling theory for a class of binary images with a finite rate of innovation (FRI). Every image in our model is the restriction of $\mathds{1}_{\{p \leq 0\}}$ to the image plane, where $\mathds{1}$ denotes the indicator function and $p$ is some real bivariate polynomial. In particular, this means that the boundaries in the image form a subset of an algebraic curve with implicit polynomial $p$. We show that the image parameters, i.e., the polynomial coefficients, satisfy a set of linear annihilation equations whose coefficients are the image moments. The inherent sensitivity of the moments to noise makes the reconstruction process numerically unstable and narrows the choice of sampling kernels to polynomial-reproducing kernels. As a remedy to these problems, we replace conventional moments with more stable generalized moments that are adjusted to the given sampling kernel. The benefits are threefold: (1) the requirements on the sampling kernels are relaxed, (2) the annihilation equations are robust at numerical precision, and (3) the results extend to images with unbounded boundaries. We further reduce the sensitivity of the reconstruction process to noise by taking into account the sign of the polynomial at certain points and by sequentially enforcing measurement consistency. We consider various numerical experiments to demonstrate the performance of our algorithm in reconstructing binary images, including low to moderate noise levels and a range of realistic sampling kernels. (12 pages, 14 figures)
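A simplified sketch of the linear-algebra structure behind the annihilation equations: the coefficients of the implicit polynomial are recovered as the null vector of a linear system. Here the system is built from samples on the zero level set of the polynomial rather than from the (generalized) image moments the paper uses, so this is a toy analogue, not the paper's algorithm.

```python
import numpy as np

def monomials(x, y, deg):
    # All monomials x^i y^j with i + j <= deg, one column per monomial
    cols = [x ** i * y ** j for i in range(deg + 1)
            for j in range(deg + 1 - i)]
    return np.column_stack(cols)

# Ground truth: the unit circle, p(x, y) = x^2 + y^2 - 1
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = np.cos(t), np.sin(t)

# Each boundary sample gives one linear equation A @ coeffs = 0
A = monomials(x, y, deg=2)

# The polynomial coefficients span the null space of A: take the right
# singular vector of the smallest singular value.
_, _, vt = np.linalg.svd(A)
coeffs = vt[-1]
coeffs /= coeffs[np.argmax(np.abs(coeffs))]   # normalize for readability
print(np.round(coeffs, 3))
```

In the paper, the same null-space computation is driven by moment measurements of the sampled image, which is what makes the choice of stable generalized moments matter.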

    Accurate object reconstruction by statistical moments

    Statistical moments can offer a powerful means of object description in object sequences. Moments used in this way describe the changing shape of the object over time. Using these descriptions to predict temporal views of the object requires efficient and accurate reconstruction of the object from a limited set of moments, but accurate reconstruction from moments has so far received only limited attention. We show how accuracy can be improved not only by careful consideration of the formulation, but also by a new adaptive thresholding technique that removes one parameter needed in reconstruction. Both approaches apply equally to Legendre and other orthogonal moments to improve reconstruction accuracy.
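A minimal sketch of reconstruction from Legendre moments: a forward moment transform of a binary image, a truncated inverse series, then a binarizing threshold. The midpoint-of-range threshold here is only a simple stand-in for the paper's adaptive scheme; the order, image, and grid are all illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_basis(order, n):
    # Legendre polynomials P_0..P_order sampled at pixel-center coordinates
    x = (np.arange(n) + 0.5) * 2.0 / n - 1.0
    return np.array([legval(x, [0] * k + [1]) for k in range(order + 1)])

n, order = 32, 20
P = legendre_basis(order, n)          # shape (order+1, n)
dx = 2.0 / n

# Binary test image: a centered square
img = np.zeros((n, n))
img[10:22, 10:22] = 1.0

# Forward transform: Legendre moments lam[p, q] with the usual
# (2p+1)(2q+1)/4 normalization
norm = np.outer(2 * np.arange(order + 1) + 1,
                2 * np.arange(order + 1) + 1) / 4.0
lam = norm * (P @ img @ P.T) * dx * dx

# Inverse: truncated Legendre series reconstruction
recon = P.T @ lam @ P

# Adaptive binarization (stand-in for the paper's thresholding technique)
thr = 0.5 * (recon.min() + recon.max())
agreement = np.mean((recon > thr) == (img > 0.5))
print(agreement)
```

The truncation order controls the Gibbs ringing at the object boundary, which is exactly where the choice of threshold matters most.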

    A Comparison on Features Efficiency in Automatic Reconstruction of Archeological Broken Objects

    Automatic reconstruction of archeological broken objects is an invaluable tool for restoration work and personnel. We assume that broken pieces have similar characteristics along their common boundaries when they are correctly combined. We work within a framework for the full reconstruction of the original objects using texture and surface-design information on the sherds. The texture of a band outside the border of each piece is predicted by inpainting and texture-synthesis methods, and feature values are derived from the original and predicted images of the pieces. We present a quantitative and qualitative comparison over a large set of features and over a large set of synthetic and real archeological broken objects.
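A toy sketch of the matching idea: the band predicted beyond one fragment's border is compared against the band actually observed on a candidate neighbor. Here both bands are 1-D intensity strips and the score is normalized cross-correlation, which is just one simple feature; the paper compares a large set of texture and surface features, not this one specifically.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-length strips
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

rng = np.random.default_rng(2)
true_band = rng.normal(0, 1, 200)                  # texture continuing across the break
predicted = true_band + rng.normal(0, 0.3, 200)    # inpainting-style prediction of that band
unrelated = rng.normal(0, 1, 200)                  # band taken from a non-matching sherd

# A correct match should score clearly higher than a wrong one
print(ncc(predicted, true_band), ncc(unrelated, true_band))
```

Ranking candidate pairs by such scores is what turns per-boundary features into an automatic assembly procedure.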

    Automatic detection of limb prominences in 304 Å EUV images

    A new algorithm for the automatic detection of prominences on the solar limb in 304 Å EUV images is presented, and results of its application to SOHO/EIT data are discussed. The detection is based on the method of moments combined with a classifier analysis aimed at discriminating between limb prominences, active regions, and the quiet corona. The classifier analysis is based on a Support Vector Machine (SVM). Using a set of 12 moments of the radial intensity profiles, the algorithm performs well in discriminating between the above three categories of limb structures, with a misclassification rate of 7%. Pixels detected as belonging to a prominence are then used as a starting point to reconstruct the whole prominence by morphological image-processing techniques. A catalogue of limb prominences identified in SOHO and STEREO data using this method is planned to be made publicly available to the scientific community.
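A minimal sketch of the classifier stage: an SVM separating three classes from moment features. The feature vectors below are synthetic stand-ins for the 12 moments of radial intensity profiles; class centers, sample counts, and SVM settings are all illustrative, not the paper's trained model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_per_class, n_moments = 100, 12

# Synthetic moment features for three classes:
# prominence, active region, quiet corona
centers = rng.normal(0, 3, size=(3, n_moments))
X = np.vstack([c + rng.normal(0, 1, (n_per_class, n_moments))
               for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 2))
```

In the pipeline described above, pixels the classifier labels as prominence then seed the morphological reconstruction of the full structure.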