34,101 research outputs found

    A novel method for transient detection in high-cadence optical surveys: Its application for a systematic search for novae in M31

    [abridged] In large-scale time-domain surveys, the processing of data, from procurement up to the detection of sources, is generally automated. One of the main challenges is contamination by artifacts, especially in regions of strong unresolved emission. We present a novel method for identifying candidate variables and transients from the outputs of such surveys' data pipelines. We use the method to systematically search for novae in iPTF observations of the bulge of M31. We demonstrate that most artifacts produced by the iPTF pipeline form a locally uniform background of false detections approximately obeying Poissonian statistics, whereas genuine variables and transients, as well as artifacts associated with bright stars, result in clusters of detections whose spread is determined by the source localization accuracy. This makes the problem analogous to source detection on images produced by X-ray telescopes, enabling one to utilize tools developed in X-ray astronomy. In particular, we use a wavelet-based source detection algorithm from the Chandra data analysis package CIAO. Starting from ~2.5x10^5 raw detections made by the iPTF data pipeline, we obtain ~4000 unique source candidates. Cross-matching these candidates with the source catalog of a deep reference image, we find counterparts for ~90% of them; these are either artifacts due to imperfect PSF matching or genuine variable sources. The remaining ~400 detections are transient sources. We identify novae among these candidates by applying selection cuts based on the expected properties of nova lightcurves. In this way, we recover all 12 known novae registered during the time span of the survey and discover three nova candidates. Our method is generic and can be applied to mine any target out of the artifacts in optical time-domain data. As it is fully automated, its incompleteness can be accurately computed and corrected for.
    Comment: 16 pages, 8 figures, accepted to A&A
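
    The key idea above is that clustered raw detections mark real sources while the artifact background is locally Poissonian, and the paper applies the wavelet-based detection tool from CIAO to the spatial distribution of detections. The sketch below is only an illustration of that idea, not the authors' pipeline: it bins raw detection coordinates into a counts image and flags clusters that are significant against a locally estimated Poisson background, with Gaussian smoothing standing in for the wavelet filter; all names and parameter values are assumptions.

        # Illustrative only: cluster raw pipeline detections against a locally
        # Poissonian background of artifacts (a simple stand-in for CIAO's
        # wavelet-based wavdetect used in the paper).
        import numpy as np
        from scipy.ndimage import gaussian_filter, label
        from scipy.stats import poisson

        def find_detection_clusters(x, y, shape=(1024, 1024), loc_sigma=2.0,
                                    bkg_sigma=64.0, p_false=1e-6):
            """x, y: pixel coordinates of all raw detections over the survey."""
            counts, _, _ = np.histogram2d(y, x, bins=shape,
                                          range=[[0, shape[0]], [0, shape[1]]])
            area = 2.0 * np.pi * loc_sigma ** 2          # effective cluster footprint (pixels)
            # Counts expected per footprint from the smooth artifact background.
            mu_bkg = gaussian_filter(counts, bkg_sigma, mode="nearest") * area
            # Counts actually observed per footprint (detections pile up at real sources).
            observed = gaussian_filter(counts, loc_sigma, mode="nearest") * area
            # Keep footprints whose counts are improbable for a Poisson background.
            threshold = poisson.isf(p_false, np.clip(mu_bkg, 1e-6, None))
            clusters, n_candidates = label(observed > threshold)
            return clusters, n_candidates

    A pass of this kind over the raw detections would reduce them to a much smaller list of candidate positions, which can then be cross-matched against the reference-image catalog as described above.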

    Automated Detection of Regions of Interest for Brain Perfusion MR Images

    Images with abnormal brain anatomy pose problems for automatic segmentation techniques; as a result, poor ROI detection affects both quantitative measurements and visual assessment of perfusion data. This paper presents a new approach for fully automated and relatively accurate detection of the ROI in dynamic susceptibility contrast (DSC) perfusion magnetic resonance images, making it well suited for use in perfusion analysis. In the proposed approach, the segmentation output is a binary mask of the perfusion ROI that has zero values for air pixels, pixels representing non-brain tissue, and cerebrospinal fluid (CSF) pixels. Producing the binary mask starts with extracting low-intensity pixels by thresholding; the optimal low-threshold value is determined from the intensities of pixels at the approximate anatomical location of the brain. A hole-filling algorithm and a binary region-growing algorithm are then used to remove falsely detected regions and produce a region containing only brain tissue. CSF pixels are subsequently extracted by thresholding high-intensity pixels within this brain-tissue region, and each time-point image of the perfusion sequence is used to adjust the location of the CSF pixels. The segmentation results were compared with manual segmentation performed by experienced radiologists, which was taken as the reference standard for evaluating the proposed approach. Across 120 images, the segmentation results show good agreement with the reference standard, and all detected perfusion ROIs were judged satisfactory for clinical use by two experienced radiologists. The results show that the proposed approach is suitable for perfusion ROI detection in DSC head scans, and a segmentation tool based on it can be implemented as part of any automatic brain image processing system for clinical use.
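
    As a rough illustration of the mask-construction steps described above (low-intensity thresholding, hole filling, region cleanup, CSF removal), the following sketch uses NumPy and SciPy. The thresholds are placeholders supplied by the caller, whereas the paper derives the optimal low threshold automatically from the approximate anatomical location of the brain, and a largest-connected-component step stands in for the binary region growing.

        # Hedged sketch of a DSC perfusion ROI mask: exclude air, non-brain
        # tissue and CSF from a single 2D time-point image.
        import numpy as np
        from scipy.ndimage import binary_fill_holes, label

        def perfusion_roi_mask(img, low_thresh, csf_thresh):
            """img: one time-point image of the perfusion sequence (2D array)."""
            tissue = img > low_thresh                  # drop air / low-intensity pixels
            tissue = binary_fill_holes(tissue)         # recover enclosed brain pixels
            labels, n = label(tissue)                  # connected components
            if n > 1:                                  # keep the largest component only,
                sizes = np.bincount(labels.ravel())    # removing falsely detected regions
                sizes[0] = 0
                tissue = labels == sizes.argmax()
            return tissue & (img < csf_thresh)         # drop high-intensity CSF pixels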

    Automated multimodal volume registration based on supervised 3D anatomical landmark detection

    We propose a new method for automatic 3D multimodal registration based on anatomical landmark detection. Landmark detectors are learned independently in the two imaging modalities using Extremely Randomized Trees and multi-resolution voxel windows. A least-squares fitting algorithm is then used for rigid registration based on the landmark positions predicted by these detectors in the two imaging modalities. Experiments are carried out with this method on a dataset of pelvic CT and CBCT scans from 45 patients. On this dataset, our fully automatic approach yields results highly competitive with a manually assisted, state-of-the-art rigid registration algorithm.
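
    The registration step itself is a standard least-squares rigid fit between the landmark positions predicted in the two modalities. As an illustration of that step only (the learned Extremely Randomized Trees detectors are not reproduced here, and the function and variable names are ours), a closed-form solution via the Kabsch algorithm looks like this:

        # Least-squares rigid registration from matched 3D landmarks (Kabsch).
        import numpy as np

        def rigid_fit(src, dst):
            """src, dst: (N, 3) arrays of corresponding landmark coordinates,
            e.g. landmarks detected in CBCT (src) and in CT (dst)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid an improper rotation
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst_c - R @ src_c
            return R, t                                # aligns src points as R @ p + t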

    Service Knowledge Capture and Reuse

    The keynote will start with the need for service knowledge capture and reuse in industrial product-service systems. A novel approach to capturing service damage knowledge about individual components will be presented together with experimental results. The technique uses active thermography and image processing for the assessment. The paper will also give an overview of other non-destructive inspection techniques for service damage assessment, and a robotic system that automates the capture of damage images will be described. The keynote will then propose ways to reuse the captured knowledge to predict the remaining life of a component and to feed the results back into design and manufacturing.

    A Statistical Modeling Approach to Computer-Aided Quantification of Dental Biofilm

    Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semi-automated approach to quantification, requiring significant input from a human grader and thus inheriting the grader's bias about what constitutes foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques for automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in quantitative light-induced fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on a training set of data. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to post-process the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods.
    Comment: 10 pages, 7 figures, Journal of Biomedical and Health Informatics, 2014. Keywords: Biomedical imaging; Calibration; Dentistry; Estimation; Image segmentation; Manuals; Teeth. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758338&isnumber=636350
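
    To make the overall structure of the method concrete, the sketch below partitions a grayscale QLF image into superpixels and classifies each as background, tooth, or biofilm from labelled training superpixels. It is an approximation for illustration only: mean and standard-deviation features plus a k-nearest-neighbour classifier replace the per-superpixel GMRF parameter estimation used in the paper, and the interactive single-click correction is not shown; all function and parameter names are assumptions.

        # Hedged sketch: superpixel partition + per-superpixel classification
        # into background (0), tooth (1) and biofilm (2), then coverage estimate.
        import numpy as np
        from skimage.segmentation import slic
        from sklearn.neighbors import KNeighborsClassifier

        def biofilm_coverage(img, train_feats, train_labels, n_segments=400):
            """img: 2D grayscale QLF image; train_feats/train_labels: labelled
            superpixel features from a training set (assumed available)."""
            segments = slic(img, n_segments=n_segments, compactness=10.0,
                            channel_axis=None)                  # grayscale input
            sp_ids = np.unique(segments)
            feats = np.asarray([[img[segments == sp].mean(),    # crude stand-in for
                                 img[segments == sp].std()]     # GMRF parameters
                                for sp in sp_ids])
            clf = KNeighborsClassifier(n_neighbors=3).fit(train_feats, train_labels)
            lut = np.zeros(sp_ids.max() + 1, dtype=int)
            lut[sp_ids] = clf.predict(feats)
            pixel_class = lut[segments]                         # map labels back to pixels
            biofilm = np.count_nonzero(pixel_class == 2)
            tooth = np.count_nonzero(pixel_class == 1)
            return pixel_class, biofilm / max(biofilm + tooth, 1)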