
    Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors

    Segmentation of biomedical images is essential for studying and characterizing anatomical structures as well as for detecting and evaluating pathological tissues. Segmentation has further been shown to enhance reconstruction performance in many tomographic imaging modalities by accounting for heterogeneities of the excitation field and of the tissue properties in the imaged region. This is particularly relevant in optoacoustic tomography, where discontinuities in the optical and acoustic tissue properties, if not properly accounted for, may degrade the imaging performance. Efficient segmentation of optoacoustic images is often hampered by the relatively low intrinsic contrast of large anatomical structures, which is further impaired by the limited angular coverage of some commonly employed tomographic imaging configurations. Herein, we analyze the performance of active contour models for boundary segmentation in cross-sectional optoacoustic tomography. The segmented mask is employed to construct a two-compartment model for the acoustic and optical parameters of the imaged tissues, which is subsequently used to improve the accuracy of the image reconstruction routines. The performance of the suggested segmentation and modeling approach is showcased in tissue-mimicking phantoms and small animal imaging experiments. Comment: Accepted for publication in IEEE Transactions on Medical Imaging.
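
    As an illustration of the segmentation step described above, the following sketch (not the authors' code) evolves an active contour around the tissue boundary in a reconstructed 2D optoacoustic slice and converts the result into a two-compartment map. The contour parameters and the speed-of-sound values are assumptions chosen for illustration only.

```python
# Minimal sketch, assuming scikit-image is available and the slice is a 2D
# float image; parameter values and compartment properties are assumptions.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from skimage.draw import polygon2mask

def two_compartment_mask(image):
    """image: 2D reconstructed optoacoustic slice (float array)."""
    # Initialize the snake as a circle enclosing most of the field of view.
    s = np.linspace(0, 2 * np.pi, 400)
    r0, c0 = np.array(image.shape) / 2
    init = np.column_stack([r0 + 0.45 * image.shape[0] * np.sin(s),
                            c0 + 0.45 * image.shape[1] * np.cos(s)])

    # Evolve the contour toward the (smoothed) tissue boundary.
    snake = active_contour(gaussian(image, sigma=3),
                           init, alpha=0.015, beta=10, gamma=0.001)

    # Inside the contour: tissue compartment; outside: coupling medium.
    mask = polygon2mask(image.shape, snake)
    speed_of_sound = np.where(mask, 1540.0, 1480.0)  # assumed values [m/s]
    return mask, speed_of_sound
```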

    Fast human detection for video event recognition

    Human body detection, which has become a research hotspot during the last two years, can be used in many video content analysis applications. This paper investigates a fast human detection method for volume-based video event detection. Compared with other object detection problems, human body detection poses a greater challenge because of thresholding problems arising from a wide range of dynamic properties. Motivated by approaches successfully introduced in facial recognition applications, this work adapts feature extraction and machine learning mechanisms to classify regions of video frames. The method starts with the extraction of Haar-like features from a large number of sample images to obtain a well-regulated feature distribution, followed by an AdaBoost learning and detection algorithm for pattern classification. Experiments on the classifier show that the Haar-like-feature-based machine learning mechanism provides fast and stable human body detection and can further be applied to reduce errors in human modelling and analysis for volume-based event detection.
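
    As a rough illustration of the detection stage described above, the sketch below applies OpenCV's pre-trained full-body Haar cascade (a boosted classifier over Haar-like features) rather than the authors' own trained model; the input file name and detection parameters are assumptions.

```python
# Minimal sketch, assuming opencv-python is installed.
import cv2

# Pre-trained Haar cascade for full-body detection shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

def detect_humans(frame):
    """Return bounding boxes (x, y, w, h) of people detected in a frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # reduce illumination variation
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=4, minSize=(40, 80))

# Usage on a video stream: run the detector frame by frame.
cap = cv2.VideoCapture("input.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_humans(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```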

    The MUSE-Wide Survey: Survey Description and First Data Release

    We present the MUSE-Wide survey, a blind, 3D spectroscopic survey in the CANDELS/GOODS-S and CANDELS/COSMOS regions. Each MUSE-Wide pointing has a depth of 1 hour and hence targets more extreme and more luminous objects over 10 times the area of the MUSE-Deep fields (Bacon et al. 2017). The legacy value of MUSE-Wide lies in providing "spectroscopy of everything" without photometric pre-selection. We describe the data reduction, post-processing and PSF characterization of the first 44 CANDELS/GOODS-S MUSE-Wide pointings released with this publication. Using a 3D matched-filtering approach we detected 1,602 emission line sources, including 479 Lyman-α (Lyα) emitting galaxies with redshifts 2.9 ≲ z ≲ 6.3. We cross-match the emission line sources to existing photometric catalogs, finding almost complete agreement in redshifts and stellar masses for our low-redshift (z < 1.5) emitters. At high redshift, we find only ~55% matches to photometric catalogs. We encounter a higher outlier rate and a systematic offset of Δz ≃ 0.2 when comparing our MUSE redshifts with photometric redshifts. Cross-matching the emission line sources with X-ray catalogs from the Chandra Deep Field South, we find 127 matches, including 10 objects with no prior spectroscopic identification. Stacking X-ray images centered on our Lyα emitters yielded no signal; the Lyα population is not dominated by even low-luminosity AGN. A total of 9,205 photometrically selected objects from the CANDELS survey lie in the MUSE-Wide footprint, for which we provide optimally extracted 1D spectra. We are able to determine the spectroscopic redshift of 98% of the 772 photometrically selected galaxies brighter than 24th magnitude in F775W. All the data in the first data release - datacubes, catalogs, extracted spectra, maps - are available on the website https://musewide.aip.de. [abridged] Comment: 25 pages, 15+1 figures. Accepted, A&A. Comments welcome.
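
    As an illustration of the catalog cross-matching described above, the sketch below (not the survey pipeline) matches emission-line source positions against a photometric catalog and compares spectroscopic with photometric redshifts; the file names, column names, and matching radius are assumptions.

```python
# Minimal sketch, assuming astropy is available and both catalogs carry
# RA/DEC (deg) plus redshift columns; all names are hypothetical.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.table import Table

muse = Table.read("muse_wide_emitters.fits")   # hypothetical file names
phot = Table.read("candels_photometric.fits")

muse_coords = SkyCoord(muse["RA"] * u.deg, muse["DEC"] * u.deg)
phot_coords = SkyCoord(phot["RA"] * u.deg, phot["DEC"] * u.deg)

# Nearest-neighbour match on the sky with a 0.5 arcsec tolerance.
idx, sep, _ = muse_coords.match_to_catalog_sky(phot_coords)
matched = sep < 0.5 * u.arcsec

# Compare MUSE spectroscopic redshifts with photometric redshifts.
dz = np.asarray(muse["Z"][matched]) - np.asarray(phot["ZPHOT"][idx[matched]])
print(f"matches: {matched.sum()}, median dz: {np.median(dz):.3f}")
```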