
    Edge and Line Feature Extraction Based on Covariance Models

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges in noisy and/or blurred images. The method relies on a model that describes the existence of image discontinuities (e.g. edges) in terms of covariance functions. The feature extractor transforms the input image into a “log-likelihood ratio” image. Such an image is a good starting point for the edge detection stage, since it represents a balanced trade-off between signal-to-noise ratio and the ability to resolve detailed structures. For 1-D signals, the performance of the edge detector based on this feature extractor is quantitatively assessed by the so-called “average risk measure”. The results are compared with the performances of 1-D edge detectors known from the literature. Generalizations to 2-D operators are given. Applications to real-world images are presented, showing the capability of the covariance model to build edge and line feature extractors. Finally, it is shown that the covariance model can be coupled to an MRF model of edge configurations so as to arrive at a maximum a posteriori estimate of the edges or lines in the image.
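    The idea of turning a signal into a "log-likelihood ratio" image can be sketched for the 1-D case. The following is a minimal illustration, not the paper's method: it assumes i.i.d. Gaussian noise (rather than a full covariance function) and a known edge template, in which case the Gaussian log-likelihood ratio for "edge present" vs "no edge" reduces to a matched filter. All names and parameters are illustrative.

```python
import numpy as np

def log_likelihood_ratio(signal, edge_template, sigma=1.0):
    """Slide an edge template over a 1-D signal and return, at each
    position, the log-likelihood ratio of 'edge present' vs 'no edge'
    under i.i.d. Gaussian noise of standard deviation sigma.
    A simplification of the covariance-model detector in the paper."""
    n = len(edge_template)
    t = edge_template - edge_template.mean()  # zero-mean template
    llr = np.zeros(len(signal) - n + 1)
    for i in range(len(llr)):
        window = signal[i:i + n]
        # matched-filter form of the Gaussian LLR: (t^T x - 0.5 t^T t) / sigma^2
        llr[i] = (t @ window - 0.5 * (t @ t)) / sigma**2
    return llr
```

    Thresholding or locating maxima of `llr` then plays the role of the subsequent edge detection stage.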

    Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models

    Keywords: hierarchical representation, object recognition, shape, ventral stream, vision and scene understanding, robotics, handwriting analysis.
    The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses), and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts, and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective at recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement.
    The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms.
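    The response-combination step described above (a weighted geometric mean of shifted detector responses) can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the blurring step the model uses for deformation tolerance is omitted, and all function names and parameters are assumptions.

```python
import numpy as np

def cosfire_response(feature_maps, offsets, weights):
    """Fuse vertex-detector response maps COSFIRE-style: each map is
    shifted so the expected feature position aligns with the filter
    centre, then all maps are combined by a weighted geometric mean.
    (Blurring of each map is omitted here for brevity.)"""
    weights = np.asarray(weights, dtype=float)
    acc = np.ones_like(feature_maps[0], dtype=float)
    for fmap, (dy, dx), w in zip(feature_maps, offsets, weights):
        shifted = np.roll(fmap, (dy, dx), axis=(0, 1))  # align to centre
        acc *= np.maximum(shifted, 0.0) ** w            # weighted product of responses
    return acc ** (1.0 / weights.sum())                 # weighted geometric mean
```

    The geometric mean makes the output an AND-like combination: the filter responds strongly only where all the configured contour parts are present.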

    Determination of the Physical Conditions of the Knots in the Helix Nebula from Optical and Infrared Observations

    [Abridged] We use new HST and archived images to clarify the nature of the knots in the Helix Nebula. We employ published far-infrared spectrophotometry and existing 2.12 micron images to establish that the population distribution of the lowest ro-vibrational states of H2 is close to the distribution of a gas in LTE at 988 +- 119 K. We derive a total flux from the nebula in H2 lines and compare this with the power available from the central star for producing this radiation. We establish that neither soft X-rays nor FUV radiation has enough energy to power the H2 radiation; only the stellar EUV radiation shortward of 912 Angstrom does. Advection of material from the cold regions of the knots produces an extensive zone where both atomic and molecular hydrogen are found, allowing the H2 to be heated directly by Lyman continuum radiation, thus providing a mechanism that can explain the excitation temperature and surface brightness of the cusps and tails. New images of the knot 378-801 reveal that the 2.12 micron cusp and tail lie immediately inside the ionized atomic gas zone. This firmly establishes that the "tail" structure is an ionization-bounded radiation shadow behind the optically thick core of the knot. A unique new image in the HeII 4686 Angstrom line fails to show any emission from knots that might have been found in the He++ core of the nebula. We also re-examined high signal-to-noise ratio ground-based telescope images of this same inner region and found no evidence of structures that could be related to knots.
    Comment: Astronomical Journal, in press. Some figures are shown at reduced resolution. A full resolution version is available at http://www.ifront.org/wiki/Helix_Nebula_2007_Pape
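    The LTE comparison mentioned above amounts to testing whether observed level populations follow a single-temperature Boltzmann distribution, n_i ∝ g_i exp(-E_i/kT). A minimal sketch of that relation (with placeholder level energies, not the actual H2 ro-vibrational ladder):

```python
import numpy as np

def lte_populations(energies_K, degeneracies, T):
    """Fractional LTE level populations from the Boltzmann distribution
    n_i ∝ g_i * exp(-E_i / (k T)), with level energies expressed in
    Kelvin (E/k). Comparing observed column densities against this
    curve tests for a single excitation temperature, as done for the
    Helix knots at T ≈ 988 K. Energies below are placeholders."""
    E = np.asarray(energies_K, dtype=float)
    g = np.asarray(degeneracies, dtype=float)
    pop = g * np.exp(-E / T)
    return pop / pop.sum()  # normalise to fractional populations
```

    A straight line in a plot of ln(n_i/g_i) against E_i then indicates LTE, with slope -1/T.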

    Arrival time and magnitude of airborne fission products from the Fukushima, Japan, reactor incident as measured in Seattle, WA, USA

    We report results of air monitoring started in response to the natural catastrophe of 11 March 2011 in Japan and the severe ensuing damage to the Fukushima Dai-ichi nuclear reactor complex. On 17-18 March 2011, we registered the first arrival of the airborne fission products 131-I, 132-I, 132-Te, 134-Cs, and 137-Cs in Seattle, WA, USA, by identifying their characteristic gamma rays using a germanium detector. We measured the evolution of the activities over a period of 23 days, by the end of which the activities had mostly fallen below our detection limit. The highest detected activity amounted to 4.4 +/- 1.3 mBq/m^3 of 131-I on 19-20 March.
    Comment: 7 pages, 5 figures, published in Journal of Environmental Radioactivity

    The VAST Survey - IV. A wide brown dwarf companion to the A3V star ζ Delphini

    We report the discovery of a wide co-moving substellar companion to the nearby (D = 67.5 ± 1.1 pc) A3V star ζ Delphini based on imaging and follow-up spectroscopic observations obtained during the course of our Volume-limited A-Star (VAST) multiplicity survey. ζ Del was observed over a five-year baseline with adaptive optics, revealing the presence of a previously-unresolved companion with a proper motion consistent with that of the A-type primary. The age of the ζ Del system was estimated as 525 ± 125 Myr based on the position of the primary on the colour-magnitude and temperature-luminosity diagrams. Using intermediate-resolution near-infrared spectroscopy, the spectrum of ζ Del B is shown to be consistent with a mid-L dwarf (L5 ± 2), at a temperature of 1650 ± 200 K. Combining the measured near-infrared magnitude of ζ Del B with the estimated temperature leads to a model-dependent mass estimate of 50 ± 15 M_Jup, corresponding to a mass ratio of q = 0.019 ± 0.006. At a projected separation of 910 ± 14 au, ζ Del B is among the most widely-separated and extreme-mass-ratio substellar companions to a main-sequence star resolved to date, providing a rare empirical constraint on the formation of low-mass-ratio companions at extremely wide separations.
    Comment: 12 pages, 11 figures, accepted for publication in the Monthly Notices of the Royal Astronomical Society, 2014 September 25. Revised to correct typographical errors noted during the proofing process.

    SXDF-ALMA 1.5 arcmin^2 deep survey. A compact dusty star-forming galaxy at z=2.5

    We present the first results from the SXDF-ALMA 1.5 arcmin^2 deep survey at 1.1 mm using the Atacama Large Millimeter Array (ALMA). The map reaches a 1 sigma depth of 55 uJy/beam and covers 12 Halpha-selected star-forming galaxies at z = 2.19 or z = 2.53. We have detected continuum emission from three of our Halpha-selected sample, including one compact star-forming galaxy with a high stellar surface density, NB2315-07. They are all red in the rest-frame optical and have stellar masses of log(M*/Msun) > 10.9, whereas the other blue, main-sequence galaxies with log(M*/Msun) = 10.0-10.8 are exceedingly faint, < 290 uJy (2 sigma upper limit). We also find the 1.1 mm-brightest galaxy, NB2315-02, to be associated with a compact (R_e = 0.7 +- 0.1 kpc), dusty star-forming component. Given the high gas fraction (44^{+20}_{-8}% or 37^{+25}_{-3}%) and high star formation rate surface density (126^{+27}_{-30} Msun yr^-1 kpc^-2), the concentrated starburst can, within less than 50^{+12}_{-11} Myr, build up a stellar surface density matching that of massive compact galaxies at z ~ 2, provided at least 19 +- 3% of the total gas is converted into stars in the galaxy centre. On the other hand, NB2315-07, which already has such a high-stellar-surface-density core, shows a lower gas fraction (23 +- 8%) and is located in the lower envelope of the star formation main sequence. This compact, less star-forming galaxy is likely to be in an intermediate phase between compact dusty star-forming and quiescent galaxies.
    Comment: 6 pages, 4 figures, 1 table, accepted for publication in ApJ

    Saliency Prediction for Mobile User Interfaces

    We introduce models for saliency prediction for mobile user interfaces. A mobile interface may include elements such as buttons and text, in addition to natural images, which together enable performing a variety of tasks. Saliency in natural images is a well-studied area. However, given the difference in what constitutes a mobile interface, and the usage context of these devices, we postulate that saliency prediction for mobile interface images requires a fresh approach. Mobile interface design involves operating on elements, the building blocks of the interface. We first collected eye-gaze data from mobile devices for a free-viewing task. Using this data, we develop a novel autoencoder-based multi-scale deep learning model that provides saliency prediction at the mobile interface element level. Compared to saliency prediction approaches developed for natural images, we show that our approach performs significantly better on a range of established metrics.
    Comment: Paper accepted at WACV 201