
    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program used extensively in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has produced a large community spanning the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software can handle the requirements of modern science. These new and emerging challenges in scientific imaging place ImageJ at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained so that the new functionality integrates seamlessly with the classic ImageJ interface, allowing users and developers to migrate to the new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
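    The abstract's key engineering point, an N-dimensional data model decoupled from any user interface, can be illustrated with a minimal sketch. This is plain numpy, not ImageJ2's actual ImgLib2 API; the `NDImage` class and its `plane` method are hypothetical illustrations only.

```python
import numpy as np

# Hypothetical sketch of an axis-labeled N-D image container, loosely
# inspired by ImageJ2's decoupled data model (not its real API).
class NDImage:
    def __init__(self, data, axes):
        assert data.ndim == len(axes)
        self.data = data
        self.axes = list(axes)

    def plane(self, **fixed):
        # Extract a 2-D plane by fixing the named non-spatial axes.
        index = tuple(fixed.get(ax, slice(None)) for ax in self.axes)
        return self.data[index]

# A 5-D dataset: x, y, channel, z-slice, time point
img = NDImage(np.zeros((64, 64, 3, 10, 5)), ["x", "y", "c", "z", "t"])
plane = img.plane(c=0, z=4, t=2)
print(plane.shape)  # (64, 64)
```

    Because the container knows nothing about display, any front end (the classic ImageJ UI, a headless script, an external application) can consume the same data object.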

    Reconstructing the Stellar Mass Distributions of Galaxies Using S4G IRAC 3.6 and 4.5 μm Images. I. Correcting for Contamination by Polycyclic Aromatic Hydrocarbons, Hot Dust, and Intermediate-age Stars

    With the aim of constructing accurate two-dimensional maps of the stellar mass distribution in nearby galaxies from Spitzer Survey of Stellar Structure in Galaxies 3.6 and 4.5 μm images, we report on the separation of the light from old stars from the emission contributed by contaminants. Results for a small sample of six disk galaxies (NGC 1566, NGC 2976, NGC 3031, NGC 3184, NGC 4321, and NGC 5194) with a range of morphological properties, dust content, and star formation histories are presented to demonstrate our approach. To isolate the old stellar light from contaminant emission (e.g., hot dust and the 3.3 μm polycyclic aromatic hydrocarbon (PAH) feature) in the IRAC 3.6 and 4.5 μm bands, we use an independent component analysis (ICA) technique designed to separate statistically independent source distributions, maximizing the distinction in the [3.6]-[4.5] colors of the sources. The technique also removes emission from evolved red objects with a low mass-to-light ratio, such as asymptotic giant branch (AGB) and red supergiant (RSG) stars, revealing maps of the underlying old stellar light with [3.6]-[4.5] colors consistent with the colors of K and M giants. The contaminants are studied by comparison with the non-stellar emission imaged at 8 μm, which is dominated by the broad PAH feature. Using the measured 3.6 μm/8 μm ratio to select individual contaminants, we find that hot dust and PAHs together contribute between ~5% and 15% to the integrated light at 3.6 μm, while light from regions dominated by intermediate-age (AGB and RSG) stars accounts for only 1%-5%. Locally, however, the contribution from either contaminant can reach much higher levels; dust contributes on average 22% to the emission in star-forming regions throughout the sample, while intermediate-age stars contribute upward of 50% in localized knots. The removal of these contaminants with ICA leaves maps of the old stellar disk that retain a high degree of structural information and are ideally suited for tracing stellar mass, as will be the focus of a companion paper.
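    The band-separation idea can be sketched with an off-the-shelf ICA. The mixing coefficients and the synthetic "star" and "dust" source distributions below are invented stand-ins for the real IRAC 3.6 and 4.5 μm images, chosen only to produce two statistically independent sources with distinct colors.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic stand-ins for an old stellar disk and a dust/PAH map
# (the real inputs would be per-pixel fluxes from the two IRAC bands).
n_pix = 10_000
stars = rng.lognormal(0.0, 0.5, n_pix)
dust = rng.exponential(0.3, n_pix)

# Each band is a different linear mix of the two sources; the distinct
# mixing ratios play the role of distinct [3.6]-[4.5] colors here.
band_36 = 1.0 * stars + 0.15 * dust
band_45 = 0.7 * stars + 0.45 * dust

X = np.column_stack([band_36, band_45])
ica = FastICA(n_components=2, random_state=0)
# Columns recover the sources up to sign and scale.
sources = ica.fit_transform(X)
print(sources.shape)  # (10000, 2)
```

    One recovered component tracks the old stellar light and the other the contaminants; as in any ICA, the output ordering, sign, and scale are arbitrary and must be fixed afterwards (the paper does this via the expected colors of K and M giants).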

    Behavior of nanoparticle clouds around a magnetized microsphere under magnetic and flow fields

    When a micron-sized magnetizable particle is introduced into a suspension of nanosized magnetic particles, the nanoparticles accumulate around the microparticle and form thick anisotropic clouds extended in the direction of the applied magnetic field. This phenomenon promotes colloidal stabilization of bimodal magnetic suspensions and allows efficient magnetic separation of nanoparticles used in bioanalysis and water purification. In the present work, the size and shape of nanoparticle clouds under the simultaneous action of an external uniform magnetic field and a flow have been studied in detail. In the experiments, a dilute suspension of iron oxide nanoclusters (mean diameter 60 nm) was pushed through a thin slit channel with nickel microspheres (mean diameter 50 μm) attached to the channel wall. The behavior of the nanocluster clouds was observed in the steady state using an optical microscope. In the presence of a strong enough flow, the size of the clouds monotonically decreases with increasing flow speed in both longitudinal and transverse magnetic fields. This is qualitatively explained by the enhancement of hydrodynamic forces washing the nanoclusters away from the clouds. In the longitudinal field, the flow induces an asymmetry between the front and back clouds. To explain the flow and field effects on the clouds, we have developed a simple model based on the balance of stresses and particle fluxes on the cloud surface. This model, applied to the case of a magnetic field parallel to the flow, captures reasonably well the flow effect on the size and shape of the cloud and reveals that the only dimensionless parameter governing the cloud size is the ratio of hydrodynamic to magnetic forces, the Mason number. At the strong magnetic interactions considered in the present work (dipolar coupling parameter α ≥ 2), Brownian motion seems not to affect the cloud behavior.
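    As a rough illustration of the governing dimensionless group, a schematic Mason number can be computed as a hydrodynamic stress over a magnetic stress scale. The exact definition and prefactor vary between papers, so treat the formula and the sample values below as assumptions, not the authors' definition.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def mason_number(eta, shear_rate, H, prefactor=1.0):
    """Schematic Mason number: hydrodynamic stress (eta * shear_rate)
    over a magnetic stress scale (mu0 * H^2). The prefactor absorbs
    convention-dependent geometric and susceptibility factors."""
    return prefactor * eta * shear_rate / (MU0 * H**2)

# Illustrative values: water-like viscosity, moderate shear,
# and a field of ~10 kA/m.
Mn = mason_number(eta=1e-3, shear_rate=100.0, H=1e4)
print(Mn)  # ~8e-4: magnetic stresses dominate at these values
```

    A small Mason number means magnetic attraction holds the cloud together; increasing the flow speed raises Mn and erodes the cloud, matching the monotonic shrinkage reported above.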

    Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    To date a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons that explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. First, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts the spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. A representation of auditory space is therefore learned in a purely unsupervised way by maximizing coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures that allow making behaviorally vital inferences about the environment. Comment: 22 pages, 9 figures
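    The first step of the pipeline, ICA on stacked binaural spectrogram frames, can be sketched as follows. The toy two-gain source, the STFT parameters, and the number of components are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs = 16_000

# Toy binaural signal: one noise source reaches the two ears with
# different gains (a crude stand-in for interaural level differences).
src = rng.standard_normal(fs)
left = 0.9 * src + 0.05 * rng.standard_normal(fs)
right = 0.4 * src + 0.05 * rng.standard_normal(fs)

# Stack the per-ear spectrogram frames into one feature vector per
# time frame, mirroring the paper's binaural spectrogram input to ICA.
_, _, S_l = spectrogram(left, fs=fs, nperseg=256)
_, _, S_r = spectrogram(right, fs=fs, nperseg=256)
frames = np.concatenate([S_l, S_r], axis=0).T  # (n_frames, 2 * n_freqs)

# Learn a small set of binaural spectrogram features.
ica = FastICA(n_components=8, random_state=0, max_iter=1000)
features = ica.fit_transform(np.log1p(frames))
print(features.shape)
```

    Because each learned feature spans both ears' spectra, its left/right weighting carries interaural information; the paper shows a small subset of such features suffices for localization.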

    CNN based Learning using Reflection and Retinex Models for Intrinsic Image Decomposition

    Most traditional work on intrinsic image decomposition relies on deriving priors about scene characteristics. On the other hand, recent research uses deep learning models as in-and-out black boxes and does not consider the well-established, traditional image formation process as the basis of the intrinsic learning process. As a consequence, although current deep learning approaches show superior performance on quantitative benchmarks, traditional approaches still dominate in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach to obtain intrinsic images by exploiting reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, object-centered, large-scale datasets with intrinsic ground-truth images are created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function improves color reproduction and the use of gradient information produces sharper edges. Datasets, models, and higher resolution images are available at https://ivi.fnwi.uva.nl/cv/retinet. Comment: CVPR 201
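    The physics-based constraint such methods build on is the Lambertian image formation relation, I = R × S (image = reflectance times shading). A minimal numpy sketch of a reconstruction loss of this kind follows; it is an illustration of the general idea, not the paper's actual CNN loss, whose exact form and weighting are not given here.

```python
import numpy as np

def image_formation_loss(image, reflectance, shading):
    """Physics-based reconstruction loss: a Lambertian image should be
    the pixel-wise product of RGB reflectance and grayscale shading,
    I = R * S. Mean squared error penalizes any violation."""
    recon = reflectance * shading[..., None]  # broadcast shading over RGB
    return float(np.mean((image - recon) ** 2))

# Toy check: a flat red albedo under a smooth shading field
# reconstructs the image exactly, so the loss is zero.
h, w = 8, 8
reflectance = np.zeros((h, w, 3))
reflectance[..., 0] = 0.8
shading = np.linspace(0.2, 1.0, h * w).reshape(h, w)
image = reflectance * shading[..., None]
print(image_formation_loss(image, reflectance, shading))  # 0.0
```

    Used as a training term, such a loss ties the predicted reflectance and shading back to the observed image, which is why the paper reports improved color reproduction over purely black-box objectives.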