
    Colour displays for categorical images

    We propose a method for identifying a set of colours for displaying 2-D and 3-D categorical images when the categories are unordered labels. The principle is to find maximally distinct sets of colours. We either generate colours sequentially, to maximise the dissimilarity or distance between a new colour and the set of colours already chosen, or use a simulated annealing algorithm to find a set of colours of a specified size. In both cases, we use a Euclidean metric on the perceptual colour space CIE-LAB to specify distances between colours.
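
    The sequential strategy above amounts to greedy farthest-point selection under the CIE-LAB Euclidean metric. Below is a minimal sketch of that variant, assuming scikit-image's rgb2lab for the sRGB-to-LAB conversion; the candidate grid, seed colour, and grid resolution are illustrative choices rather than the paper's.

```python
# Greedy farthest-point selection of maximally distinct colours in CIE-LAB.
# Illustrative sketch only: candidate set, seed colour, and grid size are
# assumptions; the paper also describes a simulated-annealing alternative.
import numpy as np
from skimage.color import rgb2lab  # assumes scikit-image is available

def distinct_colours(n, grid=12, seed_colour=(1.0, 1.0, 1.0)):
    """Pick n sRGB colours whose pairwise CIE-LAB distances are large."""
    levels = np.linspace(0.0, 1.0, grid)
    rgb = np.array(np.meshgrid(levels, levels, levels)).T.reshape(-1, 3)
    lab = rgb2lab(rgb.reshape(-1, 1, 3)).reshape(-1, 3)

    # Start measuring distances from the seed (e.g. the background colour).
    seed_lab = rgb2lab(np.asarray(seed_colour).reshape(1, 1, 3)).reshape(1, 3)
    min_dist = np.linalg.norm(lab - seed_lab, axis=1)

    chosen = []
    for _ in range(n):
        idx = int(np.argmax(min_dist))            # most distinct candidate so far
        chosen.append(idx)
        d_new = np.linalg.norm(lab - lab[idx], axis=1)
        min_dist = np.minimum(min_dist, d_new)    # distance to the chosen set
    return rgb[chosen]

print(distinct_colours(8))
```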

    Astronomical Data Analysis and Sparsity: from Wavelets to Compressed Sensing

    Wavelets have been used extensively for several years now in astronomy for many purposes, ranging from data filtering and deconvolution to star and galaxy detection and cosmic ray removal. More recent sparse representations, such as ridgelets or curvelets, have also been proposed for the detection of anisotropic features such as cosmic strings in the cosmic microwave background. In this paper we review a range of methods based on sparsity that have been proposed for astronomical data analysis. We also discuss the impact of Compressed Sensing, the new sampling theory, on astronomy: on collecting data, transferring them to Earth, and reconstructing an image from incomplete measurements. Comment: Submitted. Full paper with figures available at http://jstarck.free.fr/IEEE09_SparseAstro.pd
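
    As a concrete illustration of the sparsity-based reconstruction the review surveys, here is a minimal iterative soft-thresholding (ISTA) sketch that fills in missing pixels under a sparsity prior. A 2-D DCT stands in for the wavelet/curvelet dictionaries discussed in the paper, and the mask, threshold, and iteration count are illustrative assumptions.

```python
# ISTA sketch: recover an image from incomplete (masked) pixel measurements
# under a sparsity prior. The 2-D DCT is a stand-in sparsifying transform.
import numpy as np
from scipy.fft import dctn, idctn

def ista_inpaint(y, mask, n_iter=200, lam=0.1):
    """y: observed image with unobserved pixels set to 0; mask: 1 where observed."""
    x = y.copy()
    for _ in range(n_iter):
        # Gradient step on the data term 0.5 * ||mask * (x - y)||^2 (step size 1).
        x = x - mask * (x - y)
        # Soft-threshold the DCT coefficients (the sparsity-promoting step).
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm="ortho")
    return x

rng = np.random.default_rng(0)
truth = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
mask = (rng.random(truth.shape) < 0.3).astype(float)   # keep 30% of pixels
recon = ista_inpaint(mask * truth, mask)
print(f"relative error: {np.linalg.norm(recon - truth) / np.linalg.norm(truth):.3f}")
```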

    Application of the quantum spin glass theory to image restoration

    Quantum fluctuation is introduced into the Markov random field (MRF) model for image restoration in the context of the Bayesian approach. Using statistical mechanics, we investigate how the quantum fluctuation affects the quality of black-and-white (BW) image restoration. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a better restoration than the maximum a posteriori (MAP) estimate or the MPM estimate based on thermal fluctuation. Comment: 19 pages, 9 figures, 1 table, RevTeX
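
    For context, the thermal-fluctuation MPM baseline the paper compares against can be sketched with a Gibbs sampler on an Ising-type MRF: sample from the posterior at finite temperature and take the sign of each pixel's marginal mean. The coupling, field, temperature, and sweep counts below are illustrative; the paper's quantum version replaces the thermal fluctuation with a transverse field and is not reproduced here.

```python
# Thermal-fluctuation MPM estimate for +/-1 image restoration via Gibbs
# sampling on an Ising MRF. All parameters are illustrative assumptions.
import numpy as np

def mpm_restore(noisy, J=1.0, h=1.0, T=1.0, sweeps=100, burn=30, seed=0):
    """noisy: array of +/-1 pixels; returns the MPM estimate (+/-1)."""
    rng = np.random.default_rng(seed)
    s = noisy.astype(float).copy()
    acc = np.zeros_like(s)
    H, W = s.shape
    for t in range(sweeps):
        for i in range(H):
            for j in range(W):
                nb = (s[(i - 1) % H, j] + s[(i + 1) % H, j]
                      + s[i, (j - 1) % W] + s[i, (j + 1) % W])
                # Local field: smoothness prior (J) plus the observed pixel (h).
                field = J * nb + h * noisy[i, j]
                p_up = 1.0 / (1.0 + np.exp(-2.0 * field / T))
                s[i, j] = 1.0 if rng.random() < p_up else -1.0
        if t >= burn:
            acc += s
    return np.where(acc >= 0, 1, -1)   # sign of the posterior marginal mean

rng = np.random.default_rng(1)
truth = np.where(np.add.outer(np.arange(32), np.arange(32)) % 16 < 8, 1, -1)
noisy = truth * np.where(rng.random(truth.shape) < 0.1, -1, 1)   # flip 10% of pixels
print("restored accuracy:", (mpm_restore(noisy) == truth).mean())
```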

    Quantum Annealing - Foundations and Frontiers

    We briefly review various computational methods for the solution of optimization problems. First, several classical methods, such as the Metropolis algorithm and simulated annealing, are discussed. We continue with a description of quantum methods, namely adiabatic quantum computation and quantum annealing. Next, the new D-Wave computer and the recent progress in the field claimed by the D-Wave group are discussed. We present a set of criteria which could help in testing the quantum features of these computers. We conclude with a list of considerations regarding future research. Comment: 22 pages, 6 figures. EPJ-ST Discussion and Debate Issue: Quantum Annealing: The fastest route to large scale quantum computation?, Eds. A. Das, S. Suzuki (2014)
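
    A minimal classical reference point for the methods reviewed: Metropolis-style simulated annealing on a random Ising cost function, the kind of quadratic binary problem quantum annealers such as D-Wave target. The geometric cooling schedule, single-spin-flip moves, and random instance are illustrative assumptions, not the review's prescription.

```python
# Simulated annealing for a random Ising cost E(s) = -1/2 * s^T J s.
# Schedule, move set, and instance are illustrative assumptions.
import numpy as np

def simulated_annealing(J, n_steps=20000, T0=5.0, T1=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    energy = -0.5 * s @ J @ s
    for t in range(n_steps):
        T = T0 * (T1 / T0) ** (t / n_steps)             # geometric cooling
        i = rng.integers(n)
        dE = 2.0 * s[i] * (J[i] @ s)                    # energy change if spin i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
            s[i] = -s[i]
            energy += dE
    return s, energy

rng = np.random.default_rng(42)
J = rng.normal(size=(50, 50))
J = (J + J.T) / 2.0                                     # symmetric couplings
np.fill_diagonal(J, 0.0)
print("final energy:", simulated_annealing(J)[1])
```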

    Recognition of partially occluded threat objects using the annealed Hopfield network

    Recognition of partially occluded objects has been an important issue in airport security because occlusion causes significant problems in identifying and locating objects during baggage inspection. The neural network approach is well suited to this problem in the sense that the inherent parallelism of neural networks pursues many hypotheses in parallel, resulting in high computation rates. Moreover, neural networks provide a greater degree of robustness or fault tolerance than conventional computers. The annealed Hopfield network, which is derived from mean field annealing (MFA), has been developed to find global solutions of a nonlinear system. In that work, it was shown that the system temperature of MFA is equivalent to the gain of the sigmoid function of a Hopfield network. In our earlier work, we developed the hybrid Hopfield network (HHN) for fast and reliable matching. However, HHN does not guarantee global solutions and yields false matches under heavily occluded conditions because it is inherently dependent on initial states. In this paper, we present the annealed Hopfield network (AHN) for occluded object matching problems. In AHN, mean field theory is applied to the hybrid Hopfield network to reduce the computational complexity of the annealed Hopfield network and to provide reliable matching under heavily occluded conditions. AHN is slower than HHN; however, it provides near-global solutions without restrictions on the initial state and produces fewer false matches. In conclusion, a new algorithm based upon a neural network approach was developed to demonstrate the feasibility of automated inspection of threat objects in x-ray images. The robustness of the algorithm is demonstrated by identifying occluded target objects with large tolerances in their features.
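
    A rough mean-field-annealing sketch in the spirit of AHN: assignment neurons v[i, a] for "model feature i matches scene feature a" relax under a decreasing temperature, with a row-wise softmax enforcing one match per model feature. The pairwise-distance compatibility term, temperature schedule, and toy point sets are illustrative assumptions; the paper's exact network, energy function, and x-ray features are not reproduced here.

```python
# Mean-field annealing for a toy feature-matching problem (illustrative only).
import numpy as np

def mfa_match(model_pts, scene_pts, T0=5.0, T1=0.05, n_temps=60, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    dm = np.linalg.norm(model_pts[:, None] - model_pts[None, :], axis=-1)
    ds = np.linalg.norm(scene_pts[:, None] - scene_pts[None, :], axis=-1)
    # C[i, a, j, b]: how well matching (i -> a, j -> b) preserves pairwise distances.
    C = np.exp(-((dm[:, None, :, None] - ds[None, :, None, :]) ** 2) / sigma ** 2)
    v = rng.random((len(model_pts), len(scene_pts)))
    v /= v.sum(axis=1, keepdims=True)
    for T in np.geomspace(T0, T1, n_temps):          # anneal the temperature
        for _ in range(10):                          # mean-field updates at this T
            u = np.einsum("iajb,jb->ia", C, v)       # net input to each neuron
            u -= u.max(axis=1, keepdims=True)        # stabilise the softmax
            v = np.exp(u / T)
            v /= v.sum(axis=1, keepdims=True)        # one match per model feature
    return v.argmax(axis=1)

model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
perm = [2, 0, 3, 1]
scene = model[perm] + 0.05                           # shifted, reordered copy
print(mfa_match(model, scene), "expected:", np.argsort(perm))
```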

    Image Reconstruction in Optical Interferometry

    This tutorial paper describes the problem of image reconstruction from interferometric data, with a particular focus on the specific problems encountered at optical (visible/IR) wavelengths. The challenging issues in image reconstruction from interferometric data are introduced in the general framework of the inverse-problem approach. This framework is then used to describe existing image reconstruction algorithms in radio interferometry and the new methods specifically developed for optical interferometry. Comment: accepted for publication in IEEE Signal Processing Magazine
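
    A bare-bones version of the inverse-problem framework the tutorial builds on: fit an image to sparsely sampled Fourier data (a gridded stand-in for interferometric visibilities) under a quadratic smoothness prior and a positivity constraint, by projected gradient descent. The uv mask, regularisation weight, and step size are illustrative; real optical-interferometry data are non-gridded and suffer phase corruption, which this sketch ignores.

```python
# Projected gradient descent for image reconstruction from masked Fourier
# data, with a smoothness prior and positivity. Illustrative sketch only.
import numpy as np

def reconstruct(vis, uv_mask, mu=0.05, n_iter=500, step=0.5):
    x = np.zeros(uv_mask.shape)
    for _ in range(n_iter):
        resid = uv_mask * (np.fft.fft2(x) - vis)             # data misfit in Fourier space
        grad_data = np.real(np.fft.ifft2(resid))             # back-project to image space
        # Quadratic smoothness term: pull each pixel toward its neighbour average.
        grad_reg = mu * (x - (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                              + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0)
        x = np.maximum(x - step * (grad_data + grad_reg), 0.0)   # gradient step + positivity
    return x

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:28, 30:40] = 1.0                                    # a toy "source"
uv_mask = rng.random(truth.shape) < 0.15                     # sparse uv coverage
vis = uv_mask * np.fft.fft2(truth)
x = reconstruct(vis, uv_mask)
print(f"relative error: {np.linalg.norm(x - truth) / np.linalg.norm(truth):.3f}")
```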

    Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor

    In this decade and progressing into the 21st century, NASA will have missions including the Space Station and Earth-related planetary sciences. Supporting these missions requires a high degree of sophistication in machine automation and a high data-processing throughput. Meeting these challenges requires intelligent machines designed to support the necessary automation in remote and hazardous space environments. There are two approaches to designing these intelligent machines. One is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This exchange increases reliability, for the AI can follow the NI-formulated algorithm exactly and can provide the contextual knowledge base as constraints on the neurocomputing. The mini-max cost function that solves for the unknown features can furthermore give us a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
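
    A toy rendering of the mini-max cost described above: the summed within-class sample variance in the numerator and the squared separation of the class centers in the denominator, minimised here by a simple annealing loop over binary labels with a fast 1/(1+t) cooling schedule. The two-class 1-D setting, the label-flip moves, and the Metropolis acceptance are illustrative assumptions; Cauchy annealing proper also uses a Cauchy-distributed visiting step for continuous variables, which is omitted here.

```python
# Mini-max cost: intraclass variance / interclass separation, minimised by a
# toy annealing loop over binary class labels. Illustrative assumptions only.
import numpy as np

def minimax_cost(x, labels):
    c0, c1 = x[labels == 0], x[labels == 1]
    if len(c0) < 2 or len(c1) < 2:
        return np.inf
    intra = c0.var() + c1.var()                  # numerator: intraclass clustering
    inter = (c0.mean() - c1.mean()) ** 2         # denominator: interclass segregation
    return intra / inter

def anneal_labels(x, n_steps=5000, T0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=len(x))
    cost = minimax_cost(x, labels)
    for t in range(n_steps):
        T = T0 / (1.0 + t)                       # fast (Cauchy-style) cooling schedule
        i = rng.integers(len(x))
        trial = labels.copy()
        trial[i] ^= 1                            # flip one label
        c_trial = minimax_cost(x, trial)
        if c_trial < cost or rng.random() < np.exp(-(c_trial - cost) / T):
            labels, cost = trial, c_trial
    return labels, cost

x = np.concatenate([np.random.default_rng(1).normal(0.0, 0.5, 30),
                    np.random.default_rng(2).normal(4.0, 0.5, 30)])
labels, cost = anneal_labels(x)
print("final cost:", cost, "class sizes:", np.bincount(labels))
```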