
    Magnitude Sensitive Competitive Neural Networks

    This thesis presents a family of neural networks called Magnitude Sensitive Competitive Neural Networks (MSCNNs). They are a set of competitive learning algorithms that include a magnitude term as a modulation factor of the distance used in the competition. Like other competitive methods, MSCNNs perform vector quantization of the data, but the magnitude term guides the training of the centroids so that the desired regions, defined by the magnitude, are represented in high detail. These networks have been compared with other vector quantization algorithms on several examples of interpolation, color reduction, surface modeling, classification, and various simple demonstration examples. In addition, a new image compression algorithm, MSIC (Magnitude Sensitive Image Compression), is introduced; it builds on the aforementioned algorithms and achieves an image compression that varies according to a user-defined magnitude. The results show that the new MSCNN networks are more versatile than other competitive learning algorithms and offer a clear improvement in vector quantization over them when the data are weighted by a magnitude that indicates the "interest" of each sample.
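    The abstract does not spell out the update rule, so the following is only a minimal sketch of the general idea of magnitude-modulated competitive learning, written in the style of frequency-sensitive competitive learning: each unit is handicapped by the cumulative "magnitude" of the samples it has already won, so more units settle in high-interest regions. The function name, the handicap rule, and all parameters are illustrative assumptions, not the MSCNN algorithm itself.

```python
import numpy as np

def magnitude_sensitive_cl(data, magnitude, n_units=16, epochs=20, lr=0.05, seed=0):
    """Toy magnitude-modulated competitive learning (illustrative only).

    Each unit accumulates the user-defined magnitude of the samples it wins;
    that running total handicaps the unit in later competitions, so regions
    whose samples carry high magnitude end up attracting more units.
    """
    data = np.asarray(data, dtype=float)
    magnitude = np.asarray(magnitude, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize the units on randomly chosen samples
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    won = np.full(n_units, 1e-9)  # cumulative magnitude captured by each unit
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x, m = data[i], magnitude[i]
            d = np.linalg.norm(units - x, axis=1)
            winner = np.argmin(won * d)                # magnitude handicap modulates the competition
            won[winner] += m                           # winner absorbs this sample's magnitude
            units[winner] += lr * (x - units[winner])  # standard competitive update
    return units
```

    With a magnitude that is constant over the data this collapses to ordinary frequency-balanced vector quantization, so the magnitude term is what steers extra detail toward the regions of interest, which is the behavior the abstract describes.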

    Fast Color Quantization Using Weighted Sort-Means Clustering

    Color quantization is an important operation with numerous applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, a fast color quantization method based on k-means is presented. The method involves several modifications to the conventional (batch) k-means algorithm including data reduction, sample weighting, and the use of triangle inequality to speed up the nearest neighbor search. Experiments on a diverse set of images demonstrate that, with the proposed modifications, k-means becomes very competitive with state-of-the-art color quantization methods in terms of both effectiveness and efficiency.
    Comment: 30 pages, 2 figures, 4 tables
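    As a rough illustration of the data-reduction and sample-weighting ideas described above, the sketch below runs batch k-means over the unique colors of an image, weighting each color by how often it occurs. The triangle-inequality acceleration from the paper is omitted, Pillow is assumed only for image I/O, and the function name and parameters are illustrative rather than the paper's actual weighted sort-means implementation.

```python
import numpy as np
from PIL import Image  # Pillow, assumed here only for reading the image

def quantize_colors(image_path, n_colors=16, iters=30, seed=0):
    """Weighted k-means color quantization sketch.

    Illustrates the data-reduction and sample-weighting ideas: k-means is
    run over the unique colors of the image, each weighted by its pixel
    count.  The paper's triangle-inequality acceleration is omitted.
    """
    img = np.asarray(Image.open(image_path).convert("RGB"))
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)
    # data reduction: unique colors plus occurrence counts as weights
    colors, inv, counts = np.unique(pixels, axis=0,
                                    return_inverse=True, return_counts=True)
    colors = colors.astype(float)

    rng = np.random.default_rng(seed)
    centers = colors[rng.choice(len(colors), n_colors, replace=False)]
    for _ in range(iters):
        # assignment step over unique colors only
        dist = np.linalg.norm(colors[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # weighted (batch) centroid update
        for k in range(n_colors):
            members = labels == k
            if members.any():
                wts = counts[members]
                centers[k] = (colors[members] * wts[:, None]).sum(0) / wts.sum()
    # map every pixel back to its quantized color via the unique-color index
    return centers[labels][inv.ravel()].astype(np.uint8).reshape(h, w, 3)
```

    Clustering unique colors instead of raw pixels is what makes this practical: a photograph with millions of pixels typically contains far fewer distinct colors, and the counts preserve each color's influence on the centroids.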

    Vowel recognition using Kohonen's self-organizing feature maps

    An important organizing principle observed in the sensory pathways of the brain is the orderly placement of neurons. Although the neurons are structurally identical, the specialized role played by each unit is determined by its internal parameters, which change during early learning processes. In the human auditory system, the nerve cells and fibres are arranged in a manner that elicits maximum response from the neurons when they are activated. Although most of this organization is genetically determined, some of the high-level organization arises from algorithms that promote self-organization. Kohonen's self-organizing feature map is a neural net model that produces feature maps similar to the ones formed in the brain. These maps can describe topological relationships of input signals using a one- or two-dimensional representation. The technique uses unlabeled data and does not require the labeled training examples that supervised learning algorithms need, which makes it immensely useful in speech and vision applications. This neural net has been implemented for the recognition of vowels in American English. The net has been trained and tested with vowel data. The formation of internal clusters or categories has been observed and closely reflects the tonotopic relationships between the vowels. An analysis of the results has been carried out and the performance has been compared to other classification techniques. A graphical user interface has also been developed using Xview to help visualize the formation of the maps during the training and testing processes.
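    For readers unfamiliar with the technique, the sketch below is a generic two-dimensional Kohonen map trained with a Gaussian neighborhood and exponentially decaying learning rate. It is not the implementation described above; the assumption, for illustration, is that the input rows are acoustic feature vectors such as the first two formant frequencies of each vowel token.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal two-dimensional Kohonen self-organizing map.

    `data` is an (n_samples, n_features) array, e.g. the first two formant
    frequencies of each vowel token.  After training, neighboring grid
    nodes respond to acoustically similar vowels, which is the
    topology-preserving behavior described in the abstract.
    """
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.uniform(data.min(0), data.max(0),
                          size=(rows, cols, data.shape[1]))
    # grid coordinates used by the neighborhood function
    yy, xx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    coords = np.stack([yy, xx], axis=-1).astype(float)

    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(len(data))]:
            # best-matching unit: grid node with the closest weight vector
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.array(np.unravel_index(d.argmin(), d.shape), dtype=float)
            # Gaussian neighborhood around the BMU, measured on the grid
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights
```

    Plotting, for each grid node, the vowel label it responds to most strongly after training is one way to visualize the kind of topologically ordered vowel map the abstract reports.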

    Computational physics of the mind

    In the nineteenth century and earlier, such physicists as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance to answer not only the original psychophysical questions but also to create models of mind. In this paper several approaches relevant to the modeling of mind are outlined. Since direct modeling of brain functions is severely limited by the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.