
    Innovations in nature inspired optimization and learning methods

    The nine papers included in this special issue represent a selection of extended contributions presented at the Third World Congress on Nature and Biologically Inspired Computing (NaBIC2011), held in Salamanca, Spain, October 19–21, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is therefore aimed at practitioners, researchers and postgraduate students who are engaged in developing and applying advanced nature and biologically inspired computing models to real-world problems. The papers are organized as follows.

In the first paper, Apeh et al. present a comparative investigation of four approaches for classifying dynamic customer profiles built from transactional data that evolves over time. The changing class values of the customer profiles were analyzed together with the challenging problem of deciding whether to change the class label or to adapt the classifier. Experiments conducted on highly sparse and skewed real-world transactional data show that adapting the classifiers leads to more stable classification of customer profiles over shorter time windows, while relabelling the changed customer profile classes leads to more accurate and stable classification over longer time windows.

In the second paper, Frolov et al. suggest a new approach to Boolean factor analysis that extends their previously proposed method, a Hopfield-like attractor neural network with increasing activity. The authors increase its applicability and robustness by complementing the method with a maximization of the learning-set likelihood function defined according to the Noisy-OR generative model. They demonstrate the efficiency of the new method on a data set generated according to the model, and show a successful application to real data by analyzing the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequences for 1368 organisms.

In the third paper, Triguero et al. analyze the integration of a wide variety of noise filters into the self-training process in order to distinguish the most relevant features of the filters. They focus on the nearest neighbour rule as a base classifier and ten different noise filters, and provide an extensive analysis of the performance of these filters under different ratios of labelled data. The results are contrasted with nonparametric statistical tests that identify the relevant filters, and their main characteristics, in the field of semi-supervised learning.

In the fourth paper, Gutiérrez-Avilés et al. present the TriGen algorithm, a genetic algorithm that finds triclusters of gene expression data, taking the experimental conditions and the time points into account simultaneously. The authors used TriGen to mine synthetic data, yeast (Saccharomyces cerevisiae) cell cycle data, and data from human inflammation and host response to injury experiments. TriGen proved capable of extracting groups of genes with similar patterns in subsets of conditions and times, and these groups were shown to be related in terms of their functional annotations extracted from the Gene Ontology project.

In the fifth paper, Varela et al. introduce and study Constrained Sampling Evolutionary Algorithms (CS-EAs) in the framework of a UAV-based search and rescue scenario. These algorithms were developed to harness the power of Evolutionary Algorithms (EAs) in complex, noisy, multimodal optimization problems and to transfer the advantages of this approach to real-time, real-world problems that can be cast as search and optimization tasks. Such problems are denoted Constrained Sampling problems and are characterized by the fact that the physical limitations of reality do not allow an instantaneous determination of the fitness of the points in the population being evolved. A general approach to these problems is presented, and a particular implementation using Differential Evolution as an example of a CS-EA is created and evaluated using teams of UAVs in search and rescue missions. The results are compared with those of a Swarm Intelligence based strategy on the same type of problem, as that approach has been widely used in the UAV path-planning field in different variants by many authors.

In the sixth paper, Zhao et al. introduce human intelligence into computational intelligence algorithms, namely particle swarm optimization (PSO) and immune algorithms (IA). A novel human-computer cooperative PSO-based immune algorithm (HCPSO-IA) is proposed, in which the initial population consists of artificial individuals supplied by humans, while the initial algorithm individuals are generated by a chaotic strategy. New artificial individuals are introduced to replace the inferior individuals of the population. HCPSO-IA benefits from giving free rein to the talents of both designers and computers, and contributes to solving complex layout design problems. The experimental results illustrate that the proposed algorithm is feasible and effective.

In the seventh paper, Rebollo-Ruiz and Graña give an extensive empirical evaluation of the nature-inspired Gravitational Swarm Intelligence (GSI) algorithm for the Graph Coloring Problem (GCP). GSI follows the Swarm Intelligence problem-solving approach, in which the spatial positions of agents are interpreted as problem solutions and agent motion is determined solely by local information, avoiding any central control system. To apply GSI to the GCP, the authors map agents to the graph's nodes. Agents move as particles in the gravitational field defined by goal objects corresponding to colors; when an agent falls into the gravitational well of a color goal, its corresponding node is colored with that color. The graph's connectivity is mapped into a repulsive force between agents corresponding to adjacent nodes. The authors discuss the convergence of the algorithm by testing it on an extensive suite of well-known benchmark graphs. A comparison with state-of-the-art approaches in the literature shows improvements on many of the benchmark graphs.

In the eighth paper, Macaš et al. demonstrate how novel algorithms can be derived from opinion formation models and empirically demonstrate their usability in binary optimization. In particular, they introduce a general SITO algorithmic framework and describe four algorithms based on it. Recent applications of these algorithms to pattern recognition for electronic noses, electronic tongues, newborn EEG and ICU patient mortality prediction are discussed. Finally, an open source SITO library for MATLAB and Java is introduced.

In the final paper, Madureira et al. present a negotiation mechanism for dynamic scheduling based on social and collective intelligence. Under the proposed negotiation mechanism, agents must interact and collaborate in order to improve the global schedule. Swarm Intelligence is considered a general umbrella term for several computational techniques that draw ideas and inspiration from the social behaviour of insects and other biological systems. This work is concerned with negotiation, where multiple self-interested agents can reach agreement over the exchange of operations on competing resources. A computational study was performed to validate the influence of the negotiation mechanism and of the SI technique on system performance. The results provide statistical evidence that the negotiation mechanism significantly influences overall system performance, and show the advantage of the Artificial Bee Colony approach in minimizing makespan and maximizing machine occupation.

We would like to thank our peer reviewers for their diligent work and efficient efforts. We are also grateful to the Editor-in-Chief of Neurocomputing, Prof. Tom Heskes, for his continued support of the NaBIC conference and for the opportunity to organize this special issue.
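Among the methods summarized above, the self-training-with-noise-filtering scheme studied by Triguero et al. lends itself to a compact illustration. The sketch below is not the authors' implementation; it assumes a 1-NN base classifier, a distance-based confidence proxy, and a simple ENN-style agreement filter (all function names and parameters are illustrative), just to show where a noise filter slots into the self-training loop.

```python
# Minimal sketch (not the authors' code) of self-training with a noise filter:
# a 1-NN base classifier labels the unlabelled points closest to the labelled
# set, and an ENN-style filter discards candidates whose neighbours disagree.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors

def self_training_with_filter(X_lab, y_lab, X_unlab, n_iter=10, per_iter=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(n_iter):
        if len(X_unlab) == 0:
            break
        base = KNeighborsClassifier(n_neighbors=1).fit(X_lab, y_lab)
        # Confidence proxy: unlabelled points closest to the labelled set.
        dist, _ = NearestNeighbors(n_neighbors=1).fit(X_lab).kneighbors(X_unlab)
        cand = np.argsort(dist[:, 0])[:per_iter]
        y_cand = base.predict(X_unlab[cand])
        # ENN-style noise filter: keep a candidate only if a small k-NN vote
        # over the labelled data agrees with the 1-NN prediction.
        k = min(3, len(X_lab))
        keep = KNeighborsClassifier(n_neighbors=k).fit(X_lab, y_lab).predict(X_unlab[cand]) == y_cand
        X_lab = np.vstack([X_lab, X_unlab[cand[keep]]])
        y_lab = np.concatenate([y_lab, y_cand[keep]])
        X_unlab = np.delete(X_unlab, cand, axis=0)   # drop processed candidates
    return KNeighborsClassifier(n_neighbors=1).fit(X_lab, y_lab)
```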

    A quantum neural network computes its own relative phase

    Complete characterization of the state of a quantum system made up of subsystems requires determination of relative phase, because of interference effects between the subsystems. For a system of qubits used as a quantum computer, this is especially vital, because the entanglement, which is the basis for the quantum advantage in computing, depends intricately on phase. We present here a first step towards that determination, in which we use a two-qubit quantum system as a quantum neural network that is trained to compute and output its own relative phase.
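As a point of reference for what "relative phase" means here (this is an assumed illustrative state, not the paper's procedure), the phase is simply the difference of the complex arguments of two basis amplitudes; the paper's contribution is to have the quantum system itself infer it from measurements.

```python
# Illustrative only (assumed state, not from the paper): relative phase
# between the |00> and |11> amplitudes of a two-qubit superposition.
import numpy as np

phi = 0.7                                                           # assumed phase
state = np.array([1.0, 0.0, 0.0, np.exp(1j * phi)]) / np.sqrt(2)    # (|00> + e^{i phi}|11>)/sqrt(2)
rel_phase = np.angle(state[3]) - np.angle(state[0])                 # difference of complex arguments
print(rel_phase)   # ~0.7; the quantum neural network must recover this from the system itself
```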

    On the correction of anomalous phase oscillation in entanglement witnesses using quantum neural networks

    Entanglement of a quantum system depends upon relative phase in complicated ways, which no single measurement can reflect. Because of this, entanglement witnesses are necessarily limited in applicability and/or utility. We propose here a solution to the problem using quantum neural networks. A quantum system contains the information about its own entanglement; thus, if we are clever, we can extract that information efficiently. As proof of concept, we show how this can be done for the case of pure states of a two-qubit system, using an entanglement indicator corrected for the anomalous phase oscillation. Both the entanglement indicator and the phase correction are calculated by the quantum system itself acting as a neural network.
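The corrected indicator is the paper's own construction; as a standard point of comparison for pure two-qubit states, the textbook concurrence C = 2|ad − bc| quantifies entanglement directly from the amplitudes a, b, c, d. A minimal sketch of that reference quantity:

```python
# Concurrence of a pure two-qubit state (standard textbook formula, shown
# for context; this is not the paper's phase-corrected indicator).
import numpy as np

def concurrence(state):
    a, b, c, d = state          # amplitudes of |00>, |01>, |10>, |11>
    return 2.0 * abs(a * d - b * c)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # maximally entangled -> 1.0
product = np.array([1, 0, 0, 0], dtype=complex)   # product state       -> 0.0
print(concurrence(bell), concurrence(product))
```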

    Hybridization of machine learning for advanced manufacturing

    Thesis by compendium of publications. In today's industrial context, the terms "Advanced Manufacturing", "Industry 4.0" and "Smart Factory" are becoming a reality. Industrial companies seek to be more competitive, whether in costs, time, consumption of raw materials, energy, etc. The goal is to be efficient in every area and, in addition, to be sustainable. The future of many companies depends on their degree of adaptation to change and their capacity for innovation. Consumers are increasingly demanding, looking for personalized and specific products of high quality, at low cost and non-polluting. For all these reasons, industrial companies adopt technological innovations, among them the aforementioned Advanced Manufacturing and Machine Learning (ML). The present research work falls within these fields: hybrid intelligent solutions combining several ML techniques have been conceived and applied to solve problems in the manufacturing industry. Intelligent techniques such as Artificial Neural Networks (ANN), multi-objective genetic algorithms, projection methods for dimensionality reduction, and clustering techniques have been applied. System Identification techniques have also been used in order to obtain the mathematical model that best represents the real system under study. Several techniques have been hybridized with the aim of building more robust and reliable solutions; by combining specific ML techniques, more complex systems with greater representation and solution capability are created. These systems use data, and the knowledge extracted from them, to solve problems. The proposed solutions address complex, broad-spectrum real-world problems, handling aspects such as uncertainty, lack of precision and high dimensionality. This thesis covers several real case studies in which different ML techniques have been applied to problems from the manufacturing industry. The real industrial case studies addressed, covering four different data sets, are: • a high-precision dental milling process, from the company Estudio Previo SL; • data analysis for predictive maintenance in a company of the automotive sector, the multinational Grupo Antolin. Additionally, there has been collaboration with the GICAP research group of the Universidad de Burgos and with the ITCL technology centre on the case studies that form part of this thesis and on related ones. The different hybridizations of ML techniques developed have been applied and validated on real, original data sets, in collaboration with industrial companies or milling centres, making it possible to solve current and complex problems. In this way, the work carried out has not only had a theoretical focus but has also been applied in practice, allowing industrial companies to improve their processes, save costs and time, pollute less, etc. The satisfactory results obtained point to the usefulness and contribution that ML techniques can bring to the field of Advanced Manufacturing.
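The thesis revolves around hybridizing several ML techniques into one pipeline; as a hedged illustration of that idea only (synthetic data, invented hyper-parameters, and no connection to the thesis case studies), the sketch below chains projection-based dimensionality reduction, clustering, and a small neural regressor.

```python
# Hedged illustration of a hybrid ML pipeline (dimensionality reduction +
# clustering + neural model); data and hyper-parameters are made up and do
# not correspond to the dental-milling or predictive-maintenance studies.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                            # stand-in process measurements
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=500)     # stand-in target (e.g. tool wear)

X_red = PCA(n_components=5).fit_transform(X)              # projection-based reduction
cluster = KMeans(n_clusters=3, n_init=10).fit_predict(X_red)   # operating-regime grouping

# One small neural model per cluster, mimicking a "divide and model" hybrid.
models = {}
for c in np.unique(cluster):
    idx = cluster == c
    models[c] = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_red[idx], y[idx])
```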

    Heuristic optimization of electrical energy systems: Refined metrics to compare the solutions

    Many optimization problems admit a number of local optima, among which lies the global optimum. For these problems, various heuristic optimization methods have been proposed. Comparing the results of these solvers requires the definition of suitable metrics. In the electrical energy systems literature, simple metrics such as the best value obtained, the mean value, the median or the standard deviation of the solutions are still used. However, the comparisons carried out with these metrics are rather weak, and on this basis a somewhat uncontrolled proliferation of heuristic solvers is taking place. This paper addresses the overall issue of understanding the reasons for this proliferation, showing a conceptual scheme that indicates how the assessment of the best solver may result in the unlimited formulation of new solvers. Moreover, this paper shows how the use of more refined metrics for comparing the optimization results, together with the definition of appropriate benchmarks, may make the comparisons among solvers more robust. The proposed metrics are based on the concept of first-order stochastic dominance and are defined for the cases in which: (i) the globally optimal solution can be found (for testing purposes); and (ii) the number of possible solutions is so large that in practice it cannot be guaranteed that the global optimum has been found. Illustrative examples are provided for a typical problem in the electrical energy systems area, distribution network reconfiguration. The conceptual results obtained are generally valid for comparing the results of other optimization problems.
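The generic notion behind the proposed metrics can be made concrete: for minimization, solver A first-order stochastically dominates solver B if, at every objective threshold, A is at least as likely as B to have found a solution at or below that threshold, and strictly more likely for some threshold. The sketch below implements this textbook definition on empirical CDFs of repeated-run results; it is not the paper's exact refined metric, and the run values are invented.

```python
# Hedged sketch: first-order stochastic dominance check between two solvers'
# run results (minimization), using empirical CDFs. Generic definition only,
# not the paper's refined metrics; example values are made up.
import numpy as np

def ecdf(values, grid):
    """Empirical CDF of `values` evaluated on `grid` (fraction of runs <= t)."""
    values = np.sort(values)
    return np.searchsorted(values, grid, side="right") / len(values)

def dominates(runs_a, runs_b):
    """True if solver A first-order stochastically dominates solver B."""
    grid = np.union1d(runs_a, runs_b)        # CDFs only change at these points
    Fa, Fb = ecdf(runs_a, grid), ecdf(runs_b, grid)
    return np.all(Fa >= Fb) and np.any(Fa > Fb)

runs_a = np.array([10.1, 10.3, 10.2, 10.4])   # objective values over repeated runs
runs_b = np.array([10.5, 10.9, 10.3, 11.0])
print(dominates(runs_a, runs_b))              # True: A is at least as good at every threshold
```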

    Multi-objective single agent stochastic search in non-dominated sorting genetic algorithm

    A hybrid multi-objective optimization algorithm based on a genetic algorithm and stochastic local search is developed and evaluated. The single agent stochastic search local optimization algorithm has been modified to make it suitable for multi-objective optimization, where the local optimization is performed towards non-dominated points. The presented algorithm has been experimentally investigated by solving a set of well-known test problems, and evaluated according to several metrics for measuring the performance of multi-objective optimization algorithms. Results of the experimental investigation are presented and discussed.
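The key building block here is a local search that moves towards non-dominated points. A hedged sketch of one way to read that idea (this is an illustrative interpretation, not the paper's algorithm; the perturbation scheme, step size, and toy problem are assumptions): perturb the current point and accept the candidate whenever the current point does not Pareto-dominate it.

```python
# Hedged sketch of stochastic local search guided by Pareto dominance
# (illustrative interpretation; not the paper's single agent stochastic search).
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    return np.all(f1 <= f2) and np.any(f1 < f2)

def local_search(x, objectives, steps=100, sigma=0.1, rng=None):
    rng = rng or np.random.default_rng()
    fx = objectives(x)
    for _ in range(steps):
        cand = x + rng.normal(scale=sigma, size=x.shape)   # random perturbation
        fc = objectives(cand)
        if not dominates(fx, fc):       # accept candidates the current point does not dominate
            x, fx = cand, fc
    return x, fx

# Toy bi-objective problem: trade-off between distance to 0 and distance to 1.
objectives = lambda x: np.array([np.sum(x**2), np.sum((x - 1)**2)])
x, fx = local_search(np.zeros(3), objectives)
```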

    Personal State and Emotion Monitoring by Wearable Computing and Machine Learning

    One of the major scientific undertakings of the past few years has been exploring the interaction between humans and machines in mobile environments. Wearable computers, embedded in clothing or seamlessly integrated into everyday devices, are well placed to become the main gateway to personal health management. Current state-of-the-art devices are capable of monitoring basic physical or physiological parameters. Traditional health-care procedures depend on the physical presence of the patient and a medical specialist, which not only drives up overall costs but also reduces the quality of patients' lives, particularly for elderly patients. Typically, a patient has to go through the following steps: visit the clinic, register at reception, wait for their turn, go to the lab for the physiological measurements, wait for the medical expert's call, and finally receive the expert's feedback. In this work, we examined how to use existing technology to develop an e-health monitoring system, especially for heart patients, that supports the interaction between patient and physician even when the patient is not in the clinic. The supporting wearable health monitoring system (WHMS) should recognize physical activities and emotional states and transmit this information to the physician along with relevant physiological data, so that patients do not need to visit the clinic every time they need the physician's feedback. After discussion with medical experts, we identified the physical activities, emotional states and physiological data relevant to patients' examinations. A prototype of the proposed health monitoring system was implemented, taking physical activities, emotional states and physiological data into account.
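The activity-recognition component of such a system is commonly built from windowed sensor features and a classifier. The sketch below is a generic, assumed illustration only (made-up accelerometer windows and labels, not the WHMS prototype described above).

```python
# Generic, assumed sketch of activity recognition from accelerometer windows
# (not the WHMS implementation); data and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_window):
    """Simple statistical features from one window of 3-axis accelerometer data."""
    return np.concatenate([acc_window.mean(axis=0),
                           acc_window.std(axis=0),
                           [np.linalg.norm(acc_window, axis=1).mean()]])

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 50, 3))                    # 200 windows of 50 samples x 3 axes
labels = rng.choice(["rest", "walk", "run"], size=200)     # placeholder activity labels

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```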

    Voice Disorder Classification Based on Multitaper Mel Frequency Cepstral Coefficients Features

    Mel Frequency Cepstral Coefficients (MFCCs) are widely used to extract essential information from a voice signal and have become a popular feature extractor in audio processing. However, MFCC features are usually calculated from a single window (taper), which leads to large variance. This study investigates reducing that variance for the classification of two voice qualities (normal voice and disordered voice) using multitaper MFCC features, and compares their performance against newly proposed windowing techniques and the conventional single-taper technique. The results demonstrate that the adapted weighted Thomson multitaper method distinguishes normal voice from disordered voice better than the conventional single-taper (Hamming window) technique and the two newly proposed windowing methods. Multitaper MFCC features may thus be helpful in screening voices at risk for a pathology that must subsequently be confirmed.
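As a rough illustration of the multitaper idea (with uniform taper weights rather than the adapted weighting used in the study; frame length, sampling rate and taper count are assumptions), the sketch below estimates a frame's power spectrum by averaging eigenspectra from DPSS (Thomson) tapers, the quantity that would then feed the usual mel filterbank and DCT steps of MFCC extraction.

```python
# Hedged sketch: multitaper power-spectrum estimate for one speech frame,
# using DPSS (Thomson) tapers with uniform weights. The study uses an adapted
# weighting scheme; frame length and taper count here are assumptions.
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(frame, n_tapers=6, nw=3.0):
    tapers = dpss(len(frame), NW=nw, Kmax=n_tapers)        # shape (n_tapers, frame_len)
    eigenspectra = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
    return eigenspectra.mean(axis=0)                        # averaging over tapers reduces variance

def hamming_spectrum(frame):
    """Conventional single-taper (Hamming window) estimate, for comparison."""
    return np.abs(np.fft.rfft(np.hamming(len(frame)) * frame)) ** 2

frame = np.random.default_rng(0).normal(size=400)           # stand-in 25 ms frame at 16 kHz
mt = multitaper_spectrum(frame)                              # would feed a mel filterbank + DCT for MFCCs
```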