
    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., R^n) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made towards developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than one single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism. This feedback mechanism guides them in the assessment of their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of parsimonious sets of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving throughout a continuous search space.
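
    For reference, the update the abstract alludes to is the canonical global-best PSO step: each particle's velocity is pulled toward its own best position and the swarm's best position, and the particle then moves by that velocity. The sketch below is a minimal NumPy illustration of that standard update for continuous minimisation; the parameter values, bounds, and sphere objective are illustrative assumptions, not taken from the dissertation, whose cooperative CCPSO variants additionally partition the problem so that each particle optimises one component and is scored through the shared feedback mechanism.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical (global-best) PSO for minimisation in R^dim."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimise the 10-dimensional sphere function.
best, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=10)
```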

    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of the IMS.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse from the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, `programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other, increasing detection rates and lowering false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
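
    As a toy illustration of the rule-based (`programmed') style of event correlation discussed above, the sketch below raises an alert when a hypothetical pair of events occurs for the same account within a short time window; the event names, record fields, and window length are invented for illustration and are not drawn from the report.

```python
from datetime import datetime, timedelta

# Each event is (timestamp, event_type, account_id); names are hypothetical.
WINDOW = timedelta(minutes=5)

def correlate(events, trigger="LOGIN_FAIL", confirm="CALL_FORWARD_SET"):
    """Flag accounts where `confirm` follows `trigger` within WINDOW."""
    alerts = []
    pending = {}  # account_id -> time of the most recent trigger event
    for ts, etype, account in sorted(events):
        if etype == trigger:
            pending[account] = ts
        elif etype == confirm and account in pending:
            if ts - pending[account] <= WINDOW:
                alerts.append((account, pending[account], ts))
    return alerts

events = [
    (datetime(2024, 1, 1, 10, 0), "LOGIN_FAIL", "A42"),
    (datetime(2024, 1, 1, 10, 3), "CALL_FORWARD_SET", "A42"),
]
print(correlate(events))  # one alert for account 'A42'
```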

    The 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies

    This publication comprises the papers presented at the 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland, on May 9-11, 1995. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.

    Partner selection in virtual enterprises

    Doctoral thesis in Industrial Engineering and Management. Faculdade de Engenharia, Universidade do Porto. 200

    Numerical and Evolutionary Optimization 2020

    This book was assembled after the 8th International Workshop on Numerical and Evolutionary Optimization (NEO) and represents a collection of papers on the intersection of the two research areas covered at this workshop: numerical optimization and evolutionary search techniques. While the focus is on the design of fast and reliable methods that lie across these two paradigms, the resulting techniques are strongly applicable to a broad class of real-world problems, such as pattern recognition, routing, energy, production lines, prediction, and modeling, among others. This volume is intended to serve as a useful reference for mathematicians, engineers, and computer scientists exploring current issues and solutions emerging from these mathematical and computational methods and their applications.

    Knowledge and Reasoning for Image Understanding

    Image Understanding is a long-established discipline in computer vision, which encompasses a body of advanced image processing techniques that are used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in the image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities that are required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question asking beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning. Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images. This survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods to solve several applications and show that these approaches benefit, in terms of accuracy and interpretability, from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that can combine vision, knowledge, and reasoning modules together and achieve large performance boosts over state-of-the-art methods. Doctoral Dissertation, Computer Science, 201
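
    Purely as an illustrative sketch, and not the probabilistic logical reasoning engine developed in the thesis, the snippet below shows how concepts detected by a vision module might be combined with a small commonsense fact base to answer a question that cannot be settled from the pixels alone; the detections, facts, relation names, and question are all invented.

```python
# Hypothetical detections from a vision module and a tiny commonsense KB.
detections = {"dog", "frisbee", "grass"}
kb = {
    ("dog", "capable_of", "play fetch"),
    ("frisbee", "used_for", "play fetch"),
    ("grass", "located_at", "park"),
}

def answer(question_relation, detections, kb):
    """Return KB objects related to any detected concept by the given relation."""
    return {obj for subj, rel, obj in kb
            if rel == question_relation and subj in detections}

# "What activity is likely happening?" -> reason beyond what is directly seen.
print(answer("capable_of", detections, kb))  # {'play fetch'}
```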

    Regularized model learning in EDAs for continuous and multi-objective optimization

    Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small-medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
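
    Below is a minimal sketch of one generation of a Gaussian EDA with ℓ1-regularised model learning, in the spirit of the approach described above but not the thesis's exact algorithms: the fittest fraction of the population is used to fit a sparse Gaussian with scikit-learn's GraphicalLasso (an ℓ1 penalty on the precision matrix), and the next population is sampled from the fitted distribution. The population size, selection ratio, regularisation strength, and sphere objective are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def eda_generation(objective, population, alpha=0.1, sel_ratio=0.3, rng=None):
    """One generation of a Gaussian EDA with an l1-regularised model fit."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Evaluate and select the best fraction of the population (minimisation).
    fitness = np.apply_along_axis(objective, 1, population)
    n_sel = max(2, int(sel_ratio * len(population)))
    selected = population[np.argsort(fitness)[:n_sel]]
    # Fit a sparse Gaussian: the l1 penalty drives weak precision entries to zero.
    model = GraphicalLasso(alpha=alpha).fit(selected)
    # Sample the next population from the learned distribution.
    return rng.multivariate_normal(model.location_, model.covariance_,
                                   size=len(population))

# Example: one generation on a 10-variable sphere function.
rng = np.random.default_rng(1)
population = rng.uniform(-5.0, 5.0, size=(100, 10))
next_population = eda_generation(lambda x: float(np.sum(x ** 2)),
                                 population, rng=rng)
```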

    Clinical decision support: Knowledge representation and uncertainty management

    Doctoral Programme in Biomedical Engineering. Decision-making in clinical practice faces many challenges due to the inherent risks of being a health care professional. From medical error to undesired variations in clinical practice, the mitigation of these issues appears to be tightly connected to adherence to Clinical Practice Guidelines as evidence-based recommendations. The deployment of Clinical Practice Guidelines in computational systems for clinical decision support has the potential to positively impact health care. However, current approaches to Computer-Interpretable Guidelines exhibit a set of issues that leave them wanting. These issues are related to the lack of expressiveness of their underlying models, the complexity of knowledge acquisition with their tools, the absence of support for the clinical decision-making process, and the style of communication of Clinical Decision Support Systems implementing Computer-Interpretable Guidelines. Such issues are obstacles that prevent these systems from showing properties like modularity, flexibility, adaptability, and interactivity. All these properties reflect the concept of living guidelines. The purpose of this doctoral thesis is, thus, to provide a framework that enables the expression of these properties. The modularity property is conferred by the ontological definition of Computer-Interpretable Guidelines and the assistance in guideline acquisition provided by an editing tool, allowing for the management of multiple knowledge patterns that can be reused. Flexibility is provided by the representation primitives defined in the ontology, meaning that the model is adjustable to guidelines from different categories and specialities. As for adaptability, this property is conferred by mechanisms of Speculative Computation, which allow the Decision Support System not only to reason with incomplete information but also to adapt to changes of state, such as suddenly learning the missing information. The solution proposed for interactivity consists in embedding Computer-Interpretable Guideline advice directly into the daily routine of health care professionals and providing a set of reminders and notifications that help them keep track of their tasks and responsibilities. Together, these solutions make up the CompGuide framework for the expression of Clinical Decision Support Systems based on Computer-Interpretable Guidelines. The work of the PhD candidate Tiago José Martins Oliveira is supported by a grant from FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) with the reference SFRH/BD/85291/2012
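
    The Speculative Computation mechanism mentioned above lets the system offer tentative advice based on default values for missing clinical data and revise that advice once the real values become known. The toy sketch below illustrates only that revise-on-arrival behaviour; the condition, default values, and thresholds are invented and do not come from the CompGuide framework.

```python
# Default assumptions used while real patient data are unavailable (hypothetical).
DEFAULTS = {"creatinine": 1.0, "age": 50}

def recommend(facts):
    """Toy guideline rule: reduce the dose when renal function is impaired."""
    known = {**DEFAULTS, **facts}   # speculative step: fall back to defaults
    if known["creatinine"] > 1.5 or known["age"] > 75:
        return "reduced dose"
    return "standard dose"

facts = {}                          # nothing measured yet
print(recommend(facts))             # tentative advice from defaults -> standard dose

facts["creatinine"] = 2.1           # the missing value becomes known
print(recommend(facts))             # advice is revised -> reduced dose
```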