266 research outputs found

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination under varying environments and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking controller for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication within a team of robots with swarming behavior for musical creation.
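As an illustration of the kind of composite reward described above, here is a minimal Python sketch combining a flocking-maintenance term, a leader-navigation term, and a collision penalty; all weights, distance thresholds, and function names are hypothetical stand-ins, not the thesis's actual formulation:

```python
import math

def flocking_reward(positions, leader_pos, d_safe=0.5, d_flock=2.0,
                    w_flock=1.0, w_nav=1.0, w_collision=10.0):
    """One-step composite reward: flocking maintenance plus navigation toward
    the leader, minus a collision penalty. Weights and thresholds here are
    illustrative placeholders."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(positions)
    pairs = [dist(positions[i], positions[j])
             for i in range(n) for j in range(i + 1, n)]
    # flocking maintenance: penalize deviation from the desired spacing
    flock_term = -w_flock * sum(abs(d - d_flock) for d in pairs) / len(pairs)
    # navigation: stay close to the leader on average
    nav_term = -w_nav * sum(dist(p, leader_pos) for p in positions) / n
    # collision penalty for every pair closer than the safety distance
    collision_term = -w_collision * sum(d < d_safe for d in pairs)
    return flock_term + nav_term + collision_term
```

A well-spread formation near the leader scores higher than a colliding cluster, which is the gradient signal a DDPG critic would learn from.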

    An interactive product development model in remanufacturing environment: a chaos-based artificial bee colony approach

    This research presents an interactive product development model in a remanufacturing environment. The product development model defines a quantitative value model that considers product design and development tasks and the value attributes responsible for describing the functions of the product. At the last stage of the product development process, the remanufacturing feasibility of used components is incorporated. A salient feature of this consideration is that it accounts for variability in the cost, weight, and size of the constituent components depending on their types and physical states. Further, this research focuses on the reverse logistics paradigm to address the environmental management and economic concerns of the manufacturing industry after the product is launched and sold in the market. Moreover, the model is extended by integrating it with RFID technology. This RFID-embedded model is aimed at analyzing the economic impact of having a real-time system with reduced inventory shrinkage, reduced processing time, reduced labor cost, improved process accuracy, and other directly measurable benefits. Considering the computational complexity involved in the product development process and reverse logistics, this research proposes a Self-Guided Algorithms & Control (S-CAG) approach for the product development model and a Chaos-based Interactive Artificial Bee Colony (CI-ABC) approach for the remanufacturing model. Illustrative examples are presented to test the efficacy of the models. Numerical results from using S-CAG and CI-ABC for optimal performance are presented and analyzed. The results clearly reveal the efficacy of the proposed algorithms when applied to the underlying problems. --Abstract, page iv
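To give a flavor of the chaos-based ingredient in an approach like CI-ABC, the sketch below replaces the uniform random coefficient of a plain artificial bee colony neighbor search with a logistic-map sequence; the single-phase structure, function names, and parameters are illustrative simplifications, not the algorithm proposed in this research:

```python
import random

def logistic_map(x, r=4.0):
    """One iteration of the logistic map; with r = 4 the orbit is chaotic on (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_abc(objective, dim, bounds, n_bees=10, iters=200, seed=1):
    """Simplified chaos-based ABC: each employed bee perturbs one dimension of
    its food source using a chaotic (logistic-map) coefficient in place of a
    uniform draw, followed by greedy selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bees)]
    fits = [objective(f) for f in foods]
    chaos = rng.uniform(0.01, 0.99)          # chaotic driver state
    for _ in range(iters):
        for i in range(n_bees):
            chaos = logistic_map(chaos)
            k = rng.randrange(n_bees)        # random partner food source
            j = rng.randrange(dim)           # random dimension to perturb
            cand = foods[i][:]
            phi = 2.0 * chaos - 1.0          # chaotic coefficient in [-1, 1]
            cand[j] += phi * (foods[i][j] - foods[k][j])
            cand[j] = min(hi, max(lo, cand[j]))
            f = objective(cand)
            if f < fits[i]:                  # keep the better source
                foods[i], fits[i] = cand, f
    best = min(range(n_bees), key=fits.__getitem__)
    return foods[best], fits[best]
```

The chaotic sequence replaces pseudo-random draws so that the search coefficients are deterministic yet non-repeating, a common motivation for chaos-enhanced metaheuristics.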

    Cooperative Particle Swarm Optimization for Combinatorial Problems

    A particularly successful line of research for numerical optimization is the well-known computational paradigm of particle swarm optimization (PSO). In the PSO framework, candidate solutions are represented as particles that have a position and a velocity in a multidimensional search space. The direct representation of a candidate solution as a point that flies through hyperspace (i.e., Rn) seems to strongly predispose the PSO toward continuous optimization. However, while some attempts have been made toward developing PSO algorithms for combinatorial problems, these techniques usually encode candidate solutions as permutations instead of points in search space and rely on additional local search algorithms. In this dissertation, I present extensions to PSO that, by incorporating a cooperative strategy, allow the PSO to solve combinatorial problems. The central hypothesis is that by allowing a set of particles, rather than a single particle, to represent a candidate solution, combinatorial problems can be solved by collectively constructing solutions. The cooperative strategy partitions the problem into components, where each component is optimized by an individual particle. Particles move in continuous space and communicate through a feedback mechanism that guides them in assessing their individual contribution to the overall solution. Three new PSO-based algorithms are proposed. Shared-space CCPSO and multi-space CCPSO provide two new cooperative strategies to split the combinatorial problem, and both models are tested on proven NP-hard problems. Multimodal CCPSO extends these combinatorial PSO algorithms to efficiently sample the search space in problems with multiple global optima. Shared-space CCPSO was evaluated on an abductive problem-solving task: the construction of a parsimonious set of independent hypotheses in diagnostic problems with direct causal links between disorders and manifestations. Multi-space CCPSO was used to solve a protein structure prediction subproblem, side-chain packing. Both models are evaluated against provably optimal solutions, and the results show that both proposed PSO algorithms are able to find optimal or near-optimal solutions. The exploratory ability of multimodal CCPSO is assessed by evaluating both the quality and the diversity of the solutions obtained in a protein sequence design problem, a highly multimodal problem. These results provide evidence that the extended PSO algorithms are capable of dealing with combinatorial problems without having to hybridize the PSO with other local search techniques or sacrifice the concept of particles moving through a continuous search space.
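The cooperative idea — sub-swarms jointly constructing one solution and receiving feedback through a shared context — can be sketched as follows. This is a generic cooperative-coevolution PSO on a separable test function with hypothetical parameters, not the dissertation's shared-space or multi-space CCPSO:

```python
import random

def cooperative_pso(objective, dim, n_parts=2, swarm=8, iters=100, seed=3):
    """Toy cooperative PSO: the search space is split into components, each
    optimized by its own sub-swarm; a particle is evaluated by plugging its
    component into a shared context vector (the feedback mechanism)."""
    rng = random.Random(seed)
    sub = dim // n_parts
    w, c1, c2 = 0.6, 1.5, 1.5
    pos = [[[rng.uniform(-5, 5) for _ in range(sub)] for _ in range(swarm)]
           for _ in range(n_parts)]
    vel = [[[0.0] * sub for _ in range(swarm)] for _ in range(n_parts)]
    pbest = [[p[:] for p in comp] for comp in pos]
    context = [comp[0][:] for comp in pos]   # current representative per part

    def full(part, cand):
        """Assemble a complete solution from the context and one component."""
        v = [x for c in context for x in c]
        v[part * sub:(part + 1) * sub] = cand
        return objective(v)

    pbest_f = [[full(p, pb) for pb in pbest[p]] for p in range(n_parts)]
    for _ in range(iters):
        for p in range(n_parts):
            g = min(range(swarm), key=pbest_f[p].__getitem__)
            for i in range(swarm):
                for d in range(sub):
                    vel[p][i][d] = (w * vel[p][i][d]
                                    + c1 * rng.random() * (pbest[p][i][d] - pos[p][i][d])
                                    + c2 * rng.random() * (pbest[p][g][d] - pos[p][i][d]))
                    pos[p][i][d] += vel[p][i][d]
                f = full(p, pos[p][i])
                if f < pbest_f[p][i]:
                    pbest[p][i], pbest_f[p][i] = pos[p][i][:], f
            g = min(range(swarm), key=pbest_f[p].__getitem__)
            context[p] = pbest[p][g][:]      # feed the best component back
    best = [x for c in context for x in c]
    return best, objective(best)
```

Each sub-swarm only ever sees the fitness of the assembled solution, which is exactly the feedback channel through which particles assess their contribution to the whole.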

    Task scheduling for application integration: A strategy for large volumes of data

    Enterprise Application Integration is the research field that provides methodologies, techniques, and tools for modelling and implementing integration processes. An integration process performs the orchestration of a set of applications to keep them synchronised or to allow the creation of new features. It can be represented by a workflow composed of tasks and communication channels. Integration platforms are tools for the design and execution of integration processes, in which the runtime system is the component responsible for executing the tasks and allocating the computational resources that perform them. The processing of a large volume of data, corresponding to the execution of millions of tasks, can cause overload situations, characterised by the accumulation of tasks in internal queues awaiting computational resources in the runtime system, resulting in unacceptable response times for external applications and users. Our research hypothesis is that the runtime systems of integration platforms use simplistic heuristics for scheduling tasks, which do not allow them to maintain acceptable levels of performance in overload situations. In this research work, we developed (i) a representation for integration processes, (ii) a characterisation of their task schedules, (iii) a heuristic to deal with overload situations, (iv) a mathematical model for a performance metric of the execution of integration processes, and (v) a simulation tool for task scheduling heuristics. Our results indicate that, in overload situations, our heuristic promotes a balanced workload distribution and an increase in the performance of the execution of the integration processes.
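A minimal example of a scheduling heuristic that promotes a balanced workload distribution, in the spirit of the overload problem above, is greedy least-loaded assignment; the function, its parameters, and the LPT ordering are illustrative assumptions, not the thesis's actual heuristic:

```python
import heapq

def least_loaded_schedule(task_costs, n_workers):
    """Greedy balanced scheduling: each task goes to the currently least-loaded
    worker, tracked with a min-heap keyed by accumulated load."""
    heap = [(0.0, w) for w in range(n_workers)]     # (load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_workers)]
    # longest-processing-time-first ordering improves balance for greedy assignment
    for tid, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(tid)
        heapq.heappush(heap, (load + cost, w))
    makespan = max(sum(task_costs[t] for t in a) for a in assignment)
    return assignment, makespan
```

Under overload, this kind of policy keeps internal queues from piling up behind a single busy resource, which is the balancing behavior the abstract attributes to its heuristic.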

    Dynamic Core Community Detection and Information Diffusion Processes on Networks

    Interest in network science has been increasingly shared among various research communities due to its broad range of applications. Many real-world systems can be abstracted as networks, groups of nodes connected by pairwise edges; examples include friendship networks, metabolic networks, and the World Wide Web, among others. Two of the main research areas in network science that have received a lot of attention are community detection and information diffusion. For community detection, many well-developed algorithms are available for static networks, for example, spectral partitioning and modularity-based optimization algorithms. As real-world data becomes richer, community detection in temporal networks becomes more and more desirable, and algorithms such as tensor decomposition and generalized modularity optimization have been developed. One scenario not well investigated is when the core community structure persists over long periods of time with possible noisy perturbations and changes only over short time intervals. The contribution of this thesis in this area is a new algorithm based on low-rank component recovery of adjacency matrices that identifies phase-transition time points and improves the accuracy of core community structure recovery. As for information diffusion, it was traditionally studied as an epidemic process using either threshold models or independent-interaction models. But the mechanism of information diffusion differs from epidemic processes such as disease transmission because of the reluctance to pass on stale news; to address this issue, models such as the DK model were proposed, taking into consideration the declining willingness of spreaders to diffuse information as time goes by. However, this does not capture cases such as information receivers losing interest, as in viral marketing. The contribution of this thesis in this area is the proposal of two new models, the susceptible-informed-immunized (SIM) model and the exponentially time-decaying susceptible-informed (SIT) model, which capture the intrinsic time value of information from both the spreader and the receiver points of view. Rigorous analysis of the dynamics of the two models was performed, based mainly on mean-field theory. The third contribution of this thesis is on information diffusion optimization. Controlling information diffusion has been widely studied because of its important applications in areas such as social census, disease control, and marketing. Traditionally, the problem is formulated as identifying a set of k seed nodes, informed initially, so as to maximize the diffusion size. Heuristic algorithms have been developed to find approximate solutions for this NP-hard problem, and measures such as k-shell, node degree, and centrality have been used to facilitate the search for optimal solutions. The contribution of this thesis in this field is to design a more realistic objective function and apply a binary particle swarm optimization algorithm to this combinatorial optimization problem. Instead of fixing the seed-set size and maximizing the diffusion size, we maximize the profit, defined as the revenue, which is simply the diffusion size, minus the cost of setting those seed nodes, which is designed as a function of the degrees of the seed nodes or a measure similar to node centrality. Because of the powerful algorithm, we were able to study complex scenarios such as information diffusion optimization on multilayer networks. PhD, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145937/1/wbao_1.pd
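The profit objective and binary PSO described above can be sketched as follows; the one-hop diffusion approximation, the degree-based cost, and all parameter values are simplifying assumptions for illustration, not the thesis's simulation models:

```python
import math
import random

def profit(seeds, adj, cost_per_degree=0.4):
    """Profit = diffusion size minus seeding cost. Diffusion is approximated
    by the seeds plus their direct neighbours; a seed's cost grows with its
    degree (both are illustrative stand-ins)."""
    reached = set(seeds)
    for s in seeds:
        reached.update(adj[s])
    return len(reached) - sum(cost_per_degree * len(adj[s]) for s in seeds)

def binary_pso(adj, n_particles=12, iters=60, seed=7):
    """Minimal binary PSO: real-valued velocities are squashed by a sigmoid
    into bit-flip probabilities for the seed-set indicator vector."""
    rng = random.Random(seed)
    n = len(adj)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.random() < 0.3 for _ in range(n)] for _ in range(n_particles)]
    V = [[0.0] * n for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    Pf = [profit([i for i, b in enumerate(x) if b], adj) for x in X]
    for _ in range(iters):
        g = max(range(n_particles), key=Pf.__getitem__)
        for i in range(n_particles):
            for d in range(n):
                v = (0.7 * V[i][d]
                     + 1.4 * rng.random() * (P[i][d] - X[i][d])
                     + 1.4 * rng.random() * (P[g][d] - X[i][d]))
                V[i][d] = max(-6.0, min(6.0, v))   # clamp velocities
                X[i][d] = rng.random() < sig(V[i][d])
            f = profit([j for j, b in enumerate(X[i]) if b], adj)
            if f > Pf[i]:
                P[i], Pf[i] = X[i][:], f
    g = max(range(n_particles), key=Pf.__getitem__)
    return [i for i, b in enumerate(P[g]) if b], Pf[g]
```

Because the seed set is encoded as a bit vector, no fixed seed-set size is imposed: the profit term itself trades off extra reach against the degree-based cost of each additional seed.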

    Distributed and Lightweight Meta-heuristic Optimization Method for Complex Problems

    The world is becoming more complex every day. Resources are limited, and using them efficiently is one of the most important requirements. Finding an efficient and optimal solution to complex problems requires practical methods. During the last decades, several optimization approaches have been presented that can be applied to different optimization problems and that achieve different performance on various problems. Different parameters can have a significant effect on the results, such as the type of search space. Between the main categories of optimization methods (deterministic and stochastic methods), stochastic optimization methods work more efficiently on big, complex problems than deterministic methods. But in highly complex problems, stochastic optimization methods also have some issues, such as long execution times, convergence to local optima, incompatibility with distributed systems, and dependence on the type of search space. Therefore, this thesis presents a distributed and lightweight metaheuristic optimization method (MICGA) for complex problems, focusing on four main tracks. 1) The primary goal is to improve the execution time with MICGA. 2) The proposed method increases the stability and reliability of the results by using a multi-population strategy. 3) MICGA is compatible with distributed systems. 4) Finally, MICGA is applied to different types of optimization problems with different kinds of search spaces (continuous, discrete, and order-based optimization problems). MICGA has been compared with other efficient optimization approaches. The results show that the proposed work achieves clear improvements on the main issues of stochastic methods mentioned above.
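The multi-population strategy mentioned in track 2 can be illustrated with a toy island-model genetic algorithm, in which sub-populations evolve independently and periodically exchange their best individuals; the operators, rates, and names here are a hypothetical sketch, not MICGA itself:

```python
import random

def island_ga(fitness, genome_len, n_islands=3, pop=12, gens=40,
              migrate_every=10, seed=5):
    """Toy island-model GA: several sub-populations evolve in isolation and
    periodically migrate their best individual to the next island in a ring."""
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop)]
               for _ in range(n_islands)]

    def step(popu):
        scored = sorted(popu, key=fitness, reverse=True)
        nxt = scored[:2]                       # elitism: keep the two best
        while len(nxt) < pop:
            a, b = rng.sample(scored[:6], 2)   # parents from the top of the pool
            cut = rng.randrange(1, genome_len) # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # occasional bit-flip mutation
                j = rng.randrange(genome_len)
                child = child[:j] + [1 - child[j]] + child[j + 1:]
            nxt.append(child)
        return nxt

    for g in range(1, gens + 1):
        islands = [step(p) for p in islands]
        if g % migrate_every == 0:             # ring migration of the best
            bests = [max(p, key=fitness) for p in islands]
            for i, p in enumerate(islands):
                p[rng.randrange(pop)] = bests[(i - 1) % n_islands][:]
    best = max((ind for p in islands for ind in p), key=fitness)
    return best, fitness(best)
```

Because the islands only exchange individuals at migration points, each can run on a separate machine with little communication, which is the distributed-compatibility idea in track 3.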

    Multi-Criteria Performance Evaluation and Control in Power and Energy Systems

    The role of intuition and human preferences is often overlooked in the autonomous control of power and energy systems. However, the growing operational diversity of many systems, such as microgrids, electric/hybrid-electric vehicles, and maritime vessels, has created a need for more flexible control and optimization methods. In order to develop such flexible control methods, the role of human decision makers and their desired performance metrics must be studied in power and energy systems. This dissertation investigates the concept of multi-criteria decision making as a gateway to integrate human decision makers and their opinions into complex mathematical control laws. This research takes two major steps to algorithmically integrate human preferences into control environments. MetaMetric (MM) performance benchmark: considering the interrelations of mathematical and psychological convergence, and the potential conflict of opinion between the control designer and the end user, a novel holistic performance benchmark, denoted MM, is developed to evaluate control performance in real time. MM uses sensor measurements and implicit human opinions to construct a unique criterion that benchmarks the system's performance characteristics. MM decision support system (DSS): the concept of MM is incorporated into multi-objective evolutionary optimization algorithms as their DSS. The DSS's role is to guide and sort the optimization decisions such that they reflect the best outcome desired by the human decision maker together with mathematical considerations. A diverse set of case studies, including a ship power system, a terrestrial power system, and a vehicular traction system, is used to validate the approaches proposed in this work. Additionally, the MM DSS is designed in a modular way, such that it is not specific to any underlying evolutionary optimization algorithm.
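A minimal sketch of the weighted multi-criteria scoring idea — normalizing raw performance metrics and combining them with stakeholder weights — is shown below; the function and its inputs are illustrative assumptions, not the actual MetaMetric construction:

```python
def metametric(candidates, weights, prefer_low):
    """Toy multi-criteria score: min-max-normalize each metric across the
    candidate controllers, orient it so that higher is better, and combine
    with stakeholder weights into a single score per candidate."""
    names = list(weights)
    lo = {m: min(c[m] for c in candidates) for m in names}
    hi = {m: max(c[m] for c in candidates) for m in names}

    def norm(c, m):
        span = (hi[m] - lo[m]) or 1.0        # avoid division by zero
        x = (c[m] - lo[m]) / span
        return 1.0 - x if m in prefer_low else x

    return [sum(weights[m] * norm(c, m) for m in names) for c in candidates]
```

The weight vector is where a human decision maker's preferences enter: changing the weights reorders the candidates without touching the underlying control laws or sensor data.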

    Advances in De Novo Drug Design: From Conventional to Machine Learning Methods

    De novo drug design is a computational approach that generates novel molecular structures from atomic building blocks with no a priori relationships. Conventional methods include structure-based and ligand-based design, which depend on the properties of the active site of a biological target or on its known active binders, respectively. Artificial intelligence, including machine learning, is an emerging field that has positively impacted the drug discovery process. Deep reinforcement learning is a subdivision of machine learning that combines artificial neural networks with reinforcement-learning architectures. This method has been successfully employed to develop novel de novo drug design approaches using a variety of artificial networks, including recurrent neural networks, convolutional neural networks, generative adversarial networks, and autoencoders. This review article summarizes advances in de novo drug design, from conventional growth algorithms to advanced machine-learning methodologies, and highlights hot topics for further development. Peer reviewed

    Computational Optimizations for Machine Learning

    The present book contains the 10 articles finally accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, covering a wide range of topics connected to the theory and applications of machine learning, neural networks, and artificial intelligence. These topics include, among others, various classes of machine learning, such as supervised, unsupervised, and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVMs, k-means clustering, Q-learning, temporal difference, deep adversarial networks, and more. It is hoped that the book will be interesting and useful to those developing mathematical algorithms and applications in the domain of artificial intelligence and machine learning, as well as to those with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, which has nowadays permeated almost all sectors of human life and activity.