422 research outputs found

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in robot swarms with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be achieved by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking and foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, a reward function is designed that combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy, using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication within a team of robots with swarming behavior for musical creation.
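    The reward design described above lends itself to a compact illustration. Below is a minimal Python sketch of such a per-step reward for a leader-follower layout; the helper name flocking_reward, the weights, and the distance thresholds are illustrative assumptions, not the thesis' implementation.

import numpy as np

def flocking_reward(positions, leader_pos, d_ref=1.0, d_safe=0.3,
                    w_flock=1.0, w_mutual=0.5, w_collision=10.0):
    """Combine global flocking maintenance, a mutual spacing reward,
    and a collision penalty into one scalar reward (all weights assumed)."""
    positions = np.asarray(positions, dtype=float)    # (N, 2) follower positions
    leader_pos = np.asarray(leader_pos, dtype=float)  # (2,) leader position

    # Global flocking maintenance: cohesion toward the leader.
    r_flock = -w_flock * np.mean(np.linalg.norm(positions - leader_pos, axis=1))

    # Mutual reward: keep pairwise distances close to a reference spacing.
    pairwise = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    r_mutual = -w_mutual * np.mean(np.abs(pairwise[iu] - d_ref))

    # Collision penalty: strongly punish any pair closer than the safety radius.
    r_collision = -w_collision * np.sum(pairwise[iu] < d_safe)

    return r_flock + r_mutual + r_collision

# Example: three followers around a leader at the origin.
print(flocking_reward([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]], [0.0, 0.0]))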

    A Framework for Life Cycle Cost Estimation of a Product Family at the Early Stage of Product Development

    A cost estimation method is required to estimate the life cycle cost of a product family at the early stage of product development in order to evaluate the product family design. Existing cost estimation techniques have difficulty estimating the life cycle cost of a product family at this early stage. This paper proposes a framework that combines a knowledge-based system with activity-based costing to estimate the life cycle cost of a product family at the early stage of product development. The inputs of the framework are the product family structure and its sub-functions. The output of the framework is the life cycle cost of the product family, consisting of all costs at each product family level and the costs of each product life cycle stage. The proposed framework provides a life cycle cost estimation tool for a product family at the early stage of product development using high-level information as its input. The framework makes it possible to estimate the life cycle cost of various product families that use any type of product structure. It provides detailed information on the activity and resource costs of both parts and products, which can assist the designer in analyzing the cost of the product family design. In addition, it can reduce the amount of information and time required to construct the cost estimation system.
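    The cost roll-up in such a framework follows the usual activity-based costing logic: each life cycle stage is a set of activities, each activity cost is a cost-driver quantity times a driver rate, and stage costs are summed over the family. The Python sketch below illustrates that roll-up only; the stage names, activities, quantities, and rates are invented placeholders, not data from the paper.

# Activity-based costing roll-up: cost(activity) = driver quantity x driver rate,
# cost(stage) = sum of its activity costs, life cycle cost = sum over stages.
life_cycle = {
    "design":      [("CAD modelling", 40, 55.0), ("prototyping", 3, 800.0)],
    "manufacture": [("machining", 120, 35.0), ("assembly", 60, 28.0)],
    "use":         [("maintenance", 5, 150.0)],
    "end_of_life": [("recycling", 1, 90.0)],
}  # entries: (activity name, driver quantity, rate per driver unit) -- illustrative values

def stage_cost(activities):
    return sum(quantity * rate for _name, quantity, rate in activities)

stage_costs = {stage: stage_cost(acts) for stage, acts in life_cycle.items()}
total_lcc = sum(stage_costs.values())

for stage, cost in stage_costs.items():
    print(f"{stage:>12}: {cost:10.2f}")
print(f"{'total LCC':>12}: {total_lcc:10.2f}")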

    SOLID-SHELL FINITE ELEMENT MODELS FOR EXPLICIT SIMULATIONS OF CRACK PROPAGATION IN THIN STRUCTURES

    Crack propagation in thin shell structures due to cutting is conveniently simulated using explicit finite element approaches, in view of the high nonlinearity of the problem. Solid-shell elements are usually preferred for the discretization in the presence of complex material behavior and degradation phenomena such as delamination, since they allow for a correct representation of the thickness geometry. However, in solid-shell elements the small thickness leads to a very high maximum eigenfrequency, which implies very small stable time steps. A new selective mass scaling technique is proposed to increase the time-step size without affecting accuracy. New "directional" cohesive interface elements are used in conjunction with selective mass scaling to account for the interaction with a sharp blade in cutting processes of thin ductile shells.
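    The time-step bottleneck can be made concrete with the standard stability estimate for explicit integration. The relations below are the usual textbook ones, not formulas from the paper: \ell_{\min} is the smallest element dimension (the thickness for a solid-shell element), c the elastic wave speed, and \beta an assumed scaling factor applied to the mass of the thickness-related modes.

\Delta t_{\mathrm{cr}} \le \frac{2}{\omega_{\max}} \approx \frac{\ell_{\min}}{c},
\qquad c = \sqrt{E/\rho},
\qquad \Delta t_{\mathrm{cr}}^{\,\beta} \approx \sqrt{\beta}\,\Delta t_{\mathrm{cr}}.

    Because only the thickness-related, highest-frequency modes receive extra mass, the membrane and bending response that governs accuracy is left essentially unchanged, which is the rationale for a selective rather than uniform mass scaling.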

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that limits traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
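    The phrase "search beyond the local optimality" can be illustrated with the simplest metaheuristic skeleton, simulated annealing, which occasionally accepts worsening moves. This is a generic Python sketch on a toy one-dimensional problem, not a method from any paper in the series; the cooling schedule and toy objective are arbitrary assumptions.

import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Generic metaheuristic skeleton: worsening moves are accepted with a
    temperature-dependent probability, letting the search escape local optima."""
    x = best = x0
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
            if cost(x) < cost(best):
                best = x
        t *= cooling  # gradually reduce the chance of accepting worse solutions
    return best

# Toy usage: minimise a multimodal one-dimensional function.
f = lambda v: v * v + 10 * math.sin(3 * v)
step = lambda v: v + random.uniform(-0.5, 0.5)
print(simulated_annealing(f, step, x0=5.0))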

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend now necessitates a highly sophisticated use of high-performance computers for data analyses, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five other sections cover widely studied topics of plasma science and technologies, i.e., basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents the summary and outlook. Despite the impressive recent progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and identify where further innovations are required.

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
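    The general MR streaming pattern for the discrete (Conway) case can be sketched in a few lines of Python, following the standard Hadoop Streaming convention of tab-separated key-value pairs on stdin/stdout. This is an illustrative sketch without the strip-partitioning optimization evaluated in the paper; the file names and cell encoding are assumptions, not the authors' code.

#!/usr/bin/env python3
# mapper.py -- one generation of Conway's life; input: one live cell "x y" per line.
import sys

for line in sys.stdin:
    if not line.strip():
        continue
    x, y = map(int, line.split())
    print(f"{x},{y}\tA")            # flag: this cell is currently alive
    for dx in (-1, 0, 1):           # contribute one neighbour count to each of
        for dy in (-1, 0, 1):       # the 8 surrounding cells
            if dx or dy:
                print(f"{x + dx},{y + dy}\t1")

#!/usr/bin/env python3
# reducer.py -- sums neighbour counts per cell (keys arrive sorted, as in Hadoop
# Streaming; for a local test, pipe mapper output through `sort`) and applies
# Conway's rules to emit the live cells of the next generation.
import sys
from itertools import groupby

def parse(line):
    key, value = line.rstrip("\n").split("\t")
    return key, value

for key, group in groupby(map(parse, sys.stdin), key=lambda kv: kv[0]):
    values = [v for _k, v in group]
    alive = "A" in values
    neighbours = sum(int(v) for v in values if v != "A")
    if neighbours == 3 or (alive and neighbours == 2):
        print(key.replace(",", "\t"))  # emitted in the same "x<TAB>y" cell format

    Strip partitioning (omitted here) would, presumably, assign contiguous strips of rows to workers so that only strip boundaries need to be exchanged between generations.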

    Integrating geological uncertainty and dynamic data into modelling procedures for fractured reservoirs

    Modelling, simulating and characterising flow through naturally fractured reservoirs is a multi-disciplinary effort. The scarcity of data, combined with the additional layer of complexity that fractures add to a reservoir, makes an efficient integration of all available data fundamental. However, the vast range of data types to be considered and the multitude of disciplines giving their input often result in communication barriers and individuals working within their comfort area, creating further challenges for uncertainty propagation. It is, however, critical for decision-making to develop geologically consistent reservoir models that recognise the challenges of simulating flow through systems with high permeability and scale contrasts, and to address the need for an ensemble of reservoir models that sufficiently covers geological uncertainties and their impact on fluid flow. In this work I developed several workflows for naturally fractured reservoir modelling that invite cross-disciplinary thinking by integrating geological uncertainties and dynamic data into the modelling procedure and foster ensemble modelling from the start. The workflows are tested on a synthetic field that is based upon a conceptual model for fold-related fracture distributions. The first workflow uses multiple-point statistics to efficiently model reservoir-scale fracture distributions by upscaling discrete fracture networks and converting them into training images. To cover the impact of fracture-related geological uncertainties on fluid flow efficiently, flow diagnostics were used to screen the training images and then cluster and select them according to their flow response for further reservoir modelling. The second workflow proposes a novel reservoir modelling technique that considers both static and dynamic data and utilises entropy to generate a diverse ensemble of reservoir models that all match an objective defined at the outset. Finally, an agent-based reservoir modelling workflow is introduced, in which independent but interacting agents within a reservoir model follow a set of rules to generate reservoir models that take into account geological prior information and expected dynamic flow responses to drive modelling efforts. Overall, I demonstrated that combining approaches from various disciplines into cross-disciplinary workflows provides great potential for subsurface characterisation. Which workflow to adopt within a project depends on various boundary conditions: the availability of data and time, the confidence in the understanding of the reservoir, and the ultimate goal behind the modelling exercise. These factors determine whether the simpler, more parametric multiple-point statistics workflow, the entropy-driven workflow that utilises static and dynamic data, or the more data-driven agent-based modelling workflow is the right choice.
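    The screen-then-cluster step of the first workflow can be illustrated generically: given one flow-diagnostics response per candidate training image (for example a discretised flow-capacity versus storage-capacity curve), cluster the responses and keep one representative per cluster. The Python sketch below shows only that selection logic; the placeholder data, the use of scikit-learn's KMeans, and the cluster count are assumptions, not the thesis' actual pipeline.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assumed input: one flow-diagnostics response per candidate training image,
# e.g. a flow-capacity curve sampled at 20 points (placeholder data here).
n_candidates, n_points = 50, 20
responses = np.sort(rng.random((n_candidates, n_points)), axis=1)

# Cluster the responses and keep one representative training image per cluster,
# so the retained set spans the range of dynamic behaviour at reduced cost.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(responses)

representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dist_to_centre = np.linalg.norm(responses[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(dist_to_centre)]))

print("selected training images:", sorted(representatives))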

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, on the contrary, guides the course of low-level heuristics to search beyond the local optimality that limits traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.