2,522 research outputs found

    Data-driven modelling of biological multi-scale processes

    Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all of these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based upon this overview of state-of-the-art modelling approaches, we formulate key challenges in mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation-time reduction using reduced-order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript with a few ideas for the development of tailored multi-scale inference methods.
    Comment: This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers).
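    The computation-time reduction mentioned above is typically achieved by replacing the expensive multi-scale simulation with a cheap surrogate inside the inference loop. Below is a minimal Python sketch of that idea, assuming a scalar toy model and scikit-learn's GaussianProcessRegressor; the review prescribes no particular library, so all names here are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Stand-in for an expensive multi-scale simulation: maps one
        # parameter to one output (hypothetical toy model).
        def expensive_model(theta):
            return np.sin(3.0 * theta) + 0.1 * theta ** 2

        # Fit the surrogate on a small design of expensive evaluations.
        theta_train = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
        y_train = expensive_model(theta_train).ravel()
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
        surrogate.fit(theta_train, y_train)

        # During inference the surrogate replaces the simulation, e.g.
        # inside a Gaussian log-likelihood of observed data.
        def log_likelihood(theta, y_obs, sigma=0.1):
            y_pred = surrogate.predict(np.atleast_2d(theta))
            return -0.5 * np.sum((y_obs - y_pred) ** 2) / sigma ** 2

        y_obs = expensive_model(np.array([[0.7]])).ravel()
        print(log_likelihood(0.7, y_obs))  # near-zero residual at the true parameter

    Each likelihood call now costs a cheap surrogate prediction instead of a full multi-scale simulation, which is what makes otherwise intractable inference problems feasible.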

    Optimal treatment allocations in space and time for on-line control of an emerging infectious disease

    A key component in controlling the spread of an epidemic is deciding where, when and to whom to apply an intervention. We develop a framework for using data to inform these decisions in real time. We formalize a treatment allocation strategy as a sequence of functions, one per treatment period, that map up-to-date information on the spread of an infectious disease to a subset of locations where treatment should be allocated. An optimal allocation strategy optimizes some cumulative outcome, e.g. the number of uninfected locations, the geographic footprint of the disease or the cost of the epidemic. Estimation of an optimal allocation strategy for an emerging infectious disease is challenging because spatial proximity induces interference between locations, the number of possible allocations is exponential in the number of locations, and because disease dynamics and intervention effectiveness are unknown at outbreak. We derive a Bayesian on-line estimator of the optimal allocation strategy that combines simulation–optimization with Thompson sampling. The estimator proposed performs favourably in simulation experiments. This work is motivated by and illustrated using data on the spread of white nose syndrome, which is a highly fatal infectious disease devastating bat populations in North America.
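    The combination of Thompson sampling with simulation–optimization can be sketched compactly: sample disease-dynamics parameters from the current posterior, score candidate allocations by simulating the epidemic under that draw, and treat the best-scoring subset. The following Python sketch makes strong simplifications (a single Beta-distributed transmission probability, a random toy network, random-subset search); none of these specifics come from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_loc, budget = 20, 3
        adj = (rng.random((n_loc, n_loc)) < 0.15).astype(int)  # toy contact network
        infected = np.zeros(n_loc, dtype=bool)
        infected[0] = True
        a_post, b_post = 1.0, 1.0      # Beta posterior over transmission probability

        def simulate(beta, infected, treated, steps=5):
            """Roll the epidemic forward; return cumulative infections (to minimise)."""
            state = infected.copy()
            for _ in range(steps):
                pressure = adj @ state.astype(int)   # infected neighbours per location
                p = 1.0 - (1.0 - beta) ** pressure
                p[treated] *= 0.2                    # treatment cuts susceptibility
                state = state | (rng.random(n_loc) < p)
            return state.sum()

        for period in range(10):
            beta_draw = rng.beta(a_post, b_post)     # Thompson sample of the dynamics
            best_mask, best_score = None, np.inf
            for _ in range(50):                      # simulation-optimization step
                cand = rng.choice(n_loc, size=budget, replace=False)
                mask = np.zeros(n_loc, dtype=bool)
                mask[cand] = True
                score = simulate(beta_draw, infected, mask)
                if score < best_score:
                    best_mask, best_score = mask, score
            # ... apply best_mask, observe outcomes, update (a_post, b_post) ...

    The posterior update at the end of each period is what makes the estimator on-line: early periods explore because draws from a diffuse posterior vary widely, and later periods exploit as the posterior concentrates.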

    Environment Search Planning Subject to High Robot Localization Uncertainty

    As robots find applications in more complex roles, ranging from search and rescue to healthcare and services, they must be robust to greater levels of localization uncertainty and uncertainty about their environments. Without consideration for such uncertainties, robots will not be able to compensate accordingly, potentially leading to mission failure or injury to bystanders. This work addresses the task of searching a 2D area while reducing localization uncertainty. In this setting, the environment provides low-uncertainty pose updates from short-range beacons that cover only part of the environment. Elsewhere, the robot localizes using dead reckoning, relying on wheel encoders and yaw-rate information from a gyroscope. Outside the regions with position updates, localization error therefore grows unconstrained over time. The work contributes a Belief Markov Decision Process formulation for solving the search problem and evaluates the performance using Partially Observable Monte Carlo Planning (POMCP). Additionally, the work contributes an approximate Markov Decision Process formulation with a reduced-complexity state representation. The approximate problem is evaluated using value iteration. To provide a baseline, the Google OR-Tools package is used to solve the travelling salesman problem (TSP). Results are verified by simulating a differential-drive robot in the Gazebo simulation environment. POMCP results indicate planning can be tuned to prioritize constraining uncertainty at the cost of increasing path length. The MDP formulation provides consistently lower uncertainty with minimal increases in path length over the TSP solution. Both formulations show improved coverage outcomes.
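    The approximate MDP couples a coarse robot state (cell index plus a discretised uncertainty level that grows under dead reckoning and resets at beacon cells) with standard value iteration. A minimal Python sketch of that formulation, assuming a 1-D corridor, deterministic motion and made-up reward weights (the thesis's actual state representation and rewards are richer):

        import numpy as np

        n_cells, n_unc = 8, 5          # corridor cells x discretised uncertainty levels
        beacons = {0, 4}               # cells providing low-uncertainty pose updates
        gamma = 0.95                   # discount; actions: 0 = left, 1 = right

        def step(cell, unc, action):
            nxt = max(0, min(n_cells - 1, cell + (1 if action == 1 else -1)))
            # dead reckoning grows uncertainty each move; a beacon resets it
            nxt_unc = 0 if nxt in beacons else min(n_unc - 1, unc + 1)
            reward = 1.0 - 0.3 * nxt_unc  # progress pays, residual uncertainty costs
            return nxt, nxt_unc, reward

        V = np.zeros((n_cells, n_unc))
        for _ in range(200):           # value iteration (motion is deterministic here)
            V_new = np.empty_like(V)
            for c in range(n_cells):
                for u in range(n_unc):
                    q_values = []
                    for a in (0, 1):
                        nc, nu, r = step(c, u, a)
                        q_values.append(r + gamma * V[nc, nu])
                    V_new[c, u] = max(q_values)
            V = V_new

        nc0, nu0, r0 = step(0, 0, 1)
        print(V[0, 0], r0 + gamma * V[nc0, nu0])  # start value; Q-value of moving right

    Folding the uncertainty level into the state is what lets a plain MDP solver trade path length against localization error instead of treating search as a pure coverage problem.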

    Probabilistic and artificial intelligence modelling of drought and agricultural crop yield in Pakistan

    Pakistan is a drought-prone, agricultural nation with hydro-meteorological imbalances that increase the scarcity of water resources, thus constraining water availability and posing major risks to agricultural productivity and food security. Rainfall and drought are imperative matters of consideration for both hydrological and agricultural applications. The aim of this doctoral thesis is to advance new knowledge in designing hybridized probabilistic and artificial intelligence forecast models for rainfall, drought and crop yield within the agricultural hubs of Pakistan. The choice of these study regions is a strategic decision to focus on precision agriculture, given the importance of rainfall and drought events for agricultural crops in the socioeconomic activities of Pakistan. The outcomes of this PhD contribute to efficient modelling of seasonal rainfall, drought and crop yield to assist farmers and other stakeholders in making more strategic decisions for better management of climate risk in agriculture-reliant nations.

    Model Learning for Look-ahead Exploration in Continuous Control

    We propose an exploration method that incorporates look-ahead search over basic learnt skills and their dynamics, and use it for reinforcement learning (RL) of manipulation policies. Our skills are multi-goal policies learned in isolation in simpler environments using existing multi-goal RL formulations, analogous to options or macro-actions. Coarse skill dynamics, i.e., the state transition caused by a (complete) skill execution, are learnt and unrolled forward during look-ahead search. Policy search benefits from temporal abstraction during exploration, though it itself operates over low-level primitive actions, and thus the resulting policies do not suffer from the suboptimality and inflexibility caused by coarse skill chaining. We show that the proposed exploration strategy results in effective learning of complex manipulation policies faster than current state-of-the-art RL methods, and converges to better policies than methods that use options or parameterized skills as building blocks of the policy itself, as opposed to guiding exploration.
    Comment: This is a pre-print of our paper which is accepted at AAAI 201
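    The exploration step can be condensed to: learn one coarse transition model per skill, unroll those models in a shallow search, and hand the best first skill to the low-level policy. A minimal Python sketch of the look-ahead alone, with hand-coded linear skill models standing in for the learned ones (the paper fits these from experience; everything here is illustrative):

        import numpy as np
        from itertools import product

        # Coarse skill dynamics: predicted state after a *complete* skill
        # execution. Stand-ins for the models learned from data.
        skill_models = [
            lambda s: s + np.array([1.0, 0.0]),   # skill 0: move +x
            lambda s: s + np.array([0.0, 1.0]),   # skill 1: move +y
            lambda s: s * 0.5,                    # skill 2: retract toward origin
        ]

        def lookahead(state, goal, depth=3):
            """Depth-limited unroll over all skill sequences; return the
            first skill of the sequence ending closest to the goal."""
            best_seq, best_dist = None, np.inf
            for seq in product(range(len(skill_models)), repeat=depth):
                s = state
                for k in seq:
                    s = skill_models[k](s)        # unroll coarse dynamics forward
                d = np.linalg.norm(s - goal)
                if d < best_dist:
                    best_seq, best_dist = seq, d
            return best_seq[0]

        state, goal = np.zeros(2), np.array([2.0, 3.0])
        first_skill = lookahead(state, goal)  # exploration then executes this skill's
        print(first_skill)                    # low-level multi-goal policy

    Because only exploration is guided by the unrolled skills, the final policy still acts over primitive actions and avoids the chaining suboptimality described above.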

    An early-stage decision-support framework for the implementation of intelligent automation

    The constant pressure on manufacturing companies to improve productivity, reduce lead times and progress in quality requires new technological developments and their adoption. The rapid development of smart technology and robotics and autonomous systems (RAS) technology has a profound impact on manufacturing automation and might determine the winners and losers of the next generation's manufacturing competition. Simultaneously, recent smart technology developments in these areas enable an automation response to new production paradigms such as mass customisation and product-lifecycle considerations in the context of Industry 4.0. New paradigms like mass customisation have increased both the complexity of the tasks and the risk of smart technology integration. From a manufacturing automation perspective, intelligent automation has been identified as a possible response to these arising demands. The presented research aims to support the industrial uptake of intelligent automation in manufacturing businesses by quantifying risks at the early design stage and during business case development. An early-stage decision-support framework for the implementation of intelligent automation in manufacturing businesses is presented in this thesis. The framework is informed by an extensive literature review, updated and verified with surveys and workshops to add to the knowledge base, given the rapid development of the associated technologies. A paradigm shift from a cost to a risk-modelling perspective is proposed to provide a more flexible and generic approach applicable throughout the current technology landscape. The proposed probabilistic decision-support framework consists of three parts:
    • A clustering algorithm to identify the manufacturing functions in manual processes from task analysis, to mitigate early-stage design uncertainties.
    • A Bayesian Belief Network (BBN) informed by expert elicitation via the DELPHI method, where the identified functions become the unit of analysis.
    • A Markov-Chain Monte-Carlo method modelling the effects of uncertainties on the critical success factors, to address issues of factor interdependencies after expert elicitation.
    Based on the overall decision framework, a toolbox was developed in Microsoft Excel. Five different case studies are used to test and validate the framework. Evaluation of the toolbox results against industrial feedback suggests a positive validation for commercial use. The main contributions to knowledge in the presented thesis arise from the following four points:
    • An early-stage decision-support framework for business case evaluation of intelligent automation.
    • Translating manual tasks to automation functions via a novel clustering approach.
    • Application of a Markov-Chain Monte-Carlo method to simulate correlation between decision criteria.
    • Causal relationships among critical success factors established from business and technical perspectives.
    The implications for practice are promising: industrial feedback on the created tool was positive, and a practical realisation of the decision-support tool appears desirable from an industrial point of view. With respect to further work, the decision-support tool may have established the groundwork for analysing a human task automatically for automation purposes. The established clustering mechanisms and related attributes could be connected to sensor data to analyse a manufacturing task autonomously, without the subjective input of task-analysis experts. To enable such an autonomous process, however, the psychophysiological understanding must be increased in the future.
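    The third part of the framework, simulating how correlated uncertainties propagate to the critical success factors, can be sketched with a plain Monte-Carlo draw of correlated factors in Python. The correlation matrix, baselines and score weights below are purely illustrative; in the thesis these quantities come from the BBN and the DELPHI elicitation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical correlation among three critical success factors
        # (e.g. integration cost, cycle-time gain, quality impact)
        corr = np.array([[1.0, 0.6, 0.2],
                         [0.6, 1.0, 0.4],
                         [0.2, 0.4, 1.0]])
        L = np.linalg.cholesky(corr)               # induces the correlation structure

        n_draws = 100_000
        z = rng.standard_normal((n_draws, 3)) @ L.T  # correlated standard normals
        means = np.array([0.5, 0.3, 0.7])            # illustrative factor baselines
        sds = np.array([0.15, 0.10, 0.20])
        factors = means + sds * z

        # Simple aggregate business-case score; success if it clears a threshold
        score = factors @ np.array([0.4, 0.3, 0.3])
        print("P(success) ~", (score > 0.5).mean())

    Sampling the factors jointly rather than independently is the point of this step: ignoring the correlations would misstate the tail risk of an intelligent-automation business case.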

    A survey on computational intelligence approaches for predictive modeling in prostate cancer

    Predictive modeling in medicine involves the development of computational models which are capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models that have been especially developed for dealing with the uncertainty and imprecision typically found in clinical and biological datasets. This paper provides a survey of recent work on computational intelligence approaches that have been applied to prostate cancer predictive modeling, and considers the challenges which need to be addressed. In particular, the paper considers a broad definition of computational intelligence which includes evolutionary algorithms (also known as metaheuristic or nature-inspired optimisation algorithms), artificial neural networks, deep learning, fuzzy-based approaches, and hybrids of these, as well as Bayesian-based approaches and Markov models. Metaheuristic optimisation approaches, such as Ant Colony Optimisation, Particle Swarm Optimisation and Artificial Immune Networks, have been utilised for optimising the performance of prostate cancer predictive models, and the suitability of these approaches is discussed.
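    As an illustration of how a metaheuristic such as Particle Swarm Optimisation tunes a predictive model, the following Python sketch minimises a stand-in validation-error surface over two hyperparameters. The smooth toy objective is ours, not from the survey; in practice it would be replaced by a model's cross-validated error.

        import numpy as np

        rng = np.random.default_rng(42)

        def val_error(theta):
            # Stand-in for cross-validated model error over two hyperparameters
            return (theta[..., 0] - 1.5) ** 2 + (theta[..., 1] + 0.5) ** 2

        n_particles, dims, iters = 30, 2, 100
        w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive and social weights

        pos = rng.uniform(-5, 5, (n_particles, dims))
        vel = np.zeros_like(pos)
        pbest, pbest_err = pos.copy(), val_error(pos)
        gbest = pbest[pbest_err.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dims))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            err = val_error(pos)
            improved = err < pbest_err
            pbest[improved], pbest_err[improved] = pos[improved], err[improved]
            gbest = pbest[pbest_err.argmin()].copy()

        print("best hyperparameters:", gbest)  # converges towards (1.5, -0.5)

    The other metaheuristics named in the survey follow the same outer loop; they differ mainly in how candidate solutions are updated between iterations.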

    Self Adaptive Reinforcement Learning for High-Dimensional Stochastic Systems with Application to Robotic Control

    A long-standing goal in the field of artificial intelligence (AI) is to develop agents that can perceive richer problem spaces and effortlessly plan their activity in minimal duration. Several strides have been made towards this goal over the last few years due to simultaneous advances in compute power, optimized algorithms, and, most importantly, the evident success of AI-based machines in nearly every discipline. The progress has been especially rapid in the area of reinforcement learning (RL), where computers can now plan their activities ahead and outperform their human rivals in complex problem domains like chess or Go. However, despite encouraging progress, most advances in RL-based planning still take place in deterministic contexts (e.g. constant grid size, known action sets, etc.) which do not adapt well to stochastic variations in the problem domain. In this dissertation we develop techniques that enable self-adaptation of an agent's behavioral policy when exposed to variations in the problem domain. In particular, we first introduce an initial model that loosely realizes the problem domain's characteristics. The domain characteristics are embedded into a common multi-modal embedding space set. The embedding space set then allows us to identify initial beliefs and establish prior distributions without being constrained to only a finite collection of the agent's state-action-reward experiences to choose from. We describe a learning technique that adapts to variations in the problem domain by retaining only salient features of preceding domains, and inferring the posterior for a newly introduced variation as a direct perturbation of the aggregated priors. Besides having theoretical guarantees, we demonstrate an end-to-end solution by establishing an FPGA-based recurrent neural network that can change its synaptic architecture temporally, thus eliminating the need to maintain dual networks. We argue that our hardware-based neural implementation has practical benefits, because it uses only a sparse network architecture and multiplexes it at the circuit level to exhibit recurrence, which can reduce inference latency at the circuit level while maintaining equivalence to a dense neural architecture.
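    The adaptation step described above, inferring the posterior for a new domain variation as a perturbation of aggregated priors, can be illustrated with a conjugate normal update in which priors retained from earlier domains are precision-weighted into one pool that the new domain's data then shifts. This is our simplification for illustration; the dissertation's models are far richer.

        import numpy as np

        # Priors over a domain parameter retained from earlier variations
        # (means and precisions here are illustrative)
        prior_means = np.array([0.8, 1.1, 0.9])
        prior_precisions = np.array([4.0, 2.0, 3.0])

        # Aggregate retained priors into one Gaussian (precision-weighted pool)
        agg_prec = prior_precisions.sum()
        agg_mean = (prior_precisions * prior_means).sum() / agg_prec

        # Observations from the newly introduced domain variation
        rng = np.random.default_rng(7)
        obs = rng.normal(1.4, 0.3, size=20)
        obs_prec = len(obs) / 0.3 ** 2

        # Conjugate update: the posterior is the pooled prior perturbed
        # by the new evidence, weighted by relative precision
        post_prec = agg_prec + obs_prec
        post_mean = (agg_prec * agg_mean + obs_prec * obs.mean()) / post_prec
        print(f"prior {agg_mean:.2f} -> posterior {post_mean:.2f}")

    The relative precisions control how strongly the new variation perturbs the retained beliefs, which is the behaviour the abstract attributes to its adaptation mechanism.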