
    Fault Recovery in Swarm Robotics Systems using Learning Algorithms

    When faults occur in swarm robotic systems they can have a detrimental effect on collective behaviours, to the point that failed individuals may jeopardise the swarm's ability to complete its task. Although fault tolerance is a desirable property of swarm robotic systems, fault recovery mechanisms have not yet been thoroughly explored. Individual robots may suffer a variety of faults, which will affect collective behaviours in different ways, so a recovery process is required that can cope with many different failure scenarios. In this thesis, we propose a novel approach for fault recovery in robot swarms that uses Reinforcement Learning and Self-Organising Maps to select the most appropriate recovery strategy for any given scenario. The learning process is evaluated in both centralised and distributed settings. Additionally, we experimentally evaluate the performance of this approach in comparison to random selection of fault recovery strategies, using simulated collective phototaxis, aggregation and foraging tasks as case studies. Our results show that this machine learning approach outperforms random selection, and allows swarm robotic systems to recover from faults that would otherwise prevent the swarm from completing its mission. This work builds upon existing research in fault detection and diagnosis in robot swarms, with the aim of creating a fully fault-tolerant swarm capable of long-term autonomy.
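    A minimal sketch of the kind of learning-based strategy selection described above, assuming a hypothetical discrete set of fault scenarios and recovery strategies and a plain tabular, bandit-style update (the thesis additionally uses Self-Organising Maps to map sensor data to scenarios, which is not reproduced here):

```python
import random

# Hypothetical fault scenarios and recovery strategies (illustrative only).
SCENARIOS = ["motor_failure", "sensor_failure", "power_loss"]
STRATEGIES = ["isolate_robot", "redistribute_task", "request_replacement"]

# Tabular values: expected post-recovery task performance per (scenario, strategy).
Q = {(s, a): 0.0 for s in SCENARIOS for a in STRATEGIES}

def select_strategy(scenario, epsilon=0.1):
    """Epsilon-greedy selection of a recovery strategy for a diagnosed fault."""
    if random.random() < epsilon:
        return random.choice(STRATEGIES)
    return max(STRATEGIES, key=lambda a: Q[(scenario, a)])

def update(scenario, strategy, reward, alpha=0.2):
    """One-step update from the swarm's measured task performance after recovery."""
    Q[(scenario, strategy)] += alpha * (reward - Q[(scenario, strategy)])

# Usage: after diagnosing a fault, pick a strategy, apply it in simulation,
# then feed back the observed task performance as the reward.
strategy = select_strategy("motor_failure")
update("motor_failure", strategy, reward=0.8)
```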

    A comparative analysis of algorithms for satellite operations scheduling

    Scheduling is employed in everyday life, ranging from meetings to manufacturing and operations among other activities. One instance of scheduling in a complex real-life setting is space mission operations scheduling, i.e. instructing a satellite to perform fitting tasks during predefined time periods with a varied frequency to achieve its mission goals. Mission operations scheduling is pivotal to the success of any space mission, choreographing every task carefully, accounting for technological and environmental limitations and constraints along with mission goals. It remains standard practice to this day to generate operations schedules manually, i.e. to collect requirements from individual stakeholders, collate them into a timeline, compare against feasibility and available satellite resources, and find potential conflicts. Conflict resolution is done by hand, checked by a simulator and uplinked to the satellite weekly. This process is time-consuming, bears risks and can be considered sub-optimal. A pertinent question arises: can we automate the process of satellite mission operations scheduling? And if we can, what method should be used to generate the schedules? In an attempt to address this question, a comparison of algorithms was carried out to explore their suitability for this particular application. The problem of mission operations scheduling was initially studied through the literature and numerous interviews with experts. A framework was developed to approximate a generic Low Earth Orbit satellite, its environment and its mission requirements. Optimisation algorithms were chosen from different categories, such as single-point stochastic without memory (Simulated Annealing, Random Search) and multi-point stochastic with memory (Genetic Algorithm, Ant Colony System, Differential Evolution), and were run both with and without Local Search. The algorithmic set was initially tuned using a single 89-minute Low Earth Orbit of a scientific mission to Mars. It was then applied to scheduling operations during one high-altitude Low Earth Orbit (2.4 hours) of an experimental mission, and subsequently to a realistic test case inspired by the European Space Agency PROBA-2 mission, comprising a 1-day schedule and then a 7-day schedule, equal to a Short Term Plan as defined by the European Space Agency. The schedule fitness, corresponding to the Hamming distance between the mission requirements and the generated schedule, is presented along with the execution time of each run. Algorithmic performance is discussed and put at the disposal of mission operations experts for consideration.
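    The fitness measure described above is concrete enough to sketch. Assuming each schedule is encoded as a sequence of task identifiers per time slot (an illustrative encoding, not the thesis's implementation), the Hamming distance simply counts the slots where the generated schedule deviates from the mission requirements:

```python
def hamming_fitness(required, generated):
    """Number of time slots where the generated schedule deviates from the
    mission requirements; lower is better (0 means all requirements met)."""
    if len(required) != len(generated):
        raise ValueError("schedules must cover the same number of time slots")
    return sum(r != g for r, g in zip(required, generated))

# Example: one orbit discretised into 10 slots, tasks encoded as strings.
required  = ["imaging", "idle", "downlink", "idle", "imaging",
             "idle", "idle", "downlink", "idle", "imaging"]
generated = ["imaging", "idle", "idle",     "idle", "imaging",
             "idle", "idle", "downlink", "idle", "idle"]
print(hamming_fitness(required, generated))  # -> 2
```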

    Hydrolink 2021/2. Artificial Intelligence

    Topic: Artificial Intelligence

    Optimisation Strategies for Power Management of Autonomous Systems


    Real time tracking using nature-inspired algorithms

    This thesis investigates the core difficulties in the tracking field of computer vision. The aim is to develop a suitable tuning-free optimisation strategy so that real-time tracking can be achieved. Population-based and multi-solution approaches were applied first to analyse convergence behaviours on evolutionary test cases, in order to identify core misconceptions in the way the search characteristics of particles are defined in the literature. A general perception in the scientific community is that particle-based methods are not suitable for real-time applications. This thesis improves the convergence properties of particles through a novel scale-free correlation approach. By altering the fundamental definition of a particle and by avoiding nostalgic operations, tracking was expedited to a rate of 250 FPS. There is a reasonable amount of similarity between tracking landscapes and those generated by three-dimensional evolutionary test cases. Several experimental studies are conducted that compare the performance of the novel optimisation with that observed for swarming methods. It is concluded that the modified particle behaviour outclassed the traditional approaches by large margins in almost every test scenario.
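    As a rough illustration of the particle-based tracking setting the thesis analyses, a swarm of candidate positions can be scored against a template and pulled towards the best match on each frame. The sketch below is a generic, simplified swarm update of the kind used as a baseline, not the thesis's scale-free correlation method; all names and parameters are hypothetical:

```python
import numpy as np

def track_frame(frame, template, particles, velocities, w=0.5, c=1.5):
    """One simplified swarm update: candidate top-left positions are scored
    against a grayscale float template and pulled towards the best match."""
    th, tw = template.shape

    def cost(pos):
        y, x = int(round(pos[0])), int(round(pos[1]))
        if y < 0 or x < 0:
            return np.inf                       # candidate fell outside the frame
        patch = frame[y:y + th, x:x + tw]
        if patch.shape != template.shape:
            return np.inf
        return float(np.mean((patch - template) ** 2))  # lower is better

    costs = np.array([cost(p) for p in particles])
    best = particles[int(np.argmin(costs))].copy()

    # Stochastic pull of every particle towards the current best candidate
    # (no personal-best memory, to keep the sketch short).
    velocities = w * velocities + c * np.random.rand(*particles.shape) * (best - particles)
    particles = particles + velocities
    return particles, velocities, best
```

Called once per frame, with the particle cloud re-seeded around the previous estimate, a loop like this is the kind of search whose convergence behaviour the thesis studies and accelerates.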

    Machine learning in drug supply chain management during disease outbreaks: a systematic review

    The drug supply chain is inherently complex. The challenge is not only the number of stakeholders and the supply chain from producers to users but also production and demand gaps. Downstream, drug demand is related to the type of disease outbreak. This study identifies the correlation between drug supply chain management and the use of predictive parameters in research on the spread of disease, especially with machine learning methods, over the last five years. Using the Publish or Perish 8 application, 71 articles were found that meet the inclusion criteria and keyword search requirements according to Kitchenham's systematic review methodology. The findings can be grouped into three broad groups of disease outbreaks, each of which uses machine learning algorithms to predict the spread of the outbreak. The use of parameters for prediction with machine learning correlates with drug supply management in the coronavirus disease case. The area of drug supply risk management has not been heavily involved in the prediction of disease outbreaks.

    Comprehensive Review on Detection and Classification of Power Quality Disturbances in Utility Grid With Renewable Energy Penetration

    The global concern with power quality is increasing due to the penetration of renewable energy (RE) sources to meet energy demands and de-carbonization targets. Power quality (PQ) disturbances are found to be more predominant with RE penetration due to variable outputs and interfacing converters. There is a need to recognize and mitigate PQ disturbances to supply clean power to the consumer. This article presents a critical review of techniques used for the detection and classification of PQ disturbances in the utility grid with renewable energy penetration. The broad perspective of this review paper is to present the various concepts utilized for extracting features to detect and classify PQ disturbances, even in noisy environments. More than 220 research publications have been critically reviewed, classified and listed for quick reference by engineers, scientists and academics working in the power quality area.
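    As a small illustration of the feature-extraction step that such detection and classification techniques rely on (a simplified example, not taken from any specific reviewed method), spectral features of a voltage window can be computed with an FFT and passed to a classifier:

```python
import numpy as np

def pq_features(v, fs=10_000, f0=50.0):
    """Basic features of one voltage window for PQ disturbance classification:
    RMS magnitude, fundamental amplitude, and total harmonic distortion (THD)."""
    n = len(v)
    spectrum = np.abs(np.fft.rfft(v)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    fundamental = spectrum[np.argmin(np.abs(freqs - f0))]
    harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 11)]
    thd = np.sqrt(np.sum(np.square(harmonics))) / fundamental
    return {"rms": float(np.sqrt(np.mean(v ** 2))),
            "fundamental": float(fundamental),
            "thd": float(thd)}

# Example: a 50 Hz signal with a 10% fifth harmonic (a typical PQ disturbance).
t = np.arange(0, 0.2, 1.0 / 10_000)
v = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 250 * t)
print(pq_features(v))
```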

    12th EASN International Conference on "Innovation in Aviation & Space for opening New Horizons"

    Epoxy resins show a combination of thermal stability, good mechanical performance, and durability, which makes these materials suitable for many applications in the aerospace industry. Different types of curing agents can be utilized for curing epoxy systems. The use of aliphatic amines as curing agents is preferable over toxic aromatic ones, though their incorporation increases the flammability of the resin. Recently, we have developed different hybrid strategies in which the sol-gel technique is exploited in combination with two DOPO-based flame retardants and other synergists, or with humic acid and ammonium polyphosphate, to achieve a non-dripping V-0 classification in UL 94 vertical flame spread tests with low phosphorus loadings (e.g., 1-2 wt%). These strategies improved the flame retardancy of the epoxy matrix without any detrimental impact on the mechanical and thermal properties of the composites. Finally, the formation of a hybrid silica-epoxy network accounted for the establishment of tailored interphases, due to a better dispersion of the more polar additives in the hydrophobic resin.

    Deep Learning in Demand Side Management: A Comprehensive Framework for Smart Homes

    The advent of deep learning has elevated machine intelligence to an unprecedented level. Fundamental concepts, algorithms, and implementations of differentiable programming, including gradient-based methods such as gradient descent and backpropagation, have powered many deep learning algorithms to accomplish millions of tasks in computer vision, signal processing, natural language comprehension, and recommender systems. Demand-side management (DSM) serves as a crucial tactic on the customer side of the meter, regulating electricity consumption without hampering the comfort of homeowners. As more residents participate in energy management programs, DSM will further contribute to grid stability protection, economical operation, and carbon emission reduction. However, DSM cannot be implemented effectively without the penetration of smart home technologies that integrate intelligent algorithms into hardware. The analysis and comprehension of resident behaviors by deep learning algorithms, based on sensor-collected human activity data, is one typical example of such technology integration. This thesis applies deep learning to DSM and provides a comprehensive framework for smart home management. Firstly, a detailed literature review is conducted on DSM, smart homes, and deep learning. Secondly, the four papers published during the candidate's Ph.D. career are utilized in lieu of thesis chapters: "A Demand-Side Load Event Detection Algorithm Based on Wide-Deep Neural Networks and Randomized Sparse Backpropagation," "A Novel High-Performance Deep Learning Framework for Load Recognition: Deep-Shallow Model Based on Fast Backpropagation," "An Object Surveillance Algorithm Based on Batch-Normalized CNN and Data Augmentation in Smart Home," and "Integrated optimization algorithm: A metaheuristic approach for complicated optimization." Thirdly, a discussion section synthesizes the ideas and key results of the four published papers. Conclusions and directions for future research are provided in the final section of this thesis.
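    A minimal sketch of the gradient-descent and backpropagation core referred to above, on a toy load-recognition-style classification task; the data, network size, and hyperparameters are illustrative and are not the wide-deep or deep-shallow models from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 appliance-usage features -> on/off label (illustrative only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid activations, trained by plain gradient descent.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backpropagation of the binary cross-entropy loss.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient-descent parameter updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("training accuracy:", float(((p > 0.5) == y).mean()))
```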

    In pursuit of autonomous distributed satellite systems

    Satellite imagery has become an essential resource for environmental, humanitarian, and industrial endeavours. As a means to satisfy the requirements of new applications and user needs, novel Earth Observation (EO) systems are exploring the suitability of Distributed Satellite Systems (DSS) in which multiple observation assets concurrently sense the Earth. Given the temporal and spatial resolution requirements of EO products, DSS are often envisioned as large-scale systems with multiple sensing capabilities operating in a networked manner. Enabled by the consolidation of small satellite platforms and fostered by the emerging capabilities of distributed systems, these new architectures pose multiple design and operational challenges. Two of them are the main pillars of this research, namely, the conception of decision-support tools to assist the architecting process of a DSS, and the design of autonomous operational frameworks based on decentralised, on-board decision-making. The first part of this dissertation addresses the architecting of heterogeneous, networked DSS architectures that hybridise small satellite platforms with traditional EO assets. We present a generic design-oriented optimisation framework based on tradespace exploration methodologies. The goals of this framework are twofold: to select the optimal constellation design; and to facilitate the identification of design trends, unfeasible regions, and tensions among architectural attributes. Oftentimes in EO DSS, system requirements and stakeholder preferences are not only articulated through functional attributes (e.g. resolution, revisit time) or monetary constraints, but also through qualitative traits such as flexibility, evolvability, robustness, or resiliency, amongst others. In line with that, the architecting framework defines a single figure of merit that aggregates quantitative attributes and qualitative ones, the so-called "ilities" of a system. With that, designers can steer the design of DSS both in terms of performance and cost, and in terms of their high-level characteristics. The application of this optimisation framework has been illustrated in two timely use-cases identified in the context of the EU-funded ONION project: a system that measures ocean and ice parameters in Polar regions to facilitate weather forecast and off-shore operations; and a system that provides agricultural variables crucial for global management of water stress, crop state, and droughts. The analysis of architectural features facilitated a comprehensive understanding of the functional and operational characteristics of DSS. With that, this thesis continues to delve into the design of DSS by focusing on one particular functional trait: autonomy. The minimisation of human-operator intervention has been traditionally sought in other space systems and can be especially critical for large-scale, structurally dynamic, heterogeneous DSS. In DSS, autonomy is expected to cope with the likely inability to operate very large-scale systems in a centralised manner, to improve the science return, and to leverage many of their emerging capabilities (e.g. tolerance to failures, adaptability to changing structures and user needs, responsiveness). We propose an autonomous operational framework that provides decentralised decision-making capabilities to DSS by means of local reasoning, individual resource allocation, and satellite-to-satellite interactions.
In contrast to previous works, the autonomous decision-making framework is evaluated in this dissertation for generic constellation designs the goal of which is to minimise global revisit times. As part of the characterisation of our solution, we stressed the implications that autonomous operations can have upon satellite platforms with stringent resource constraints (e.g. power, memory, communications capabilities) and evaluated the behaviour of the solution for a large-scale DSS composed of 117 CubeSat-like satellite units.
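    A minimal sketch of how a single figure of merit might aggregate quantitative performance attributes with qualitative "ilities" as described above; the attribute names, weights, and min-max normalisation are hypothetical and do not reproduce the ONION framework's actual aggregation:

```python
def figure_of_merit(attributes, weights, bounds):
    """Weighted aggregation of normalised architectural attributes into one score.

    attributes: raw attribute values, e.g. {"revisit_time_h": 3.0, "robustness": 0.7}
    weights:    stakeholder-defined importance per attribute (summing to 1)
    bounds:     (worst, best) value per attribute, used for min-max normalisation
    """
    score = 0.0
    for name, value in attributes.items():
        worst, best = bounds[name]
        normalised = (value - worst) / (best - worst)   # 0 = worst, 1 = best
        score += weights[name] * max(0.0, min(1.0, normalised))
    return score

# Example: revisit time (quantitative, lower is better, so worst=24 h, best=1 h)
# combined with a qualitative robustness rating in [0, 1].
print(figure_of_merit(
    {"revisit_time_h": 3.0, "robustness": 0.7},
    weights={"revisit_time_h": 0.6, "robustness": 0.4},
    bounds={"revisit_time_h": (24.0, 1.0), "robustness": (0.0, 1.0)},
))
```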