
    Mobile Device Background Sensors: Authentication vs Privacy

    The increasing number of mobile devices in recent years has caused the collection of a large amount of personal information that needs to be protected. To this aim, behavioural biometrics has become very popular. But what is the discriminative power of mobile behavioural biometrics in real scenarios? With the success of Deep Learning (DL), architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM), have shown improvements compared to traditional machine learning methods. However, these DL architectures still have limitations that need to be addressed. In response, new DL architectures like Transformers have emerged. The question is, can these new Transformers outperform previous biometric approaches? To answer these questions, this thesis focuses on behavioural biometric authentication with data acquired from mobile background sensors (i.e., accelerometers and gyroscopes). In addition, to the best of our knowledge, this is the first thesis that explores and proposes novel behavioural biometric systems based on Transformers, achieving state-of-the-art results in gait, swipe, and keystroke biometrics. The adoption of biometrics requires a balance between security and privacy. Biometric modalities provide a unique and inherently personal approach to authentication. Nevertheless, biometrics also give rise to concerns regarding the invasion of personal privacy. According to the General Data Protection Regulation (GDPR) introduced by the European Union, personal data such as biometric data are sensitive and must be used and protected properly. This thesis analyses the impact of sensitive data on the performance of biometric systems and proposes a novel unsupervised privacy-preserving approach.
The research conducted in this thesis makes significant contributions, including: i) a comprehensive review of the privacy vulnerabilities of mobile device sensors, covering metrics for quantifying privacy in relation to sensitive data, along with protection methods for safeguarding sensitive information; ii) an analysis of authentication systems for behavioural biometrics on mobile devices (i.e., gait, swipe, and keystroke), being the first thesis that explores the potential of Transformers for behavioural biometrics, introducing novel architectures that outperform the state of the art; and iii) a novel privacy-preserving approach for mobile biometric gait verification using unsupervised learning techniques, ensuring the protection of sensitive data during the verification process.
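The Transformer-based verification pipeline described above can be caricatured in a few lines. The following is a generic sketch, not the thesis's architecture: a single scaled dot-product self-attention layer over 6-axis accelerometer/gyroscope frames, mean-pooled into a fixed-size embedding that is compared by cosine similarity. All weights, the model dimension, and the 0.9 decision threshold are illustrative assumptions (a real system would add positional encodings, feed-forward blocks, and trained weights).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) projected IMU frames
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # scaled dot-product attention
    return softmax(scores, axis=-1) @ V

def embed_sequence(imu, Win, Wq, Wk, Wv):
    # imu: (seq_len, 6) raw accelerometer + gyroscope frames
    X = imu @ Win                        # linear projection to model dimension
    H = self_attention(X, Wq, Wk, Wv)    # one attention layer (no FFN, no pos. enc.)
    return H.mean(axis=0)                # mean-pool into a fixed-size template

def verify(emb_a, emb_b, threshold=0.9):
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return cos >= threshold

rng = np.random.default_rng(0)
d = 16                                   # illustrative model dimension
Win, Wq, Wk, Wv = (rng.normal(size=s) for s in [(6, d), (d, d), (d, d), (d, d)])
seq = rng.normal(size=(100, 6))          # synthetic stand-in for a sensor capture
emb1 = embed_sequence(seq, Win, Wq, Wk, Wv)
emb2 = embed_sequence(seq + 0.01 * rng.normal(size=seq.shape), Win, Wq, Wk, Wv)
print(verify(emb1, emb2))                # compare original vs. lightly perturbed capture
```

In a deployed verifier the threshold would be calibrated on enrolment data to trade off false accepts against false rejects.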

    Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games

    The term Procedural Content Generation (PCG) refers to the (semi-)automatic generation of game content by algorithmic means, and its methods are becoming increasingly popular in game-oriented research and industry. A special class of these methods, which is commonly known as search-based PCG, treats the given task as an optimisation problem. Such problems are predominantly tackled by evolutionary algorithms. We will demonstrate in this paper that obtaining more information about the defined optimisation problem can substantially improve our understanding of how to approach the generation of content. To do so, we present and discuss three efficient analysis tools, namely diagonal walks, the estimation of high-level properties, as well as problem similarity measures. We discuss the purpose of each of the considered methods in the context of PCG and provide guidelines for the interpretation of the results received. This way we aim to provide methods for the comparison of PCG approaches and eventually, increase the quality and practicality of generated content in industry. Comment: 30 pages, 8 figures, accepted for publication in Applied Soft Computing.

    Optimization for Energy Management in the Community Microgrids

    This thesis focuses on improving the energy management strategies for Community Microgrids (CMGs), which are expected to play a crucial role in the future smart grid. CMGs bring many benefits, including increased use of renewable energy, improved reliability, resiliency, and energy efficiency. An Energy Management System (EMS) is a key tool that helps in monitoring, controlling, and optimizing the operations of the CMG in a cost-effective manner. The EMS can include various functionalities like day-ahead generation scheduling, real-time scheduling, uncertainty management, and demand response programs. Generation scheduling in a microgrid is a challenging optimization problem, especially due to the intermittent nature of renewable energy. The power balance constraint, which is the balance between energy demand and generation, is difficult to satisfy due to prediction errors in energy demand and generation. Real-time scheduling, which is based on a shorter prediction horizon, reduces these errors, but the impact of uncertainties cannot be completely eliminated. Regarding demand response programs, it is challenging to design an effective model that motivates customers to voluntarily participate while benefiting the system operator. Mathematical optimization techniques have been widely used to solve power system problems, but their application is limited by the need for specific mathematical properties. Metaheuristic techniques, particularly Evolutionary Algorithms (EAs), have gained popularity for their ability to solve complex and non-linear problems. However, the traditional form of EAs may require significant computational effort for complex energy management problems in the CMG. This thesis aims to enhance the existing methods of EMS in CMGs. Improved techniques are developed for day-ahead generation scheduling, multi-stage real-time scheduling, and demand response implementation. 
For generation scheduling, the performance of conventional EAs is improved through an efficient heuristic. A new multi-stage scheduling framework is proposed to minimize the impact of uncertainties in real-time operations. Regarding demand response, a memetic algorithm is proposed to solve an incentive-based scheme from the perspective of an aggregator, and a price-based demand response driven by dynamic price optimization is proposed to enhance the electric vehicle hosting capacity. The proposed methods are validated through extensive numerical experiments and comparison with state-of-the-art approaches. The results confirm the effectiveness of the proposed methods in improving energy management in CMGs.
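The core difficulty the thesis describes, satisfying the power balance constraint while minimising generation cost with an EA, can be illustrated with a toy single-interval economic dispatch. This is a generic (mu + lambda) evolution strategy with a penalty term, not the thesis's improved heuristic; the demand, cost coefficients and unit capacities are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

demand = 90.0                            # MW to balance in one interval (illustrative)
cost_coef = np.array([2.0, 3.5, 5.0])    # $/MW for three dispatchable units (invented)
p_max = np.array([50.0, 40.0, 30.0])     # unit capacities in MW (invented)

def fitness(p):
    # generation cost plus a heavy penalty for violating the power balance constraint
    cost = float(cost_coef @ p)
    imbalance = abs(p.sum() - demand)
    return cost + 1e3 * imbalance

def evolve(pop_size=40, gens=200, sigma=2.0):
    pop = rng.uniform(0, p_max, size=(pop_size, 3))
    for _ in range(gens):
        children = np.clip(pop + rng.normal(0, sigma, pop.shape), 0, p_max)
        both = np.vstack([pop, children])
        both = both[np.argsort([fitness(p) for p in both])]
        pop = both[:pop_size]            # (mu + lambda) truncation selection
    return pop[0]

best = evolve()
print(best.round(1), round(fitness(best), 1))
```

The penalty weight makes any noticeable imbalance dominate the cost term, so surviving individuals cluster around dispatches that meet demand, loading the cheapest units first. Real day-ahead scheduling adds time coupling, ramp limits and renewable forecasts on top of this skeleton.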

    An improved dandelion optimizer algorithm for spam detection next-generation email filtering system

    Spam emails have become a pervasive issue in recent years, as internet users receive increasing amounts of unwanted or fake emails. To combat this issue, automatic spam detection methods have been proposed, which aim to classify emails into spam and non-spam categories. Machine learning techniques have been utilized for this task with considerable success. In this paper, we introduce a novel approach to spam email detection by presenting significant advancements to the Dandelion Optimizer (DO) algorithm. DO is a relatively new nature-inspired optimization algorithm inspired by the flight of dandelion seeds. While DO shows promise, it faces challenges, especially in high-dimensional problems such as feature selection for spam detection. Our primary contributions focus on enhancing the DO algorithm. Firstly, we introduce a new local search algorithm based on flipping (LSAF), designed to improve DO's ability to find the best solutions. Secondly, we propose a reduction equation that streamlines the population size during algorithm execution, reducing computational complexity. To showcase the effectiveness of our modified DO algorithm, which we refer to as Improved DO (IDO), we conduct a comprehensive evaluation using the Spambase dataset from the UCI repository. However, we emphasize that our primary objective is to advance the DO algorithm, with spam email detection serving as a case study application. Comparative analysis against several popular algorithms, including Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Generalized Normal Distribution Optimization (GNDO), Chimp Optimization Algorithm (ChOA), Grasshopper Optimization Algorithm (GOA), Ant Lion Optimizer (ALO), and Dragonfly Algorithm (DA), demonstrates the superior performance of our proposed IDO algorithm. It excels in accuracy, fitness, and the number of selected features, among other metrics. 
Our results clearly indicate that IDO overcomes the local optima problem commonly associated with the standard DO algorithm, owing to the incorporation of LSAF and the reduction equation methods. In summary, our paper underscores the significant advancement made in the form of the IDO algorithm, which represents a promising approach for solving high-dimensional optimization problems, with a keen focus on practical applications in real-world systems. While we employ spam email detection as a case study, our primary contribution lies in the improved DO algorithm, which is efficient, accurate, and outperforms several state-of-the-art algorithms in various metrics. This work opens avenues for enhancing optimization techniques and their applications in machine learning.
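The flip-based local search idea behind LSAF can be sketched independently of the full IDO pipeline: encode a feature subset as a bit mask, flip one bit at a time, and keep any flip that improves the wrapper fitness. The toy objective below (reward a hypothetical set of "useful" features, penalise subset size) is invented for illustration; the paper's actual fitness is classifier-based.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features = 12
useful = {0, 3, 7}          # hypothetical informative features for this toy objective

def fitness(mask):
    """Toy wrapper objective: reward selecting useful features, penalise subset size."""
    gain = sum(1.0 for i in useful if mask[i])
    return gain - 0.05 * mask.sum()        # higher is better

def flip_local_search(mask, max_passes=5):
    """LSAF-style search: flip one bit at a time, keep any improving flip."""
    mask = mask.copy()
    best = fitness(mask)
    for _ in range(max_passes):
        improved = False
        for i in range(len(mask)):
            mask[i] ^= 1                   # flip bit i
            f = fitness(mask)
            if f > best:
                best, improved = f, True   # keep the improving flip
            else:
                mask[i] ^= 1               # revert the flip
        if not improved:
            break                          # local optimum under single-bit flips
    return mask, best

start = rng.integers(0, 2, n_features)     # random initial feature subset
sol, score = flip_local_search(start)
print(sorted(np.flatnonzero(sol)))         # indices of the selected features
```

Because the toy objective is separable per bit, the search converges to exactly the useful set regardless of the start; on a real wrapper fitness, LSAF-style flips instead serve to escape the local optima the paper attributes to standard DO. The paper's second contribution, the population reduction equation, would shrink the candidate pool between generations and is not shown here.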

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. 
Because more applications of DSmT have emerged in the past years since the appearance of the fourth volume in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions are related to decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions as well.
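The PCR5 rule that recurs throughout the volume can be shown concretely for two sources on a tiny frame: combine the basic belief assignments conjunctively, then redistribute each partial conflict back only to the two focal elements that generated it, proportionally to their masses. The implementation below follows the standard two-source PCR5 formula; the example masses on a frame {A, B} are invented for illustration.

```python
from itertools import product

def conjunctive(m1, m2):
    """Conjunctive rule; focal elements are frozensets, conflict lands on the empty set."""
    out = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        z = x & y
        out[z] = out.get(z, 0.0) + mx * my
    return out

def pcr5(m1, m2):
    """PCR5: each partial conflict m1(x)*m2(y) with x ∩ y = ∅ is split back between
    x and y proportionally to the masses m1(x) and m2(y) that generated it."""
    out = conjunctive(m1, m2)
    out.pop(frozenset(), 0.0)                 # remove total conflict, then redistribute it
    for x, y in product(m1, m2):
        if not (x & y):                       # disjoint focal elements: partial conflict
            c = m1[x] * m2[y]
            if c > 0:
                out[x] = out.get(x, 0.0) + m1[x] * c / (m1[x] + m2[y])
                out[y] = out.get(y, 0.0) + m2[y] * c / (m1[x] + m2[y])
    return out

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.4}                         # illustrative belief assignments
m2 = {A: 0.2, B: 0.8}
fused = pcr5(m1, m2)
print({tuple(k): round(v, 3) for k, v in fused.items()})
```

Unlike Dempster's rule, which spreads conflict over all focal elements through global normalisation, PCR5 keeps the redistribution local to the conflicting pair, which is why it behaves better under high conflict.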

    An Intelligent Time and Performance Efficient Algorithm for Aircraft Design Optimization

    Aircraft design optimization requires mastering the complex interrelationships of multiple disciplines. Despite its dependence on a large number of independent variables, this complex design problem has a favourable nature: strong indirect links and, as a result, a small number of local minima. Recently developed intelligent methods based on self-learning algorithms encouraged the search for a new method dedicated to this area. Indeed, the hybrid (Cavus) algorithm developed in this thesis is applied to two main design cases in the aerospace area: aircraft design optimization and trajectory optimization. The new approach implemented is capable of reducing the number of trial points without much compromise. The trend analysis shows that, for complex design problems, the Cavus algorithm is more conservative, using a proportional number of trial points in finding the successful patterns.

    Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence

    Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale quickly approaches O(10^6) servers in a single warehouse. However, the tier-based opto-electronically packet switched network infrastructure that is standard across these systems gives way to several scalability bottlenecks including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks pose a promising alternative that could avoid these. However, optimality challenges are encountered at realistic commercial scales. Where exhaustive optimisation techniques are not applicable for problems at the scale of Cloud-scale computer networks, and expert-designed heuristics are performance-limited and typically biased in their design, artificial intelligence can discover more scalable and better performing optimisation strategies. This thesis demonstrates these benefits through experimental and theoretical work spanning all of component, system and commercial optimisation problems which stand in the way of practical Cloud-scale computer network systems. Firstly, optical components are optimised to gate in ≈500 ps and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, where 3× less networking resources are required to achieve the same resource efficiency compared to conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices in O(10^-3) s. 
This method is demonstrated on a market leading optical switching product based on piezoelectric actuation, where switching speed is improved >20% with no compromise to optical loss and the manufacturing yield of actuators is improved. This method was licensed to and integrated within the manufacturing pipeline of this company. As such, crucial public and private infrastructure utilising these products will benefit from this work.
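What the PID-tuning contribution optimises can be made concrete with a minimal closed-loop simulation: a discrete PID controller driving a first-order plant, scored by settling time. This sketch is generic, not the thesis's method; the learnt tuner itself is not shown, and the plant time constant, gain sets and ±2% settling criterion are illustrative assumptions.

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=1e-3, steps=2000, tau=0.05):
    """Discrete PID driving a first-order plant dy/dt = (u - y) / tau (Euler steps)."""
    y, integral, prev_err = 0.0, 0.0, setpoint
    trace = []
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        prev_err = err
        y += dt * (u - y) / tau                     # plant update
        trace.append(y)
    return trace

def settling_time(trace, setpoint=1.0, tol=0.02, dt=1e-3):
    """Time after which the response stays within ±2% of the setpoint."""
    for i in range(len(trace) - 1, -1, -1):
        if abs(trace[i] - setpoint) > tol * setpoint:
            return (i + 1) * dt
    return 0.0

slow = settling_time(simulate_pid(kp=0.5, ki=1.0, kd=0.0))   # weak illustrative gains
fast = settling_time(simulate_pid(kp=5.0, ki=20.0, kd=0.0))  # stronger illustrative gains
print(slow, fast)
```

A learnt tuner in this setting would map device characteristics (here, just tau) to gains that minimise such a settling-time metric, which is the role the thesis assigns to its deep reinforcement learning method for piezoelectric actuators.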

    Optimising water quality outcomes for complex water resource systems and water grids

    As the world progresses, water resources are likely to be subjected to much greater pressures than in the past. Even though the principal water problem revolves around inadequate and uncertain water supplies, water quality management plays an equally important role. Availability of good quality water is paramount to sustainability of human population as well as the environment. Achieving water quality and quantity objectives can be conflicting and becomes more complicated with challenges like climate change, growing populations and changed land uses. Managing adequate water quality in a reservoir is complicated by multiple inflows with different water quality levels, often resulting in poor stored water quality. Hence, it is fundamental to approach this issue in a more systematic, comprehensive, and coordinated fashion. Most previous studies related to water resources management focused on water quantity and considered water quality separately. However, this research study focused on considering water quantity and quality objectives simultaneously in a single model to explore and understand the relationship between them in a reservoir system. A case study area with water quantity and quality challenges was identified in Western Victoria, Australia: Taylors Lake in the Grampians system receives water from multiple sources of differing quality and quantity and exhibits the problems described above. A combined simulation and optimisation approach was adopted to carry out the analysis. A multi-objective optimisation approach was applied to achieve optimal water availability and quality in the storage. The multi-objective optimisation model included three objective functions: water volume and two water quality parameters, salinity and turbidity. Results showed the competing nature of the water quantity and quality objectives and established the trade-offs. 
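The trade-off structure described above, maximising stored volume while minimising salinity and turbidity, is exactly what a Pareto front captures. The sketch below is a generic non-dominated filter, not the study's optimisation model; the candidate "harvesting policies" and their objective values are invented (volume is negated so all three objectives are minimised).

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in at least one.
    Objective tuples are (-volume, salinity, turbidity), all to be minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (the trade-off set)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# hypothetical harvesting policies: (-stored volume in GL, salinity in EC, turbidity in NTU)
policies = [(-80, 900, 35), (-60, 500, 20), (-75, 700, 30), (-55, 950, 40)]
front = pareto_front(policies)
print(front)   # the last policy is dominated: less water AND worse quality
```

Presenting the front rather than a single "best" policy is what lets operators choose how much volume to sacrifice for cleaner stored water, which is the decision-support role the study describes.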
It further showed that it was possible to generate a range of optimal solutions to effectively manage those trade-offs. The trade-off analysis showed that selective harvesting of inflows is effective in improving water quality in storage. However, with strict water quality restrictions there is a considerable loss in water volume. The robustness of the optimisation approach used in this study was confirmed through sensitivity and uncertainty analysis. The research work also incorporated various spatio-temporal scenario analyses to systematically articulate long-term and short-term operational planning strategies. Operational decisions around possible harvesting regimes, while achieving optimal water quantity and quality and meeting all water demands, were established. The climate change analysis revealed that optimal management of water quantity and quality in storage became extremely challenging under future climate projections. The high reduction in storage volume in the future will lead to several challenges such as water supply shortfall and inability to undertake selective harvesting due to reduced water quality levels. In this context, selective harvesting of inflows based on water quality will no longer be an option to manage water quantity and quality optimally in storage. Some significant conclusions of this research work included the establishment of trade-offs between water quality and quantity objectives particular to this configuration of water supply system. The work demonstrated that selective harvesting of inflows will improve the stored water quality, and this finding, along with the approach used, is a significant contribution to decision makers working within the water sector. The simulation-optimisation approach is very effective in providing a range of optimal solutions, which can be used to make more informed decisions around achieving optimal water quality and quantity in storage. 
It was further demonstrated that there is a range of planning periods, both long-term (>10 years) and short-term (<1 year), all of which offer distinct advantages and provide useful insights, making this an additional key contribution of the work. Importantly, climate change was also considered, where it was found that diminishing water resources, particular to this geographic location, make it increasingly difficult to optimise both quality and quantity in storage, providing further useful insights from this work.