17 research outputs found

    Inductive biases and metaknowledge representations for search-based optimization

    Get PDF
    "What I do not understand, I can still create." - H. Sayama. The following work follows this bon mot closely. Guided by questions such as "How can evolutionary processes exhibit learning behavior and consolidate knowledge?", "What are cognitive models of problem-solving?", and "How can we harness these as computational techniques?", we clarify within this work the essentials required to implement them for metaheuristic search and optimization. We therefore examine existing models of computational problem-solvers and compare them with existing methodology in the literature. In particular, we find that the meta-learning model, which frames problem-solving in terms of domain-specific inductive biases and their arbitration through high-level abstractions, resolves outstanding issues with methodology proposed in the literature. Notably, it can also be related to ongoing research on algorithm selection and configuration frameworks. We then examine what it means to implement such a model: first, by identifying inductive biases in terms of algorithm components and modeling these with density estimation techniques; and second, by proposing methodology to process metadata generated by optimization algorithms automatically, using deep pattern recognition architectures for spatio-temporal feature extraction. Finally, we consider an exemplary shape optimization problem, which allows us to gain insight into what it means to apply our methodology in application scenarios. We end our work with a discussion of possible future directions and of the limitations of such frameworks for system deployment.

    Innovative Solutions for Navigation and Mission Management of Unmanned Aircraft Systems

    Get PDF
    The last decades have witnessed a significant increase in Unmanned Aircraft Systems (UAS) of all shapes and sizes. UAS are finding many new applications in supporting several human activities, offering solutions to many dirty, dull, and dangerous missions carried out by military and civilian users. However, limited access to the airspace is the principal barrier to realizing the full potential of UAS capabilities. The aim of this thesis is to support the safe integration of UAS operations, taking into account both the user's requirements and flight regulations. The main technical and operational issues, considered among the principal inhibitors to the integration and widespread acceptance of UAS, are identified, and two solutions for safe UAS operations are proposed: A. Improving navigation performance of UAS by exploiting low-cost sensors. To enhance the performance of the low-cost and lightweight integrated navigation system based on Global Navigation Satellite System (GNSS) and Micro Electro-Mechanical Systems (MEMS) inertial sensors, an efficient calibration method for MEMS inertial sensors is required. Two solutions are proposed: 1) The innovative Thermal Compensated Zero Velocity Update (TCZUPT) filter, which embeds the compensation of thermal effects on bias in the filter itself and uses Back-Propagation Neural Networks to build the calibration function. Experimental results show that the TCZUPT filter is faster than the traditional ZUPT filter in mapping significant bias variations and presents better performance over the overall testing period. Moreover, no calibration pre-processing stage is required to keep measurement drift under control, improving the accuracy, reliability, and maintainability of the processing software; 2) A redundant configuration of consumer-grade inertial sensors to obtain a self-calibration of typical inertial sensor biases. The result is a significant reduction of uncertainty in attitude determination. In conclusion, both methods improve dead-reckoning performance for handling intermittent GNSS coverage. B. Proposing novel solutions for mission management to support the Unmanned Traffic Management (UTM) system in monitoring and coordinating the operations of a large number of UAS. Two solutions are proposed: 1) A trajectory prediction tool for small UAS, based on Learning Vector Quantization (LVQ) Neural Networks. By exploiting flight data collected when the UAS executes a pre-assigned flight path, the tool is able to predict the time taken to fly generic trajectory elements. Moreover, being self-adaptive in constructing a mathematical model, LVQ Neural Networks allow creating different models for the different UAS types in several environmental conditions; 2) A software tool aimed at supporting standardized decision-making procedures to identify UAS/payload configurations suitable for any type of mission that can be authorized under standing flight regulations. The proposed methods improve the management and safe operation of large-scale UAS missions, speeding up the flight authorization process by the UTM system and supporting the increasing level of autonomy in UAS operations.
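The core idea behind zero-velocity bias calibration can be illustrated with a small sketch: while the platform is known to be stationary, an inertial sensor should read zero, so each residual reading estimates its bias, and a thermal model can be fitted across temperatures. The synthetic data, names, and linear model below are assumptions for illustration; the thesis's TCZUPT filter uses a neural network for the thermal term, not a linear fit.

```python
import random

def fit_thermal_bias(samples):
    """Least-squares fit of bias(T) = a + b*T from (temperature, reading)
    pairs collected while the platform is stationary, so each reading is
    pure bias plus noise."""
    n = len(samples)
    st = sum(t for t, _ in samples)
    sr = sum(r for _, r in samples)
    stt = sum(t * t for t, _ in samples)
    srt = sum(t * r for t, r in samples)
    b = (n * srt - st * sr) / (n * stt - st * st)
    a = (sr - b * st) / n
    return a, b

# Synthetic stationary gyro readings with a true bias of 0.02 + 0.001*T.
rng = random.Random(1)
data = [(t, 0.02 + 0.001 * t + rng.gauss(0.0, 1e-4)) for t in range(10, 40)]
a, b = fit_thermal_bias(data)

def compensate(reading, temp_c):
    """Subtract the modeled thermal bias from a raw sensor reading."""
    return reading - (a + b * temp_c)
```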

    Evaluation of patients with acute coronary syndromes using cardiac magnetic resonance imaging and bioelectrical and biochemical markers

    Get PDF
    INTRODUCTION: Cardiac disease is a major cause of morbidity and mortality worldwide, and therapeutic advances continue to be made. Improved accuracy of diagnosis and risk stratification is therefore important. Advanced imaging using contrast-enhanced magnetic resonance imaging (ceMRI) is under investigation for assessment of myocardial necrosis in both acute and chronic settings due to ischaemic and non-ischaemic aetiologies. Consecutive patients with an incident episode of chest pain necessitating hospital admission were recruited and underwent ceMRI. CeMRI was considered the gold standard for determining the presence of ischaemic myocardial necrosis and was used to evaluate current ECG guidelines in acute chest pain syndromes. ST segment elevation on the presenting ECG determines the acute reperfusion strategy but will not detect all infarcts, and additional consideration of ST depression, termed "STEMI equivalent", may reduce the burden of missed AMI. Infarct size (IS) was measured by manual planimetry of regions of delayed hyperenhancement (DE) and then correlated with routinely available biochemical and bioelectrical markers. The evolution of infarct size and characteristics was then followed using ceMRI at four time points out to 1 year. The role of inflammation in MI, using CRP as the marker, was also investigated. Finally, additional clinical information was provided by performing ceMRI in this group of patients, and the findings are presented.

    Machine Learning-based Predictive Maintenance for Optical Networks

    Get PDF
    Optical networks provide the backbone of modern telecommunications by connecting the world faster than ever before. However, such networks are susceptible to several failures (e.g., optical fiber cuts, malfunctioning optical devices), which might result in degradation in the network operation, massive data loss, and network disruption. It is challenging to accurately and quickly detect and localize such failures due to the complexity of such networks, the time required to identify the fault and pinpoint it using conventional approaches, and the lack of proactive efficient fault management mechanisms. Therefore, it is highly beneficial to perform fault management in optical communication systems in order to reduce the mean time to repair, to meet service level agreements more easily, and to enhance the network reliability. In this thesis, the aforementioned challenges and needs are tackled by investigating the use of machine learning (ML) techniques for implementing efficient proactive fault detection, diagnosis, and localization schemes for optical communication systems. In particular, the adoption of ML methods for solving the following problems is explored:
    - Degradation prediction of semiconductor lasers
    - Lifetime (mean time to failure) prediction of semiconductor lasers
    - Remaining useful life (the length of time a machine is likely to operate before it requires repair or replacement) prediction of semiconductor lasers
    - Optical fiber fault detection, localization, characterization, and identification for different optical network architectures
    - Anomaly detection in optical fiber monitoring
    Such ML approaches outperform the conventionally employed methods for all the investigated use cases by achieving better prediction accuracy and earlier prediction or detection capability.
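For the fiber fault localization use case, the underlying physics can be sketched in a few lines: a probe pulse reflects off the fault, and the round-trip echo delay gives the distance along the fiber. The ML methods in the thesis learn to detect and characterize such events from noisy monitoring traces; the constants and function name below are illustrative assumptions, not the thesis's models.

```python
# OTDR-style reflectometry sketch: distance = c * delay / (2 * n_group),
# where the factor of 2 accounts for the round trip and n_group is the
# group index of the fiber (a typical value for silica is assumed here).
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
GROUP_INDEX = 1.468        # assumed group index of standard silica fiber

def fault_distance_m(round_trip_delay_s):
    """Distance to a reflective event from its round-trip echo delay."""
    return C_VACUUM * round_trip_delay_s / (2.0 * GROUP_INDEX)

# A 98 microsecond echo corresponds to a fault roughly 10 km down the fiber.
print(fault_distance_m(98e-6))
```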

    Collection and Analysis of Driving Videos Based on Traffic Participants

    Full text link
    Autonomous vehicle (AV) prototypes have been deployed in increasingly varied environments in recent years. An AV must be able to reliably detect and predict the future motion of traffic participants to maintain safe operation based on data collected from high-quality onboard sensors. Sensors such as camera and LiDAR generate high-bandwidth data that requires substantial computational and memory resources. To address these AV challenges, this thesis investigates three related problems: 1) What will the observed traffic participants do? 2) Is an anomalous traffic event likely to happen in the near future? and 3) How should we collect fleet-wide high-bandwidth data based on 1) and 2) over the long term? The first problem is addressed with future traffic trajectory and pedestrian behavior prediction. We propose a future object localization (FOL) method for trajectory prediction in first person videos (FPV). FOL encodes heterogeneous observations including bounding boxes, optical flow features, and ego camera motions with multi-stream recurrent neural networks (RNN) to predict future trajectories. Because FOL does not consider multi-modal future trajectories, its accuracy suffers from accumulated RNN prediction error. We then introduce BiTraP, a goal-conditioned bidirectional multi-modal trajectory prediction method. BiTraP estimates multi-modal trajectories and uses a novel bi-directional decoder and loss to improve longer-term trajectory prediction accuracy. We show that different choices of non-parametric versus parametric target models directly influence predicted multi-modal trajectory distributions. Experiments with two FPV and six bird's-eye view (BEV) datasets show the effectiveness of our methods compared to the state of the art. We define pedestrian behavior prediction as a combination of action and intent.
We hypothesize that current and future actions are strong intent priors and propose a multi-task learning RNN encoder-decoder network to detect and predict future pedestrian actions and street crossing intent. Experimental results show that one task helps the other so they together achieve state-of-the-art performance on published datasets. To identify likely traffic anomaly events, we introduce an unsupervised video anomaly detection (VAD) method based on trajectories. We predict locations of traffic participants over a near-term future horizon and monitor accuracy and consistency of these predictions as evidence of an anomaly. Inconsistent predictions tend to indicate an anomaly has happened or is about to occur. A supervised video action recognition method can then be applied to classify detected anomalies. We introduce a spatial-temporal area under curve (STAUC) metric as a supplement to the existing area under curve (AUC) evaluation and show it captures how well a model detects temporal and spatial locations of anomalous events. Experimental results show the proposed method and consistency-based anomaly score are more robust to moving cameras than image generation based methods; our method achieves state-of-the-art performance over AUC and STAUC metrics. VAD and action recognition support event-of-interest (EOI) distinction from normal driving data. We introduce a Smart Black Box (SBB), an intelligent event data recorder, to prioritize EOI data in long-term driving. The SBB compresses high-bandwidth data based on EOI potential and on-board storage limits. The SBB is designed to prioritize newer and anomalous driving data and discard older and normal data. An optimal compression factor is selected based on the trade-off between data value and storage cost. 
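The SBB's compression-factor selection can be illustrated with a toy model: choose the factor that maximizes retained data value minus storage cost, so anomalous (high-value) clips are kept nearly raw while routine clips are compressed aggressively. The value and cost models, constants, and names below are invented for illustration and are not the thesis's formulation.

```python
def best_compression_factor(value, size_gb, cost_per_gb, factors):
    """Assumed model: compressing by factor f shrinks storage cost by 1/f
    but also scales the retained data value down as value/f. Return the
    factor with the best net score."""
    def net(f):
        return value / f - (size_gb / f) * cost_per_gb
    return max(factors, key=net)

factors = [1, 2, 4, 8, 16]

# A high-value anomalous clip is kept nearly raw...
anomalous = best_compression_factor(value=100.0, size_gb=1.0,
                                    cost_per_gb=0.5, factors=factors)
# ...while a low-value routine clip is compressed aggressively.
routine = best_compression_factor(value=0.1, size_gb=1.0,
                                  cost_per_gb=0.5, factors=factors)
print(anomalous, routine)  # 1 16
```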
Experiments in a traffic simulator and with real-world datasets show the efficiency and effectiveness of using a SBB to collect high-quality videos over long-term driving.
PhD thesis, Robotics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/168035/1/brianyao_1.pd

    Ghost In the Grid: Challenges for Reinforcement Learning in Grid World Environments

    Get PDF
    Current state-of-the-art deep reinforcement learning techniques require agents to gather large amounts of diverse experience to train effective and general models. In addition, many other factors have to be taken into consideration: for example, how the agent interacts with its environment; parameter optimization techniques; environment exploration methods; and the diversity of environments provided to an agent. In this thesis, we investigate several of these factors. Firstly, we introduce Griddly, a grid-world game engine that provides a state-of-the-art combination of performance and flexibility. We demonstrate that grid worlds provide a principled and expressive substrate for fundamental research questions in reinforcement learning, while filtering out noise inherent in physical systems. We show that although grid worlds are constructed with simple rule-based mechanics, they can be used to construct complex, open-ended, and procedurally generated environments. We improve upon Griddly with GriddlyJS, a web-based tool for designing and testing grid-world environments for reinforcement learning research. GriddlyJS provides a rich suite of features that assist researchers across a multitude of different learning approaches. To highlight the features of GriddlyJS, we present a dataset of 100 complex escape-room puzzle levels. In addition to these complex puzzle levels, we provide human-generated trajectories and a baseline policy that can be run in a web browser. We show that this tooling enables significantly faster research iteration in many sub-fields. We then explore several areas of RL research made accessible by the features introduced by Griddly. Firstly, we explore learning grid-world game mechanics using deep neural networks. The "neural game engine" is introduced, which has competitive sample efficiency and predicts states accurately over long time horizons.
Secondly, "conditional action trees" are introduced, which describe a method for compactly expressing complex hierarchical action spaces. Expressing hierarchical action spaces as trees leads to action spaces that are additive rather than multiplicative over the factors of the action space. It is shown that these compressed action spaces reduce the required output size of neural networks without compromising performance. This makes the interfaces to complex environments significantly simpler to implement. Finally, we explore the inherent symmetry in common observation spaces using the concept of "geometric deep learning". We show that certain geometric data augmentation methods do not conform to the underlying assumptions of several training algorithms. We provide solutions to these problems in the form of novel regularization functions and demonstrate that these methods correct the violated assumptions.
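The additive-versus-multiplicative point can be made concrete with a small count: a flat (Cartesian) action space needs one network output per combination of factors, while a factored, tree-style interface needs one output head per factor. The factor names and sizes below are made up for the example.

```python
from math import prod

# Hypothetical factored action space for a grid-world game.
factors = {"unit": 10, "action_type": 5, "direction": 8}

# Flat encoding: one logit per (unit, action_type, direction) combination.
flat_outputs = prod(factors.values())

# Tree/factored encoding: one output head per factor, sizes summed.
tree_outputs = sum(factors.values())

print(flat_outputs, tree_outputs)  # 400 23
```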

    Side-Channel Analysis and Cryptography Engineering: Getting OpenSSL Closer to Constant-Time

    Get PDF
    As side-channel attacks reached general-purpose PCs and became more practical for attackers to exploit, OpenSSL adopted in 2005 a flagging mechanism to protect against SCA. The opt-in mechanism allows developers to flag secret values, such as keys, with the BN_FLG_CONSTTIME flag. Whenever the flag is detected, the library changes its execution flow to SCA-secure functions that are slower but safer, protecting these secret values from being leaked. This mechanism favors performance over security; it is error-prone and obscure to most library developers, increasing the potential for side-channel vulnerabilities. This dissertation presents an extensive side-channel analysis of OpenSSL and criticizes its fragile flagging mechanism. The analysis reveals several flaws affecting the library, resulting in multiple side-channel attacks, improved cache-timing attack techniques, and a new side-channel vector. The first part of this dissertation introduces the main topic and the necessary related work, including the microarchitecture, the cache hierarchy, and attack techniques; it then presents a brief, troubled history of side-channel attacks and defenses in OpenSSL, setting the stage for the related publications. This dissertation includes seven original publications contributing to the areas of side-channel analysis, microarchitectural timing attacks, and applied cryptography. From an SCA perspective, the results identify several vulnerabilities and flaws enabling protocol-level attacks on RSA, DSA, and ECDSA, in addition to a full SCA of the SM2 cryptosystem. With respect to microarchitectural timing attacks, the dissertation presents a new side-channel vector due to port contention in the CPU execution units.
Finally, on the applied cryptography front, OpenSSL now enjoys a revamped code base securing several cryptosystems against SCA, favoring secure-by-default protection against side-channel attacks instead of the insecure opt-in mechanism provided by the fragile BN_FLG_CONSTTIME flag.
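The constant-time principle at the heart of this work can be illustrated generically: an early-exit comparison leaks how many leading bytes match through its running time, while a constant-time comparison touches every byte regardless of the data. This is a language-agnostic sketch, not OpenSSL code.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on where the first
    mismatch occurs, which can leak secret data through timing."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:      # data-dependent branch: returns early on mismatch
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: accumulate XOR differences over every
    byte so the running time is independent of the secret contents."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y   # no early exit, no data-dependent branch
    return diff == 0
```

In production code one would use a vetted primitive such as Python's `hmac.compare_digest` rather than writing such a routine by hand.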

    Artificial Intelligence Applications to Critical Transportation Issues

    Full text link