4,289 research outputs found

    Undergraduate Catalog of Studies, 2023-2024


    Graduate Catalog of Studies, 2023-2024


    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence are enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. 
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
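The image velocimetry idea underpinning these chapters can be illustrated with a minimal sketch: the displacement of surface texture between two video frames, located as the peak of their cross-correlation, converts directly into a surface velocity. This is a simplified stand-in for the UAV/LSPIV pipelines described above; the function name and parameters are illustrative.

```python
import numpy as np

def patch_velocity(frame_a, frame_b, dt, px_size):
    """Estimate a surface velocity vector from the displacement of image
    texture between two frames via FFT-based cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Peak of the circular cross-correlation gives the shift of b relative to a.
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed displacements in pixels.
    dy, dx = (p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))
    return dx * px_size / dt, dy * px_size / dt  # velocity components in m/s
```

A real LSPIV workflow would apply this per interrogation window across many frame pairs and orthorectify the imagery first; the sketch shows only the core correlation step.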

    Exploring Hardware Fault Impacts on Different Real Number Representations of the Structural Resilience of TCUs in GPUs

    The most recent generations of graphics processing units (GPUs) boost the execution of convolutional operations required by machine learning applications by resorting to specialized and efficient in-chip accelerators (Tensor Core Units or TCUs) that operate on matrix multiplication tiles. Unfortunately, modern cutting-edge semiconductor technologies are increasingly prone to hardware defects, and the trend to highly stress TCUs during the execution of safety-critical and high-performance computing (HPC) applications increases the likelihood of TCUs producing different kinds of failures. In fact, the intrinsic resiliency to hardware faults of arithmetic units plays a crucial role in safety-critical applications using GPUs (e.g., in automotive, space, and autonomous robotics). Recently, new arithmetic formats have been proposed, particularly those suited to neural network execution. However, the reliability characterization of TCUs supporting different arithmetic formats was still lacking. In this work, we quantitatively assessed the impact of hardware faults in TCU structures while employing two distinct formats (floating-point and posit) and using two different configurations (16 and 32 bits) to represent real numbers. For the experimental evaluation, we resorted to an architectural description of a TCU core (PyOpenTCU) and performed 120 fault simulation campaigns, injecting around 200,000 faults per campaign and requiring around 32 days of computation. Our results demonstrate that the posit format of TCUs is less affected by faults than the floating-point one (by up to three orders of magnitude for 16 bits and up to twenty orders for 32 bits). We also identified the most sensitive fault locations (i.e., those that produce the largest errors), thus paving the way to adopting smart hardening solutions.
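The fault-injection idea can be illustrated at a high level with a software sketch: flip one bit of a number's binary representation and measure the error it induces. This toy example uses IEEE-754 binary16 via NumPy; it is not the PyOpenTCU fault model, and posit formats would require a dedicated library. All names are illustrative.

```python
import numpy as np

def flip_bit_fp16(x, bit):
    """Flip one bit (0 = mantissa LSB, 15 = sign) of an IEEE-754 binary16
    value and return the corrupted value as a Python float."""
    raw = np.array([x], dtype=np.float16).view(np.uint16)
    raw ^= np.uint16(1 << bit)
    return float(raw.view(np.float16)[0])

def injection_campaign(value, reference):
    """Flip each of the 16 bits in turn and record the relative output error.
    Exponent-bit flips may yield inf/NaN errors, mirroring the catastrophic
    failure modes seen in hardware fault simulation."""
    return {bit: abs(flip_bit_fp16(value, bit) - reference) / abs(reference)
            for bit in range(16)}
```

Flipping the sign or exponent bits produces errors orders of magnitude larger than mantissa-LSB flips, which is the kind of per-location sensitivity ranking the study reports.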

    Meta-learning algorithms and applications

    Meta-learning in the broader context concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown substantial benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself (the architecture). Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning could be more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches: EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the value of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameters but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly, it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning.
Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained with existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize few-shot learning abilities across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms as well as novel applications enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were too challenging before. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
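Progressive resource-allocation schemes of the kind PASHA builds on can be sketched with a minimal synchronous successive-halving loop: give every configuration a small budget, keep the best fraction, and repeat with a larger budget. This is not PASHA itself; `evaluate` is a hypothetical user-supplied function returning a validation loss for a configuration trained at a given budget.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Repeatedly evaluate all surviving configurations at the current budget,
    keep the best 1/eta fraction, and multiply the budget by eta."""
    survivors, budget = list(configs), min_budget
    for _ in range(rounds):
        # Lower loss is better: sort ascending and keep the top fraction.
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
        if len(survivors) == 1:
            break
    return survivors[0]
```

The efficiency gain comes from spending large budgets only on configurations that already looked promising at small budgets; asynchronous variants additionally avoid waiting for whole rounds to finish.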

    Development of pedestrian collision avoidance strategy based on the fusion of Markov and social force models

    In urban traffic, accurate prediction of pedestrian trajectory and advanced collision avoidance strategy can effectively reduce the collision risk between intelligent vehicles and pedestrians. In order to improve the prediction accuracy of pedestrian trajectory and the safety of collision avoidance, a longitudinal and lateral intelligent collision avoidance strategy based on pedestrian trajectory prediction is proposed. Firstly, the process of a pedestrian crossing the road is considered as a combination of free motion described by a first-order Markov model and constrained motion presented by an improved social force model. The predicted pedestrian trajectory is obtained by weighted fusion of the trajectories of the two models with a multiple linear regression algorithm. Secondly, according to the predicted pedestrian trajectory and time to collision (TTC), the longitudinal and lateral collision avoidance strategy is designed. The improved artificial potential field method is used to plan the lateral collision avoidance path in real time based on the predicted pedestrian position, and a fuzzy controller is constructed to obtain the desired deceleration of the vehicle. Finally, the pedestrian motion fusion model and the longitudinal and lateral collision avoidance strategy are verified by Prescan and Simulink co-simulation. The results show that the average displacement error (ADE) and final displacement error (FDE) of pedestrian trajectory based on the pedestrian motion fusion model are smaller than those of the Markov model and the improved social force model, and the proposed pedestrian collision avoidance strategy can effectively achieve longitudinal and lateral collision avoidance.
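The TTC-based decision step can be sketched as follows: TTC is the longitudinal gap divided by the closing speed, and the controller escalates from braking to combined braking and lateral steering as TTC shrinks. The threshold values here are illustrative, not values from the paper.

```python
def time_to_collision(gap_m, v_vehicle, v_pedestrian_closing=0.0):
    """TTC in seconds: longitudinal gap over closing speed; infinite if the
    gap is opening rather than closing."""
    closing = v_vehicle + v_pedestrian_closing
    return float('inf') if closing <= 0 else gap_m / closing

def avoidance_action(ttc, ttc_brake=2.5, ttc_steer=1.2):
    """Escalating response: do nothing while TTC is comfortable, brake
    longitudinally first, and add lateral steering only when braking alone
    can no longer resolve the conflict."""
    if ttc > ttc_brake:
        return 'keep'
    return 'brake' if ttc > ttc_steer else 'brake+steer'
```

In the paper's scheme the braking command comes from a fuzzy controller and the lateral path from an improved artificial potential field; this sketch only shows the threshold logic that arbitrates between them.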

    Dconformer: A denoising convolutional transformer with joint learning strategy for intelligent diagnosis of bearing faults

    Rolling bearings are the core components of rotating machinery, and their normal operation is crucial to many industrial applications. Most existing condition monitoring methods have been devoted to extracting discriminative features from vibration signals that reflect bearing health status. However, the complex working conditions of rolling bearings often make the fault-related information easily buried in noise and other interference. Therefore, it is challenging for existing approaches to extract sufficient critical features in these scenarios. To address this issue, this paper proposes a novel CNN-Transformer network, referred to as Dconformer, capable of extracting both local and global discriminative features from noisy vibration signals. The main contributions of this research include: (1) Developing a novel joint-learning strategy that simultaneously enhances the performance of signal denoising and fault diagnosis, leading to robust and accurate diagnostic results; (2) Constructing a novel CNN-transformer network with a multi-branch cross-cascaded architecture, which inherits the strengths of CNNs and transformers and demonstrates superior anti-interference capability. Extensive experimental results reveal that the proposed Dconformer outperforms five state-of-the-art approaches, particularly in strongly noisy scenarios.
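The core idea of pairing a local (convolutional) branch with a global (self-attention) branch can be illustrated in plain NumPy: the convolution responds to local waveform patterns, while attention lets every time step weigh every other step. This is a conceptual toy, not the Dconformer architecture; all names are illustrative.

```python
import numpy as np

def local_branch(x, kernel):
    """CNN-style branch: 1D convolution extracts local waveform features."""
    return np.convolve(x, kernel, mode='same')

def global_branch(x):
    """Transformer-style branch: single-head self-attention over the signal,
    treating each sample as a one-dimensional token."""
    q = k = v = x.reshape(-1, 1)
    scores = q @ k.T / np.sqrt(q.shape[1])
    # Row-wise softmax (shifted for numerical stability).
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ v).ravel()

def fused_features(x, kernel):
    """Sum the two branches, loosely mirroring a parallel CNN-transformer design."""
    return local_branch(x, kernel) + global_branch(x)
```

The actual network learns the convolution kernels and attention projections jointly with a denoising objective; the sketch only shows why the two branches capture complementary (local vs. global) structure.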

    Modellbasierte Simulation und Kalibrierung eines multimodalen Systems aus OCT und Optoakustik zur nichtinvasiven, präoperativen Dickenbestimmung von melanomverdächtigen Hautläsionen

    In this dissertation, methods for the calibration of optical coherence tomography (OCT) systems and for the simulation of optoacoustic signals are presented. The key question here is whether a multimodal system consisting of OCT and optoacoustics is suitable for noninvasive, preoperative thickness determination of skin lesions suspected of melanoma and what conditions, if any, must be met for this purpose. Given the current state of the art, such a modality for melanoma diagnosis would be very enriching for dermatology. In addition to the definition of malignant melanoma, the most common diagnostic procedures in dermatology will be explained. The current approach to melanoma diagnostics shows that there is a lot of potential for improvement in order to be able to make diagnoses preoperatively in the future and to prevent unnecessary surgical interventions. The project in which this work was developed is briefly presented. It also discusses the physical principles needed to simulate and calibrate the multimodal system. The methods presented in chapters 6 and 7 for calibrating the OCT and for simulating the optoacoustic signals then build on these fundamentals. The general setup of OCT systems as well as of two specific OCT devices is explained. The methods then presented for geometric calibration and refractive index correction are essential for the thickness determination of structures in OCT images. In chapter 7 different methods are presented which are suitable for the simulation of optoacoustic signals. On the one hand, the solution of the direct problem, i.e. the creation of optoacoustic signals, is shown as well as the solution of the indirect problem, in which conclusions can be drawn about the initial pressure profile if optoacoustic signals are available. Furthermore, optoacoustic signals of simulated melanomas are generated and evaluated, which is also important for answering the key question. 
The results of this dissertation are discussed in detail at the end, and an outlook is given on how work on the multimodal system will continue.
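The refractive-index correction that underpins thickness determination in OCT images reduces, in its simplest form, to dividing the measured optical path length by the tissue's refractive index. A minimal sketch with illustrative names and an assumed index of 1.4 (not a value from the dissertation):

```python
def lesion_thickness(surface_px, base_px, px_to_mm, n_tissue=1.4):
    """Convert the axial pixel distance between two segmented interfaces in an
    OCT B-scan into geometric thickness in millimetres.

    OCT measures optical path length; dividing by the refractive index of the
    intervening tissue recovers the physical (geometric) thickness.
    """
    optical_path_mm = (base_px - surface_px) * px_to_mm  # optical path length
    return optical_path_mm / n_tissue
```

A full calibration as described in the dissertation would also correct the scan geometry itself; this sketch covers only the refractive-index step.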

    Occupational health and safety issues in human-robot collaboration: State of the art and open challenges

    Human-Robot Collaboration (HRC) refers to the interaction of workers and robots in a shared workspace. Owing to the integration of industrial automation strengths with the inimitable cognitive capabilities of humans, HRC is paramount to moving towards advanced and sustainable production systems. Although the overall safety of collaborative robotics has increased over time, further research efforts are needed to allow humans to operate alongside robots with awareness and trust. Numerous safety concerns remain open, and either new or enhanced technical, procedural and organizational measures have to be investigated to design and implement inherently safe and ergonomic automation solutions, aligning system performance with human safety. Therefore, a bibliometric analysis and a literature review are carried out in the present paper to provide a comprehensive overview of Occupational Health and Safety (OHS) issues in HRC. As a result, the most researched topics and application areas, and the possible future lines of research, are identified. Reviewed articles stress the central role played by humans during collaboration, underlining the need to integrate the human factor into hazard analysis and risk assessment. Human-centered design and cognitive engineering principles also require further investigation to increase worker acceptance and trust during collaboration. Deeper studies are needed in the healthcare sector to investigate the social and ethical implications of HRC. Whatever the application context, the implementation of increasingly advanced technologies is fundamental to overcoming the current HRC safety concerns, designing low-risk HRC systems while ensuring system productivity.

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus then turns to the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models against many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation.
Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
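The invariance analysis described above can be approximated with a simple metric: the mean cosine similarity between embeddings of a sample and its augmented view. A minimal sketch with illustrative names; a real study would use a trained encoder and image augmentations rather than toy vectors.

```python
import numpy as np

def invariance_score(embed, samples, augment, rng):
    """Mean cosine similarity between each sample's embedding and the
    embedding of an augmented view; a value near 1 means the representation
    is invariant to that augmentation."""
    scores = []
    for x in samples:
        a, b = embed(x), embed(augment(x, rng))
        scores.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(scores))
```

Comparing such scores across augmentation types (spatial crops vs. colour jitter, say) is one way to expose the spatial/appearance invariance trade-off the thesis investigates.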
