    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in the medical imaging pipeline remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical imaging processing pipelines. Variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data is much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work. We present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models, which are trained utilising only normal/healthy subjects. However, despite the significant improvement in automatic abnormality detection systems, clinical routine continues to rely exclusively on the contribution of overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task.
A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance on lung CT scans. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys in this specific research area.
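The normative-learning idea described above can be illustrated with a minimal sketch: fit a model of "normal" data only, then score new samples by how poorly the model explains them. Here PCA stands in for the generative model used in the thesis (which trains on healthy ultrasound subjects); the data, threshold percentile, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: samples near a 2-D subspace of a 10-D space,
# standing in for images of healthy anatomy (illustrative stand-in).
basis = rng.normal(size=(2, 10))
normal_train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# Fit a simple normative model (PCA via SVD) on normal samples only.
mean = normal_train.mean(axis=0)
_, _, vt = np.linalg.svd(normal_train - mean, full_matrices=False)
components = vt[:2]  # keep the top-2 principal directions

def anomaly_score(x):
    """Reconstruction error under the normative model."""
    centered = x - mean
    recon = centered @ components.T @ components
    return np.linalg.norm(centered - recon, axis=-1)

# Threshold chosen from held-out normal data (99th percentile is an assumption).
normal_val = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 10))
threshold = np.percentile(anomaly_score(normal_val), 99)

# An off-manifold sample should score well above the threshold.
anomalous = 3.0 * rng.normal(size=(1, 10))
print(anomaly_score(anomalous)[0] > threshold)
```

The same score-and-threshold pattern applies when the normative model is a deep generative model rather than PCA.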

    Anomaly Detection in Lidar Data by Combining Supervised and Self-Supervised Methods

    To enable safe autonomous driving, a reliable and redundant perception of the environment is required. In the context of autonomous vehicles, perception is mainly based on machine learning models that analyze data from various sensors such as camera, Radio Detection and Ranging (radar), and Light Detection and Ranging (lidar). Since the performance of the models depends significantly on the training data used, it is necessary to ensure perception even in situations that are difficult to analyze and deviate from the training dataset. These situations are called corner cases or anomalies. Motivated by the need to detect such situations, this thesis presents a new approach for detecting anomalies in lidar data by combining Supervised (SV) and Self-Supervised (SSV) models. In particular, inconsistent point-wise predictions between an SV and an SSV part serve as an indication of anomalies arising from the models themselves, e.g., due to a lack of knowledge. The SV part is composed of an SV semantic segmentation model and an SV moving object segmentation model, which together assign a semantic motion class to each point of the point cloud. Based on the definition of semantic motion classes, a first motion label, denoting whether the point is static or dynamic, is predicted for each point. The SSV part mainly consists of an SSV scene flow model and an SSV odometry model and predicts a second motion label for each point. Here, the scene flow model estimates a displacement vector for each point, which, after compensating for the ego-motion given by the odometry model, represents only the point's own motion. A separate quantitative analysis of the two parts and a qualitative analysis of the anomaly detection capabilities of their combination are performed. In the qualitative analysis, the frames are classified into four main categories, namely correctly consistent, incorrectly consistent, anomalies detected by the SSV part, and anomalies detected by the SV part. In addition, weaknesses are identified in both the SV part and the SSV part.
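The core consistency check between the two parts can be sketched in a few lines: each part yields a per-point static/dynamic label, and disagreement flags a potential anomaly. The label values, the flow-magnitude threshold, and the function name are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def flag_inconsistent(sv_labels, ssv_labels):
    """Point-wise anomaly flags: True where the supervised and
    self-supervised motion labels disagree (0 = static, 1 = dynamic)."""
    return np.asarray(sv_labels) != np.asarray(ssv_labels)

# SV part: semantic motion class per point, mapped to a static/dynamic label.
sv_motion = np.array([0, 0, 1, 1, 0])
# SSV part: ego-motion-compensated scene-flow magnitude per point,
# thresholded into a second static/dynamic label (threshold is assumed).
flow_mag = np.array([0.01, 0.02, 0.80, 0.03, 0.90])
ssv_motion = (flow_mag > 0.1).astype(int)

anomaly_mask = flag_inconsistent(sv_motion, ssv_motion)
print(anomaly_mask.tolist())  # disagreement at the last two points
```

In practice the same mask would be computed over full point clouds, with the flagged points then inspected or categorized as in the qualitative analysis above.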

    Multimodal Detection of Unknown Objects on Roads for Autonomous Driving

    Tremendous progress in deep learning over the last years has led towards a future with autonomous vehicles on our roads. Nevertheless, the performance of their perception systems is strongly dependent on the quality of the utilized training data. As these usually only cover a fraction of all object classes an autonomous driving system will face, such systems struggle with handling the unexpected. In order to safely operate on public roads, the identification of objects from unknown classes remains a crucial task. In this paper, we propose a novel pipeline to detect unknown objects. Instead of focusing on a single sensor modality, we make use of lidar and camera data by combining state-of-the-art detection models in a sequential manner. We evaluate our approach on the Waymo Open Perception Dataset and point out current research gaps in anomaly detection.
    Comment: Daniel Bogdoll, Enrico Eisen, Maximilian Nitsche, and Christin Scheib contributed equally. Accepted for publication at SMC 202
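One way to picture a sequential lidar-camera fusion for unknown-object detection is: take lidar object proposals, project them into the image plane, and treat any proposal that no known-class camera detection overlaps as "unknown". The boxes, IoU threshold, and function names below are illustrative assumptions, not the paper's pipeline.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def unknown_objects(lidar_proposals, camera_detections, known_classes, iou_thr=0.3):
    """Sequential fusion sketch: a lidar proposal is 'unknown' if no
    camera detection of a known class overlaps it in the image plane."""
    unknown = []
    for box in lidar_proposals:
        matched = any(iou(box, det_box) >= iou_thr and cls in known_classes
                      for det_box, cls in camera_detections)
        if not matched:
            unknown.append(box)
    return unknown

# Toy example: two projected lidar proposals, one matched by a "car" detection.
lidar = [(10, 10, 50, 50), (60, 60, 90, 90)]
camera = [((12, 12, 48, 48), "car")]
print(unknown_objects(lidar, camera, known_classes={"car", "pedestrian"}))
```

Real pipelines would additionally handle calibration, occlusion, and detector confidence, but the matching logic stays the same.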

    Multiresolution Feature Guidance Based Transformer for Anomaly Detection

    Anomaly detection can be framed as an unsupervised learning task that identifies images deviating from normal images. In general, anomaly detection tasks pose two main challenges, i.e., class imbalance and the unexpectedness of anomalies. In this paper, we propose a multiresolution feature guidance method based on Transformer, named GTrans, for unsupervised anomaly detection and localization. In GTrans, an Anomaly Guided Network (AGN) pre-trained on ImageNet is developed to provide surrogate labels for features and tokens. Under the tacit knowledge guidance of the AGN, the anomaly detection network, named Trans, utilizes a Transformer to effectively establish relationships between multiresolution features, enhancing the ability of the Trans to fit the normal data manifold. Due to the strong generalization ability of the AGN, GTrans locates anomalies by comparing the differences in spatial distance and direction of multi-scale features extracted from the AGN and the Trans. Our experiments demonstrate that the proposed GTrans achieves state-of-the-art performance in both detection and localization on the MVTec AD dataset. GTrans achieves image-level and pixel-level anomaly detection AUROC scores of 99.0% and 97.9% on the MVTec AD dataset, respectively.
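The "distance and direction" comparison of two feature maps can be sketched at a single scale: combine the per-location Euclidean distance with one minus the cosine similarity along the channel axis. The toy feature maps and function name are assumptions for illustration; GTrans compares AGN and Trans features across multiple scales.

```python
import numpy as np

def anomaly_map(feats_a, feats_b, eps=1e-8):
    """Per-location anomaly score from two feature maps of shape (H, W, C):
    Euclidean distance plus (1 - cosine similarity) along the channel axis."""
    distance = np.linalg.norm(feats_a - feats_b, axis=-1)
    dot = (feats_a * feats_b).sum(axis=-1)
    norms = (np.linalg.norm(feats_a, axis=-1)
             * np.linalg.norm(feats_b, axis=-1) + eps)
    direction = 1.0 - dot / norms
    return distance + direction

rng = np.random.default_rng(0)
feats_agn = rng.normal(size=(8, 8, 16))   # stand-in for one AGN feature scale
feats_trans = feats_agn.copy()            # Trans agrees everywhere ...
feats_trans[2, 3] += 5.0                  # ... except one injected discrepancy

amap = anomaly_map(feats_agn, feats_trans)
print(np.unravel_index(amap.argmax(), amap.shape))  # peak at the discrepancy
```

Summing or upsampling such maps over several scales yields a pixel-level localization map of the kind scored by pixel-level AUROC.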

    Vision Language Models in Autonomous Driving and Intelligent Transportation Systems

    The applications of Vision-Language Models (VLMs) in the fields of Autonomous Driving (AD) and Intelligent Transportation Systems (ITS) have attracted widespread attention due to their outstanding performance and their ability to leverage Large Language Models (LLMs). By integrating language data, vehicles and transportation systems are able to deeply understand real-world environments, improving driving safety and efficiency. In this work, we present a comprehensive survey of the advances in language models in this domain, encompassing current models and datasets. Additionally, we explore the potential applications and emerging research directions. Finally, we thoroughly discuss the challenges and research gaps. The paper aims to provide researchers with the current state of work and future trends of VLMs in AD and ITS.

    Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging

    Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.

    Exploring the Potential of World Models for Anomaly Detection in Autonomous Driving

    In recent years, there have been remarkable advancements in autonomous driving. While autonomous vehicles demonstrate high performance in closed-set conditions, they encounter difficulties when confronted with unexpected situations. At the same time, world models emerged in the field of model-based reinforcement learning as a way to enable agents to predict the future depending on potential actions. This led to outstanding results in sparse reward and complex control tasks. This work provides an overview of how world models can be leveraged to perform anomaly detection in the domain of autonomous driving. We provide a characterization of world models and relate individual components to previous works in anomaly detection to facilitate further research in the field.
    Comment: Accepted for publication at SSCI 202
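A common way to turn a world model into an anomaly detector is to score the surprise between the model's predicted next state and the actual observation. The linear dynamics, class name, and threshold-free scoring below are illustrative assumptions, standing in for a learned dynamics model.

```python
import numpy as np

class LinearWorldModel:
    """Toy world model: predicts the next state as a linear function of
    the current state and action (stand-in for a learned dynamics model)."""
    def __init__(self, A, B):
        self.A, self.B = A, B

    def predict(self, state, action):
        return self.A @ state + self.B @ action

def anomaly_score(model, state, action, observed_next):
    """Surprise = error between the model's prediction and the observation."""
    return float(np.linalg.norm(model.predict(state, action) - observed_next))

# Assumed 2-D state with a 1-D action that shifts the first coordinate.
model = LinearWorldModel(A=np.eye(2), B=np.array([[1.0], [0.0]]))
state, action = np.array([0.0, 0.0]), np.array([1.0])

expected = model.predict(state, action)
print(anomaly_score(model, state, action, expected))              # ~0: as predicted
print(anomaly_score(model, state, action, np.array([5.0, 5.0])))  # large: surprising
```

High surprise can then flag frames as corner cases, mirroring the role prediction error plays in the world-model components surveyed above.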

    Anomaly Detection in the Latent Space of VAEs

    One of the most important challenges in the development of autonomous driving systems is to make them robust against unexpected or unknown objects. Many of these systems perform very well in a controlled environment where they encounter situations for which they have been trained. In order to be safely deployed in the real world, they need to be aware when they encounter situations or novel objects for which they have not been sufficiently trained, in order to prevent possibly dangerous behavior. In reality, they often fail when dealing with such anomalies, and do so without any sign of uncertainty in their predictions. This thesis focuses on the problem of detecting anomalous objects in road images in the latent space of a VAE. For that, normal and anomalous data were used to train the VAE to fit the data onto two prior distributions, essentially training the VAE to create an anomaly cluster and a normal cluster. This structure of the latent space makes it possible to detect anomalies in it using clustering algorithms like k-means. Multiple experiments were carried out in order to improve the separation of normal and anomalous data in the latent space. To test this approach, anomaly data from multiple datasets were used to evaluate the detection of anomalies. The approach described in this thesis was able to detect almost all images containing anomalous objects, but it also suffers from a high false positive rate, which is still a common problem of many anomaly detection methods.
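The clustering step described above can be sketched with synthetic latent codes: if the VAE has pushed normal and anomalous samples toward two different priors, a k=2 clustering of latent vectors separates them. The Gaussian stand-in latents, the minimal k-means, and the "smaller cluster = anomalies" rule are all assumptions for illustration; the thesis identifies the clusters via the priors themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in latent codes: normal data near one prior, anomalies near another.
normal_z = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
anomaly_z = rng.normal(loc=4.0, scale=0.5, size=(20, 2))
latents = np.vstack([normal_z, anomaly_z])

def two_means(x, iters=20):
    """Minimal k-means with k=2 on latent vectors."""
    centers = x[[0, -1]].copy()  # crude initialisation: one point per cluster
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = x[assign == k].mean(axis=0)
    return assign, centers

assign, centers = two_means(latents)
# Assumed heuristic: treat the smaller cluster as the anomaly cluster.
anomaly_cluster = np.bincount(assign, minlength=2).argmin()
flags = assign == anomaly_cluster
print(flags[100:].mean())  # fraction of anomalous latents flagged
```

With well-separated priors the clusters are recovered cleanly; the high false positive rate reported in the thesis arises when real normal and anomalous latents overlap far more than in this toy setup.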