
    Straight to Shapes: Real-time Detection of Encoded Shapes

    Current object detection approaches predict bounding boxes, but these provide little instance-specific information beyond location, scale and aspect ratio. In this work, we propose to directly regress to objects' shapes in addition to their bounding boxes and categories. It is crucial to find an appropriate shape representation that is compact and decodable, and in which objects can be compared for higher-order concepts such as view similarity, pose variation and occlusion. To achieve this, we use a denoising convolutional auto-encoder to establish an embedding space, and place the decoder after a fast end-to-end network trained to regress directly to the encoded shape vectors. This yields, to the best of our knowledge, the first real-time shape prediction network, running at ~35 FPS on a high-end desktop. With higher-order shape reasoning well integrated into the network pipeline, the network shows the useful practical quality of generalising to unseen categories similar to the ones in the training set, something that most existing approaches fail to handle.
    Comment: 16 pages including appendix; published at CVPR 2017.
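
    As a minimal sketch of the core idea (assuming 64x64 binary masks and a 20-D code; the paper's actual architecture and training regime are not reproduced here), a denoising convolutional autoencoder can establish a compact, decodable shape embedding that a detector then regresses to:

```python
# Sketch of a shape-embedding autoencoder; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    """Denoising conv autoencoder: binary mask -> compact code -> mask."""
    def __init__(self, code_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, mask):
        # corrupt the input so the learned embedding is robust (denoising)
        noisy = (mask + 0.1 * torch.randn_like(mask)).clamp(0, 1)
        code = self.encoder(noisy)
        return self.decoder(code), code

# The detector regresses to `code` instead of a raw mask; at test time only
# the decoder is run on the predicted vector to recover the shape.
ae = ShapeAutoencoder()
masks = torch.rand(8, 1, 64, 64).round()        # toy binary masks
recon, codes = ae(masks)
loss = nn.functional.binary_cross_entropy(recon, masks)
```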

    Anomaly detection and virtual reality visualisation in supercomputers

    Anomaly detection is the identification of events or observations that deviate from the expected behaviour of a given set of data. Its main application is the prediction of possible technical failures. Anomaly detection on supercomputers is a particularly difficult problem due to the large scale of the systems and the large number of components. Most research in this field employs machine learning methods and regression models in a supervised fashion, which implies the need for a large amount of labelled data to train such systems. This work proposes the use of autoencoder models, allowing the problem to be approached with semi-supervised learning techniques. Two model training approaches are compared: in the first, a single model is trained with data from all the nodes of a supercomputer; in the second, motivated by significant differences observed between nodes, a separate model is trained for each node. The results are analysed by evaluating the positive and negative aspects of each approach. In addition, a replica of the Marconi 100 supercomputer is developed in a virtual reality environment that allows the data from each node to be visualised at the same time.
    Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. We would like to thank the "A way of making Europe" European Regional Development Fund (ERDF) and MCIN/AEI/10.13039/501100011033 for supporting this work under the MoDeaAS project (grant PID2019-104818RB-I00). Furthermore, we would like to thank the University of Skövde and the ASSAR Innovation Arena for their support in developing this work.
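
    A minimal sketch of the semi-supervised setup: the autoencoder is trained only on metrics from healthy operation, and a high reconstruction error flags an anomaly. The feature count, architecture and threshold below are illustrative assumptions, not the paper's exact configuration:

```python
# Semi-supervised anomaly detection via reconstruction error (sketch).
import torch
import torch.nn as nn

n_metrics = 16  # e.g. per-node temperatures, power, fan speeds (assumed)

model = nn.Sequential(                 # small symmetric dense autoencoder
    nn.Linear(n_metrics, 8), nn.ReLU(),
    nn.Linear(8, 4), nn.ReLU(),        # bottleneck
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, n_metrics),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

normal = torch.randn(1024, n_metrics)  # stand-in for healthy telemetry
for _ in range(200):                   # train to reconstruct normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

def anomaly_score(x):
    """Per-sample reconstruction error; larger means more anomalous."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Threshold taken from the error distribution on normal data (assumed 99th
# percentile); the per-node variant repeats this training for each node.
threshold = anomaly_score(normal).quantile(0.99)
alerts = anomaly_score(torch.randn(32, n_metrics) * 3) > threshold
```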

    A Knowledge Distillation Ensemble Framework for Predicting Short and Long-term Hospitalisation Outcomes from Electronic Health Records Data

    The ability to perform accurate prognosis of patients is crucial for proactive clinical decision making, informed resource management and personalised care. Existing outcome prediction models suffer from low recall of infrequent positive outcomes. We present a highly scalable and robust machine learning framework that automatically predicts adversity, represented by mortality and ICU admission, from time-series vital signs and laboratory results obtained within the first 24 hours of hospital admission. The stacked platform comprises two components: a) an unsupervised LSTM autoencoder that learns an optimal representation of the time-series and uses it to differentiate the less frequent patterns which conclude with an adverse event from the majority patterns that do not, and b) a gradient boosting model that relies on the constructed representation to refine prediction, incorporating static features of demographics, admission details and clinical summaries. The model is used to assess a patient's risk of adversity over time and provides visual justifications of its predictions based on the patient's static features and dynamic signals. Results of three case studies for predicting mortality and ICU admission show that the model outperforms all existing outcome prediction models, achieving a PR-AUC of 0.891 (95% CI: 0.878-0.969) in predicting mortality in ICU and general ward settings and 0.908 (95% CI: 0.870-0.935) in predicting ICU admission.
    Comment: 14 pages.
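
    A minimal sketch of the two-stage stack: an LSTM autoencoder compresses the first-24h vitals time-series, and a gradient boosting classifier combines that representation with static features. The dimensions and the use of scikit-learn's GradientBoostingClassifier are illustrative assumptions, not the paper's exact setup:

```python
# Stacked LSTM-autoencoder + gradient boosting (sketch).
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_signals=8, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_signals, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_signals, batch_first=True)

    def forward(self, x):                      # x: (batch, time, signals)
        _, (h, _) = self.encoder(x)            # final hidden state
        code = h[-1]                           # summary of the series
        # repeat the code at every step and decode back to the signals
        recon, _ = self.decoder(code.unsqueeze(1).expand(-1, x.size(1), -1))
        return recon, code

ae = LSTMAutoencoder()
vitals = torch.randn(256, 24, 8)               # 24 hourly readings, 8 signals
recon, codes = ae(vitals)                      # train with MSE(recon, vitals)

# Stage 2: concatenate the learned codes with static features
# (demographics, admission details) and fit the booster on outcome labels.
static = torch.randn(256, 5)
X = torch.cat([codes.detach(), static], dim=1).numpy()
y = (torch.rand(256) < 0.1).long().numpy()     # toy adverse-outcome labels
clf = GradientBoostingClassifier().fit(X, y)
risk = clf.predict_proba(X)[:, 1]              # patient risk of adversity
```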

    A Survey on Explainable Anomaly Detection

    In the past two decades, most research on anomaly detection has focused on improving the accuracy of the detection, while largely ignoring the explainability of the corresponding methods and thus leaving the explanation of outcomes to practitioners. As anomaly detection algorithms are increasingly used in safety-critical domains, providing explanations for the high-stakes decisions made in those domains has become an ethical and regulatory requirement. Therefore, this work provides a comprehensive and structured survey on state-of-the-art explainable anomaly detection techniques. We propose a taxonomy based on the main aspects that characterise each explainable anomaly detection technique, aiming to help practitioners and researchers find the explainable anomaly detection method that best suits their needs.
    Comment: Paper accepted by the ACM Transactions on Knowledge Discovery from Data (TKDD) for publication (preprint version).
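
    To make "explainable anomaly detection" concrete, here is a minimal sketch of one common explanation style such surveys cover: attributing a sample's anomaly score to individual features. This is an illustrative example, not a method proposed by the survey itself; the feature names are assumed:

```python
# Feature-wise attribution of an anomaly score (sketch).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 4))             # normal operating data
mu, sigma = train.mean(axis=0), train.std(axis=0)

feature_names = ["temp", "pressure", "flow", "vibration"]  # assumed names

def explain(sample):
    """Rank features by how strongly each drives the anomaly score."""
    z = np.abs((sample - mu) / sigma)          # per-feature deviation
    order = np.argsort(z)[::-1]
    return [(feature_names[i], float(z[i])) for i in order]

anomaly = np.array([0.1, 8.0, -0.2, 0.3])      # pressure is far off
print(explain(anomaly))                         # pressure ranked first
```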

    Immersive analytics for oncology patient cohorts

    This thesis proposes a novel interactive immersive analytics tool and methods to interrogate a cancer patient cohort in an immersive virtual environment, namely Virtual Reality to Observe Oncology data Models (VROOM). The overall objective is to develop an immersive analytics platform, including a data analytics pipeline from raw gene expression data to immersive visualisation on virtual and augmented reality platforms utilising a game engine; Unity3D has been used to implement the visualisation. The work in this thesis could provide oncologists and clinicians with an interactive visualisation and visual analytics platform that helps them drive their analysis of treatment efficacy and achieve the goal of evidence-based personalised medicine. The thesis integrates the latest discoveries and developments in cancer patient prognosis, immersive technologies, machine learning, decision support systems and interactive visualisation to form an immersive analytics platform for complex genomic data. The experimental paradigm followed is the understanding of transcriptomics in cancer samples: the thesis specifically investigates gene expression data to determine the biological similarity revealed by patients' tumour transcriptomic profiles, which indicate the genes active in different patients. In summary, the thesis contributes: i) a novel immersive analytics platform for interrogating patient cohort data in a similarity space based on the patients' biological and genomic similarity; ii) an effective immersive environment optimisation design based on a usability study of exocentric and egocentric visualisation, audio and sound design optimisation; iii) an integration of trusted and familiar 2D biomedical visual analytics methods into the immersive environment; iv) a novel use of game theory as the decision-making engine to support the analytics process, and an application of optimal transport theory to missing data imputation to ensure the preservation of the data distribution; and v) case studies showcasing the real-world application of the visualisation and its effectiveness.
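
    A minimal sketch of one step in such a pipeline: projecting a cohort's gene expression profiles into a 3-D similarity space whose coordinates could position patients in the virtual environment. PCA is used here as an illustrative stand-in; the thesis's actual pipeline and the Unity3D visualisation layer are not reproduced:

```python
# Gene expression -> 3-D similarity-space coordinates (sketch).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
expression = rng.lognormal(size=(120, 5000))   # 120 patients x 5000 genes (toy)

log_expr = np.log1p(expression)                # variance-stabilising transform
coords = PCA(n_components=3).fit_transform(log_expr)

# Nearby points are transcriptomically similar patients; the coordinates
# would then be handed to the game engine for placement and colouring.
for patient_id, (x, y, z) in enumerate(coords[:3]):
    print(f"patient {patient_id}: ({x:.2f}, {y:.2f}, {z:.2f})")
```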

    Rapid Spectral Parameter Prediction for Black Hole X-Ray Binaries using Physicalised Autoencoders

    Black hole X-ray binaries (BHBs) offer insights into extreme gravitational environments and the testing of general relativity. The X-ray spectrum collected by NICER offers valuable information on the properties and behaviour of BHBs through spectral fitting. However, traditional spectral fitting methods are slow and scale poorly with model complexity. This paper presents a new semi-supervised autoencoder neural network for parameter prediction and spectral reconstruction of BHBs, showing an improvement of up to a factor of 2,700 in speed while maintaining comparable accuracy. The approach maps the spectral features from the numerous outbursts catalogued by NICER and generalises them to new systems for efficient and accurate spectral fitting. The effectiveness of this approach is demonstrated in the spectral fitting of BHBs and holds promise for use in other areas of astronomy and physics for categorising large datasets.
    Comment: 12 pages, 12 figures.
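
    A minimal sketch of the "physicalised" idea: the autoencoder's latent units are tied to physical spectral parameters with a supervised loss alongside the usual reconstruction loss, so the encoder becomes a fast parameter predictor. The dimensions and loss weighting below are illustrative assumptions:

```python
# Semi-supervised autoencoder with a physical latent space (sketch).
import torch
import torch.nn as nn

n_bins, n_params = 256, 5                      # spectrum bins, fit parameters

encoder = nn.Sequential(nn.Linear(n_bins, 64), nn.ReLU(),
                        nn.Linear(64, n_params))
decoder = nn.Sequential(nn.Linear(n_params, 64), nn.ReLU(),
                        nn.Linear(64, n_bins))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

spectra = torch.randn(512, n_bins)             # stand-in for NICER spectra
params = torch.randn(512, n_params)            # parameters from prior fits

for _ in range(100):
    opt.zero_grad()
    latent = encoder(spectra)
    recon = decoder(latent)
    # the reconstruction term keeps the latent space generative, while the
    # supervised term anchors it to the known spectral-fit parameters
    loss = (nn.functional.mse_loss(recon, spectra)
            + nn.functional.mse_loss(latent, params))
    loss.backward()
    opt.step()

# At inference, a single forward pass replaces an iterative spectral fit.
predicted_params = encoder(torch.randn(1, n_bins))
```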