986 research outputs found

    Radiation studies from meteorological satellites

    Earth albedo variability and outgoing infrared radiation - data from TIROS satellite

    Understanding Human Coronavirus HCoV-NL63

    Even though coronavirus infection of humans is not normally associated with severe disease, the identification of the coronavirus responsible for the outbreak of severe acute respiratory syndrome showed that highly pathogenic coronaviruses can enter the human population. Shortly thereafter, in the Netherlands in 2004, another novel human coronavirus (HCoV-NL63) was isolated from a seven-month-old infant suffering from respiratory symptoms. This virus has subsequently been identified in various countries, indicating a worldwide distribution. HCoV-NL63 has been shown to infect mainly children and the immunocompromised, who present with either mild upper respiratory symptoms (cough, fever and rhinorrhoea) or more serious lower respiratory tract involvement such as bronchiolitis and croup, the latter observed mainly in younger children. In fact, HCoV-NL63 is the aetiological agent for up to 10% of all respiratory diseases. This review summarizes recent findings on human coronavirus HCoV-NL63 infections, including isolation and identification, phylogeny and taxonomy, genome structure and transcriptional regulation, transmission and pathogenesis, and detection and diagnosis.

    Efficient Scopeformer: Towards Scalable and Rich Feature Extraction for Intracranial Hemorrhage Detection

    The quality and richness of feature maps extracted by convolutional neural networks (CNNs) and vision Transformers (ViTs) directly relate to robust model performance. In medical computer vision, these information-rich features are crucial for detecting rare cases within large datasets. This work presents the "Scopeformer," a novel multi-CNN-ViT model for intracranial hemorrhage classification in computed tomography (CT) images. The Scopeformer architecture is scalable and modular, which allows utilizing various CNN architectures as the backbone with diversified output features and pre-training strategies. We propose effective feature projection methods to reduce redundancies among CNN-generated features and to control the input size of ViTs. Extensive experiments with various Scopeformer models show that model performance is proportional to the number of convolutional blocks employed in the feature extractor. Using multiple strategies, including diversifying the pre-training paradigms for CNNs, different pre-training datasets, and style transfer techniques, we demonstrate an overall improvement in model performance at various computational budgets. We then propose smaller, compute-efficient Scopeformer versions with three different types of input and output ViT configurations. Efficient Scopeformers use four different pre-trained CNN architectures as feature extractors to increase feature richness. Our best Efficient Scopeformer model achieved an accuracy of 96.94% and a weighted logarithmic loss of 0.083 with an eight-times reduction in the number of trainable parameters compared to the base Scopeformer. Another version of the Efficient Scopeformer model further reduced the parameter space by almost 17 times with negligible performance reduction. Hybrid CNNs and ViTs might provide the desired feature richness for developing accurate medical computer vision models.
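    The feature-projection step described above can be sketched in a few lines: feature maps from several CNN backbones are concatenated along the channel axis, flattened into tokens, and linearly projected (the equivalent of a 1x1 convolution) down to a fixed ViT token width. This is a minimal NumPy illustration; the grid size, channel counts, and 768-dimensional embedding are assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature maps from two CNN backbones on a 7x7 spatial grid.
feat_a = rng.standard_normal((7, 7, 512))   # backbone A channels
feat_b = rng.standard_normal((7, 7, 1024))  # backbone B channels

# Concatenate along the channel axis, then flatten the grid into tokens.
concat = np.concatenate([feat_a, feat_b], axis=-1)  # (7, 7, 1536)
tokens = concat.reshape(-1, concat.shape[-1])       # (49, 1536)

# A 1x1-convolution-style linear projection reduces channel redundancy
# and fixes the ViT input width (768 is an assumed embedding size).
w_proj = rng.standard_normal((concat.shape[-1], 768)) * 0.02
vit_input = tokens @ w_proj                         # (49, 768)

print(vit_input.shape)  # (49, 768)
```

    Swapping backbones only changes the channel count of `concat`; the projection keeps the ViT input shape fixed, which is what makes the backbone choice modular.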

    Designing an augmented reality exhibition: Leonardo's Impossible Machines

    This paper discusses the origins, development and results of the animated and augmented reality aspects of the exhibition ‘Leonardo’s Impossible Machines’ that was developed at Ravensbourne University London and Birkbeck, University of London, with support from the Museo Galileo. The exhibition included novel reconstructions and visualisations of Leonardo’s perpetual motion machines from the Codex Forster, and the process is explained here, along with the challenges of mounting a combined physical and AR show.

    Emergence of hexatic and long-range herringbone order in two-dimensional smectic liquid crystals: A Monte Carlo study

    Using a high-resolution Monte Carlo simulation technique based on the multi-histogram method and a cluster algorithm, we have investigated the critical properties of a coupled XY model, consisting of a six-fold symmetric hexatic field and a three-fold symmetric herringbone field, in two dimensions. The simulation results demonstrate a series of novel continuous transitions in which both long-range hexatic and herringbone orderings are established simultaneously. We find that the specific-heat anomaly exponents for some regions of the coupling-constant space are in excellent agreement with the exponents extracted experimentally from heat-capacity data near the smectic-A-hexatic-B transition of two-layer free-standing films.
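    A toy single-site Metropolis sweep for a coupled two-angle lattice model with six-fold and three-fold symmetric cosine interactions gives the flavor of such a simulation. The Hamiltonian, couplings, lattice size, and proposal width here are simplified assumptions for illustration; the paper's actual method (multi-histogram reweighting plus a cluster algorithm) is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8                 # small lattice for illustration
J6, J3 = 1.0, 1.0     # assumed couplings for the two angle fields
beta = 1.2            # inverse temperature

theta = rng.uniform(0, 2 * np.pi, (L, L))  # hexatic-like angle field
phi = rng.uniform(0, 2 * np.pi, (L, L))    # herringbone-like angle field

def site_energy(th, ph, i, j):
    """Local energy of site (i, j): six-fold and three-fold cosine couplings."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = (i + di) % L, (j + dj) % L   # periodic boundaries
        e -= J6 * np.cos(6 * (th[i, j] - th[ni, nj]))
        e -= J3 * np.cos(3 * (ph[i, j] - ph[ni, nj]))
    return e

def metropolis_sweep(th, ph):
    """One sweep of local angle updates; returns the number accepted."""
    accepted = 0
    for i in range(L):
        for j in range(L):
            e_old = site_energy(th, ph, i, j)
            t_old, p_old = th[i, j], ph[i, j]
            th[i, j] = (t_old + rng.uniform(-0.5, 0.5)) % (2 * np.pi)
            ph[i, j] = (p_old + rng.uniform(-0.5, 0.5)) % (2 * np.pi)
            d_e = site_energy(th, ph, i, j) - e_old
            if rng.random() >= np.exp(min(0.0, -beta * d_e)):
                th[i, j], ph[i, j] = t_old, p_old  # reject the move
            else:
                accepted += 1
    return accepted

acc = metropolis_sweep(theta, phi)
print(acc, "of", L * L, "moves accepted")
```

    Near a continuous transition, local updates like this suffer critical slowing down, which is exactly why the authors resort to cluster updates and histogram reweighting for precise exponent estimates.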

    Exploring Robustness of Neural Networks through Graph Measures

    Motivated by graph theory, artificial neural networks (ANNs) are traditionally structured as layers of neurons (nodes), which learn useful information by the passage of data through interconnections (edges). In the machine learning realm, graph structures (i.e., neurons and connections) of ANNs have recently been explored using various graph-theoretic measures linked to their predictive performance. On the other hand, in network science (NetSci), certain graph measures including entropy and curvature are known to provide insight into the robustness and fragility of real-world networks. In this work, we use these graph measures to explore the robustness of various ANNs to adversarial attacks. To this end, we (1) explore the design space of inter-layer and intra-layer connectivity regimes of ANNs in the graph domain and record their predictive performance after training under different types of adversarial attacks, (2) use graph representations for both inter-layer and intra-layer connectivity regimes to calculate various graph-theoretic measures, including curvature and entropy, and (3) analyze the relationship between these graph measures and the adversarial performance of ANNs. We show that curvature and entropy, while operating in the graph domain, can quantify the robustness of ANNs without having to train these ANNs. Our results suggest that real-world networks, including brain networks, financial networks, and social networks, may provide important clues to the neural architecture search for robust ANNs. We propose a search strategy that efficiently finds robust ANNs amongst a set of well-performing ANNs without needing to train all of these ANNs.

    Comment: 18 pages, 15 figures
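    Two of the measures named above can be computed directly on an ANN's layer graph without any training. Below is a minimal sketch: degree-distribution (Shannon) entropy and a simple per-edge Forman curvature, evaluated on a hypothetical randomly sparsified three-layer graph. The layer sizes, connection probability, and the triangle-free Forman formula `4 - deg(u) - deg(v)` are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse inter-layer connectivity of a 3-layer ANN graph.
layers = [8, 16, 4]
edges = []
offset = 0
for a, b in zip(layers[:-1], layers[1:]):
    for u in range(a):
        for v in range(b):
            if rng.random() < 0.4:  # random sparsification of the layer graph
                edges.append((offset + u, offset + a + v))
    offset += a

n = sum(layers)
deg = np.zeros(n)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

# Shannon entropy of the degree distribution: one graph-level measure
# that can be read off the architecture before any training happens.
vals, counts = np.unique(deg, return_counts=True)
p = counts / counts.sum()
entropy = float(-(p * np.log2(p)).sum())

# Simple Forman curvature per edge (ignoring triangle contributions):
# very negative edges sit between high-degree hubs, a fragility signal.
forman = [4 - deg[u] - deg[v] for u, v in edges]

print(round(entropy, 3), round(float(np.mean(forman)), 3))
```

    Sweeping the sparsification probability and comparing these scores against post-training adversarial accuracy is, in spirit, the kind of correlation analysis the abstract describes.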

    Natural media workshop

    This workshop will examine what our current imaging and sensing technologies do to our perception. We will examine, using practical examples, the potential to develop more 'Natural Media' and technologies by broadening the focus of attention to the whole visual, auditory, tactile and sensory field. The aim is to re-incorporate peripheral awareness into our experience using these multiple sense inputs.

    Data drift correction via time-varying importance weight estimator

    Real-world deployment of machine learning models is challenging when data evolves over time. And data does evolve over time. While no model can work when data evolves in an arbitrary fashion, if there is some pattern to these changes, we might be able to design methods to address it. This paper addresses situations where data evolves gradually. We introduce a novel time-varying importance weight estimator that can detect gradual shifts in the distribution of data. Such an importance weight estimator allows the training method to selectively sample past data -- not just similar data from the past, like a standard importance weight estimator would, but also data that evolved in a similar fashion in the past. Our time-varying importance weight is quite general. We demonstrate different ways of implementing it that exploit some known structure in the evolution of data. We demonstrate and evaluate this approach on a variety of problems, ranging from supervised learning tasks (multiple image classification datasets) where the data undergoes a sequence of gradual shifts of our design, to reinforcement learning tasks (robotic manipulation and continuous control) where data undergoes a shift organically as the policy or the task changes.
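    The contrast drawn above, between a standard importance weight and one that exploits the pattern of gradual change, can be illustrated on toy 1-D data. The sketch below computes a plain density-ratio weight `p_current(x) / p_window(x)` for an old window, then adds a crude "time-varying" ingredient: linearly extrapolating the drift in the fitted means to anticipate the next shift. The Gaussian drift, window sizes, and extrapolation rule are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy gradual drift: 1-D Gaussian data whose mean moves linearly over time.
windows = [rng.normal(loc=m, scale=1.0, size=500) for m in (0.0, 0.5, 1.0, 1.5)]
current = windows[-1]

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Standard importance weight: w(x) = p_current(x) / p_window(x), so past
# samples that already resemble the current distribution are up-weighted.
mu_c, sd_c = current.mean(), current.std()
w0 = windows[0]
ratio = gaussian_pdf(w0, mu_c, sd_c) / gaussian_pdf(w0, w0.mean(), w0.std())
top = float(w0[np.argmax(ratio)])  # the most current-looking old sample

# A crude "time-varying" ingredient: extrapolate the drift in the fitted
# means to anticipate where the distribution is heading next.
mus = [float(w.mean()) for w in windows]
predicted_next_mean = mus[-1] + (mus[-1] - mus[-2])

print(round(top, 2), round(predicted_next_mean, 2))
```

    The plain ratio only tells you which past samples look like the present; the extrapolated mean is one simple way to also prefer samples that evolved the way the data is currently evolving.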

    SUPER-Net: Trustworthy Medical Image Segmentation with Uncertainty Propagation in Encoder-Decoder Networks

    Deep Learning (DL) holds great promise in reshaping the healthcare industry owing to its precision, efficiency, and objectivity. However, the brittleness of DL models to noisy and out-of-distribution inputs hampers their deployment in the clinic. Most models produce point estimates without further information about model uncertainty or confidence. This paper introduces a new Bayesian DL framework for uncertainty quantification in segmentation neural networks: SUPER-Net, trustworthy medical image Segmentation with Uncertainty Propagation in Encoder-decodeR Networks. SUPER-Net analytically propagates, using Taylor series approximations, the first two moments (mean and covariance) of the posterior distribution of the model parameters across the nonlinear layers. In particular, SUPER-Net simultaneously learns the mean and covariance without expensive post-hoc Monte Carlo sampling or model ensembling. The output consists of two simultaneous maps: the segmented image and its pixelwise uncertainty map, which corresponds to the covariance matrix of the predictive distribution. We conduct an extensive evaluation of SUPER-Net on medical image segmentation of Magnetic Resonance Imaging and Computed Tomography scans under various noisy and adversarial conditions. Our experiments on multiple benchmark datasets demonstrate that SUPER-Net is more robust to noise and adversarial attacks than state-of-the-art segmentation models. Moreover, the uncertainty map of the proposed SUPER-Net associates low confidence (or equivalently high uncertainty) with patches in the test input images that are corrupted with noise, artifacts, or adversarial attacks. Perhaps more importantly, the model exhibits the ability to self-assess its segmentation decisions, notably when making erroneous predictions due to noise or adversarial examples.
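    The moment-propagation idea can be sketched for one linear-plus-ReLU block: mean and covariance pass exactly through the linear map, and a first-order Taylor expansion carries them through the nonlinearity (the covariance is scaled by the Jacobian at the mean). This is a minimal NumPy sketch with assumed layer sizes and a diagonal input covariance; SUPER-Net's actual propagation rules through a full encoder-decoder may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Input moments: mean vector and a diagonal covariance of the activations.
mu = rng.standard_normal(4)
var = np.full(4, 0.1)

# A linear layer propagates moments exactly:
#   mean -> W mu,   cov -> W diag(var) W^T.
W = rng.standard_normal((3, 4)) * 0.5
mu_lin = W @ mu
cov_lin = W @ np.diag(var) @ W.T

# First-order Taylor through ReLU: f(x) ~ f(mu) + f'(mu) (x - mu), so the
# mean maps through f and the covariance picks up the Jacobian f'(mu).
jac = (mu_lin > 0).astype(float)       # ReLU derivative at the mean
mu_out = np.maximum(mu_lin, 0.0)
cov_out = np.outer(jac, jac) * cov_lin

print(mu_out.shape, cov_out.shape)
```

    Repeating this layer by layer yields the prediction (final mean) and the pixelwise uncertainty map (final covariance) in a single forward pass, with no Monte Carlo sampling or ensembling.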