
    DoPAMINE: Double-sided Masked CNN for Pixel Adaptive Multiplicative Noise Despeckling

    We propose DoPAMINE, a new neural network based multiplicative noise despeckling algorithm. Our algorithm is inspired by Neural AIDE (N-AIDE), which is a recently proposed neural adaptive image denoiser. While the original N-AIDE was designed for the additive noise case, we show that the same framework, i.e., adaptively learning a network for pixel-wise affine denoisers by minimizing an unbiased estimate of MSE, can be applied to the multiplicative noise case as well. Moreover, we derive a double-sided masked CNN architecture which can control the variance of the activation values in each layer and converge fast to high denoising performance during supervised training. In the experimental results, we show that our DoPAMINE possesses high adaptivity via fine-tuning the network parameters based on the given noisy image and achieves significantly better despeckling results compared to SAR-DRN, a state-of-the-art CNN-based algorithm. Comment: AAAI 2019 Camera Ready Version.
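
    To make the pixel-wise affine idea concrete, here is a minimal Python sketch, not the paper's DoPAMINE architecture: the coefficient maps a and b are taken as given (in the paper they are produced by the double-sided masked CNN, which never sees the pixel being denoised), and the unbiased MSE estimate below is derived directly from a mean-one multiplicative noise model z = x*n with E[n] = 1 and Var[n] = s2, so it may differ in form from the estimator used in the paper.

    # Illustrative sketch only: pixel-wise affine despeckling under mean-one
    # multiplicative noise. The coefficient maps (a, b) are assumed to come from
    # a context network that excludes the pixel being denoised.
    import numpy as np

    def affine_denoise(z, a, b):
        """Apply the pixel-wise affine mapping x_hat = a * z + b."""
        return a * z + b

    def unbiased_mse_estimate(z, a, b, s2):
        """Unbiased per-pixel estimate of E[(a*z + b - x)^2], using E[z] = x and
        E[z^2] = x^2 * (1 + s2); valid when a and b do not depend on z at that pixel."""
        return (a**2 * z**2 + (1.0 - 2.0 * a) * z**2 / (1.0 + s2)
                + 2.0 * b * z * (a - 1.0) + b**2)

    # Toy check with constant coefficient maps over a random speckled image:
    # the mean of the estimate should roughly track the true mean squared error.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.5, 1.0, size=(64, 64))                    # clean image (unknown in practice)
    s2 = 0.1
    z = x * (1.0 + np.sqrt(s2) * rng.standard_normal(x.shape))  # speckled observation
    a = np.full_like(z, 0.7)
    b = np.full_like(z, 0.2)
    print(unbiased_mse_estimate(z, a, b, s2).mean())
    print(((affine_denoise(z, a, b) - x) ** 2).mean())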

    Subtask Gated Networks for Non-Intrusive Load Monitoring

    Non-intrusive load monitoring (NILM), also known as energy disaggregation, is a blind source separation problem where a household's aggregate electricity consumption is broken down into the electricity usages of individual appliances. In this way, the cost and trouble of installing many measurement devices over numerous household appliances can be avoided, and only one device needs to be installed. The problem has been well known since Hart's seminal paper in 1992, and recently significant performance improvements have been achieved by adopting deep networks. In this work, we focus on the idea that appliances have on/off states, and we develop a deep network for further performance improvements. Specifically, we propose a subtask gated network that combines the main regression network with an on/off classification subtask network. Unlike typical multitask learning algorithms, where multiple tasks simply share the network parameters to take advantage of the relevance among tasks, the subtask gated network multiplies the main network's regression output by the subtask's classification probability. When standby power is additionally learned, the proposed solution surpasses the state-of-the-art performance for most of the benchmark cases. The subtask gated network can be very effective for any problem that inherently has on/off states.
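
    The gating idea itself fits in a few lines. The following is a minimal PyTorch sketch under assumed shapes (the 99-sample input window, the fully connected branches, and the hidden size are illustrative placeholders, not the paper's architecture): the regression branch estimates the appliance power, the classification branch estimates an on/off probability, and the final output is their element-wise product.

    # Minimal sketch of subtask gating: regression output multiplied by the
    # on/off probability from the classification subtask.
    import torch
    import torch.nn as nn

    class SubtaskGatedNet(nn.Module):
        def __init__(self, window_len=99, hidden=64):
            super().__init__()
            self.regression = nn.Sequential(
                nn.Linear(window_len, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            self.on_off = nn.Sequential(
                nn.Linear(window_len, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, aggregate_window):
            power = self.regression(aggregate_window)             # estimated appliance power
            p_on = torch.sigmoid(self.on_off(aggregate_window))   # probability the appliance is on
            return power * p_on, p_on                             # gated estimate and gate

    # Toy usage: a batch of aggregate-consumption windows. Training would combine
    # a regression loss on the gated output with a classification loss on p_on
    # against ground-truth on/off labels.
    x = torch.randn(8, 99)
    net = SubtaskGatedNet()
    y_hat, p_on = net(x)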

    SwiFT: Swin 4D fMRI Transformer

    Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Comment: NeurIPS 2023.
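
    The 4D window attention can be pictured as ordinary multi-head self-attention applied independently inside non-overlapping spatiotemporal windows, which is what keeps memory and computation manageable. Below is a small illustrative PyTorch utility for the window-partition step only; the tensor layout, window sizes, and function name are assumptions made for this sketch and are not taken from the SwiFT implementation.

    # Partition a 4D (space x time) feature volume into non-overlapping windows so
    # that self-attention is computed only within each window.
    import torch

    def window_partition_4d(x, win):
        """x: (B, H, W, D, T, C); win: (wh, ww, wd, wt) evenly dividing H, W, D, T.
        Returns (num_windows * B, wh*ww*wd*wt, C) token groups for attention."""
        B, H, W, D, T, C = x.shape
        wh, ww, wd, wt = win
        x = x.view(B, H // wh, wh, W // ww, ww, D // wd, wd, T // wt, wt, C)
        x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8, 9).contiguous()
        return x.view(-1, wh * ww * wd * wt, C)

    # Toy usage: a batch of patch embeddings from a 4D fMRI sequence.
    x = torch.randn(2, 8, 8, 8, 4, 32)          # (B, H, W, D, T, C)
    tokens = window_partition_4d(x, (4, 4, 4, 2))
    print(tokens.shape)                         # torch.Size([32, 128, 32]): 32 windows of 128 tokens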

    Towards More Robust Interpretation via Local Gradient Alignment

    Neural network interpretation methods, particularly feature attribution methods, are known to be fragile with respect to adversarial input perturbations. To address this, several methods that enhance the local smoothness of the gradient during training have been proposed for attaining robust feature attributions. However, the normalization of the attributions, which is essential for their visualization, has rarely been taken into account, and this has been an obstacle to understanding and improving the robustness of feature attribution methods. In this paper, we provide new insights by taking such normalization into account. First, we show that for every non-negative homogeneous neural network, a naive l2-robust criterion for gradients is not normalization invariant, which means that two functions with the same normalized gradient can have different criterion values. Second, we formulate a normalization-invariant cosine distance-based criterion and derive its upper bound, which gives insight into why simply minimizing the Hessian norm at the input, as has been done in previous work, is not sufficient for attaining robust feature attribution. Finally, we propose to combine both the l2 and the cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient. As a result, we experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100 without significantly hurting accuracy, compared to recent baselines. To the best of our knowledge, this is the first work to verify the robustness of interpretation on a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency of our method.
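
    As a rough illustration of how the combined regularizer could look, the PyTorch sketch below penalizes both the l2 difference and the cosine distance between the input gradient at a sample and at a nearby perturbed sample; the cosine term is invariant to the gradient's norm, and hence to the normalization applied when attributions are visualized. The perturbation scheme, the loss weights, and the function names are assumptions made for this sketch rather than the authors' training recipe.

    # Sketch of a gradient-alignment penalty combining an l2 term and a
    # cosine-distance term between clean and perturbed input gradients.
    import torch
    import torch.nn.functional as F

    def input_gradient(model, x, y):
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        (grad,) = torch.autograd.grad(loss, x, create_graph=True)
        return grad

    def gradient_alignment_penalty(model, x, y, eps=8 / 255, lam_l2=1.0, lam_cos=1.0):
        g_clean = input_gradient(model, x, y)
        x_pert = x + eps * torch.randn_like(x)        # simple random perturbation (illustrative)
        g_pert = input_gradient(model, x_pert, y)
        l2_term = (g_clean - g_pert).flatten(1).norm(dim=1).pow(2).mean()
        cos = F.cosine_similarity(g_clean.flatten(1), g_pert.flatten(1), dim=1)
        cos_term = (1.0 - cos).mean()                 # cosine distance between local gradients
        return lam_l2 * l2_term + lam_cos * cos_term

    # Toy usage with a tiny classifier on random data (shapes are illustrative).
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.randn(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    total_loss = F.cross_entropy(model(x), y) + gradient_alignment_penalty(model, x, y)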

    Characterization of Petroleum Heavy Oil Fractions Prepared by Preparatory Liquid Chromatography with Thin-Layer Chromatography, High-Resolution Mass Spectrometry, and Gas Chromatography with an Atomic Emission Detector

    In this study, a preparatory-scale fractionation method was developed. To verify the effectiveness of this method, an oil sample was fractionated into five fractions, referred to as saturate, aro1, aro2, polar1, and polar2; these fractions were completely characterized by thin-layer chromatography–flame ionization detection (TLC–FID), field desorption (FD) and (+) atmospheric pressure photoionization (APPI) high-resolution mass spectrometry (HR-MS), and gas chromatography with an atomic emission detector (GC–AED). TLC–FID analysis was used to compare the results obtained by the fractionation method to those obtained from the conventional saturates, aromatics, resins, and asphaltenes (SARA) method. FD–MS was employed to characterize the hydrocarbon class compounds in the saturate and aro1 fractions. As observed from the FD–MS spectra, non-aromatic hydrocarbon compounds were abundant in the saturate fraction, while mono- and diaromatic compounds were abundant in the aro1 fraction. This result is in good agreement with those obtained by HR-MS. (+) APPI HR-MS analysis of the fractions showed that aromaticity increases from the saturate fraction to the polar1 fraction but decreases in the polar2 fraction. Heteroatom class distributions investigated by (+) APPI HR-MS showed that non-basic nitrogen compounds were abundant in the polar1 fraction, while non-aromatic sulfur compounds were abundant in the polar2 fraction. From the results obtained by the GC–AED analysis of the fractions, nickel porphyrin compounds were concentrated in the polar1 fraction. Hence, the combined results clearly demonstrate that the fractionation method is effective for isolating fractions on a preparative scale.

    Fully Elastic Conductive Films from Viscoelastic Composites

    We investigated, for the first time, the conditions under which a thermoplastic conductive composite can exhibit completely reversible stretchability at high elongational strains (ε = 1.8). We studied a composite of Au nanosheets and a polystyrene-block-polybutadiene-block-polystyrene block copolymer as an example. The composite had an outstandingly low sheet resistance (0.45 Ω/sq). We found that when a thin thermoplastic composite film is placed on a relatively thicker chemically cross-linked elastomer film, it can follow the reversible elastic behavior of the bottom elastomer. Such elasticity comes from the restoration of the block copolymer microstructure. The strong adhesion of the thermoplastic polymer to the metallic fillers is advantageous in the fabrication of mechanically robust, highly conductive, stretchable electrodes. The chemical stability of the Au composite was used to fabricate highly luminescent, stretchable electrochemiluminescence displays with a conventional top-bottom electrode setup and with a horizontal electrode setup.

    X-DNA Origami-Networked Core-Supported Lipid Stratum

    DNA hydrogels are promising materials for various fields of research, such as in vitro protein production, drug carrier systems, and cell transplantation. For effective application and further utilization of DNA hydrogels, highly effective methods of nano- and microscale DNA hydrogel fabrication are needed. In this respect, the fundamental advantages of a core-shell structure can provide a simple remedy. An isolated reaction chamber and massive production platform can be provided by a core-shell structure, and lipids are one of the best shell precursor candidates because of their intrinsic biocompatibility and potential for easy modification. Here, we demonstrate a novel core-shell nanostructure made of gene-knitted X-shaped DNA (X-DNA) origami-networked gel core-supported lipid strata. It was simply organized by cross-linking DNA molecules via T4 enzymatic ligation and enclosing them in lipid strata. As a condensed core structure, the DNA gel shows Brownian behavior in a confined area. It has been speculated that they could, in the future, be utilized for in vitro protein synthesis, gene-integration transporters, and even new molecular bottom-up biological machineries. © 2015 American Chemical Society.