
    A Driving Performance Forecasting System Based on Brain Dynamic State Analysis Using 4-D Convolutional Neural Networks.

    Vehicle accidents are a leading cause of fatalities worldwide. Most often, experiencing fatigue on the road leads to operator errors and behavioral lapses. Thus, there is a need to predict the cognitive state of drivers, particularly their fatigue level. Electroencephalography (EEG) has been demonstrated to be effective for monitoring changes in the human brain state and behavior. Thirty-seven subjects participated in this driving experiment and performed a lane-keeping task in a virtual-reality environment. Three domains of information, namely the frequency, temporal, and 2-D spatial (EEG channel location) domains, were comprehensively considered. A 4-D convolutional neural-network (4-D CNN) algorithm was then proposed to associate all of the information in the EEG signals with the changes in human state and behavioral performance. The 4-D CNN achieves superior forecasting performance over 2-D CNNs, 3-D CNNs, and shallow networks. The results showed a 3.82% improvement in the root mean-square error, a 3.45% improvement in the error rate, and an 11.98% improvement in the correlation coefficient for the 4-D CNN compared with the 3-D CNN. The 4-D CNN algorithm extracts significant θ and α activations in the frontal and posterior cingulate cortices under distinct fatigue levels. This work contributes to enhancing our understanding of deep learning methods in the analysis of EEG signals. We even envision that deep learning might serve as a bridge between translational neuroscience and further real-world applications.
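    As a rough illustration of the input/output structure such a model works with, the sketch below arranges EEG into a 4-D tensor of (frequency bands, time frames, 2-D channel grid height, width) and regresses a continuous performance index with 3-D convolutions, treating the frequency bands as input channels. This is a minimal PyTorch sketch with illustrative layer sizes and a hypothetical class name, not the architecture proposed in the paper.

```python
# Minimal sketch (assumptions, not the paper's architecture): EEG features are packed
# into a 4-D tensor (frequency bands, time frames, grid height, grid width), where the
# 2-D grid is an interpolated map of channel locations; 3-D convolutions run over the
# (time, height, width) axes with the frequency bands as input channels.
import torch
import torch.nn as nn

class Simple4DEEGRegressor(nn.Module):          # hypothetical name, illustrative sizes
    def __init__(self, n_bands=3, n_frames=10, grid=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_bands, 16, kernel_size=3, padding=1),  # conv over (time, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)            # predicts a continuous performance index

    def forward(self, x):                       # x: (batch, bands, frames, grid, grid)
        z = self.features(x).flatten(1)
        return self.head(z)

if __name__ == "__main__":
    model = Simple4DEEGRegressor()
    dummy = torch.randn(4, 3, 10, 8, 8)         # 4 samples, 3 bands, 10 frames, 8x8 map
    print(model(dummy).shape)                   # torch.Size([4, 1])
```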

    A model-based circular binary segmentation algorithm for the analysis of array CGH data

    Background: Circular Binary Segmentation (CBS) is a permutation-based algorithm for array Comparative Genomic Hybridization (aCGH) data analysis. CBS accurately segments data by detecting change-points using a maximal-t test, but an extensive computational burden is incurred in evaluating the significance of change-points by permutation. A more recent implementation using a hybrid method and early stopping rules (hybrid CBS) was proposed to improve speed. However, a time analysis revealed that a major portion of the computation time of the hybrid CBS was still spent on permutation. In addition, the hybrid method provides an approximation of an upper or lower bound on the significance, not an approximation of the significance of the change-points itself.
    Results: We developed a novel model-based algorithm, extreme-value-based CBS (eCBS), which limits permutations and provides robust results without loss of accuracy. Thousands of aCGH data sets under the null hypothesis were simulated in advance based on a variety of non-normal assumptions, and the corresponding maximal-t distribution was modeled by the Generalized Extreme Value (GEV) distribution. The modeling results, which associate characteristics of aCGH data with the GEV parameters, constitute lookup tables (the eXtreme model). Using the eXtreme model, the significance of change-points can be evaluated in constant time through a table lookup.
    Conclusions: A novel algorithm, eCBS, was developed in this study. The current implementation of eCBS consistently outperforms the hybrid CBS by 4× to 20× in computation time without loss of accuracy. Source code, supplementary materials, supplementary figures, and supplementary tables can be found at http://ntumaps.cgm.ntu.edu.tw/eCBSsupplementary.
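    The core idea of the lookup step can be sketched as follows: map characteristics of the data (here, just a probe-count bucket) to pre-fitted GEV parameters and evaluate the tail probability of the observed maximal-t statistic with the GEV survival function. The parameter values and the GEV_LOOKUP table below are hypothetical placeholders, not the published eXtreme model.

```python
# Minimal sketch (hypothetical parameters, not the published eXtreme model): replace
# permutation with a table lookup from data characteristics to pre-fitted Generalized
# Extreme Value (GEV) parameters for the null maximal-t statistic, then evaluate
# significance with the GEV survival function in constant time.
from scipy.stats import genextreme

# Hypothetical lookup table: probe-count bucket -> (shape c, loc, scale) of the GEV
# fitted to simulated null maximal-t values. Real tables come from large simulations.
GEV_LOOKUP = {
    100:  (-0.10, 2.8, 0.45),
    500:  (-0.08, 3.2, 0.40),
    1000: (-0.05, 3.5, 0.38),
}

def maxt_pvalue(max_t, n_probes):
    """Approximate P(T_max >= max_t) under the null via the GEV lookup."""
    # Pick the nearest bucket; a real implementation would interpolate the parameters.
    bucket = min(GEV_LOOKUP, key=lambda k: abs(k - n_probes))
    c, loc, scale = GEV_LOOKUP[bucket]
    return genextreme.sf(max_t, c, loc=loc, scale=scale)

if __name__ == "__main__":
    print(maxt_pvalue(4.1, n_probes=480))  # significance of a candidate change-point
```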

    Ultra-broadband Light Absorption by a Sawtooth Anisotropic Metamaterial Slab

    We present an ultra-broadband thin-film infrared absorber made of a sawtooth anisotropic metamaterial. Absorptivity higher than 95% at normal incidence is achieved over a wide range of frequencies, where the full absorption width at half maximum is about 86%. This property is also retained over a very wide range of incident angles. Light of shorter wavelengths is harvested at the upper parts of the sawteeth, where the tooth widths are smaller, while light of longer wavelengths is trapped at the lower parts, where the tooth widths are larger. This phenomenon is explained by the slow-light modes of the anisotropic metamaterial waveguide. Our study can be applied to the design of photovoltaic devices and thermal emitters.
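    The quoted figure of merit, the full absorption width at half maximum expressed as a fraction of the band's center frequency, can be computed from a sampled absorptivity spectrum as in the small sketch below. The spectrum here is synthetic, purely to make the arithmetic concrete; it is not the paper's data.

```python
# Minimal illustration (synthetic spectrum, not the paper's data): fractional full
# width at half maximum (FWHM) of an absorption band relative to its center frequency.
import numpy as np

freq = np.linspace(1.0, 10.0, 1000)               # arbitrary frequency units
absorp = np.exp(-((freq - 5.0) / 2.2) ** 2)       # synthetic absorption band

half = absorp.max() / 2.0
above = freq[absorp >= half]                      # frequencies above half maximum
f_lo, f_hi = above.min(), above.max()
fwhm = f_hi - f_lo
center = 0.5 * (f_lo + f_hi)
# A large fractional FWHM (about 86% in the paper) indicates ultra-broadband absorption.
print(f"fractional FWHM = {fwhm / center:.2%}")
```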

    A five-year hedonic price breakdown for desktop personal computer attributes in Brazil

    The purpose of this article is to identify the attributes that discriminate the prices of desktop personal computers. We employ the hedonic price method to evaluate such characteristics. This approach allows market prices to be expressed as a function of the set of attributes present in the products and services offered. Prices and characteristics of up to 3,779 desktop personal computers offered in the IT pages of one of the main Brazilian newspapers were collected from January 2003 to December 2007. Several specifications of the hedonic (multivariate) linear regression were tested. In this particular study, the main attributes were found to be hard drive capacity, screen technology, main board brand, random-access memory size, microprocessor brand, video board memory, digital video and compact disc recording devices, screen size, and microprocessor speed. These results highlight the novel contribution of this study: the manner and means by which hedonic price indexes may be estimated in Brazil.
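    The basic form of such a hedonic regression, log price on product attributes so that each coefficient reads as the implicit marginal price of an attribute, can be sketched as below. The column names and the synthetic data are hypothetical placeholders, not the authors' dataset or specification.

```python
# Minimal sketch (hypothetical columns and synthetic data, not the authors' dataset):
# a hedonic price regression of log(price) on product attributes, where each
# coefficient is interpreted as the implicit (semi-elasticity) price of the attribute.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "hdd_gb": rng.choice([80, 160, 250, 500], n),        # hard drive capacity
    "ram_mb": rng.choice([256, 512, 1024, 2048], n),     # random-access memory size
    "cpu_mhz": rng.choice([1800, 2400, 3000], n),        # microprocessor speed
    "lcd_screen": rng.integers(0, 2, n),                 # 1 = LCD, 0 = CRT
})
# Synthetic prices just to make the example runnable.
df["price"] = np.exp(6.0 + 0.0008 * df.hdd_gb + 0.0002 * df.ram_mb
                     + 0.0001 * df.cpu_mhz + 0.15 * df.lcd_screen
                     + rng.normal(0, 0.05, n))

model = smf.ols("np.log(price) ~ hdd_gb + ram_mb + cpu_mhz + lcd_screen", data=df).fit()
print(model.params)   # implicit attribute prices (per-unit effects on log price)
```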

    PmoB subunit of particulate methane monooxygenase (pMMO) in Methylococcus capsulatus (Bath): The Cu^I sponge and its function

    In this study, we describe efforts to clarify the role of the copper cofactors associated with subunit B (PmoB) of the particulate methane monooxygenase (pMMO) from Methylococcus capsulatus (Bath) (M. capsulatus). This subunit exhibits a strong affinity toward Cu^I ions. To elucidate the high copper affinity of the subunit, the full-length PmoB and the N-terminally truncated mutants PmoB_(33–414) and PmoB_(55–414), each fused to the maltose-binding protein (MBP), are cloned and over-expressed in Escherichia coli (E. coli) K12 TB1 cells. The Y374F, Y374S and M300L mutants of these protein constructs are also studied. When this E. coli is grown with the pmoB gene in 1.0 mM Cu^(II), it behaves like M. capsulatus (Bath) cultured under high copper stress, with abundant membrane accumulation and a high Cu^I content. The recombinant PmoB proteins are verified by Western blotting with antibodies directed against the MBP sub-domain in each of the copper-enriched PmoB proteins. Cu K-edge X-ray absorption near edge spectroscopy (XANES) of the copper ions confirms that all the PmoB recombinants are Cu^I proteins. All the PmoB proteins show evidence of a "dicopper site" according to analysis of the Cu extended X-ray absorption fine structure (EXAFS) of the membranes. No specific activities toward methane or propene oxidation are observed with the recombinant membrane-bound PmoB proteins. However, significant production of hydrogen peroxide is observed in the case of the PmoB_(33–414) mutant. Reaction of the dicopper site with dioxygen produces hydrogen peroxide and leads to oxidation of the Cu^I ions residing in the C-terminal sub-domain of the PmoB subunit.

    Uniform Approximation Is More Appropriate for Wilcoxon Rank-Sum Test in Gene Set Analysis

    Gene set analysis is widely used to facilitate biological interpretation in analyses of differential expression from high-throughput profiling data. The Wilcoxon Rank-Sum (WRS) test is one of the commonly used methods in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the gene set. This method is easy to implement, and it eliminates the dichotomization of genes into significant and non-significant in competitive hypothesis testing. Due to the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test. Therefore, the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with a relatively small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less computationally intensive. We demonstrate the advantage of the uniform approximation in Gene Ontology (GO) term analysis using simulations and real data sets.
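    To make the contrast concrete, the sketch below compares the standard normal approximation of the WRS statistic with one reading of a uniform approximation: when the gene set is small relative to the total number of genes, the scaled ranks of the set genes behave approximately as i.i.d. Uniform(0,1), so their sum follows an Irwin-Hall distribution. This is an illustrative interpretation of the abstract, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code): normal vs. uniform (Irwin-Hall)
# approximation of the upper-tail p-value for the rank-sum of a small gene set (size n)
# within a large total of N genes.
import math
import numpy as np
from scipy.stats import norm

def irwin_hall_cdf(x, n):
    """P(S <= x) for S = sum of n i.i.d. Uniform(0,1) variables."""
    x = max(0.0, min(float(n), x))
    total = 0.0
    for k in range(int(math.floor(x)) + 1):
        total += (-1) ** k * math.comb(n, k) * (x - k) ** n
    return total / math.factorial(n)

def wrs_pvalues(ranks_in_set, N):
    """Upper-tail p-values for the rank-sum of a gene set of size n among N genes."""
    n = len(ranks_in_set)
    W = float(np.sum(ranks_in_set))
    # Normal approximation (standard WRS moments, no tie correction).
    mu = n * (N + 1) / 2.0
    sigma = math.sqrt(n * (N - n) * (N + 1) / 12.0)
    p_normal = norm.sf((W - mu) / sigma)
    # Uniform approximation: scaled ranks treated as i.i.d. Uniform(0,1).
    p_uniform = 1.0 - irwin_hall_cdf(W / N, n)
    return p_normal, p_uniform

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, n = 20000, 5                                    # many genes, small gene set
    ranks = rng.choice(np.arange(1, N + 1), size=n, replace=False)
    print(wrs_pvalues(ranks, N))
```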