26 research outputs found

    Time series causality analysis and EEG data analysis on music improvisation

    Get PDF
    This thesis describes a PhD project on time series causality analysis and its applications. The project is motivated by two EEG measurements from music improvisation experiments, where we aim to use causality measures to construct neural networks that identify the neural differences between improvisation and non-improvisation. The research builds on the mathematical foundations of time series analysis, information theory and network theory. We first studied a series of popular causality measures, namely the Granger causality, partial directed coherence (PDC), directed transfer function (DTF), transfer entropy (TE), mutual information from mixed embedding (MIME) and partial MIME (PMIME), from which we proposed our new measures: the direct transfer entropy (DTE) and the wavelet-based extensions of MIME and PMIME. The new measures improve the properties and applicability of their parent measures, as verified by simulations and examples. Comparing the measures we studied, MIME was found to be the most useful causality measure for our EEG analysis. We therefore used MIME to construct both the intra-brain and cross-brain neural networks for musicians and listeners during the music performances. Neural differences were identified in the direction and distribution of neural information flows and in the activity of large brain regions. Furthermore, we applied MIME to other EEG and financial data applications, where reasonable causality results were obtained.
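    The Granger causality mentioned above rests on a simple idea: x "Granger-causes" y if adding x's past to an autoregressive model of y reduces the prediction error. The sketch below is a minimal bivariate illustration of that idea using ordinary least squares, not the thesis's implementation; the function name and lag order are assumptions for this example.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Bivariate Granger causality from x to y with lag order p.

    Compares the residual variance of an AR(p) model of y using only
    its own past against a model that also includes the past of x.
    Returns log(var_restricted / var_full); larger positive values
    suggest that x's past helps predict y.
    """
    n = len(y)
    Y = y[p:]
    # Lagged regressors: column k holds the series at lag k + 1.
    own = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    cross = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    ones = np.ones((n - p, 1))

    # Restricted model: y's own past only.
    Xr = np.hstack([ones, own])
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]

    # Full model: y's past plus x's past.
    Xf = np.hstack([ones, own, cross])
    res_f = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]

    return np.log(np.var(res_r) / np.var(res_f))
```

    On a simulated pair where x drives y (e.g. y[t] = 0.5·y[t-1] + 0.8·x[t-1] + noise), the measure from x to y comes out clearly larger than the reverse direction.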

    Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training

    Full text link
    Adversarial examples (AEs) for DNNs have been shown to be transferable: AEs that successfully fool white-box surrogate models can also deceive other black-box models with different architectures. Although a number of empirical studies have provided guidance on generating highly transferable AEs, many of these findings lack explanations and even lead to inconsistent advice. In this paper, we take a further step towards understanding adversarial transferability, with a particular focus on surrogate aspects. Starting from the intriguing "little robustness" phenomenon, where models adversarially trained with mildly perturbed adversarial samples can serve as better surrogates, we attribute it to a trade-off between two predominant factors: model smoothness and gradient similarity. Our investigations focus on their joint effects, rather than their separate correlations with transferability. Through a series of theoretical and empirical analyses, we conjecture that the data distribution shift in adversarial training explains the degradation of gradient similarity. Building on these insights, we explore the impacts of data augmentation and gradient regularization on transferability and identify that the trade-off generally exists across the various training mechanisms, thus building a comprehensive blueprint for the regulation mechanism behind transferability. Finally, we provide a general route for constructing better surrogates to boost transferability, which optimizes both model smoothness and gradient similarity simultaneously, e.g., the combination of input gradient regularization and sharpness-aware minimization (SAM), validated by extensive experiments. In summary, we call for attention to the united impacts of these two factors for launching effective transfer attacks, rather than optimizing one while ignoring the other, and emphasize the crucial role of manipulating surrogate models.Comment: Accepted by IEEE Symposium on Security and Privacy (Oakland) 2024; 21 pages, 11 figures, 13 tables

    A study for multiscale information transfer measures based on conditional mutual information.

    No full text
    As big data science develops, efficient methods are in demand for various data analyses. Granger causality provides the prime model for quantifying causal interactions. However, this theoretical model does not meet the requirements of real-world data analysis, because real-world time series are diverse and their underlying models are usually unknown. Therefore, model-free measures such as information transfer measures are strongly desired. Here, we propose multiscale extensions of conditional mutual information measures using the Morlet wavelet, named the WM and WPM. The proposed measures are computationally efficient and interpret information transfer across multiple scales. We use both synthetic data and real-world examples to demonstrate the efficiency of the new methods, whose results are robust and reliable. In simulation studies, we found that the new methods outperform the wavelet extension of transfer entropy (WTE) in both computational efficiency and accuracy. The features and properties of the proposed measures are also discussed.
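    To make the baseline concrete: transfer entropy, the model-free measure that the WTE extends, can be estimated in its simplest form by discretising both series and combining joint entropies. The sketch below is a minimal lag-1 binned estimator for illustration only; it is not the WM/WPM measures proposed above, and the function name and bin count are assumptions.

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Binned estimate of transfer entropy TE(x -> y) at lag 1, in nats."""
    # Discretise both series into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    yf, yp, xp = yd[1:], yd[:-1], xd[:-1]  # y future, y past, x past

    def entropy(*cols):
        # Joint Shannon entropy of the stacked symbol columns.
        joint = np.stack(cols, axis=1)
        _, counts = np.unique(joint, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    # TE(x -> y) = I(yf; xp | yp)
    #            = H(yf, yp) - H(yp) - H(yf, yp, xp) + H(yp, xp)
    return entropy(yf, yp) - entropy(yp) - entropy(yf, yp, xp) + entropy(yp, xp)
```

    On a pair of series where x drives y, the estimate from x to y exceeds the reverse direction; finite samples give binned estimators a small positive bias, which is one motivation for the more careful embedding-based measures (MIME, PMIME) discussed in these works.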

    A study on separation of the protein structural types in amino acid sequence feature spaces.

    No full text
    Proteins are diverse in their sequences, structures and functions, so it is important to study the relations between them. In this paper, we survey the relations between protein sequences and their structures. We use the natural vector (NV) and the averaged property factor (APF) features to represent protein sequences as feature vectors, and use the multi-class MSE and convex hull methods to separate proteins of different structural classes into different regions. We found that proteins from different structural classes are separable by hyperplanes and convex hulls in the natural vector feature space, where the feature vectors of different structural classes fall into disjoint regions or convex hulls of the high-dimensional feature space. The natural vector outperforms the averaged property factor method in identifying the structures, and the convex hull method outperforms the multi-class MSE in separating the feature points. These outcomes confirm the strong connections between protein sequences and their structures, and may imply that the amino acid composition and sequence arrangements represented by the natural vectors have greater influence on the structures than the averaged physical property factors of the amino acids.
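    The convex-hull separation idea can be illustrated in low dimension: two classes are hull-separable when no point of one class falls inside the convex hull of the other. The sketch below uses a Delaunay triangulation for the point-in-hull test, which is practical only for a few dimensions; the high-dimensional feature spaces in the study would need a linear-programming membership test instead. Function names here are assumptions for this example.

```python
import numpy as np
from scipy.spatial import Delaunay

def in_hull(points, test_points):
    """True where each row of test_points lies inside the convex hull
    of the rows of points (find_simplex returns -1 outside the hull)."""
    return Delaunay(points).find_simplex(test_points) >= 0
```

    For two well-separated 2D clusters, no point of one cluster lands inside the other's hull, while each cluster's centroid (a convex combination of its own points) always does.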

    A protein structural study based on the centrality analysis of protein sequence feature networks.

    No full text
    In this paper, we use network approaches to analyze the relations between protein sequence features for the top hierarchical classes of CATH and SCOP. We use fundamental connectivity measures such as correlation (CR), normalized mutual information rate (nMIR), and transfer entropy (TE) to analyze the pairwise relationships between the protein sequence features, and use centrality measures to analyze weighted networks constructed from the relationship matrices. The centrality analysis reveals both commonalities and differences between the protein 3D structural classes. Results show that all top hierarchical classes of CATH and SCOP present strong non-deterministic interactions for the composition and arrangement features of Cysteine (C), Methionine (M), and Tryptophan (W), and also for the arrangement features of Histidine (H). The different protein 3D structural classes present different preferences in their centrality distributions and significant features.
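    For a weighted directed network built from a relationship matrix, the simplest centrality is node strength: the total incoming and outgoing weight per node, whose difference across two conditions gives the kind of centrality contrast reported in these studies. The sketch below is a minimal version of that computation under the assumed convention that W[i, j] weights the link i → j; it is not the authors' exact pipeline.

```python
import numpy as np

def strength_centrality(W):
    """In- and out-strength of each node in a weighted directed
    network, given adjacency matrix W with W[i, j] the weight
    of the link i -> j."""
    in_deg = W.sum(axis=0)   # total incoming weight per node
    out_deg = W.sum(axis=1)  # total outgoing weight per node
    return in_deg, out_deg

def centrality_contrast(W_a, W_b):
    """Node-wise in/out strength differences between two conditions."""
    in_a, out_a = strength_centrality(W_a)
    in_b, out_b = strength_centrality(W_b)
    return in_a - in_b, out_a - out_b
```

    For example, with W = [[0, 1, 2], [0, 0, 3], [0, 0, 0]], node 2 has in-strength 5 (links of weight 2 and 3 point into it) and out-strength 0.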

    An information-based network approach for protein classification.

    No full text
    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins, presenting the classification as binary trees. In this paper, we propose a new protein classification method in which theories of information and networks are used to capture the multivariate relationships among proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free, and can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method.

    The cross-brain weights between flutist and listener in the second experiment.

    No full text
    <p>This figure plots the cross-brain causalities between flutist and listener against time windows for piece A: Ibert, strict mode. The red curve indicates flutist → listener, the blue curve represents listener → flutist, while the black curve is the significance threshold.</p>

    Cross-brain networks for the two music improvisation experiments.

    No full text
    <p>The left graph is for the first experiment, while the right graph is for the second experiment. The red links represent the direction of cross-brain information flow, while the thickness of the links is proportional to the strength of the cross-brain weights (i.e. the average cross-brain causalities).</p>

    Degree centrality contrasts between strict mode and “let-go” mode in the second experiment.

    No full text
    <p>In this figure, the red stems and the blue stems indicate the in- and out-degree centrality contrasts between strict mode and “let-go” mode, respectively. The horizontal axis shows 9 channels representing the 8 electrodes (P4, T8, C4, F4, F3, C3, T7, P3) and the overall average over the 8 electrodes, while the vertical axis gives the magnitudes of the degree centrality contrasts between strict mode and “let-go” mode.</p>

    Degree centrality contrasts, or difference, between composed music and improvisation in the second experiment.

    No full text
    <p>In this figure, the red stems and the blue stems indicate the in- and out-degree centrality contrasts between composed music and improvisation, respectively. The horizontal axis shows 9 channels representing the 8 electrodes (P4, T8, C4, F4, F3, C3, T7, P3) and the overall average over the 8 electrodes, while the vertical axis gives the magnitudes of the degree centrality contrasts between composed music and improvisation.</p>