9 research outputs found

    Advancing Statistical Inference For Population Studies In Neuroimaging Using Machine Learning

    Modern neuroimaging techniques allow us to investigate the brain in vivo and at high resolution, providing high-dimensional information about the structure and function of the brain in health and disease. Statistical analysis techniques transform this rich imaging information into accessible and interpretable knowledge that can be used for investigative as well as diagnostic and prognostic purposes. A prevalent area of research in neuroimaging is group comparison, i.e., the comparison of the imaging data of two groups (e.g., patients vs. healthy controls, or people who respond to treatment vs. people who don't) to identify discriminative imaging patterns that characterize different conditions. In recent years, the neuroimaging community has adopted techniques from mathematics, statistics, and machine learning to introduce novel methodologies that improve our understanding of various neuropsychiatric and neurodegenerative disorders. However, existing statistical methods are limited by their reliance on ad hoc assumptions regarding the homogeneity of the disease effect, the spatial properties of the underlying signal, and the covariate structure of the data, which imposes constraints on how datasets are sampled.

    First, the overarching assumption behind most analytical tools commonly used in neuroimaging studies is that a single disease effect differentiates patients from controls. In reality, however, the disease effect may be heterogeneously expressed across the patient population. As a consequence, searching for a single imaging pattern that characterizes the difference between healthy controls and patients may yield only a partial or incomplete picture of the disease effect. Second, most analyses assume a uniform shape and size of the disease effect. As a consequence, a common step in most neuroimaging analyses is to apply uniform smoothing of the data, aggregating regional information at each voxel to improve the signal-to-noise ratio. However, the shape and size of disease patterns may not be uniform across the brain. Third, in practical scenarios, imaging datasets commonly include variation due to multiple covariates, whose effects often overlap with the disease effects being sought. To minimize covariate effects, studies are carefully designed by appropriately matching the populations under observation. This task is further complicated by the advent of big-data analyses, which often entail aggregating large datasets collected across many clinical sites.

    The goal of this thesis is to address each of these assumptions and limitations by introducing robust mathematical formulations founded on multivariate machine learning techniques that integrate discriminative and generative approaches. First, we introduce HYDRA (heterogeneity through discriminative analysis), which parses heterogeneity in neuroimaging studies by performing clustering and classification simultaneously using piecewise linear decision boundaries. Second, we propose regionally linear multivariate discriminative statistical mapping (MIDAS) to find the optimal level of variable smoothing across the brain anatomy and tease out group differences in neuroimaging datasets. This method uses overlapping regional discriminative filters to approximate a matched filter that best delineates the underlying disease effect. Third, we develop generative discriminative machines (GDM) to reduce the effect of confounds in biased samples. The proposed method solves for a discriminative model that can also optimally generate the data when taking the covariate structure into account.

    We extensively validated the performance of the developed frameworks on diverse types of simulated scenarios. Furthermore, we applied our methods to a large number of clinical datasets that included structural and functional neuroimaging data as well as genetic data. Specifically, HYDRA was used to identify distinct subtypes of Alzheimer's Disease; MIDAS was applied to identify the optimally discriminative patterns that differentiated truth-telling from lying functional tasks; and GDM was applied in a multi-site prediction setting with severely confounded samples. Our promising results demonstrate the potential of our methods to advance neuroimaging analysis beyond the assumptions that limit its statistical power.
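    The alternating clustering-and-classification idea behind HYDRA can be illustrated with a toy sketch. This is only an illustration of the general scheme, not the published HYDRA algorithm: the ridge-regression fitting step, the function name, and the synthetic setup are all our own simplifications. Each of k linear boundaries separates controls from one candidate patient subgroup, and patients are re-assigned to whichever boundary scores them highest.

    ```python
    import numpy as np

    def hydra_sketch(X_ctrl, X_pat, k=2, iters=10, lam=1e-2, seed=0):
        """Toy alternating scheme in the spirit of HYDRA: fit k linear
        boundaries, each separating controls from one patient subgroup,
        while re-assigning patients to the boundary that scores them highest."""
        rng = np.random.default_rng(seed)
        n_pat, d = X_pat.shape
        assign = rng.integers(0, k, size=n_pat)       # random initial subgroups
        W = np.zeros((k, d + 1))                      # one hyperplane (+bias) per subgroup
        Xc = np.hstack([X_ctrl, np.ones((len(X_ctrl), 1))])
        Xp = np.hstack([X_pat, np.ones((n_pat, 1))])
        for _ in range(iters):
            for j in range(k):                        # fit boundary j by ridge regression
                mask = assign == j
                if not mask.any():
                    continue
                A = np.vstack([Xc, Xp[mask]])
                y = np.concatenate([-np.ones(len(Xc)), np.ones(mask.sum())])
                W[j] = np.linalg.solve(A.T @ A + lam * np.eye(d + 1), A.T @ y)
            assign = (Xp @ W.T).argmax(axis=1)        # re-assign to best boundary
        return W, assign
    ```

    On simulated data with two well-separated patient subtypes, the learned boundaries score patients above controls while the assignment step recovers candidate subgroups.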

    On the implicit learnability of knowledge

    The deployment of knowledge-based systems in the real world requires addressing the challenge of knowledge acquisition. While knowledge engineering by hand is a daunting task, machine learning has been proposed as an alternative. However, learning explicit representations of real-world knowledge at a desirable level of expressiveness remains difficult and often leads to heuristics without robustness guarantees. Probably Approximately Correct (PAC) Semantics offers strong guarantees, but learning explicit representations under it is intractable, even in propositional logic. Previous work has addressed these challenges by learning to reason directly, without producing an explicit representation of the learned knowledge. Recent work on so-called implicit learning has shown tremendous promise in obtaining polynomial-time results for fragments of first-order logic, bypassing the intractable step of producing an explicit representation of learned knowledge. This thesis extends these ideas to richer logical languages such as arithmetic theories and multi-agent logics. We demonstrate that it is possible to learn to reason efficiently for standard fragments of linear arithmetic, and we establish a general result that provides an efficient reduction from the learning-to-reason problem for any logic to any sound and complete solver for that logic. We then extend implicit learning in PAC Semantics to handle noisy data, in the form of intervals and threshold uncertainty, in the language of linear arithmetic. We prove that our extended framework maintains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this purely theoretical framework: using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.

    Our results demonstrate the effectiveness of PAC Semantics and implicit learning for real-world problems with noisy data, and they provide a path towards robust learning in expressive languages. Reasoning about knowledge and interaction in complex multi-agent systems spans domains such as artificial intelligence, smart traffic, and robotics. In these systems, epistemic logic serves as a formal language for expressing and reasoning about knowledge, beliefs, and communication among agents, yet integrating learning algorithms within multi-agent epistemic logic is challenging due to the inherent complexity of distributed knowledge reasoning. We extend our implicit learning framework to this setting, provide a proof of correctness for the learning procedure, and analyse the sample complexity required to assert the entailment of an epistemic query. Overall, our work offers a promising approach to integrating learning and deduction in a range of logical languages, from linear arithmetic to multi-agent epistemic logics.
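    The flavor of the learning-to-reason reduction can be conveyed in a few lines. This is a simplified illustration under our own assumptions, not the thesis's construction: instead of inducing an explicit theory from observed examples, the query is tested directly against each example using a decision procedure (here, mere evaluation of a linear inequality), and is accepted if it fails on at most an epsilon fraction of the examples, in the PAC-style sense.

    ```python
    import numpy as np

    def implicitly_entails(examples, query, epsilon=0.05):
        """Sketch of learning to reason without an explicit representation:
        each example is a satisfying assignment (a point in R^n); the query
        is a linear inequality a.x <= b.  Accept the query if it holds on
        all but an epsilon fraction of the examples."""
        a, b = query
        failures = sum(1 for x in examples if np.dot(a, x) > b)
        return failures <= epsilon * len(examples)
    ```

    For points sampled from the unit square, a query such as x1 + x2 <= 2 is accepted, while x1 + x2 <= -1 is rejected, with no explicit theory ever being constructed.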

    Early detection of spam-related activity

    Spam, the distribution of unsolicited bulk email, is a major security threat on the Internet. Recent studies show that approximately 70-90% of worldwide email traffic, about 70 billion messages a day, is spam. Spam consumes resources on the network and at mail servers, and it is also used to launch other attacks on users, such as distributing malware or phishing. Spammers have increased their virulence and resilience by sending spam from large collections of compromised machines ("botnets"). Spammers also make heavy use of URLs and domains to direct victims to point-of-sale Web sites, and miscreants register large numbers of domains to evade blacklisting efforts. To mitigate the threat of spam, users and network administrators need proactive techniques to distinguish spammers from legitimate senders and to take down online spam-advertised sites. In this dissertation, we focus on characterizing spam-related activities and developing systems to detect them early. Our work builds on the observation that spammers must acquire attack agility to be profitable, which creates differences in how spammers and legitimate users interact with Internet services and makes spammers detectable during the early period of an attack. We examine several important components across the spam life cycle: spam dissemination, which aims to reach users' inboxes; the hosting process, during which spammers set up DNS servers and Web servers; and the naming process of acquiring domain names via registration services. We first develop a new spam-detection system based on network-level features of spamming bots. These lightweight features allow the system to scale better and to be more robust. Next, we analyze DNS resource records and lookups from top-level domain servers during the initial stage after domain registration, which provides a global view across the Internet for characterizing spam hosting infrastructure.

    We further examine the domain registration process and present the unique registration behavior of spammers. Finally, we build an early-warning system to identify spammer domains at time-of-registration rather than later at time-of-use. We have demonstrated that our detection systems are effective using real-world datasets. Our work has also had practical impact: some of the network-level features that we identified have since been incorporated into spam filtering products at Yahoo! and McAfee, and our work on detecting spammer domains at time-of-registration has directly influenced new projects at Verisign to investigate domain registrations.
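    The time-of-registration idea can be illustrated with a toy scoring function. The features and weights here are hypothetical stand-ins for the kinds of signals the dissertation describes (bulk registration batches, algorithmically generated names with high character entropy, registrar abuse history), not the actual feature set or model.

    ```python
    import math
    from collections import Counter

    def registration_risk(domain, batch_size, registrar_abuse_rate):
        """Toy time-of-registration risk score in [0, 1].  All features and
        weights are illustrative: membership in a large registration batch,
        character entropy of the name (a crude proxy for algorithmically
        generated domains), and the registrar's historical abuse rate."""
        name = domain.split(".")[0]
        counts = Counter(name)
        entropy = -sum((c / len(name)) * math.log2(c / len(name))
                       for c in counts.values())
        bulk = min(batch_size / 100.0, 1.0)       # part of a large batch?
        return (0.4 * bulk
                + 0.3 * min(entropy / 4.0, 1.0)
                + 0.3 * registrar_abuse_rate)
    ```

    A random-looking name registered in a batch of hundreds through an abuse-heavy registrar scores far higher than an ordinary single registration, which is the kind of separation an early-warning system exploits before the domain is ever used.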

    Large-margin convex polytope machine

    No full text
    We present the Convex Polytope Machine (CPM), a novel non-linear learning algorithm for large-scale binary classification tasks. The CPM finds a large-margin convex polytope separator which encloses one class. We develop a stochastic gradient descent based algorithm that is amenable to massive datasets, and augment it with a heuristic procedure to avoid sub-optimal local minima. Our experimental evaluations of the CPM on large-scale datasets from distinct domains (MNIST handwritten digit recognition, text topics, and web security) demonstrate that the CPM trains models faster, sometimes by several orders of magnitude, than state-of-the-art similar approaches and kernel-SVM methods, while achieving comparable or better classification performance. Our empirical results suggest that, unlike prior similar approaches, we do not need to control the number of sub-classifiers (sides of the polytope) to avoid overfitting.
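    A minimal sketch of the polytope idea follows. This is our own simplified SGD variant under stated assumptions, not the paper's exact algorithm (it omits, for instance, the heuristic for escaping local minima): score(x) = max over k faces of w_k . x, with the negative class pushed inside the polytope (all faces negative) and each hinge violation updating only the currently active face.

    ```python
    import numpy as np

    def train_cpm(X, y, k=4, epochs=200, lr=0.05, seed=0):
        """Simplified CPM-style SGD: k hyperplanes (with bias), prediction is
        sign(max_k w_k.x).  On a hinge violation, only the face attaining the
        max is updated, so faces specialize to different regions of the
        positive class while jointly enclosing the negative class."""
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias feature
        W = rng.normal(0, 0.1, size=(k, Xb.shape[1]))
        for _ in range(epochs):
            for i in rng.permutation(len(Xb)):
                x, label = Xb[i], y[i]
                scores = W @ x
                j = scores.argmax()
                if label * scores[j] < 1:             # hinge violation on active face
                    W[j] += lr * label * x
        return W

    def predict_cpm(W, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return np.sign((Xb @ W.T).max(axis=1))
    ```

    On data that no single hyperplane can separate, such as a negative class near the origin surrounded by a ring of positives, a four-face polytope of this form encloses the inner class and classifies the ring correctly.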
