
    Interpretable Machine Learning for Electro-encephalography

    While behavioral, genetic and psychological markers can provide important information about brain health, research in this area over the last decades has focused largely on imaging devices such as magnetic resonance imaging (MRI) to provide non-invasive information about cognitive processes. Unfortunately, MRI-based approaches, which capture slow changes in blood oxygenation levels, cannot capture electrical brain activity, which plays out on a time scale up to three orders of magnitude faster. Electroencephalography (EEG), which has been available in clinical settings for over 60 years, measures brain activity through rapidly changing electrical potentials recorded non-invasively on the scalp. Compared to MRI-based research into neurodegeneration, EEG-based research has, over the last decade, received much less interest from the machine learning community. Yet EEG in combination with sophisticated machine learning offers great potential, so neglecting this source of information in favor of MRI or genetics is not warranted. When collaborating with clinical experts, the ability to link results provided by machine learning to the existing body of research is especially important, as it ultimately provides an intuitive or interpretable understanding. Here, interpretable means the possibility for medical experts to translate the insights provided by a statistical model into a working hypothesis relating to brain function. To this end, our first contribution proposes an ultra-sparse regression method, applied to EEG data, to identify a small subset of important diagnostic markers highlighting the main differences between healthy brains and brains affected by Parkinson's disease. Our second contribution builds on the idea that in Parkinson's disease impaired functioning of the thalamus causes changes in the complexity of the EEG waveforms. The thalamus is a small region in the center of the brain affected early in the course of the disease; it is believed to function as a pacemaker - akin to a conductor of an orchestra - such that changes in complexity are expressed and quantifiable in the EEG. We use these changes in complexity to show their association with future cognitive decline. In our third contribution we propose an extension of archetypal analysis embedded into a deep neural network. This generative version of archetypal analysis learns a representation in which every sample of a data set can be decomposed into a weighted sum of extreme representatives, the so-called archetypes. This opens up an interesting possibility of interpreting a data set relative to its most extreme representatives; clustering algorithms, in contrast, describe a data set relative to its most average representatives. For Parkinson's disease, we show, based on deep archetypal analysis, that healthy brains produce archetypes which are different from those produced by brains affected by neurodegeneration.
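
    As a minimal sketch of the kind of sparse marker selection described in the first contribution, the snippet below uses an L1-penalised logistic regression as a stand-in for the thesis's ultra-sparse method; the data, feature counts and penalty strength are synthetic placeholders, not taken from the work itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 200 subjects x 50 EEG-derived markers, binary diagnosis.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 50))
# Placeholder labels (0 = healthy, 1 = Parkinson's); only markers 0 and 3 carry signal here.
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=200) > 0).astype(int)

# A strong L1 penalty (small C) drives most coefficients to exactly zero,
# leaving a small subset of candidate diagnostic markers.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print("selected marker indices:", selected)
```

    With real EEG-derived markers, the surviving nonzero coefficients would indicate which features separate the two groups, which clinical experts can then relate to known physiology.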

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains, in particular, the scientific program both in survey style and in full detail, as well as information on the social program, the venue, special meetings, and more.

    On the analysis of stochastic optimization and variational inequality problems

    Uncertainty has a tremendous impact on decision making. The more connected we get, it seems, the more sources of uncertainty we uncover. For example, uncertainty in the parameters of price and cost functions in power, transportation, communication and financial systems has stemmed from the way these networked systems operate and from how they interact with one another. Uncertainty influences the design, regulation and decisions of participants in several engineered systems, such as financial markets, electricity markets, commodity markets, and wired and wireless networks, all of which are ubiquitous. This poses many interesting questions in the areas of understanding uncertainty (modeling) and dealing with uncertainty (decision making). This dissertation focuses on answering a set of fundamental questions that pertain to dealing with uncertainty arising in three major problem classes: (1) convex Nash games; (2) variational inequality problems and complementarity problems; (3) hierarchical risk management problems in financial networks. Accordingly, this dissertation considers the analysis of a broad class of stochastic optimization and variational inequality problems complicated by uncertainty and nonsmoothness of objective functions. Nash games and variational inequalities have assumed practical relevance in industry and business settings because they are natural models for many real-world applications. Nash games arise naturally in modeling a range of equilibrium problems in power markets, communication networks, market-based allocation of resources, etc., whereas variational inequality problems allow for modeling frictional contact problems, traffic equilibrium problems, etc. Incorporating uncertainty into convex Nash games leads to stochastic Nash games. Despite the relevance of stochastic generalizations of Nash games and variational inequalities, answering fundamental questions regarding the existence of equilibria in stochastic regimes has proved to be a challenge. Among other reasons, the main challenge arises from the nonlinearity introduced by the expectation operator. Despite the rich literature in deterministic settings, direct application of deterministic results to stochastic regimes is not straightforward. The first part of this dissertation explores such fundamental questions in stochastic Nash games and variational inequality problems. Instead of directly using the deterministic results, we leverage Lebesgue convergence theorems to develop a tractable framework for analyzing problems in stochastic regimes over a continuous probability space. The benefit of this approach is that the framework does not rely on evaluation of the expectation operator to provide existence guarantees, which keeps it tractable in practice. We extend the framework to incorporate nonsmoothness of payoff functions as well as to allow for stochastic constraints in the models, all of which are important in practical settings. The second part of this dissertation extends the framework to generalizations of variational inequality problems and complementarity problems. In particular, we develop a set of almost-sure sufficiency conditions for stochastic variational inequality problems with single-valued and multi-valued mappings. We extend these statements to quasi-variational regimes as well as to stochastic complementarity problems. The applicability of these results is demonstrated in the analysis of risk-averse stochastic Nash games used in Nash-Cournot production-distribution models in power markets, by recasting the problem as a stochastic quasi-variational inequality problem, and in Nash-Cournot games with piecewise smooth price functions, by modeling this problem as a stochastic complementarity problem. The third part of this dissertation pertains to hierarchical problems in financial risk management. In the financial industry, risk has traditionally been managed by the imposition of value-at-risk (VaR) constraints on portfolio risk exposure. Motivated by recent events in the financial industry, we examine the role that risk-seeking traders play in the accumulation of large and possibly infinite risk. We proceed to show that when traders employ a conditional value-at-risk (CVaR) metric, much can be said by studying the interaction between VaR (a non-coherent risk measure) and CVaR (a coherent risk measure based on VaR). Resolving this question requires characterizing the optimal value of the associated stochastic, and possibly nonconvex, optimization problem, often a challenging task. Our study makes two sets of contributions. First, under general asset distributions with compact support, traders accumulate finite risk, with magnitude of the order of the upper bound of this support. Second, when the supports are unbounded, under relatively mild assumptions such traders can take on an unbounded amount of risk despite abiding by the VaR threshold. In short, VaR thresholds may be inadequate in guarding against financial ruin.
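
    For reference, the stochastic variational inequality at the heart of the first two parts can be stated in its standard textbook form (the dissertation's own notation and regularity assumptions may differ): find $x^* \in X$ such that

    \[
      (x - x^*)^{\top}\,\mathbb{E}\!\left[F(x^*, \xi)\right] \;\ge\; 0 \qquad \text{for all } x \in X,
    \]

    where $X \subseteq \mathbb{R}^n$ is closed and convex, $\xi$ is a random vector on a given probability space, and $F(\cdot,\xi): X \to \mathbb{R}^n$ is the random mapping. The expectation operator inside the inequality is exactly the nonlinearity the abstract identifies as the main analytical obstacle.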
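
    A minimal Monte Carlo sketch of the two risk measures contrasted in the third part (illustrative only; the loss distribution and the 99% level are assumptions, not taken from the dissertation): VaR is the alpha-quantile of the loss distribution, while CVaR averages the losses beyond that quantile, which is why a VaR constraint alone can leave tail risk uncontrolled.

```python
import numpy as np

def var_cvar(losses: np.ndarray, alpha: float = 0.99):
    """Estimate value-at-risk (VaR) and conditional value-at-risk (CVaR) at level alpha."""
    var = np.quantile(losses, alpha)   # VaR: the alpha-quantile of the losses
    tail = losses[losses >= var]       # losses at or beyond the VaR threshold
    cvar = tail.mean()                 # CVaR: expected loss within that tail
    return var, cvar

# Hypothetical heavy-tailed loss samples; with fat tails the CVaR estimate sits
# well above the VaR estimate, illustrating the gap the abstract refers to.
rng = np.random.default_rng(0)
losses = rng.standard_t(df=3, size=100_000)
print(var_cvar(losses))
```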

    Advances in Optimization and Nonlinear Analysis

    The present book focuses on the part of calculus of variations, optimization, nonlinear analysis and related applications that combines tools and methods from partial differential equations with geometrical techniques. More precisely, this work is devoted to nonlinear problems coming from different areas, with particular reference to those introducing new techniques capable of solving a wide range of problems. The book is a valuable guide for researchers, engineers and students in the fields of mathematics, operations research, optimal control science, artificial intelligence, management science and economics.