
    Multi-scale Discriminant Saliency with Wavelet-based Hidden Markov Tree Modelling

    Bottom-up saliency, an early stage of human visual attention, can be formulated as a binary classification problem between centre and surround classes. The discriminant power of features for this classification is measured as the mutual information between the distributions of image features and the corresponding classes. Because the estimated discrepancy depends strongly on the scale level considered, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a Hidden Markov Tree (HMT). From the wavelet coefficients and HMT parameters, quad-tree-like label structures are constructed and used in maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. A saliency value is then computed for each square block at each scale level according to the discriminant power principle. Finally, the saliency maps across multiple scales are integrated into the final saliency map by an information maximization rule. Both standard quantitative metrics (NSS, LCC, AUC) and qualitative assessments are used to evaluate the proposed multi-scale discriminant saliency (MDIS) method against the well-known information-based approach AIM on its released image collection with eye-tracking data. Simulation results are presented and analysed to verify the validity of MDIS and to point out its limitations as directions for further research.
Comment: arXiv admin note: substantial text overlap with arXiv:1301.396
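The centre-surround discriminant principle described above can be illustrated with a small sketch. The snippet below (a minimal illustration, not the authors' released code) estimates the mutual information between a binary centre/surround label and a quantized scalar feature from histogram counts; the inputs `centre_feats` and `surround_feats`, the bin count, and the feature choice are hypothetical placeholders, and in the paper the class-conditional distributions come from the wavelet-domain HMT rather than raw histograms.

```python
import numpy as np

def discriminant_saliency(centre_feats, surround_feats, n_bins=16):
    """Mutual information I(C; X) between the centre/surround label C and a
    quantized scalar feature X, estimated from histogram counts.
    centre_feats, surround_feats: 1-D arrays of feature values
    (e.g. wavelet-coefficient magnitudes) - hypothetical inputs."""
    lo = min(centre_feats.min(), surround_feats.min())
    hi = max(centre_feats.max(), surround_feats.max())
    edges = np.linspace(lo, hi, n_bins + 1)

    pc = np.histogram(centre_feats, bins=edges)[0] + 1e-12    # counts, centre
    ps = np.histogram(surround_feats, bins=edges)[0] + 1e-12  # counts, surround

    n_c, n_s = pc.sum(), ps.sum()
    prior_c, prior_s = n_c / (n_c + n_s), n_s / (n_c + n_s)
    p_x = (pc + ps) / (n_c + n_s)              # marginal p(x)
    p_x_c, p_x_s = pc / n_c, ps / n_s          # class-conditionals p(x|c)

    # I(C; X) = sum_c p(c) * sum_x p(x|c) * log[ p(x|c) / p(x) ]
    mi = prior_c * np.sum(p_x_c * np.log(p_x_c / p_x)) \
       + prior_s * np.sum(p_x_s * np.log(p_x_s / p_x))
    return mi  # larger value -> more discriminant, hence more salient block
```

In the method described above, this quantity would be evaluated per dyadic sub-square and per scale before the cross-scale integration step.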

    Cramér-Rao Bounds for Complex-Valued Independent Component Extraction: Determined and Piecewise Determined Mixing Models

    This paper presents the Cramér-Rao Lower Bound (CRLB) for the complex-valued Blind Source Extraction (BSE) problem under the assumption that the target signal is independent of the other signals. Two instantaneous mixing models are considered. First, we consider the standard determined mixing model used in Independent Component Analysis (ICA), where the mixing matrix is square and non-singular and the number of latent sources equals the number of observed signals. The CRLB is computed for Independent Component Extraction (ICE), in which the mixing matrix is re-parameterized so that only one independent target source is extracted. The target source is assumed to be non-Gaussian or non-circular Gaussian, while the other signals (the background) are circular Gaussian or non-Gaussian. The results confirm some previous observations known for the real domain and bring new results for the complex domain. The CRLB for ICE is also shown to coincide with that for ICA when the non-Gaussianity of the background is taken into account. Second, we extend the CRLB analysis to piecewise determined mixing models. Here, the observed signals are assumed to obey the determined mixing model within short blocks, where the mixing matrices may vary from block to block; however, either the mixing vector or the separating vector corresponding to the target source is assumed to be constant across the blocks. The CRLBs for the parameters of these models provide new performance bounds for the BSE problem.
Comment: 25 pages, 8 figures
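For reference, the ICE re-parameterization sketched in this abstract is commonly written as follows. This is a generic statement of the model in standard ICE notation, not necessarily the paper's exact parameterization.

```latex
% Determined mixing model: d observed signals, d latent sources,
% x(n) = A s(n) with A in C^{d x d} non-singular.
% ICE re-parameterization: only the target source s(n) is extracted,
% the remaining sources are lumped into a background signal y(n).
\begin{align}
  \mathbf{x}(n) &= \mathbf{A}\,\mathbf{s}(n)
      = \mathbf{a}\,s(n) + \mathbf{y}(n), \\
  \hat{s}(n) &= \mathbf{w}^{H}\mathbf{x}(n),
      \qquad \mathbf{w}^{H}\mathbf{a} = 1,
\end{align}
% where a is the mixing vector of the target source, w the separating
% vector, and the CRLB bounds the covariance of any unbiased estimator
% of the (re-parameterized) mixing/separating vectors.
```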

    Exploring Social Sustainability and Economic Practices

    Given the three pillars of sustainability, besides the environment, the interplay of the social and economic dimensions provides valuable insight into how society is molded and into the key components that should be considered. In terms of social sustainability, processes and framework objectives promote the wellbeing that is integral to the balance of people, planet, and profit. Economic practices concern the system of production, resource allocation, and distribution of goods and services with respect to demand and supply between economic agents. As a result, an economic system is a variant of the social system in which it exists. At present, the forefront of social sustainability research partially encompasses the impact of economic practices on people and society, with notable emphasis on the urban environment. Specific interdisciplinary analyses within the scope of sustainability, social development, competitiveness, and motivational management, as well as decision making within the urban landscape, are considered. This book contains nine thoroughly refereed contributions that interconnect detailed research into the two pillars reviewed.

    Pattern recognition in spaces of probability distributions for the analysis of edge-localized modes in tokamak plasmas

    Magnetically confined fusion plasmas pose several data analysis challenges due to massive data sets, substantial measurement uncertainty, stochasticity, high data dimensionality, and often nonlinear interactions between measured quantities. Recently, methods from the fields of machine learning and probability theory - some standard, some more advanced - have come to play an increasingly important role in analyzing data from fusion experiments. The capability of such methods to efficiently extract, possibly in real time, additional information from the data that is not immediately apparent to human experts has attracted the attention of an increasing number of researchers. In addition, innovative methods for real-time data processing can play an important role in plasma control, in order to ensure safe and reliable operation of the machine. Pattern recognition is a discipline within the information sciences that concerns the exploration of structure in (multidimensional) data sets using computer-based methods and algorithms. In this doctoral work, pattern recognition techniques are developed and applied to data from tokamak plasmas, in order to contribute to a systematic analysis of edge-localized modes (ELMs). ELMs are magnetohydrodynamic (MHD) instabilities occurring in the edge region of high-confinement (H-mode) fusion plasmas. The type I ELMy H-mode is the reference scenario for operation of the next-step fusion device ITER. On the one hand, ELMs have a beneficial effect on plasma operation through their role in impurity control. On the other hand, ELMs eject energy and particles from the plasma and, in ITER, large unmitigated ELMs are expected to cause intolerable heat loads on the plasma-facing components (PFCs).

In interpreting experiments focused on ELM understanding and control, a significant challenge lies in handling the measurement uncertainties and the inherent stochasticity of ELM properties. In this work, we employ probabilistic models (distributions) for a quantitative data description geared towards an enhanced systematization of ELM phenomenology. Hence, we start from the point of view that the fundamental object resulting from the observation of a system is a probability distribution, with every single measurement providing a sample from this distribution. We argue that, particularly for richly stochastic phenomena like ELMs, the probability distributions of physical quantities contain significantly more information than mere averages. Consequently, in exploring the patterns emerging from the various ELM regimes and relations, we need methods that can handle the intrinsic probabilistic nature of the data.

The original contributions of this work are twofold. First, several novel pattern recognition methods in non-Euclidean spaces of probability distribution functions (PDFs) are developed and validated. The second main contribution lies in the application of these and other techniques to a systematic analysis of ELMs in tokamak plasmas.

Regarding the methodological aims of the work, we employ the framework of information geometry to develop pattern visualization and classification methods in spaces of probability distributions. In information geometry, a family of probability distributions is considered as a Riemannian manifold. Every point on the manifold represents a single PDF, and the distribution parameters provide local coordinates on the manifold. The Fisher information plays the role of a Riemannian metric tensor, enabling the calculation of geodesic curves on the manifold. The length of such a curve yields the geodesic distance (GD) on probabilistic manifolds, which is a natural similarity (distance) measure between PDFs. Equipped with a suitable distance measure, we extend several distance-based pattern recognition methods to the manifold setting. These include k-nearest neighbor (kNN) and conformal predictor (CP) methods for classification, as well as multidimensional scaling (MDS) and landmark multidimensional scaling (LMDS) for data visualization (dimensionality reduction). Furthermore, two new classification schemes are developed: a distance-to-centroid classifier (D2C) and a principal geodesic classifier (PGC). D2C classifies on the basis of the minimum GD to the class centroids, while PGC takes the shape of the class on the manifold into account by determining the minimum distance to the principal geodesic of each class. The methods are validated by applying them to the classification and retrieval of colored texture images represented in the wavelet domain. Both methods prove to be computationally efficient and yield high accuracy, and they clearly demonstrate the adequacy of the GD, and its superiority over the Euclidean distance, for comparing PDFs. This also demonstrates the utility and adaptability of the developed methods for a wide range of applications beyond ELMs, which are the prime focus of analysis in this work.
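To make the geodesic-distance construction concrete, the following sketch computes the Fisher-Rao geodesic distance between univariate Gaussian PDFs, for which a closed form exists, and uses it inside a toy k-nearest-neighbour classifier. This is a minimal illustration, assuming a Gaussian family; the thesis works with further distribution families and with the CP, D2C and PGC classifiers, and the data below are purely hypothetical.

```python
import numpy as np

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between the univariate
    Gaussians N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (4.0 * sigma1 * sigma2))

def knn_predict(train_params, train_labels, test_param, k=1):
    """k-nearest-neighbour vote on the Gaussian manifold.
    train_params: list of (mu, sigma) pairs fitted to each training sample;
    test_param:   (mu, sigma) of the sample to classify."""
    d = np.array([fisher_rao_gaussian(*p, *test_param) for p in train_params])
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.array(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: two 'classes' of samples, each summarized by (mean, std).
train = [(1.0, 0.2), (1.1, 0.25), (3.0, 0.8), (2.8, 0.9)]
labels = ["type III", "type III", "type I", "type I"]
print(knn_predict(train, labels, (2.9, 0.7)))   # -> "type I"
```

The same distance-based construction underlies the MDS maps and the D2C and PGC classifiers described above; when no closed-form distance is available, the geodesic can be computed numerically.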
The second main goal of the work targets ELM analysis on three fronts, using pattern recognition and probabilistic modeling.

i) We first concentrate on the visualization of ELM characteristics by creating maps containing projections of multidimensional ELM data, as well as the corresponding probabilistic models. Such maps can provide physicists and machine operators with a convenient and useful tool for plasma monitoring and for studying data patterns that reflect key regimes and their underlying physics. In particular, GD-based MDS is used to represent the complete distributions of the multidimensional data characterizing the ELM operational space on two-dimensional maps. Clusters corresponding to type I and type III ELMs are identified, and the maps enable tracking of trends in plasma parameters across the operational space. It is shown that the maps can also be used, with reasonable accuracy, to predict the values of the plasma parameters at a given point in the operational space.

ii) Our second application concerns fast, standardized and automated classification of ELM types. ELM types have so far been identified and characterized on an empirical and phenomenological basis. The presented classification schemes aim to complement the phenomenological characterization with standardized methods that are less susceptible to subjective interpretation, while considerably reducing the effort required of ELM experts in identifying ELM types. To this end, different classification paradigms (parametric and non-parametric) are explored and put to use. Discriminant analysis (DA) is used to determine a linear separation boundary between type I and type III ELMs in terms of global plasma parameters, which can then be used for the prediction of ELM types as well as for the study of ELM occurrence boundaries and ELM physics. However, DA makes an assumption about the underlying class distributions and presently cannot be applied in spaces of probability distributions, leading to a sub-optimal treatment of stochasticity. This is circumvented by the use of GD-based CP and kNN classifiers. CP provides estimates of its own accuracy and reliability, and kNN is a simple yet powerful classifier of ELM types. It is shown that a classification based on the distributions of ELM properties, namely inter-ELM time intervals and the distributions of global plasma parameters, is more informative and accurate than a classification based on average parameter values.

iii) Finally, the correlation between ELM energy loss (ELM size) and ELM waiting time (inverse ELM frequency) is studied for individual ELMs in a set of plasmas from the JET tokamak upgraded with the ITER-like wall (ILW). Typically, ELM control methods rely on the empirically observed inverse dependence of the average ELM energy loss on the average ELM frequency, even though ELM control targets the size of individual ELMs rather than the average ELM loss. The analysis finds that, for individual ELMs, the correlation between ELM energy loss and waiting time varies from zero to a moderately positive value. A comparison is made with results from a set of carbon-wall (CW) JET plasmas and from nitrogen-seeded ILW JET plasmas. A high correlation between ELM energy loss and waiting time, comparable to that in CW plasmas, is found only in the nitrogen-seeded ILW plasmas. Furthermore, most of the unseeded JET ILW plasmas have ELMs that are followed by a second phase referred to as the slow transport event (STE). The effect of the STEs on the distribution of ELM durations is studied, as well as their influence on the correlation between ELM energy loss and waiting times. This analysis has a clear outcome for the optimization of ELM control methods, while providing insights for an improved physics understanding of ELMs.
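As an illustration of the per-ELM correlation analysis in iii), a minimal sketch is given below that derives waiting times from a sequence of ELM onset times and rank-correlates them with the corresponding energy losses. The arrays `elm_times` and `elm_energy_loss` are hypothetical stand-ins for the measured quantities; the thesis additionally examines the influence of STEs on this correlation.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-ELM data: onset times [s] and energy losses [kJ].
elm_times = np.array([10.01, 10.05, 10.11, 10.16, 10.24, 10.29])
elm_energy_loss = np.array([55.0, 80.0, 60.0, 95.0, 70.0, 65.0])

# Waiting time of ELM i = time since the previous ELM; the first ELM
# has no waiting time, so it is dropped from the correlation.
waiting_times = np.diff(elm_times)
losses = elm_energy_loss[1:]

rho, p_value = spearmanr(waiting_times, losses)
print(f"Spearman correlation (per-ELM): {rho:.2f}, p = {p_value:.2f}")
```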

    Inverse problems in medical ultrasound images - applications to image deconvolution, segmentation and super-resolution

    In the field of medical image analysis, ultrasound is a core imaging modality, employed for its real-time and easy-to-use nature as well as its non-ionizing and low-cost characteristics. Ultrasound imaging is used in numerous clinical applications, such as fetal monitoring, diagnosis of cardiac diseases, flow estimation, etc. Classical tasks in ultrasound imaging include tissue characterization, tissue motion estimation and image quality enhancement (contrast, resolution, signal-to-noise ratio). However, one of the major problems with ultrasound images is the presence of noise in the form of a granular pattern called speckle. Speckle noise leads to relatively poor image quality compared with other medical imaging modalities, which limits the applications of medical ultrasound imaging. In order to better understand and analyze ultrasound images, several device-based techniques have been developed during the last 20 years. The objective of this PhD thesis is to propose new image processing methods that improve ultrasound image quality using post-processing techniques. First, we propose a Bayesian method for joint deconvolution and segmentation of ultrasound images, based on the tight relationship between these two tasks. The problem is formulated as an inverse problem that is solved within a Bayesian framework. Due to the intractability of the posterior distribution associated with the proposed Bayesian model, we investigate a Markov chain Monte Carlo (MCMC) technique, which generates samples distributed according to the posterior and uses these samples to build estimators of the ultrasound image. In a second step, we propose a fast single-image super-resolution framework using a new analytical solution to ℓ2-ℓ2 problems (i.e., ℓ2-norm regularized quadratic problems), which is applicable to both medical ultrasound images and piecewise/natural images. In a third step, blind deconvolution of ultrasound images is studied by considering two strategies: i) a Gaussian prior for the PSF is proposed within a Bayesian framework; ii) an alternating optimization method is explored for blind deconvolution of ultrasound images.
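The ℓ2-ℓ2 problems mentioned above admit closed-form solutions. As a minimal sketch, assuming a known PSF, circular boundary conditions and a plain Tikhonov penalty on the image (the thesis's super-resolution solver additionally handles decimation and uses more structured priors), the snippet below performs non-blind ℓ2-regularized deconvolution in the Fourier domain.

```python
import numpy as np

def l2_deconvolution(y, psf, reg=1e-2):
    """Closed-form minimizer of ||y - h * x||^2 + reg * ||x||^2
    (circular convolution), computed per frequency via the FFT.
    y: observed image; psf: known blur kernel (padded to y's shape)."""
    H = np.fft.fft2(psf, s=y.shape)                # transfer function of the PSF
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)    # Wiener-like per-frequency solution
    return np.real(np.fft.ifft2(X))

# Toy usage with a hypothetical Gaussian PSF and a synthetic image.
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[20:40, 20:40] = 1.0
u = np.arange(-32, 32)
psf = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)                        # move kernel centre to (0, 0)
y = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(psf)))
y += 0.01 * rng.standard_normal(y.shape)           # additive noise
x_hat = l2_deconvolution(y, psf, reg=1e-2)
```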

    Decision-Making with Heterogeneous Sensors - A Copula Based Approach

    Statistical decision making has wide-ranging applications, from communications and signal processing to econometrics and finance. In contrast to the classical one source-one receiver paradigm, several applications have been identified in the recent past that require acquiring data from multiple sources or sensors. Information from the multiple sensors is transmitted to a remotely located receiver, known as the fusion center, which makes a global decision. Past work has largely focused on the fusion of information from homogeneous sensors. This dissertation extends the formulation to the case where the local sensors may possess disparate sensing modalities. Both the theoretical and practical aspects of multimodal signal processing are considered. The first and foremost challenge is to 'adequately' model the joint statistics of such heterogeneous sensors. We propose the use of copula theory for this purpose. Copula models are general descriptors of dependence; they provide a way to characterize the nonlinear functional relationships between the multiple modalities, which are otherwise difficult to formalize. The important problem of selecting the 'best' copula function from a given set of valid copula densities is addressed, especially in the context of binary hypothesis testing problems. Both the training-testing paradigm, in which a training set is assumed to be available for learning the copula models prior to system deployment, and a generalized likelihood ratio test (GLRT) based fusion rule for online selection and estimation of copula parameters are considered. The developed theory is corroborated with extensive computer simulations as well as results on real-world data.

Sensor observations (or features extracted from them) are most often quantized before transmission to the fusion center for bandwidth and power conservation. A detection scheme is proposed for this problem assuming uniform scalar quantizers at each sensor. The designed rule is applicable to both binary and multi-bit local sensor decisions. An alternative suboptimal but computationally efficient fusion rule is also designed, which involves injecting a deliberate disturbance into the local sensor decisions before fusion. The rule is based on Widrow's statistical theory of quantization. The addition of controlled noise helps to 'linearize' the highly nonlinear quantization process, thus resulting in computational savings. It is shown that although the introduction of external noise does cause a reduction in the received signal-to-noise ratio, the proposed approach can be highly accurate when the input signals have bandlimited characteristic functions and the number of quantization levels is large.

The problem of quantifying neural synchrony using copula functions is also investigated. It has been widely accepted that multiple simultaneously recorded electroencephalographic signals exhibit nonlinear and non-Gaussian statistics. While existing and popular measures such as the correlation coefficient, the correntropy coefficient, coh-entropy and mutual information are limited to being bivariate and hence applicable only to pairs of channels, measures such as Granger causality, even though multivariate, fail to account for nonlinear inter-channel dependence. The application of copula theory helps alleviate both of these limitations. The problem of distinguishing patients with mild cognitive impairment from age-matched control subjects is also considered. Results show that the copula-derived synchrony measures, when used in conjunction with other synchrony measures, improve the detection of Alzheimer's disease onset.
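As a minimal sketch of the copula-based fusion idea, assuming known marginal distributions and a Gaussian copula (the dissertation also considers other copula families, copula selection, and GLRT-based fusion), the snippet below combines two heterogeneous sensor observations into a joint log-likelihood via Sklar's theorem and forms a simple log-likelihood ratio. The observation values and marginal models are hypothetical.

```python
import numpy as np
from scipy import stats

def gaussian_copula_logpdf(u, v, rho):
    """Log-density of the bivariate Gaussian copula c(u, v; rho),
    evaluated at uniform-domain points u, v in (0, 1)."""
    a, b = stats.norm.ppf(u), stats.norm.ppf(v)
    return (-0.5 * np.log(1 - rho ** 2)
            - (rho ** 2 * (a ** 2 + b ** 2) - 2 * rho * a * b)
            / (2 * (1 - rho ** 2)))

def joint_loglik(x1, x2, marg1, marg2, rho):
    """Joint log-likelihood of two sensor observations via Sklar's theorem:
    log f(x1, x2) = log f1(x1) + log f2(x2) + log c(F1(x1), F2(x2)).
    marg1, marg2: frozen scipy.stats marginals (assumed known)."""
    u, v = marg1.cdf(x1), marg2.cdf(x2)
    return (marg1.logpdf(x1) + marg2.logpdf(x2)
            + gaussian_copula_logpdf(u, v, rho))

# Hypothetical test: under H1 the modalities are dependent (rho = 0.6),
# under H0 they are independent (rho = 0.0).
x1, x2 = 1.2, 0.4                       # e.g. features from two modalities
m1, m2 = stats.norm(0, 1), stats.gamma(2.0, scale=1.0)
llr = joint_loglik(x1, x2, m1, m2, 0.6) - joint_loglik(x1, x2, m1, m2, 0.0)
print("log-likelihood ratio:", llr)
```

Under H0 (independence, rho = 0) the copula term vanishes, so the ratio isolates the dependence information contributed by the copula model.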