
    Efficient complex sphere decoding for MC-CDMA systems

    Maximum likelihood (ML) joint detection of multi-carrier code division multiple access (MC-CDMA) systems can be efficiently implemented with a sphere decoding (SD) algorithm. In this paper, we examine the application of complex rather than real SD to detect MC-CDMA, which solves many problems more elegantly and extends the adaptability of SD to any constellation. We first propose a new complex SD algorithm whose efficiency stems from not requiring an estimate of the initial search radius: the distance to the Babai point is used as the initial sphere radius instead, and efficient strategies for sorting the list of candidate lattice points are applied. Indeed, complex SD allows complex matrix operations, which are faster than their real counterparts in double dimension. Next, a novel lattice representation for the MC-CDMA system is introduced, which allows optimum multiuser detection directly from the received signal. This avoids the noise-whitening operation, and despreading and equalization are no longer required at the receiver side. This work has been partly funded by the Spanish government through the national projects MACAWI (TEC2005-07477-C02-02) and MAMBO (UC3M-TEC-05-027).
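The Babai point mentioned above can be obtained cheaply from the unconstrained least-squares solution, and its residual norm is a valid initial sphere radius, since the Babai point is itself a candidate inside the sphere. The following is a minimal, hypothetical sketch of that idea for a toy 2x2 complex channel with QPSK symbols; the names, channel and noise level are illustrative, not the paper's actual MC-CDMA setup:

```python
import numpy as np

def babai_radius(H: np.ndarray, y: np.ndarray, alphabet: np.ndarray):
    """Babai point via zero-forcing: solve the unconstrained least-squares
    problem, then round each coordinate to the nearest constellation symbol.
    The residual norm gives a valid initial sphere radius, because the
    Babai point is itself a lattice point inside the resulting sphere."""
    x_ls = np.linalg.lstsq(H, y, rcond=None)[0]
    # Round each entry to the closest symbol of the (complex) alphabet.
    x_babai = alphabet[np.argmin(np.abs(x_ls[:, None] - alphabet[None, :]), axis=1)]
    radius = np.linalg.norm(y - H @ x_babai)
    return x_babai, radius

# QPSK alphabet and a toy 2x2 complex channel (illustrative values only).
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = qpsk[[0, 3]]                                   # "transmitted" symbols
y = H @ x + 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

x_hat, r = babai_radius(H, y, qpsk)
```

A full sphere decoder would then enumerate only the lattice points whose distance to y is below r, shrinking the radius each time a closer point is found.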

    Quality estimation of the electrocardiogram using cross-correlation among leads

    Background: Fast and accurate quality estimation of the electrocardiogram (ECG) signal is a relevant research topic that has attracted considerable interest in the scientific community, particularly due to its impact on telemedicine monitoring systems, where the ECG is collected by untrained technicians. In recent years, a number of studies have addressed this topic, showing poor performance in discriminating between clinically acceptable and unacceptable ECG records. Methods: This paper presents a novel, simple and accurate algorithm to estimate the quality of the 12-lead ECG by exploiting the structure of the cross-covariance matrix among different leads. Ideally, ECG signals from different leads should be highly correlated, since they capture the same electrical activation process of the heart; in the presence of noise or artifacts, however, the covariance among these signals is affected. The eigenvalues of the covariance matrix of the ECG signals are fed into three different supervised binary classifiers. Results and conclusion: The performance of these classifiers was evaluated using PhysioNet/CinC Challenge 2011 data. Our best quality classifier achieved an accuracy of 0.898 on the test set, with a complexity well below that of the contestants who participated in the Challenge, making it suitable for implementation on current cellular devices. Funded by the National Institute of General Medical Sciences (U.S.) (Grant R01GM104987) and Spain (Research Grants TEC2013-46067-R, TEC2013-48439-C4-1-R and TEC2010-19263).
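As a rough illustration of the feature-extraction step, the sketch below (with synthetic, non-ECG data and hypothetical names) computes the normalized eigenvalues of the inter-lead covariance matrix. For a clean record the leads are nearly proportional, so the spectrum is dominated by one eigenvalue; noise spreads the energy across the spectrum:

```python
import numpy as np

def lead_covariance_eigenvalues(ecg: np.ndarray) -> np.ndarray:
    """Eigenvalues of the inter-lead covariance matrix, sorted descending
    and normalized to sum to 1 so the features are scale-free.

    ecg: array of shape (n_leads, n_samples), e.g. (12, N)."""
    cov = np.cov(ecg)                  # (n_leads, n_leads) covariance matrix
    eig = np.linalg.eigvalsh(cov)      # ascending; real, since cov is symmetric
    eig = eig[::-1]
    return eig / eig.sum()

# Synthetic illustration (not real ECG): correlated vs. heavily noisy "leads".
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
base = np.sin(2 * np.pi * 1.2 * t)                       # shared "cardiac" source
clean = np.vstack([(i + 1) * base for i in range(12)])   # leads = scaled copies
noisy = clean + 3.0 * rng.standard_normal(clean.shape)

ev_clean = lead_covariance_eigenvalues(clean)
ev_noisy = lead_covariance_eigenvalues(noisy)
```

Here ev_clean is dominated by its first entry, while ev_noisy is flatter; such eigenvalue vectors are then usable as input features for a supervised classifier.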

    Automatic diagnosis of tuberculosis: a decision under uncertainty

    In hypothesis testing, it is not always possible to define the hypotheses precisely, or to relate them directly to the available data. This thesis develops statistical tools for testing hypotheses defined with uncertainty and applies them to the automatic diagnosis of tuberculosis. Methods to measure the performance achievable from the training data are proposed, together with tests that account for the uncertainty in the hypotheses and moderate the decision probabilities accordingly. The thesis also addresses the machine-learning equivalent of the likelihood ratio for situations where the data distributions are unknown and a single test sample does not provide the required performance. It begins with an introduction to tuberculosis, why it is a health problem and which techniques are used for its diagnosis, focusing on automatic approaches based on the analysis of sputum images and their main drawbacks. It then briefly reviews statistical decision theory, including parametric and non-parametric hypothesis testing, sequential tests, confidence-region estimation and the support vector machine. Next, it introduces the testing of uncertain hypotheses and proposes methods for it from both the frequentist and the Bayesian points of view, deriving upper bounds on the error probabilities in the frequentist setting and on the a posteriori probability of each hypothesis in the Bayesian setting. The problem of classifying a set of samples of the same class is then considered from a machine-learning perspective: a new method "extends" the training data so that a classifier trained on the extended data produces a single output for the whole set of test samples. The method is evaluated empirically on several public databases, and its complexity is analysed when the support vector machine is used as the classifier. Finally, the thesis proposes an automatic diagnosis system for tuberculosis patients capable of processing sputum images, at the rate they are captured from the microscope, in search of Koch bacilli. However, it is not straightforward to define a healthy patient, because it is very difficult to build a classifier whose probability of wrongly declaring a bacillus is zero. Here the methods described above provide a decision about the patient's state that accounts for the uncertainty in the definition of a healthy patient, and they bound the performance achievable from the examples of both types of patients.

    Extended input space support vector machine

    In some applications, the probability of error of a given classifier is too high for its practical application, but we are allowed to gather more independent test samples from the same class to reduce the probability of error of the final decision. From the point of view of hypothesis testing, the solution is given by the Neyman-Pearson lemma. However, there is no equivalent result to the Neyman-Pearson lemma when the likelihoods are unknown and we are given a training dataset. In this brief, we explore two alternatives. First, we combine the soft (probabilistic) outputs of a given classifier to produce a consensus labeling for K test samples. In the second approach, we build a new classifier that directly computes the label for K test samples. For this second approach, we need to define an extended input space training set and incorporate the known symmetries in the classifier. This latter approach gives more accurate results, as it only requires an accurate classification boundary, while the former needs an accurate posterior probability estimate for the whole input space. We illustrate our results with well-known databases.
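The first alternative, fusing the soft outputs of K samples known to share one label, can be sketched as follows. Assuming the K samples are independent given the class, the joint posterior is proportional to the product of the individual posteriors (names and data below are illustrative, not the brief's actual setup):

```python
import numpy as np

def consensus_label(probs: np.ndarray) -> int:
    """Fuse soft classifier outputs for K samples that share one label.

    probs: shape (K, n_classes), each row a posterior estimate for one sample.
    Under class-conditional independence, the joint posterior is proportional
    to the product of rows; log-probabilities are summed for numerical
    stability, and ties resolve to the lowest class index."""
    log_joint = np.log(np.clip(probs, 1e-12, 1.0)).sum(axis=0)
    return int(np.argmax(log_joint))

# Two of the three noisy votes lean toward class 1, so the consensus is 1.
p = np.array([[0.6, 0.4],
              [0.3, 0.7],
              [0.2, 0.8]])
label = consensus_label(p)   # 0.4*0.7*0.8 > 0.6*0.3*0.2, so label == 1
```

As the brief notes, this fusion is only as good as the posterior estimates themselves, which motivates the second, extended-input-space approach.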

    A two-phase heuristic evolutionary algorithm for personalizing course timetables: a case study in a Spanish university

    This paper presents, as a case study, the application of a two-phase heuristic evolutionary algorithm to obtain personalized timetables at a Spanish university. In the first phase, a heuristic, starting from an initial ordering of the students, allocates students into groups, taking the students' preferences as the primary factor for the assignment. In the second phase, an evolutionary algorithm selects the ordering of students that provides the best assignment. The algorithm has been tested on a real problem, the timetable of the Telecommunication Engineering School at Universidade de Vigo (Spain), and has shown good performance in terms of the number of constraints fulfilled and groups assigned to students.
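A deliberately small sketch of the two-phase idea on a hypothetical toy instance; the greedy rule, the single-individual mutation search and all data below are illustrative simplifications, not the paper's actual algorithm:

```python
import random

# Hypothetical toy instance: group capacities and each student's ranked preferences.
CAPACITY = {"G1": 2, "G2": 2, "G3": 2}
PREFS = {
    "ana":   ["G1", "G2", "G3"],
    "bea":   ["G1", "G3", "G2"],
    "carla": ["G1", "G2", "G3"],
    "dan":   ["G2", "G1", "G3"],
    "eva":   ["G3", "G2", "G1"],
    "fran":  ["G2", "G3", "G1"],
}

def greedy_assign(order):
    """Phase 1: scan students in `order`, giving each the best group still free.
    Cost counts how far down each student's preference list we had to go."""
    free = dict(CAPACITY)
    assignment, cost = {}, 0
    for s in order:
        for rank, g in enumerate(PREFS[s]):
            if free[g] > 0:
                free[g] -= 1
                assignment[s] = g
                cost += rank          # 0 means the student got their first choice
                break
    return assignment, cost

def evolve(generations=50, pop_size=20, seed=0):
    """Phase 2: evolutionary search over student orderings (swap mutation),
    keeping the ordering whose greedy assignment has the lowest cost."""
    rng = random.Random(seed)
    students = list(PREFS)
    pop = [rng.sample(students, len(students)) for _ in range(pop_size)]
    best = min(pop, key=lambda o: greedy_assign(o)[1])
    for _ in range(generations):
        child = best[:]
        i, j = rng.sample(range(len(child)), 2)   # swap two positions
        child[i], child[j] = child[j], child[i]
        if greedy_assign(child)[1] <= greedy_assign(best)[1]:
            best = child
    return greedy_assign(best)

assignment, cost = evolve()
```

The design point is that the evolutionary operators act on the *ordering*, while feasibility (capacities, preference ranks) is always enforced by the greedy phase, so every individual decodes to a valid timetable.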

    On the Early Detection of Perinatal Hypoxia with Information-Theory based Methods

    Abstract: Perinatal hypoxia is a severe condition that may harm fetal organs permanently. When the fetal brain is partially deprived of oxygen, the control of the fetal heart rate (FHR) is affected. We hypothesized that advanced processing of the FHR can reveal whether the fetus is under perinatal hypoxia. We analyzed FHR morphology with the normalized compression distance (NCD), which compares two arbitrary sequences and outputs their dissimilarity. This parameter-free measure exploits linear and non-linear relations in the data and allows the comparison of sequences of different sizes. It was applied to raw FHR sequences and to a set of statistics computed from them (e.g. moments over 5-minute signal windows). We classified the cases from the NCD dissimilarity matrix using a simple nearest-neighbor classifier and leave-one-out cross-validation. The best results in a database with 26 FHR recordings (13 controls and 13 cases) were obtained with the third-order central moment calculated over sliding 5-minute windows on the interval from 4 to 3 hours before delivery. The resulting accuracy was 0.88, with sensitivity 0.92 and specificity 0.85.
    Introduction: Perinatal hypoxia is caused by the lack of oxygenation of tissues and may cause serious sequelae, such as brain or adrenal hemorrhage, necrotizing enterocolitis, delayed neurological development, mental handicap, seizures (West syndrome), or cerebral palsy. Continuous electronic fetal monitoring, or cardiotocography (CTG), consists in the simultaneous evaluation of the fetal heart rate (FHR) and the uterine activity. Automatic analyses of the CTG have also been proposed; for instance, it has been shown that automatic ST analysis combined with CTG increases the ability of obstetricians to identify hypoxia. In this paper we analyze the readily available FHR to determine whether the fetus is suffering hypoxia. We decide using a nearest-neighbor (NN) classifier with a general information-theoretic measure, the normalized compression distance (NCD), as the distance.
    ISSN 2325-8861. Computing in Cardiology 2013; 40:425-428.
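The normalized compression distance used above can be approximated with any off-the-shelf compressor. A minimal sketch with zlib, on toy byte sequences standing in for quantized FHR traces (the data are illustrative, not real recordings):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length of `data` (zlib at maximum compression level)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    close to 0 for very similar sequences and near 1 for unrelated ones."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy byte sequences standing in for quantized FHR traces (illustrative only).
a = bytes([120, 121, 119, 120, 122] * 100)          # regular pattern
b = bytes([120, 121, 119, 120, 122] * 100)          # identical pattern
r = bytes((i * 37 + 11) % 256 for i in range(500))  # unrelated sequence

# A sequence is far closer (in NCD) to a copy of itself than to unrelated data.
close, far = ncd(a, b), ncd(a, r)
```

Pairwise NCD values over all recordings yield the dissimilarity matrix on which a nearest-neighbor classifier can operate directly, with no feature engineering beyond the choice of input statistics.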