37 research outputs found

    Efficient complex sphere decoding for MC-CDMA systems

    Maximum likelihood (ML) joint detection of multi-carrier code division multiple access (MC-CDMA) systems can be efficiently implemented with a sphere decoding (SD) algorithm. In this paper, we examine the application of complex rather than real SD to detect MC-CDMA, which solves many problems more elegantly and extends SD to any constellation. We first propose a new complex SD algorithm whose efficiency stems from not requiring an estimate of the initial search radius: the distance to the Babai point is used as the initial sphere radius instead, and efficient strategies for sorting the list of candidate lattice points are applied. Moreover, complex SD allows complex matrix operations, which are faster than their real counterparts in double dimension. Next, a novel lattice representation for the MC-CDMA system is introduced, which allows optimum multiuser detection directly from the received signal. This avoids the noise-whitening operation, and despreading and equalization procedures are no longer required at the receiver side. This work has been partly funded by the Spanish government with national project MACAWI (TEC 2005-07477-c02-02) and project MAMBO (UC3M-TEC-05-027).
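    The Babai-point initialisation described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: the function names, the square channel model y = H s + n, and the QPSK constellation are assumptions made for the example.

    ```python
    import numpy as np

    def babai_point(H, y, constellation):
        """Babai nearest-plane point: QR-decompose H, then detect symbols by
        successive back-substitution, slicing each estimate to the nearest
        constellation symbol (works directly on complex matrices)."""
        Q, R = np.linalg.qr(H)
        z = Q.conj().T @ y
        n = H.shape[1]
        s = np.zeros(n, dtype=complex)
        for k in range(n - 1, -1, -1):
            u = (z[k] - R[k, k + 1:] @ s[k + 1:]) / R[k, k]
            s[k] = constellation[np.argmin(np.abs(constellation - u))]
        return s

    def initial_radius(H, y, constellation):
        """Distance from the received vector to the Babai lattice point,
        used as the initial sphere radius (no radius estimate needed)."""
        s0 = babai_point(H, y, constellation)
        return np.linalg.norm(y - H @ s0), s0
    ```

    Because the Babai point is itself a valid lattice point, the search sphere is guaranteed to be non-empty, which is what removes the need for a separate radius estimate.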

    Extended Input Space Support Vector Machine

    In some applications, the probability of error of a given classifier is too high for its practical application, but we are allowed to gather more independent test samples from the same class to reduce the probability of error of the final decision. From the point of view of hypothesis testing, the solution is given by the Neyman-Pearson lemma. However, there is no equivalent result to the Neyman-Pearson lemma when the likelihoods are unknown, and we are given a training dataset. In this brief, we explore two alternatives. First, we combine the soft (probabilistic) outputs of a given classifier to produce a consensus labeling for K test samples. In the second approach, we build a new classifier that directly computes the label for K test samples. For this second approach, we need to define an extended input space training set and incorporate the known symmetries in the classifier. This latter approach gives more accurate results, as it only requires an accurate classification boundary, while the former needs an accurate posterior probability estimate for the whole input space. We illustrate our results with well-known databases.
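    The first alternative, combining soft outputs into a consensus label for K samples known to share a class, can be sketched as follows. This is an illustrative assumption, not the brief's exact scheme: it fuses per-sample posteriors under an independence assumption by summing log-probabilities.

    ```python
    import numpy as np

    def consensus_label(posteriors):
        """Consensus decision for K test samples that share an unknown label.
        posteriors: (K, C) array, row k holding p(class | sample k).
        Under independence, the joint posterior is proportional to the
        product over samples, so we sum logs and take the argmax."""
        logp = np.sum(np.log(np.clip(posteriors, 1e-12, None)), axis=0)
        return int(np.argmax(logp))
    ```

    Note how a sample with a confident posterior (e.g. 0.8 for one class) can outvote two mildly confident samples for the other class, which a simple majority vote on hard labels would not allow.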

    Quality estimation of the electrocardiogram using cross-correlation among leads

    Background: Fast and accurate quality estimation of the electrocardiogram (ECG) signal is a relevant research topic that has attracted considerable interest in the scientific community, particularly due to its impact on tele-medicine monitoring systems, where the ECG is collected by untrained technicians. In recent years, a number of studies have addressed this topic, showing poor performance in discriminating between clinically acceptable and unacceptable ECG records. Methods: This paper presents a novel, simple and accurate algorithm to estimate the quality of the 12-lead ECG by exploiting the structure of the cross-covariance matrix among different leads. Ideally, ECG signals from different leads should be highly correlated, since they capture the same electrical activation process of the heart. In the presence of noise or artifacts, however, the covariance among these signals is affected. Eigenvalues of the ECG signal covariance matrix are fed into three different supervised binary classifiers. Results and conclusion: The performance of these classifiers was evaluated using PhysioNet/CinC Challenge 2011 data. Our best quality classifier achieved an accuracy of 0.898 on the test set, with a complexity well below that of the contestants who participated in the Challenge, thus making it suitable for implementation in current cellular devices. This work was supported by the National Institute of General Medical Sciences (U.S.) (Grant R01GM104987) and Spain (Research Grants TEC2013-46067-R, TEC2013-48439-C4-1-R, TEC2010-19263).
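    A minimal sketch of the feature-extraction step described above, assuming (as an illustration, not the paper's exact pipeline) that the eigenvalue spectrum of the inter-lead covariance matrix is normalised before being fed to a classifier:

    ```python
    import numpy as np

    def quality_features(ecg):
        """ecg: (n_leads, n_samples) array of one multi-lead recording.
        Returns the eigenvalues of the covariance matrix among leads,
        sorted in descending order and normalised to sum to one.
        Highly correlated (clean) leads concentrate the energy in the
        first eigenvalue; noise and artifacts spread it out."""
        C = np.cov(ecg)                   # (n_leads, n_leads) covariance
        ev = np.linalg.eigvalsh(C)[::-1]  # descending eigenvalues
        return ev / ev.sum()              # scale-free feature vector
    ```

    The normalised spectrum makes the feature independent of signal amplitude, so the downstream binary classifier only sees how correlated the leads are, not how large the signal is.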

    Interactively solving school timetabling problems using extensions of constraint programming

    Timetabling problems have been frequently studied due to their wide range of applications. However, they are often solved manually because of the lack of appropriate computer tools. Although many approaches, mainly based on local search or constraint programming, have been quite successful in recent years, they are often highly dedicated to specific problems and have difficulty taking into account the dynamic and over-constrained nature of such problems. We were confronted with such an over-constrained and dynamic problem in our institution. This paper deals with a timetabling system based on constraint programming that uses explanations to offer dynamic behaviour and to allow automatic relaxation of constraints. Our tool has successfully answered the needs of the current planner by providing solutions in a few minutes instead of a week of manual design. We present the techniques used, the results obtained, and a discussion of the effects of automating the timetabling process.
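    The idea of automatically relaxing constraints in an over-constrained problem can be illustrated with a toy example. This sketch is an assumption for illustration only (the paper uses a constraint-programming solver with explanations, not brute-force enumeration): hard constraints must hold, while each soft constraint carries a weight and the cheapest set of violated soft constraints is accepted.

    ```python
    from itertools import product

    def solve(slots, classes, hard, soft):
        """Toy over-constrained timetabler: enumerate all assignments of
        classes to slots, discard those violating a hard constraint, and
        keep the one whose violated soft constraints have minimum total
        weight (i.e. the cheapest relaxation)."""
        best, best_cost = None, float("inf")
        for assign in product(slots, repeat=len(classes)):
            a = dict(zip(classes, assign))
            if any(not c(a) for c in hard):
                continue                     # hard constraints are never relaxed
            cost = sum(w for w, c in soft if not c(a))
            if cost < best_cost:
                best, best_cost = a, cost
        return best, best_cost
    ```

    In the example below, two classes both prefer slot 1 but may not share a slot, so no assignment satisfies everything; the solver relaxes the weight-1 preference rather than the weight-2 one.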

    Cell lineage visualisation

    Cell lineages describe the developmental history of cell populations and are produced by combining time-lapse imaging and image processing. Biomedical researchers study cell lineages to understand fundamental processes, such as cell differentiation and the pharmacodynamic action of anticancer agents. Yet, the interpretation of cell lineages is hindered by their complexity and insufficient capacity for visual analysis. We present a novel approach for interactive visualisation of cell lineages. Based on an understanding of cellular biology and live-cell imaging methodology, we identify three requirements: multimodality (cell lineages combine spatial, temporal, and other properties), symmetry (related to lineage branching structure), and synchrony (related to temporal alignment of cellular events). We address these by combining visual summaries of the spatiotemporal behaviour of an arbitrary number of lineages, including variation from average behaviour, with node-link representations that emphasise the presence or absence of symmetry and synchrony. We illustrate the merit of our approach by presenting a real-world case study where the cytotoxic action of the anticancer drug topotecan was determined.
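    The symmetry requirement mentioned above can be made concrete with a small data-structure sketch. This is a hypothetical illustration, not the paper's data model: a lineage is a binary tree of cells, and a division is called symmetric when both daughter subtrees have the same branching shape.

    ```python
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Cell:
        birth: float                      # time of birth (e.g. hours)
        division: Optional[float] = None  # time of division; None for terminal cells
        children: List["Cell"] = field(default_factory=list)

    def shape(cell):
        """Canonical form of a cell's branching structure, ignoring timing,
        so that mirror-image subtrees compare equal."""
        return tuple(sorted(shape(c) for c in cell.children))

    def is_symmetric(cell):
        """A division is symmetric when both daughter subtrees share a shape."""
        return len(cell.children) == 2 and \
            shape(cell.children[0]) == shape(cell.children[1])
    ```

    A node-link view would then, for example, colour divisions by the result of `is_symmetric`, making asymmetric branching visible at a glance; synchrony would be judged analogously on the timing fields rather than the shape.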

    A survey of visualisation for live cell imaging

    Live cell imaging is an important biomedical research paradigm for studying dynamic cellular behaviour. Although phenotypic data derived from images are difficult to explore and analyse, some researchers have successfully addressed this with visualisation. Nonetheless, visualisation methods for live cell imaging data have been reported in an ad hoc and fragmented fashion. This leads to a knowledge gap where it is difficult for biologists and visualisation developers to evaluate the advantages and disadvantages of different visualisation methods, and for visualisation researchers to gain an overview of existing work to identify research priorities. To address this gap, we survey existing visualisation methods for live cell imaging from a visualisation research perspective for the first time. Based on recent visualisation theory, we perform a structured qualitative analysis of visualisation methods that includes characterising the domain and data, abstracting tasks, and describing visual encoding and interaction design. Based on our survey, we identify and discuss research gaps that future work should address: the broad analytical context of live cell imaging; the importance of behavioural comparisons; links with dynamic data visualisation; the consequences of different data modalities; shortcomings in interactive support; and, in addition to analysis, the value of the presentation of phenotypic data and insights to other stakeholders.

    Automatic diagnosis of tuberculosis: a decision under uncertainty

    In hypothesis testing, it is not always possible to define the hypotheses precisely, or to relate them directly to the available data. This thesis develops statistical tools for testing hypotheses defined under uncertainty and applies them to the automatic diagnosis of tuberculosis. Methods to measure the performance achievable from the training data are proposed, along with tests that take the uncertainty in the hypotheses into account and moderate the decision probabilities accordingly. The thesis also addresses the machine-learning equivalent of the likelihood ratio for situations in which the data distributions are unknown and a single test sample does not provide the required performance. It begins with an introduction to tuberculosis: why it is a health problem and what the diagnostic techniques are, focusing on automatic approaches based on the analysis of sputum images and their principal shortcomings. It then briefly reviews statistical decision theory, covering parametric and non-parametric hypothesis testing, sequential tests, confidence-region estimation, and the support vector machine. Next, it introduces the testing of uncertain hypotheses and proposes methods for it from both the frequentist and the Bayesian points of view, deriving upper bounds on the error probabilities in the frequentist setting and on the posterior probability of each hypothesis in the Bayesian setting. The problem of classifying a set of samples of the same class is then considered from a machine-learning perspective: a new method "extends" the training data so that a classifier trained on these extended data produces a single output for the whole set of samples. The method is validated empirically on several public databases, and its complexity is analysed when the support vector machine is used as the classifier. Finally, an automatic system for the diagnosis of tuberculosis patients is proposed, capable of processing sputum images at the rate at which the microscope captures them while searching for Koch's bacillus. Defining a healthy patient is not straightforward, however, because it is very difficult to build a classifier whose probability of falsely declaring a bacillus is zero. Here the methods described above provide a decision on the patient's state that accounts for the uncertainty in the definition of a healthy patient, and they bound the performance achievable from the available examples of both kinds of patients.
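    The sequential-testing ingredient mentioned in the abstract, deciding from as many samples as are needed to reach a target performance, is classically embodied by Wald's sequential probability ratio test (SPRT). A minimal sketch, with function names and the Gaussian-shift example chosen for illustration rather than taken from the thesis:

    ```python
    import numpy as np

    def sprt(samples, llr, alpha=0.01, beta=0.01):
        """Wald's SPRT: accumulate log-likelihood ratios log p1(x)/p0(x)
        until the running sum crosses one of two thresholds set by the
        target false-alarm (alpha) and miss (beta) probabilities.
        Returns the decision and the number of samples consumed."""
        upper = np.log((1 - beta) / alpha)   # cross it: decide H1
        lower = np.log(beta / (1 - alpha))   # cross it: decide H0
        s = 0.0
        for k, x in enumerate(samples, 1):
            s += llr(x)
            if s >= upper:
                return "H1", k
            if s <= lower:
                return "H0", k
        return "undecided", len(samples)
    ```

    For H0: N(0, 1) versus H1: N(1, 1), the per-sample log-likelihood ratio is x - 0.5, so strongly H1-like samples drive the sum up and the test stops as soon as the evidence suffices, rather than after a fixed number of samples.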

    Is the gerundio "de posterioridad" (gerund of posteriority) a syntactic Arabism?

