
    A K Nearest Classifier design

    This paper presents a multi-classifier system design controlled by the topology of the learning data. Our work also introduces a training algorithm for an incremental self-organizing map (SOM). The SOM is used to distribute classification tasks to a set of classifiers, so that only the relevant classifiers are activated when new data arrive. Comparative results are given for synthetic problems, for an image segmentation problem from the UCI repository, and for a handwritten digit recognition problem.
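    The routing idea can be illustrated with a small sketch: a SOM is trained on the learning data, a local k-nearest-neighbour classifier is attached to each node, and a new sample is handled only by the classifier of its winning node. This is a minimal approximation of the design described above, not the paper's algorithm; the incremental growth of the map is omitted, and the function names, the chain topology and the parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_som(X, n_nodes=6, epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Train a tiny 1-D SOM (chain topology) with a Gaussian neighbourhood."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for epoch in range(epochs):
        eta = lr * (1 - epoch / epochs)                        # decaying learning rate
        sig = max(sigma * (1 - epoch / epochs), 0.5)           # shrinking neighbourhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sig ** 2))  # neighbourhood weights
            W += eta * h[:, None] * (x - W)                    # pull nodes towards x
    return W

def train_local_classifiers(X, y, W, k=3):
    """Attach one k-NN classifier per SOM node, trained on the samples it wins."""
    bmus = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    clfs = {}
    for node in range(len(W)):
        mask = bmus == node
        if mask.any():
            clfs[node] = KNeighborsClassifier(
                n_neighbors=min(k, int(mask.sum()))).fit(X[mask], y[mask])
    return clfs

def classify(x, W, clfs):
    """Route a new sample to the classifier of its winning node."""
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))
    return clfs[bmu].predict(x[None])[0] if bmu in clfs else None
```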

    Fisher Vectors Derived from Hybrid Gaussian-Laplacian Mixture Models for Image Annotation

    In the traditional object recognition pipeline, descriptors are densely sampled over an image, pooled into a high-dimensional non-linear representation and then passed to a classifier. In recent years, Fisher Vectors have proven empirically to be the leading representation for a large variety of applications. The Fisher Vector is typically taken as the gradients of the log-likelihood of the descriptors with respect to the parameters of a Gaussian Mixture Model (GMM). Motivated by the assumption that different distributions should be applied to different datasets, we present two other mixture models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM), which is based on a weighted geometric mean of the Gaussian and Laplacian distributions. An interesting property of the Expectation-Maximization algorithm for the latter is that in the maximization step, each dimension in each component is chosen to be either a Gaussian or a Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we achieve state-of-the-art results for both the image annotation and the sentence-based image search tasks.
    Comment: the new version includes text synthesis by an RNN and experiments with the COCO benchmark.
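    For reference, the standard Fisher Vector that the paper generalises can be sketched as the gradient of the GMM log-likelihood with respect to the component means. The snippet below is an illustrative GMM baseline only; the LMM and HGLMM encodings replace the Gaussian gradient term with Laplacian or per-dimension hybrid terms, which are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    """Mean-gradient part of the Fisher Vector for a set of local descriptors."""
    T = len(descriptors)
    gamma = gmm.predict_proba(descriptors)                       # (T, K) posteriors
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    diff = (descriptors[:, None, :] - mu[None]) / sigma[None]    # (T, K, D)
    G = (gamma[:, :, None] * diff).sum(axis=0)                   # sum over descriptors
    G /= T * np.sqrt(w)[:, None]                                 # Fisher normalisation
    return G.ravel()                                             # (K*D,) encoding

# Usage: fit a diagonal-covariance GMM on training descriptors, then encode an image.
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(np.random.randn(500, 16))
fv = fisher_vector_means(np.random.randn(100, 16), gmm)
```

    In practice the resulting vectors are usually power- and L2-normalised before being passed to a linear classifier.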

    Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.

    Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level contextual and low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of mTBI. The visual model uses texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. The two models are fused to obtain a final estimate of the locations of the mTBI lesions. The models are tested on a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual texture features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit both clinicians, by speeding diagnosis, and patients, by improving clinical care.
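    The fusion of a contextual prior with a probabilistic SVM can be sketched as below. This is a generic stand-in, not the paper's exact model: `texture_feats` and `context_prior` are assumed inputs, and the product-style fusion rule is a simplification chosen for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def train_visual_model(texture_feats, labels):
    # Platt-scaled SVM gives P(lesion | texture) per voxel/patch.
    return SVC(kernel="rbf", probability=True).fit(texture_feats, labels)

def fuse(svm, texture_feats, context_prior):
    """Combine the visual posterior with a contextual prior map (both in [0, 1])."""
    p_visual = svm.predict_proba(texture_feats)[:, 1]
    num = p_visual * context_prior
    den = num + (1 - p_visual) * (1 - context_prior) + 1e-12
    return num / den          # fused P(lesion) per voxel/patch
```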

    Theoretical Interpretations and Applications of Radial Basis Function Networks

    Medical applications usually use Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. The interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
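    Under the regularization-network reading, an RBFN reduces to Gaussian basis functions at fixed centres followed by a linear output layer fitted by ridge-regularised least squares, as in the following minimal sketch; the choice of centres, width and regularisation strength here are illustrative assumptions.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian design matrix Phi[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbfn(X, y, centres, width, lam=1e-3):
    """Solve the ridge-regularised normal equations for the output weights."""
    Phi = rbf_design(X, centres, width)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centres)), Phi.T @ y)

def predict_rbfn(X, centres, width, w):
    return rbf_design(X, centres, width) @ w
```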

    Pathology detection mechanisms through continuous acquisition of biological signals

    Pattern identification is a well-known technology used on a daily basis for both identification and authentication; examples include biometric identification (fingerprint or facial), number plate recognition and voice recognition. In medical diagnostics, however, the picture changes substantially. The field adopts many recent innovations and technologies, but applications of pattern recognition to diagnosis are harder to find, and the cases that do exist are always supervised by a specialist and performed in controlled environments. This is to be expected: in this field a false negative (failing to identify a pathology that does exist) can be critical and have serious consequences for the patient. It can be mitigated by configuring the algorithm to be conservative about false negatives, but this raises the false positive rate, which at best increases the specialist's workload and at worst results in treatment being given to a patient who does not need it. In many cases the algorithm's decision therefore has to be validated by a specialist, although there are settings where this validation is less essential, or where the first identification can be treated as a guideline that helps the specialist. With this objective in mind, this thesis focuses on the development of an algorithm for identifying lower-body pathologies.
    The identification is based on the way people walk (their gait). Gait differs from one person to another, to the point of enabling biometric identification, but when a person has a pathology, whether physical or psychological, their gait is affected, and this alteration produces a common pattern that depends on the type of pathology. This thesis focuses exclusively on identifying physical pathologies. Another important aspect is that the algorithms are designed with portability in mind, so that users are not forced to walk under excessive restrictions of clothing or location.
    First, different algorithms are developed using different smartphone configurations for database acquisition; in particular, configurations with 1, 2 and 4 phones are used. The phones are attached to the legs with special holders so that they cannot move freely. Once all the walks have been captured, the signals are filtered to remove noise and then processed to extract the gait cycles (each corresponding to two steps) that make up the walks. Features are extracted from each cycle; part of them are used to train different machine learning algorithms, which are then used to classify the remaining samples. However, the evidence obtained from the experiments with the different configurations and algorithms indicates that pathology identification with smartphones is not feasible, mainly because of three factors: the quality of the signals captured by the phones, the unstable sampling frequency, and the lack of synchrony between the phones.
    Second, given the poor results obtained with smartphones, the capture device is changed to a professional motion acquisition system, and two types of algorithm are proposed: one based on neural networks and the other based on the algorithms used previously.
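    The per-cycle pipeline described above (and reused, with a different capture device, in the second part) can be sketched roughly as follows. The filter parameters, the peak-based cycle segmentation, the statistical features and the random-forest classifier are assumed choices for illustration, not the thesis's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.ensemble import RandomForestClassifier

def lowpass(signal, fs, cutoff=5.0, order=4):
    """Low-pass Butterworth filter to remove sensor noise."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, signal)

def split_cycles(signal, fs):
    """A gait cycle spans two steps; heel-strike-like peaks delimit the cycles."""
    peaks, _ = find_peaks(signal, distance=int(0.8 * fs))
    return [signal[a:b] for a, b in zip(peaks[:-1], peaks[1:])]

def cycle_features(cycle):
    return [cycle.mean(), cycle.std(), cycle.max() - cycle.min(), len(cycle)]

def train(walks, labels, fs):
    """walks: list of 1-D acceleration signals; labels: pathology label per walk."""
    X, y = [], []
    for sig, lab in zip(walks, labels):
        for cyc in split_cycles(lowpass(np.asarray(sig, float), fs), fs):
            X.append(cycle_features(cyc))
            y.append(lab)
    return RandomForestClassifier().fit(X, y)
```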
    A new database is first acquired. To facilitate data capture, a procedure is established that keeps the environment as unconstrained as possible for the user. Once all the data are available, the preprocessing is similar to the one applied previously: the signals are filtered to remove noise and the gait cycles that make up the walks are extracted. However, since the capture device provides information from several sensors and several positions, instead of a common cut-off frequency an empirical cut-off frequency is set for each signal and position. With the data ready, a recurrent neural network based on the literature is built as a first approximation to the problem, and, given its feasibility, different experiments are carried out to improve its performance.
    Finally, the other algorithm picks up the approach from the first part of the thesis. As before, it is based on parameterising the gait cycles and uses machine-learning classifiers. Unlike time signals, parameterised cycles can contain spurious data, so the dataset undergoes a preparation phase (cleaning and scaling). The prepared dataset is then split in two: one part is used to train the algorithms, which are then used to classify the remaining samples. The results of these experiments validate the feasibility of this algorithm for pathology detection. Further experiments aim to reduce the amount of information needed to identify a pathology without compromising accuracy; from them it can be concluded that pathologies can be detected using only 2 sensors placed on one leg.
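    The data-preparation stage of the second algorithm (cleaning, scaling and a train/test split over parameterised cycles) might look like the following sketch, with assumed choices: a simple 3-sigma rule for dropping spurious rows, z-score scaling and an SVM classifier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def prepare_and_classify(cycle_params, labels):
    """cycle_params: one feature row per gait cycle; labels: pathology label per cycle."""
    X, y = np.asarray(cycle_params, float), np.asarray(labels)
    ok = np.all(np.abs(X - X.mean(0)) <= 3 * X.std(0), axis=1)   # drop spurious rows
    X, y = X[ok], y[ok]
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)                           # scale on training data only
    clf = SVC().fit(scaler.transform(X_tr), y_tr)
    return clf.score(scaler.transform(X_te), y_te)                # held-out accuracy
```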
    Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. President: María del Carmen Sánchez Ávila. Secretary: Mariano López García. Committee member: Richard Matthew Gues

    Determining the Standard Value of Oily Distortion in Fingerprint Image Acquisition (Penentuan Nilai Standar Distorsi Berminyak Pada Akuisisi Citra Sidik Jari)

    This research describes a novel procedure for determining the standard value of oily distortion in fingerprint image acquisition, based on clarity scores and the ridge-valley thickness ratio. The fingerprint image is divided into blocks of 32 x 32 pixels. Within each block, an orientation line perpendicular to the ridge direction is computed. From the centre of the block, along the ridge direction, a two-dimensional (2-D) vector V1 (a slanted square) of 32 x 13 pixels is extracted and transformed into a vertical 2-D vector V2. Linear regression applied to the one-dimensional (1-D) vector V3 gives the determinant threshold (DT1); regions below DT1 are ridges, and the rest are valleys. Tests are carried out by computing the clarity of the image from the overlapping area of the grey-level distributions of the separated ridges and valleys. The ridge-to-valley thickness ratio is computed per block from the grey-level values in the direction normal to the ridge, and the average is taken over the whole image. The results show that a fingerprint acquisition is considered oily when the image has a local clarity score (LCS) between 0.01446 and 0.01550, a global clarity score (GCS) between 0.01186 and 0.01230, and a ridge-valley thickness ratio (RVTR) between 6.98E-05 and 7.22E-05.
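    The block-level ridge/valley separation can be sketched as a linear regression over the 1-D profile V3: the fitted line serves as the determinant threshold DT1, pixels below it count as ridge and the rest as valley. The helper names below are illustrative, and the clarity-score computation is omitted.

```python
import numpy as np

def ridge_valley_mask(v3):
    """Fit a line to the 1-D ridge profile and threshold against it (DT1)."""
    x = np.arange(len(v3), dtype=float)
    slope, intercept = np.polyfit(x, v3, 1)      # linear regression -> DT1
    dt1 = slope * x + intercept
    return v3 < dt1                              # True where the profile is ridge

def ridge_valley_thickness_ratio(v3):
    """Per-block ratio of ridge pixels to valley pixels along the profile."""
    ridge = ridge_valley_mask(np.asarray(v3, float))
    return ridge.sum() / max((~ridge).sum(), 1)
```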