9 research outputs found

    Detection of Microcalcifications in Mammographies Based on Linear Pixel Prediction and Support-Vector Machines

    Breast cancer is one of the diseases causing the largest number of deaths among women. Its early detection has been proved to be the most effective way to combat it. This work focuses on developing an integral tool able to detect microcalcifications in mammographies, since the presence of these particles is a clear symptom of incipient cancer. The proposed approach combines two techniques that have separately been used successfully in other areas, linear pixel prediction and support-vector machines, in order to obtain almost perfect prediction accuracy. Moreover, a filter has been designed with the aim of decreasing the processing time. The result verges on 96% of hits, improving on previous works by 6% on average.
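    As a rough illustration of how such a combination might look in practice, the sketch below fits a linear predictor of each pixel from its causal neighbours, uses the prediction residual as a saliency map, and feeds simple residual statistics per patch to a support-vector classifier. The patch size, neighbourhood, features and training data are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch (not the authors' exact pipeline): linear pixel prediction
# residuals feed a support-vector classifier that flags candidate
# microcalcification patches. Patch size, neighbourhood and labels are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def prediction_residuals(img):
    """Predict each pixel from its 4 causal neighbours by least squares
    and return the absolute prediction-error map."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    # causal neighbours: left, top-left, top, top-right
    nbrs = np.stack([pad[1:-1, :-2], pad[:-2, :-2],
                     pad[:-2, 1:-1], pad[:-2, 2:]], axis=-1)
    A = nbrs.reshape(-1, 4)
    y = img.astype(float).ravel()
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # global linear predictor
    return np.abs(y - A @ coef).reshape(img.shape)   # large error -> candidate

def patch_features(residual, size=16):
    """Summarise each non-overlapping patch of the residual map."""
    feats = []
    h, w = residual.shape
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            p = residual[i:i + size, j:j + size]
            feats.append([p.mean(), p.std(), p.max()])
    return np.array(feats)

# Hypothetical training data: random "mammogram" patches with 0/1 labels
# (1 = contains microcalcifications); real labels would come from annotations.
rng = np.random.default_rng(0)
train_imgs = [rng.integers(0, 255, (128, 128)) for _ in range(4)]
X = np.vstack([patch_features(prediction_residuals(im)) for im in train_imgs])
y_lbl = rng.integers(0, 2, len(X))                   # placeholder labels
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y_lbl)
print("predicted patch labels:", clf.predict(X[:5]))
```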

    Fast joint design method for parallel excitation radiofrequency pulse and gradient waveforms considering off‐resonance

    A fast parallel excitation pulse design algorithm to select and to order phase-encoding (PE) locations (also known as "spokes") of an Echo-Volumar excitation k-space trajectory considering B0 field inhomogeneity is presented. Recently, other groups have conducted research to choose optimal PE locations, but the potential benefit of considering B0 field inhomogeneity during PE location selection or ordering has not been fully investigated. This article introduces a novel fast greedy algorithm to determine PE locations and their order that takes the off-resonance effects into account. Computer simulations of the proposed algorithm for B1 field inhomogeneity correction demonstrate that it not only improves excitation accuracy but also provides an effective ordering of the PE locations. Magn Reson Med, 2012. © 2012 Wiley Periodicals, Inc.
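    The kind of greedy selection the abstract describes can be sketched as follows: a simplified single-channel small-tip-angle design matrix whose columns include an off-resonance phase term that depends on when each spoke is played out, with candidates added one per time slot according to the residual least-squares excitation error. The system model, cost and timing below are assumptions for illustration, not the article's exact algorithm.

```python
# Minimal sketch of a greedy spoke (PE location) selection that folds a
# B0/off-resonance phase term into a small-tip-angle design matrix.
# Single transmit channel, toy field map and timing -- assumptions, not the
# article's exact cost function or ordering rule.
import numpy as np

def spoke_column(k, positions, b0_hz, t_sec):
    """Column of the small-tip design matrix for one spoke placed at time t:
    spatial phase from the PE location k plus phase accrued from B0."""
    return np.exp(1j * (positions @ k + 2 * np.pi * b0_hz * t_sec))

def greedy_spoke_selection(target, positions, b0_hz, candidates, n_spokes, dt=0.5e-3):
    """Pick and order n_spokes PE locations, one per time slot, each time
    choosing the candidate that most reduces the least-squares residual."""
    chosen, cols = [], []
    for slot in range(n_spokes):
        t = slot * dt                       # later spokes accrue more B0 phase
        best = None
        for idx, k in enumerate(candidates):
            if idx in chosen:
                continue
            A = np.column_stack(cols + [spoke_column(k, positions, b0_hz, t)])
            w, *_ = np.linalg.lstsq(A, target, rcond=None)
            res = np.linalg.norm(target - A @ w)
            if best is None or res < best[0]:
                best = (res, idx)
        chosen.append(best[1])
        cols.append(spoke_column(candidates[best[1]], positions, b0_hz, t))
    return chosen

# Toy 2D problem: uniform excitation over a small grid with a linear field map.
n = 16
x, y = np.meshgrid(np.linspace(-0.1, 0.1, n), np.linspace(-0.1, 0.1, n))
positions = np.column_stack([x.ravel(), y.ravel()])      # metres
b0_hz = 50.0 * x.ravel() / 0.1                           # toy off-resonance map
target = np.ones(positions.shape[0], dtype=complex)
candidates = [np.array([kx, ky]) * 2 * np.pi * 10        # rad/m
              for kx in range(-2, 3) for ky in range(-2, 3)]
order = greedy_spoke_selection(target, positions, b0_hz, candidates, n_spokes=5)
print("selected spoke indices in order:", order)
```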

    Intelligent Screening Systems for Cervical Cancer


    Machine learning for beam dynamics studies at the CERN Large Hadron Collider

    Machine learning entails a broad range of techniques that have been widely used in Science and Engineering for decades. High-energy physics has also profited from the power of these tools for advanced analysis of collider data. Only recently has Machine Learning started to be applied successfully in the domain of Accelerator Physics, as testified by the intense efforts deployed in this domain by several laboratories worldwide. This is also the case at CERN, where focused efforts have recently been devoted to the application of Machine Learning techniques to beam dynamics studies at the Large Hadron Collider (LHC). This covers a wide spectrum of applications, from beam measurements and machine performance optimisation to the analysis of numerical data from tracking simulations of non-linear beam dynamics. In this paper, the LHC-related applications currently pursued are presented and discussed in detail, also paying attention to future developments.

    Automatic texture classification in manufactured paper


    A Novel Embedded Feature Selection Framework for Probabilistic Load Forecasting With Sparse Data via Bayesian Inference

    With the modernization of the power industry over recent decades, diverse smart technologies have been introduced into power systems. This transition has brought a significant level of variability and uncertainty to the networks, resulting in less predictable electricity demand. In this regard, load forecasting stands in the breach and is even more challenging. Urgent needs have been raised from different sectors, especially for probabilistic analysis for industrial applications. Hence, attention has shifted from point load forecasting to probabilistic load forecasting (PLF) in recent years. This research proposes a novel embedded feature selection method for PLF to deal with sparse features and thus improve PLF performance. First, the proposed method employs quantile regression to connect the predictor variables to each quantile of the load distribution. Thereafter, an embedded feature selection structure is incorporated to identify and select subsets of input features by introducing an inclusion indicator variable for each feature. Then, Bayesian inference is applied to the model, with a sparseness-favoring prior endowed over the inclusion indicator variables. A Markov chain Monte Carlo (MCMC) approach is adopted to sample the parameters from the posterior. Finally, the samples are used to approximate the posterior distribution, applying discrete formulas to these samples to approximate the integrals of interest. The proposed approach allows each quantile of the distribution of the dependent load to be affected by a different set of features, while still giving every feature a chance to show its impact on the load. Consequently, this methodology leads to improved estimation of more complex predictive densities. The proposed framework has been successfully applied to a linear model, quantile linear regression, and has been extended to improve the performance of a nonlinear model. Three case studies have been designed to validate the effectiveness of the proposed method. The first case study, performed on an open dataset, validates that the proposed feature selection technique can improve the performance of PLF based on quantile linear regression and outperforms the selected benchmarks; this case study does not consider any recency effect. The second case study further examines the impact of the recency effect using another open dataset containing historical load and weather records for 10 different regions. The third case study explores the potential of extending the proposed framework to nonlinear models; here the proposed method is used as a wrapper approach and applied to a nonlinear model. The simulation results show that the proposed method has the best overall performance among all tested methods, with and without considering the recency effect, and that it can slightly improve the performance of other models when applied as a wrapper approach.
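    A toy sketch of the core idea (not the thesis implementation): quantile regression under an asymmetric-Laplace working likelihood, a Bernoulli inclusion indicator per feature with a sparseness-favoring prior, and a simple Metropolis-style sampler whose indicator frequencies approximate posterior inclusion probabilities. Priors, proposal scales and data below are illustrative assumptions.

```python
# Toy sketch: Bayesian quantile regression with a Bernoulli inclusion
# indicator per feature and a sparseness-favoring prior, sampled by a simple
# Metropolis scheme. Working likelihood is the asymmetric Laplace (pinball
# loss); all priors, scales and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def pinball(u, tau):
    """Quantile (pinball) loss for residuals u at quantile level tau."""
    return np.sum(u * (tau - (u < 0)))

def log_post(beta, gamma, X, y, tau, sigma=1.0, prior_sd=10.0, pi_incl=0.2):
    """Log posterior: asymmetric-Laplace likelihood on included features,
    Gaussian prior on coefficients, Bernoulli(pi_incl) prior on indicators."""
    resid = y - X @ (beta * gamma)
    loglik = -pinball(resid, tau) / sigma
    logprior = -0.5 * np.sum(beta**2) / prior_sd**2
    logprior += np.sum(gamma * np.log(pi_incl) + (1 - gamma) * np.log(1 - pi_incl))
    return loglik + logprior

def sample(X, y, tau, n_iter=5000, step=0.05):
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # start near a sensible fit
    gamma = np.ones(p, dtype=int)
    lp = log_post(beta, gamma, X, y, tau)
    incl_counts = np.zeros(p)
    for _ in range(n_iter):
        # random-walk Metropolis update of the coefficients
        prop = beta + step * rng.standard_normal(p)
        lp_prop = log_post(prop, gamma, X, y, tau)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        # Metropolis flip of one inclusion indicator
        j = rng.integers(p)
        g_prop = gamma.copy()
        g_prop[j] = 1 - g_prop[j]
        lp_prop = log_post(beta, g_prop, X, y, tau)
        if np.log(rng.random()) < lp_prop - lp:
            gamma, lp = g_prop, lp_prop
        incl_counts += gamma
    return incl_counts / n_iter      # approximate posterior inclusion probabilities

# Synthetic sparse example: only the first two of six features matter.
n, p = 400, 6
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)
print("inclusion probabilities (tau=0.9):", np.round(sample(X, y, tau=0.9), 2))
```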

    Swarm Algorithms for HMM Optimization in the Detection of Heart Murmurs in Phonocardiographic Signals Using Representations Derived from Vibration Analysis

    Get PDF
    This work presents a methodology for developing an automatic support system for the classification of phonocardiographic (PCG) signals. First, the PCG signals were pre-processed. They were then decomposed independently by means of empirical mode decomposition (EMD), including some of its variants, and Hilbert vibration decomposition (HVD), comparing the computational cost and the error in the reconstruction of the original signal when generating constructs from the IMFs. Next, features were extracted from the statistical moments of the data generated by the Hilbert-Huang transform (HHT), together with the Mel-frequency cepstral coefficients (MFCC) and four of their variants. Finally, a subset of features was selected using fuzzy rough sets (FRS), principal component analysis (PCA) and sequential floating forward selection (SFFS) simultaneously, to be used as inputs to an ergodic hidden Markov model (HMM) tuned with particle swarm optimization (PSO), in order to provide an objective and accurate mechanism to improve the reliability of heart murmur detection. Classification results of around 96% were obtained, with sensitivity values above 0.8 and specificity values above 0.9, using cross-validation (70/30 split with 30 folds).
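    One branch of this pipeline can be sketched as follows: MFCC frames (plus recording-level statistical moments) scored by one ergodic Gaussian HMM per class, with the class of highest log-likelihood assigned. The EMD/HVD constructs, the FRS/PCA/SFFS selection stage and the PSO tuning of the HMM are omitted, and the signals, sampling rate and model sizes are placeholder assumptions rather than the configuration used in the thesis.

```python
# Minimal sketch of one branch of the pipeline above: MFCC frames from a PCG
# recording (plus per-recording statistical moments) scored by one ergodic
# GaussianHMM per class. Signals and labels are synthetic placeholders, not
# real phonocardiograms; EMD/HVD, FRS/PCA/SFFS and PSO tuning are omitted.
import numpy as np
import librosa
from scipy.stats import skew, kurtosis
from hmmlearn.hmm import GaussianHMM

SR = 2000                                   # assumed PCG sampling rate (Hz)

def mfcc_frames(signal, sr=SR, n_mfcc=13):
    """Frame-level MFCCs, shape (n_frames, n_mfcc)."""
    return librosa.feature.mfcc(y=signal.astype(float), sr=sr,
                                n_mfcc=n_mfcc, n_mels=40).T

def moment_summary(frames):
    """Recording-level statistical moments of the MFCC trajectories."""
    return np.concatenate([frames.mean(0), frames.std(0),
                           skew(frames, axis=0), kurtosis(frames, axis=0)])

def train_class_hmm(recordings, n_states=3):
    """Fit one ergodic (fully connected) Gaussian HMM per class."""
    frames = [mfcc_frames(r) for r in recordings]
    X = np.vstack(frames)
    lengths = [len(f) for f in frames]
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=25, random_state=0).fit(X, lengths)

def classify(signal, hmms):
    """Assign the class whose HMM gives the highest log-likelihood."""
    f = mfcc_frames(signal)
    return max(hmms, key=lambda label: hmms[label].score(f))

# Placeholder "normal" vs "murmur" recordings (noise of different character).
rng = np.random.default_rng(0)
normal = [rng.standard_normal(10 * SR) for _ in range(5)]
murmur = [np.cumsum(rng.standard_normal(10 * SR)) * 0.05 for _ in range(5)]
hmms = {"normal": train_class_hmm(normal), "murmur": train_class_hmm(murmur)}
print("predicted:", classify(murmur[0], hmms))
print("moment feature length:", moment_summary(mfcc_frames(normal[0])).size)
```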