
    Automated Segmentation of Cerebral Aneurysm Using a Novel Statistical Multiresolution Approach

    Cerebral Aneurysm (CA) is a vascular disease that threatens the lives of many adults, affecting almost 1.5-5% of the general population. Sub-Arachnoid Hemorrhage (SAH), caused by a ruptured CA, has high rates of morbidity and mortality. Therefore, radiologists aim to detect and diagnose it at an early stage by analyzing medical images, to prevent or reduce its damage. This analysis is traditionally done manually. However, with the emergence of new technology, Computer-Aided Diagnosis (CAD) algorithms have been adopted in clinics to overcome the disadvantages of the traditional process, such as the dependency on the radiologist's experience, inter- and intra-observer variability, the probability of error, which grows with the number of medical images to be analyzed, and the artifacts introduced by the image acquisition methods (i.e., MRA, CTA, PET, RA, etc.), which impede the radiologist's work. For these reasons, many research works propose different segmentation approaches to automate the detection of a CA using complementary segmentation techniques; but because developing a robust, reproducible, and reliable algorithm that detects a CA regardless of its shape, size, and location, across a variety of acquisition methods, is a challenging task, a diversity of proposed and developed approaches exist which still suffer from some limitations. This thesis aims to contribute to this research area by adopting two promising techniques based on multiresolution and statistical approaches in the Two-Dimensional (2D) domain. The first technique is the Contourlet Transform (CT), which empowers the segmentation by extracting features not apparent at the normal image scale. The second technique is the Hidden Markov Random Field model with Expectation Maximization (HMRF-EM), which segments the image based on the relationship of neighboring pixels in the contourlet domain. The developed algorithm shows promising results on the four tested Three-Dimensional Rotational Angiography (3D RA) datasets, for which both an objective and a subjective evaluation are carried out. For the objective evaluation, six performance metrics are adopted: accuracy, Dice Similarity Index (DSI), False Positive Ratio (FPR), False Negative Ratio (FNR), specificity, and sensitivity. For the subjective evaluation, one expert and four observers with some medical background assess the segmentation visually. Both evaluations compare the segmented volumes against the ground truth data.
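
    As an illustration of the objective evaluation, the sketch below computes the six reported metrics from binary masks. It is a minimal example assuming `seg` and `gt` are boolean NumPy arrays of equal shape; the exact FPR/FNR normalizations used in the thesis may differ, and the helper name `segmentation_metrics` is ours.

    ```python
    import numpy as np

    def segmentation_metrics(seg, gt):
        """Confusion-matrix metrics for a binary segmentation vs. ground truth."""
        seg, gt = seg.astype(bool), gt.astype(bool)
        tp = np.sum(seg & gt)       # aneurysm voxels correctly segmented
        tn = np.sum(~seg & ~gt)     # background voxels correctly rejected
        fp = np.sum(seg & ~gt)      # background mislabelled as aneurysm
        fn = np.sum(~seg & gt)      # aneurysm voxels that were missed
        return {
            "accuracy":    (tp + tn) / seg.size,
            "DSI":         2 * tp / (2 * tp + fp + fn),  # Dice Similarity Index
            "FPR":         fp / (fp + tn),               # one common FPR definition
            "FNR":         fn / (fn + tp),               # one common FNR definition
            "specificity": tn / (tn + fp),
            "sensitivity": tp / (tp + fn),
        }
    ```

    Because the comparisons are element-wise, the same code applies unchanged to 3D volumes such as the segmented 3D RA data.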

    Performance analysis of different matrix decomposition methods on face recognition

    Applications using face biometrics are ubiquitous across various domains. We propose an efficient face recognition method using the Discrete Wavelet Transform (DWT), Extended Directional Binary Codes (EDBC), three matrix decompositions, and Singular Value Decomposition (SVD). The combined effect of the Schur, Hessenberg, and QR matrix decompositions is utilized with the existing algorithm. The discrimination power between two different persons is quantified using the Average Overall Deviation (AOD) parameter. Fused EDBC and SVD features are considered for performance calculation. City-block and Euclidean Distance (ED) measures are used for matching. Performance is improved on the YALE, GTAV, and ORL face databases compared with existing methods.
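
    To make the decomposition step concrete, the sketch below stacks the spectra of the Schur, Hessenberg, QR, and SVD factorizations into a single feature vector and matches with the city-block distance. This is only an illustration assuming a square grayscale face image; the paper's full pipeline (DWT sub-bands, EDBC codes, AOD, and feature fusion) is not reproduced, and `decomposition_features` is a hypothetical helper.

    ```python
    import numpy as np
    from scipy.linalg import schur, hessenberg

    def decomposition_features(img):
        """Stack the Schur, Hessenberg, QR, and SVD spectra into one feature vector."""
        a = np.asarray(img, dtype=float)        # assumes a square grayscale image
        t, _ = schur(a)                         # upper (quasi-)triangular Schur form
        h = hessenberg(a)                       # upper Hessenberg form
        _, r = np.linalg.qr(a)                  # R factor of the QR decomposition
        s = np.linalg.svd(a, compute_uv=False)  # singular values
        return np.concatenate([np.abs(np.diag(t)), np.abs(np.diag(h)),
                               np.abs(np.diag(r)), s])

    def cityblock(u, v):
        """City-block (L1) distance used for matching feature vectors."""
        return np.sum(np.abs(u - v))
    ```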

    Anomaly Detection in Noisy Images

    Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions, data exhibit more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as a basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how this method can be accelerated using graphics processing units (GPUs). Next, we propose a new method for finding defective components on railway tracks using cameras mounted on a train; we describe how to extract features and use a combination of classifiers to solve this problem. We then scale anomaly detection to bigger datasets with complex interdependencies, showing that the anomaly detection problem fits naturally in the multitask learning framework: the first task learns a compact representation of the good samples, while the second task learns the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
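
    The iterative shrinkage idea can be sketched generically: for the basis pursuit denoising objective min_x 0.5*||Ax - b||^2 + lam*||x||_1, each iteration takes a gradient step on the data term and soft-thresholds the result. In the minimal NumPy sketch below, a dense matrix A stands in for the shearlet synthesis operator used in the dissertation; the step size is the standard 1/L choice, and the GPU acceleration is omitted.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Proximal map of the L1 norm: shrink each coefficient toward zero by t."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        """Minimize 0.5*||A @ x - b||**2 + lam*||x||_1 by iterative shrinkage (ISTA)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)             # gradient of the quadratic data term
            x = soft_threshold(x - step * grad, lam * step)
        return x
    ```

    Each iteration is dominated by two matrix-vector products, which is why the loop parallelizes well on GPUs; accelerated variants such as FISTA add a momentum term to the same update.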

    Applying wavelets for the controlled compression of communication network measurements

    Monitoring and measuring various metrics of high-speed networks produces a vast amount of information over a long period of time, making the storage of these metrics a serious issue. Previous work has suggested, among others, stream-aware compression algorithms, i.e., methodologies that organise the network packets compactly so that they occupy less storage. However, these methods do not reduce the redundancy in the stream information. Lossy compression becomes an attractive solution, as higher compression ratios can be achieved; however, the important and significant elements of the original data need to be preserved. This work proposes a lossy wavelet compression mechanism that preserves crucial statistical and visual characteristics of the examined computer network measurements and provides significant compression against the original file sizes. To the best of our knowledge, the authors are the first to suggest and implement a wavelet analysis technique for compressing computer network measurements. In this paper, wavelet analysis is used and compared against the Gzip and Bzip2 tools for data rate and delay measurements. In addition, this paper compares eight different wavelets with respect to the compression ratio and the preservation of the scaling behavior, the long-range dependence, the mean and standard deviation, and the general reconstruction quality. The results show that the Haar wavelet provides higher peak signal-to-noise ratio (PSNR) values and better overall results than other wavelets with more vanishing moments. Our proposed methodology has been implemented on an online measurement platform and has compressed data traffic generated from a live network.
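
    As a rough illustration of wavelet-based lossy compression, the sketch below decomposes a 1-D measurement series with PyWavelets, hard-thresholds all but the largest coefficients, and measures the reconstruction PSNR. The `keep` fraction and helper names are our assumptions; the paper's threshold selection and online pipeline differ.

    ```python
    import numpy as np
    import pywt

    def wavelet_compress(signal, wavelet="haar", level=4, keep=0.05):
        """Keep only the largest `keep` fraction of wavelet coefficients by magnitude."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        flat = np.concatenate([np.abs(c) for c in coeffs])
        thresh = np.quantile(flat, 1.0 - keep)        # e.g. keep the top 5%
        coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    def psnr(original, reconstructed):
        """Peak signal-to-noise ratio of the reconstruction, in dB."""
        mse = np.mean((original - reconstructed) ** 2)
        peak = np.max(np.abs(original))
        return 10.0 * np.log10(peak ** 2 / mse)
    ```

    Only the surviving coefficients and their positions need to be stored, which is where the compression gain over the raw measurement file comes from.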