
    Pathological Brain Detection by a Novel Image Feature—Fractional Fourier Entropy

    Aim: Early detection of pathological brain conditions is essential so that patients have enough time for treatment, yet traditional manual detection is cumbersome, expensive, and time-consuming. In this paper we aim to offer a system that automatically identifies pathological brain images. Method: We propose a novel image feature, viz., the Fractional Fourier Entropy (FRFE), which combines the Fractional Fourier Transform (FRFT) with Shannon entropy. Afterwards, Welch's t-test (WTT) and the Mahalanobis distance (MD) were harnessed to select distinguishing features. Finally, we introduced an advanced classifier: the twin support vector machine (TSVM). Results: A 10 × K-fold stratified cross-validation test showed that the proposed "FRFE + WTT + TSVM" method yielded accuracies of 100.00%, 100.00%, and 99.57% on datasets containing 66, 160, and 255 brain images, respectively. Conclusions: The proposed "FRFE + WTT + TSVM" method is superior to 20 state-of-the-art methods.
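    In outline, the FRFE step reduces to taking the image's fractional Fourier spectrum at a handful of orders and measuring the Shannon entropy of each spectrum, while the WTT step is a per-feature Welch's t-test between the two classes. The sketch below only illustrates that outline and is not the authors' code: NumPy/SciPy ship no fractional Fourier transform, so the ordinary 2-D FFT (the order a = 1 case) stands in for it, and the helper names (shannon_entropy, frfe, welch_rank) are invented for the example.

```python
import numpy as np
from scipy import stats

def shannon_entropy(spectrum, bins=256):
    # Shannon entropy of the magnitude spectrum, treated as a histogram.
    mag = np.abs(spectrum).ravel()
    hist, _ = np.histogram(mag, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def frfe(image, frft2=lambda img, a: np.fft.fft2(img), orders=(0.6, 0.7, 0.8, 0.9, 1.0)):
    # Fractional Fourier Entropy sketch: one entropy value per fractional order.
    # frft2 is a placeholder; a true 2-D FRFT should be plugged in here --
    # np.fft.fft2 (the a = 1 special case) is used only so the sketch runs.
    return np.array([shannon_entropy(frft2(image, a)) for a in orders])

def welch_rank(feats_class0, feats_class1):
    # Rank features with Welch's t-test (unequal variances), as in the WTT step;
    # rows are images, columns are FRFE features.
    _, p = stats.ttest_ind(feats_class0, feats_class1, equal_var=False, axis=0)
    return np.argsort(p)  # most discriminative features first
```

    The top-ranked features would then be fed to the twin support vector machine; TSVM is not available in common Python libraries, so a standard SVM would be the obvious drop-in for a quick replication.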

    A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed; however, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS) was proposed. It is composed of three successful components: (i) the exponential wavelet transform, (ii) the iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.
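    The three components are easiest to see side by side in code. The sketch below is a hypothetical ISTA-style reconstruction for undersampled k-space, not the EWISTARS implementation itself: a standard Daubechies wavelet from PyWavelets stands in for the exponential wavelet transform, and the step size, threshold, and shift range are illustrative values.

```python
import numpy as np
import pywt  # PyWavelets; 'db4' stands in for the exponential wavelet transform

def ista_random_shift(kspace, mask, lam=0.02, step=1.0, iters=60,
                      wavelet="db4", level=3, max_shift=8, seed=0):
    """ISTA-style sketch for undersampled k-space reconstruction with random shift.

    kspace : measured k-space samples, zero outside the sampling mask
    mask   : boolean sampling mask of the same shape
    """
    rng = np.random.default_rng(seed)
    rows, cols = kspace.shape
    x = np.real(np.fft.ifft2(kspace))                 # zero-filled starting image
    for _ in range(iters):
        # (ii) ISTA gradient step on the data-fidelity term ||M F x - y||^2
        residual = mask * np.fft.fft2(x) - kspace
        x = np.real(x - step * np.fft.ifft2(mask * residual))

        # (iii) random shift (cycle spinning) before thresholding
        sx, sy = rng.integers(0, max_shift, size=2)
        xs = np.roll(x, (sx, sy), axis=(0, 1))

        # (i) wavelet transform + (ii) soft shrinkage of the coefficients
        coeffs = pywt.wavedec2(xs, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = pywt.threshold(arr, lam * step, mode="soft")
        xs = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                           wavelet)[:rows, :cols]

        # undo the shift
        x = np.roll(xs, (-sx, -sy), axis=(0, 1))
    return x
```

    Each iteration takes a gradient step on the data term, applies a fresh random shift, soft-thresholds the wavelet coefficients, and undoes the shift; the actual method additionally relies on the exponential wavelet transform and on tuned parameters.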

    Security and privacy services based on biosignals for implantable and wearable devices

    International Mention in the doctoral degree.
    The proliferation of wearable and implantable medical devices has given rise to an interest in developing security schemes suitable for these devices and the environment in which they operate. One area that has received much attention lately is the use of (human) biological signals as the basis for biometric authentication, identification, and the generation of cryptographic keys. More concretely, in this dissertation we use the electrocardiogram (ECG) to extract fiducial points which are later used in cryptographic protocols. Fiducial points are the points of interest that can be extracted from biological signals; examples for the ECG are the P-wave, the QRS complex, the T-wave, the R-peaks, and the RR time interval. In particular, we focus on the time difference between two consecutive heartbeats (R-peaks). These time intervals are referred to as Inter-Pulse Intervals (IPIs) and have been proven to contain entropy after applying some signal-processing algorithms; this process is known as the quantization algorithm. The entropy carried by the heart signal makes ECG values an ideal candidate for generating tokens to be used in security protocols.
    Most of the proposed solutions in the literature rely on some questionable assumptions. For instance, it is commonly assumed that the same cryptographic token can be generated on at least two different devices sensing the same signal, using the IPIs of each cardiac signal, without applying any synchronization algorithm; authors typically only measure the entropy of the LSBs to determine whether the generated cryptographic values are random or not; authors usually pick the four LSBs, assuming they are the best bits for creating cryptographic tokens; the datasets used in these works are rather small and therefore possibly not significant enough; and, in general, it is impossible to reproduce the experiments carried out by other researchers because their source code is not usually available. In this Thesis, we overcome these weaknesses by systematically addressing most of the open research questions. For that reason, in all the experiments carried out during this research we used the public PhysioNet resource, which is available on the Internet and hosts a huge heart database named PhysioBank. This repository is constantly being updated by medical researchers who share sensitive information about patients, and it also offers an open-source software package named PhysioToolkit which can be used to read and display these signals. All the datasets we used contain ECG records obtained from a variety of real subjects with different heart-related pathologies as well as healthy people.
    The first chapter of this dissertation (Chapter 1) is entirely dedicated to presenting the research questions, introducing the main concepts used throughout this document, and settling some medical and cryptographic definitions. Finally, the objectives that this dissertation tackles are described together with the main motivations for this Thesis.
    In Chapter 2, we report the results of a large-scale statistical study to determine whether the heart signal is a good source of entropy. For this, we analyze 19 public datasets of heart signals from the PhysioNet repository, spanning electrocardiograms from multiple subjects sampled at different frequencies and lengths. We then apply both the ENT and NIST STS standard batteries of randomness tests to the extracted IPIs.
The results we obtain through the analysis clearly show that a short burst of bits derived from an ECG record may seem random, but large files derived from long ECG records should not be used for security purposes.
    In Chapter 3, we carry out an analysis to check whether the assumption that two different sensors can generate the same cryptographic token is reasonable. We systematically check whether two sensors can agree on the same token without sharing any type of information. Similarly to other proposals, we include ECC algorithms such as BCH in the token generation. We conclude that a fuzzy extractor (or another error-correction technique) is not enough to correct the synchronization errors between the IPI values derived from two ECG signals captured by two sensors placed in different positions. We demonstrate that a pre-processing of the heart signal must be performed before the fuzzy extractor is applied. Going one step further, and in order to generate the same token on different sensors, we propose a synchronization algorithm; to do so, we include a runtime monitor algorithm. After applying our proposed solution, we run the experiments again with 19 public databases from the PhysioNet repository; the only constraint when picking those databases was that they needed at least two measurements of the heart signal (ECG1 and ECG2). Running the experiments, we conclude that the same token can be derived on different sensors in most of the tested databases if and only if a pre-processing of the heart signal is performed before extracting the tokens.
    In Chapter 4, we analyze the entropy of the tokens extracted from a heart signal according to the NIST STS recommendation (i.e., SP 800-90B, Recommendation for the Entropy Sources Used for Random Bit Generation). We downloaded 19 databases from the PhysioNet public repository and analyzed, in terms of min-entropy, more than 160,000 files. Finally, we propose other combinations for extracting tokens by taking 2, 3, 4, and 5 bits different from the usual four LSBs. We also demonstrate that the four LSBs are not the best bits to use in cryptographic applications, and we offer alternative combinations for two (e.g., 87), three (e.g., 638), four (e.g., 2638), and five (e.g., 23758) bits which are, in general, much better than taking the four LSBs from the entropy point of view.
    Finally, the last chapter of this dissertation (Chapter 5) summarizes the main conclusions arising from this PhD Thesis and introduces some open questions.
    Programa de Doctorado en Ciencia y Tecnología Informática, Universidad Carlos III de Madrid. Committee: President: Arturo Ribagorda Garnacho; Secretary: Jorge Blasco Alis; Member: Jesús García López de la Call
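    As a rough illustration of the token pipeline studied throughout the Thesis, the sketch below extracts IPIs from an ECG trace, keeps a chosen set of bits from each quantized interval, and estimates min-entropy with the most-common-value rule in the spirit of SP 800-90B. It is a hypothetical Python sketch, not the dissertation's code: the find_peaks parameters, the millisecond rounding, and the reading of a combination such as "2638" as bit positions 2, 6, 3, and 8 are assumptions made for the example.

```python
import numpy as np
from scipy.signal import find_peaks

def ipis_from_ecg(ecg, fs):
    # Inter-Pulse Intervals (ms) from a single-lead ECG sampled at fs Hz.
    # Naive peak detector; a real pipeline would use a dedicated R-peak detector.
    peaks, _ = find_peaks(ecg,
                          distance=int(0.4 * fs),              # at most ~150 bpm
                          height=np.mean(ecg) + 2 * np.std(ecg))
    return np.diff(peaks) * 1000.0 / fs

def token_bits(ipis_ms, positions=(0, 1, 2, 3)):
    # Concatenate the selected bit positions of each quantized IPI.
    # positions=(0, 1, 2, 3) is the usual "four LSBs" choice; (2, 6, 3, 8)
    # would be one of the alternative combinations of the kind Chapter 4 studies.
    bits = []
    for ipi in np.round(ipis_ms).astype(int):
        bits.extend((ipi >> p) & 1 for p in positions)
    return np.array(bits, dtype=np.uint8)

def min_entropy_per_bit(bits):
    # Most-common-value min-entropy estimate, in the spirit of SP 800-90B.
    p_max = max(np.mean(bits), 1 - np.mean(bits))
    return -np.log2(p_max)
```

    Two sensors running this pipeline on the same heart would rarely produce bit-for-bit identical tokens, which is exactly the synchronization problem that Chapter 3 addresses with pre-processing before the fuzzy extractor.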