83 research outputs found

    A study of information-theoretic metaheuristics applied to functional neuroimaging datasets

    This dissertation presents a new metaheuristic related to two-dimensional ensemble empirical mode decomposition (2DEEMD). It is based on Green’s functions and is called Green’s Function in Tension - Bidimensional Empirical Mode Decomposition (GiT-BEMD). It is employed for decomposing images and extracting hidden information from them. A natural image (a face image) as well as images with artificial textures have been used to test and validate the proposed approach. The images are selected to demonstrate the efficiency and performance of the GiT-BEMD algorithm in extracting textures on various spatial scales. In addition, a comparison of the performance of the new GiT-BEMD algorithm with a canonical BEEMD is discussed. Then, GiT-BEMD as well as canonical bidimensional EEMD (BEEMD) are applied to an fMRI study of a contour integration task. Thus, the thesis explores the potential of employing GiT-BEMD to extract such textures, so-called bidimensional intrinsic mode functions (BIMFs), from functional biomedical images. Because of the enormous computational load and the artifacts accompanying the textures extracted with a canonical BEEMD, GiT-BEMD was developed to cope with these challenges. The computational cost is decreased dramatically, and the quality of the extracted textures is enhanced considerably. Consequently, GiT-BEMD achieves a higher quality of the estimated BIMFs, as can be seen from a direct comparison of the results obtained with different variants of BEEMD and GiT-BEMD. Moreover, results generated by 2DEEMD, especially in the case of GiT-BEMD, distinctly show a superior precision in the spatial localization of activity blobs when compared with a canonical general linear model (GLM) analysis employing statistical parametric mapping (SPM). Furthermore, to identify the most informative textures, i.e. BIMFs, a support vector machine (SVM) as well as a random forest (RF) classifier are employed.
Classification performance demonstrates the potential of the extracted BIMFs in supporting the decision making of the classifier. With GiT-BEMD, classification performance improved significantly, which might also be a consequence of the clearer structure of these modes compared to the ones obtained with canonical BEEMD. Altogether, there is strong belief that the newly proposed metaheuristic GiT-BEMD offers a highly competitive alternative to existing BEMD algorithms and represents a promising technique for blindly decomposing images and extracting textures thereof, which may be used for further analysis.
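The classifier comparison mentioned in the abstract can be illustrated on synthetic data. This is a minimal sketch, not the dissertation's actual pipeline: the feature matrix below is an assumption (two Gaussian classes standing in for BIMF texture statistics), and the hyperparameters are defaults chosen for brevity.

```python
# Hypothetical sketch: comparing an SVM and a random forest classifier on
# features derived from decomposed image modes. The feature matrix is
# synthetic (two Gaussian classes standing in for BIMF texture statistics).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "BIMF features": 200 samples, 8 features, two separable classes.
n, d = 200, 8
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(1.5, 1.0, (n // 2, d))])
y = np.repeat([0, 1], n // 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

acc_svm = svm.score(X_te, y_te)
acc_rf = rf.score(X_te, y_te)
print(f"SVM accuracy: {acc_svm:.2f}, RF accuracy: {acc_rf:.2f}")
```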

    A Study of Biomedical Time Series Using Empirical Mode Decomposition : Extracting event-related modes from EEG signals recorded during visual processing of contour stimuli

    Noninvasive neuroimaging techniques like functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) allow researchers to investigate and analyze brain activity during visual processing. EEG offers a high temporal resolution at the submillisecond level, which can be combined favorably with fMRI, which has a good spatial resolution on small spatial scales in the millimeter range. These neuroimaging techniques were, and still are, instrumental in the diagnosis and treatment of neurological disorders in clinical applications. This PhD thesis concentrates on electrophysiological signatures within EEG recordings of a combined EEG-fMRI data set which were taken during a contour integration task. The estimation of the location and distribution of the electrical sources in the brain that are responsible for interesting EEG waves, based on surface recordings, has drawn the attention of many EEG/MEG researchers. However, this process, called brain source localization, is still one of the major problems in EEG. It consists of solving two modeling problems: the forward and the inverse problem. In the forward problem, one is interested in predicting the expected potential distribution on the scalp from given electrical sources that represent active neurons in the head. These evaluations are necessary to solve the inverse problem, which can be defined as the problem of estimating the brain sources that generated the measured electrical potentials. This thesis presents a data-driven analysis of EEG data recorded during a combined EEG/fMRI study of visual processing during a contour integration task. The analysis is based on an ensemble empirical mode decomposition (EEMD) and discusses characteristic features of event-related modes (ERMs) resulting from the decomposition.
We identify clear differences in certain ERMs in response to contour vs. non-contour Gabor stimuli, mainly for response amplitudes peaking around 100 ms (called P100) and 200 ms (called N200) after stimulus onset, respectively. We observe early P100 and N200 responses at electrodes located in the occipital area of the brain, while late P100 and N200 responses appear at electrodes located in frontal brain areas. Signals at electrodes in central brain areas show bimodal early/late response signatures in certain ERMs. Head topographies clearly localize statistically significant response differences to both stimulus conditions. Our findings provide independent support for recent models which suggest that contour integration depends on distributed network activity within the brain. Next, building on the previous analysis, a new approach for source localization of EEG data, based on combining ERMs extracted with EEMD with inverse models, is presented. As the first step, 64-channel EEG recordings are pooled according to six brain areas and decomposed, by applying an EEMD, into their underlying ERMs. Then, based upon the problem at hand, the most closely related ERM, in terms of frequency and amplitude, is combined with inverse modeling techniques for source localization. More specifically, the standardized low resolution brain electromagnetic tomography (sLORETA) procedure is employed in this work. The accuracy and robustness of the results indicate that this approach appears highly promising among source localization techniques for EEG data. Given the results of the analyses above, it can be said that EMD is able to extract intrinsic signal modes, ERMs, which contain decisive information about responses to contour and non-contour stimuli. Hence, we introduce a new toolbox, called EMDLAB, which serves the growing interest of the signal processing community in applying EMD as a decomposition technique.
EMDLAB can be used to perform, easily and effectively, four common types of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD), and multivariate EMD (MEMD). The main goal of the EMDLAB toolbox is to extract characteristics of the EEG signal by means of either intrinsic mode functions (IMFs) or ERMs. While IMFs reflect characteristics of the original EEG signal, ERMs reflect characteristics of the event-related potentials (ERPs) of the signal. The new toolbox is provided as a plug-in to the well-known EEGLAB, which enables it to exploit the advantageous visualization capabilities of EEGLAB as well as the statistical data analysis techniques provided there for the extracted IMFs and ERMs of the signal.
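The noise-assisted averaging behind EEMD can be sketched in a few lines. This is a toy illustration, not EMDLAB's implementation: the sifting routine (cubic-spline envelopes, a fixed sift count) and the two-tone test signal are simplifying assumptions chosen for brevity.

```python
# Toy sketch of ensemble EMD (EEMD) on a synthetic two-tone signal.
# A minimal sifting routine stands in for a production EMD implementation.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_sifts=10):
    """Extract one IMF via a fixed number of sifting iterations."""
    h = x.copy()
    for _ in range(n_sifts):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        h = h - (upper + lower) / 2.0  # subtract the envelope mean
    return h

def emd(x, t, n_imfs=3):
    """Plain EMD: peel off IMFs one by one from the running residue."""
    imfs, residue = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(residue, t)
        imfs.append(imf)
        residue = residue - imf
    return np.array(imfs), residue

def eemd(x, t, n_imfs=3, n_ensembles=20, noise_std=0.1, seed=0):
    """Ensemble EMD: average IMFs over noise-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((n_imfs, len(x)))
    for _ in range(n_ensembles):
        noisy = x + noise_std * rng.standard_normal(len(x))
        imfs, _ = emd(noisy, t, n_imfs)
        acc += imfs
    return acc / n_ensembles

t = np.linspace(0, 1, 1000)
fast = np.sin(2 * np.pi * 40 * t)   # stands in for a fast EEG rhythm
slow = np.sin(2 * np.pi * 5 * t)    # stands in for a slow evoked component
imfs = eemd(fast + slow, t, n_imfs=2)
print("averaged IMF array shape:", imfs.shape)
```

On this clean two-tone signal, the first extracted mode tracks the fast oscillation and the second the slow one; real EEG requires the more careful stopping criteria and boundary handling that dedicated toolboxes provide.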

    Estimation of Dose Distribution for Lu-177 Therapies in Nuclear Medicine

    In nuclear medicine, two frequent applications of 177Lu therapy exist: DOTATOC therapy for patients with a neuroendocrine tumor and PSMA therapy for prostate cancer. During the therapy a pharmaceutical is injected intravenously, which attaches to tumor cells due to its molecular composition. Since the pharmaceutical contains a radioactive 177Lu isotope, tumor cells are destroyed through irradiation. Afterwards the substance is excreted via the kidneys. Since the latter are very sensitive to high-energy radiation, it is necessary to compute exactly how much radioactivity can be administered to the patient without endangering healthy organs. This calculation is called dosimetry and is currently made according to the state-of-the-art MIRD method. At the beginning of this work, an error assessment of the established method is presented, which determined an overall error of 25% in the renal dose value. The presented study improves and personalizes the MIRD method in several respects and reduces individual error estimates considerably. In order to estimate the amount of activity, first a test dose is injected into the patient. Subsequently, SPECT images are taken after 4 h, 24 h, 48 h and 72 h. From these images the activity at each voxel can be obtained at the specified time points, i.e. the physical decay and physiological metabolization of the pharmaceutical can be followed in time. To calculate the number of decays in each voxel from the four SPECT registrations, a time-activity curve must be integrated. In this work, a statistical method was developed to estimate the time-dependent activity and then integrate the time-activity curve voxel by voxel. This procedure results in a decay map for each of the available 26 patients (13 PSMA/13 DOTATOC). After the decay maps have been estimated, a full Monte Carlo simulation is carried out on the basis of these decay maps to determine the corresponding dose distributions.
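The per-voxel time-activity step can be illustrated with a mono-exponential model. This is a hedged sketch: the activity values, the effective half-life, and the log-linear least-squares fit are illustrative stand-ins for the statistical estimator developed in the thesis.

```python
# Sketch of one voxel's time-activity step: fit A(t) = A0 * exp(-lam * t)
# to the four SPECT time points, then integrate analytically to obtain the
# cumulated activity (the value entering the decay map). The numbers below
# are synthetic; they are not patient data from the study.
import numpy as np

t_h = np.array([4.0, 24.0, 48.0, 72.0])         # imaging time points [h]
lam_true = np.log(2) / 30.0                     # assumed effective half-life 30 h
a0_true = 100.0                                 # assumed activity at t=0 [MBq]
activity = a0_true * np.exp(-lam_true * t_h)    # noiseless synthetic voxel

# Log-linear least squares: ln A = ln A0 - lam * t
slope, intercept = np.polyfit(t_h, np.log(activity), 1)
lam_fit, a0_fit = -slope, np.exp(intercept)

# Cumulated activity = integral of A(t) from 0 to infinity = A0 / lam
cumulated = a0_fit / lam_fit
print(f"fitted half-life: {np.log(2) / lam_fit:.1f} h, "
      f"cumulated activity: {cumulated:.1f} MBq*h")
```

The analytic integral avoids numerical quadrature over only four samples; with noisy measurements a robust statistical fit, as developed in the thesis, replaces the simple log-linear regression used here.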
The simulation results are taken as reference (“Gold Standard”) and compared with methods for an approximate but faster estimation of the dose distribution. Recently, convolution with Dose Voxel Kernels (DVKs) has been established as a standard dose estimation method (Soft Tissue Scaling, STS). Thereby a radioactive lutetium isotope is placed in a cube consisting of soft tissue, and radiation interactions are simulated for 10^10 decays. The resulting Dose Voxel Kernel is then convolved with the estimated decay map. The result is a dose distribution which, however, does not take into account any differences in tissue density. To take tissue inhomogeneities into account, three methods are described in the literature, namely Center Scaling (CS), Density Scaling (DS), and Percentage Scaling (PS). However, their application did not improve the results of the STS method, as is demonstrated in this study. Consequently, a neural network was finally trained to estimate DVKs adapted to the respective individual tissue density distribution. During the convolution process, it uses for each voxel an adapted DVK that was deduced from the corresponding tissue density kernel. This method outperformed the MIRD method: whereas the latter resulted in an uncertainty of the renal dose between -42.37% and 10.22%, the new method reduced the uncertainty to a range between -26.00% and 7.93%. These dose deviations were calculated for the 26 patients and relate to the mean renal dose compared with the respective result of the Monte Carlo simulation. In order to improve the estimates of the dose distribution even further, a 3D-2D neural network was trained in the second part of the work. This network predicts the dose distribution of an entire patient. In combination with an Empirical Mode Decomposition, this method achieved deviations of only -12.21% to 2.13%. The mean deviation of the dose estimates is in the range of the statistical error of the Monte Carlo simulation.
In the third part of the work, a neural network was used to automatically segment the kidneys, the spleen and tumors. Compared to an established segmentation algorithm, the method developed in this work can also segment tumors because it uses not only the CT image as input but also the SPECT image.
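The baseline STS approach described above (convolving the estimated decay map with a soft-tissue DVK) can be sketched as follows. Both the decay map and the Gaussian-shaped kernel are synthetic stand-ins, since a real DVK comes from a Monte Carlo simulation of roughly 10^10 decays in soft tissue.

```python
# Sketch of soft-tissue-scaling (STS) dose estimation: convolve a decay map
# with a dose voxel kernel (DVK). Kernel and decay map are synthetic.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

# Synthetic 3D decay map (cumulated decays per voxel) with one hot region
# standing in for a tumor.
decay_map = np.zeros((32, 32, 32))
decay_map[12:18, 12:18, 12:18] = rng.uniform(1e6, 5e6, (6, 6, 6))

# Synthetic isotropic DVK (dose deposited per decay in neighboring voxels):
# a normalized Gaussian-like kernel as a stand-in for a Monte Carlo kernel.
z = np.arange(-3, 4)
zz, yy, xx = np.meshgrid(z, z, z, indexing="ij")
dvk = np.exp(-(xx**2 + yy**2 + zz**2) / 2.0)
dvk /= dvk.sum()  # normalize so the total deposited quantity is conserved

dose = convolve(decay_map, dvk, mode="constant")
print("dose grid shape:", dose.shape)
```

The convolution spreads each voxel's decays into its neighborhood according to the kernel, which is exactly why a single soft-tissue kernel ignores density differences; the thesis's density-adapted DVKs replace this one fixed kernel with a per-voxel kernel.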

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn even more attention, not just from the field of computer science but also from a variety of scientific fields. However, various challenges persist surrounding the formulation of a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a consistent response to a single edge. Moreover, it should be pointed out that most of the work in the area of feature extraction has been focused on improving many of the existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In this digital world, where the use of images for a variety of purposes continues to increase, researchers, if they are serious about addressing the aforementioned limitations, must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional (2D) bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is as such dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of the two morphological operators of binarization and thinning adds to the quality of the detector.
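As a rough illustration of the detection pipeline, the sketch below extracts a high-frequency image mode and binarizes it. The difference-of-Gaussians stand-in for the first BIMF, the synthetic test image, and the threshold are all assumptions; a true BEMD sift and the thinning step are omitted for brevity.

```python
# Hedged sketch of mode-based edge detection: take a high-frequency image
# mode (difference-of-Gaussians stand-in for the first BIMF of a BEMD sift),
# then binarize its magnitude to mark candidate edge pixels.
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic test image: a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0

# High-frequency "mode": fine-scale detail left after removing coarse scales.
hf_mode = gaussian_filter(img, 1.0) - gaussian_filter(img, 3.0)

# Binarization: mark pixels where the mode magnitude is large.
edges = np.abs(hf_mode) > 0.1

print("edge pixels found:", int(edges.sum()))
```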

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 from the particularly felt need to share know-how, objectives and results between areas that until then had seemed quite distinct, such as bioengineering, medicine and singing. MAVEBA deals with all aspects concerning the study of the human voice, with applications ranging from the neonate to the adult and elderly. Over the years the initial issues have grown and spread to other areas of research such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years, always in Firenze, Italy. This edition celebrates twenty years of uninterrupted and successful research in the field of voice analysis.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdiscipline between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which the computer can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Three-dimensional geometry characterization using structured light fields

    Doctoral thesis. Mechanical Engineering. Faculdade de Engenharia. Universidade do Porto. 200

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and comprehends several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, the development of a nanotechnology application for decoding vagus-nerve activity, the detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.