Assessing Variability of EEG and ECG/HRV Time Series Signals Using a Variety of Non-Linear Methods
Time series signals, such as the Electroencephalogram (EEG) and Electrocardiogram
(ECG), represent the complex dynamic behaviour of biological systems.
Analysing these signals using a variety of nonlinear methods is essential
for understanding variability within EEG and ECG, which could
help unveil hidden patterns related to underlying physiological mechanisms.
EEG is a time-varying signal, and electrodes placed at different positions on the
scalp record different time-varying signals, which may be correlated.
Knowing the correlation between EEG signals is important because it can indicate
whether brain activities from different areas are related. EEG and ECG might also
be related to each other, because both are generated by one coordinated body;
investigating this relationship is of interest because it may reveal how
the two signals are coupled.
This thesis is about assessing the variability of time series data, EEG and ECG,
using a variety of nonlinear measures. Although other research has examined the
correlation between EEGs using a limited number of electrodes and a limited
number of electrode-pair combinations, no research has investigated the
correlation between EEG signals and the distance between electrodes. Furthermore,
no one has compared the correlation performance for participants with
and without medical conditions. In my research, I have filled these gaps
by using a full range of electrodes and all possible electrode-pair combinations,
analysed in the Time Domain (TD). The cross-correlation is calculated
on the processed EEG signals for a different number of unique electrode pairs from
each dataset. The distance in centimetres (cm) between electrodes was obtained
with a measuring tape. For most of our participants the head circumference was
in the range 54-58 cm, for which a medium-sized cap was used. I have discovered
that the correlation between EEG signals measured through electrodes
depends linearly on the physical (straight-line) distance between
them for datasets without medical conditions, but not for datasets with medical
conditions.
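The pairwise analysis above can be sketched in a few lines. The following is a minimal illustration in Python/NumPy, not the thesis's actual pipeline: the channel names, signal model and values are hypothetical stand-ins for preprocessed EEG recordings, and the zero-lag normalised cross-correlation (equivalent to Pearson's r) is computed for every unique electrode pair.

```python
import numpy as np
from itertools import combinations

def zero_lag_correlation(x, y):
    """Normalised cross-correlation at zero lag (equivalent to Pearson's r)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.dot(x, y) / len(x))

# Synthetic stand-ins for preprocessed EEG channels: a shared source plus
# channel-specific noise, so the two frontal channels correlate strongly.
rng = np.random.default_rng(0)
source = rng.standard_normal(2000)
channels = {
    "Fp1": source + 0.2 * rng.standard_normal(2000),
    "Fp2": source + 0.2 * rng.standard_normal(2000),
    "O1":  source + 2.0 * rng.standard_normal(2000),
}

# Correlation for every unique electrode pair.
pair_r = {
    (a, b): zero_lag_correlation(channels[a], channels[b])
    for a, b in combinations(channels, 2)
}
```

In a full analysis, each pair's correlation would then be regressed against the measured straight-line inter-electrode distance.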
Some research has investigated the correlation between EEG and Heart Rate Variability
(HRV) within limited brain areas and demonstrated that a correlation between
EEG and HRV exists, but no research has indicated whether the correlation
changes with brain area. Although other research has applied Wavelet Transformations
(WT) to time series data, including EEG and HRV signals, to extract certain
features, the correlation between the WT signals of EEG and HRV has not yet
been analysed. My research covers these gaps by conducting a thorough
investigation of all electrodes on the human scalp in the Frequency Domain (FD)
as well as the TD. Because EEG and HRV have different sample rates, two different
approaches (referred to as Method 1 and Method 2) are used to segment the EEG
signals and to calculate Pearson's Correlation Coefficient between each EEG
frequency and each HRV frequency in the FD. I have demonstrated that EEG at the
front area of the brain has a stronger correlation with HRV than at other areas
in the frequency domain. These findings are independent of both participants and
brain hemispheres.
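A simplified sketch of the frequency-domain step, assuming Python with NumPy. The abstract does not detail Method 1 and Method 2, so this example only shows the core computation: a periodogram band power per segment, and Pearson's r across segments. The function names, band edges and synthetic data are illustrative, not the thesis's implementation.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean periodogram power of `signal` in the [low, high) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return float(psd[mask].mean())

def pearson_r(a, b):
    """Pearson's Correlation Coefficient between two sequences."""
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

# Illustrative use: per-segment EEG alpha power against HRV LF power,
# with the HRV series constructed synthetically to track the EEG series.
fs_eeg = 256
rng = np.random.default_rng(1)
eeg_alpha = [band_power(rng.standard_normal(fs_eeg * 4), fs_eeg, 8, 13)
             for _ in range(30)]
hrv_lf = [p + 0.01 * rng.standard_normal() for p in eeg_alpha]  # synthetic
r = pearson_r(eeg_alpha, hrv_lf)
```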
Sample Entropy (SE) is used to measure the complexity of time series data. Recent
research has proposed new calculation methods for SE, aiming to improve
accuracy, but to my knowledge no one has attempted to reduce the computational
time of the SE calculation. I have developed a new calculation method for time
series complexity that improves computational time significantly in the
context of calculating a correlation between EEG and HRV, yielding a more
parsimonious SE calculation through a new implementation. In addition, the
electrical activity in the frontal lobe of the brain appears to be correlated
with HRV in the time domain.
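The thesis's faster method is not detailed in this abstract. For reference, a direct implementation of the standard Sample Entropy definition (template length m, tolerance r, Chebyshev distance, self-matches excluded) might look like the sketch below; the parameter defaults are common conventions, not the thesis's choices.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample Entropy SampEn(m, r) = -ln(A/B), where B counts template
    pairs of length m within tolerance r (Chebyshev distance) and A
    counts pairs of length m + 1. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()  # conventional default tolerance

    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            total += int(np.sum(dist < r))
        return total

    return -np.log(count_pairs(m + 1) / count_pairs(m))

# A regular signal is less complex than white noise.
rng = np.random.default_rng(2)
t = np.arange(600)
se_sine = sample_entropy(np.sin(2 * np.pi * t / 50))
se_noise = sample_entropy(rng.standard_normal(600))
```

The quadratic pairwise comparison above is exactly the cost that a faster SE implementation would aim to reduce.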
Time series analysis methods have been utilised to study complex systems that
appear ubiquitous in nature, but their use has been limited to certain dynamic
systems (e.g. analysing variables affecting stock values). In this thesis, I have
also investigated the nature of the dynamic system underlying HRV and shown that
the Embedding Dimension can unveil two variables that determine HRV.
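Embedding-dimension analysis builds on time-delay reconstruction of the series. A minimal sketch of that building block follows, with hypothetical dimension and lag values, since the abstract does not give the ones used for HRV.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style time-delay embedding: row i is
    [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[k * tau: k * tau + n] for k in range(dim)])

# Example: embed a short series with dimension 2 and lag 3.
emb = delay_embed(np.arange(10.0), dim=2, tau=3)
```

Estimating the embedding dimension itself would then involve checking, for increasing `dim`, when the reconstructed dynamics stop unfolding (e.g. via a false-nearest-neighbours criterion).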
Automated Characterisation and Classification of Liver Lesions From CT Scans
Cancer is a general term for a wide range of diseases that can affect any part of the body due to the rapid creation of abnormal cells that grow outside their normal boundaries. Liver cancer is one of the common diseases that cause the death of more than 600,000 people each year. Early detection is important for diagnosis and for reducing the incidence of death. Examination of liver lesions is performed with various medical imaging modalities such as Ultrasound (US), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). The improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. Moreover, CAD systems can help physicians, as a second opinion, in characterising lesions and making the diagnostic decision. Thus, CAD systems have become an important research area; in particular, they can provide diagnostic assistance to doctors to improve overall diagnostic accuracy.
The traditional methods to characterise liver lesions and differentiate normal liver tissues from abnormal ones depend largely on the radiologist's experience. Thus, CAD systems based on image processing and artificial intelligence techniques have gained a lot of attention, since they can provide constructive diagnosis suggestions to clinicians for decision making. The liver lesions are characterised in two ways: (1) using a content-based image retrieval (CBIR) approach to assist the radiologist in liver lesion characterisation; (2) calculating the high-level features that describe/characterise the liver lesion in a way that can be interpreted by humans, particularly radiologists/clinicians, based on hand-crafted/engineered computational features (low-level features) and a learning process. However, the research gap lies in deriving a high-level understanding and interpretation of medical image contents from low-level pixel analysis, based on mathematical processing and artificial intelligence methods. In our work, this gap is bridged by establishing a relation between image contents and medical meaning, in analogy to a radiologist's understanding.
This thesis explores an automated system for the classification and characterisation of liver lesions in CT scans. Firstly, the liver is segmented automatically by using anatomic medical knowledge, a histogram-based adaptive threshold and morphological operations. The lesions and vessels are then extracted from the segmented liver by applying AFCM and a Gaussian mixture model through a region growing process, respectively. Secondly, the proposed framework categorises the high-level features into two groups: the first group comprises high-level features extracted directly from the image contents (lesion location, lesion focality, calcified, scar, ...); the second group comprises high-level features inferred from the low-level features through a machine learning process to characterise the lesion (lesion density, lesion rim, lesion composition, lesion shape, ...). A novel multiple-ROI selection approach is proposed, in which regions are derived by generating an abnormality-level map based on the intensity difference and the proximity distance of each voxel with respect to normal liver tissue. Then, the associations between the low-level features, the high-level features and the appropriate ROI are derived by assigning to each ROI the ability to represent a set of lesion characteristics. Finally, a novel feature vector is built based on the high-level features and fed into an SVM for lesion classification. In contrast with most existing research, which uses low-level features only, the use of high-level features and characterisation helps in interpreting and explaining the diagnostic decision. The methods are evaluated on a dataset containing 174 CT scans. The experimental results demonstrate the efficacy of the proposed framework in the successful characterisation and classification of liver lesions in CT scans: the achieved average accuracy was 95.56% for liver lesion characterisation and 97.1% for lesion classification over the entire dataset.
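The abstract does not specify the SVM variant or kernel used in the final classification step. As an illustration only, a minimal linear SVM trained by sub-gradient descent on the hinge loss is sketched below, applied to a hypothetical two-class set of high-level feature vectors (the feature values and class structure are invented for the example).

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.01, epochs=300):
    """Primal linear SVM (hinge loss + L2 penalty) via sub-gradient
    descent. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:        # margin violated: full step
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:                                 # only the L2 shrinkage
                w -= lr * lam * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Hypothetical high-level feature vectors for two lesion classes (-1 / +1).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, (20, 4)), rng.normal(2, 0.5, (20, 4))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
```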
The proposed framework provides a more robust and efficient lesion characterisation pipeline through comprehension of the low-level features to generate semantic features. The use of high-level features (characterisation) helps in better interpretation of CT liver images. In addition, the difference-of-features approach using multiple ROIs was developed to capture lesion characteristics reliably. This is in contrast to the current research trend of extracting features from the lesion only, without paying much attention to the relation between the lesion and its surrounding area. The design of the liver lesion characterisation framework is based on prior knowledge of the medical background, to obtain a better and clearer understanding of liver lesion characteristics in medical CT images.
Deep Learning in Mobile and Wireless Networking: A Survey
The rapid uptake of mobile devices and the rising popularity of mobile
applications and services pose unprecedented demands on mobile and wireless
networking infrastructure. Upcoming 5G systems are evolving to support
exploding mobile traffic volumes, agile management of network resources to
maximize user experience, and extraction of fine-grained real-time analytics.
Fulfilling these tasks is challenging, as mobile environments are increasingly
complex, heterogeneous, and evolving. One potential solution is to resort to
advanced machine learning techniques to help manage the rise in data volumes
and algorithm-driven applications. The recent success of deep learning
underpins new and powerful tools that tackle problems in this space.
In this paper we bridge the gap between deep learning and mobile and wireless
networking research, by presenting a comprehensive survey of the crossovers
between the two areas. We first briefly introduce essential background and
state-of-the-art in deep learning techniques with potential applications to
networking. We then discuss several techniques and platforms that facilitate
the efficient deployment of deep learning onto mobile systems. Subsequently, we
provide an encyclopedic review of mobile and wireless networking research based
on deep learning, which we categorize by different domains. Drawing from our
experience, we discuss how to tailor deep learning to mobile environments. We
complete this survey by pinpointing current challenges and open future
directions for research.
Computational Modelling of Concrete and Concrete Structures
Computational Modelling of Concrete and Concrete Structures contains the contributions to the EURO-C 2022 conference (Vienna, Austria, 23-26 May 2022). The papers review and discuss research advancements and assess the applicability and robustness of methods and models for the analysis and design of concrete, fibre-reinforced and prestressed concrete structures, as well as masonry structures. Recent developments include methods of machine learning, novel discretisation methods, probabilistic models, and consideration of a growing number of micro-structural aspects in multi-scale and multi-physics settings. In addition, trends towards the material scale with new fibres and 3D printable concretes, and life-cycle oriented models for ageing and durability of existing and new concrete infrastructure are clearly visible. Overall computational robustness of numerical predictions and mathematical rigour have further increased, accompanied by careful model validation based on respective experimental programmes. The book will serve as an important reference for both academics and professionals, stimulating new research directions in the field of computational modelling of concrete and its application to the analysis of concrete structures. EURO-C 2022 is the eighth edition of the EURO-C conference series after Innsbruck 1994, Bad Gastein 1998, St. Johann im Pongau 2003, Mayrhofen 2006, Schladming 2010, St. Anton am Arlberg 2014, and Bad Hofgastein 2018. The overarching focus of the conferences is on computational methods and numerical models for the analysis of concrete and concrete structures.