Prospects for Theranostics in Neurosurgical Imaging: Empowering Confocal Laser Endomicroscopy Diagnostics via Deep Learning
Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence
imaging technology that has the potential to increase intraoperative precision,
extend resection, and tailor surgery for malignant invasive brain tumors
because of its subcellular dimension resolution. Despite its promising
diagnostic potential, interpreting the gray tone fluorescence images can be
difficult for untrained users. In this review, we provide a detailed
description of the bioinformatic analysis methodology for CLE images that begins
to help the neurosurgeon and pathologist rapidly connect on-the-fly
intraoperative imaging, pathology, and surgical observation into a
unified diagnostic system within the concept of theranostics. We present an overview
and discuss deep learning models for automatic detection of diagnostic CLE
images, along with the effect of various training regimes and ensemble modeling on the
power of deep learning predictive models. Two major approaches reviewed in this
paper include the models that can automatically classify CLE images into
diagnostic/nondiagnostic, glioma/nonglioma, tumor/injury/normal categories and
models that can localize histological features on the CLE images using weakly
supervised methods. We also briefly review advances in the deep learning
approaches used for CLE image analysis in other organs. Significant advances in
speed and precision of automated diagnostic frame selection would augment the
diagnostic potential of CLE, improve operative workflow, and ease integration into
brain tumor surgery. Such technology and bioinformatics analytics lend
themselves to improved precision, personalization, and theranostics in brain
tumor treatment.
Comment: See the final version published in Frontiers in Oncology here:
https://www.frontiersin.org/articles/10.3389/fonc.2018.00240/ful
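As an illustration of the diagnostic/nondiagnostic frame classification discussed in the abstract above, the sketch below trains a plain logistic-regression classifier on simple hand-crafted frame statistics computed from synthetic frames. The reviewed work uses deep convolutional networks on real CLE data; the features, synthetic frames, and separability here are purely illustrative assumptions.

```python
import numpy as np

def frame_features(img):
    """Simple per-frame statistics as a stand-in for learned CNN features."""
    gy, gx = np.gradient(img.astype(float))
    edge = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(), edge.mean()])

def train_logistic(X, y, lr=0.1, steps=2000):
    """Plain logistic regression fitted by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        g = p - y                                 # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
# Synthetic stand-ins: "diagnostic" frames carry more texture/contrast
diagnostic = [rng.normal(0.5, 0.25, (32, 32)) for _ in range(20)]
nondiagnostic = [rng.normal(0.5, 0.05, (32, 32)) for _ in range(20)]
X = np.array([frame_features(f) for f in diagnostic + nondiagnostic])
y = np.array([1] * 20 + [0] * 20)
X = (X - X.mean(0)) / X.std(0)                    # normalise features
w, b = train_logistic(X, y)
acc = (((X @ w + b) > 0) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real pipeline would replace the hand-crafted statistics with CNN features and evaluate on held-out frames rather than on the training set.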
Link Prediction in Complex Networks: A Survey
Link prediction in complex networks has attracted increasing attention from
both physical and computer science communities. The algorithms can be used to
extract missing information, identify spurious interactions, evaluate network
evolving mechanisms, and so on. This article summarizes recent progress in
link prediction algorithms, emphasizing contributions from physical
perspectives and approaches, such as random-walk-based methods and
maximum likelihood methods. We also introduce three typical applications:
reconstruction of networks, evaluation of network evolving mechanisms, and
classification of partially labelled networks. Finally, we introduce some other
applications and outline future challenges of link prediction algorithms.
Comment: 44 pages, 5 figures
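Two of the local similarity indices covered by such surveys, Common Neighbours (CN) and Resource Allocation (RA), can be computed directly from adjacency sets. The toy graph below is an invented example:

```python
# Adjacency sets of a small undirected toy graph
adj = {
    1: {2, 3, 4},
    2: {1, 3},
    3: {1, 2, 4},
    4: {1, 3, 5},
    5: {4},
}

def common_neighbours(u, v):
    """CN score: the number of neighbours u and v share."""
    return len(adj[u] & adj[v])

def resource_allocation(u, v):
    """RA score: like CN, but penalises high-degree common neighbours."""
    return sum(1.0 / len(adj[w]) for w in adj[u] & adj[v])

# Score the non-edge (2, 4): neighbours of 2 = {1,3}, of 4 = {1,3,5}
print(common_neighbours(2, 4))     # 2 common neighbours: nodes 1 and 3
print(resource_allocation(2, 4))   # 1/deg(1) + 1/deg(3) = 1/3 + 1/3
```

In practice, candidate non-edges are ranked by such scores and the top-ranked pairs are predicted as missing links.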
Integral-based filtering of continuous glucose sensor measurements for glycaemic control in critical care
Hyperglycaemia is prevalent in critical illness and increases the risk of further
complications and mortality, while tight control can reduce mortality by up to 43%.
Adaptive control methods are capable of highly accurate, targeted blood glucose
regulation using only the limited number of manual measurements available, given the
patient discomfort and labour intensity such measurements involve. Therefore, the
option to obtain greater data density using emerging continuous glucose sensing
devices is attractive. However, the few such systems currently available can have
errors in excess of 20-30%. In contrast, typical bedside testing kits have errors of
approximately 7-10%. Despite the greater measurement frequency, these larger errors
significantly impact the resulting glucose and patient-specific parameter estimates,
and thus the control actions determined, creating an important safety and
performance issue. This paper models the impact of the Continuous
Glucose Monitoring System (CGMS, Medtronic, Northridge, CA) on model-based
parameter identification and glucose prediction. An integral-based fitting and filtering
method is developed to reduce the effect of these errors. A noise model is developed
based on CGMS data reported in the literature, and is slightly conservative with a
mean Clarke Error Grid (CEG) correlation of R=0.81 (range: 0.68-0.88) as compared to a reported value of R=0.82 in a critical care study. Using 17 virtual patient profiles
developed from retrospective clinical data, this noise model was used to test the
methods developed. Monte-Carlo simulation for each patient resulted in an average
absolute one-hour glucose prediction error of 6.20% (range: 4.97-8.06%) with an
average standard deviation per patient of 5.22% (range: 3.26-8.55%). Note that all the
methods and results are generalisable to similar applications outside of critical care,
such as less acute wards and eventually ambulatory individuals. Clinically, the results
show one possible computational method for managing the larger errors encountered
in emerging continuous blood glucose sensors, thus enabling their more effective use
in clinical glucose regulation studies.
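The idea behind integral-based identification can be sketched minimally: integrating the model equation turns noisy samples into smoothed cumulative sums before the parameter fit, so zero-mean sensor noise is averaged out rather than amplified. The one-compartment decay model, 20% noise level, and sampling grid below are illustrative assumptions, not the paper's clinically validated glucose model.

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.05                      # assumed decay rate (illustrative)
t = np.arange(0.0, 120.0, 5.0)    # 5-min sampling, like a CGM trace
G = 10.0 * np.exp(-p_true * t)    # noise-free decay obeying dG/dt = -p*G
noisy = G * (1 + rng.normal(0.0, 0.2, t.size))   # ~20% multiplicative noise

# Integral-based identification: G(t) - G(0) = -p * integral of G,
# so p is found by least squares against the cumulative (trapezoidal)
# integral, which smooths the sensor noise before fitting.
I = np.concatenate(([0.0],
                    np.cumsum((noisy[1:] + noisy[:-1]) / 2 * np.diff(t))))
p_int = -np.linalg.lstsq(I[:, None], (noisy - noisy[0])[:, None],
                         rcond=None)[0][0, 0]

# A naive derivative-based estimate differentiates the noise instead
dGdt = np.diff(noisy) / np.diff(t)
p_der = -np.mean(dGdt / noisy[:-1])

print(f"integral-based estimate: {p_int:.4f}  (true {p_true})")
print(f"derivative-based estimate: {p_der:.4f}")
```

The integral formulation also yields a linear least-squares problem, which is why such fits are fast enough for bedside use.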
Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates
The study of cerebral anatomy in developing neonates is of great importance for
the understanding of brain development during the early period of life. This
dissertation therefore focuses on three challenges in the modelling of cerebral
anatomy in neonates during brain development. The methods that have been
developed all use Magnetic Resonance Images (MRI) as source data.
To facilitate study of vascular development in the neonatal period, a set of image
analysis algorithms are developed to automatically extract and model cerebral
vessel trees. The whole process consists of cerebral vessel tracking from
automatically placed seed points, vessel tree generation, and vasculature
registration and matching. These algorithms have been tested on clinical Time-of-
Flight (TOF) MR angiographic datasets.
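Seed-point-based vessel extraction can be illustrated, in much simplified form, as intensity-threshold region growing from a seed on a toy 2-D image; the dissertation's actual tracker on TOF MRA data is considerably more sophisticated, and the image, seed, and threshold below are invented.

```python
import numpy as np
from collections import deque

# Toy 2-D "angiogram": bright vessel voxels on a dark background
img = np.zeros((9, 9))
img[4, 1:8] = 1.0     # horizontal vessel segment (7 voxels)
img[1:5, 4] = 1.0     # branch joining it (3 extra voxels)

def grow_vessel(img, seed, thresh=0.5):
    """Breadth-first region growing from a seed point: collect the
    connected voxels (4-connectivity) brighter than a threshold."""
    visited = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and (nr, nc) not in visited and img[nr, nc] > thresh):
                visited.add((nr, nc))
                queue.append((nr, nc))
    return visited

vessel = grow_vessel(img, seed=(4, 4))
print(len(vessel))   # 7 horizontal + 3 branch voxels = 10
```

A tree model would then be built by ordering the grown voxels along centrelines from the seed outward.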
To facilitate study of the neonatal cortex a complete cerebral cortex segmentation
and reconstruction pipeline has been developed. Segmentation of the neonatal
cortex is not effectively done by existing algorithms designed for the adult brain
because the contrast between grey and white matter is reversed. This causes pixels
containing tissue mixtures to be incorrectly labelled by conventional methods. The
neonatal cortical segmentation method that has been developed is based on a novel
expectation-maximization (EM) method with explicit correction for mislabelled
partial volume voxels. Based on the resulting cortical segmentation, an implicit
surface evolution technique is adopted for the reconstruction of the cortex in
neonates. The performance of the method is investigated by performing a detailed
landmark study.
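The core of an EM tissue classifier can be sketched as a two-class Gaussian mixture fitted to voxel intensities. The synthetic intensities and class parameters below are assumptions, and the method described above additionally models mislabelled partial-volume voxels, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic intensities: in neonatal T2-weighted MRI the grey/white
# contrast is reversed relative to adults; here "white matter" (class 1)
# is brighter than "grey matter" (class 0), purely for illustration.
gm = rng.normal(100.0, 10.0, 6000)
wm = rng.normal(150.0, 10.0, 4000)
x = np.concatenate([gm, wm])

# Two-class Gaussian mixture fitted by expectation-maximisation; a
# partial-volume-aware variant would add mixture classes between the
# two tissue means.
mu = np.array([90.0, 160.0])
sd = np.array([20.0, 20.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each class for each voxel
    lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2 * np.pi))
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate class parameters from the responsibilities
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / n.sum()

print(mu.round(1), pi.round(2))
```

Hard segmentation labels follow by assigning each voxel to the class with the larger responsibility.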
To facilitate study of cortical development, a cortical surface registration algorithm
for aligning the cortical surface is developed. The method first inflates extracted
cortical surfaces and then performs a non-rigid surface registration using free-form
deformations (FFDs) to remove residual alignment. Validation experiments using
data labelled by an expert observer demonstrate that the method can capture local
changes and follow the growth of specific sulci.
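The idea of a free-form deformation, displacements specified at sparse control points and interpolated into a dense field, can be sketched in one dimension. Real FFD registration interpolates cubic B-splines over a 3-D control-point lattice; the linear interpolation, control points, and displacements here are hypothetical stand-ins.

```python
import numpy as np

# 1-D free-form deformation: displacements defined at sparse control
# points and interpolated everywhere between them (linear interpolation
# stands in for the B-spline basis of a true FFD).
control_x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
control_d = np.array([0.0, 2.0, -1.0, 3.0, 0.0])   # hypothetical values

def deform(x):
    """Map surface coordinates through the dense displacement field."""
    return x + np.interp(x, control_x, control_d)

pts = np.array([10.0, 50.0, 90.0])
print(deform(pts))   # [10.8, 49.0, 91.2]
```

Registration then optimises the control-point displacements so that the deformed source surface matches the target.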
Vibration suppression using fractional-order disturbance observer based adaptive grey predictive controller
A novel control strategy is proposed for vibration suppression that integrates a fractional-order disturbance observer (FDOB) with an adaptive grey predictive controller (AGPC). The AGPC realizes the outer control loop for better transient performance by predicting system outputs ahead with a metabolic GM(1,1) model, and an adaptive step-switching module is adopted for the grey predictor. The FDOB is used to obtain a disturbance estimate and generate a compensation signal; since the order of its Q-filter is extended to the real-number domain, the FDOB offers a wider range from which to select a suitable tradeoff between robustness and vibration suppression. For implementation of the fractional-order Q-filter, the broken-line approximation method is introduced. The proposed control strategy is simple in its control-law derivation, and its effectiveness is validated by numerical simulations.
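The GM(1,1) grey model at the heart of the predictor can be sketched as follows. The metabolic variant used in the AGPC re-fits the model on a rolling window of recent samples, which this minimal version omits; the input series is invented.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey prediction: fit dx1/dt + a*x1 = b on the accumulated
    series x1 = cumsum(x0), then forecast future values of x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def x1_hat(k):
        # Time response of the whitened equation, with x1_hat(0) = x0[0]
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    # Forecast x0 by differencing consecutive x1_hat values
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# A roughly geometric sequence (20% growth per step)
series = [10.0, 12.0, 14.4, 17.28, 20.736]
pred = gm11_forecast(series, steps=1)[0]
print(f"next value forecast: {pred:.2f}")
```

GM(1,1) suits short, smooth, near-exponential data, which is why grey predictors pair well with an adaptive step-switching scheme for changing dynamics.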
Guidelines for the recording and evaluation of pharmaco-EEG data in man: the International Pharmaco-EEG Society (IPEG)
The International Pharmaco-EEG Society (IPEG) presents updated guidelines summarising the requirements for the recording and computerised evaluation of pharmaco-EEG data in man. Since the publication of the first pharmaco-EEG guidelines in 1982, technical and data-processing methods have advanced steadily, thus enhancing data quality and expanding the palette of tools available to investigate the action of drugs on the central nervous system (CNS), determine the pharmacokinetic and pharmacodynamic properties of novel therapeutics, and evaluate the CNS penetration or toxicity of compounds. However, a review of the literature reveals inconsistent operating procedures from one study to another. While this fact does not invalidate results per se, the lack of standardisation constitutes a regrettable shortcoming, especially in the context of drug development programmes. Moreover, this shortcoming hampers reliable comparisons between outcomes of studies from different laboratories and hence also prevents the pooling of data, which is a requirement for sufficiently powering the validation of novel analytical algorithms and EEG-based biomarkers. The present updated guidelines reflect the consensus of a global panel of EEG experts and are intended to assist investigators using pharmaco-EEG in clinical research by providing clear and concise recommendations, thereby enabling standardisation of methodology and facilitating comparability of data across laboratories.
Signal Processing and Machine Learning Techniques Towards Various Real-World Applications
Machine learning (ML) has played an important role in several modern technological innovations and has become an important tool for researchers in various fields of interest. Beyond engineering, ML techniques have spread across various fields of study, such as health care, medicine, diagnostics, social science, finance, and economics. These techniques require data to train the algorithms, model a complex system, and make predictions based on that model. Owing to the development of sophisticated sensors, it has become easier to collect the large volumes of data needed to form and test hypotheses using ML. The promising results obtained using ML have opened up new research opportunities across these fields, and this dissertation is a manifestation of that. Here, some unique studies are presented, from which valuable inferences have been drawn for real-world complex systems. Each study has its own motivation and relevance to the real world, and an ensemble of signal processing (SP) and ML techniques is explored in each. This dissertation provides the detailed systematic approach and discusses the results achieved in each study; the inferences drawn play a vital role in areas of science and technology and warrant further investigation. This dissertation also provides a set of useful SP and ML tools for researchers in various fields of interest.
Doctoral Dissertation, Electrical Engineering, 201
Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms
ABSTRACT
Diabetes Mellitus (DM) comprises a group of metabolic diseases whose main characteristic is the presence of high glucose levels in blood. It is one of the diseases with the greatest social and health impact, both for its prevalence and for the consequences of the chronic complications it entails.
One line of research to improve the quality of life of people with diabetes has a technical focus. It involves several directions, including the development and improvement of devices to estimate plasma glucose "online": continuous glucose monitoring systems (CGMS), both invasive and non-invasive. These devices estimate plasma glucose from sensor measurements taken in compartments alternative to blood. Current commercially available CGMS are minimally invasive and offer an estimation of plasma glucose from measurements in the interstitial fluid.
CGMS is a key component of the technical approach to building the artificial pancreas, aiming at closing the loop in combination with an insulin pump. Yet the accuracy of current CGMS is still poor, which may partly be due to the low performance of the implemented Calibration Algorithm (CA). In addition, the sensor-to-patient sensitivity differs between patients, and also over time for the same patient.
It is clear, then, that the development of new efficient calibration algorithms for CGMS is an interesting and challenging problem.
The indirect measurement of plasma glucose through interstitial glucose is a main confounder of CGMS accuracy. Many components take part in the glucose transport dynamics. Indeed, physiology might suggest the existence of different local behaviors in the glucose transport process.
For this reason, local modeling techniques may be the best option for the structure of the desired CA. In such a structure, similar input samples are represented by the same local model, and the integration of all local models, each weighted by the input region where it is valid, yields the final model of the whole data set.
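A minimal sketch of the local-modeling idea, assuming a hypothetical piecewise-linear sensor characteristic: partition the input space by clustering, then fit one linear calibration model per region. The sensitivities, noise level, and two-region split below are invented for illustration and are not the thesis's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic sensor current (arbitrary units) vs. plasma glucose, with a
# different local sensitivity in the low and high ranges (hypothetical)
current = rng.uniform(0.0, 10.0, 200)
glucose = np.where(current < 5.0, 4.0 + 1.0 * current, -1.0 + 2.0 * current)
glucose = glucose + rng.normal(0.0, 0.1, current.size)

# Partition the input space with 1-D k-means (two local regions)
centers = np.array([2.0, 8.0])
for _ in range(20):
    labels = np.argmin(np.abs(current[:, None] - centers), axis=1)
    centers = np.array([current[labels == j].mean() for j in (0, 1)])

# Fit one linear calibration model per region; the overall calibration
# is the union of the local models, each valid in its own input region.
models = []
for j in (0, 1):
    mask = labels == j
    A = np.column_stack([current[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, glucose[mask], rcond=None)
    models.append(coef)

def estimate(c):
    """Route the sample to its region's local model."""
    j = int(np.argmin(np.abs(c - centers)))
    slope, intercept = models[j]
    return slope * c + intercept

print(round(estimate(2.0), 1), round(estimate(8.0), 1))
```

Fuzzy or probabilistic cluster memberships would smooth the hand-off between local models at the region boundary.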
Clustering is t
Barceló Rico, F. (2012). Multimodel Approaches for Plasma Glucose Estimation in Continuous Glucose Monitoring. Development of New Calibration Algorithms [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17173