Novel image processing algorithms and methods for improving their robustness and operational performance
Image processing algorithms have developed rapidly in recent years. Imaging functions are becoming more common in electronic devices, demanding better image quality and more robust image capture in challenging conditions. Increasingly complex algorithms are being developed to achieve better signal-to-noise characteristics, more accurate colours and wider dynamic range, approaching the performance levels of the human visual system. [Continues.]
On the Recognition of Emotion from Physiological Data
This work encompasses several objectives but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, together with self-reports of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing and extracting features from six different physiological signals: electrocardiogram (ECG), blood volume pulse (BVP), galvanic skin response (GSR), electromyography (EMG) for the corrugator muscle, skin temperature at the finger, and respiratory rate. The state of the art in PPR emotion detection was advanced by detecting nine different weakly induced emotional states at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents several investigations into numerical feature extraction from physiological signals and devotes a chapter to collating and trialling facial electromyography techniques. We also created a hardware device for collecting participants' self-reported emotional states, which yielded several improvements to the experimental procedure.
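The pipeline this abstract describes, numerical features extracted from physiological signals and fed to a classifier, can be sketched minimally as follows. The four features and the nearest-centroid classifier are illustrative stand-ins, not the study's actual choices:

```python
import numpy as np

def extract_features(signal):
    """Four simple statistical features of a 1-D physiological signal:
    mean, standard deviation, range, and mean absolute first difference."""
    d = np.diff(signal)
    return np.array([signal.mean(), signal.std(),
                     signal.max() - signal.min(), np.abs(d).mean()])

class NearestCentroid:
    """Deliberately minimal classifier, standing in for the assortment
    of classifiers trialled in the study."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.labels_])
        return self

    def predict(self, X):
        # distance of every sample to every class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

On synthetic signals whose classes differ mainly in variance, even this toy pipeline separates them; the study's contribution is richer feature sets and classifiers that scale this idea to nine emotion categories.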
Non-contact vision-based deformation monitoring on bridge structures
Information on deformation is an important metric for bridge condition and performance assessment, e.g. identifying abnormal events, calibrating bridge models and estimating load-carrying capacities. However, accurate measurement of bridge deformation, especially for long-span bridges, remains a challenging task. The major aim of this research is to develop practical and cost-effective techniques for accurate deformation monitoring of bridge structures. Vision-based systems are taken as the focus of the study for several reasons: low cost, easy installation, adequate sample rates, and remote, distributed sensing.
This research proposes a custom-developed vision-based system for bridge deformation monitoring. The system supports both consumer-grade and professional cameras and incorporates four advanced video tracking methods to adapt to different test situations. The sensing accuracy is first quantified in laboratory conditions. The performance in field testing is then evaluated on one short-span and one long-span bridge, considering several influential factors, i.e. long-range sensing, low-contrast target patterns, pattern changes and lighting changes. Through these case studies, suggestions for tracking-method selection in field testing are summarised, and possible limitations of vision-based systems are illustrated.
To overcome the observed limitations of vision-based systems, this research further proposes a mixed system combining cameras with accelerometers for accurate deformation measurement. To integrate displacement with acceleration data autonomously, a novel data fusion method based on the Kalman filter and maximum likelihood estimation is proposed. Field test validation shows that the method is effective in improving displacement accuracy and widening the frequency bandwidth. The mixed system based on data fusion is applied in field testing of a railway bridge under adverse test conditions (e.g. low-contrast target patterns and camera shake). Analysis results indicate that the system offers higher accuracy than a camera alone and is viable for bridge influence line estimation.
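A minimal sketch of the displacement-acceleration fusion idea: a one-dimensional Kalman filter whose prediction step is driven by accelerometer data and whose update step uses camera displacement. The noise parameters q and r are assumed constants here, whereas the thesis estimates them by maximum likelihood:

```python
import numpy as np

def fuse_displacement_acceleration(disp, acc, dt, q=1e-3, r=1e-3):
    """1-D Kalman-filter sketch: state x = [position, velocity];
    accelerometer samples drive the prediction, camera displacement is
    the measurement. q and r are assumed noise levels, not the
    maximum-likelihood estimates used in the thesis."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])                # acceleration input
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])         # process noise
    x = np.array([disp[0], 0.0])
    P = np.eye(2)
    fused = []
    for z, a in zip(disp, acc):
        x = F @ x + B * a                          # predict with accelerometer
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                        # innovation variance
        K = (P @ H.T) / S                          # Kalman gain, shape (2, 1)
        x = x + (K * (z - H @ x)).ravel()          # update with camera data
        P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return np.array(fused)
```

Run on noisy displacement of a constant-velocity motion with a quiet accelerometer, the fused estimate is smoother than the raw camera displacement, which is the qualitative effect the thesis reports.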
Given its considerable accuracy and resolution in the time and frequency domains, the potential of vision-based measurement for vibration monitoring is also investigated. The proposed vision-based system is applied to a cable-stayed footbridge for deck deformation and cable vibration measurement under pedestrian loading. Analysis results indicate that the measured data enable accurate estimation of modal frequencies and could be used to investigate variations of modal frequencies under varying pedestrian loads. In this application the vision-based system performs multi-point vibration measurement and provides results comparable to those obtained using an array of accelerometers.
Essays on the nonlinear and nonstochastic nature of stock market data
The nature and structure of stock-market price dynamics is an area of ongoing and rigorous scientific debate. For almost three decades, most emphasis has been placed on upholding the concepts of Market Efficiency and rational investment behaviour. Such an approach has favoured the development of numerous linear and nonlinear models, mainly of stochastic foundations. Advances in mathematics have shown that nonlinear deterministic processes, i.e. "chaos", can produce sequences that appear random to linear statistical techniques. Until recently, investment finance has been a science based on linearity and stochasticity, so it is important that studies of Market Efficiency include investigations of chaotic determinism and power laws. As far as chaos is concerned, the research results are rather mixed or inconclusive, and fraught with controversy. This inconclusiveness is attributed to two things: the nature of stock-market time series, which are highly volatile and contaminated with a substantial amount of noise of largely unknown structure; and the lack of appropriately robust statistical testing procedures. To overcome these difficulties, this thesis shows empirically, and for the first time, how novel techniques from the recent chaotic and signal analysis literature can be combined under a univariate time series analysis framework. Three basic methodologies are investigated: recurrence analysis, surrogate data and wavelet transforms. Recurrence analysis is used to reveal qualitative and quantitative evidence of nonlinearity and nonstochasticity for a number of stock markets. It is then demonstrated how surrogate data can be simulated, under a statistical hypothesis testing framework, to provide similar evidence. Finally, it is shown how wavelet transforms can be applied to reveal various salient features of the market data and to provide a platform for nonparametric regression and denoising.
The results indicate that, without invoking any parametric model-based assumptions, one can readily deduce that there is more than linearity and stochastic randomness in the data. Moreover, substantial evidence of recurrent patterns and aperiodicities is discovered, which can be attributed to chaotic dynamics. These results are consistent with existing research indicating some types of nonlinear dependence in financial data. In conclusion, the value of this thesis lies in its contribution to the overall evidence on Market Efficiency and chaotic determinism in financial markets. The main implication is that the theory of equilibrium pricing in financial markets may need reconsideration in order to accommodate the structures revealed.
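The surrogate-data methodology can be illustrated with a short sketch: phase-randomised surrogates preserve a series' linear (spectral) structure while destroying any nonlinear structure, so a nonlinear statistic of the data can be rank-tested against the surrogate distribution. The time-reversal-asymmetry statistic below is a common generic choice, not necessarily the one used in the thesis:

```python
import numpy as np

def phase_surrogate(x, rng):
    """Fourier (phase-randomised) surrogate: preserves the power
    spectrum (linear structure) of x while destroying nonlinear structure."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0                 # keep the mean
    if x.size % 2 == 0:
        phases[-1] = 0.0            # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def trev(x, lag=1):
    """Time-reversal asymmetry statistic: near zero for linear Gaussian
    processes, typically non-zero for nonlinear deterministic series."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2)**1.5

def surrogate_test(x, n_surr=99, seed=0):
    """Rank test: is |trev| of the data extreme relative to the
    surrogate distribution? Returns an approximate p-value."""
    rng = np.random.default_rng(seed)
    t0 = abs(trev(x))
    ts = [abs(trev(phase_surrogate(x, rng))) for _ in range(n_surr)]
    return (1 + sum(t >= t0 for t in ts)) / (n_surr + 1)
```

A sawtooth-like series (slow rise, sharp fall) is strongly time-asymmetric and is rejected by this test, while its surrogates share its spectrum exactly; that contrast is the essence of the hypothesis-testing framework the abstract describes.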
Bayesian plug & play methods for inverse problems in imaging.
Thèse de Doctorat de Mathématiques Appliquées (Université de Paris); Tesis de Doctorado en Ingeniería Eléctrica (Universidad de la República).
This thesis deals with Bayesian methods for solving ill-posed inverse problems in imaging with learnt image priors. The first part of the thesis (Chapter 3) concentrates on two particular problems, namely joint denoising and decompression, and multi-image super-resolution. After an extensive study of the noise statistics for these problems in the transformed (wavelet or Fourier) domain, we derive two novel algorithms to solve this inverse problem. The first is based on a multi-scale self-similarity prior and can be seen as a transform-domain generalization of the celebrated Non-Local Bayes algorithm to the case of non-Gaussian noise. The second uses a neural-network denoiser to implicitly encode the image prior, and a splitting scheme to incorporate this prior into an optimization algorithm to find a MAP-like estimator. The second part of the thesis concentrates on the Variational AutoEncoder (VAE) model and some of its variants, which show its capability to explicitly capture the probability distribution of high-dimensional datasets such as images. Based on these VAE models, we propose two ways to incorporate them as priors for general inverse problems in imaging: • The first (Chapter 4) computes a joint (space-latent) MAP estimator named Joint Posterior Maximization using an Autoencoding Prior (JPMAP). We show theoretical and experimental evidence that the proposed objective function satisfies a weak bi-convexity property, which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach compared with other non-convex MAP approaches, which more often get stuck in spurious local optima.
• The second (Chapter 5) develops a Gibbs-like posterior sampling algorithm for the exploration of posterior distributions of inverse problems, using multiple chains and a VAE as the image prior. We show how to use those samples to obtain MMSE estimates and their corresponding uncertainty.
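The splitting scheme mentioned above, in which a denoiser implicitly encodes the image prior, can be sketched with half-quadratic splitting on a toy one-dimensional problem. The moving-average denoiser and identity forward operator are simplifications; the thesis uses a learnt neural-network denoiser and general degradation operators:

```python
import numpy as np

def box_denoiser(x, k=5):
    """Stand-in denoiser: a moving-average smoother. In the thesis the
    prior is encoded implicitly by a learnt neural-network denoiser."""
    return np.convolve(x, np.ones(k) / k, mode='same')

def pnp_hqs(y, sigma2, rho=50.0, n_iter=20, denoiser=box_denoiser):
    """Plug-and-play half-quadratic splitting sketch for the toy inverse
    problem y = x + noise (identity forward operator, for brevity).
    sigma2 is the noise variance, rho the splitting penalty weight."""
    x = y.copy()
    z = y.copy()
    for _ in range(n_iter):
        # x-step: argmin_x ||y - x||^2/(2*sigma2) + (rho/2)*||x - z||^2,
        # solved in closed form for the quadratic data-fit term
        x = (y / sigma2 + rho * z) / (1.0 / sigma2 + rho)
        # z-step: the image prior enters only through the denoiser call,
        # used in place of an explicit proximal operator
        z = denoiser(x)
    return x
```

Alternating these two steps drives x toward an estimate that both fits the data and looks plausible to the denoiser, which is the MAP-like behaviour the abstract describes.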
Models and analysis of vocal emissions for biomedical applications: 5th International Workshop: December 13-15, 2007, Firenze, Italy
The MAVEBA Workshop is held biennially; its proceedings collect the scientific papers presented as oral and poster contributions during the conference. The main subjects are the development of theoretical and mechanical models as an aid to the study of the main phonatory dysfunctions, as well as biomedical engineering methods for the analysis of voice signals and images in support of clinical diagnosis and the classification of vocal pathologies. The Workshop has the sponsorship of: Ente Cassa Risparmio di Firenze, COST Action 2103, the Biomedical Signal Processing and Control journal (Elsevier), and the IEEE Biomedical Engineering Soc. Special issues of international journals have been, and will be, published collecting selected papers from the conference.
A high-throughput system for automated training combined with continuous long-term neural recordings in rodents
Addressing the neural mechanisms underlying complex learned behaviors requires training animals in well-controlled tasks while concurrently measuring neural activity in their brains, an often time-consuming and labor-intensive process that can severely limit the feasibility of such studies. To overcome this constraint, we developed a fully computer-controlled, general-purpose system for high-throughput training of rodents. By standardizing and automating the implementation of predefined training protocols within the animal's home-cage, our system dramatically reduces the effort involved in animal training while also removing human errors and biases from the process. We deployed this system to train rats in a variety of sensorimotor tasks, achieving learning rates comparable to existing, but more laborious, methods. By incrementally and systematically increasing the difficulty of the task over weeks of training, rats were able to master motor tasks that, in complexity and structure, resemble ones used in primate studies of motor sequence learning. We also developed a low-cost system that can be attached to the home-cages to record neural activity continuously and unsupervised over the entire months-long training process. Our system allows long-term tethering of animals and is designed to record and process tens of terabytes of raw data at very high speeds. We developed a novel spike-sorting algorithm that allows us to track the activity of many simultaneously recorded single neurons for weeks, despite large gradual changes in their spike waveforms. This is done with minimal human input, enabling, for the first time, the identification of almost every spike from a single neuron over many weeks of training. We used these systems to record from the motor cortex of rats as they learned to perform a sequence of highly stereotyped movements, and found that neural activity in the motor cortex was exquisitely correlated with the behavior.
Surprisingly, the pattern of neural activity in the motor cortex was similar before and after learning, despite the fact that the motor cortex is required to learn the task but not to perform it once it has been acquired.
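One simple way to follow the gradual waveform drift that the spike-sorting problem above involves is template matching with slowly adapting templates. This is an assumed illustration of the general idea, not the thesis's actual algorithm:

```python
import numpy as np

class DriftingTemplateTracker:
    """Toy sketch of tracking units across weeks of recording: assign
    each spike to its nearest waveform template, then nudge the winning
    template toward the new spike with a slow running average so that
    gradual waveform drift is followed (assumed mechanism, for
    illustration only)."""

    def __init__(self, templates, alpha=0.1):
        self.templates = [np.asarray(t, dtype=float).copy() for t in templates]
        self.alpha = alpha   # adaptation rate: small = slow, stable tracking

    def assign(self, spike):
        # nearest template by Euclidean distance between waveforms
        dists = [np.linalg.norm(spike - t) for t in self.templates]
        k = int(np.argmin(dists))
        # adapt only the winning template so it follows slow drift
        self.templates[k] += self.alpha * (spike - self.templates[k])
        return k
```

Because each template follows its own unit's slow changes, assignments remain stable even when a unit's waveform ends far from where it started, which is the property the abstract highlights over weeks of training.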
Automatic Spatiotemporal Analysis of Cardiac Image Series
ABSTRACT
Cardiovascular disease continues to be the leading cause of death in North America. In adult
and, alarmingly, ever younger populations, the so-called obesity epidemic, largely driven by
lifestyle factors that include poor diet, lack of exercise and smoking, incurs enormous stresses
on the healthcare system. The primary cause of serious morbidity and mortality for these
patients is atherosclerosis, the build-up of plaque inside high-pressure vessels such as the coronary arteries. These lesions can lead to ischemic disease and may progress to severe blood-flow blockage or thrombosis, often with infarction or other grave consequences. Besides
the stenosis-related outcomes, the arterial walls of plaque-ridden regions manifest increased stiffness, which may worsen the patient's prognosis. In pediatric populations, the
most prevalent acquired cardiovascular pathology is Kawasaki disease. This acute vasculitis
may affect the structural integrity of coronary artery walls and progress to aneurysmal lesions.
These can disturb arterial hemodynamics, leading to inadequate downstream perfusion, and may activate thrombus formation, with potentially severe outcomes.
Diagnosing these two prominent coronary artery diseases is traditionally performed using
fluoroscopic angiography. Several hundred serial x-ray projections are acquired during selective
arterial infusion of a radiodense contrast agent, which reveals the vessels’ luminal
area and possible pathological lesions. The acquired series contain highly dynamic information
on voluntary and involuntary patient movement: respiration, organ displacement and
heartbeat, for example. Current clinical analysis is largely limited to a single angiographic
image where geometrical measures will be performed manually or semi-automatically by a
radiological technician. Although widely used around the world and generally considered
the gold-standard diagnosis tool for many vascular diseases, the two-dimensional nature of
this imaging modality is limiting in terms of specifying the geometry of various pathological
regions. Indeed, the 3D structures of stenotic or aneurysmal lesions may not be fully appreciated
in 2D because their observable features are dependent on the angular configuration of
the imaging gantry. Furthermore, the presence of lesions in the coronary arteries may not
reflect the true health of the myocardium, as natural compensatory mechanisms may obviate
the need for further intervention. In light of this, cardiac magnetic resonance perfusion
imaging is increasingly gaining attention and clinical implementation, as it offers a direct
assessment of myocardial tissue viability following infarction or suspected coronary artery
disease. This type of modality is plagued, however, by motion similar to that present in fluoroscopic
imaging. This issue predisposes clinicians to laborious manual intervention in order
to align anatomical structures in sequential perfusion frames, thus hindering automation o
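Aligning anatomical structures across sequential perfusion frames is, at its simplest, a translation-registration problem; a generic phase-correlation sketch (not the thesis's method) looks like this:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer translation taking ref to mov by phase
    correlation, a common first step when registering sequential image
    frames (illustrative sketch only)."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12       # keep the phase difference only
    corr = np.fft.ifft2(cross).real      # peak located at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                      # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Estimating and undoing such shifts frame by frame is the kind of alignment step that clinicians currently perform manually, and that automated spatiotemporal analysis aims to replace.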