
    Cellular neural networks, Navier-Stokes equation and microarray image reconstruction

    Copyright © 2011 IEEE. Although the last decade has witnessed considerable improvements in microarray technology, major developments are still needed in all of its main stages, including image processing. Several hardware implementations of microarray image processing have been proposed in the literature and have proved to be promising alternatives to the currently available software systems. Their main drawback, however, is that they do not quantify the gene spot realistically: they rely on assumptions about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm that uses a cellular neural network to solve the Navier–Stokes equation. The algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for MATLAB is used to test the proposed method. Quantitative comparisons, in terms of objective criteria, are carried out between our approach and other available methods. The proposed algorithm is shown to give highly accurate and realistic measurements in a fully automated manner and within a remarkably efficient time.
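    The abstract's cellular-neural-network hardware formulation is not reproduced here, but the core idea — reconstructing the background signal under a gene spot by letting the surrounding background diffuse inward, as Navier–Stokes-style inpainting does — can be sketched in plain software. The following is a minimal, assumed-for-illustration stand-in that uses harmonic (Laplace) inpainting, the simplest relative of the paper's PDE-driven reconstruction; the function name and parameters are hypothetical.

```python
import numpy as np

def estimate_background(image, spot_mask, iters=500):
    """Estimate the background signal under a gene spot by harmonic
    inpainting: pixels inside spot_mask are repeatedly replaced by the
    mean of their 4-neighbours, so the surrounding background diffuses
    inward. This is a simplified software stand-in for the
    Navier-Stokes-driven reconstruction described in the abstract."""
    img = image.astype(float).copy()
    inside = spot_mask.astype(bool)
    for _ in range(iters):
        # Jacobi step: 4-neighbour average computed via shifted copies;
        # only pixels inside the spot mask are overwritten, so the
        # surrounding background acts as a fixed boundary condition.
        avg = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img[inside] = avg[inside]
    return img
```

    Because a linear intensity ramp is a harmonic function, this scheme recovers such a background exactly under the spot, which is the property the background-estimation step relies on.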

    Domain Generalization by Marginal Transfer Learning

    In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner. This problem arises in several applications where data distributions fluctuate because of environmental, technical, or other sources of variation. We introduce a formal framework for DG and argue that it can be viewed as a kind of supervised learning problem obtained by augmenting the original feature space with the marginal distribution of feature vectors. While our framework has several connections to conventional analysis of supervised learning algorithms, several unique aspects of DG require new methods of analysis. This work lays the learning-theoretic foundations of domain generalization, building on our earlier conference paper in which the problem of DG was introduced (Blanchard et al., 2011). We present two formal models of data generation, corresponding notions of risk, and distribution-free generalization error analysis. By focusing our attention on kernel methods, we also provide more quantitative results and a universally consistent algorithm. An efficient implementation is provided for this algorithm, which is experimentally compared to a pooling strategy on one synthetic and three real-world data sets.
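    The augmentation idea — pairing each feature vector with (a representation of) the marginal distribution of its data set — can be sketched as a product kernel on (distribution, point) pairs. The sketch below is a crude toy version, not the paper's algorithm: it uses the empirical mean of each data set as a stand-in for the kernel mean embedding, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def augmented_gram(datasets, gamma_p=1.0, gamma_x=1.0):
    """Product kernel on (distribution, point) pairs: each point is
    paired with its data set's empirical mean (a crude stand-in for a
    kernel mean embedding of the marginal distribution), and the kernel
    factorises as k_P(mean_i, mean_j) * k_x(x, x')."""
    X = np.vstack(datasets)
    # Repeat each data set's mean so it aligns row-for-row with X.
    means = np.vstack([np.repeat(d.mean(0, keepdims=True), len(d), 0)
                       for d in datasets])
    return rbf(means, means, gamma_p) * rbf(X, X, gamma_x)
```

    The resulting Gram matrix can be fed to any kernel method (e.g. an SVM with a precomputed kernel); points from data sets with similar marginals then share more kernel mass, which is the mechanism the marginal-transfer framework formalizes.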

    REGISTRATION AND SEGMENTATION OF BRAIN MR IMAGES FROM ELDERLY INDIVIDUALS

    Quantitative analysis of structural and functional MR images is a fundamental component in the assessment of brain anatomical abnormalities, in mapping functional activation onto human anatomy, in longitudinal evaluation of disease progression, and in computer-assisted neurosurgery or surgical planning. Image registration and segmentation are central to analyzing structural and functional MR brain images. However, due to increased variability in brain morphology and age-related atrophy, traditional methods for image registration and segmentation are not suitable for analyzing MR brain images from elderly individuals. The overall goal of this dissertation is to develop algorithms that improve registration and segmentation accuracy in the geriatric population. The specific aims of this work include: 1) to implement a fully deformable registration pipeline that allows a higher degree of spatial deformation and produces a more accurate deformation field; 2) to propose and validate an optimum template selection method for atlas-based segmentation; 3) to propose and validate a multi-template strategy for image normalization that characterizes brain structural variations in the elderly; 4) to develop an automated segmentation and localization method to assess white matter hyperintensities (WMH) in the elderly population; and finally 5) to study default-mode network (DMN) connectivity and white matter hyperintensity in late-life depression (LLD) using the developed registration and segmentation methods. Through a series of experiments, we have shown that the deformable registration pipeline and the template selection strategies lead to improved accuracy in brain MR image registration and segmentation, and that the automated WMH segmentation and localization method provides more specific and more accurate information about the volume and spatial distribution of WMH than traditional visual grading methods.
Using the developed methods, our clinical study provides evidence for altered DMN connectivity in LLD. The correlation between WMH volume and DMN connectivity emphasizes the role of vascular changes in the etiopathogenesis of LLD.
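    The template-selection aim (2) can be illustrated with a minimal sketch: score each candidate atlas template by its similarity to the target image and pick the best. Normalized cross-correlation is used below as one simple similarity criterion; the dissertation's actual selection method and any registration preprocessing are not reproduced, and the function names are assumptions.

```python
import numpy as np

def select_template(target, templates):
    """Pick the atlas template most similar to the target image, scored
    by normalized cross-correlation (a simple illustrative criterion;
    real pipelines would register the images first)."""
    t = (target - target.mean()) / target.std()
    scores = []
    for tpl in templates:
        s = (tpl - tpl.mean()) / tpl.std()
        # Mean of the product of z-scored images = correlation in [-1, 1].
        scores.append(float((t * s).mean()))
    best = int(np.argmax(scores))
    return best, scores[best]
```

    The selected template's labels would then be propagated to the target via deformable registration, which is where the choice of template drives segmentation accuracy.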

    A Survey on Graph Kernels

    Graph kernels have become an established and widely used technique for solving classification tasks on graphs. This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years. We describe and categorize graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation, and their applicability to problems in practice. In an extensive experimental evaluation, we study the classification accuracy of a large suite of graph kernels on established benchmarks as well as new datasets. We compare the performance of popular kernels with several baseline methods and study the effect of applying a Gaussian RBF kernel to the metric induced by a graph kernel. In doing so, we find that simple baselines become competitive after this transformation on some datasets. Moreover, we study the extent to which existing graph kernels agree in their predictions (and prediction errors) and obtain a data-driven categorization of kernels as a result. Finally, based on our experimental results, we derive a practitioner's guide to kernel-based graph classification.
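    The transformation the survey evaluates — applying a Gaussian RBF kernel to the metric induced by a graph kernel — has a compact closed form. A graph kernel k induces the distance d(G, G')² = k(G, G) + k(G', G') − 2 k(G, G'), and the RBF kernel is then exp(−γ d²). A minimal sketch operating on a precomputed Gram matrix (the function name is an assumption):

```python
import numpy as np

def rbf_from_gram(K, gamma=1.0):
    """Turn a precomputed graph-kernel Gram matrix K into a Gaussian RBF
    kernel on the metric the graph kernel induces:
        d(G, G')^2 = k(G, G) + k(G', G') - 2 k(G, G')
        K_rbf     = exp(-gamma * d^2)."""
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2 * K
    # Clip tiny negatives caused by floating-point round-off.
    return np.exp(-gamma * np.maximum(d2, 0.0))
```

    When K comes from an explicit feature map (K = XXᵀ), this reduces exactly to the ordinary RBF kernel on those feature vectors, which is why the transformation can make simple baselines competitive.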

    Embodiment and the Self: using Virtual Reality and Full Body Illusion to change bodily self-representation, perception and behaviour

    Virtual Reality (VR) is an important tool for researchers in many different fields, from cognitive neuroscience to social psychology. The present work explores the use of VR, and in particular of immersive virtual reality (IVR), in the study of some key aspects of our bodily self and of body-related behaviour and perception. The first part of this work discusses the combined use of IVR and the full body illusion (FBI) in the study of body image distortion (BID) in anorexia nervosa (AN). The first chapter serves as a general introduction to AN and its most prominent clinical characteristics, introduces key concepts such as the malleability of the bodily self through multisensory bodily illusions, and briefly reviews studies that have applied both IVR and embodiment illusions to manipulate participants' body representation. The second chapter presents a study in which we used the embodiment illusion with avatars of different sizes to characterize and reduce both the perceptual (body overestimation) and cognitive-emotional (body dissatisfaction) components of BID in AN. For this study we built personalized avatars for each participant (healthy controls (HC) and AN patients) and applied synchronous and asynchronous interpersonal multisensory stimulation (IMS) to three different virtual bodies (the perceived one, a 15% fatter one, and a 15% thinner one). The different components of BID were measured by asking participants to choose the body that best resembled their real and ideal body before and after the embodiment illusion was induced. The results showed higher body dissatisfaction in AN patients, who also reported stronger negative emotions after being exposed to the largest avatar. However, the embodiment procedure did not affect BID in AN patients. Based on these results, in the study presented in the third chapter we shifted our focus from somatorepresentation, i.e.
the explicit representation of the body, which comprises both the cognitive and emotional components of body image, to somatoperception, i.e. the implicit representation of the body, which comprises both body perception and body schema. In this study we applied an FBI over an underweight (BMI = 15) and a normal-weight (BMI = 19) avatar and measured the effect of the embodiment illusion on participants' (AN and HC) body perception and body schema estimates. To measure body perception, we asked participants to estimate the width of their hips while their vision was blocked, whereas for the body schema estimate participants had to judge the minimum door aperture width that would allow them to pass through it inside an IVR scenario. The results showed that AN patients overestimated in both the body perception and body schema tasks. Furthermore, in AN patients the body schema estimates changed according to the size of the embodied avatar, indicating higher bodily self-plasticity compared to HC. The fourth chapter reviews the results of the two aforementioned studies and briefly discusses possible future directions. Finally, the last two chapters of the thesis present two research projects that will respectively use IVR and the embodiment illusion to study individual dishonest behaviour in digital interactions (chapter 5) and to modulate acute and chronic pain (chapter 6). As the COVID-19 pandemic deeply affected the work on both of these studies, these two final chapters present only a general introduction and the methods/technical implementation for each research project.

    Engineering data compendium. Human perception and performance. User's guide

    The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability for systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.
