
    Reverse Engineering of Biological Systems

    A gene regulatory network (GRN) consists of a set of genes and the regulatory relationships between them. As outputs of the GRN, gene expression data contain important information that can be used to reconstruct the GRN to a certain degree. However, reverse engineering GRNs from gene expression data is a challenging problem in systems biology. Conventional methods fail to infer GRNs from gene expression data because the number of observations is small relative to the large number of genes. The inherent noise in the data keeps the inference accuracy relatively low, and the combinatorial explosion inherent in the problem makes the inference task extremely difficult. This study aims at reconstructing GRNs from time-course gene expression data based on GRN models, using system identification and parameter estimation methods. The main content consists of three parts: (1) a review of methods for reverse engineering of GRNs, (2) reverse engineering of GRNs based on linear models, and (3) reverse engineering of GRNs based on a nonlinear model, specifically S-systems. In the first part, after the necessary background and the challenges of the problem are introduced, various methods for the inference of GRNs are comprehensively reviewed from two aspects: models and inference algorithms. The advantages and disadvantages of each method are discussed. The second part focuses on inferring GRNs from time-course gene expression data based on linear models. First, the statistical properties of two sparse penalties, adaptive LASSO and SCAD, combined with an autoregressive model are studied. It is shown that the proposed methods using these two penalties can asymptotically reconstruct the underlying networks. This provides a solid theoretical foundation for these methods and their extensions. Second, the integration of multiple datasets should improve the accuracy of GRN inference.
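As a rough illustration of this linear-model setting, the sparse autoregressive formulation can be sketched as follows. This is a plain LASSO solved by proximal gradient descent (ISTA), not the adaptive LASSO or SCAD estimators studied in the thesis, and all parameter values are illustrative:

```python
import numpy as np

def infer_grn_lasso(X, lam=0.1, n_iter=3000):
    """Sparse linear AR(1) GRN inference via ISTA (proximal gradient).

    X: (T, G) time-course expression matrix; model x[t+1] ~ A x[t].
    Returns the (G, G) sparse interaction matrix A; nonzero A[i, j]
    is read as "gene j regulates gene i".
    """
    Xt, Xt1 = X[:-1], X[1:]               # predictors and targets
    G = X.shape[1]
    A = np.zeros((G, G))
    L = np.linalg.norm(Xt, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = (Xt @ A.T - Xt1).T @ Xt    # gradient of 0.5 * ||Xt1 - Xt A^T||^2
        A_new = A - grad / L
        # soft-thresholding = proximal step for the L1 penalty
        A = np.sign(A_new) * np.maximum(np.abs(A_new) - lam / L, 0.0)
    return A
```

On simulated data from a known sparse network, the estimate recovers the nonzero entries while shrinking spurious ones toward zero.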
A novel method, Huber group LASSO, is developed to infer GRNs from multiple time-course datasets; it is also robust to the large noise and outliers that the data may contain. An efficient algorithm is developed and its convergence analysis is provided. The third part can be further divided into two phases: estimating the parameters of S-systems when the system structure is known, and inferring S-systems without knowing the system structure. Two methods, alternating weighted least squares (AWLS) and auxiliary function guided coordinate descent (AFGCD), have been developed to estimate the parameters of S-systems from time-course data. AWLS takes advantage of the special structure of S-systems and significantly outperforms an existing method, alternating regression (AR). AFGCD uses auxiliary function and coordinate descent techniques to obtain an efficient iteration formula, and its convergence is theoretically guaranteed. For the case where the system structure is unknown, again exploiting the special structure of the S-system model, a novel method, the pruning separable parameter estimation algorithm (PSPEA), is developed to locally infer S-systems. PSPEA is then combined with a continuous genetic algorithm (CGA) to form a hybrid algorithm that can globally reconstruct S-systems.
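The "special structure" that AR- and AWLS-style methods exploit is that each S-system equation, dx_i/dt = α_i ∏_j x_j^{g_ij} − β_i ∏_j x_j^{h_ij}, becomes linear in the logarithm of each power-law term. A minimal sketch of one half-step of such an alternating scheme, with a hypothetical helper name and the degradation term held fixed (not the actual AWLS/AFGCD algorithms):

```python
import numpy as np

def fit_production_term(X, dXdt, degradation):
    """Estimate alpha_i and g_ij of one S-system equation
    dx_i/dt = alpha_i * prod_j x_j**g_ij - degradation(t),
    assuming the degradation term is known or fixed from the previous
    half-step of an alternating scheme.

    X: (T, G) positive state samples; dXdt: (T,) slopes for gene i;
    degradation: (T,) values of the fixed degradation term.
    """
    y = np.log(dXdt + degradation)        # log of the production term
    # log-linear design: intercept column for log(alpha), log-states for g
    Phi = np.hstack([np.ones((X.shape[0], 1)), np.log(X)])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    alpha = np.exp(theta[0])
    g = theta[1:]
    return alpha, g
```

A full alternating method would then fix this production term and re-estimate the degradation term in the same way, iterating until convergence.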

    Practical Saccade Prediction for Head-Mounted Displays: Towards a Comprehensive Model

    Eye-tracking technology is an integral component of new display devices such as virtual and augmented reality headsets. Applications of gaze information range from new interaction techniques exploiting eye patterns to gaze-contingent digital content creation. However, system latency is still a significant issue in many of these applications because it breaks the synchronization between the current and measured gaze positions. Consequently, it may lead to unwanted visual artifacts and degradation of user experience. In this work, we focus on foveated rendering applications where the quality of an image is reduced towards the periphery for computational savings. In foveated rendering, the presence of latency leads to delayed updates to the rendered frame, making the quality degradation visible to the user. To address this issue and to combat system latency, recent work proposes to use saccade landing position prediction to extrapolate the gaze information from delayed eye-tracking samples. While the benefits of such a strategy have already been demonstrated, the solutions range from simple and efficient ones, which make several assumptions about the saccadic eye movements, to more complex and costly ones, which use machine learning techniques. Yet, it is unclear to what extent the prediction can benefit from accounting for additional factors. This paper presents a series of experiments investigating the importance of different factors for saccade prediction in common virtual and augmented reality applications. In particular, we investigate the effects of saccade orientation in 3D space and smooth pursuit eye movement (SPEM) and how their influence compares to the variability across users. We also present a simple yet efficient correction method that adapts the existing saccade prediction methods to handle these factors without performing extensive data collection.
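A heavily simplified example of the "simple and efficient" class of predictors mentioned above: estimate the landing position by inverting a main-sequence relationship between saccade amplitude and peak velocity. The constants v_max and a0 are illustrative placeholders, not fitted values from this paper:

```python
import numpy as np

def predict_landing(positions, dt, v_max=500.0, a0=10.0):
    """Predict the saccade landing position from partial gaze samples
    using a main-sequence model: V_peak = v_max * (1 - exp(-A / a0)).
    v_max (deg/s) and a0 (deg) are illustrative constants.

    positions: 1D array of gaze positions (deg) sampled during the
    saccade, assumed to already contain the velocity peak.
    """
    v = np.gradient(positions, dt)                   # finite-difference velocity
    v_peak = np.max(np.abs(v))
    v_peak = min(v_peak, 0.999 * v_max)              # keep the inverse defined
    amplitude = -a0 * np.log(1.0 - v_peak / v_max)   # invert the main sequence
    direction = np.sign(positions[-1] - positions[0])
    return positions[0] + direction * amplitude
```

A predictor like this can extrapolate past the eye-tracker latency once the velocity peak has been observed, at the cost of the assumptions the paper examines (e.g., ignoring saccade orientation and concurrent SPEM).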

    Eye velocity gain fields for visuo- motor coordinate transformations

    ’Gain-field-like’ tuning behavior is characterized by a modulation of the neuronal response depending on a certain variable, without changing the actual receptive field characteristics in relation to another variable. Eye position gain fields were first observed in area 7a of the posterior parietal cortex (PPC), where visually responsive neurons are modulated by ocular position. Analysis of artificial neural networks has shown that this type of tuning function might comprise the neuronal substrate for coordinate transformations. In this work, neuronal activity in the dorsal medial superior temporal area (MSTd) has been analyzed with a focus on its involvement in oculomotor control. MSTd is part of the extrastriate visual cortex and located in the PPC. Lesion studies suggested a participation of this cortical area in the control of eye movements. Inactivation of MSTd severely impairs the optokinetic response (OKR), a reflex-like eye movement that compensates for motion of the whole visual scene. Using a novel, information-theory-based approach for neuronal data analysis, we were able to identify those visual and eye-movement-related signals that were most correlated to the mean rate of spiking activity in MSTd neurons during optokinetic stimulation. In a majority of neurons, the firing rate was non-linearly related to a combination of retinal image velocity and eye velocity. The observed neuronal latency relative to these signals is in line with a system-level model of OKR in which an efference copy of the motor command signal is used to generate an internal estimate of the head-centered stimulus velocity. Tuning functions were obtained using a probabilistic approach. In most MSTd neurons these functions exhibited gain-field-like shapes, with eye velocity modulating the visual response in a multiplicative manner. Population analysis revealed a large diversity of tuning forms, including asymmetric and non-separable functions.
    The distribution of gain fields was almost identical to the predictions of a neural network model trained to perform the summation of image and eye velocity. These findings therefore strongly support the hypothesis that MSTd participates in the OKR control system by implementing the transformation from retinal image velocity to an estimate of stimulus velocity. In this sense, eye velocity gain fields constitute an intermediate step in transforming the eye-centered visual motion signal into a head-centered one. Another aspect addressed in this work was the comparison of the irregularity of MSTd spiking activity during the optokinetic response with the behavior during pure visual stimulation. The goal of this study was to evaluate potential neuronal mechanisms underlying the observed gain field behavior. We found that both inter- and intra-trial variability decreased with increasing retinal image velocity but increased with eye velocity. This observation argues against a symmetrical integration of driving and modulating inputs. Instead, we propose an architecture in which multiplicative gain modulation is achieved by a simultaneous increase of excitatory and inhibitory background synaptic input. A conductance-based single-compartment model neuron was able to reproduce realistic gain modulation and the observed stimulus dependence of neural variability at the same time. In summary, this work improves our knowledge of MSTd’s role in visuomotor transformation by analyzing functional and mechanistic aspects of eye velocity gain fields at the systems, network, and neuronal levels.
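A toy version of the multiplicative gain-field tuning described above, with a visual response scaled by an eye-velocity-dependent gain. All constants are illustrative, not fitted MSTd parameters:

```python
import numpy as np

def gain_field_rate(v_image, v_eye, r0=5.0, k=0.1, g=0.02):
    """Toy gain-field tuning: a visual response to retinal image
    velocity (deg/s) multiplicatively scaled by eye velocity.
    r0, k, g are illustrative constants.
    """
    visual = r0 + k * np.abs(v_image)     # driving visual tuning
    gain = 1.0 + g * v_eye                # eye-velocity gain factor
    return np.maximum(visual * gain, 0.0) # rectified firing rate (spikes/s)
```

The defining property of multiplicative modulation is that eye velocity rescales the whole visual tuning curve, so the ratio of rates at two eye velocities is independent of the image velocity.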

    Coding shape inside the shape

    The shape of an object lies at the interface between vision and cognition, yet the field of statistical shape analysis is far from developing a general mathematical model to represent shapes that would allow computational descriptions to express some simple tasks that are carried out robustly and effortlessly by humans. In this thesis, novel perspectives on shape characterization are presented in which the shape information is encoded inside the shape. The representation is independent of the dimensions of the shape, hence the model readily extends to any embedding dimension (i.e., 2D, 3D, 4D). A very desirable property is that the representation can fuse shape information with other types of information available inside the shape domain; an example would be reflectance information from an optical camera. Three novel fields are proposed within the scope of the thesis, namely ‘Scalable Fluctuating Distance Fields’, ‘Screened Poisson Hyper-Fields’, and ‘Local Convexity Encoding Fields’, which are smooth fields obtained by encoding desired shape information. ‘Scalable Fluctuating Distance Fields’, which encode parts explicitly, are presented as an interactive tool for tumor protrusion segmentation and as an underlying representation for tumor follow-up analysis. Secondly, ‘Screened Poisson Hyper-Fields’ provide a rich characterization of the shape that encodes global, local, interior, and boundary interactions. Low-dimensional embeddings of the hyper-fields are employed to address problems of shape partitioning, 2D shape classification, and 3D non-rigid shape retrieval. Moreover, the embeddings are used to translate the shape matching problem into an image matching problem, making an existing arsenal of image matching tools available that could not be used for shape matching before. Finally, the ‘Local Convexity Encoding Fields’ are formed by encoding information related to local symmetry and local convexity-concavity properties.
    The representation performance of the shape fields is demonstrated both qualitatively and quantitatively. The descriptors obtained using the regional encoding perspective outperform existing state-of-the-art shape retrieval methods on public benchmark databases, which is highly motivating for further study of regional-volumetric shape representations.
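The screened-Poisson idea can be illustrated with a minimal grid solver: a smooth field satisfying (Δ − λ)u = −1 inside a binary shape mask and u = 0 outside, computed by Jacobi iteration. This is only a sketch of the general construction, not the hyper-field formulation of the thesis:

```python
import numpy as np

def screened_poisson_field(mask, screening=0.05, n_iter=2000):
    """Solve (Laplacian - screening) u = -1 inside a binary shape mask,
    with u = 0 outside, by Jacobi iteration on a pixel grid.

    mask: 2D boolean array marking the shape interior.
    Returns a smooth field that peaks deep inside the shape and decays
    toward the boundary; the screening constant controls locality.
    """
    u = np.zeros(mask.shape)
    for _ in range(n_iter):
        # 4-neighbor sum via array shifts
        nb = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
              np.roll(u, 1, 1) + np.roll(u, -1, 1))
        # Jacobi update of nb - 4u - screening*u = -1 inside the shape
        u = np.where(mask, (nb + 1.0) / (4.0 + screening), 0.0)
    return u
```

Varying the screening constant sweeps the field between boundary-local and global behavior, which is the intuition behind stacking several such fields into a richer characterization.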

    A Neurocomputational Model of Smooth Pursuit Control to Interact with the Real World

    Whether we want to drive a car, play a ball game, or simply enjoy watching a flying bird, we need to track moving objects. This is possible via smooth pursuit eye movements (SPEMs), which maintain the image of the moving object on the fovea (i.e., a very small portion of the retina with high visual resolution). At first glance, performing an accurate SPEM may seem a trivial task for the brain. However, imperfect visual coding, processing and transmission delays, a wide variety of object sizes, and background textures make the task challenging. Furthermore, the existence of distractors in the environment complicates it even more, and it is no wonder that understanding SPEM has been a classic question of human motor control. To understand physiological systems of which SPEM is an example, the creation of models has played an influential role. Models make quantitative predictions that can be tested in experiments. Therefore, modelling SPEM is not only valuable for learning about the neurobiological mechanisms of smooth pursuit, or more generally gaze control, but also beneficial for gaining insight into other sensory-motor functions. In this thesis, I present a neurocomputational SPEM model based on the Neural Engineering Framework (NEF) that drives an eye-like robot. The model interacts with the real world in real time: it uses naturalistic images as input and controls the robot by means of spiking model neurons. This work can be a first step towards more thorough validation of abstract SPEM control models. It is also a small step toward neural models that drive robots to accomplish more intricate sensory-motor tasks such as reaching and grasping.
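The delay problem described above can be made concrete with a toy pursuit loop in which eye velocity is driven by retinal slip seen through a sensory latency. The gains, time constant, and delay below are illustrative, and this is a plain feedback sketch, not the NEF model of the thesis:

```python
import numpy as np

def simulate_pursuit(target_vel, dt=0.001, delay_s=0.08, gain=0.5, tau=0.05):
    """Toy smooth-pursuit loop: eye velocity (deg/s) is driven by
    retinal slip (target velocity minus eye velocity) observed through
    a sensorimotor delay. All parameter values are illustrative.
    """
    delay = int(delay_s / dt)
    eye = np.zeros_like(target_vel)
    for t in range(1, len(target_vel)):
        # slip is only available after the sensorimotor latency
        slip = target_vel[t - delay] - eye[t - delay] if t >= delay else 0.0
        eye[t] = eye[t - 1] + gain * slip * dt / tau
    return eye
```

Even this crude loop reproduces the open-loop latency (no response in the first ~80 ms) and eventual matching of target velocity; the delay is also why too-high gains produce oscillatory pursuit, motivating internal predictive signals such as efference copies.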