
    Kalirin12 interacts with dynamin

    Background: Guanine nucleotide exchange factors (GEFs) and their target Rho GTPases regulate cytoskeletal changes and membrane trafficking. Dynamin, a large force-generating GTPase, plays an essential role in membrane tubulation and fission in cells. Kalirin12, a neuronal RhoGEF, is found in growth cones early in development and in dendritic spines later in development.

    Results: The IgFn domain of Kalirin12, not present in other Kalirin isoforms, binds dynamin1 and dynamin2. An inactivating mutation in the GTPase domain of dynamin diminishes this interaction, and the isolated GTPase domain of dynamin retains the ability to bind Kalirin12. Co-immunoprecipitation demonstrates an interaction of Kalirin12 and dynamin2 in embryonic brain. Purified recombinant Kalirin-IgFn domain inhibits the ability of purified rat brain dynamin to oligomerize in response to the presence of liposomes containing phosphatidylinositol-4,5-bisphosphate. Consistent with this, expression of exogenous Kalirin12 or its IgFn domain in PC12 cells disrupts clathrin-mediated transferrin endocytosis. Similarly, expression of exogenous Kalirin12 disrupts transferrin endocytosis in cortical neurons. Expression of Kalirin7, a shorter isoform which lacks the IgFn domain, was previously shown to inhibit clathrin-mediated endocytosis; the GTPase domain of dynamin does not interact with Kalirin7.

    Conclusion: Kalirin12 may play a role in coordinating Rho GTPase-mediated changes in the actin cytoskeleton with dynamin-mediated changes in membrane trafficking.

    HMM based scenario generation for an investment optimisation problem

    This is the post-print version of the article. The official published version can be accessed from the link below - Copyright @ 2012 Springer-Verlag. Geometric Brownian motion (GBM) is a standard method for modelling financial time series. An important criticism of this method is that the parameters of the GBM are assumed to be constant; because of this, important features of the time series, such as extreme behaviour or volatility clustering, cannot be captured. We propose an approach in which the parameters of the GBM are able to switch between regimes; more precisely, they are governed by a hidden Markov chain. Thus, we model the financial time series via a hidden Markov model (HMM) with a GBM in each state. Using this approach, we generate scenarios for a financial portfolio optimisation problem in which the portfolio CVaR is minimised. Numerical results are presented. This study was funded by NET ACE at OptiRisk Systems.
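    The abstract does not give the model's fitted parameters, so the following is only a minimal sketch of scenario generation under a regime-switching GBM driven by a hidden Markov chain; the regime count, transition matrix, drifts and volatilities are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Minimal sketch: scenario generation from a regime-switching GBM.
# A hidden Markov chain picks the regime at each step; each regime has
# its own drift (mu) and volatility (sigma). All parameter values are
# illustrative placeholders, not the paper's fitted values.

rng = np.random.default_rng(0)

trans = np.array([[0.95, 0.05],    # calm -> calm / turbulent
                  [0.10, 0.90]])   # turbulent -> calm / turbulent
mu    = np.array([0.08, -0.05])    # annualised drift per regime
sigma = np.array([0.12,  0.35])    # annualised volatility per regime

def simulate_scenario(s0, n_steps, dt=1/252):
    """Simulate one price path under the HMM-GBM model."""
    state = 0
    prices = [s0]
    for _ in range(n_steps):
        state = rng.choice(2, p=trans[state])          # regime switch
        z = rng.standard_normal()
        drift = (mu[state] - 0.5 * sigma[state] ** 2) * dt
        diffusion = sigma[state] * np.sqrt(dt) * z
        prices.append(prices[-1] * np.exp(drift + diffusion))
    return np.array(prices)

# A bundle of such paths can serve as the scenario set for a downstream
# portfolio optimisation that minimises CVaR.
scenarios = np.stack([simulate_scenario(100.0, 252) for _ in range(1000)])
print(scenarios.shape)  # (1000, 253)
```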

    Automated user modeling for personalized digital libraries

    Digital libraries (DL) have become one of the most common ways of accessing any kind of digitized information. Because of this key role, users welcome any improvements to the services they receive from digital libraries. One way to improve these services is through personalization. Until now, the most common approach to personalization in digital libraries has been user-driven. Nevertheless, the design of efficient personalized services has to be carried out, at least in part, automatically. In this context, machine learning techniques automate the process of constructing user models. This paper proposes a new approach to constructing digital libraries that satisfy users' information needs: Adaptive Digital Libraries, libraries that automatically learn user preferences and goals and personalize their interaction using this information.

    Second-Order Belief Hidden Markov Models

    Hidden Markov Models (HMMs) are learning methods for pattern recognition, and probabilistic HMMs have been among the most widely used techniques based on the Bayesian model. First-order probabilistic HMMs were previously adapted to the theory of belief functions, with Bayesian probabilities replaced by mass functions. In this paper, we present a second-order Hidden Markov Model using belief functions. Previous work on belief HMMs has focused on first-order models; we extend it to the second-order model.
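    The abstract gives no formulas, so the following is a hedged sketch in standard HMM notation of what the second-order extension means for the transition structure; as the abstract describes, the belief-function variant would replace these conditional probabilities with conditional mass functions.

```latex
% Joint distribution of a hidden state sequence s_{1:T} and observations
% o_{1:T} under a second-order HMM: the transition term conditions on the
% two previous states rather than one.
\[
  P(s_{1:T}, o_{1:T}) \;=\; P(s_1)\,P(s_2 \mid s_1)
  \prod_{t=3}^{T} P(s_t \mid s_{t-1}, s_{t-2})
  \prod_{t=1}^{T} P(o_t \mid s_t)
\]
% In the belief-function setting, P(s_t | s_{t-1}, s_{t-2}) is replaced by a
% conditional mass function m(s_t | s_{t-1}, s_{t-2}) defined over subsets of
% the state space.
```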

    Inducing Probabilistic Grammars by Bayesian Model Merging

    We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models ('Occam's Razor'). The general scheme is illustrated using three types of probabilistic grammars: hidden Markov models, class-based n-grams, and stochastic context-free grammars. Comment: To appear in Grammatical Inference and Applications, Second International Colloquium on Grammatical Inference; Springer Verlag, 1994. 13 pages.
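    As a rough illustration of the incorporate-then-merge scheme described above (specialised to HMM states), the sketch below builds a maximally specific model from the samples and then greedily merges pairs of states while a score improves. The scoring function is only a stand-in for the Bayesian posterior (a simplicity prior plus a crude proxy for the data likelihood); a faithful implementation would recompute emission and transition statistics after every merge.

```python
from itertools import combinations

def incorporate(samples):
    """Build an initial model with one dedicated state per sample position."""
    # A 'model' is a list of states; each state is a set of
    # (sample_id, position) pairs that it accounts for.
    model = []
    for i, sample in enumerate(samples):
        for pos, _symbol in enumerate(sample):
            model.append({(i, pos)})
    return model

def posterior_score(model, samples):
    """Placeholder for log P(model) + log P(samples | model).

    Rewards smaller models (an Occam-style prior) and penalises merging
    states that account for different symbols (a crude stand-in for the
    likelihood term)."""
    size_penalty = -len(model)
    impurity = 0
    for state in model:
        symbols = {samples[i][pos] for i, pos in state}
        impurity -= 5 * (len(symbols) - 1)
    return size_penalty + impurity

def greedy_merge(model, samples):
    """Repeatedly merge the pair of states that most improves the score."""
    best = posterior_score(model, samples)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(model)), 2):
            candidate = [s for k, s in enumerate(model) if k not in (i, j)]
            candidate.append(model[i] | model[j])
            score = posterior_score(candidate, samples)
            if score > best:
                model, best, improved = candidate, score, True
                break
    return model

samples = ["abc", "abd", "ab"]
model = greedy_merge(incorporate(samples), samples)
print(len(model))  # collapses to one state per distinct symbol here: 4
```

    With this toy score the merge loop only collapses states that account for the same symbol; the point of the sketch is the control flow (incorporate, then merge under a posterior-style trade-off), not the scoring itself.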

    Reductions of Hidden Information Sources

    In all but special circumstances, measurements of time-dependent processes reflect internal structures and correlations only indirectly. Building predictive models of such hidden information sources requires discovering, in some way, the internal states and mechanisms. Unfortunately, there are often many possible models that are observationally equivalent. Here we show that the situation is not as arbitrary as one would think. We show that generators of hidden stochastic processes can be reduced to a minimal form, and we compare this reduced representation to that provided by computational mechanics, the epsilon-machine. On the way to developing deeper, measure-theoretic foundations for the latter, we introduce a new two-step reduction process. The first step (internal-event reduction) produces the smallest observationally equivalent sigma-algebra, and the second (internal-state reduction) removes sigma-algebra components that are redundant for optimal prediction. For several classes of stochastic dynamical systems these reductions produce representations that are equivalent to epsilon-machines. Comment: 12 pages, 4 figures; 30 citations; Updates at http://www.santafe.edu/~cm

    A comparative review of dynamic neural networks and hidden Markov model methods for mobile on-device speech recognition

    Adopting high-accuracy speech recognition algorithms without an effective evaluation of their impact on the target computational resource is impractical for mobile and embedded systems. In this paper, techniques are adopted to minimise the computational resource required for an effective mobile-based speech recognition system. A Dynamic Multi-Layer Perceptron speech recognition technique, capable of running in real time on a state-of-the-art mobile device, is introduced. Although a conventional hidden Markov model applied to the same dataset slightly outperformed our approach, its processing time is much higher. The Dynamic Multi-Layer Perceptron presented here achieves an accuracy of 96.94% and runs significantly faster than comparable techniques.

    Computational identification of adaptive mutants using the VERT system

    Background: Evolutionary dynamics of microbial organisms can now be visualized using the Visualizing Evolution in Real Time (VERT) system, in which several isogenic strains expressing different fluorescent proteins compete during adaptive evolution and are tracked using fluorescent cell sorting to construct a population history over time. Mutations conferring enhanced growth rates can be detected by observing changes in the fluorescent population proportions.

    Results: Using data obtained from several VERT experiments, we construct a hidden Markov-derived model to detect these adaptive events in VERT experiments without external intervention beyond initial training. Analysis of annotated data revealed that the model achieves consensus with human annotation for 85-93% of the data points when detecting adaptive events. A method to determine the optimal time point to isolate adaptive mutants is also introduced.

    Conclusions: The developed model offers a new way to monitor adaptive evolution experiments without the need for external intervention, thereby simplifying adaptive evolution efforts relying on population tracking. Future efforts to construct a fully automated system to isolate adaptive mutants may find the algorithm a useful tool.
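    The paper's actual model is not spelled out in the abstract, so the sketch below only illustrates the general idea of HMM-style detection of adaptive events: the step-to-step change in one subpopulation's proportion is modelled as Gaussian under hypothetical "neutral" and "adaptive" hidden states, and the state path is decoded with the Viterbi algorithm. The two-state structure and all parameter values are assumptions made for illustration.

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    """Log density of a univariate Gaussian (vectorised over states)."""
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean)**2 / (2 * std**2)

def viterbi(deltas, means, stds, log_trans, log_start):
    """Most likely hidden-state path for a sequence of proportion changes."""
    n_states, T = len(means), len(deltas)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + gaussian_logpdf(deltas[0], means, stds)
    for t in range(1, T):
        for j in range(n_states):
            cand = score[t - 1] + log_trans[:, j]
            back[t, j] = np.argmax(cand)
            score[t, j] = cand[back[t, j]] + gaussian_logpdf(deltas[t], means[j], stds[j])
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Illustrative parameters: state 0 = neutral drift, state 1 = adaptive sweep.
means = np.array([0.0, 0.03])        # mean change in proportion per sorting round
stds = np.array([0.01, 0.015])
log_trans = np.log(np.array([[0.95, 0.05],
                             [0.20, 0.80]]))
log_start = np.log(np.array([0.9, 0.1]))

# Hypothetical proportions of one fluorescent subpopulation over time.
proportions = np.array([0.33, 0.34, 0.33, 0.35, 0.40, 0.46, 0.51, 0.55])
states = viterbi(np.diff(proportions), means, stds, log_trans, log_start)
print(states)  # 1s flag time points consistent with an adaptive event
```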

    Complex sequencing rules of birdsong can be explained by simple hidden Markov processes

    Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanisms for generating complex sequential behaviors. To relate findings from birdsong studies to other sequential behaviors, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs; however, these properties have not yet been fully addressed. In this study, we investigate the statistical properties of the complex song of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable sequences, we first show that there are significant higher-order context dependencies in Bengalese finch songs, that is, which syllable appears next depends on more than one preceding syllable. This property is shared with other complex sequential behaviors. We then analyze acoustic features of the song and show that these higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to a hidden Markov model (HMM), a well-known statistical model with a wide range of applications in time-series modeling. Song annotation with these first-order hidden-state models agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed that of the zeroth-order model (a Gaussian mixture model, GMM), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex sequences with higher-order dependencies.
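    To illustrate the central point that redundant hidden states let a purely first-order model express higher-order output dependencies, the sketch below defines a toy first-order HMM in which two hidden states emit the same syllable 'b' but transition differently; the syllable labels and transition probabilities are invented for illustration and are not the Bengalese finch statistics from the paper.

```python
import numpy as np

# Two hidden states, "b_after_a" and "b_after_x", both emit syllable 'b',
# but they transition to different successors, so which syllable follows
# 'b' depends on what preceded it: a -> b -> c, whereas x -> b -> d.

rng = np.random.default_rng(1)

emission = {"a": "a", "x": "x", "b_after_a": "b", "b_after_x": "b",
            "c": "c", "d": "d"}
trans = {
    "a": {"b_after_a": 1.0},
    "x": {"b_after_x": 1.0},
    "b_after_a": {"c": 1.0},
    "b_after_x": {"d": 1.0},
    "c": {"a": 0.5, "x": 0.5},
    "d": {"a": 0.5, "x": 0.5},
}

def sample_song(length, start="a"):
    """Emit a syllable string from the first-order hidden-state chain."""
    state, song = start, []
    for _ in range(length):
        song.append(emission[state])
        successors, probs = zip(*trans[state].items())
        state = str(rng.choice(successors, p=probs))
    return "".join(song)

print(sample_song(20))
# In the emitted string, 'b' is always followed by 'c' when it came after
# 'a' and by 'd' when it came after 'x': a second-order rule on the output
# produced by purely first-order hidden dynamics with redundant states.
```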