
    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare different approaches towards learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (in the past and during the first six months of the project), and (2) to analyze their suitability to be used as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. with a clear goal where the movement stops) and modules for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics, such as the type of dynamical behavior, the learning algorithm, generalization properties, and stability analysis, are then discussed for each approach. We then compare the approaches along these characteristics and discuss their suitability for the AMARSi project.
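A generic illustration of the two module types (not any specific AMARSi approach): a discrete module can be sketched as a point attractor that settles at a goal `g`, and a rhythmic module as a limit-cycle oscillator (here a Hopf oscillator). All parameter values below are illustrative.

```python
import numpy as np

# Discrete module: point attractor xdot = alpha * (g - x), Euler-integrated.
dt, alpha, g = 0.01, 4.0, 1.0
x = 0.0
for _ in range(2000):
    x += dt * alpha * (g - x)          # converges to the goal g

# Rhythmic module: Hopf oscillator with a limit cycle of radius sqrt(mu).
dt2, mu, omega = 0.001, 1.0, 2 * np.pi
u, v = 0.1, 0.0                        # start near the unstable equilibrium
for _ in range(20000):
    r2 = u * u + v * v
    du = (mu - r2) * u - omega * v
    dv = (mu - r2) * v + omega * u
    u, v = u + dt2 * du, v + dt2 * dv
radius = np.hypot(u, v)                # approaches the limit-cycle radius 1
```

The point attractor has a clear stopping goal, while the oscillator exhibits periodicity, matching the discrete/rhythmic split described above.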

    Stability of Systems with Stochastic Delays and Applications to Genetic Regulatory Networks

    The dynamics of systems with stochastically varying time delays are investigated in this paper. It is shown that the mean dynamics can be used to derive necessary conditions for the stability of equilibria of the stochastic system. Moreover, the second-moment dynamics can be used to derive sufficient conditions for almost sure stability of equilibria. The results are summarized using stability charts that are obtained via semidiscretization. The theoretical methods are applied to simple gene regulatory networks, where it is demonstrated that stochasticity in the delay can improve the stability of steady protein production.
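A minimal Monte Carlo sketch of such a system (toy scalar dynamics with hypothetical gains, not the paper's model): the feedback delay is redrawn at random at every step, and with the gains below a sample path still decays to the zero equilibrium.

```python
import numpy as np

# x[k+1] = a*x[k] + b*x[k - tau_k], with tau_k drawn i.i.d. from {1, 2}.
a, b = 0.5, 0.3                        # illustrative gains (|a| + |b| < 1)
delays = np.array([1, 2])
probs = np.array([0.5, 0.5])

def simulate(steps=200, x0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = [x0] * (delays.max() + 1)      # constant initial history
    for k in range(len(x) - 1, len(x) - 1 + steps):
        tau = rng.choice(delays, p=probs)
        x.append(a * x[k] + b * x[k - tau])
    return np.array(x)

traj = simulate()                      # sample path decays toward zero
```

Averaging many such sample paths gives the mean dynamics; squaring and averaging gives the second-moment dynamics used for the almost-sure stability conditions.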

    Query-driven learning for predictive analytics of data subspace cardinality

    Fundamental to many predictive analytics tasks is the ability to estimate the cardinality (number of data items) of multi-dimensional data subspaces, defined by query selections over datasets. This is crucial for data analysts dealing with, e.g., interactive data subspace explorations, data subspace visualizations, and query processing optimization. However, in many modern data systems, predictive analytics may be (i) too costly money-wise, e.g., in clouds, (ii) unreliable, e.g., in modern Big Data query engines, where accurate statistics are difficult to obtain/maintain, or (iii) infeasible, e.g., for privacy reasons. We contribute a novel, query-driven, function estimation model of analyst-defined data subspace cardinality. The proposed estimation model is highly accurate and accommodates the well-known selection query types: multi-dimensional range queries and distance-nearest-neighbor (radius) queries. Our function estimation model: (i) quantizes the vectorial query space by learning the analysts’ access patterns over a data space, (ii) associates query vectors with the cardinalities of their corresponding analyst-defined data subspaces, (iii) abstracts and employs query vectorial similarity to predict the cardinality of an unseen/unexplored data subspace, and (iv) identifies and adapts to possible changes of the query subspaces based on the theory of optimal stopping. The proposed model is decentralized, facilitating the scaling-out of such predictive analytics queries. The research significance of the model lies in that (i) it is an attractive solution when data-driven statistical techniques are undesirable or infeasible, (ii) it offers a scale-out, decentralized training solution, (iii) it is applicable to different selection query types, and (iv) it offers performance superior to that of data-driven approaches.
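A minimal sketch of the query-driven idea (illustrative, not the paper's full model): quantize the query space with k-means prototypes learned from past queries, attach to each prototype the mean cardinality of the queries it absorbs, and predict an unseen query's cardinality from its nearest prototype. The data, the linear ground-truth cardinality function, and all parameters below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Past analyst queries (2-D query vectors) and the cardinality each returned.
queries = rng.uniform(0, 10, size=(500, 2))
cards = 10 * queries[:, 0] + 5 * queries[:, 1]   # synthetic ground truth

# Quantize the query space with Lloyd's k-means.
K = 25
protos = queries[rng.choice(len(queries), K, replace=False)].copy()
for _ in range(20):
    assign = np.argmin(((queries[:, None] - protos[None]) ** 2).sum(-1), axis=1)
    for j in range(K):
        if np.any(assign == j):
            protos[j] = queries[assign == j].mean(axis=0)

# Associate each prototype with the mean cardinality of its queries.
assign = np.argmin(((queries[:, None] - protos[None]) ** 2).sum(-1), axis=1)
proto_card = np.array([cards[assign == j].mean() if np.any(assign == j) else 0.0
                       for j in range(K)])

def predict(q):
    """Cardinality estimate from the nearest prototype (no access to the data)."""
    return proto_card[np.argmin(((protos - q) ** 2).sum(-1))]

est = predict(np.array([5.0, 5.0]))   # ground truth is 75 in this synthetic setup
```

Note that `predict` never touches the underlying dataset, which is exactly why this family of estimators remains applicable when data-driven statistics are unavailable.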

    Recurrent Neural Networks-Based Collision-Free Motion Planning for Dual Manipulators Under Multiple Constraints

    Dual robotic manipulators are robotic systems developed to imitate human arms, and they show great potential for performing complex tasks. Real-time collision-free motion planning is still a challenging problem when controlling a dual manipulator because of the overlapping workspace. In this paper, a novel planning strategy under the physical constraints of dual manipulators using dynamic neural networks is proposed, which satisfies both collision avoidance and trajectory tracking. In particular, the collision-avoidance problem is first formulated as a set of inequality constraints, while trajectory tracking is transformed into an equality constraint by introducing negative feedback in an outer loop. The planning problem then becomes a Quadratic Programming (QP) problem once the redundancy and the bounds on the joint angles and velocities of the system are taken into account. The QP is solved using a provably convergent recurrent neural network that does not require computing the pseudo-inverse of the Jacobian. Numerical experiments on an 8-DoF modular robot and a 14-DoF Baxter robot demonstrate the superiority of the proposed strategy.
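To make the idea concrete, here is a minimal sketch (illustrative, not the paper's network or robots): joint velocities for a redundant arm are obtained by running projected-gradient "recurrent" dynamics on the tracking QP with box bounds on the joint velocities, so the pseudo-inverse of the Jacobian is never formed. The Jacobian, target velocity, and bounds are hypothetical.

```python
import numpy as np

# QP: minimize 0.5*||J @ qdot - v||^2  subject to  |qdot_i| <= qmax.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])       # hypothetical 2x3 Jacobian (redundant arm)
v = np.array([0.4, -0.2])             # desired end-effector velocity
qmax = 1.0

qdot = np.zeros(3)
eta = 0.1                             # step size of the recurrent dynamics
for _ in range(2000):
    grad = J.T @ (J @ qdot - v)       # gradient of the tracking cost
    qdot = np.clip(qdot - eta * grad, -qmax, qmax)  # project onto the box

residual = np.linalg.norm(J @ qdot - v)   # tracking error after convergence
```

Collision-avoidance terms would enter the same QP as additional linear inequalities; only the velocity box is kept here for brevity.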

    Steepest descent as Linear Quadratic Regulation

    Machine learning entails fitting a model to given observations, and recent advances in the field, particularly in deep learning, have made it omnipresent in our lives. Fitting a model usually requires minimizing a given objective. In deep learning, first-order methods such as gradient descent have become the default optimization tool. Second-order methods, on the other hand, have never seen widespread use in deep learning; yet they hold many promises and remain a very active field of research. An important perspective on both is steepest descent, which encompasses first- and second-order approaches within a single framework. In this thesis, we establish an explicit connection between steepest descent and optimal control, a field that seeks to optimize sequential decision-making processes. Core to optimal control is the family of problems known as Linear Quadratic Regulation, which are well studied and for which optimal solutions are known. More specifically, we show that performing one iteration of steepest descent is equivalent to solving a Linear Quadratic Regulator (LQR). This perspective gives a convenient, unified framework for deploying a wide range of steepest descent algorithms, such as gradient descent and natural gradient descent, though not limited to these.
    This framework also extends to problems with an infinite horizon, such as deep equilibrium models. Doing so reveals that retrieving the gradient via implicit differentiation is equivalent to recovering it via Riccati’s solution to the LQR associated with gradient descent. Finally, incorporating curvature information into steepest descent usually requires a matrix inversion. However, casting a steepest descent step as an LQR also suggests a trick for sidestepping this inversion by leveraging a truncated Neumann series approximation. Empirical observations suggest that this approximation also helps to stabilize the training process by acting as an adaptive damping parameter.
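The inversion-sidestepping trick can be sketched as follows (generic curvature matrix and gradient, not the thesis's models): for a step size alpha with the spectral radius of (I − alpha·H) below one, H⁻čg = alpha·∑ₖ (I − alpha·H)ᔏ g, so truncating the series yields a damped approximation of the curvature-corrected step without solving a linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)            # illustrative SPD curvature matrix
g = rng.standard_normal(5)             # illustrative gradient

alpha = 1.0 / np.linalg.norm(H, 2)     # makes the Neumann series converge
M = np.eye(5) - alpha * H

step = np.zeros(5)
term = alpha * g
for _ in range(50):                    # truncated Neumann series for H^{-1} g
    step += term
    term = M @ term

exact = np.linalg.solve(H, g)
rel_err = np.linalg.norm(step - exact) / np.linalg.norm(exact)
```

Truncating after only a few terms gives the damped, implicitly regularized step that the abstract's empirical observations allude to.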
