    Surfaces of minimal degree of tame representation type and mutations of Cohen-Macaulay modules

    We provide two examples of smooth projective surfaces of tame CM type by showing that any parameter space of isomorphism classes of indecomposable ACM bundles with fixed rank and determinant on a rational quartic scroll in projective 5-space is either a single point or a projective line. For surfaces of minimal degree and wild CM type, we classify rigid Ulrich bundles as Fibonacci extensions. For the rational normal scrolls S(2,3) and S(3,3), a complete classification of rigid ACM bundles is given in terms of the action of the braid group on three strands.
    Comment: This version is meant to amend two inaccurate statements appearing in the published paper.
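    For context, the braid group on three strands appearing in the last classification has the standard Artin presentation sketched below. The presentation itself is a standard fact; how it acts on rigid ACM bundles over S(2,3) and S(3,3) is specific to the paper and not reproduced here.

```latex
% Artin presentation of the braid group B_3 on three strands (standard fact);
% its action on rigid ACM bundles is constructed in the paper itself.
B_3 \;=\; \langle\, \sigma_1, \sigma_2 \mid \sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2 \,\rangle
```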

    Pressure anisotropy and small spatial scales induced by velocity shear

    Non-Maxwellian metaequilibria can exist in low-collisionality plasmas, as evidenced by satellite and laboratory measurements. By including the full pressure tensor dynamics in a fluid plasma model, we show that a sheared velocity field can provide an effective mechanism that makes an initially isotropic state anisotropic and agyrotropic. We discuss how the propagation of magneto-elastic waves can affect the pressure tensor anisotropization and its spatial filamentation, which are due to the action of both the magnetic field and the flow strain tensor. We support this analysis with a numerical integration of the nonlinear equations describing the pressure tensor evolution.
    Comment: 5 pages, 3 figures.
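    As a schematic reference, the second velocity moment of the Vlasov equation yields a pressure tensor evolution equation of the form below; the notation and the unspecified heat-flux closure are assumptions here, not necessarily the paper's exact formulation. The terms coupling the pressure tensor to the velocity gradient are those through which a sheared flow drives anisotropy, while the right-hand side is the gyrotropizing rotation induced by the magnetic field.

```latex
% Schematic pressure-tensor evolution (second Vlasov moment); q_{ijk} is the
% heat-flux tensor, whose closure is left unspecified. Sign conventions vary.
\partial_t \Pi_{ij}
  + \partial_k\!\left(u_k \Pi_{ij}\right)
  + \underbrace{\Pi_{ik}\,\partial_k u_j + \Pi_{jk}\,\partial_k u_i}_{\text{shear-driven anisotropization}}
  + \partial_k q_{ijk}
  = \frac{q}{m}\left(\epsilon_{ikl}\,\Pi_{kj} + \epsilon_{jkl}\,\Pi_{ik}\right) B_l
```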

    DOP: Deep Optimistic Planning with Approximate Value Function Evaluation

    Research on reinforcement learning has demonstrated promising results in a range of applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., multi-agent systems or hyper-redundant robots). To alleviate this problem, we present DOP, a deep model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) plan effective policies. Specifically, we exploit deep neural networks to learn Q-functions that are used to attack the curse of dimensionality during a Monte-Carlo tree search. Our algorithm, in fact, constructs upper confidence bounds on the learned value function to select actions optimistically. We implement and evaluate DOP on different scenarios: (1) a cooperative navigation problem, (2) a fetching task for a 7-DOF KUKA robot, and (3) a human-robot handover with a humanoid robot (both in simulation and real). The obtained results show the effectiveness of DOP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
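    A minimal sketch of the optimistic action-selection step the abstract describes: a learned Q-function supplies value estimates at a search node and a UCB-style bonus drives exploration. All names here (Node, select_action, q_fn, c_ucb) are illustrative assumptions, not the authors' code.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    state: object
    actions: list
    visit_count: dict = field(default_factory=dict)  # action -> visit count

def select_action(node, q_fn, c_ucb=1.0):
    """Optimistic selection: maximize learned Q(s, a) plus a UCB bonus."""
    total = sum(node.visit_count.get(a, 0) for a in node.actions) + 1
    def score(a):
        n = node.visit_count.get(a, 0)
        return q_fn(node.state, a) + c_ucb * math.sqrt(math.log(total) / (n + 1))
    return max(node.actions, key=score)

# Usage with a dummy Q-function; a trained network would replace the lambda.
node = Node(state=None, actions=["left", "right", "grasp"])
print(select_action(node, q_fn=lambda s, a: 0.0))
```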

    On-line Joint Limit Avoidance for Torque Controlled Robots by Joint Space Parametrization

    This paper proposes control laws ensuring the stabilization of a time-varying desired joint trajectory, as well as joint limit avoidance, in the case of fully-actuated manipulators. The key idea is to parametrize the feasible joint space in terms of exogenous states, so that controlling these states automatically enforces joint limit avoidance. One of the main outcomes of this paper is that position terms in the control laws are replaced by parametrized terms wherever joint limits must be avoided. Stability and convergence to the time-varying reference trajectories obtained with the proposed method are demonstrated in the sense of Lyapunov. The introduced control laws are verified by experiments on two degrees of freedom of the humanoid robot iCub.
    Comment: 8 pages, 4 figures. Submitted to the 2016 IEEE-RAS International Conference on Humanoid Robots.
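    One common way to parametrize a box-constrained joint space by unconstrained exogenous states is a smooth, invertible tanh map, sketched below. This is an illustration of the idea under that assumption; the paper's specific parametrization may differ.

```python
import numpy as np

def joints_from_exogenous(xi, q_min, q_max):
    """Map unconstrained exogenous states xi in R^n to joint positions
    strictly inside (q_min, q_max). Illustrative choice of parametrization."""
    mid = 0.5 * (q_max + q_min)
    half = 0.5 * (q_max - q_min)
    return mid + half * np.tanh(xi)

def exogenous_from_joints(q, q_min, q_max):
    """Inverse map: recover xi from a strictly feasible joint configuration."""
    mid = 0.5 * (q_max + q_min)
    half = 0.5 * (q_max - q_min)
    return np.arctanh((q - mid) / half)

# Controlling xi keeps q within its limits by construction:
q_min, q_max = np.array([-1.0, -2.0]), np.array([1.0, 2.0])
xi = np.array([10.0, -10.0])                    # arbitrarily large exogenous state
print(joints_from_exogenous(xi, q_min, q_max))  # still strictly inside the limits
```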

    Q-CP: Learning Action Values for Cooperative Planning

    Research on multi-robot systems has demonstrated promising results in a range of applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots or groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo tree search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among three robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
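    The cooperative setting differs from the single-robot case above mainly in that the search scores joint actions, one per robot. A minimal sketch under that assumption follows; select_joint_action, q_fn, and the door scenario encoding are illustrative, not the authors' code.

```python
import math
from itertools import product

def select_joint_action(state, per_robot_actions, q_fn, visits, c_ucb=1.0):
    """Score each joint action (one action per robot) by a learned Q-value
    plus a UCB exploration bonus, and return the optimistic maximizer."""
    joint_actions = list(product(*per_robot_actions))
    total = sum(visits.get(ja, 0) for ja in joint_actions) + 1
    def score(ja):
        bonus = c_ucb * math.sqrt(math.log(total) / (visits.get(ja, 0) + 1))
        return q_fn(state, ja) + bonus
    return max(joint_actions, key=score)

# Usage with a dummy Q-function: two robots coordinating at a door.
actions = [["wait", "enter"], ["wait", "enter"]]
print(select_joint_action("door", actions, lambda s, ja: 0.0, visits={}))
```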