A Reduced Form for Linear Differential Systems and its Application to Integrability of Hamiltonian Systems
Let $[A]:\ Y'=AY$ with $A\in\mathrm{Mat}(n,\bar{k})$ be a linear differential
system. We say that a matrix $R\in\mathrm{Mat}(n,\bar{k})$ is a {\em reduced
form} of $[A]$ if $R\in\mathfrak{g}(\bar{k})$ (where $\mathfrak{g}$ denotes the
Lie algebra of the differential Galois group of $[A]$) and there exists
$P\in GL_n(\bar{k})$ such that $R=P^{-1}(AP-P')$. Such a form is
often the sparsest possible attainable through gauge transformations without
introducing new transcendants. In this article, we discuss how to compute
reduced forms of some symplectic differential systems, arising as variational
equations of Hamiltonian systems. We use this to give an effective form of the
Morales-Ramis theorem on (non-)integrability of Hamiltonian systems.
Comment: 28 pages
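As a pointer for the reader, the gauge-transformation computation behind the definition above can be written out in standard notation (a sketch, not taken verbatim from the paper):

```latex
% Gauge transformation Y = PZ applied to the system Y' = AY:
\begin{aligned}
Y = PZ,\qquad Y' &= P'Z + PZ' = APZ \\
\Longrightarrow\quad Z' &= P^{-1}\!\left(AP - P'\right) Z =: RZ
\end{aligned}
% Hence R = P^{-1}(AP - P') is the matrix of the transformed system, and
% [A] admits a reduced form when some P in GL_n(\bar{k}) makes R land in
% \mathfrak{g}(\bar{k}).
```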
A Characterization of Reduced Forms of Linear Differential Systems
A differential system $[A]:\ Y'=AY$, with $A\in\mathrm{Mat}(n,\bar{k})$,
is said to be in reduced form if $A\in\mathfrak{g}(\bar{k})$, where $\mathfrak{g}$
is the Lie algebra of the differential Galois group $G$ of
$[A]$. In this article, we give a constructive criterion for a system to be in
reduced form. When $G$ is reductive and unimodular, the system $[A]$ is in
reduced form if and only if all of its invariants (rational solutions of
appropriate symmetric powers) have constant coefficients (instead of rational
functions). When $G$ is non-reductive, we give a similar characterization via
the semi-invariants of $G$. In the reductive case, we propose a decision
procedure for putting the system into reduced form which, in turn, gives a
constructive proof of the classical Kolchin-Kovacic reduction theorem.
Comment: To appear in: Journal of Pure and Applied Algebra
Trajectory Prediction with Event-Based Cameras for Robotics Applications
This thesis presents the study, analysis, and implementation of a framework to perform trajectory prediction using an event-based camera for robotics applications. Event-based perception represents a novel computation paradigm based on unconventional sensing technology that holds promise for data acquisition, transmission, and processing at very low latency and power consumption, crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. It captures only relevant spatio-temporal information - mostly driven by motion - at high rate, avoiding the inherent redundancy of static areas of the field of view. For these reasons, this device is a potential key tool for robots that must function in highly dynamic and/or rapidly changing scenarios, or where resource optimisation is fundamental, such as robots with on-board systems. Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or the end-point of a moving target allows a robot to plan appropriate actions and their timing in advance, and to interact with the target in many different ways. Moreover, prediction also helps compensate for the robot's internal delays in the perception-action chain, due for instance to limited sensors and/or actuators. The question I addressed in this work is whether event-based cameras are advantageous in trajectory prediction for robotics: in particular, whether the classical deep learning architectures used for this task can accommodate event-based data, working asynchronously, and which benefits they bring with respect to standard cameras.
The a priori hypothesis is that, since the sampling of the scene is driven by motion, such a device allows more meaningful information to be acquired, improving prediction accuracy while processing data only when needed - without information loss or redundant acquisition. To test the hypothesis, experiments are mostly carried out using the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, a value that prediction should compensate for in order to interact in real time with the perceived object.
The first part of this thesis covers the implementation of the event-based prediction framework, to answer the question of whether Long Short-Term Memory (LSTM) neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is a handover Human-Robot Interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline predicts both the spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods.
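As an illustration only - not the thesis's actual pipeline, and with layer sizes and weights invented here - a single step of the LSTM cell named above can be sketched in NumPy:

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One LSTM cell step: x is the input, (h, c) the hidden/cell state.

    W has shape (4*H, D+H), b has shape (4*H,); the four row blocks
    correspond to the input, forget, candidate, and output gates.
    """
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i = 1.0 / (1.0 + np.exp(-z[0*H:1*H]))   # input gate
    f = 1.0 / (1.0 + np.exp(-z[1*H:2*H]))   # forget gate
    g = np.tanh(z[2*H:3*H])                 # candidate cell update
    o = 1.0 / (1.0 + np.exp(-z[3*H:4*H]))   # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy use: feed a sequence of 2-D positions (e.g. object coordinates)
# and read the hidden state as features for a trajectory predictor.
rng = np.random.default_rng(0)
D, H = 2, 8
W = rng.normal(scale=0.1, size=(4 * H, D + H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(10):
    x = np.array([0.1 * t, 0.05 * t])  # synthetic trajectory sample
    h, c = lstm_step(x, h, c, W, b)
```

In the asynchronous setting, a step like this can be triggered per incoming event batch rather than at a fixed frame rate.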
Moreover, fast recovery from failure cases and adaptive prediction-horizon behavior are exhibited. Subsequently, I investigated how advantageous the event-based sampling approach is with respect to the classical fixed-rate approach. The test case is the trajectory prediction of a bouncing ball, implemented with the pipeline introduced previously. The two sampling methods are compared in terms of error at different working rates, showing how the spatial sampling of the event-based approach achieves lower error and also adapts the computational load dynamically, depending on the motion in the scene. Results from both works show that merging event-based data and Long Short-Term Memory networks is promising for spatio-temporal feature prediction in highly dynamic tasks, and they pave the way for further studies on the temporal aspect and for a wide range of applications, not only robotics-related. Ongoing work now focuses on the robot-control side: finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior.
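To make the sampling comparison concrete, the event-based principle - a pixel fires an event only when its log-intensity change exceeds a contrast threshold - can be sketched as follows; the threshold and signals are illustrative, not taken from the thesis:

```python
import math

def to_events(samples, threshold=0.2):
    """Emit (time, polarity) events whenever the log-intensity change
    since the last event exceeds the contrast threshold."""
    events = []
    ref = math.log(samples[0][1])
    for t, intensity in samples[1:]:
        delta = math.log(intensity) - ref
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            ref = math.log(intensity)
    return events

# A static signal produces no events, so no computation is spent on it;
# a changing signal produces events at a motion-dependent rate.
static = [(t, 1.0) for t in range(100)]
dynamic = [(t, 1.0 + 0.9 * abs(math.sin(0.1 * t))) for t in range(100)]
```

A fixed-rate camera, by contrast, would deliver the same 100 frames for both signals.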
Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have already been made thanks to a collaboration with a group from the University of Zurich, with which I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
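For reference, the classical discrete PID law that such a spiking controller emulates has the following form; the gains and the toy plant are invented for illustration:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    """One step of a discrete PID controller.

    state = (integral, previous_error); returns (command, new_state).
    """
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a toy first-order plant x' = u towards a setpoint.
setpoint, x = 1.0, 0.0
state = (0.0, 0.0)
for _ in range(1000):
    u, state = pid_step(setpoint - x, state)
    x += u * 0.01  # explicit Euler integration of the plant
```

A spiking implementation replaces the explicit arithmetic above with populations of neurons whose firing rates encode the error, integral, and derivative terms.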
Air Quality Simulation with Strongly Anisotropic Dispersion using Adaptive Finite Elements
This document simulates the air-quality problem around a point emitter under calm atmospheric conditions using adaptive Finite Elements. Specifically, it is applied to the case of La Oroya (Peru).
Solving the advection-diffusion-reaction problem with Finite Elements tends to produce oscillations in the solution. To reduce them and obtain the immission curves (which typically take values several orders of magnitude below the emitted concentration), stabilized schemes alone are not enough. For this reason, an adaptive process is proposed by which the spatial discretization is changed in the range
of interest of the solution.
The behavior of the solution is studied, and an error indicator specially designed for the point-emitter problem is proposed that delimits the zones where the oscillations occur. A remeshing scheme is used, based on imposing a maximum volume on the elements of certain regions.
This algorithm avoids recomputing at every time interval, since it is possible to increase the element density in a region in a single iteration
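As a schematic of the remeshing idea only - shown in 1-D with an invented error indicator, whereas the thesis imposes volume constraints on regions of a real mesh - indicator-driven element splitting can be sketched as:

```python
def refine(mesh, indicator, max_size):
    """Split every element that the indicator flags and whose size
    exceeds max_size; one pass can refine a whole region at once."""
    new_mesh = []
    for a, b in mesh:
        if indicator(a, b) and (b - a) > max_size:
            mid = 0.5 * (a + b)
            new_mesh.extend([(a, mid), (mid, b)])
        else:
            new_mesh.append((a, b))
    return new_mesh

# Toy indicator: refine only near a point emitter at x = 0.
emitter = 0.0
near_emitter = lambda a, b: a <= emitter <= b or min(abs(a), abs(b)) < 0.25
mesh = [(-1.0, -0.5), (-0.5, 0.0), (0.0, 0.5), (0.5, 1.0)]
mesh = refine(mesh, near_emitter, max_size=0.3)
```

Because the whole flagged region is split in one pass, the element density there increases in a single iteration, as described above.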
- …