
    A Comparative Evaluation of the Detection and Tracking Capability Between Novel Event-Based and Conventional Frame-Based Sensors

    Traditional frame-based technology continues to suffer from motion blur, low dynamic range, speed limitations and high data storage requirements. Event-based sensors offer a potential solution to these challenges. This research centers on a comparative assessment of frame-based and event-based object detection and tracking. A basic frame-based algorithm is used as a baseline against two different event-based algorithms: first, event-based pseudo-frames are passed through standard frame-based algorithms; second, target tracks are constructed directly from filtered events. The findings show there is significant value in pursuing the technology further.
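    The pseudo-frame route described above can be sketched as follows; the event tuple layout (x, y, timestamp, polarity) and the 10 ms accumulation window are illustrative assumptions, not details taken from the study.

```python
# Sketch: accumulating an asynchronous event stream into fixed-window
# "pseudo-frames" that a standard frame-based detector can consume.
# Event layout (x, y, timestamp_us, polarity) and the window length
# are assumed for illustration.

def events_to_pseudo_frames(events, width, height, window_us=10_000):
    """Accumulate events into signed-count pseudo-frames over fixed time windows."""
    frames = []
    frame = [[0] * width for _ in range(height)]
    window_end = None
    for x, y, t, pol in events:
        if window_end is None:
            window_end = t + window_us
        while t >= window_end:              # close the current window
            frames.append(frame)
            frame = [[0] * width for _ in range(height)]
            window_end += window_us
        frame[y][x] += 1 if pol else -1     # signed event count per pixel
    frames.append(frame)                    # flush the last partial window
    return frames

# A moving edge firing ON events along a diagonal, one event every 4 ms:
events = [(i, i, i * 4_000, 1) for i in range(8)]
frames = events_to_pseudo_frames(events, width=8, height=8)
```

    Each pseudo-frame then feeds an unmodified frame-based detection pipeline, which is what makes this route a drop-in comparison point.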

    SMASH: Data-driven Reconstruction of Physically Valid Collisions.

    Collision sequences are commonly used in games and entertainment to add drama and excitement. Authoring even two-body collisions in the real world can be difficult, as the timing and the object trajectories have to be correctly synchronized, and even when, after trial-and-error iterations, objects can actually be made to collide, the collisions are difficult to capture in 3D. Conversely, synthetically generating plausible collisions is difficult because it requires adjusting various collision parameters (e.g., object mass ratio, coefficient of restitution, etc.) as well as appropriate initial conditions. We present SMASH to directly ‘read off’ appropriate collision parameters simply based on input video recordings. Specifically, we describe how to use the laws of rigid body collision to regularize the problem of lifting 2D annotated poses to a 3D reconstruction of collision sequences. The reconstructed sequences can then be modified and combined to easily author novel and plausible collision sequences. We demonstrate the system on various complex collision sequences.
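    The rigid-body collision law used here as a physical prior can be illustrated in its simplest setting; this 1-D two-body resolution with coefficient of restitution e is a textbook sketch, not the paper's full 3-D formulation.

```python
# Sketch: for a 1-D two-body collision, conservation of momentum plus
# the coefficient of restitution e fully determine the outgoing
# velocities. Symbols (m1, m2, u1, u2, e) are standard textbook
# quantities, not SMASH's internal parameterization.

def resolve_collision(m1, u1, m2, u2, e):
    """Post-collision velocities from momentum conservation and restitution."""
    p = m1 * u1 + m2 * u2                       # total momentum (conserved)
    v1 = (p + m2 * e * (u2 - u1)) / (m1 + m2)
    v2 = (p + m1 * e * (u1 - u2)) / (m1 + m2)
    return v1, v2

# Equal masses, perfectly elastic (e = 1): the bodies exchange velocities.
v1, v2 = resolve_collision(1.0, 1.0, 1.0, 0.0, e=1.0)
```

    Constraints of this form tie the unknown 3D motion to only a few physical parameters, which is what makes the 2D-to-3D lifting problem well regularized.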

    Adaptive Robot Systems in Highly Dynamic Environments: A Table Tennis Robot

    Background: Robotic table tennis systems offer an ideal platform for pushing camera-based robotic manipulation systems to the limit. The unique challenge arises from the fast-paced play and the wide variation in spin and speed between strokes. The range of scenarios under which existing table tennis robots are able to operate is, however, limited, requiring slow play with low rotational velocity of the ball (spin). Research Goal: We aim to develop a table tennis robot system with learning capabilities able to handle spin against a human opponent. Methods: The robot system presented in this thesis consists of six components: ball position detection, ball spin detection, ball trajectory prediction, stroke parameter suggestion, robot trajectory generation, and robot control. For ball detection, the camera images pass through a conventional image processing pipeline. The ball’s 3D positions are determined using iterative triangulation, and these are then used to estimate the current ball state (position and velocity). We propose three methods for estimating the spin. The first two methods estimate spin by analyzing the movement of the logo printed on the ball on high-resolution images using either conventional computer vision or convolutional neural networks.
The final approach involves analyzing the trajectory of the ball using Magnus force fitting. Once the ball’s position, velocity, and spin are known, the future trajectory is predicted by forward-solving a physical ball model involving gravitational, drag, and Magnus forces. With the predicted ball state at hitting time as state input, we train a reinforcement learning algorithm to suggest the racket state at hitting time (stroke parameters). We use the Reflexxes library to generate a robot trajectory to achieve the suggested racket state. Results: Quantitative evaluation showed that all system components achieve results as good as or better than those of comparable robots. Regarding the research goal of this thesis, the robot was able to - maintain stable counter-hitting rallies of up to 60 balls with a human player, - return balls with different spin types (topspin and backspin) in the same rally, - and learn multiple table tennis drills in just 200 strokes or fewer. Conclusion: Our spin detection system and reinforcement learning-based stroke parameter suggestion introduce significant algorithmic novelties. In contrast to previous work, our robot succeeds in more difficult spin scenarios and drills.
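    The forward-solved ball model described above can be sketched with a simple Euler integration; the drag coefficient KD, Magnus coefficient KM, and sign convention for spin below are invented placeholders, not the thesis's fitted values.

```python
# Sketch: 2-D ball flight under gravity, quadratic drag, and a Magnus
# force perpendicular to the velocity, integrated with forward Euler.
# KD, KM and the spin sign convention are illustrative assumptions.
import math

G = 9.81    # gravity, m/s^2
KD = 0.1    # drag coefficient (assumed)
KM = 0.01   # Magnus coefficient (assumed)

def predict_trajectory(pos, vel, spin, dt=0.001, steps=500):
    """Return the (x, y) trajectory from an initial ball state."""
    x, y = pos
    vx, vy = vel
    traj = [(x, y)]
    for _ in range(steps):
        speed = math.hypot(vx, vy)
        # drag opposes velocity; Magnus acts perpendicular to it
        ax = -KD * speed * vx - KM * spin * vy
        ay = -G - KD * speed * vy + KM * spin * vx
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
    return traj

# With this sign convention, negative spin (topspin) makes the ball dip faster.
topspin = predict_trajectory((0.0, 1.0), (5.0, 0.0), spin=-50.0)
no_spin = predict_trajectory((0.0, 1.0), (5.0, 0.0), spin=0.0)
```

    Stepping such a model forward from the estimated ball state is what yields the predicted state at hitting time that the stroke-parameter policy consumes.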

    Trajectory Prediction with Event-Based Cameras for Robotics Applications

    This thesis presents the study, analysis, and implementation of a framework to perform trajectory prediction using an event-based camera for robotics applications. Event-based perception represents a novel computation paradigm based on unconventional sensing technology that holds promise for data acquisition, transmission, and processing at very low latency and power consumption, crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. Such cameras capture only the relevant spatio-temporal information - mostly driven by motion - at high rate, avoiding the inherent redundancy in static areas of the field of view. For these reasons, the device represents a potential key tool for robots that must function in highly dynamic and/or rapidly changing scenarios, or where the optimisation of resources is fundamental, as with robots carrying on-board systems. Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or the end-point of a moving target allows a robot to plan appropriate actions and their timing in advance, and to interact with it in many different ways. Moreover, prediction also helps compensate for the robot's internal delays in the perception-action chain, due, for instance, to limited sensors and/or actuators. The question addressed in this work is whether event-based cameras are advantageous for trajectory prediction in robotics: in particular, whether the classical deep learning architectures used for this task can accommodate event-based data while working asynchronously, and what benefits they can bring with respect to standard cameras.
The a priori hypothesis is that, because the sampling of the scene is driven by motion, such a device allows more meaningful information acquisition, improving prediction accuracy and processing data only when needed - without information loss or redundant acquisition. To test the hypothesis, experiments are mostly carried out on the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, a value that the prediction should compensate for in order to interact with the perceived object in real time. The first part of this thesis covers the implementation of the event-based framework for prediction, answering the question of whether Long Short-Term Memory neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is handover human-robot interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline can predict both the spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods. Moreover, it exhibits fast recovery from failure cases and adaptive prediction-horizon behaviour. Subsequently, I examined how advantageous the event-based sampling approach is with respect to the classical fixed-rate approach. The test case is the trajectory prediction of a bouncing ball, implemented with the pipeline previously introduced. A comparison between the two sampling methods is analysed in terms of error at different working rates, showing how the spatial sampling of the event-based approach achieves lower error and also adapts the computational load dynamically, depending on the motion in the scene.
Results from both works show that the combination of event-based data and Long Short-Term Memory networks is promising for spatio-temporal feature prediction in highly dynamic tasks, and paves the way for further studies of the temporal aspect and for a wide range of applications, not only robotics-related. Ongoing work now focuses on the robot control side: finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behaviour. Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have already been made through a collaboration with a group from the University of Zurich, with whom I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
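    The fixed-rate versus event-driven sampling contrast at the heart of this comparison can be shown in miniature; the 1-D signal and the 0.04 change threshold below are invented for illustration and are not the thesis's experimental values.

```python
# Sketch: sampling a 1-D position signal either at a fixed rate or only
# when it has moved by more than a threshold (event-driven). Signal and
# threshold are toy values chosen for illustration.

def fixed_rate_samples(signal, every):
    """Classical sampling: keep every Nth value regardless of motion."""
    return signal[::every]

def event_driven_samples(signal, threshold):
    """Event-like sampling: keep a value only once it has moved enough."""
    samples = [signal[0]]
    for x in signal[1:]:
        if abs(x - samples[-1]) >= threshold:
            samples.append(x)
    return samples

# Slow drift followed by fast motion: event samples concentrate where motion is.
slow = [0.001 * i for i in range(500)]        # nearly static phase
fast = [0.5 + 0.05 * i for i in range(100)]   # rapid-motion phase
n_fixed = len(fixed_rate_samples(slow + fast, every=10))
n_event_slow = len(event_driven_samples(slow, threshold=0.04))
n_event_fast = len(event_driven_samples(fast, threshold=0.04))
```

    The fixed-rate scheme spends the same budget on both phases, while the event-driven scheme produces few samples during the static phase and many during the fast one, mirroring how the computational load adapts to motion in the scene.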

    Constraints on movement variability during a discrete multi-articular action

    The aim of this programme of work was to examine how the manipulation of organismic and task constraints affected movement variability during a basketball shooting task. The specific constraints that were manipulated included task expertise, state anxiety and dioptric blur (organismic constraints), and shooting distance and attentional focus instruction (task constraints). The aim of Study 1 was to investigate the effect of shooting distance and task expertise on movement variability. Task expertise was characterised by decreased coordination variability and heightened compensatory variability between wrist, elbow and shoulder joints. However, no significant difference was found in joint angle variability at release as a function of task expertise. There was no significant change in movement variability with shooting distance, a finding that was consistent across all expertise groups. In Study 2, the aims were to examine the effect of induced dioptric blur on shooting performance and movement variability during basketball free-throw shooting, and to ascertain whether task expertise plays a mediating role in the capacity to stabilise performance against impaired visual information. Significant improvements in shooting performance were noted with the introduction of moderate visual blur (+1.00 and +2.00 D). This performance change was evident in both expert and novice performers. Only with the onset of substantial dioptric blur (+3.00 D), equivalent to the legal blindness limit, was there a significant decrease in coordination variability. Despite the change in coordination variability at +3.00 D, there was no significant difference in shooting performance when compared to the baseline condition.
The aims of Study 3 were to examine the effect of elevated anxiety on shooting performance and movement variability and, again, to determine whether task expertise plays a mediating role in stabilising performance and movement kinematics against perturbation from emotional fluctuations. Commensurate with the results of Study 2, both expert and novice performers were able to stabilise performance and movement kinematics, this time under elevated anxiety. Stabilisation was achieved through the allocation of additional attentional resources to the task. Study 4 had two aims. The first was to examine the interactive effects of practice and focus of attention on both performance and learning of an accuracy-based, discrete multi-articular action. The second was to identify potential focus-dependent changes in joint kinematics, intra-limb coordination and coordination variability. Support was found for the role of an external focus of attention on shooting performance during both acquisition and retention. However, there was evidence to suggest that internal focus instruction could play a pivotal role in shaping emerging patterns of intra-limb coordination and channelling the learners' search towards a smaller range of kinematic solutions within the perceptual-motor workspace. Collectively, this programme of work consistently highlighted the fundamental role that constraints play in governing shooting performance, movement variability and, more broadly, perceptual-motor organisation. For instance, task expertise was characterised by decreased coordination variability and heightened compensatory control. However, in light of the data pertaining to joint angle variability at release, general assumptions about expertise-variability relations cannot be made and should be viewed with caution. In addition, there is strong evidence to suggest that adaptation to constraints is, perhaps, a universal human response, and consequently not mediated by task expertise. Further research is needed to fully elucidate this proposition.
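    One common way to quantify the coordination variability discussed throughout this programme of work is the across-trial standard deviation of a joint-coupling measure at each point of the normalized movement; the toy elbow/wrist angle data below are invented for illustration and do not come from the studies.

```python
# Sketch: coordination variability as the mean, over normalized time,
# of the across-trial standard deviation of an elbow-wrist coupling.
# The angle data are fabricated toy values.

def coordination_variability(trials):
    """trials: list of trials; each trial is a list of (elbow, wrist) angle pairs."""
    n_points = len(trials[0])
    sds = []
    for i in range(n_points):
        couplings = [elbow - wrist for elbow, wrist in (t[i] for t in trials)]
        mean = sum(couplings) / len(couplings)
        var = sum((c - mean) ** 2 for c in couplings) / len(couplings)
        sds.append(var ** 0.5)
    return sum(sds) / n_points                 # average SD across the movement

# Toy data: "expert" trials repeat the same coupling; "novice" couplings drift.
expert = [[(90 + i, 45 + i) for i in range(10)] for _ in range(5)]
novice = [[(90 + i + k, 45 + i) for i in range(10)] for k in range(5)]
```

    Under this measure the repeatable coupling pattern scores zero variability while the drifting one does not, which is the sense in which expertise was associated with decreased coordination variability.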

    Utilizing Fluorescent Nanoscale Particles to Create a Map of the Electric Double Layer

    The interactions between charged particles in solution and an applied electric field follow several models, most notably the Gouy-Chapman-Stern model for the establishment of an electric double layer along the electrode, but these models make several assumptions about ionic concentrations and an infinite bulk solution. As more scientific progress is made on finite and single-molecule reactions inside microfluidic cells, the limitations of the models become more severe. Thus, creating an accurate map of the precise response of charged nanoparticles in an electric field becomes increasingly vital. Another compounding factor is Brownian motion’s inverse relationship with size: large, easily observable particles have relatively small Brownian movements, while nanoscale particles are both more difficult to observe directly and subject to much larger Brownian movements. The research presented here tackles both challenges simultaneously using fluorescently tagged, negatively charged, 20 nm diameter polystyrene nanoparticles. By utilizing parallel plate electrodes within a specially constructed microfluidic device that restricts motion in the z-direction, the nanoparticle movements are confined to two dimensions. By using one axis to measure purely Brownian motion, while the other axis carries both Brownian motion and ballistic movement from the applied electric field, the ballistic component can be disentangled and isolated. Using this terminal velocity to calculate the direct effect of the field on a single nanoparticle, as opposed to the reaction of the bulk solution, several curious phenomena were observed: the trajectory of the nanoparticle suggests that the charging time of the electrode is several orders of magnitude larger than the theoretical value, lasting for over a minute instead of tens of milliseconds.
Additionally, the effective electric field does not fall below the Brownian limit, but instead continues to exert an influence for far longer than the model suggests. Finally, when the electrode was toggled off, a repeatable response was observed in which the nanoparticle would immediately alter course in the direction opposite to the previously established field, rebounding with considerable force for several seconds after the potential had been cut before settling into neutral, stochastic Brownian motion. While some initial hypotheses are presented in this dissertation as possible explanations, these findings indicate the need for additional experiments to find the root cause of these unexpected results and observations.
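    The two-axis disentangling idea can be sketched with a simulated random walk: both axes share the same Brownian statistics, but only one carries the ballistic drift from the field, so the drift can be estimated by differencing the per-step means. The step distribution and drift value below are invented numbers, not the experimental ones.

```python
# Sketch: a 2-D random walk where the x-axis has Brownian noise plus a
# constant ballistic drift and the y-axis has Brownian noise only.
# Subtracting the two per-step means isolates the drift. All numbers
# here are fabricated for illustration.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate(n_steps, drift, sigma):
    """Return per-step displacements (dx, dy) of a drifting random walk."""
    return [(drift + random.gauss(0.0, sigma), random.gauss(0.0, sigma))
            for _ in range(n_steps)]

def estimate_drift(steps):
    """Ballistic velocity estimate: x-mean minus y-mean cancels shared bias."""
    mean_dx = sum(dx for dx, _ in steps) / len(steps)
    mean_dy = sum(dy for _, dy in steps) / len(steps)
    return mean_dx - mean_dy

steps = simulate(20_000, drift=0.05, sigma=1.0)
v_est = estimate_drift(steps)
```

    Even though each individual step is dominated by Brownian noise, averaging over many steps recovers the small ballistic component, which is the statistical basis for the terminal-velocity measurements described above.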