
    Event-Based Algorithms For Geometric Computer Vision

    Event cameras are novel bio-inspired sensors which mimic the function of the human retina. Rather than directly capturing intensities to form synchronous images as in traditional cameras, event cameras asynchronously detect changes in log image intensity. When such a change is detected at a given pixel, it is immediately sent to the host computer; each event consists of the x, y pixel position of the change, a timestamp accurate to tens of microseconds, and a polarity indicating whether the pixel became brighter or darker. These cameras provide a number of useful benefits over traditional cameras, including the ability to track extremely fast motions, high dynamic range, and low power consumption. However, with a new sensing modality comes the need to develop novel algorithms. As these cameras do not capture photometric intensities, novel loss functions must be developed to replace the photoconsistency assumption which serves as the backbone of many classical computer vision algorithms. In addition, the relative novelty of these sensors means that there does not exist the wealth of data available for traditional images with which we can train learning-based methods such as deep neural networks. In this work, we address both of these issues with two foundational principles. First, we show that the motion blur induced when the events are projected into the 2D image plane can be used as a suitable substitute for the classical photometric loss function. Second, we develop self-supervised learning methods which allow us to train convolutional neural networks to estimate motion without any labeled training data. We apply these principles to solve classical perception problems such as feature tracking, visual inertial odometry, optical flow, and stereo depth estimation, as well as recognition tasks such as object detection and human pose estimation. We show that these solutions are able to utilize the benefits of event cameras, allowing us to operate in fast-moving scenes with challenging lighting which would be incredibly difficult for traditional cameras.
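    As a rough illustration of the first principle above, the sketch below (a minimal example assuming NumPy, a toy (x, y, t, polarity) event array, and a constant image-plane velocity; it is not the thesis code) warps events to a common time along a candidate flow, accumulates them into an image, and scores the candidate by the contrast of that image. A correct motion estimate "deblurs" the event image, so maximizing contrast over candidate flows recovers the motion.

        import numpy as np

        def event_image(events, flow, shape=(180, 240)):
            """Accumulate events into an image after warping them to a common
            time with a candidate constant optical flow (vx, vy) in px/s.
            events is an (N, 4) array of (x, y, t, polarity)."""
            x, y, t, p = events.T
            t0 = t.min()
            # Warp each event back to time t0 along the candidate flow.
            xw = np.round(x - flow[0] * (t - t0)).astype(int)
            yw = np.round(y - flow[1] * (t - t0)).astype(int)
            img = np.zeros(shape)
            valid = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
            np.add.at(img, (yw[valid], xw[valid]), p[valid])
            return img

        def contrast(events, flow):
            """Variance of the warped event image; sharper images score higher."""
            return event_image(events, flow).var()

        # Coarse grid search for the flow that best "deblurs" the events.
        # candidates = [(vx, vy) for vx in range(-50, 51, 10) for vy in range(-50, 51, 10)]
        # best_flow = max(candidates, key=lambda f: contrast(events, f))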

    The development of a robotic test bed with applications in Q-learning

    In this work, we show the design, development, and testing of an autonomous ground vehicle for experiments in learning and intelligent transportation research. We then implement the Q-Learning algorithm to teach the robot to navigate towards a light source. The vehicle platform is based on the Tamiya TXT-1 chassis, which is outfitted with an onboard computer for processing high-level functions, a microcontroller for controlling the low-level tasks, and an array of sensors for collecting information about its surroundings. The TXT-1 robot is a unique research testbed that encourages the use of a modular design, low-cost COTS hardware, and open-source software. The TXT-1 is designed using different modules or blocks that are separated based on functionality. The different functional blocks of the TXT-1 are the motors, power, low-level controller, high-level controller, and sensors. This modular design is important when considering upgrading or maintaining the robot. The research platform uses an Apple Mac Mini as its on-board computer for handling high-level navigation tasks like processing sensor data and computing navigation trajectories. ROS, the Robot Operating System, is used on the computer as a development environment to easily implement algorithms to validate on the robot. A ROS driver was created so that the TXT-1 low-level functions can be sensed and commanded. The TXT-1 low-level controller is designed using an ARM7 processor development board with FreeRTOS, OpenOCD, and the CodeSourcery development tools. The RTOS is used to provide a stable, real-time platform that can be used for many future generations of TXT-1 robots. A communication protocol is created so that the high- and low-level processors can communicate. A power distribution system is designed and built to deliver power to all of the systems efficiently and reliably while using a single battery type. Velocity controllers are developed and implemented on the low-level controller. These control the linear and angular velocities using the wheel encoders in a PID feedback loop. The angular velocity controller uses gain scheduling to overcome the system's nonlinearity. The controllers are then tested for adequate velocity response and tracking. The robot is then tested by using the Q-Learning algorithm to teach the robot to navigate towards a light source. The Q-Learning algorithm is first described in detail, and then the problem is formulated and the algorithm is tested in the Stage simulation environment with ROS. The same ROS code is then used on the TXT-1 to implement the algorithm in hardware. Because of delays encountered in the system, the Q-Learning algorithm is modified to use the sensed action to update the Q-Table, which gives promising results. As a result of this research, a novel autonomous ground vehicle was built and the Q-Learning source-finding problem was implemented.
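    For reference, the following minimal sketch shows the standard tabular Q-Learning update behind the light-seeking behavior described above; the state/action encoding, reward, and robot interface names are hypothetical stand-ins, not the thesis code. The thesis's delay-handling modification would simply pass the sensed action, rather than the commanded one, into the update.

        import random
        from collections import defaultdict

        ACTIONS = ["forward", "turn_left", "turn_right"]   # hypothetical action set
        ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2              # learning rate, discount, exploration

        Q = defaultdict(float)   # Q[(state, action)] -> value, zero-initialized

        def choose_action(state):
            """Epsilon-greedy action selection from the current Q-table."""
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def q_update(state, action, reward, next_state):
            """One-step Q-Learning update:
            Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

        # Training loop sketch: state = discretized light-sensor reading,
        # reward = increase in measured light intensity after the action.
        # for episode in range(200):
        #     state = sense()                              # hypothetical robot interface
        #     for step in range(100):
        #         a = choose_action(state)
        #         next_state, reward = execute(a)          # hypothetical robot interface
        #         q_update(state, a, reward, next_state)
        #         state = next_state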

    The Einstein Toolkit: A Community Computational Infrastructure for Relativistic Astrophysics

    We describe the Einstein Toolkit, a community-driven, freely accessible computational infrastructure intended for use in numerical relativity, relativistic astrophysics, and other applications. The Toolkit, developed by a collaboration involving researchers from multiple institutions around the world, combines a core set of components needed to simulate astrophysical objects such as black holes, compact objects, and collapsing stars, as well as a full suite of analysis tools. The Einstein Toolkit is currently based on the Cactus Framework for high-performance computing and the Carpet adaptive mesh refinement driver. It implements spacetime evolution via the BSSN evolution system and general-relativistic hydrodynamics in a finite-volume discretization. The toolkit is under continuous development and contains many new code components that have been publicly released for the first time and are described in this article. We discuss the motivation behind the release of the toolkit, the philosophy underlying its development, and the goals of the project. A summary of the implemented numerical techniques is included, as are results of numerical tests covering a variety of sample astrophysical problems.

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for decision making. The definition of real-time depends on the application under study, ranging from response times of microseconds up to several hours in the case of very computing-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, and specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications, to accelerate ring reconstruction in RICH detectors when it is not possible to obtain seeds for the reconstruction from external trackers.
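    The abstract does not spell out the trigger algorithm itself, but the basic per-ring building block in a seedless setting is a direct least-squares circle fit over the photon hit coordinates. Below is a minimal algebraic (Kasa-style) fit in NumPy, given purely as background and not as the GAP implementation.

        import numpy as np

        def fit_ring(x, y):
            """Algebraic least-squares circle fit to photon hit positions.
            Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then converts
            to the ring center and radius. Background sketch only."""
            A = np.column_stack([x, y, np.ones_like(x)])
            rhs = -(x**2 + y**2)
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            cx, cy = -a / 2.0, -b / 2.0
            radius = np.sqrt(cx**2 + cy**2 - c)
            return cx, cy, radius

        # Example: noisy hits on a ring of radius 5 centered at (1, 2).
        # t = np.random.uniform(0, 2 * np.pi, 32)
        # x = 1 + 5 * np.cos(t) + 0.05 * np.random.randn(32)
        # y = 2 + 5 * np.sin(t) + 0.05 * np.random.randn(32)
        # print(fit_ring(x, y))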

    GEANT4--a simulation toolkit

    Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes offered cover a comprehensive range, including electromagnetic, hadronic and optical processes, a large set of long-lived particles, materials and elements, over a wide energy range starting, in some cases, from 250 eV and extending in others to the TeV energy range. It has been designed and constructed to expose the physics models utilised, to handle complex geometries, and to enable its easy adaptation for optimal use in different sets of applications. The toolkit is the result of a worldwide collaboration of physicists and software engineers. It has been created exploiting software engineering and object-oriented technology and implemented in the C++ programming language. It has been used in applications in particle physics, nuclear physics, accelerator design, space engineering and medical physics.
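    To give a flavor of what simulating "the passage of particles through matter" involves at its simplest, the toy Monte Carlo below samples exponential free paths for photons crossing a homogeneous slab and estimates the transmitted fraction. This is an illustrative sketch only; it is not Geant4 code and does not use its API, which additionally tracks geometry, many interaction processes, and secondary particles.

        import math
        import random

        def transmitted_fraction(thickness_cm, mu_cm1, n_photons=100_000):
            """Toy photon-transport Monte Carlo through a homogeneous slab.
            Each photon's free path is drawn from an exponential distribution
            with mean 1/mu; the photon is absorbed if the path is shorter
            than the slab thickness."""
            transmitted = 0
            for _ in range(n_photons):
                path = -math.log(1.0 - random.random()) / mu_cm1
                if path > thickness_cm:
                    transmitted += 1
            return transmitted / n_photons

        # Analytic check: transmission = exp(-mu * d) = exp(-0.2 * 5) ~= 0.368
        # print(transmitted_fraction(5.0, 0.2))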

    Cognitive-developmental learning for a humanoid robot: a caregiver's gift

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 319-341). Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental. The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated), thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties. This thesis argues for enculturating humanoid robots using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations. The humanoid robot therefore sees the world through the caregiver's eyes. By Artur Miguel Do Amaral Arsenio. Ph.D.

    Achieving broad access to satellite control research with Zero Robotics

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. This thesis was scanned as part of an electronic thesis pilot project. Cataloged from PDF version of thesis. Includes bibliographical references (p. 307-313). Since operations began in 2006, the SPHERES facility, including three satellites aboard the International Space Station (ISS), has demonstrated many future satellite technologies in a true microgravity environment and established a model for developing successful ISS payloads. In 2009, the Zero Robotics program began with the goal of leveraging the resources of SPHERES as a tool for Science, Technology, Engineering, and Math education through a unique student robotics competition. Since the first iteration with two teams, the program has grown over four years into an international tournament involving more than two thousand student competitors and has given hundreds of students the experience of running experiments on the ISS. Zero Robotics tournaments involve an annually updated challenge motivated by a space theme and designed to match the hardware constraints of the SPHERES facility. The tournament proceeds in several phases of increasing difficulty, including a multi-week collaboration period where geographically separated teams work together through the provided tools to write software for SPHERES. Students initially compete in a virtual, online simulation environment, then transition to hardware for the final live championship round aboard the ISS. Along the way, the online platform ensures compatibility with the satellite hardware and provides feedback in the form of 3D simulation animations. During each competition phase, a continuous scoring system allows competitors to incrementally explore new strategies while striving for a seat in the championship. This thesis will present the design of the Zero Robotics competition and the supporting online environment and tools that enable users from around the world to successfully write computer programs for satellites. The central contribution is a framework for building virtual platforms that serve as surrogates for limited-availability hardware facilities. The framework includes the elaboration of the core principles behind the design of Zero Robotics along with examples and lessons from the implementation of the competition. The virtual platform concept is further extended with a web-based architecture for writing, compiling, simulating, and analyzing programs for a dynamic robot. A standalone and key enabling component of the architecture is a pattern for building fast, high-fidelity, web-based simulations. For control of the robots, an easy-to-use programming interface for controlling 6 degree-of-freedom (6DOF) satellites is presented, along with a lightweight supervisory control law to prevent collisions between satellites without user action. This work also contributes a new form of student robotics competition, including the unique features of model-based online simulation, programming, 6DOF dynamics, a multi-week team collaboration phase, and the chance to test satellites aboard the ISS. Scoring during the competition is made possible by a game-agnostic scoring algorithm, which has been demonstrated during a tournament season and improved for responsiveness. Lastly, future directions are suggested for improving the tournament, including a detailed initial exploration of creating open-ended Monte Carlo analysis tools. By Jacob G. Katz. Ph.D.
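    The supervisory collision-avoidance law mentioned above is not detailed in this abstract. As a purely hypothetical sketch of the general idea (names, thresholds, and gains are assumptions, not the thesis design), a minimal version is a keep-out-sphere check that passes the student's commanded velocity through unless two satellites get too close, in which case it substitutes a separating velocity.

        import numpy as np

        KEEP_OUT_RADIUS = 0.3   # hypothetical minimum separation, meters
        PUSH_GAIN = 0.5         # hypothetical gain on the separating velocity

        def supervise(cmd_velocity, own_pos, other_pos):
            """Pass the commanded velocity through unless the other satellite
            is inside the keep-out sphere; then command a velocity directed
            away from it, scaled by the penetration depth."""
            separation = own_pos - other_pos
            dist = max(np.linalg.norm(separation), 1e-6)
            if dist >= KEEP_OUT_RADIUS:
                return cmd_velocity
            return PUSH_GAIN * (KEEP_OUT_RADIUS - dist) * separation / dist

        # cmd = supervise(np.array([0.05, 0.0, 0.0]),     # student command
        #                 np.array([0.00, 0.0, 0.0]),     # own position
        #                 np.array([0.20, 0.0, 0.0]))     # other satellite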

    A survey of the application of soft computing to investment and financial trading


    The POESIA approach to data and service integration on the Semantic Web

    Advisor: Claudia Bauzer Medeiros. Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação. POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. This approach is centered on two types of semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help data integration and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal, in a real application for the domain of agricultural planning, analyzing the benefits and scalability problems of the current semantic Web technology when faced with large volumes of data. Doctorate in Computer Science.