
    Face tracking using a hyperbolic catadioptric omnidirectional system

    In the first part of this paper, we present a brief review of catadioptric omnidirectional systems. The special case of the hyperbolic omnidirectional system is analysed in depth. The literature shows that a hyperboloidal mirror has two clear advantages over alternative geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the image resolution is uniformly distributed along the mirror's radius [2]. In the second part of this paper we show empirical results for the detection and tracking of faces from the omnidirectional images using the Viola-Jones method. Both panoramic and perspective projections, extracted from the omnidirectional image, were used for that purpose. The omnidirectional image size was 480x480 pixels, in greyscale. The tracking method used regions of interest (ROIs) set from the detections of faces on a panoramic projection of the image. To avoid losing or duplicating detections, the panoramic projection was extended horizontally. Duplicates were eliminated based on the ROIs established by previous detections. After a confirmed detection, faces were tracked from perspective projections (called virtual cameras), each one associated with a particular face. The zoom, pan and tilt of each virtual camera were determined by the ROIs previously computed on the panoramic image. The results show that, when using a careful combination of the two projections, good frame rates can be achieved in the task of tracking faces reliably.
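    The seam handling and ROI de-duplication described above can be sketched with OpenCV's Viola-Jones cascade. This is an illustrative approximation, not the authors' implementation: the panorama unwarping step is assumed to have already happened, and the wrap margin and duplicate-matching rule are arbitrary choices.

```python
import cv2
import numpy as np

# Illustrative sketch (assumed names and parameters, not the authors' code):
# Viola-Jones detection on a horizontally extended panorama, with detections
# from the wrapped strip folded back and de-duplicated against existing ROIs.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_on_panorama(panorama, wrap_margin=80):
    """panorama: greyscale image unwarped from the omnidirectional view."""
    h, w = panorama.shape[:2]
    # Extend horizontally so faces crossing the 0/360-degree seam appear whole.
    extended = np.hstack([panorama, panorama[:, :wrap_margin]])
    raw = detector.detectMultiScale(extended, scaleFactor=1.1, minNeighbors=5)

    rois = []
    for (x, y, bw, bh) in raw:
        x_mod = x % w  # fold detections from the copied strip back into [0, w)
        duplicate = any(
            min(abs(x_mod - rx), w - abs(x_mod - rx)) < bw // 2
            and abs(y - ry) < bh // 2
            for (rx, ry, _, _) in rois)
        if not duplicate:
            rois.append((x_mod, y, bw, bh))
    return rois  # ROIs steering the per-face virtual cameras (pan/tilt/zoom)
```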

    Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps

    This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the quality of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. For outdoor scenes, on the other hand, LiDAR is considered the standard sensor, which comparatively provides much sparser measurements, especially at longer ranges. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage. Comment: Accepted; (c) 2019 IEEE.
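    A rough idea of the sparse-to-dense step can be conveyed with a small stand-in: instead of the Hilbert Maps occupancy model over 3D LiDAR points used in the paper, the sketch below fits a kernel regressor over already-projected sparse pixel depths and queries it at every pixel. All function names and parameters are assumptions made for illustration, chosen for clarity rather than efficiency.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def densify_depth(sparse_depth, length_scale=8.0):
    """sparse_depth: HxW array, 0 where no LiDAR return was projected.
    Returns a dense HxW depth map. Stand-in for the continuous map; the
    paper builds a Hilbert Maps occupancy model over 3D points instead."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)
    coords = np.stack([xs, ys], axis=1).astype(float)
    values = sparse_depth[ys, xs]

    # RBF kernel regressor as a simple continuous surrogate (clarity over speed).
    model = KernelRidge(kernel="rbf",
                        gamma=1.0 / (2.0 * length_scale ** 2),
                        alpha=1e-2)
    model.fit(coords, values)

    # Query the continuous model at every pixel (any target resolution works).
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    queries = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
    return model.predict(queries).reshape(h, w)
```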

    pT distribution of hyperons in 200A GeV Au-Au by smoothed particle hydrodynamics

    The transverse momentum distributions of hadrons in 200A GeV Au-Au collisions at RHIC are calculated using the smoothed particle hydrodynamics code SPheRIO and compared with data from the STAR and PHOBOS Collaborations. By employing an equation of state that explicitly incorporates strangeness conservation and introducing the strangeness chemical potential into the code, the transverse spectra give a reasonable description of the experimental data. Comment: 3 pages, 6 figures
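    For context, particle spectra in hydrodynamic codes of this kind are conventionally obtained from a Cooper-Frye-type freeze-out prescription, in which the strangeness chemical potential enters the thermal distribution; the abstract does not spell this out, so the expression below is only the standard textbook form:

```latex
E\frac{dN}{d^{3}p} = \int_{\Sigma} f(x,p)\, p^{\mu}\, d\sigma_{\mu},
\qquad
f(x,p) = \frac{g}{(2\pi)^{3}}
\left[\exp\!\left(\frac{p^{\mu}u_{\mu} - \mu_{B}B - \mu_{S}S}{T}\right) \pm 1\right]^{-1}
```

    where \Sigma is the freeze-out hypersurface, u_\mu the local flow velocity, and \mu_S the strangeness chemical potential coupling to the hadron's strangeness S.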

    Learning to Race through Coordinate Descent Bayesian Optimisation

    In the automation of many kinds of processes, the observable outcome can often be described as the combined effect of an entire sequence of actions, or controls, applied throughout its execution. In these cases, strategies to optimise control policies for individual stages of the process might not be applicable, and instead the whole policy might have to be optimised at once. On the other hand, the cost of evaluating the policy's performance might also be high, making it desirable to find a solution with as few interactions with the real system as possible. We consider the problem of optimising control policies to allow a robot to complete a given race track in a minimum amount of time. We assume that the robot has no prior information about the track or its own dynamical model, just an initial valid driving example. Localisation is only applied to monitor the robot and to provide an indication of its position along the track's centre axis. We propose a method for finding a policy that minimises the time per lap while keeping the vehicle on the track, using a Bayesian optimisation (BO) approach over a reproducing kernel Hilbert space. We apply an algorithm to search more efficiently over high-dimensional policy-parameter spaces with BO by iterating over each dimension individually, in a sequential coordinate-descent-like scheme. Experiments demonstrate the performance of the algorithm against other methods in a simulated car racing environment. Comment: Accepted as a conference paper at the 2018 IEEE International Conference on Robotics and Automation (ICRA).
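    The coordinate-descent flavour of the optimisation can be illustrated with a minimal sketch: a one-dimensional Gaussian-process surrogate is fitted to a handful of rollouts for each policy dimension in turn, while the remaining dimensions are held fixed. The function `lap_time`, the lower-confidence-bound selection rule, and all budgets are placeholders, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cd_bayes_opt(lap_time, theta0, bounds, sweeps=5, init_evals=4, candidates=50):
    """Coordinate-descent BO sketch. lap_time(theta) -> seconds is the
    expensive rollout on the track (placeholder); bounds[d] = (lo, hi)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(sweeps):
        for d in range(len(theta)):          # optimise one dimension at a time
            xs = np.random.uniform(bounds[d][0], bounds[d][1], size=init_evals)
            ys = []
            for x in xs:
                trial = theta.copy()
                trial[d] = x
                ys.append(lap_time(trial))   # expensive: one lap per evaluation
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                          normalize_y=True)
            gp.fit(xs.reshape(-1, 1), np.array(ys))
            grid = np.linspace(bounds[d][0], bounds[d][1], candidates).reshape(-1, 1)
            mu, sigma = gp.predict(grid, return_std=True)
            # Pick the lower-confidence-bound minimiser for this coordinate.
            theta[d] = grid[np.argmin(mu - sigma), 0]
    return theta
```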

    Usability Study of a Control Framework for an Intelligent Wheelchair

    We describe the development and assessment of a computer-controlled wheelchair called the SMARTCHAIR. A shared control framework with different levels of autonomy allows the human operator to stay in complete control of the chair at each level while ensuring her safety. The framework incorporates deliberative motion plans or controllers, reactive behaviors, and human user inputs. At every instant in time, control inputs from these three different sources are blended continuously to provide a safe trajectory to the destination, while allowing the human to maintain control and safely override the autonomous behavior. In this paper, we present usability experiments with 50 participants and demonstrate quantitatively the benefits of human-robot augmentation.
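    The continuous blending of deliberative, reactive, and human inputs could look roughly like the sketch below. The weighting rule is an assumption made for illustration; the paper does not specify this particular scheme.

```python
import numpy as np

def blend_controls(u_plan, u_reactive, u_human, human_activity, obstacle_proximity):
    """Each u_* is a (linear, angular) velocity command; human_activity and
    obstacle_proximity lie in [0, 1]. The weighting rule is an assumption."""
    w_human = human_activity                        # user input dominates when present
    w_react = (1.0 - w_human) * obstacle_proximity  # safety weight grows near obstacles
    w_plan = 1.0 - w_human - w_react                # remainder follows the deliberative plan
    u = (w_plan * np.asarray(u_plan, dtype=float)
         + w_react * np.asarray(u_reactive, dtype=float)
         + w_human * np.asarray(u_human, dtype=float))
    return tuple(u)
```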

    Integrating Human Inputs with Autonomous Behaviors on an Intelligent Wheelchair Platform

    Researchers have developed and assessed a computer-controlled wheelchair called the Smart Chair. A shared control framework has different levels of autonomy, allowing the human operator complete control of the chair at each level while ensuring the user's safety. The semiautonomous system incorporates deliberative motion plans or controllers, reactive behaviors, and human user inputs. At every instant in time, control inputs from three sources are integrated continuously to provide a safe trajectory to the destination. Experiments with 50 participants demonstrate quantitatively and qualitatively the benefits of human-robot augmentation in three modes of operation: manual, autonomous, and semiautonomous. This article is part of a special issue on Interacting with Autonomy.

    Incorporating User Inputs in Motion Planning for a Smart Wheelchair

    We describe the development and assessment of a computer-controlled wheelchair equipped with a suite of sensors and a novel interface, called the SMARTCHAIR. The main focus of this paper is a shared control framework which allows the human operator to interact with the chair while it is performing an autonomous task. At the highest level, the autonomous system is able to plan paths using high-level deliberative navigation behaviors depending on destinations or waypoints commanded by the user. The user is able to locally modify or override previously commanded autonomous behaviors or plans. This is possible because of our hierarchical control strategy that combines three independent sources of control inputs: deliberative plans obtained from maps and user commands, reactive behaviors generated by stimuli from the environment, and user-initiated commands that might arise during the execution of a plan or behavior. The framework we describe ensures the user's safety while allowing the user to be in complete control of a potentially autonomous system.