3 research outputs found

    Target Object Identification and Location Based on Multi-sensor Fusion


    A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars

    Autonomous cars are expected to improve road safety, traffic and mobility. It is projected that fully autonomous vehicles will be on the market within the next 20-30 years. Advances in the research and development of this technology will allow humans to disengage from the driving task, which will become the responsibility of the vehicle intelligence. In this scenario, new vehicle interior designs are proposed, enabling more flexible human-vehicle interactions inside the car. In addition, as some important stakeholders propose, control elements such as the steering wheel and the accelerator and brake pedals may no longer be needed. However, this disengagement from control is one of the main issues affecting user acceptance of the technology. Users do not seem comfortable with the idea of giving all decision power to the vehicle. Moreover, there can be location-awareness situations where the user makes a spontaneous decision and requires some form of vehicle control, such as stopping at a particular point of interest or taking a detour from the pre-calculated autonomous route of the car. Vehicle manufacturers maintain the steering wheel as a control element, allowing the driver to take over the vehicle if needed or wanted, but this constrains the interaction flexibility mentioned above. Thus, there is an unsolved dilemma between giving users enough control over the autonomous vehicle and its route to make spontaneous decisions, and preserving interaction flexibility inside the car. This dissertation proposes a voice and pointing gesture human-vehicle interaction system to solve this dilemma. Voice and pointing gestures have been identified as natural interaction techniques for guiding and commanding mobile robots, potentially providing the needed user control over the car; at the same time, they can be executed anywhere inside the vehicle, enabling interaction flexibility. The objective of this dissertation is to provide a strategy to support this system. To that end, a method based on pointing-ray intersections is developed to compute the point of interest (POI) the user is pointing to. Simulation results show that this POI computation method outperforms the traditional ray-casting-based approach by 76.5% in cluttered environments and 36.25% in combined cluttered and non-cluttered scenarios. The whole system is developed and demonstrated in a robotics simulator framework. The simulations show how voice and pointing commands performed by the user update the predefined autonomous path, based on the recognized command semantics. In addition, a dialog feedback strategy is proposed to resolve conflicting situations such as ambiguity in the POI identification. This additional step resolves all the previously mentioned POI computation inaccuracies and allows the user to confirm, correct or reject the performed commands in case the system misunderstands them.
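
    The abstract does not spell out how the pointing-ray intersections are computed. As a minimal sketch only (not the dissertation's actual method), one common way to estimate a single POI from several pointing rays is a least-squares intersection: find the point that minimizes the summed squared distance to all rays. The function name and the example rays below are illustrative assumptions.

```python
import numpy as np

def least_squares_ray_intersection(origins, directions):
    """Estimate the point closest (in least squares) to a set of pointing rays.

    origins:    (N, 3) ray start points (e.g. the user's hand positions)
    directions: (N, 3) ray direction vectors (need not be unit length)

    Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i for the POI p.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(np.asarray(origins, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)        # normalize each pointing direction
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)         # POI estimate

# Example: two pointing rays converging near (2, 1, 0)
origins    = [[0.0, 0.0, 1.5], [0.5, -0.5, 1.5]]
directions = [[2.0, 1.0, -1.5], [1.5, 1.5, -1.5]]
print(least_squares_ray_intersection(origins, directions))
```

    Note that in this sketch at least two non-parallel rays are needed for the 3x3 system to be solvable; the dissertation's actual handling of degenerate or ambiguous cases is addressed by the dialog feedback strategy described above.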

    Target object identification and localization in mobile manipulations

    How to make a mobile manipulator autonomously identify and locate a target object in an unknown environment is a very challenging question. In this paper, a multi-sensor fusion method based on a camera and a laser range finder (LRF) for mobile manipulation is proposed. Although the camera can acquire rich perceptual information, the image processing is complex and easily affected by changes in ambient light; moreover, it cannot directly provide the depth information of the environment. The LRF, in contrast, can directly measure 3D coordinates and is stable against ambient-light changes, while the camera can acquire color information; combining the two sensors to exploit their respective advantages yields more accurate measurements and simplifies information processing. To overlay the camera image with the measurement points of the LRF pitching scan and to reconstruct a 3D image that includes depth-of-field information, the model and calibration of the system are built. Based on the combination of color features extracted from the color image and shape and size features extracted from the 3D depth-of-field image, target object identification and localization are performed autonomously. To extract the shape and size features, a triangular facet normal vector clustering (TFNVC) algorithm is introduced. The effectiveness of the proposed method and algorithm is validated by experimental testing and analysis carried out on the mobile manipulator platform. © 2011 IEEE.
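
    The paper's own camera-LRF calibration procedure is not described in this abstract. As a hedged sketch of the overlay step it mentions, assuming a standard pinhole camera model with known intrinsics K and a known rigid transform (R, t) from the LRF frame to the camera frame, LRF scan points could be projected into the image as follows; all names and numeric values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def project_lrf_points(points_lrf, K, R, t):
    """Project 3D points from the LRF frame into the camera image.

    points_lrf: (N, 3) points measured by the laser range finder
    K:          (3, 3) camera intrinsic matrix
    R, t:       rotation (3, 3) and translation (3,) from LRF frame to camera frame
    Returns (N, 2) pixel coordinates and the per-point depth in the camera frame.
    """
    pts_cam = (R @ np.asarray(points_lrf, float).T).T + t   # LRF frame -> camera frame
    uvw = (K @ pts_cam.T).T                                  # pinhole projection
    pixels = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide
    return pixels, pts_cam[:, 2]

# Illustrative intrinsics/extrinsics (placeholders, not calibrated values)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.05, 0.0, 0.0])   # e.g. LRF mounted 5 cm beside the camera

pixels, depth = project_lrf_points([[0.1, 0.0, 2.0], [-0.2, 0.1, 1.5]], K, R, t)
print(pixels, depth)
```

    Once each LRF point has a pixel coordinate, the color at that pixel can be attached to the 3D point, which is the kind of colored depth-of-field reconstruction the abstract describes as the input to the TFNVC feature extraction.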