13 research outputs found

    Attitude, Linear Velocity and Depth Estimation of a Camera observing a planar target using continuous homography and inertial data

    This paper revisits the problem of estimating the attitude, linear velocity and depth of an IMU-Camera system with respect to a planar target. The considered solution relies on measurements of the optical flow (extracted from the continuous homography) complemented with gyrometer and accelerometer measurements. The proposed deterministic observer is accompanied by an observability analysis that identifies camera motion excitation conditions whose satisfaction grants stability of the observer and convergence of the estimation errors to zero. The performance of the observer is illustrated by experiments on a test-bed IMU-Camera system.
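    The core algebraic relation behind this kind of observer can be sketched numerically. Under one common convention, the continuous homography of a planar scene is H = [ω]× + (1/d) v nᵀ, so with gyro-measured ω and known unit normal n and depth d, the linear velocity follows as v = d (H − [ω]×) n. The values below are illustrative, not from the paper:

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix [w]x such that skew(w) @ p == np.cross(w, p)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# Ground truth used to synthesize a continuous homography matrix
omega = np.array([0.1, -0.2, 0.05])   # angular velocity (gyrometer)
v_true = np.array([0.4, 0.0, -0.1])   # linear velocity to be recovered
n = np.array([0.0, 0.0, 1.0])         # unit normal of the planar target
d = 2.0                               # distance to the plane

# Continuous homography: H = [omega]x + (1/d) * v * n^T  (one common convention)
H = skew(omega) + np.outer(v_true, n) / d

# Recovery: (H - [omega]x) n = (v/d) (n . n) = v/d, since n has unit norm
v_est = d * (H - skew(omega)) @ n
print(v_est)  # matches v_true
```

Recovering v this way of course presumes n and d are known; the paper's observer estimates the depth as well, which is where the excitation conditions enter.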

    Optimisation of a moving platform vehicle simulator for vehicle handling experiments

    This thesis discusses the optimisation of motion platform simulators and was motivated by Loughborough University's acquisition of a low-cost six-strut moving platform vehicle simulator. Historically, automotive vehicle simulators have more generally been used for human factors experiments that examine driver behaviour during low-severity manoeuvres or short events, e.g. obstacle avoidance. The purpose of this thesis is to examine the potential for the simulator to be used for vehicle handling experiments in which the vehicle is free to explore its limits for sustained periods of time. This research places significant emphasis on vehicle handling models. In particular, we examine data acquisition systems and testing methods before investigating potential optimisation and identification techniques for estimating vehicle model parameters that could be implemented on the simulator. Here we examine the possibility of producing high-quality vehicle models within a short space of time, with a view to rapid identification of different types of vehicle directly from vehicle testing. This includes the data acquisition process and addresses the significance of the sensors and equipment used to measure the vehicle states, and the importance of the recorded vehicle manoeuvres and test track characteristics. The second phase was carried out once the simulator was installed and functional. Clearly, the simulator is a piece of experimental equipment and, as with any engineering experiment, the equipment should be well understood. Consequently, the accuracy with which it adheres to the real world, i.e. its fidelity, is assessed by investigating the simulator's capabilities and limitations; this is achieved by analysing the raw performance of the motion platform and conducting driver-in-the-loop experiments. This work proves valuable as it is used to optimise how the motion platform responds to vehicle dynamics and provides the motivation for conducting a driver-in-the-loop handling experiment in the final section of this thesis. Here, the simulator's potential to be used as a tool to assess race car driver skill is investigated. After conducting various tests in the simulated and real world, the correlation between the subjects' simulated and real-world performances is used to critically assess the simulator's performance and draw conclusions concerning its future potential for handling-based research. This thesis shows it is possible to use an Inertial GPS Navigation System for capturing vehicle data to good effect and describes how a comprehensive set of new vehicle dynamics measurements can be collected and used for model tuning and optimisation within a relatively short space of time (approximately one day). The work presents substantial evidence of how dominant the influence of steer ratio and toe compliance is on the accuracy of the handling models, and that they are a likely source of modelling errors. The importance of vehicle slip angle measurement is a particular point of interest and is examined concurrently with the driving manoeuvres, where some guidelines for test methodology and data collection are established. A novel identification process is also presented with the Identifying Extended Kalman Filter. It has been shown possible to identify separate front and rear tyre models as well as a single tyre model. The thesis also describes the relative importance of motion for vehicle simulators that are to be used for handling-based experiments. It appears more valuable to emulate only those vehicle motions that are within the platform's capabilities and limitations, in a quest for quality over quantity. Finally, this work demonstrates the simulator's potential to be used as a tool to evaluate race car driver skill, which also fundamentally assesses the fidelity of the simulator. This is achieved by examining the correlation between a simulated and real-world experiment, where we see a positive correlation which indicates high fidelity. Further analysis shows the importance of administering adequate driver training before beginning experimentation.
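    The general idea behind an identifying Kalman filter, as mentioned above, is to augment the state vector with the unknown model parameters and let the filter estimate both jointly. A minimal sketch of that idea on a scalar toy system (all names and values are illustrative, not the thesis's vehicle model): the unknown decay rate a of x' = -a x + u is estimated by filtering the augmented state [x, a].

```python
import numpy as np

dt, a_true, steps = 0.01, 2.0, 5000

# Simulate the "measured" system x' = -a x + u with a persistently exciting input
x, us, zs = 0.0, [], []
for k in range(steps):
    u = np.sin(0.5 * k * dt) + 0.5 * np.cos(1.3 * k * dt)
    x = x + dt * (-a_true * x + u)
    us.append(u)
    zs.append(x)  # noiseless measurement of x, for clarity

# Joint EKF over the augmented state s = [x, a]
s = np.array([0.0, 0.5])            # deliberately poor initial guess for a
P = np.diag([1.0, 1.0])
Q = np.diag([1e-8, 1e-8])           # small process noise keeps the filter alert
R = np.array([[1e-4]])
Hm = np.array([[1.0, 0.0]])         # only x is measured

for u, z in zip(us, zs):
    # Predict: x_{k+1} = x + dt(-a x + u), a_{k+1} = a; F is the Jacobian
    F = np.array([[1.0 - dt * s[1], -dt * s[0]],
                  [0.0, 1.0]])
    s = np.array([s[0] + dt * (-s[1] * s[0] + u), s[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement of x; the x-a cross-covariance corrects a
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    s = s + (K @ (np.array([z]) - Hm @ s)).ravel()
    P = (np.eye(2) - K @ Hm) @ P

print(round(s[1], 2))  # converges towards a_true = 2.0
```

The same augmentation pattern extends to tyre model parameters, at the cost of a larger Jacobian and the excitation requirements discussed in the thesis.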

    Development and evaluation of a novel method for in-situ medical image display

    Three-dimensional (3D) medical imaging, including computed tomography (CT) and magnetic resonance (MR), and other modalities, has become a standard of care for diagnosis of disease and guidance of interventional procedures. As the technology to acquire larger, more magnificent, and more informative medical images advances, so too must the technology to display, interact with, and interpret these data. This dissertation concerns the development and evaluation of a novel method for interaction with 3D medical images called "grab-a-slice," which is a movable, tracked stereo display. It is the latest in a series of displays developed in our laboratory that we describe as in-situ, meaning that the displayed image is embedded in a physical 3D coordinate system. As the display is moved through space, a continuously updated tomographic slice of a 3D medical image is shown on the screen, corresponding to the position and orientation of the display. The act of manipulating the display through a "virtual patient" preserves the perception of 3D anatomic relationships in a way that is not possible with conventional, fixed displays. The further addition of stereo display capabilities permits augmentation of the tomographic image data with out-of-plane structures using 3D graphical methods. In this dissertation we describe the research and clinical motivations for such a device. We describe the technical development of grab-a-slice as well as psychophysical experiments to evaluate the hypothesized perceptual and cognitive benefits. We speculate on the advantages and limitations of the grab-a-slice display and propose future directions for its use in psychophysical research, clinical settings, and image analysis.
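    The central resampling step such a display performs, extracting an arbitrarily oriented tomographic slice from a 3D volume given a tracked pose, can be sketched as follows. The function and plane parametrisation here are hypothetical stand-ins for whatever the actual system uses: a grid of points on the display plane is mapped into volume coordinates and interpolated.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# A synthetic 3D "medical" volume indexed as volume[z, y, x]
volume = np.arange(8 * 16 * 16, dtype=float).reshape(8, 16, 16)

def extract_slice(vol, origin, u_axis, v_axis, shape):
    """Sample vol on the plane origin + i*u_axis + j*v_axis (trilinear)."""
    ii, jj = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    pts = (origin[:, None, None]
           + ii[None] * u_axis[:, None, None]
           + jj[None] * v_axis[:, None, None])   # (3, H, W) in (z, y, x) order
    return map_coordinates(vol, pts, order=1)    # linear interpolation

# Sanity check: an axis-aligned plane at z = 3 reproduces volume[3] exactly
sl = extract_slice(volume,
                   origin=np.array([3.0, 0.0, 0.0]),
                   u_axis=np.array([0.0, 1.0, 0.0]),   # steps along y
                   v_axis=np.array([0.0, 0.0, 1.0]),   # steps along x
                   shape=(16, 16))
print(np.allclose(sl, volume[3]))  # True
```

Tilting `u_axis` and `v_axis` (and updating `origin`) from the tracker pose yields the oblique slices that follow the display as it moves.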

    Visually-guided walking reference modification for humanoid robots

    Humanoid robots are expected to assist humans in the future. As for any mobile robot, autonomy is an invaluable feature for a humanoid interacting with its environment. Autonomy, along with components from artificial intelligence, requires information from sensors. Vision sensors are widely accepted as the source of the richest information about the surroundings of a robot. Visual information can be exploited in tasks ranging from object recognition, localization and manipulation to scene interpretation, gesture identification and self-localization. Any autonomous action of a humanoid trying to accomplish a high-level goal requires the robot to move between arbitrary waypoints and inevitably relies on its self-localization abilities. Due to the disturbances accumulating over the path, this can only be achieved by gathering feedback information from the environment. This thesis proposes a path planning and correction method for bipedal walkers based on visual odometry. A stereo camera pair is used to find distinguishable 3D scene points and track them over time, in order to estimate the 6 degrees-of-freedom position and orientation of the robot. The algorithm is developed and assessed on a benchmarking stereo video sequence taken from a wheeled robot, and then tested via experiments with the humanoid robot SURALP (Sabanci University Robotic ReseArch Laboratory Platform).
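    Once 3D scene points are tracked between two frames, the 6 degrees-of-freedom motion can be recovered by rigidly aligning the two point sets. A standard building block for this is the SVD-based Kabsch/Horn alignment, sketched here on synthetic data (a generic technique, not necessarily the thesis's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_align(P, Q):
    """Find R, t minimizing ||R P + t - Q|| over corresponding columns (Kabsch)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)   # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # guard against reflections
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t

# Synthetic 3D landmarks observed from two camera poses
P = rng.normal(size=(3, 40))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.5], [-0.2], [1.0]])
Q = R_true @ P + t_true

R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

Chaining these frame-to-frame transforms gives the visual odometry estimate; in practice an outlier-rejection step such as RANSAC wraps the alignment to handle mistracked points.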

    Towards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments

    Having had its origins in the minds of science fiction authors, mobile robot hardware became reality many years ago. However, most envisioned applications have remained fictional - a fact that is likely caused by the lack of sufficient perception systems. In particular, mobile robots need to be aware of their own location with respect to their environment at all times to act in a reasonable manner. A promising application for mobile robots in the near future could be, e.g., search and rescue tasks on disaster sites. Here, small and agile flying robots are an ideal tool to effectively create an overview of the scene, since they are largely unaffected by unstructured environments and blocked passageways. In this respect, this thesis first explores the problem of ego-motion estimation for quadrotor Unmanned Aerial Vehicles (UAVs) based entirely on onboard sensing and processing hardware. To this end, cameras are an ideal choice as the major sensory modality: they are light, cheap, and provide a dense amount of information on the environment. While the literature provides camera-based algorithms to estimate and track the pose of UAVs over time, these solutions lack the robustness required for many real-world applications due to their inability to recover quickly from a loss of tracking. Therefore, in the first part of this thesis, a robust algorithm to estimate the velocity of a quadrotor UAV based on optical flow is presented. Additionally, the influence of the incorporated measurements from an Inertial Measurement Unit (IMU) on the precision of the velocity estimates is discussed and experimentally validated. Finally, we introduce a novel nonlinear observation scheme to recover the metric scale factor of the state estimate through fusion with acceleration measurements. This nonlinear model now allows the convergence behavior of the presented filtering approach to be predicted. All findings are experimentally evaluated, including the first presented human-controlled closed-loop flights based entirely on onboard velocity estimation. In the second part of this thesis, we address the problem of collaborative multi-robot operations based on onboard visual perception. When a direct line of sight exists between the robots, we propose a distributed formation control based on ego-motion detection and visually detected bearing angles between the members of the formation. To overcome the limited field of view of real cameras, we add an artificial yaw rotation to track robots that would be invisible to static cameras. Afterwards, without the need for direct visual detections, we present a novel contribution to the mutual localization problem. In particular, we demonstrate precise global localization of a monocular camera with respect to a dense 3D map. To this end, we propose an iterative algorithm that estimates the camera location for which the photometric error between a synthesized view of the dense map and the real camera image is minimal.
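    The photometric-error objective used for the dense-map localization can be illustrated with a toy one-dimensional version (purely illustrative; the thesis works on full images and an iterative minimizer rather than exhaustive search): slide a synthesized signal over the observed one and pick the offset with the minimal sum of squared intensity differences.

```python
import numpy as np

# "Dense map" intensities along one scanline of a synthesized view
xs = np.linspace(0, 4 * np.pi, 400)
template = np.sin(xs) + 0.3 * np.sin(3 * xs)

true_shift = 17
observed = np.roll(template, true_shift)  # the real camera's (shifted) view

# Photometric error for every candidate shift; its minimum locates the camera
errors = [np.sum((np.roll(template, s) - observed) ** 2) for s in range(400)]
print(int(np.argmin(errors)))  # 17
```

In the full problem the candidate "shift" is a 6-DoF camera pose and each candidate requires re-rendering the dense map, which is why an iterative gradient-based scheme replaces the brute-force scan used here.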

    Adaptive Vision Based Scene Registration for Outdoor Augmented Reality

    Augmented Reality (AR) involves adding virtual content into real scenes. Scenes are viewed using a Head-Mounted Display or another display type. In order to place content into the user's view of a scene, the user's position and orientation relative to the scene, commonly referred to as their pose, must be determined accurately. This allows the objects to be placed in the correct positions and to remain there when the user moves or the scene changes. It is achieved by tracking the user in relation to their environment using a variety of technologies. One technology which has proven to provide accurate results is computer vision. Computer vision involves a computer analysing images and achieving an understanding of them. This may mean locating objects such as faces in the images or, in the case of AR, determining the pose of the user. One of the ultimate goals of AR systems is to be capable of operating under any condition. For example, a computer vision system must be robust across a range of different scene types and under unpredictable environmental conditions due to variable illumination and weather. The majority of existing literature tests algorithms under the assumption of ideal or 'normal' imaging conditions. To ensure robustness under as many circumstances as possible, it is also important to evaluate systems under adverse conditions. This thesis seeks to analyse the effects that variable illumination has on computer vision algorithms. To enable this analysis, test data is required that isolates weather and illumination effects, without other factors such as changes in viewpoint that would bias the results. A new dataset is presented which also allows controlled viewpoint differences in the presence of weather and illumination changes. This is achieved by capturing video from a camera undergoing a repeatable motion sequence. Ground truth data is stored per frame, allowing images from the same position under differing environmental conditions to be easily extracted from the videos. An in-depth analysis of six detection algorithms and five matching techniques demonstrates the impact that non-uniform illumination changes can have on vision algorithms. Specifically, shadows can degrade performance and reduce confidence in the system, decrease reliability, or even completely prevent successful operation. An investigation into approaches to improve performance yields techniques that can help reduce the impact of shadows. A novel algorithm is presented that merges reference data captured at different times, resulting in reference data with minimal shadow effects. This can significantly improve performance and reliability when operating on images containing shadow effects. These advances improve the robustness of computer vision systems and extend the range of conditions in which they can operate. This can increase the usefulness of the algorithms and the AR systems that employ them.
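    The shadow-minimizing merge of reference data can be illustrated with a per-pixel median over captures taken at different times, under the assumption that shadows darken different regions in each capture. This is a simplified stand-in for the algorithm developed in the thesis:

```python
import numpy as np

# Three reference captures of the same scene; each has a shadow in a
# different place because they were taken at different times of day
clean = np.full((6, 6), 1.0)          # shadow-free ground truth
caps = [clean.copy() for _ in range(3)]
caps[0][0:2, :] *= 0.3                # morning shadow
caps[1][2:4, :] *= 0.3                # noon shadow
caps[2][4:6, :] *= 0.3                # evening shadow

# Per-pixel median: at every pixel at most one capture is shadowed,
# so the median recovers the unshadowed intensity
merged = np.median(np.stack(caps), axis=0)
print(np.allclose(merged, clean))  # True
```

The same principle scales to real reference imagery, provided the captures are registered to a common viewpoint first, which is exactly what the repeatable-motion dataset above makes possible.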

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges imposed by robotic systems, especially the requirement for intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks optimistically ahead and looks forward to future challenges and new developments.

    Advances in Mechanical Systems Dynamics 2020

    The fundamentals of mechanical system dynamics were established before the beginning of the industrial era. The 18th century was a very important time for science and was characterized by the development of classical mechanics. This development progressed in the 19th century, when new, important applications related to industrialization were found and studied. The development of computers in the 20th century revolutionized mechanical system dynamics through numerical simulation. We are now in the midst of the fourth industrial revolution. Mechanical systems are increasingly integrated with electrical, fluidic, and electronic systems, and the industrial environment has become characterized by the cyber-physical systems of Industry 4.0. Within this framework, the state of the art is represented by integrated mechanical systems supported by accurate dynamic models able to predict their dynamic behavior. Therefore, mechanical systems dynamics will play a central role in the forthcoming years. This Special Issue aims to disseminate the latest research findings and ideas in the field of mechanical systems dynamics, with particular emphasis on novel trends and applications.