
    Intelligent computational techniques and virtual environment for understanding cerebral visual impairment patients

    Cerebral Visual Impairment (CVI) is a medical area concerned with the effects of brain damage on the visual field (VF). People with CVI cannot construct a complete 3-dimensional view of what they see through their eyes, and therefore have difficulties with mobility and exhibit behaviours that others find hard to understand. One branch of Artificial Intelligence (AI) is the simulation of behaviour by building computational models that help to explain how people solve problems or why they behave in a certain way. This project describes a novel intelligent system that simulates the navigation problems faced by people with CVI, helping relatives, friends, and ophthalmologists of CVI patients understand more about the difficulties patients face in navigating their everyday environment. The navigation simulation system is implemented in the Unity3D game engine, and virtual scenes of different living environments are created with Unity's modelling tools. The avatar's vision in the virtual environment is implemented using a camera provided by the game engine. Given the visual field chart of a CVI patient, the system automatically creates a filter (mask) that mimics the visual defect and places it in front of the avatar's visual field. Filters are created by extracting, classifying, and converting the symbols marking defective areas in the visual field chart into numerical values, which are then converted into textures that mask the vision. Each numerical value represents a level of transparency or opacity according to the severity of the visual defect in that region; the filters thus represent vision masks. Unity3D's physics support is used to represent the VF defects as structures of rays, where the length of each ray depends on the defect's numerical value: greater values (a greater percentage of opacity) are represented by shorter rays, while smaller values (a greater percentage of transparency) are represented by longer rays. Together, the ray lengths form the vision map (how far the patient can see). Navigation algorithms based on the generated rays have been developed to enable the avatar to move around given virtual environments. The avatar relies on the generated vision map and exhibits different behaviours to simulate the navigation problems of real patients; its navigation behaviour differs from patient to patient according to their defects. An experiment navigating virtual environments (scenes) with an HTC Vive headset was conducted using different scenarios, each combining a different VF defect with a different scene. The experiment simulated patient navigation in virtual environments with static objects (rooms) and in virtual environments with moving objects. The participants' actions (avoid/bump) matched the avatar's behaviour under the same scenarios. The system enables a CVI patient's parents and relatives to better understand what the patient experiences, and helps specialists and educators take into account the difficulties each patient faces when designing and developing appropriate educational programs for each individual patient.
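    The mapping described above can be sketched in a few lines: defect severities from the chart become mask opacities, and ray lengths vary inversely with opacity. The grid values, the 0-100 severity scale, and the 10 m maximum range below are illustrative assumptions, not data from the project.

```python
import numpy as np

# Hypothetical 4x4 grid of defect severities extracted from a visual
# field chart (0 = healthy region, 100 = completely blind region).
severity = np.array([
    [  0,  10,  60, 100],
    [  0,   0,  40,  90],
    [  5,   0,  20,  70],
    [  0,   0,   0,  30],
], dtype=float)

# Mask opacity (texture alpha) grows with severity:
# 0.0 = fully transparent, 1.0 = fully opaque.
alpha = severity / 100.0

# Ray length shrinks as opacity grows: opaque regions get short rays,
# transparent regions get long rays (scaled to an assumed 10 m range).
MAX_RANGE = 10.0
ray_length = MAX_RANGE * (1.0 - alpha)
```

    The resulting `ray_length` grid plays the role of the vision map: a healthy region keeps the full range, a fully blind region collapses to zero.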

    Image-Based Flexible Endoscope Steering

    Manually steering the tip of a flexible endoscope to navigate through an endoluminal path relies on the physician’s dexterity and experience. In this paper we present the realization of a robotic flexible endoscope steering system that uses the endoscopic images to control the tip orientation towards the direction of the lumen. Two image-based control algorithms are investigated: one based on optical flow and the other based on image intensity. Both are evaluated using simulations in which the endoscope was steered through the lumen. The RMS distance to the lumen center was less than 25% of the lumen width. An experimental setup was built using a standard flexible endoscope, and the image-based control algorithms were used to actuate the wheels of the endoscope for tip steering. Experiments were conducted in an anatomical model to simulate gastroscopy. The image intensity-based algorithm was capable of accurately steering the endoscope tip through an endoluminal path from the mouth to the duodenum. Compared to manual control, the robotically steered endoscope performed 68% better in terms of keeping the lumen centered in the image.
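    The paper does not give its controller in detail, but the core idea of intensity-based lumen steering can be sketched: the lumen usually appears as the darkest part of an endoscopic image, so the tip is steered toward the centroid of the darkest pixels. The function name, the percentile threshold, and the centroid rule below are assumptions for illustration.

```python
import numpy as np

def lumen_direction(image, dark_fraction=0.1):
    """Return (dx, dy) pixel offset from image centre toward the darkest
    region, as a hypothetical steering signal for the endoscope tip.

    `image` is a 2-D grayscale array; `dark_fraction` selects which
    fraction of the darkest pixels counts as "lumen" (assumed tuning).
    """
    h, w = image.shape
    # Intensity threshold at the requested dark percentile.
    thresh = np.percentile(image, dark_fraction * 100)
    ys, xs = np.nonzero(image <= thresh)
    # Offset of the dark-region centroid from the image centre;
    # a controller would bend the tip to drive this offset to zero.
    dx = xs.mean() - (w - 1) / 2.0
    dy = ys.mean() - (h - 1) / 2.0
    return dx, dy
```

    Driving `(dx, dy)` to zero keeps the lumen centered in the image, which is exactly the metric the paper reports the robotic system improving.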

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation scheme using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of representing the entire 3D formation as a convex hull projected along a desired path that the group has to follow. This approach provides a collision-free solution and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
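    The convex-hull representation above can be illustrated with a minimal 2-D sketch: compute the hull of the formation members' projected positions, then test whether an obstacle point falls inside it (a potential collision). This is a generic monotone-chain hull plus a containment test, not the paper's actual MPC formulation; all names and coordinates are assumptions.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, q):
    """True if q lies inside the CCW hull -- i.e. a potential collision
    between the formation's footprint and an obstacle point q."""
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        if (b[0]-a[0])*(q[1]-a[1]) - (b[1]-a[1])*(q[0]-a[0]) < 0:
            return False
    return True
```

    Sliding such a hull along the desired path and keeping obstacles outside it is, at a high level, how a convex-hull footprint yields a collision-free corridor for the whole group.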

    Learning Deployable Navigation Policies at Kilometer Scale from a Single Traversal

    Model-free reinforcement learning has recently been shown to be effective at learning navigation policies from complex image input. However, these algorithms tend to require large amounts of interaction with the environment, which can be prohibitively costly to obtain on robots in the real world. We present an approach for efficiently learning goal-directed navigation policies on a mobile robot from only a single coverage traversal of recorded data. The navigation agent learns an effective policy over a diverse action space in a large heterogeneous environment consisting of more than 2 km of travel, through buildings and outdoor regions that collectively exhibit large variations in visual appearance, self-similarity, and connectivity. We compare pretrained visual encoders that enable precomputation of visual embeddings, achieving a throughput of tens of thousands of transitions per second at training time on a commodity desktop computer and allowing agents to learn from millions of trajectories of experience in a matter of hours. We propose multiple forms of computationally efficient stochastic augmentation to enable the learned policy to generalise beyond these precomputed embeddings, and demonstrate successful deployment of the learned policy on the real robot without fine tuning, despite environmental appearance differences at test time. The dataset and code required to reproduce these results and apply the technique to other datasets and robots are made publicly available at rl-navigation.github.io/deployable
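    The stochastic-augmentation idea above can be sketched as follows: because the visual embeddings are precomputed and fixed, cheap per-transition perturbations (here additive Gaussian noise plus random feature dropout) keep the policy from overfitting to the exact embedding set. This is an illustrative stand-in, not the paper's specific augmentations; the parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_embedding(z, noise_std=0.1, drop_prob=0.1):
    """Cheap stochastic augmentation of a precomputed visual embedding.

    Adds Gaussian noise and randomly zeroes features, applied fresh at
    each training transition so the policy sees a slightly different
    embedding every time. Parameter defaults are assumed values.
    """
    z = np.asarray(z, dtype=float)
    noise = rng.normal(0.0, noise_std, size=z.shape)
    keep = rng.random(z.shape) >= drop_prob   # 1 = keep, 0 = drop
    return (z + noise) * keep
```

    Because the augmentation touches only a small vector rather than the raw image, it adds almost nothing to the tens-of-thousands-of-transitions-per-second training throughput the paper reports.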

    Symbolic representation of scenarios in Bologna airport on virtual reality concept

    This paper is part of a larger project, the Retina Project, which focuses on reducing the workload of air traffic control officers (ATCOs) by exploiting recent technological advances such as virtual reality. The work studies the different situational-awareness conditions that occur daily at Bologna Airport. One scenario with good visibility, dominated by sunshine, was analysed, along with two scenarios with poor visibility, dominated by rain and fog. From the visibility study across the three scenarios, we conclude that the overlay must be displayed at a constant size, regardless of the aircraft's position, to remain readable by the controller, and that the frame and the flight strip should be coloured in a highly visible colour (such as red) for better monitoring by the ATCO.
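    The constant-size-overlay conclusion above corresponds to a standard trick in 3-D/VR rendering: under a perspective projection, apparent size falls off roughly as 1/distance, so scaling a world-space label linearly with its distance from the viewer cancels that and keeps its on-screen size constant. A minimal sketch, where the function name and the reference distance are assumptions:

```python
def overlay_scale(distance, reference_distance=50.0):
    """World-space scale factor that keeps an overlay label at a
    constant apparent (on-screen) size under perspective projection.

    `reference_distance` is the assumed distance at which the label's
    natural scale of 1.0 is defined; doubling the distance doubles the
    world-space scale, so the projected size stays the same.
    """
    return distance / reference_distance
```

    Applying this scale each frame to the frame and flight-strip overlays would keep them readable whether the aircraft is on a near taxiway or at the far end of the runway.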