
    Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

    Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collisions becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN learns twice as fast as a standard deep Q-network, and that models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects.
    Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics.
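    A minimal sketch of the two mechanisms the abstract names, assuming the standard dueling aggregation Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a') and the double-Q target r + γ·Q_target(s', argmax_a Q_online(s',a)); the function names and toy numbers below are illustrative, not taken from the paper (whose network is a deep CNN over RGB input):

    ```python
    import numpy as np

    def dueling_q_values(value, advantages):
        """Combine a scalar state value V(s) with per-action advantages A(s, a).

        Subtracting the mean advantage keeps V and A identifiable.
        """
        return value + advantages - advantages.mean()

    def double_q_target(reward, gamma, q_online_next, q_target_next):
        """Double-Q target: the online net picks the action, the target net scores it."""
        a_star = np.argmax(q_online_next)            # action selection (online network)
        return reward + gamma * q_target_next[a_star]  # action evaluation (target network)

    # Toy example: one state, three discrete steering actions.
    q = dueling_q_values(1.5, np.array([0.2, -0.1, -0.1]))   # -> [1.7, 1.4, 1.4]
    y = double_q_target(1.0, 0.9, np.array([0.5, 2.0]), np.array([1.0, 0.8]))  # -> 1.72
    ```

    Decoupling action selection from action evaluation in this way is what reduces the over-estimation bias of a single Q-network.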

    A Web-Based Distributed Virtual Educational Laboratory

    The evolution and cost of measurement equipment, together with continuous training and distance learning, make it difficult to provide a complete set of updated workbenches to every student. For a preliminary familiarization and experimentation with instrumentation and measurement procedures, the use of virtual equipment is often considered more than sufficient from the didactic point of view, while the hands-on approach with real instrumentation and measurement systems still remains necessary to complete and refine the student's practical expertise. The creation and distribution of workbenches in networked computer laboratories therefore becomes attractive and convenient. This paper describes the specification and design of a geographically distributed system based on commercially available standard components.

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in vivo by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. Revision includes description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
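    The vehicle-in-the-loop pattern the abstract describes can be sketched as a loop that reads the real vehicle's pose from motion capture and renders the exteroceptive sensors synthetically at that pose. All class and method names below (MocapClient, Renderer, etc.) are illustrative placeholders, not the actual FlightGoggles API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Pose:
        position: tuple     # (x, y, z) in metres
        orientation: tuple  # quaternion (w, x, y, z)

    class MocapClient:
        """Stand-in for a motion-capture feed reporting the real vehicle's pose."""
        def latest_pose(self) -> Pose:
            return Pose((0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 0.0))

    class Renderer:
        """Stand-in for the photorealistic renderer: pose in, synthetic image out."""
        def render_camera(self, pose: Pose):
            # A real renderer would rasterize the photogrammetric scene here.
            return [[0] * 640 for _ in range(480)]  # placeholder 640x480 frame

    def vehicle_in_the_loop_step(mocap: MocapClient, renderer: Renderer):
        pose = mocap.latest_pose()            # dynamics generated by the real vehicle
        image = renderer.render_camera(pose)  # exteroception generated in silico
        return pose, image
    ```

    The split is the point of the design: aerodynamics, motor mechanics, and battery behavior never need to be modelled because the physical vehicle produces them, while perception still runs against rendered imagery.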

    Design Creativity: Future Directions for Integrated Visualisation

    The Architecture, Engineering and Construction (AEC) sectors are facing unprecedented challenges, not just with the increased complexity of projects per se, but also with design-related integration. This requires stakeholders to radically re-think not only their existing business models (and the thinking that underpins them), but also the technological challenges and skills required to deliver these projects. Whilst opponents will no doubt cite that this is nothing new, as the sector as a whole has always had to respond to change, the counter to this is that design ‘creativity’ is now much more dependent on integration from day one. Given this, collaborative processes embedded in Building Information Modelling (BIM) models have been proffered as a panacea to embrace this change and deliver streamlined integration. The veracity of design teams’ “project data” is increasingly becoming paramount, not only for the coordination of design, processes, engineering services, fabrication, construction, and maintenance, but, more importantly, to facilitate ‘true’ project integration and interchange, the actualisation of which will require firm consensus and commitment. This Special Issue envisions some of these issues, challenges and opportunities (from a future-landscape perspective) by highlighting a raft of concomitant factors, including: technological challenges, design visualisation and integration, future digital tools, new and anticipated operating environments, and the training requirements needed to deliver these aspirations. A fundamental part of this Special Issue’s ‘call’ was to capture best practice in order to demonstrate how design, visualisation and delivery processes (and technologies) affect the finished product, viz: design outcome, design procedures, production methodologies and construction implementation. In this respect, the use of virtual environments is now particularly effective at supporting the design and delivery processes.
    In summary, therefore, this Special Issue presents nine papers from leading scholars, industry and contemporaries. These papers provide an eclectic (but cognate) representation of AEC design visualisation and integration, which not only uncovers new insight and understanding of these challenges and solutions, but also provides new theoretical and practical signposts for future research.