319 research outputs found

    HoME: a Household Multimodal Environment

    We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting. Comment: Presented at NIPS 2017's Visually-Grounded Interaction and Language Workshop.
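    Since the abstract describes HoME as OpenAI Gym-compatible, the standard Gym interaction loop should apply. Below is a minimal sketch of that loop; the environment ID "HoME-Navigation-v0" is a hypothetical placeholder, as the actual registered IDs depend on the HoME release.

    ```python
    import gym

    # HoME is described as OpenAI Gym-compatible, so the usual
    # reset/step loop applies. The environment ID below is a
    # hypothetical placeholder, not a confirmed HoME identifier.
    env = gym.make("HoME-Navigation-v0")

    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        # Random policy as a stand-in for a learned agent.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward

    env.close()
    print("episode return:", total_reward)
    ```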

    Deep Reinforcement Learning for Autonomous Inspection System

    The objective of the group is to investigate the application of Deep Reinforcement Learning to autonomously complete damage inspection of buildings after natural disasters. By using Deep Reinforcement Learning with Convolutional Neural Networks for image processing, the drone will be trained to navigate itself towards buildings and perform building inspection without scanning the whole structure. Deep Reinforcement Learning algorithms can be used to fly drones autonomously and perform inspections currently done by humans. The structural stability of a building is unknown when these inspections are performed, so this application would greatly reduce the risk to the individuals inspecting the structure and provide greater insight into its stability.
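    As a rough illustration of the pipeline described above (a convolutional network mapping camera frames to navigation actions, trained with deep reinforcement learning), here is a minimal sketch of a CNN Q-network in PyTorch. The frame size, action count, and layer sizes are assumptions for illustration, not details from the project.

    ```python
    import torch
    import torch.nn as nn

    class DroneQNetwork(nn.Module):
        """CNN mapping a camera frame to Q-values over discrete drone actions.

        The (3, 84, 84) input and the 6-action space (e.g. forward/back,
        left/right, up/down) are illustrative assumptions, not project details.
        """
        def __init__(self, n_actions: int = 6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                nn.Linear(512, n_actions),
            )

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(frame))

    # Greedy action selection from a single 84x84 RGB frame.
    net = DroneQNetwork()
    frame = torch.zeros(1, 3, 84, 84)
    action = net(frame).argmax(dim=1).item()
    ```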