
    Natural Language Instructions for Intuitive Human Interaction with Robotic Assistants in Field Construction Work

    The introduction of robots is widely considered to have significant potential to alleviate the worker shortages and stagnant productivity that afflict the construction industry. However, it is challenging to use fully automated robots on complex and unstructured construction sites. Human-Robot Collaboration (HRC) has shown promise in combining human workers' flexibility with robot assistants' physical abilities to jointly address the uncertainties inherent in construction work. When introducing HRC in construction, it is critical to recognize the importance of teamwork and supervision in field construction and to establish a natural and intuitive communication system between human workers and robotic assistants. Natural language-based interaction can enable intuitive and familiar communication with robots for human workers who are non-experts in robot programming. However, limited research has been conducted on this topic in construction. This paper proposes a framework that allows human workers to interact with construction robots through natural language instructions. The proposed method consists of three stages: Natural Language Understanding (NLU), Information Mapping (IM), and Robot Control (RC). In the NLU module, natural language instructions are fed to a language model that predicts a tag for each word. The IM module combines the NLU output with building component information to generate the final instructional output a robot needs to acknowledge and perform the construction task. A case study of drywall installation is conducted to evaluate the proposed approach. The results highlight the potential of natural language-based interaction to replicate the communication that occurs between human workers within the context of human-robot teams.
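    To make the three-stage pipeline concrete, the sketch below shows how per-word tags from an NLU step might be combined with building component information to produce a structured robot command. It is a minimal illustration under assumed names: the tag set, the component database, and all functions are hypothetical, not the paper's implementation.

```python
# Minimal sketch of the NLU -> IM -> RC pipeline described above.
# The tag set, component database, and function names are hypothetical
# illustrations, not the paper's implementation.

COMPONENT_DB = {  # building component information (assumed format)
    "drywall": {"id": "DW-01", "location": (4.2, 1.0, 0.0)},
}

def nlu(instruction):
    """Predict a tag for each word (stand-in for the learned language model)."""
    keywords = {"install": "ACTION", "drywall": "OBJECT", "north": "LOCATION"}
    return [(w, keywords.get(w.lower(), "O")) for w in instruction.split()]

def information_mapping(tagged):
    """Map NLU tags plus component data to the final instructional output."""
    action = next(w for w, t in tagged if t == "ACTION")
    target = next(w for w, t in tagged if t == "OBJECT")
    return {"action": action.lower(), "component": COMPONENT_DB[target.lower()]}

def robot_control(command):
    """Stand-in for dispatching the structured instruction to the robot."""
    comp = command["component"]
    print(f"Robot: {command['action']} {comp['id']} at {comp['location']}")

robot_control(information_mapping(nlu("Install the drywall on the north wall")))
```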

    Enhancing Perceived Safety in Human–Robot Collaborative Construction Using Immersive Virtual Environments

    Advances in robotics now permit humans to work collaboratively with robots. However, humans often feel unsafe working alongside robots. Our knowledge of how to help humans overcome this issue is limited by two challenges. First, it is difficult, expensive, and time-consuming to prototype robots and set up the varied work situations needed to conduct studies in this area. Second, we lack strong theoretical models to predict and explain perceived safety and its influence on human–robot work collaboration (HRWC). To address these issues, we introduce the Robot Acceptance Safety Model (RASM) and employ immersive virtual environments (IVEs) to examine the perceived safety of working on tasks alongside a robot. Results from a between-subjects experiment conducted in an IVE show that separation of work areas between robots and humans increases perceived safety by promoting team identification and trust in the robot. In addition, the more participants felt it was safe to work with the robot, the more willing they were to work alongside the robot in the future.
    Funding: University of Michigan Mcubed Grant: Virtual Prototyping of Human-Robot Collaboration in Unstructured Construction Environments. Peer Reviewed.
    https://deepblue.lib.umich.edu/bitstream/2027.42/145620/1/You et al. forthcoming in AutCon.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/145620/4/You et al. 2018.pdf (Published Version)

    Plane Registration Leveraged by Global Constraints for Context‐Aware AEC Applications

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/97513/1/j.1467-8667.2012.00795.x.pdf

    Integrated Information Modeling And Visual Simulation Of Engineering Operations Using Dynamic Augmented Reality Scene Graphs

    This paper describes research that investigated the applicability of 3D Augmented Reality (AR) in visualizing dynamic engineering operations. The research resulted in the design and development of ARVISCOPE, a general-purpose, high-level AR animation scripting language, and ROVER, a mobile computing hardware framework. When used together, ARVISCOPE and ROVER can create 3D AR animations of arbitrary length and complexity at the operations level of detail. ARVISCOPE takes advantage of advanced Global Positioning System (GPS) and 3D orientation tracking technologies to accurately track a user's spatial context while georeferencing superimposed dynamic 3D graphics in an augmented environment. In achieving the research objectives, major technical challenges such as accurate registration, automated occlusion handling, and dynamic scene construction and manipulation were encountered and successfully addressed. This paper describes the methodology and technical details of how scene graphs enable the process of constructing and updating an augmented scene representing a dynamic engineering operation. © 2011 The authors
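    As a rough illustration of the scene-graph mechanism described above, the sketch below updates a viewer node's pose in a simple scene graph from GPS and orientation readings each frame, so that georeferenced virtual objects stay registered. All class and function names are hypothetical, this is not ARVISCOPE's actual scripting interface, and the flat-earth GPS conversion is a deliberate simplification.

```python
# Hypothetical sketch of keeping an AR scene graph georeferenced from
# GPS + orientation tracking. Names and the crude projection are illustrative;
# this is not ARVISCOPE's API.
import math
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str
    position: tuple = (0.0, 0.0, 0.0)   # east, north, up in meters
    heading_deg: float = 0.0            # rotation about the up axis
    children: list = field(default_factory=list)

def gps_to_local(lat, lon, origin):
    """Crude equirectangular projection to a local east-north frame."""
    lat0, lon0 = origin
    east = (lon - lon0) * 111_320.0 * math.cos(math.radians(lat0))
    north = (lat - lat0) * 110_540.0
    return (east, north, 0.0)

def update_viewer(viewer, gps_fix, orientation_deg, origin):
    """Per-frame re-registration of the viewer node from tracking data."""
    viewer.position = gps_to_local(*gps_fix, origin)
    viewer.heading_deg = orientation_deg

# Build a tiny scene: a georeferenced excavator plus a tracked viewer node.
root = SceneNode("site")
root.children.append(SceneNode("excavator", position=(12.0, 5.0, 0.0)))
viewer = SceneNode("viewer")
root.children.append(viewer)

# One animation frame: the user moved, so the viewer pose is updated while
# the excavator keeps its georeferenced coordinates.
update_viewer(viewer, gps_fix=(42.2936, -83.7166), orientation_deg=90.0,
              origin=(42.2930, -83.7170))
print(viewer.position, viewer.heading_deg)
```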

    Enabling Discovery-Based Learning In Construction Using Telepresent Augmented Reality

    Construction engineering students often complain about the lack of engagement and interaction with the learning environment. Nevertheless, many instructors still rely on traditional teaching methods, including chalkboards, handouts, and computer presentations that are often filled with many words and few visual elements. Research shows that these teaching techniques are considered almost obsolete by many students, especially those who are visual learners or team workers. Moreover, the influence of visual and social media has changed students' perceptions of how instructional materials should be presented in a classroom setting. This paper presents an innovative pedagogical tool that uses remote videotaping, augmented reality (AR), and ultra-wide band (UWB) locationing to bring live video of remote construction jobsites into the classroom, create an intuitive interface for students to interact with objects in the video scenes, and visually deliver location-aware instructional materials to them. © 2012 Elsevier B.V.
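    One piece of such a tool, delivering location-aware instructional material from a UWB position fix, can be sketched as a simple nearest-zone lookup. The zone table, file names, and distance threshold below are assumptions for illustration, not the system's actual data or API.

```python
# Hypothetical sketch: choosing instructional material from a UWB position fix.
# Zone coordinates, file names, and the distance threshold are illustrative.
import math

ZONES = {
    "foundation": {"center": (3.0, 4.0), "material": "foundation_formwork.pdf"},
    "steel_erection": {"center": (15.0, 9.0), "material": "steel_connections.mp4"},
}

def material_for_position(x, y, radius=5.0):
    """Return content for the jobsite zone nearest the tracked position,
    or None if the position is not close to any zone."""
    name, zone = min(ZONES.items(),
                     key=lambda item: math.dist((x, y), item[1]["center"]))
    if math.dist((x, y), zone["center"]) <= radius:
        return zone["material"]
    return None

print(material_for_position(2.5, 3.0))   # -> foundation_formwork.pdf
```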

    Scalable Algorithm for Resolving Incorrect Occlusion in Dynamic Augmented Reality Engineering Environments

    Augmented reality (AR) offers significant potential in construction, manufacturing, and other engineering disciplines that employ graphical visualization to plan and design their operations. Because real-world objects are introduced into the visualization, fewer virtual models have to be deployed to create a realistic visual output, which directly translates into less time and effort required to create, render, manipulate, manage, and update the three-dimensional (3D) virtual contents (CAD model engineering) of the animated scene. At the same time, using the existing layout of the land or plant as the background of the visualization significantly alleviates the need to collect data about the surrounding environment prior to creating the final visualization, while providing visually convincing representations of the processes being studied. In an AR animation, virtual and real objects must be simultaneously managed and accurately displayed to the user to create a visually convincing illusion of their coexistence and interaction. A critical challenge impeding this objective is incorrect occlusion, which manifests itself when real objects in an AR scene partially or wholly block the view of virtual objects. In the presented research, a new AR occlusion handling system based on depth-sensing algorithms and frame buffer manipulation techniques was designed and implemented. The algorithm is capable of resolving incorrect occlusion in dynamic AR environments in real time using depth-sensing equipment such as laser detection and ranging (LADAR) devices, and can be integrated into any mobile AR platform that allows a user to navigate freely and observe a dynamic AR scene from any vantage point.
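    The heart of the occlusion test can be sketched as a per-pixel depth comparison: wherever the sensed real-world depth is nearer than the virtual geometry, the camera pixel is kept, so the real object correctly hides the virtual one. The numpy version below is a simplified stand-in for the frame-buffer manipulation the paper describes, with all names assumed for illustration.

```python
# Simplified per-pixel occlusion resolution via depth comparison. The real
# system manipulates GPU frame/depth buffers; this numpy version only
# illustrates the logic.
import numpy as np

def composite(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Keep the camera pixel wherever the sensed depth (e.g., from LADAR)
    is nearer than the virtual object's depth."""
    real_in_front = real_depth < virtual_depth              # occlusion mask
    return np.where(real_in_front[..., None], camera_rgb, virtual_rgb)

# Toy 2x2 frame: a real object at depth 1.0 occludes the virtual object
# (depth 2.0) in the top row only.
camera = np.zeros((2, 2, 3), dtype=np.uint8)                # camera image
virtual = np.full((2, 2, 3), 255, dtype=np.uint8)           # virtual render
real_depth = np.array([[1.0, 1.0], [3.0, 3.0]])
virtual_depth = np.full((2, 2), 2.0)
print(composite(camera, real_depth, virtual, virtual_depth)[:, :, 0])
# -> [[  0   0]        (real object occludes the virtual one)
#     [255 255]]       (virtual object drawn in front)
```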
