
    Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

    Robots are required in many jobs. Jobs that involve tele-operation can be very challenging and often require reaching a destination quickly and with minimal collisions. To succeed in these jobs, human operators tele-operate a robot manually through a user interface. The design of the user interface, and of the information it provides, therefore becomes a critical element for the successful completion of robot tele-operation tasks. Effective and timely robot tele-navigation relies mainly on the intuitiveness of the interface and on the richness and presentation of the feedback given. This project investigated the use of both haptic and visual feedback in a user interface for robot tele-navigation. The aim was to overcome some of the limitations observed in state-of-the-art works, turning what is sometimes described as conflicting into an added value that improves tele-navigation performance. The key issue is to combine different human sensory modalities in a coherent way and to benefit from 3-D vision as well. The proposed approach was inspired by how visually impaired people use walking sticks to navigate. Haptic feedback can help a user comprehend distances to surrounding obstacles and the distribution of those obstacles. This was achieved by relying entirely on on-board range sensors and by processing their readings through a simple scheme that regulates the magnitude and direction of the environmental force feedback provided to the haptic device. A specific algorithm was also used to render the distribution of very close objects and provide appropriate touch sensations. Scene visualization was provided by the system and shown to the user coherently with the haptic sensation. Different visualization configurations, from multi-viewpoint observation to 3-D visualization, were proposed and rigorously assessed through experiments to understand the advantages of the proposed approach and the performance variations among different 3-D display technologies. Over twenty users participated in a usability study composed of two major experiments. The first experiment compared the proposed haptic-feedback strategy with a typical state-of-the-art approach and included testing with multi-viewpoint visual observation. The second experiment investigated the performance of the proposed haptic-feedback strategy when combined with three different stereoscopic-3D visualization technologies. The results were encouraging, showing good performance with the proposed approach and an improvement over literature approaches to haptic feedback in robot tele-operation. It was also demonstrated that 3-D visualization can be beneficial for robot tele-navigation and does not conflict with haptic feedback when properly aligned with it. Performance may vary across different 3-D visualization technologies, which is also discussed in the presented work
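
    The scheme described above maps on-board range readings to a single environmental force on the haptic device. The Python sketch below illustrates one plausible realization of that idea; the linear distance-to-force ramp, the influence radius, the force cap and all names are illustrative assumptions, not the thesis' actual algorithm.

    # Minimal sketch: turn range readings into one repulsive force vector for a
    # haptic device. Thresholds and the linear ramp are assumptions for illustration.
    import math

    MAX_FORCE_N = 3.0        # assumed force cap of the haptic device
    INFLUENCE_RADIUS_M = 1.5 # obstacles farther than this produce no feedback

    def environmental_force(ranges_m, bearings_rad):
        """Sum per-obstacle repulsive contributions into one (fx, fy) vector."""
        fx, fy = 0.0, 0.0
        for r, b in zip(ranges_m, bearings_rad):
            if r <= 0.0 or r >= INFLUENCE_RADIUS_M:
                continue
            # Magnitude grows as the obstacle gets closer (simple linear ramp).
            magnitude = MAX_FORCE_N * (1.0 - r / INFLUENCE_RADIUS_M)
            # Push away from the obstacle, i.e. opposite to its bearing.
            fx -= magnitude * math.cos(b)
            fy -= magnitude * math.sin(b)
        # Clamp the resultant so the device is never commanded beyond its limit.
        norm = math.hypot(fx, fy)
        if norm > MAX_FORCE_N:
            fx, fy = fx * MAX_FORCE_N / norm, fy * MAX_FORCE_N / norm
        return fx, fy

    # Example: one obstacle ahead at 0.5 m and one to the right at 1.0 m.
    print(environmental_force([0.5, 1.0], [0.0, -math.pi / 2]))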

    Interface Design for Sonobuoy System

    Modern sonar systems have greatly improved their sensor technology and processing techniques, but little effort has been put into display design for sonar data. The enormous amount of acoustic data presented by the traditional frequency-versus-time display can be overwhelming for a sonar operator to monitor and analyze. The recent emphasis placed on networked underwater warfare also requires the operator to create and maintain awareness of the overall tactical picture in order to improve overall effectiveness in communicating and sharing critical data. In addition to regular sonar tasks, sonobuoy system operators must manage the deployment of sonobuoys and ensure the proper functioning of deployed sonobuoys. This thesis examines an application of the Ecological Interface Design framework to the interface design of a sonobuoy system on board a maritime patrol aircraft. Background research for this thesis includes a literature review, interviews with subject matter experts, and an analysis of the decision-making process of sonar operators from an information processing perspective. A work domain analysis was carried out, which yielded a dual-domain model: the domain of sonobuoy management and the domain of tactical situation awareness address the two different aspects of the operator's work. Information requirements were drawn from the two models and provided a basis for the generation of various unique interface concepts. These concepts covered both the need to build a good tactical picture and the need to manage sonobuoys as physical resources; the latter requirement has generally been overlooked by previous sonobuoy interface designs. A number of interface concepts were further developed into an integrated display prototype for user testing. Demos created with the same prototype were also delivered to subject matter experts for their feedback. While the evaluation methods were subjective and limited in their ability to support solid comparisons with existing sonobuoy displays, positive results from both user testing and subject matter expert feedback indicated that the concepts developed here are intuitive to use and effective in communicating critical data and supporting the user's awareness of the simulated tactical events. Subject matter experts also acknowledged the potential for these concepts to be included in future research and development for sonobuoy systems. This project was funded by an Industrial Postgraduate Scholarship (IPS) from the Natural Sciences and Engineering Research Council of Canada (NSERC) and sponsored by Humansystems Inc. of Guelph, Ontario

    2020 NASA Technology Taxonomy

    This document is an update (new photos used) of the PDF version of the 2020 NASA Technology Taxonomy that will be available to download on the OCT Public Website. The updated 2020 NASA Technology Taxonomy, or "technology dictionary", uses a technology-discipline-based approach that realigns like technologies independently of their application within the NASA mission portfolio. This tool is meant to serve as a common, discipline-based communication tool across the agency and with its partners in other government agencies, academia, industry, and across the world

    Systems for Safety and Autonomous Behavior in Cars: The DARPA Grand Challenge Experience

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research

    Beyond visualization : designing interfaces to contextualize geospatial data

    Thesis (S.M.) by Samuel Luescher -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 71-74). The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context to support data analysis and knowledge building using part-digital, part-physical rendering. The main contribution of this thesis is a concrete application scenario and initial prototype of a "Designed Environment" where we can explore the relationship between the surface of Japan's islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the "Spatial Web", bringing us one step closer to a tangible "dashboard of the Earth."
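
    As an illustration of what importing geospatial data from a web source can involve, the short Python sketch below fetches a GeoJSON feature collection over HTTP and reduces it to point coordinates with their properties. The URL is a hypothetical placeholder, and the sketch is not the thesis' actual import pipeline.

    # Minimal sketch: load a GeoJSON feed and keep (lon, lat, properties) tuples
    # that a visualization layer could render. FEED_URL is a placeholder.
    import json
    from urllib.request import urlopen

    FEED_URL = "https://example.org/japan_seismicity.geojson"  # hypothetical source

    def load_point_features(url):
        with urlopen(url) as resp:
            collection = json.load(resp)
        points = []
        for feature in collection.get("features", []):
            geom = feature.get("geometry") or {}
            if geom.get("type") != "Point":
                continue  # keep only point geometries in this sketch
            lon, lat = geom["coordinates"][:2]
            points.append((lon, lat, feature.get("properties", {})))
        return points

    if __name__ == "__main__":
        for lon, lat, props in load_point_features(FEED_URL)[:5]:
            print(f"{lat:.3f}, {lon:.3f}: {props}")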

    2015 Oil Observing Tools: A Workshop Report

    Since 2010, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) have provided satellite-based pollution surveillance in United States waters to regulatory agencies such as the United States Coast Guard (USCG). These technologies provide agencies with useful information regarding possible oil discharges. Unfortunately, there has been confusion as to how to interpret the images collected by these satellites and other aerial platforms, which can generate misunderstandings during spill events. Remote sensor packages on aircraft and satellites have advantages and disadvantages vis-à-vis human observers, because they do not "see" features or surface oil the same way. In order to improve observation capabilities during oil spills, applicable technologies must be identified and then evaluated with respect to their advantages and disadvantages for the incident. In addition, differences between sensors (e.g., visual, IR, multispectral sensors, radar) and platform packages (e.g., manned/unmanned aircraft, satellites) must be understood so that reasonable choices can be made where applicable, and any data must be correctly interpreted for decision support. NOAA convened an Oil Observing Tools Workshop to focus on the above actions and to identify training gaps for oil spill observers and remote sensing interpretation, in order to improve future oil surveillance, observation, and mapping during spills. The Coastal Response Research Center (CRRC) assisted NOAA's Office of Response and Restoration (ORR) with this effort. The workshop was held on October 20-22, 2015 at NOAA's Gulf of Mexico Disaster Response Center in Mobile, AL. The expected outcome of the workshop was an improved understanding and greater use of technology to map and assess oil slicks during actual spill events. Specific workshop objectives included:
    • Identify new developments in oil observing technologies useful for real-time (or near real-time) mapping of spilled oil during emergency events.
    • Identify merits and limitations of current technologies and their usefulness to emergency response mapping of oil and reliable prediction of oil surface transport and trajectory forecasts. Current technologies include the traditional human aerial observer, unmanned aircraft surveillance systems, aircraft with specialized sensor packages, and satellite earth observing systems.
    • Assess training needs for visual observation (human observers with cameras) and sensor technologies (including satellites) to build skills and enhance proper interpretation for decision support during actual events

    Crowd-based cognitive perception of the physical world: Towards the internet of senses

    This paper introduces a possible architecture and discusses the research directions for the realization of the Cognitive Perceptual Internet (CPI), which is enabled by the convergence of wired and wireless communications, traditional sensor networks, mobile crowd-sensing, and machine learning techniques. The CPI concept stems from the fact that mobile devices, such as smartphones and wearables, are becoming an outstanding means for zero-effort world-sensing and digitalization thanks to their pervasive diffusion and the increasing number of embedded sensors. Data collected by such devices provide unprecedented insights into the physical world that can be inferred through cognitive processes, thus giving rise to a digital sixth sense. In this paper, we describe how the Internet can behave like a sensing brain, thus evolving into the Internet of Senses, with network-based cognitive perception and action capabilities built upon mobile crowd-sensing mechanisms. The new concept of the hyper-map is envisioned as an efficient geo-referenced repository of knowledge about the physical world. Such knowledge is acquired and augmented through heterogeneous sensors, multi-user cooperation, and distributed learning mechanisms. Furthermore, we indicate the possibility of accommodating proactive sensors, in addition to common reactive sensors such as cameras, antennas, thermometers, and inertial measurement units, by exploiting massive antenna arrays at millimeter-waves to enhance the perception capabilities of mobile terminals as well as the range of new applications. Finally, we distill some insights about the challenges arising in the realization of the CPI, corroborated by preliminary results, and we depict a futuristic scenario where the proposed Internet of Senses becomes a reality
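
    As a rough illustration of the hyper-map idea, the Python sketch below keeps a geo-referenced store in which crowd-sensed readings from many devices are aggregated per grid cell and can be queried by location. The grid resolution, schema, and averaging rule are illustrative assumptions, not the architecture proposed in the paper.

    # Minimal sketch of a geo-referenced, crowd-sensed knowledge store.
    # Cell size and the simple averaging rule are assumptions for illustration.
    from collections import defaultdict

    CELL_DEG = 0.01  # roughly 1 km grid cell at mid latitudes (assumed resolution)

    def cell_key(lat, lon):
        """Quantize a position into a grid-cell key."""
        return (round(lat / CELL_DEG), round(lon / CELL_DEG))

    class HyperMap:
        def __init__(self):
            # cell -> quantity -> running [sum, count] over contributing devices
            self._cells = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))

        def report(self, lat, lon, quantity, value):
            """Fold one crowd-sensed reading into the map."""
            acc = self._cells[cell_key(lat, lon)][quantity]
            acc[0] += value
            acc[1] += 1

        def query(self, lat, lon, quantity):
            """Return the aggregated estimate for a cell, or None if unknown."""
            acc = self._cells[cell_key(lat, lon)].get(quantity)
            return acc[0] / acc[1] if acc else None

    # Example: two phones report noise level near the same street corner.
    hm = HyperMap()
    hm.report(44.4940, 11.3430, "noise_dB", 62.0)
    hm.report(44.4941, 11.3429, "noise_dB", 66.0)
    print(hm.query(44.4940, 11.3430, "noise_dB"))  # -> 64.0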