5,273 research outputs found

    Flight Deck Automation Support with Dynamic 4D Trajectory Management for ACAS: AUTOFLY-AID

    The AUTOFLY-Aid project aims to develop and demonstrate novel automation support algorithms and tools that assist the flight crew in flight-critical collision avoidance using “dynamic 4D trajectory management”. The automation support system is envisioned to address the primary shortcomings of TCAS and to aid the pilot, through add-on avionics, head-up displays, and reality-augmentation devices, in dynamically evolving collision avoidance scenarios. The main innovative concepts to be developed by the AUTOFLY-Aid project are: a) design and development of mathematical models of the full composite airspace picture from the flight deck’s perspective, as seen/measured/informed by an aircraft flying in SESAR 2020; b) design and development of a dynamic trajectory planning algorithm that can generate, in real time (on the order of seconds), flyable (i.e., dynamically and performance-wise feasible) alternative trajectories across the evolving stochastic composite airspace picture, which includes new conflicts, blunder risks, and terrain and weather limitations; and c) development and testing of the Collision Avoidance Automation Support System on a Boeing 737 NG FNPT II flight simulator with synthetic vision and reality augmentation, providing the flight crew with a quantified, visual understanding of collision risks in terms of time, directions, and countermeasures.
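The abstract does not disclose the planner's internals; the minimal sketch below only illustrates the general idea of generating "flyable" alternative trajectories in real time: fan out candidate headings, discard those that violate a turn-rate (performance) limit within the planning horizon, and keep those whose predicted position stays clear of known intruders. All performance numbers, separation thresholds, and function names are illustrative assumptions, not the project's actual models.

```python
import math
from dataclasses import dataclass

# Hypothetical performance limits (the project's real aircraft models are not public here).
MAX_BANK_DEG = 30.0      # bank-angle limit bounds the achievable turn rate
G = 9.81                 # m/s^2
KT_TO_MS = 0.514444      # knots -> m/s

@dataclass
class State4D:
    x: float       # east position, m
    y: float       # north position, m
    alt_ft: float  # altitude (unused in this lateral-only sketch)
    t: float       # time, s

def max_turn_rate(speed_kt: float) -> float:
    """Coordinated-turn relation: omega = g * tan(bank) / V (rad/s)."""
    v = speed_kt * KT_TO_MS
    return G * math.tan(math.radians(MAX_BANK_DEG)) / v

def candidate_headings(current_hdg_deg: float, step: int = 10, spread: int = 60):
    """Fan of alternative headings left and right of the current track."""
    return [current_hdg_deg + d for d in range(-spread, spread + 1, step)]

def is_flyable(hdg_change_deg: float, speed_kt: float, horizon_s: float) -> bool:
    """A heading change is feasible if it fits the turn-rate limit within the horizon."""
    omega_deg = math.degrees(max_turn_rate(speed_kt))
    return abs(hdg_change_deg) <= omega_deg * horizon_s

def predict(state: State4D, hdg_deg: float, speed_kt: float, horizon_s: float) -> State4D:
    """Straight-line position prediction along the candidate heading."""
    v = speed_kt * KT_TO_MS
    rad = math.radians(hdg_deg)
    return State4D(state.x + v * horizon_s * math.sin(rad),
                   state.y + v * horizon_s * math.cos(rad),
                   state.alt_ft, state.t + horizon_s)

def clear_of_conflict(p: State4D, intruders, sep_m: float = 9260.0) -> bool:
    """Check a ~5 NM lateral separation against each intruder position (x, y)."""
    return all(math.hypot(p.x - ix, p.y - iy) >= sep_m for ix, iy in intruders)

def plan(state, hdg_deg, speed_kt, intruders, horizon_s=30.0):
    """Return flyable headings that predict conflict-free states, ranked by deviation."""
    ok = []
    for h in candidate_headings(hdg_deg):
        if not is_flyable(h - hdg_deg, speed_kt, horizon_s):
            continue
        if clear_of_conflict(predict(state, h, speed_kt, horizon_s), intruders):
            ok.append(h)
    return sorted(ok, key=lambda h: abs(h - hdg_deg))
```

A real planner would evaluate full 4D trajectories against stochastic conflict, terrain, and weather constraints; this sketch keeps only the feasibility-filter-then-rank structure the abstract implies.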

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments on mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and to transition seamlessly from the physical world to a mixed world with digital entities. These MAR systems use MAR devices to support user experiences that provide universal access to digital content. Although several MAR systems have been developed over the past 20 years, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and context; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state of the art, and discuss important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers.

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatics methods and techniques, such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Enhancing user experience and safety in the context of automated driving through uncertainty communication

    Operators of highly automated driving systems may exhibit behaviour characteristic of overtrust due to insufficient awareness of automation fallibility. Consequently, situation awareness in critical situations is reduced, and safe driving performance following emergency takeovers is impeded. Previous research has indicated that conveying system uncertainties may alleviate these issues. However, existing approaches require drivers to attend to the uncertainty information with focal attention, likely resulting in missed changes when drivers are engaged in non-driving-related tasks. This research project expands on existing work on uncertainty communication in the context of automated driving. Specifically, it aims to investigate the implications of conveying uncertainties under consideration of non-driving-related tasks and, based on the outcomes, to develop and evaluate an uncertainty display that enhances both user experience and driving safety. In a first step, the impact of visually conveying uncertainties was investigated with respect to workload, trust, monitoring behaviour, non-driving-related tasks, takeover performance, and situation awareness. For this, an anthropomorphic visual uncertainty display located in the instrument cluster was developed. While the hypothesised benefits for trust calibration and situation awareness were confirmed, the results indicate that visually conveying uncertainties leads to increased perceived effort due to a higher frequency of monitoring glances. Building on these findings, peripheral awareness displays were explored as a means of conveying uncertainties without the need for focused attention, so as to reduce monitoring glances. As a prerequisite for developing such a display, a systematic literature review was conducted to identify evaluation methods and criteria, which were then consolidated into a comprehensive framework.
Grounded in this framework, a peripheral awareness display for uncertainty communication was developed and subsequently compared with the initially proposed anthropomorphic visual uncertainty display in a driving simulator study. Eye-tracking and subjective workload data indicate that the peripheral awareness display reduces monitoring effort relative to the visual display, while driving performance and trust data show that the benefits of uncertainty communication are maintained. Further, this research project addresses the implications of increasing the functional detail of uncertainty information. Results of a driving simulator study indicate that workload in particular should be considered when increasing the functional detail of uncertainty information. Expanding upon this approach, an augmented reality display concept was developed, and a set of visual variables was explored in a forced-choice sorting task to assess their ordinal characteristics. Changes in colour hue and animation-based variables in particular received high preference ratings and were ordered consistently from low to high uncertainty. This research project has contributed a series of novel insights and ideas to the field of human factors in automated driving. It confirmed that conveying uncertainties improves trust calibration and situation awareness, but highlighted that using a visual display lessens the positive effects. Addressing this shortcoming, a peripheral awareness display was designed using a dedicated evaluation framework. Compared with the previously employed visual display, it decreased monitoring glances and, consequently, perceived effort. Further, an augmented reality-based uncertainty display concept was developed to minimise the workload increments associated with increases in the functional detail of uncertainty information.
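As an illustration of the reported ordinal ranking of visual variables, the sketch below maps a normalised uncertainty value to a colour hue sweeping from green (low) to red (high). The specific hue range and mapping are assumptions made for illustration; the thesis does not specify the exact encoding used in its displays.

```python
import colorsys

def uncertainty_to_rgb(u: float) -> tuple:
    """Map normalised uncertainty u in [0, 1] to an RGB colour.

    Hue sweeps from green (120 deg, low uncertainty) down to red (0 deg,
    high uncertainty), a conventional low-to-high ordering consistent with
    the consistent colour-hue ranking reported in the sorting task.
    The endpoints and full-saturation choice are illustrative assumptions.
    """
    u = min(max(u, 0.0), 1.0)              # clamp to the valid range
    hue = (1.0 - u) * (120.0 / 360.0)      # 1/3 of the hue circle (green) -> 0 (red)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Animation-based variables (e.g., pulsing rate) could be encoded analogously by mapping u to a frequency instead of a hue.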

    On uncertainty propagation in image-guided renal navigation: Exploring uncertainty reduction techniques through simulation and in vitro phantom evaluation

    Image-guided interventions (IGIs) entail the use of imaging to augment or replace direct vision during therapeutic interventions, with the overall goal of providing effective treatment in a less invasive manner, as an alternative to traditional open surgery, while reducing patient trauma and shortening post-procedure recovery time. IGIs rely on pre-operative images, surgical tracking and localization systems, and intra-operative images to provide correct views of the surgical scene. Pre-operative images are used to generate patient-specific anatomical models that are then registered to the patient using the surgical tracking system, and often complemented with real-time, intra-operative images. IGI systems are subject to uncertainty from several sources, including surgical instrument tracking/localization uncertainty, model-to-patient registration uncertainty, user-induced navigation uncertainty, as well as the uncertainty associated with the calibration of various surgical instruments and intra-operative imaging devices (e.g., a laparoscopic camera) instrumented with surgical tracking sensors. All these uncertainties impact the overall targeting accuracy, which represents the error associated with navigating a surgical instrument to a specific target to be treated under image guidance provided by the IGI system. Therefore, understanding the overall uncertainty of an IGI system is paramount to the outcome of the intervention, as procedure success entails achieving accuracy tolerances specific to individual procedures. This work has focused on studying the navigation uncertainty, along with techniques to reduce it, for an IGI platform dedicated to image-guided renal interventions.
We constructed life-size, patient-specific replica kidney models from pre-operative images using 3D printing and tissue-emulating materials, and conducted experiments to characterize the uncertainty of both optical and electromagnetic surgical tracking systems, the uncertainty associated with the virtual model-to-physical phantom registration, as well as the uncertainty associated with live augmented reality (AR) views of the surgical scene achieved by enhancing the pre-procedural model and tracked surgical instrument views with live video views acquired using a camera tracked in real time. To better understand the effects of the tracked instrument calibration, registration fiducial configuration, and tracked camera calibration on the overall navigation uncertainty, we conducted Monte Carlo simulations that enabled us to identify optimal configurations, which were subsequently validated experimentally using patient-specific phantoms in the laboratory. To mitigate the inherent accuracy limitations associated with the pre-procedural model-to-patient registration and their effect on the overall navigation, we also demonstrated the use of tracked video imaging to update the registration, enabling us to restore targeting accuracy to within its acceptable range. Lastly, we conducted several validation experiments using patient-specific kidney-emulating phantoms, with post-procedure CT imaging as reference ground truth, to assess the accuracy of AR-guided navigation in the context of in vitro renal interventions. This work helped answer key questions about uncertainty propagation in image-guided renal interventions and led to the development of key techniques and tools to help reduce and optimize the overall navigation/targeting uncertainty.
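The abstract does not include the simulation details; the sketch below illustrates the general Monte Carlo approach it describes: perturb fiducial localisations with Gaussian noise, recompute a least-squares rigid registration (Kabsch/SVD), and accumulate the resulting target registration error (TRE) at a point of interest. The fiducial geometry, noise level, and trial count are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det(R) = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def tre_monte_carlo(fiducials, target, sigma_mm=0.5, n_trials=2000):
    """Distribution of target registration error under Gaussian fiducial noise."""
    errs = np.empty(n_trials)
    for i in range(n_trials):
        # Simulate noisy intra-operative fiducial picks, then re-register to the model.
        noisy = fiducials + rng.normal(0.0, sigma_mm, fiducials.shape)
        R, t = rigid_register(noisy, fiducials)
        # TRE: how far the mapped target lands from its true position.
        errs[i] = np.linalg.norm((R @ target + t) - target)
    return errs

# Hypothetical geometry (mm): four fiducials spanning a kidney-sized volume,
# with the target roughly centred among them.
fids = np.array([[0, 0, 0], [80, 0, 0], [0, 80, 0], [0, 0, 60]], dtype=float)
target = np.array([40.0, 40.0, 30.0])
errs = tre_monte_carlo(fids, target)
```

Sweeping the fiducial configuration or noise level in such a loop is one plausible way to identify the "optimal configurations" the abstract mentions; the study's actual simulation may differ.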

    Exploration of smart infrastructure for drivers of autonomous vehicles

    The connection between vehicles and infrastructure is an integral part of providing autonomous vehicles with information about their environment. Autonomous vehicles need to be safe, and users need to trust their driving decisions. When smart infrastructure information is integrated into the vehicle, the driver needs to be informed in an understandable manner about what the smart infrastructure has detected. Nevertheless, interactions that benefit from smart infrastructure have not been the focus of research, leading to knowledge gaps in the integration of smart infrastructure information into the vehicle. For example, it is unclear how information from two complex systems can be presented and, if decisions are made, how these can be explained. Enriching vehicle data with information from the infrastructure opens unexplored opportunities. Smart infrastructure provides vehicles with information to predict traffic flow and traffic events. Additionally, it has information about traffic events several kilometers away and thus enables a look ahead at a traffic situation that is not in the driver's immediate view. We argue that this smart infrastructure information can be used to enhance the driving experience. To achieve this, we explore designing novel interactions, providing warnings and visualizations about information that is out of the driver's view, and offering explanations for the causes of the vehicle's changed driving behavior. This thesis explores the possibilities of smart infrastructure information, with a focus on the highway. The first part establishes a design space for 3D in-car augmented reality applications that profit from smart infrastructure information. Through the input of two focus groups and a literature review, use cases are investigated that can be introduced into the vehicle's interaction interface and that, among others, rely on environment information.
From those, a design space that can be used to design novel in-car applications is derived. The second part explores out-of-view visualizations before and during takeover requests to increase situation awareness. In three studies, different visualizations for out-of-view information were implemented in 2D, stereoscopic 3D, and augmented reality. Our results show that these visualizations improve situation awareness of critical events at larger distances during takeover request situations. In the third part, explanations are designed for situations in which the vehicle drives unexpectedly for unknown reasons. Since smart infrastructure could provide connected vehicles with out-of-view or cloud information, the vehicle's driving maneuver might remain unclear to the driver. Therefore, we explore the needs of drivers in those situations and derive design recommendations for an interface that displays the cause of the unexpected driving behavior. This thesis answers questions about the integration of environment information in vehicles. Three important aspects are explored, which are essential to consider when implementing use cases with smart infrastructure in mind. It enables the design of novel interactions, provides insights into how out-of-view visualizations can improve drivers' situation awareness, and explores unexpected driving situations and the design of explanations for them. Overall, we have shown how infrastructure and connected-vehicle information can be introduced into the vehicle's user interface and how new technology, such as augmented reality glasses, can be used to improve the driver's perception of the environment.

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.