Exploration of smart infrastructure for drivers of autonomous vehicles
The connection between vehicles and infrastructure is an integral part of providing autonomous vehicles with information about their environment.
Autonomous vehicles need to be safe, and users need to trust their driving decisions. When smart infrastructure information is integrated into the vehicle, the driver needs to be informed in an understandable manner about what the smart infrastructure has detected.
Nevertheless, interactions that benefit from smart infrastructure have not been the focus of research, leading to knowledge gaps in the integration of smart infrastructure information into the vehicle. For example, it is unclear how information from two complex systems can be presented and, when decisions are made, how these can be explained.
Enriching the data of vehicles with information from the infrastructure opens unexplored opportunities.
Smart infrastructure provides vehicles with information to predict traffic flow and traffic events.
Additionally, it has information about traffic events several kilometers away and thus enables a look ahead at traffic situations that are not in the driver's immediate view.
We argue that this smart infrastructure information can be used to enhance the driving experience. To achieve this, we explore designing novel interactions, providing warnings and visualizations about information that is out of the view of the driver, and offering explanations for the cause of changed driving behavior of the vehicle.
This thesis focuses on exploring the possibilities of smart infrastructure information with a focus on the highway.
The first part establishes a design space for 3D in-car augmented reality applications that benefit from smart infrastructure information. Through the input of two focus groups and a literature review, use cases are investigated that can be introduced into the vehicle's interaction interface and that, among other inputs, rely on environment information. From those, a design space is derived that can be used to design novel in-car applications.
The second part explores out-of-view visualizations before and during take-over requests to increase situation awareness. In three studies, different visualizations for out-of-view information are implemented in 2D, stereoscopic 3D, and augmented reality. Our results show that the visualizations improve situation awareness of critical events at larger distances during take-over request situations.
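Although the thesis does not specify an algorithm, the timing logic behind such out-of-view warnings can be sketched as follows. The function names, the 30-second lead time, and the constant-speed assumption are illustrative choices, not details from the studies.

```python
# Hedged sketch: when to surface an out-of-view warning ahead of a possible
# take-over request, assuming the infrastructure reports the distance to an
# event and the vehicle knows its own speed. All names and thresholds are
# illustrative assumptions, not from the thesis.

def seconds_to_event(distance_m: float, speed_mps: float) -> float:
    """Time until the vehicle reaches the reported event at constant speed."""
    if speed_mps <= 0:
        return float("inf")
    return distance_m / speed_mps

def should_warn(distance_m: float, speed_mps: float,
                lead_time_s: float = 30.0) -> bool:
    """Show the out-of-view visualization once the event is within the chosen
    lead time, so the driver can build situation awareness early."""
    return seconds_to_event(distance_m, speed_mps) <= lead_time_s

# Example: accident reported 2 km ahead at 130 km/h (~36.1 m/s) is roughly
# 55 s away, so with a 30 s lead time no warning is shown yet.
```

The lead time is the key design parameter here: too short and it replicates the "immediate view" problem, too long and the warning loses urgency.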
In the third part, explanations are designed for situations in which the vehicle drives unexpectedly due to unknown reasons. Since smart infrastructure could provide connected vehicles with out-of-view or cloud information, the driving maneuver of the vehicle might remain unclear to the driver. Therefore, we explore the needs of drivers in those situations and derive design recommendations for an interface which displays the cause for the unexpected driving behavior.
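As an illustration only (the event codes and message wording below are invented, not taken from the thesis), an explanation interface of this kind could map infrastructure-reported causes to driver-facing messages:

```python
# Hedged sketch of an explanation interface for unexpected driving behavior:
# the vehicle changes its driving due to infrastructure information the driver
# cannot see, and a short causal explanation is selected for display.
# Event codes and wording are illustrative assumptions.

EXPLANATIONS = {
    "congestion_ahead": "Slowing down: the infrastructure reports congestion "
                        "ahead, outside your field of view.",
    "accident_ahead":   "Changing lanes: an accident on the current lane was "
                        "reported by the infrastructure.",
    "weather_warning":  "Reducing speed: road sensors report slippery "
                        "conditions further along the route.",
}

def explain(event_code: str) -> str:
    """Return a driver-facing explanation for a maneuver caused by
    out-of-view or cloud-provided infrastructure information."""
    return EXPLANATIONS.get(
        event_code,
        "Adjusting driving behavior based on connected-infrastructure data.")
```

The fallback message matters: the thesis's point is precisely that a maneuver with no visible cause should never go unexplained.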
This thesis answers questions about the integration of environment information in vehicles.
Three important aspects are explored, which are essential to consider when implementing use cases with smart infrastructure in mind. The thesis enables the design of novel interactions, provides insights into how out-of-view visualizations can improve drivers' situation awareness, and explores unexpected driving situations and the design of explanations for them.
Overall, we have shown how infrastructure and connected-vehicle information can be introduced into the vehicle's user interface and how new technology such as augmented reality glasses can be used to improve the driver's perception of the environment.
Towards a legal definition of machine intelligence: the argument for artificial personhood in the age of deep learning.
The paper dissects the intricacies of Automated Decision Making (ADM) and calls for refining the current legal definition of AI when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. ADM relies upon a plethora of algorithmic approaches and has already found a wide range of applications in marketing automation, social networks, computational neuroscience, robotics, and other fields. Our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm; this can take various shapes and thus yield different answers to key issues regarding agency. The paper offers a fresh look at the concept of "Machine Intelligence", which exposes certain vulnerabilities in its current legal interpretation. Most importantly, it further helps us to explore whether the argument for "artificial personhood" holds any water. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction and can thus serve as a point of reference for outlining distinct rights and obligations of the programmer and the consumer; driverless cars are used as a case study to explore the several layers of human and machine interaction. These different degrees of automation reflect various levels of complexity in the underlying algorithms, and pose very interesting questions in terms of agency and dynamic tasks carried out by software agents. Part 2 further discusses the intricate nature of the underlying algorithms and the artificial neural networks (ANN) that implement them, and considers how one can interpret and utilize observed patterns in acquired data.
Is "artificial personhood" a sufficient legal response to highly sophisticated machine learning techniques employed in decision making that successfully emulate or even enhance human cognitive abilities?
A study into the layers of automated decision-making: emergent normative and legal aspects of deep learning
The paper dissects the intricacies of automated decision making (ADM) and calls for refining the current legal definition of artificial intelligence (AI) when pinpointing the role of algorithms in the advent of ubiquitous computing, data analytics and deep learning. Whilst coming up with a toolkit to measure algorithmic determination in automated/semi-automated tasks may prove a tedious task for the legislator, our main aim here is to explain how a thorough understanding of the layers of ADM could be a good first step in this direction: AI operates on a formula based on several degrees of automation employed in the interaction between the programmer, the user, and the algorithm. The paper offers a fresh look at AI, which exposes certain vulnerabilities in its current legal interpretation. To highlight this argument, the analysis proceeds in two parts: Part 1 strives to provide a taxonomy of the various levels of automation that reflects distinct degrees of human–machine interaction. Part 2 further discusses the intricate nature of AI algorithms and considers how one can utilize observed patterns in acquired data. Finally, the paper explores the legal challenges that result from user empowerment and the requirement for data transparency.
A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction
Appropriate trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication. However, a comprehensive understanding of the field is lacking due to the diversity of perspectives arising from various backgrounds that influence it and the lack of a single definition for appropriate trust. To investigate this topic, this paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it. We also propose a Belief, Intentions, and Actions (BIA) mapping to study commonalities and differences in the concepts related to appropriate trust by (a) describing the existing disagreements on defining appropriate trust, and (b) providing an overview of the concepts and definitions related to appropriate trust in AI from the existing literature. Finally, the challenges identified in studying appropriate trust are discussed, and observations are summarized as current trends, potential gaps, and research opportunities for future work. Overall, the paper provides insights into the complex concept of appropriate trust in human-AI interaction and presents research opportunities to advance our understanding of this topic.
Providing and assessing intelligible explanations in autonomous driving
Intelligent vehicles with automated driving functionalities provide many benefits, but also instigate serious concerns around human safety and trust. While the automotive industry has devoted enormous resources to realising vehicle autonomy, there exist uncertainties as to whether the technology would be widely adopted by society. Autonomous vehicles (AVs) are complex systems, and in challenging driving scenarios, they are likely to make decisions that could be confusing to end-users. As a way to bridge the gap between this technology and end-users, the provision of explanations is generally being put forward. While explanations are considered to be helpful, this thesis argues that explanations must also be intelligible (as obligated by Article 12 of the GDPR) to the intended stakeholders, and should make causal attributions in order to foster confidence and trust in end-users. Moreover, the methods for generating these explanations should be transparent for easy audit. To substantiate this argument, the thesis proceeds in four steps: First, we adopted a mixed-method approach (in a user study) to elicit passengers' requirements for effective explainability in diverse autonomous driving scenarios. Second, we explored different representations, data structures and driving data annotation schemes to facilitate intelligible explanation generation and general explainability research in autonomous driving. Third, we developed transparent algorithms for post-hoc explanation generation. These algorithms were tested within a collision risk assessment case study and an AV navigation case study, using the Lyft Level5 dataset and our new SAX dataset, a dataset that we have introduced for AV explainability research.
Fourth, we deployed these algorithms in an immersive physical simulation environment and assessed (in a lab study) the impact of the generated explanations on passengers' perceived safety while varying the prediction accuracy of an AV's perception system and the specificity of the explanations. The thesis concludes by providing recommendations for the realisation of more effective explainable autonomous driving and by proposing a future research agenda.
How Transparency Measures Can Attenuate Initial Failures of Intelligent Decision Support Systems
Owing to high functional complexity, trust plays a critical role in the adoption of intelligent decision support systems (DSS). Especially failures in initial usage phases might endanger trust, since users are yet to assess the system’s capabilities over time. Since such initial failures are unavoidable, it is crucial to understand how providers can inform users about system capabilities to rebuild user trust. Using an online experiment, we evaluate the effects of recurring explanations and initial tutorials as transparency measures on trust. We find that recurring explanations are superior to initial tutorials in establishing trust in intelligent DSS. However, recurring explanations are only as effective as tutorials, or the combination of both tutorials and recurring explanations, in rebuilding trust after initial failures occurred. Our results provide empirical insights for the design of transparency mechanisms for intelligent DSS, especially those with high underlying algorithmic complexity or high damage potential.
Space Applications of Automation, Robotics and Machine Intelligence Systems (ARAMIS), phase 2. Volume 1: Telepresence technology base development
The field of telepresence is defined, and overviews are provided of the capabilities that are now available and of those that will be required to support a NASA telepresence effort. Included are an investigation of NASA's plans and goals with regard to telepresence, an extensive literature search for materials relating to relevant technologies, a description of these technologies and their state of the art, and projections for advances in these technologies over the next decade. Several space projects are examined in detail to determine what capabilities are required of a telepresence system in order to accomplish various tasks, such as servicing and assembly. The key operational and technological areas are identified, conclusions and recommendations are made for further research, and an example developmental program is presented, leading to an operational telepresence servicer.
Calibrating trust between humans and artificial intelligence systems
As machines become increasingly intelligent, they become capable of operating with greater independence from their users. However, appropriate use of these autonomous systems depends on appropriate trust from their users. A lack of trust towards an autonomous system will likely lead to the user doubting the capabilities of the system, potentially to the point of disuse. Conversely, too much trust in a system may lead to the user overestimating the capabilities of the system, potentially resulting in errors which could have been avoided with appropriate supervision. Thus, appropriate trust is trust which is calibrated to reflect the true performance capabilities of the system. The calibration of trust towards autonomous systems is a research area of increasing popularity, as more and more intelligent machines are introduced to modern workplaces.
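One common way to make "trust calibrated to true performance" concrete is to compare a system's stated confidence with its actual correctness. The sketch below is a standard expected-calibration-error style construction, not a metric taken from this thesis.

```python
# Hedged sketch: calibration as alignment between stated confidence and
# actual correctness, averaged over equal-width confidence bins.
# This is a generic ECE-style measure, assumed here for illustration.

def calibration_gap(confidences, correct, n_bins=5):
    """Average |accuracy - mean confidence| per confidence bin, weighted by
    bin size. 0 means the system is perfectly calibrated."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    gap = 0.0
    for b in bins:
        if not b:
            continue
        acc = sum(ok for _, ok in b) / len(b)
        avg_conf = sum(c for c, _ in b) / len(b)
        gap += len(b) / total * abs(acc - avg_conf)
    return gap

# A system that claims 80% confidence and is right 4 times out of 5
# has a calibration gap of 0 in that bin.
```

By analogy, a user whose reliance tracks such a well-calibrated confidence signal is placing "appropriate" trust in the sense defined above.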
This thesis contains three studies which examine trust towards autonomous technologies. In our first study, in Chapter 2, we used qualitative research methods to explore how participants characterise their trust towards different online technologies. In focus groups, participants discussed a variety of factors which they believed were important when using digital services. We had a particular interest in how they perceived social media platforms, as these services rely upon users’ continued sharing of their personal information. In our second study, in Chapter 3, using our initial findings we created a human-computer interaction experiment in which participants collaborated with an Autonomous Image Classifier System. In this experiment, we were able to examine the ways that participants placed trust in the classifier during different types of system performance. We also investigated whether users’ trust could be better calibrated by providing different displays of System Confidence Information, to help convey the system’s decision making. In our final study, in Chapter 4, we built directly upon the findings of Chapter 3 by creating an updated version of our human-computer interaction experiment. We provided participants with another cue of system decision making, Gradient-weighted Class Activation Mapping, and investigated whether this cue could promote greater trust towards the classifier. Additionally, we examined whether these cues can improve participants’ subjective understanding of the system’s decision making, as a way of exploring how to improve the interpretability of these systems.
This research contributes to our current understanding of calibrating users’ trust towards autonomous systems, and may be particularly useful when designing Autonomous Image Classifier Systems. While our results were inconclusive, we did find some support for users preferring the more complicated interfaces we provided. Users also reported greater understanding of the classifier’s decision making when provided with the Gradient-weighted Class Activation Mapping cue. Further research may clarify whether this cue is an appropriate method of visualising the decision-making of Autonomous Image Classifier Systems in real-world settings.
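The Gradient-weighted Class Activation Mapping (Grad-CAM) cue mentioned above weights a network's feature maps by their pooled gradients and applies a ReLU, highlighting image regions that supported the classifier's decision. A minimal plain-Python sketch of that computation on toy 2x2 maps follows; real use would take a CNN's per-channel activations and class-score gradients, which are simply given as lists here.

```python
# Hedged sketch of the Grad-CAM computation: heatmap = ReLU(sum_k w_k * A_k),
# where w_k is the global average of channel k's gradient map. Toy inputs;
# not the thesis's implementation.

def grad_cam(activations, gradients):
    """activations, gradients: lists of equally-sized 2D maps (one per
    channel). Returns the ReLU of the gradient-weighted sum of maps."""
    # Channel weight: global average pooling of that channel's gradients.
    weights = [sum(sum(row) for row in g) / (len(g) * len(g[0]))
               for g in gradients]
    rows, cols = len(activations[0]), len(activations[0][0])
    heatmap = [[0.0] * cols for _ in range(rows)]
    for w, a in zip(weights, activations):
        for i in range(rows):
            for j in range(cols):
                heatmap[i][j] += w * a[i][j]
    # ReLU: keep only regions with positive influence on the class score.
    return [[max(0.0, v) for v in row] for row in heatmap]
```

Overlaid on the input image, the resulting heatmap is the visual cue participants saw: a direct, if coarse, picture of where the classifier "looked".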
H2020 COVR FSTP LIAISON – D2.3 Academic publication featuring the future of robot governance.
Horizon 2020 (H2020) 779966: Effective Protection of Fundamental Rights in a pluralist world.