
    Intent prediction of vulnerable road users for trusted autonomous vehicles

    This study investigated how future autonomous vehicles could earn greater trust from the vulnerable road users (such as pedestrians and cyclists) they will interact with in urban traffic environments. It focused on understanding the behaviour of such road users at a deeper level by predicting their future intentions using only vehicle-based sensors and AI techniques. The findings showed that the personal and body-language attributes of vulnerable road users, in addition to their past motion trajectories and the physical attributes of the environment, led to more accurate predictions of their intended actions.
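    As a hedged illustration of the kind of feature fusion the abstract describes, the sketch below combines trajectory-history, body-language, and environment features in a generic classifier on synthetic data; the feature names and the scikit-learn model are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: fusing trajectory history with body-pose and context cues to
# classify pedestrian intent (e.g. "will cross" vs "will not cross").
# All feature names, the label rule, and the classifier choice are illustrative
# assumptions, not the model used in the cited study.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Past motion trajectory: last 8 (x, y) positions flattened -> 16 values.
trajectory = rng.normal(size=(n, 16))
# Body-language attributes: e.g. head orientation, torso angle, gait phase.
pose = rng.normal(size=(n, 3))
# Environmental attributes: e.g. distance to curb, approaching-vehicle speed.
context = rng.normal(size=(n, 2))

X = np.hstack([trajectory, pose, context])                   # simple feature fusion
y = (trajectory[:, -2] + 0.5 * pose[:, 0] > 0).astype(int)   # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```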

    Digital Interaction and Machine Intelligence

    This book is open access, which means that you have free and unlimited access. It presents the proceedings of the 9th Machine Intelligence and Digital Interaction (MIDI) Conference. Significant progress in the development of artificial intelligence (AI) and its wider use in many interactive products are rapidly transforming further areas of our lives, giving rise to various new social phenomena. Many countries have been making efforts to understand these phenomena and to find answers on how to steer the development of artificial intelligence so that it supports the common good of people and societies. These attempts require interdisciplinary action, covering not only the scientific disciplines involved in the development of artificial intelligence and human-computer interaction but also close cooperation between researchers and practitioners. For this reason, the main goal of the MIDI conference, held on 9-10 December 2021 as a virtual event, was to integrate two until recently independent fields of research in computer science: broadly understood artificial intelligence and human-technology interaction.

    Human-Machine Teamwork: An Exploration of Multi-Agent Systems, Team Cognition, and Collective Intelligence

    One of the major ways in which humans overcome complex challenges is teamwork. When humans share knowledge and information, and cooperate and coordinate towards shared goals, they overcome their individual limitations and achieve better solutions to difficult problems. The rise of artificial intelligence provides a unique opportunity to study teamwork between humans and machines, and potentially to discover insights about cognition and collaboration that can set the foundation for a world where humans work with, rather than against, artificial intelligence to solve problems that neither humans nor artificial intelligence can solve alone. To better understand human-machine teamwork, it is important to understand human-human teamwork (humans working together) and multi-agent systems (how artificial intelligence interacts as an agent that is part of a group) in order to identify the characteristics that make humans and machines good teammates. This lets us approach human-machine teamwork from the perspective of the human as well as the perspective of the machine. Thus, to reach a more accurate understanding of how humans and machines can work together, we examine human-machine teamwork through a series of studies. In this dissertation, we conducted four studies and developed two theoretical models. First, we focused on human-machine cooperation: we paired human participants with reinforcement learning agents in two game-theory scenarios in which individual and collective interests conflict, making cooperation easy to detect. We show that different reinforcement learning models exhibit different levels of cooperation, and that humans are more likely to cooperate if they believe they are playing with another human rather than with a machine. Second, we focused on human-machine coordination: we again paired humans with machines and had the resulting teams play a game-theory scenario that emphasizes convergence towards a mutually beneficial outcome. We also analyzed survey responses from the participants to highlight how many of the principles of human-human teamwork can still emerge in human-machine teams even though communication is not possible. Third, we reviewed the collective intelligence and prediction markets literature to develop a model for a prediction market that enables humans and machines to work together to improve predictions. The model supports artificial intelligence operating as a peer in the prediction market as well as a complementary aggregator. Fourth, we reviewed the team cognition and collective intelligence literature to develop a model for teamwork that integrates team cognition, collective intelligence, and artificial intelligence. This model provides a new foundation for thinking about teamwork beyond the forecasting domain. Next, we used a simulation of emergency response management to compare the teamwork of a variety of human-machine teams against human-human and machine-machine teams. Lastly, we ran another study that used a prediction market to examine the impact that having AI operate as a participant, rather than as an aggregator, has on the predictive capacity of the market. Our research will help identify which principles of human teamwork are applicable to human-machine teamwork, the role artificial intelligence can play in enhancing collective intelligence, and the effectiveness of human-machine teamwork compared to artificial intelligence operating alone. In the process, we expect to produce a substantial body of empirical results that can lay the groundwork for future research on human-machine teamwork.
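    As a hedged illustration of the first study's setup, the sketch below trains a tabular Q-learning agent in an iterated prisoner's dilemma against a tit-for-tat opponent standing in for the human partner; the payoffs, hyperparameters, and opponent policy are assumptions for illustration, not the dissertation's experimental design.

```python
# Hedged sketch: a tabular Q-learning agent in an iterated prisoner's dilemma,
# playing against a tit-for-tat opponent that stands in for the human partner.
# Payoff matrix, learning rates, and opponent policy are illustrative assumptions.
import random

COOPERATE, DEFECT = 0, 1
# Payoff to the agent for (agent_action, partner_action)
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

q = {}                        # q[(state, action)]; state = partner's last action
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose(state):
    """Epsilon-greedy action selection over the two possible moves."""
    if random.random() < eps:
        return random.choice((COOPERATE, DEFECT))
    return max((COOPERATE, DEFECT), key=lambda a: q.get((state, a), 0.0))

my_last, their_last = COOPERATE, COOPERATE
for _ in range(20_000):
    action = choose(their_last)
    partner_action = my_last                 # tit-for-tat: copy the agent's previous move
    reward = PAYOFF[(action, partner_action)]
    # Q-learning update toward the best action in the next observed state
    best_next = max(q.get((partner_action, a), 0.0) for a in (COOPERATE, DEFECT))
    old = q.get((their_last, action), 0.0)
    q[(their_last, action)] = old + alpha * (reward + gamma * best_next - old)
    my_last, their_last = action, partner_action

# Inspect whether cooperation dominates after the partner cooperated.
print({k: round(v, 2) for k, v in q.items()})
```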

    Dynamic Switching State Systems for Visual Tracking

    This work addresses the problem of how to capture the dynamics of maneuvering objects for visual tracking. Towards this end, the perspective of recursive Bayesian filters and the perspective of deep learning approaches for state estimation are considered, and their functional viewpoints are brought together.
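    As a hedged point of reference for the recursive Bayesian side of this work, the sketch below runs a constant-velocity Kalman filter, the textbook recursive Bayesian filter for tracking; the matrices and noise levels are assumptions for illustration, and the sketch does not model the switching mechanism the work itself proposes.

```python
# Hedged sketch: one predict/update cycle of a constant-velocity Kalman filter,
# the classic recursive Bayesian filter used in visual tracking. Matrices and
# noise levels are illustrative assumptions, not the model in the cited work.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],          # state transition for state [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # only the (x, y) position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                  # process noise covariance
R = 0.5 * np.eye(2)                   # measurement noise covariance

def kalman_step(x, P, z):
    # Predict the next state and its uncertainty
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([2.1, 1.1]), np.array([2.9, 1.4])]:
    x, P = kalman_step(x, P, z)
print("estimated state [x, y, vx, vy]:", x.round(2))
```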

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances of the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed, and discussed. In this way, we summarize the state of the art in AI methods, models, and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    MIFTel: a multimodal interactive framework based on temporal logic rules

    Human-computer and multimodal interaction are increasingly used in everyday life. Machines are able to gather more from the surrounding world, assisting humans in different application areas. In this context, the correct processing and management of the signals provided by the environment is crucial for structuring the data. Different sources and acquisition times can be exploited to improve recognition results. On the basis of these assumptions, we propose a multimodal system that exploits Allen's temporal logic combined with a prediction method. The main objective is to correlate user events with system reactions. After post-processing the incoming data from different signal sources (RGB images, depth maps, sounds, proximity sensors, etc.), the system manages the correlations between recognition/detection results and events in real time to create an interactive environment for the user. To increase recognition reliability, a predictive model is also associated with the proposed method. The modularity of the system allows fully dynamic development and upgrades with custom modules. Finally, a comparison with other similar systems is presented, underlining the high flexibility and robustness of the proposed event management method.
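    As a hedged illustration of the temporal reasoning the framework builds on, the sketch below implements a few of Allen's thirteen interval relations over simple (start, end) pairs; the event representation and the example events are assumptions for illustration, not MIFTel's actual rule engine.

```python
# Hedged sketch: a handful of Allen's thirteen interval relations, evaluated on
# plain (start, end) tuples. The event representation is an illustrative
# assumption and does not reproduce MIFTel's rule engine.
def before(a, b):     # a ends strictly before b starts
    return a[1] < b[0]

def meets(a, b):      # a ends exactly when b starts
    return a[1] == b[0]

def overlaps(a, b):   # a starts first and ends inside b
    return a[0] < b[0] < a[1] < b[1]

def during(a, b):     # a lies strictly inside b
    return b[0] < a[0] and a[1] < b[1]

# Example (hypothetical): a hand gesture detected at t=[2, 4] and a spoken
# command recognized at t=[3, 6] are fused when their intervals overlap.
gesture, speech = (2.0, 4.0), (3.0, 6.0)
if overlaps(gesture, speech):
    print("gesture overlaps speech -> treat them as one multimodal event")
```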