    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting that the approach is sound.
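    To make the data model concrete, the sketch below encodes a driver's traffic observation as an RDF graph using the W3C SOSA/SSN vocabulary that underpins the Semantic Sensor Web. This is a minimal illustration only: the namespace, URIs and property values are hypothetical, and the paper's actual observation model may differ.

```python
# Minimal sketch: a human-generated traffic observation as a SOSA/SSN
# observation, built with rdflib. All example URIs are hypothetical.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/connected-car/")  # hypothetical namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

obs = EX["obs/42"]
g.add((obs, RDF.type, SOSA.Observation))
# The driver, via the vehicle's HMI, acts as the "sensor" producing the value.
g.add((obs, SOSA.madeBySensor, EX["driver/alice"]))
g.add((obs, SOSA.observedProperty, EX["property/trafficCongestion"]))
g.add((obs, SOSA.hasFeatureOfInterest, EX["roadSegment/A6-km12"]))
g.add((obs, SOSA.hasSimpleResult, Literal("heavy")))
g.add((obs, SOSA.resultTime,
       Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))

# Serialize to Turtle, ready to publish to a semantic sensor service.
print(g.serialize(format="turtle"))
```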

    Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
    Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
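    Surveys in this area typically benchmark learned predictors against simple physics-based baselines. The sketch below implements one such baseline, constant-velocity extrapolation; the function name, sampling interval and example data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def constant_velocity_predict(track: np.ndarray, horizon: int,
                              dt: float = 0.4) -> np.ndarray:
    """Extrapolate a 2D trajectory assuming constant velocity.

    track:   (T, 2) array of observed x/y positions, oldest first.
    horizon: number of future steps to predict.
    dt:      time step between samples, in seconds.
    Returns a (horizon, 2) array of predicted positions.
    """
    # Estimate velocity from the last two observed positions.
    velocity = (track[-1] - track[-2]) / dt
    steps = np.arange(1, horizon + 1).reshape(-1, 1)  # (horizon, 1)
    return track[-1] + steps * dt * velocity

# Example: a pedestrian walking in a straight line at 1.25 m/s.
observed = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
print(constant_velocity_predict(observed, horizon=3))
# -> [[1.5 0.] [2.0 0.] [2.5 0.]]
```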

    Sim2real and Digital Twins in Autonomous Driving: A Survey

    Safety and cost are two important concerns in the development of autonomous driving technologies. From academic research to commercial applications of autonomous vehicles, sufficient simulation and real-world testing are required. In general, large-scale testing is conducted in a simulation environment and the learned driving knowledge is then transferred to the real world, so adapting driving knowledge learned in simulation to reality becomes a critical issue. However, the virtual simulation world differs from the real world in many aspects, such as lighting, textures, vehicle dynamics, and agents' behaviors, which makes it difficult to bridge the gap between the virtual and real worlds. This gap is commonly referred to as the reality gap (RG). In recent years, researchers have explored various approaches to address the reality gap, which can be broadly classified into two categories: transferring knowledge from simulation to reality (sim2real) and learning in digital twins (DTs). In this paper, we consider solutions based on sim2real and DT technologies and review important applications and innovations in the field of autonomous driving. We present the state of the art from the perspectives of algorithms, models, and simulators, and trace the development process from sim2real to DTs. We also illustrate the far-reaching effects that the development of sim2real and DTs has had on autonomous driving.
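    Domain randomization is one widely used sim2real technique of the kind such a survey covers: by training under randomly perturbed simulator parameters, a policy is pushed to treat the real world as just one more variation. The sketch below shows the idea schematically; the parameter names and ranges are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical simulator parameters contributing to the reality gap.
# Ranges are illustrative; real setups randomize many more quantities
# (lighting, textures, sensor noise, vehicle dynamics, agent behavior).
PARAM_RANGES = {
    "sun_altitude_deg": (10.0, 90.0),
    "road_friction": (0.6, 1.0),
    "camera_noise_std": (0.0, 0.05),
    "vehicle_mass_kg": (1200.0, 1800.0),
}

def sample_domain() -> dict:
    """Draw one randomized simulator configuration (one 'domain')."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def train_with_domain_randomization(episodes: int) -> None:
    for episode in range(episodes):
        domain = sample_domain()
        # In a real pipeline these values would configure the simulator
        # and drive one training episode of the policy, e.g.:
        #   sim.configure(**domain); policy.update(sim.rollout())
        print(f"episode {episode}: {domain}")

train_with_domain_randomization(episodes=3)
```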

    Autonomous Vehicles: Open-Source Technologies, Considerations, and Development

    Autonomous vehicles are the culmination of advances in many areas such as sensor technologies, artificial intelligence (AI), networking, and more. This paper will introduce the reader to the technologies that build autonomous vehicles. It will focus on open-source tools and libraries for autonomous vehicle development, making it cheaper and easier for developers and researchers to participate in the field. The topics covered are as follows. First, we will discuss the sensors used in autonomous vehicles and summarize their performance in different environments, costs, and unique features. Then we will cover Simultaneous Localization and Mapping (SLAM) and algorithms for each modality. Third, we will review popular open-source driving simulators, a cost-effective way to train machine learning models and test vehicle software performance. We will then highlight embedded operating systems and the security and development considerations when choosing one. After that, we will discuss Vehicle-to-Vehicle (V2V) and Internet of Vehicles (IoV) communication, which are areas that fuse networking technologies with autonomous vehicles to extend their functionality. We will then review the five levels of vehicle automation, commercial and open-source Advanced Driving Assistance Systems, and their features. Finally, we will touch on the major manufacturing and software companies involved in the field, their investments, and their partnerships. These topics will give the reader an understanding of the industry, its technologies, active research, and the tools available for developers to build autonomous vehicles.
    Comment: 13 pages, 7 figures.
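    As a taste of the SLAM material covered, the sketch below shows the dead-reckoning step at the heart of most SLAM front ends: integrating wheel-odometry increments into a 2D pose estimate. It is a minimal illustration under assumed inputs, not code from any of the open-source stacks the paper reviews.

```python
import math

def integrate_odometry(pose, v, omega, dt):
    """Propagate a 2D pose (x, y, heading) by one odometry step.

    v:     forward speed in m/s
    omega: yaw rate in rad/s
    dt:    time step in seconds
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight for one second, then turn left while moving.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, v=2.0, omega=0.0, dt=0.1)
for _ in range(10):
    pose = integrate_odometry(pose, v=2.0, omega=0.5, dt=0.1)
print(pose)  # drift accumulates; SLAM corrects it with map/landmark updates
```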