5,131 research outputs found

    Safe Intelligent Driver Assistance System in V2X Communication Environments based on IoT

    Get PDF
    In the modern world, the power and speed of cars have increased steadily as traffic has continued to grow. At the same time, highway fatalities and injuries due to road incidents keep rising, so safety has become the first concern and the development of Driver Assistance Systems (DAS) a major issue. Numerous innovations, systems and technologies have been developed to improve road transportation and safety. Modern computer vision algorithms enable cars to understand the road environment with low miss rates. A number of Intelligent Transportation Systems (ITSs) and Vehicular Ad-Hoc Networks (VANETs) have been deployed in cities around the world. Recently, a new global paradigm known as the Internet of Things (IoT) has brought new ideas for updating the existing solutions. Vehicle-to-Infrastructure communication based on IoT technologies would be the next step in intelligent transportation towards the future Internet of Vehicles (IoV). The overall purpose of this research was to develop a scalable IoT solution for driver assistance that combines safety-relevant information for the driver from different types of in-vehicle sensors, in-vehicle DAS, vehicle networks and the driver's gadgets. The study reviews the evolution and state of the art of vehicle systems and evaluates existing ITSs, VANETs and DASs. It proposes a design approach for future transport systems that applies the IoT paradigm to transport safety applications, so that driver assistance can become part of the Internet of Vehicles (IoV). The research proposes the architecture of the Safe Intelligent DAS (SiDAS) based on IoT V2X communications, which combines different types of data from the available devices and vehicle systems, together with an IoT ARM structure for SiDAS, data flow diagrams and protocols. Several IoT system structures for vehicle-pedestrian and vehicle-vehicle collision prediction are presented as case studies for the flexible SiDAS framework architecture. The research demonstrates a significant increase in driver situation awareness when using IoT SiDAS, especially in NLOS conditions. Moreover, a time analysis accounting for IoT, Cloud, LTE and DSRC latency is provided for different collision scenarios, in order to evaluate the overall system latency and ensure applicability to real-time driver emergency notification. Experimental results demonstrate that the proposed SiDAS improves traffic safety.
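    The abstract quotes no concrete latency figures; as a rough illustration of how such an end-to-end notification budget can be totaled against a deadline, the following Python sketch uses per-hop latencies and a 200 ms deadline that are assumed placeholders, not values from the study.

        # Illustrative end-to-end latency budget for one emergency-notification path.
        # Every per-hop value and the deadline are assumed placeholders.
        HOPS_MS = {
            "in_vehicle_sensing": 20,    # sensor sampling and local pre-processing
            "dsrc_access_link": 10,      # DSRC roadside link (assumed)
            "lte_uplink": 50,
            "iot_cloud_processing": 40,  # broker plus collision-prediction service
            "lte_downlink": 50,
            "driver_notification": 30,   # HMI rendering / audio alert
        }

        def total_latency_ms(hops):
            """Sum the per-hop latencies of one notification path."""
            return sum(hops.values())

        def meets_deadline(hops, deadline_ms=200):
            """Check the budget against an assumed real-time notification deadline."""
            return total_latency_ms(hops) <= deadline_ms

        print(total_latency_ms(HOPS_MS), "ms total;",
              "within" if meets_deadline(HOPS_MS) else "over",
              "the assumed 200 ms deadline")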

    Design of a data-driven communication framework as personalized support for users of ADAS

    Get PDF
    Recently the automotive industry has made a huge leap forward in the development of Automated Driver Assistance Systems (ADAS), increasing the level of automation of the driving process. However, ADAS design does not provide any individual support to the driver, which results in a poor understanding of how the ADAS works and of its limitations. This kind of driver uncertainty about ADAS performance can erode the user's trust in the system and reduce the situations in which the system is actually used. This paper presents the design of a data-driven communication framework that uses historical and real-time vehicle data to support ADAS users. The framework aims to illustrate the ADAS capabilities and limitations and to suggest effective use of the system in real-time driving situations. This type of assistance can improve a driver's understanding of ADAS functionality and encourage its usage.

    A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry

    Full text link
    The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, several challenges, such as vehicle localization, are involved in the development of autonomous vehicles. A simple and secure algorithm for vehicle positioning is proposed herein that does not require massive modification of the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate other vehicles' positions, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are used to ensure the position accuracy of the HV. Determining the HV position minimizes the relative position variation between the HV and FV. Using photogrammetry, the distance between the FV or SL and the camera of the HV is calculated by measuring the image area occupied on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and FV, the positions of the FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant improvement. Experimental distance measurements validated the feasibility of the proposed scheme.
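    The photogrammetric range estimate follows from the pinhole camera model: an object of known physical size that occupies a smaller area on the image sensor is farther away. A minimal Python sketch, in which the focal length, tail-light area and occupied pixel area are assumed example values rather than figures from the paper:

        import math

        def distance_from_image_area(focal_px, real_area_m2, image_area_px2):
            """Pinhole-model range estimate from the image area occupied by an
            object of known physical area (e.g., a tail light or streetlight head):
            Z ~ f * sqrt(A_real / a_image), with the focal length f in pixels."""
            return focal_px * math.sqrt(real_area_m2 / image_area_px2)

        # Assumed example: a 0.02 m^2 tail light covering 400 px^2, focal length 1400 px.
        z = distance_from_image_area(focal_px=1400, real_area_m2=0.02, image_area_px2=400)
        print(f"estimated range: {z:.1f} m")   # about 9.9 m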

    Crowd-sensing our Smart Cities: a Platform for Noise Monitoring and Acoustic Urban Planning

    Get PDF
    Environmental pollution, and the control measures put in place to tackle it, play a significant role in determining the actual quality of life in modern cities. Amongst the several pollutants that have to be faced on a daily basis, urban noise is one of the most widely known for its well-established health-related effects. However, no systematic noise management and control activities are performed in the majority of European cities, due to a series of limiting factors (e.g., expensive monitoring equipment, few available technicians, scarce awareness of the problem among city managers). The recent advances in the Smart City model, which is being progressively adopted by many cities, nowadays offer multiple possibilities to improve effectiveness in this area. The Mobile Crowd Sensing (MCS) paradigm allows data streams to be collected from smartphone built-in sensors on large geographical scales at no cost and without involving expert data collectors, provided that an adequate IT infrastructure is in place to properly manage the gathered measurements. In this paper, we present an improved version of an MCS-based platform, named City Soundscape, which allows any Android-based device to be used as a portable acoustic monitoring station and offers city managers an effective and straightforward tool for planning Noise Reduction Interventions (NRIs) within their cities. The platform now also features a new logical microservices architecture.
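    The crowd-sensed measurements ultimately rest on converting raw microphone samples into sound levels. A minimal Python sketch of that conversion (RMS level plus a per-device calibration offset, then an energy average over blocks to an equivalent level); the 90 dB offset is an assumed placeholder, since each handset has to be calibrated individually:

        import numpy as np

        def spl_db(samples, calibration_offset_db=90.0):
            """Approximate sound pressure level of one block of normalized samples
            (floats in [-1, 1]); the offset maps digital full scale to dB SPL."""
            rms = np.sqrt(np.mean(np.square(samples)))
            return 20.0 * np.log10(max(rms, 1e-12)) + calibration_offset_db

        def leq_db(blocks, calibration_offset_db=90.0):
            """Equivalent continuous level over several blocks (energy average)."""
            levels = np.array([spl_db(b, calibration_offset_db) for b in blocks])
            return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))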

    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Full text link
    Authentication of smartphone users is important because a lot of sensitive data is stored on the smartphone and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection and multiple devices. Our system achieves excellent authentication performance, with 98.1% accuracy, negligible system overhead and less than 2.4% battery consumption. Comment: Published at the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
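    The abstract does not spell out the context-based models themselves; as a generic sketch only, the snippet below feeds simple windowed accelerometer and gyroscope statistics to a binary owner-versus-others classifier (a random forest is an assumption, not the authors' choice) and re-authenticates when the owner probability exceeds a threshold:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(accel, gyro):
            """Basic statistics over one sensor window (each an n x 3 array)."""
            feats = []
            for sig in (accel, gyro):
                feats += [sig.mean(axis=0), sig.std(axis=0),
                          np.abs(np.diff(sig, axis=0)).mean(axis=0)]
            return np.concatenate(feats)

        def train_owner_model(X, y):
            """X: feature vectors from labelled windows; y: 1 = owner, 0 = other users."""
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X, y)
            return clf

        def authenticate(clf, feature_vector, threshold=0.8):
            """Accept the current user if the owner-class probability is high enough."""
            p_owner = clf.predict_proba([feature_vector])[0, 1]  # column 1 = class "owner"
            return p_owner >= threshold

    A deployed system would additionally switch feature sets per detected context (walking, sitting, in-pocket, and so on) and fuse scores over consecutive windows, as the paper's continuous-authentication setting implies.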

    Context Awareness for Navigation Applications

    Get PDF
    This thesis examines the topic of context awareness for navigation applications and asks the question, “What are the benefits and constraints of introducing context awareness in navigation?” Context awareness can be defined as a computer’s ability to understand the situation or context in which it is operating. In particular, we are interested in how context awareness can be used to understand the navigation needs of people using mobile computers, such as smartphones, but context awareness can also benefit other types of navigation users, such as maritime navigators. There are countless other potential applications of context awareness, but this thesis focuses on applications related to navigation. For example, if a smartphone-based navigation system can understand when a user is walking, driving a car, or riding a train, then it can adapt its navigation algorithms to improve positioning performance. We argue that the primary set of tools available for generating context awareness is machine learning. Machine learning is, in fact, a collection of many different algorithms and techniques for developing “computer systems that automatically improve their performance through experience” [1]. This thesis examines systematically the ability of existing algorithms from machine learning to endow computing systems with context awareness. Specifically, we apply machine learning techniques to tackle three different tasks related to context awareness and having applications in the field of navigation: (1) to recognize the activity of a smartphone user in an indoor office environment, (2) to recognize the mode of motion that a smartphone user is undergoing outdoors, and (3) to determine the optimal path of a ship traveling through ice-covered waters. The diversity of these tasks was chosen intentionally to demonstrate the breadth of problems encompassed by the topic of context awareness. During the course of studying context awareness, we adopted two conceptual “frameworks,” which we find useful for the purpose of solidifying the abstract concepts of context and context awareness. The first such framework is based strongly on the writings of a rhetorician from Hellenistic Greece, Hermagoras of Temnos, who defined seven elements of “circumstance”. We adopt these seven elements to describe contextual information. The second framework, which we dub the “context pyramid”, describes the processing of raw sensor data into contextual information in terms of six different levels. At the top of the pyramid is “rich context”, where the information is expressed in prose, and the goal for the computer is to mimic the way that a human would describe a situation. We are still a long way off from computers being able to match a human’s ability to understand and describe context, but this thesis improves the state of the art in context awareness for navigation applications. For some particular tasks, machine learning has succeeded in outperforming humans, and in the future there are likely to be tasks in navigation where computers outperform humans. One example might be the route optimization task described above. This is an example of a task where many different types of information must be fused in non-obvious ways, and it may be that computer algorithms can find better routes through ice-covered waters than even well-trained human navigators. This thesis provides only preliminary evidence of this possibility, and future work is needed to further develop the techniques outlined here. The same can be said of the other two navigation-related tasks examined in this thesis.
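    For the ship-routing task, fusing ice information into a route can be illustrated with a standard shortest-path search over a grid of per-cell traversal costs; the grid, the cost model and the four-neighbour connectivity below are assumptions for illustration, not the method developed in the thesis:

        import heapq

        def optimal_route(cost, start, goal):
            """Dijkstra over a grid of per-cell traversal costs (e.g., derived from
            ice concentration and thickness); returns the cheapest path as a list
            of (row, col) cells, or None if the goal is unreachable."""
            rows, cols = len(cost), len(cost[0])
            dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
            while pq:
                d, node = heapq.heappop(pq)
                if node == goal:
                    break
                if d > dist.get(node, float("inf")):
                    continue  # stale queue entry
                r, c = node
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + cost[nr][nc]
                        if nd < dist.get((nr, nc), float("inf")):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = node
                            heapq.heappush(pq, (nd, (nr, nc)))
            if goal != start and goal not in prev:
                return None
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]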

    An Indoor Navigation System Using a Sensor Fusion Scheme on Android Platform

    Get PDF
    With the development of wireless communication networks, smartphones have become a necessity in people's daily lives; they meet not only users' basic needs, such as sending a message or making a phone call, but also their demands for entertainment, surfing the Internet and socializing. Navigation functions are now commonly used; however, navigation is usually based on GPS (Global Positioning System) in outdoor environments, whereas a number of applications need to navigate indoors. This paper presents a system that achieves highly accurate indoor navigation on the Android platform. To do this, we design a sensor fusion scheme and divide the system into three main modules: a distance measurement module, an orientation detection module and a position update module. In the distance measurement module, we use an efficient way to estimate stride length and use the step sensor to count steps. In the orientation detection module, in order to obtain the best orientation estimate, we introduce a Kalman filter to de-noise the data collected from the different sensors. In the last module, we combine the data from the previous modules and calculate the current location. Experimental results show that our system works well and achieves high accuracy in indoor environments.
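    A minimal pedestrian dead-reckoning sketch of the position update described above: each detected step advances the position by an estimated stride along the filtered heading. The stride-length constant, the user height and the axis convention are assumptions, not the paper's calibrated values:

        import math

        def estimate_stride_m(height_m, k=0.415):
            """Rule-of-thumb stride length from body height; k is an assumed constant."""
            return k * height_m

        def update_position(x, y, stride_m, heading_rad):
            """Advance (x, y) by one stride along the heading supplied by the
            orientation module (0 rad along the x-axis, counter-clockwise positive)."""
            return (x + stride_m * math.cos(heading_rad),
                    y + stride_m * math.sin(heading_rad))

        # Assumed example: three steps at roughly the same heading.
        pos, stride = (0.0, 0.0), estimate_stride_m(1.75)
        for heading in (0.80, 0.78, 0.82):   # filtered headings in radians
            pos = update_position(*pos, stride, heading)
        print(f"position after 3 steps: ({pos[0]:.2f}, {pos[1]:.2f}) m")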