11,891 research outputs found

    VANET Applications: Hot Use Cases

    Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with little congestion, and to reduce pollution through efficient fuel use. To reach these goals, many improvements are made inside the car, but a growing number of approaches rely on connected cars that communicate with other cars, with infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent ways of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, an extension of the user much like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization that shows this evolution as a function of societal change. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, each described by simple and complex use cases and illustrated by a list of emerging and current projects from academia and industry. We identify a gap in innovation between the user and their car: paradoxically, even though the two are in constant interaction, they remain separated across different application uses. The future challenge is to interlace the social concerns of the user with intelligent and efficient driving

    Information and communication technology solutions for outdoor navigation in dementia

    INTRODUCTION: Information and communication technology (ICT) is potentially mature enough to empower outdoor and social activities in dementia. However, existing ICT-based devices have limited functionality and impact, mostly confined to safety. What is an ideal operational framework to enhance this field so that it supports outdoor and social activities? METHODS: Review of the literature and cross-disciplinary expert discussion. RESULTS: A situation-aware ICT requires flexible fine-tuning by stakeholders of system usability and functional complexity, and of user safety and autonomy. It should operate through artificial intelligence/machine learning and should reflect harmonized stakeholder values, the social context, and the user's residual cognitive functions. ICT services should be proposed at the prodromal stage of dementia and should be carefully validated within the life space of users in terms of quality of life, social activities, and costs. DISCUSSION: The operational framework has the potential to produce ICT and services with high clinical impact but requires substantial investment

    Modeling an ontology on accessible evacuation routes for emergencies

    Providing alert communication in emergency situations is vital to reduce the number of victims. However, this is a challenging goal for researchers and professionals due to the diverse pool of prospective users, e.g. people with disabilities as well as other vulnerable groups. Moreover, in the event of an emergency situation, many people could become vulnerable because of exceptional circumstances such as stress, an unknown environment or even visual impairment (e.g. fire causing smoke). Within this scope, a crucial activity is to notify affected people about safe places and available evacuation routes. In order to address this need, we propose to extend an ontology, called SEMA4A (Simple EMergency Alert 4 [for] All), developed in a previous work for managing knowledge about accessibility guidelines, emergency situations and communication technologies. In this paper, we introduce a semi-automatic technique for knowledge acquisition and modeling on accessible evacuation routes. We introduce a use case to show applications of the ontology and conclude with an evaluation involving several experts in evacuation procedures. © 2014 Elsevier Ltd. All rights reserved
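
    As a rough illustration of the kind of knowledge modeling this abstract describes, the sketch below encodes an accessible evacuation route as RDF triples with rdflib and queries for step-free routes. The namespace and all class/property/instance names are hypothetical placeholders, not the actual SEMA4A vocabulary.

    # Minimal sketch of modeling evacuation-route knowledge as RDF triples.
    # Namespace, class and property names are hypothetical, not SEMA4A's own terms.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/sema4a-ext#")  # placeholder namespace

    g = Graph()
    g.bind("ex", EX)

    # Classes: an evacuation route and the accessibility features it offers.
    g.add((EX.EvacuationRoute, RDF.type, RDFS.Class))
    g.add((EX.AccessibilityFeature, RDF.type, RDFS.Class))

    # A concrete route that is step-free and marked with tactile paving.
    g.add((EX.routeA, RDF.type, EX.EvacuationRoute))
    g.add((EX.routeA, EX.hasFeature, EX.stepFreeAccess))
    g.add((EX.routeA, EX.hasFeature, EX.tactilePaving))
    g.add((EX.routeA, EX.leadsTo, EX.assemblyPoint1))
    g.add((EX.routeA, RDFS.label, Literal("Step-free route to assembly point 1")))

    # Query: routes usable by a wheelchair user (requiring step-free access).
    q = """
    SELECT ?route WHERE {
      ?route a ex:EvacuationRoute ;
             ex:hasFeature ex:stepFreeAccess .
    }
    """
    for row in g.query(q, initNs={"ex": EX}):
        print(row.route)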

    Personalized Routing Dynamically Adjusted to Avoid Adverse Situations

    Routes that are optimal in general for a particular travel need may at times be unsuitable in a specific instance because of various factors such as construction, weather, accidents, rush hour traffic, road surface conditions, presence of crowds, etc. The impact of such factors on a person’s travel plans can depend on the specifics of the person’s travel circumstances, such as travel mode, vehicle characteristics, etc. This disclosure describes techniques to proactively deliver travel routing guidance personalized to a user, based on user-permitted contextual factors. If the user permits, the personalized routing guidance is adjusted to avoid one or more of a number of dynamic factors that can adversely impact specific routes at the time of the user’s travel. With user permission, route guidance can be generated passively, without an actual request from the user, and notifications or alerts can be proactively provided to the user
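
    As a rough illustration (not the disclosure's actual technique), the sketch below re-ranks candidate routes by adding mode-dependent time penalties for dynamic adverse factors; the data structures, factor names and weights are all hypothetical.

    # Minimal sketch: penalize routes affected by dynamic adverse factors,
    # weighted by the traveler's context (here, travel mode). Illustrative values only.
    from dataclasses import dataclass, field

    @dataclass
    class Route:
        name: str
        base_minutes: float
        factors: set = field(default_factory=set)  # e.g. {"construction", "snow"}

    # How strongly each factor impacts each travel mode (illustrative weights).
    PENALTY_MINUTES = {
        "construction": {"car": 10, "bicycle": 15, "walk": 5},
        "snow":         {"car": 8,  "bicycle": 25, "walk": 12},
        "crowds":       {"car": 2,  "bicycle": 5,  "walk": 10},
    }

    def personalized_cost(route: Route, travel_mode: str) -> float:
        """Base travel time plus context-dependent penalties for adverse factors."""
        penalty = sum(PENALTY_MINUTES.get(f, {}).get(travel_mode, 0) for f in route.factors)
        return route.base_minutes + penalty

    def best_route(routes, travel_mode):
        return min(routes, key=lambda r: personalized_cost(r, travel_mode))

    routes = [
        Route("riverside path", 22, {"snow"}),
        Route("main street", 25, {"construction"}),
    ]
    print(best_route(routes, "bicycle").name)   # avoids the snowy path for a cyclist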

    An autonomous navigational system using GPS and computer vision for futuristic road traffic

    Navigational service is one of the most essential dependencies of any transport system, and at present various revolutionary approaches have contributed to its improvement. This paper reviews global positioning system (GPS) and computer vision based navigational systems and finds a large gap between the actual demand for navigation and what currently exists. Therefore, the proposed study discusses a novel framework for an autonomous navigation system that uses GPS as well as computer vision, considering the case study of a futuristic road traffic system. An analytical model is built in which geo-referenced data from GPS are integrated with the signals captured by the visual sensors to implement this concept. The simulated outcome of the study shows that the proposed approach offers enhanced accuracy as well as faster processing in contrast to existing approaches
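
    The abstract does not detail the analytical model, but one generic way to integrate a geo-referenced GPS fix with a vision-derived position estimate is inverse-variance weighting of the two measurements, sketched below with illustrative numbers.

    # Minimal sketch of fusing a coarse GPS fix with a locally precise vision-based
    # position estimate via inverse-variance weighting. Values are illustrative,
    # not the paper's model.
    import numpy as np

    def fuse(gps_pos, gps_var, vision_pos, vision_var):
        """Combine two noisy position estimates; the more certain one gets more weight."""
        gps_pos, vision_pos = np.asarray(gps_pos, float), np.asarray(vision_pos, float)
        w_gps = vision_var / (gps_var + vision_var)
        fused = w_gps * gps_pos + (1 - w_gps) * vision_pos
        fused_var = (gps_var * vision_var) / (gps_var + vision_var)
        return fused, fused_var

    # GPS is coarse (about +/-5 m), the vision-based estimate is precise (about +/-0.5 m).
    fused, var = fuse(gps_pos=[120.0, 45.0], gps_var=5.0**2,
                      vision_pos=[118.6, 44.2], vision_var=0.5**2)
    print(fused, var)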

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation.

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes in order to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of impaired people. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose to leverage pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy that compares favorably with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework
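
    As a simplified illustration of how a pixel-wise segmentation output and an aligned depth map could be turned into assistive cues, the sketch below derives a traversable-area mask and a short-range hazard warning; the class IDs, label names and thresholds are placeholders, not the paper's network or label set.

    # Minimal sketch: combine a per-pixel class map with an aligned depth map
    # into a traversable mask plus a near-hazard warning. Hypothetical class IDs.
    import numpy as np

    TRAVERSABLE = {0, 1, 2}        # e.g. road, sidewalk, terrain (placeholder IDs)
    HAZARD = {7, 8, 9}             # e.g. pedestrian, vehicle, water hazard

    def assistive_cues(class_map: np.ndarray, depth_m: np.ndarray, near_m: float = 2.0):
        """Return a traversable-area mask and whether a hazard lies within near_m meters."""
        traversable = np.isin(class_map, list(TRAVERSABLE))
        hazard_near = np.isin(class_map, list(HAZARD)) & (depth_m < near_m)
        return traversable, bool(hazard_near.any())

    # Toy 4x4 scene: mostly sidewalk (class 1), with one close pedestrian pixel (class 7).
    class_map = np.full((4, 4), 1)
    class_map[1, 2] = 7
    depth_m = np.full((4, 4), 5.0)
    depth_m[1, 2] = 1.2
    traversable, warn = assistive_cues(class_map, depth_m)
    print(traversable.mean(), warn)   # fraction of traversable pixels, hazard warning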

    Seamless Interactions Between Humans and Mobility Systems

    As mobility systems, including vehicles and roadside infrastructure, enter a period of rapid and profound change, it is important to enhance interactions between people and mobility systems. Seamless human-mobility-system interactions can promote widespread deployment of engaging applications, which are crucial for driving safety and efficiency. The ever-increasing penetration rate of ubiquitous computing devices, such as smartphones and wearable devices, can facilitate realization of this goal. Although researchers and developers have attempted to adapt ubiquitous sensors for mobility applications (e.g., navigation apps), these solutions often suffer from limited usability and can be risk-prone. The root causes of these limitations include the low sensing modality and limited computational power available in ubiquitous computing devices. We address these challenges by developing novel sensing techniques and machine learning methods and demonstrating that they can extract essential, safety-critical information from drivers' natural driving behavior, even actions as subtle as steering maneuvers (e.g., left-/right-hand turns and lane changes). We first show how ubiquitous sensors can be used to detect steering maneuvers regardless of disturbances to sensing devices. Next, by focusing on turning maneuvers, we characterize drivers' driving patterns using a quantifiable metric. Then, we demonstrate how microscopic analyses of crowdsourced ubiquitous sensory data can be used to infer critical macroscopic contextual information, such as risks present at road intersections. Finally, we use ubiquitous sensors to profile a driver's behavioral patterns on a large scale; such sensors are found to be essential to the analysis and improvement of drivers' driving behavior.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163127/1/chendy_1.pd
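
    As a rough illustration of detecting a steering maneuver from ubiquitous sensors (not the dissertation's actual method), the sketch below integrates a gyroscope yaw-rate stream over a sliding window and flags large net heading changes as turns; the sampling rate and thresholds are illustrative assumptions.

    # Minimal sketch, on synthetic data, of detecting turns from a smartphone
    # gyroscope's yaw-rate stream by integrating heading change per window.
    import numpy as np

    def detect_turns(yaw_rate_rad_s, fs_hz=50, window_s=3.0, min_heading_deg=60):
        """Return (start_index, 'left'/'right') for windows with a large net heading change."""
        win = int(window_s * fs_hz)
        dt = 1.0 / fs_hz
        events = []
        for start in range(0, len(yaw_rate_rad_s) - win, win):
            heading_deg = np.degrees(np.sum(yaw_rate_rad_s[start:start + win]) * dt)
            if abs(heading_deg) >= min_heading_deg:
                events.append((start, "left" if heading_deg > 0 else "right"))
        return events

    # Synthetic trace: 4 s straight, a 2 s left turn (~0.8 rad/s), then 4 s straight.
    fs = 50
    trace = np.concatenate([np.zeros(4 * fs), np.full(2 * fs, 0.8), np.zeros(4 * fs)])
    trace += np.random.normal(0, 0.02, trace.size)   # sensor noise
    print(detect_turns(trace, fs_hz=fs))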