
    Seamless Interactions Between Humans and Mobility Systems

    As mobility systems, including vehicles and roadside infrastructure, enter a period of rapid and profound change, it is important to enhance interactions between people and mobility systems. Seamless human-mobility interactions can promote widespread deployment of engaging applications, which are crucial for driving safety and efficiency. The ever-increasing penetration rate of ubiquitous computing devices, such as smartphones and wearable devices, can facilitate realization of this goal. Although researchers and developers have attempted to adapt ubiquitous sensors for mobility applications (e.g., navigation apps), these solutions often suffer from limited usability and can be risk-prone. The root causes of these limitations include the limited sensing modalities and limited computational power available in ubiquitous computing devices. We address these challenges by developing novel sensing techniques and machine learning methods, and by demonstrating that they can extract essential, safety-critical information from drivers' natural driving behavior, even actions as subtle as steering maneuvers (e.g., left-/right-hand turns and lane changes). We first show how ubiquitous sensors can be used to detect steering maneuvers regardless of disturbances to the sensing devices. Next, focusing on turning maneuvers, we characterize drivers' driving patterns using a quantifiable metric. Then, we demonstrate how microscopic analyses of crowdsourced ubiquitous sensory data can be used to infer critical macroscopic contextual information, such as risks present at road intersections. Finally, we use ubiquitous sensors to profile drivers' behavioral patterns on a large scale; such sensors prove essential to the analysis and improvement of driving behavior.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163127/1/chendy_1.pd
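
    The abstract does not spell out how steering maneuvers are detected, but a minimal sketch can illustrate the general idea of recovering turns from a smartphone gyroscope trace. The function below is a hypothetical illustration, not the dissertation's pipeline: it assumes a yaw-rate signal sampled at a fixed rate, and the threshold values and sign convention for left/right are placeholders.

        import numpy as np

        def detect_turns(yaw_rate, fs=50.0, rate_thresh=0.15, angle_thresh=np.pi / 3):
            """Flag candidate turning maneuvers in a gyroscope yaw-rate trace.

            yaw_rate: yaw angular velocity in rad/s, sampled at fs Hz.
            A segment counts as a turn when the rate stays above rate_thresh
            and the integrated heading change exceeds angle_thresh (~60 deg).
            Thresholds are illustrative, not values from the dissertation.
            """
            turns = []
            start = None
            for i, w in enumerate(yaw_rate):
                if abs(w) >= rate_thresh:
                    if start is None:
                        start = i  # a high-rate segment begins
                elif start is not None:
                    # segment ended: integrate rad/s over time to get heading change in rad
                    heading_change = np.trapz(yaw_rate[start:i], dx=1.0 / fs)
                    if abs(heading_change) >= angle_thresh:
                        # sign-to-direction mapping depends on the device's axis convention
                        direction = "left" if heading_change > 0 else "right"
                        turns.append((start / fs, i / fs, direction))
                    start = None
            return turns  # list of (t_start_s, t_end_s, direction)

    A real system would additionally have to compensate for the phone's unknown and changing orientation in the vehicle, which is precisely the "disturbances to the sensing devices" the abstract mentions.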

    A unified ecological framework for studying effects of digital places on well-being

    Social media has greatly expanded opportunities to study place and well-being through the availability of human expressions tagged with physical location. Such research often uses social media content to study how specific places in the offline world influence well-being without acknowledging that digital platforms (e.g., Twitter, Facebook, YouTube, Yelp) are designed in unique ways that structure certain types of interactions in online and offline worlds, which can influence place-making and well-being. To expand our understanding of the mechanisms that influence social media expressions about well-being, we describe an ecological framework of person-place interactions that asks, "at what broad levels of interaction with digital platforms and physical environments do effects on well-being manifest?" The person is at the centre of the ecological framework to recognize how people define and organize both digital and physical communities and interactions. The relevance of interactions in physical environments depends on the built and natural characteristics encountered across modes of activity (e.g., domestic, work, study). Here, social interactions are stratified into the meso-social (e.g., local social norms) and micro-social (e.g., personal conversations) levels. The relevance of interactions in digital platforms is contingent on specific hardware and software elements. Social interactions at the meso-social level include platform norms and passive use of social media, such as observing the expressions of others, whereas interactions at the micro-social level include more active uses, like direct messaging. Digital platforms are accessed in a physical location, and physical locations are partly experienced through online interactions; therefore, interactions between these environments are also acknowledged. We conclude by discussing the strengths and limitations of applying the framework to studies of place and well-being.
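
    The framework's two-by-two structure (physical vs. digital environment, meso- vs. micro-social level) lends itself to a simple coding scheme. The sketch below is one hypothetical way to encode it when annotating social-media data; the names and example strings are ours, taken from the abstract, not from the paper itself.

        from dataclasses import dataclass
        from enum import Enum

        class Environment(Enum):
            PHYSICAL = "physical"  # built and natural places
            DIGITAL = "digital"    # platform hardware and software

        class SocialLevel(Enum):
            MESO = "meso"    # e.g., local or platform norms, passive observation
            MICRO = "micro"  # e.g., personal conversations, direct messages

        @dataclass
        class Interaction:
            environment: Environment
            level: SocialLevel
            example: str

        # The person sits at the centre of the framework; each interaction is
        # tagged with the level at which a well-being effect might manifest.
        person_interactions = [
            Interaction(Environment.PHYSICAL, SocialLevel.MESO, "local social norms"),
            Interaction(Environment.PHYSICAL, SocialLevel.MICRO, "personal conversations"),
            Interaction(Environment.DIGITAL, SocialLevel.MESO, "platform norms, passive browsing"),
            Interaction(Environment.DIGITAL, SocialLevel.MICRO, "direct messaging"),
        ]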

    Egocentric Vision-based Action Recognition: A survey

    The egocentric action recognition (EAR) field has recently grown in popularity due to the affordable, lightweight wearable cameras available nowadays, such as GoPro and similar devices. The amount of egocentric data generated has therefore increased, triggering interest in the understanding of egocentric videos. More specifically, the recognition of actions in egocentric videos has gained popularity due to the challenge it poses: the wild movement of the camera and the lack of context make it hard to recognise actions with performance comparable to that of third-person vision solutions. This has ignited research interest in the field and, nowadays, many public datasets and competitions can be found in both the machine learning and computer vision communities. In this survey, we analyse the literature on egocentric vision methods and algorithms. To that end, we propose a taxonomy that divides the literature into categories and subcategories, contributing a more fine-grained classification of the available methods. We also review the zero-shot approaches used by the EAR community, a methodology that could help transfer EAR algorithms to real-world applications. Finally, we summarise the datasets used by researchers in the literature.
    We gratefully acknowledge the support of the Basque Government's Department of Education for the predoctoral funding of the first author. This work has been supported by the Spanish Government under the FuturAAL-Context project (RTI2018-101045-B-C21) and by the Basque Government under the Deustek project (IT-1078-16-D).
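
    The zero-shot approaches the survey reviews generally share one recipe: map videos and action labels into a shared embedding space and score unseen classes by similarity. The sketch below shows that generic recipe only; it is a minimal illustration assuming precomputed embeddings, not a specific method from the survey.

        import numpy as np

        def zero_shot_scores(video_embedding, class_embeddings):
            """Score unseen action classes by cosine similarity in a shared space.

            video_embedding: (d,) feature vector from a video encoder.
            class_embeddings: (n_classes, d) semantic embeddings of action
            labels (e.g., word vectors for "open fridge", "pour water").
            Returns one compatibility score per class; higher is better.
            """
            v = video_embedding / np.linalg.norm(video_embedding)
            c = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
            return c @ v

        # Usage: predicted = class_names[np.argmax(zero_shot_scores(v, C))]

    Because no labelled examples of the test classes are needed, this kind of transfer is what makes zero-shot methods attractive for deploying EAR in real-world settings, where collecting egocentric training data for every action is impractical.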