206 research outputs found

    Investigation of low-cost infrared sensing for intelligent deployment of occupant restraints

    In automotive transport, airbags and seatbelts are effective at restraining the driver and passenger in the event of a crash, and statistics show a dramatic reduction in the number of casualties from road crashes. However, statistics also show that a small number of occupants have been injured or even killed by striking the airbag, and that the elderly and small children are especially at risk of airbag-related injury. This is because in-car restraint systems were designed for the average male at an average speed of 50 km/h, and people outside these norms are at risk. Therefore, one of the future safety goals of car manufacturers is to deploy sensors that gather more information about the driver or passenger so that the safety systems can be tailored to that person; this is the goal of this project. This thesis describes a novel approach to occupant detection, position measurement and monitoring using a low-cost thermal-imaging-based system, a departure from traditional video-camera-based systems. Experiments were carried out using a specially designed test rig and a car driving simulator with members of the public. Results have shown that the thermal imager can detect a human in a car-cabin mock-up and provide crucial real-time position data, which could be used to support intelligent restraint deployment. Other valuable information has also been detected, such as whether the driver is smoking, drinking a hot or cold drink, or using a mobile phone, which can help infer the level of driver attentiveness or engagement.
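    The thesis's own detection pipeline is not reproduced here, but the core idea of low-cost thermal occupant sensing can be illustrated: exposed skin reads markedly warmer than a temperate cabin interior, so thresholding a thermal frame and taking the centroid of the warm region yields a crude real-time occupant position. A minimal sketch in Python, with the temperature threshold, pixel count and frame format all assumed:

        import numpy as np

        BODY_TEMP_MIN_C = 30.0   # hypothetical threshold: skin vs. cabin interior
        MIN_WARM_PIXELS = 10     # fewer warm pixels than this is treated as an empty seat

        def detect_occupant(frame_c: np.ndarray):
            """Detect a warm occupant in a 2-D array of per-pixel temperatures
            (deg C) and return (found, row, col), the warm-region centroid."""
            mask = frame_c > BODY_TEMP_MIN_C
            if mask.sum() < MIN_WARM_PIXELS:
                return False, None, None
            rows, cols = np.nonzero(mask)
            return True, float(rows.mean()), float(cols.mean())

    Tracking the centroid from frame to frame gives the kind of real-time position signal an intelligent restraint controller could consume.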

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, (v) activity and behaviour recognition, (vi) activity and personal assistance, (vii) gesture recognition, (viii) fall detection and prevention, (ix) mobility assessment and frailty recognition, and (x) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.
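    The claim that video can assess vital parameters such as heart rate rests on remote photoplethysmography (rPPG): subtle frame-to-frame colour changes of facial skin track the cardiac pulse. The following is a minimal sketch of that classic idea, not code from the report; the trace name, sampling rate and frequency band are assumptions:

        import numpy as np

        def estimate_heart_rate_bpm(mean_green: np.ndarray, fps: float) -> float:
            """Estimate pulse rate from the frame-averaged green channel of a
            face region; mean_green holds one sample per video frame."""
            x = mean_green - mean_green.mean()           # remove the DC offset
            power = np.abs(np.fft.rfft(x)) ** 2          # periodogram
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            band = (freqs >= 0.7) & (freqs <= 4.0)       # 42-240 bpm, plausible pulse
            return float(freqs[band][np.argmax(power[band])] * 60.0)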

    Safety impacts of using smartphone voice control interfaces on driving performance

    Distraction from the use of mobile phones has been identified as one of the causes of road traffic crashes. Voice control technology has been suggested as a potential solution to driver distraction from the manual use of mobile phones. However, new evidence has shown that using voice control interfaces while driving could demand more of drivers, in terms of cognitive load and visual attention, than using a mobile phone manually. Further, several factors that moderate the use of voice control interfaces, for example usability and acceptance, are poorly understood. Thus, the current study aims to investigate the safety impact of using voice control interfaces on driving performance. A preliminary study, an online survey and a driving experiment were conducted to investigate how drivers interact with smartphone voice control interfaces and their effects on driving performance. First, the pattern of use of voice control interfaces while driving was explored using focus groups and interviews (preliminary study) and an online survey. Next, 55 participants completed a simulated driving task using a valid and standardised method called the Lane Change Test (LCT). The purpose was to measure degradation of driving performance due to the concurrent performance of a secondary task: either contact calling, playing music or text messaging. These secondary tasks were identified as common tasks in the survey of the pattern of use of voice control interfaces while driving. Secondary tasks were performed in both visual-manual and voice control modes with either an Apple or a Samsung smartphone. Data on eye-glance behaviour, workload, and usability and acceptance of the voice control interfaces were also collected. Findings support the view that interacting with voice control interfaces while driving reduces the distraction of visual-manual interfaces but is still distracting compared to driving without using any device. Texting was found to degrade task and driving performance regardless of control mode and phone type. Moreover, poor system performance leads to low acceptance of voice control technology. Smartphone voice control interfaces have an apparent advantage over visual-manual interfaces. However, they can still impose some elements of distraction that may have negative implications for road safety.
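    The LCT (standardised as ISO 26022) scores driving degradation as the deviation between a normative lane-change path and the trajectory actually driven, so a secondary task that draws attention away shows up as a larger deviation. A minimal sketch of that metric, assuming both paths have already been resampled at common longitudinal positions:

        import numpy as np

        def lct_mean_deviation(reference_y: np.ndarray, driven_y: np.ndarray) -> float:
            """Mean absolute lateral deviation (metres) between the normative
            lane-change path and the driven path; larger values indicate more
            interference from the concurrent secondary task."""
            return float(np.mean(np.abs(reference_y - driven_y)))

        # e.g., compare a baseline drive against a drive while text messaging:
        # mdev_base = lct_mean_deviation(ref_y, base_y)
        # mdev_text = lct_mean_deviation(ref_y, text_y)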

    Towards perceptual intelligence: statistical modeling of human individual and interactive behaviors

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Architecture, 2000. Includes bibliographical references (p. 279-297). This thesis presents a computational framework for the automatic recognition and prediction of different kinds of human behaviors from video cameras and other sensors, via perceptually intelligent systems that automatically sense and correctly classify human behaviors by means of Machine Perception and Machine Learning techniques. In the thesis I develop the statistical machine learning algorithms (dynamic graphical models) necessary for detecting and recognizing individual and interactive behaviors. In the case of interactions, two Hidden Markov Models (HMMs) are coupled in a novel architecture called Coupled Hidden Markov Models (CHMMs) that explicitly captures the interactions between them. The algorithms for learning the parameters from data, as well as for doing inference with those models, are developed and described. Four systems that experimentally evaluate the proposed paradigm are presented: (1) LAFTER, an automatic face detection and tracking system with facial expression recognition; (2) a Tai-Chi gesture recognition system; (3) a pedestrian surveillance system that recognizes typical human-to-human interactions; and (4) a SmartCar for driver maneuver recognition. These systems capture human behaviors of different natures and increasing complexity: first, isolated, single-user facial expressions; then two-hand gestures and human-to-human interactions; and finally complex behaviors where human performance is mediated by a machine, more specifically a car. The metric used for quantifying the quality of the behavior models is their accuracy: how well they are able to recognize the behaviors on testing data. Statistical machine learning usually suffers from a lack of data for estimating all the parameters in the models. In order to alleviate this problem, synthetically generated data are used to bootstrap the models, creating 'prior models' that are further trained using much less real data than would otherwise be required. The Bayesian nature of the approach lets us do so. The predictive power of these models lets us categorize human actions very soon after the beginning of the action. Because of the generic nature of the typical behaviors of each of the implemented systems, there is reason to believe that this approach to modeling human behavior would generalize to other dynamic human-machine systems. This would allow us to recognize automatically people's intended actions, and thus build control systems that dynamically adapt to suit the human's purposes better. By Nuria M. Oliver.
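    The architectural point of the CHMM is that each chain's transition probabilities are conditioned on the previous states of both chains, rather than on its own chain alone. A minimal sketch of a two-chain forward pass under that factorisation, an illustration of the idea rather than the thesis's own code, with array names and shapes assumed:

        import numpy as np

        def chmm_forward(pi_a, pi_b, T_a, T_b, like_a, like_b):
            """Log-likelihood of two observation streams under a two-chain CHMM.
            pi_a (Na,), pi_b (Nb,): initial state distributions.
            T_a (Na, Nb, Na): P(a_t | a_{t-1}, b_{t-1});
            T_b (Na, Nb, Nb): P(b_t | a_{t-1}, b_{t-1}).
            like_a (T, Na), like_b (T, Nb): per-frame observation likelihoods."""
            alpha = np.outer(pi_a * like_a[0], pi_b * like_b[0])  # joint over (a_0, b_0)
            s = alpha.sum()
            loglik, alpha = np.log(s), alpha / s
            for t in range(1, len(like_a)):
                # Cross-chain coupling: the next joint state depends on BOTH
                # previous states, which two independent HMMs cannot express.
                pred = np.einsum('ij,ijk,ijl->kl', alpha, T_a, T_b)
                alpha = pred * np.outer(like_a[t], like_b[t])
                s = alpha.sum()
                loglik, alpha = loglik + np.log(s), alpha / s
            return loglik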

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry.

    State of the Art of Audio- and Video-Based Solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall condition of the assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.
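    One widely used "privacy by design" tactic in this space is to degrade video irreversibly at the sensor, keeping only the coarse shape and motion needed for functions such as fall detection. A minimal sketch of block-averaging (pixelation), an illustration of the general tactic rather than a method prescribed by the report, assuming frames arrive as numpy arrays:

        import numpy as np

        def pixelate(frame: np.ndarray, block: int = 16) -> np.ndarray:
            """Block-average a (H, W, C) frame so identities are unrecoverable
            while coarse shape and motion survive for downstream analysis."""
            h = frame.shape[0] - frame.shape[0] % block
            w = frame.shape[1] - frame.shape[1] % block
            crop = frame[:h, :w].astype(float)
            coarse = crop.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3))
            return np.repeat(np.repeat(coarse, block, axis=0), block, axis=1).astype(frame.dtype)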

    Rearward visibility issues related to agricultural machinery: Contributing factors, potential solutions

    As the size, complexity, and speed of tractors and other self-propelled agricultural machinery have increased, so have visibility-related issues, placing significant importance on the visual skills, alertness, and reactive abilities of the operator. Rearward movement of large agricultural equipment has been identified in the literature as causing not only damage to both machine and stationary objects, but also injuries (even fatalities) to bystanders not visible to the operator. Fortunately, monitoring assistance, while not a new concept, has advanced significantly, offering operators today more options for increasing awareness of the area surrounding their machines. In this research, an attempt is made to (1) identify and describe the key contributors to agricultural machinery visibility issues (both operator- and machine-related), and (2) enumerate and evaluate the potential solutions and technologies that address these issues via modifications of ISO, SAE, and DOT standardized visibility testing methods. Enhanced operator safety and efficiency should result from a better understanding of the visibility problems (especially with regard to rearward movement) inherent in large tractors and self-propelled agricultural machinery. Nine machines of different types, varying widely in size, horsepower rating, and operator station configuration, were used in this study to provide a broad representation of what is found on many U.S. farms and ranches. The two main rearward monitoring technologies evaluated were the machines' factory-equipped mirrors and cameras that the researchers affixed to these machines. A 58.06 m² (625 ft²) testing grid was centered on the rear-most location of the tested machinery, with height indicators centered in each of twenty-five grid cells. In general, the findings were consistent across all the machines tested, i.e., rather obstructed rearward visibility using mirrors alone versus considerably less obstructed rearward visibility with the addition of cameras. For example, with exterior extended-arm and interior mirrors only, an MFWD tractor with an 1,100-bushel grain cart in tow measured, from the operator's perspective, a 68% obstructed view of the grid's kneeling-worker-height markers and 100% throughout the midline of rearward travel; when equipped with a rearview camera system, the obstructed area decreased to only 4%. The visibility models created identified (1) a moderate positive Pearson r correlation, indicating that many of the obstructed locations in the rearward area affected both mirrors and cameras similarly, and (2) a strong positive Pearson r correlation for kneeling-worker-height visibility, indicating that mirrors and camera systems share commonality in areas with high visibility (along the midline of travel and outward with greater distance from the rear of the machine, without implements in tow). Of the recommendations coming from this research, the key one is the establishment of engineering standards aimed at (1) enhancing operator ability to identify those locations around agricultural machinery that are obstructed from view, (2) reducing the risk of run-overs through improved monitoring of machine surroundings and components, and (3) alerting operators and co-workers to these hazardous locations.
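    The Pearson r values reported here summarise, across the twenty-five grid cells, how strongly mirror visibility and camera visibility co-vary. A minimal sketch of that computation on synthetic stand-in scores (the study's raw grid data are not reproduced here, so the numbers below are placeholders, not results):

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic stand-in scores, one per grid cell: fraction of the cell's
        # kneeling-worker-height marker visible to the operator.
        mirror_vis = rng.uniform(0.0, 1.0, size=25)
        camera_vis = np.clip(mirror_vis + rng.normal(0.25, 0.15, size=25), 0.0, 1.0)

        r = np.corrcoef(mirror_vis, camera_vis)[0, 1]  # Pearson r across the 25 cells
        print(f"Pearson r (mirror vs. camera visibility): {r:.2f}")

    A positive r indicates that cells hard to see in the mirrors tend also to be harder to see on camera, which is what the study's "shared obstructed locations" finding expresses.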

    Technology 2001: The Second National Technology Transfer Conference and Exposition, volume 2

    Proceedings of the conference are presented. The mission of the conference was to transfer advanced technologies developed by the Federal government, its contractors, and other high-tech organizations to U.S. industry for use in developing new or improved products and processes. Volume two presents papers on the following topics: materials science, robotics, test and measurement, advanced manufacturing, artificial intelligence, biotechnology, electronics, and software engineering.