199 research outputs found

    Kinect depth recovery via the cooperative profit random forest algorithm


    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    The eyes have it


    3D Gaze Estimation from Remote RGB-D Sensors

    The development of systems able to retrieve and characterise the state of humans is important for many applications and fields of study. In particular, as a display of attention and interest, gaze is a fundamental cue in understanding people's activities, behaviours, intentions, state of mind and personality. Moreover, gaze plays a major role in the communication process, for example by showing attention to the speaker, indicating who is addressed, or averting gaze to keep the floor. Therefore, many applications within the fields of human-human, human-robot and human-computer interaction could benefit from gaze sensing. However, despite significant advances during more than three decades of research, current gaze estimation technologies cannot address the conditions often required within these fields, such as remote sensing, unconstrained user movements and minimal user calibration. Furthermore, to reduce cost, it is preferable to rely on consumer sensors, but this usually leads to low-resolution and low-contrast images that current techniques can hardly cope with. In this thesis we investigate the problem of automatic gaze estimation under head pose variations, low-resolution sensing and different levels of user calibration, including the uncalibrated case. We propose to build a non-intrusive gaze estimation system based on remote consumer RGB-D sensors. In this context, we propose algorithmic solutions which overcome many of the limitations of previous systems. We thus address the main aspects of this problem: 3D head pose tracking, 3D gaze estimation, and gaze-based application modeling. First, we develop an accurate model-based 3D head pose tracking system which adapts to the participant without requiring explicit actions. Second, to achieve head pose invariant gaze estimation, we propose a method to correct the eye image appearance variations due to head pose. We then investigate two different methodologies to infer the 3D gaze direction. The first one builds upon machine learning regression techniques. In this context, we propose strategies to improve their generalization, in particular to handle different people. The second methodology is a new paradigm we propose and call geometric generative gaze estimation. This novel approach combines the benefits of geometric eye modeling (normally restricted to high-resolution images due to the difficulty of feature extraction) with a stochastic segmentation process (adapted to low resolution) within a Bayesian model allowing the decoupling of user-specific geometry and session-specific appearance parameters, along with the introduction of priors, which are appropriate for adaptation relying on small amounts of data. The aforementioned gaze estimation methods are validated through extensive experiments on a comprehensive database which we collected and made publicly available. Finally, we study the problem of automatic gaze coding in natural dyadic and group human interactions. The system builds upon the thesis contributions to handle unconstrained head movements and the lack of user calibration. It further exploits the 3D tracking of participants and their gaze to conduct a 3D geometric analysis within a multi-camera setup. Experiments on real and natural interactions demonstrate that the system is highly accurate. Overall, the methods developed in this dissertation are suitable for many applications involving large diversity in terms of setup configuration, user calibration and mobility.
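
As an illustration of the machine-learning regression route mentioned in this abstract, the following minimal Python sketch maps head-pose-rectified eye patches to gaze angles with a generic regressor and converts them into a 3D direction. The regressor choice (scikit-learn's RandomForestRegressor), the patch size and all variable names are assumptions made for illustration, not the thesis implementation.

```python
# Minimal sketch, assuming a generic appearance-based regressor; not the
# thesis implementation (which also covers a geometric generative model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def angles_to_vector(yaw, pitch):
    """Convert gaze yaw/pitch (radians) into a unit 3D gaze direction."""
    return np.array([
        np.cos(pitch) * np.sin(yaw),
        np.sin(pitch),
        np.cos(pitch) * np.cos(yaw),
    ])

# Hypothetical training data: flattened, head-pose-rectified eye patches and
# their ground-truth gaze angles (yaw, pitch) in radians.
rng = np.random.default_rng(0)
X_train = rng.random((500, 15 * 9))              # 500 low-resolution eye patches
y_train = rng.uniform(-0.5, 0.5, size=(500, 2))  # (yaw, pitch) labels

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                      # multi-output regression

eye_patch = rng.random((1, 15 * 9))              # one new rectified eye image
yaw, pitch = model.predict(eye_patch)[0]
print("estimated 3D gaze direction:", angles_to_vector(yaw, pitch))
```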

    Standard Interfaces and Protocols at Sensor Network and Cloud Level Definition

    In this paper we present the full design of a system for forest monitoring that consists of a cloud platform, sensor networks, mobile (drone) technologies for data collection, and cameras. We first present the design and structural model of an advanced system for monitoring forest areas. This model integrates sensor networks and mobile (drone) technologies for data collection and the acquisition of those data by existing Crisis Management Information Systems (CMIS). We then demonstrate the possibility of mapping different technological solutions; the main result is the definition of a set of standard interfaces and protocols for network interoperability.
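
To make the idea of a common sensor-to-cloud interface concrete, the short Python sketch below packages one reading into a uniform JSON message and derives a publish topic for it. The field names, topic layout and the mention of MQTT/HTTP transport are illustrative assumptions, not the interfaces actually standardised in the paper.

```python
# Illustrative sketch only: a uniform sensor message a forest-monitoring node
# could send to the cloud platform. Field names and topic scheme are assumed.
import json
import time

def make_reading(node_id, sensor, value, unit):
    """Build a uniform sensor message that any CMIS-side consumer can parse."""
    return {
        "node_id": node_id,
        "sensor": sensor,          # e.g. "temperature", "smoke", "humidity"
        "value": value,
        "unit": unit,
        "timestamp": time.time(),  # epoch seconds; ISO 8601 is equally common
    }

message = make_reading("tower-07", "temperature", 34.2, "degC")
topic = f"forest/{message['node_id']}/{message['sensor']}"  # hypothetical topic scheme
payload = json.dumps(message)
print(topic, payload)
# A node would then publish `payload` on `topic` via MQTT or HTTP to the cloud.
```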

    End-user Application for Early Forest Fire Detection and Prevention

    In this paper, we describe a Web application that has been designed and implemented by Fulda University of Applied Sciences in the context of the ASPires project. The application extends the functionality available to Crisis Management Centers (CMC). Actual readings from sensors installed in the test areas, for example national parks, are made available to CMC personnel, as well as pictures from cameras that are either mounted on stationary observation towers or taken by Unmanned Aerial Vehicles (UAVs) in the area of an actual or suspected forest fire. Data are transmitted to the ASPires cloud and delivered swiftly to the Web application via an open interface. Furthermore, fire alarms raised by novel detection algorithms are forwarded automatically to the application. This clearly improves the potential for the early detection of forest fires in rural areas.
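
The following hypothetical Python sketch shows how such a Web client could pull the latest readings and fire alarms from a cloud-side open interface; the base URL, paths and JSON shapes are invented for illustration and are not the actual ASPires API.

```python
# Hypothetical client for an open REST interface; the endpoint layout is assumed.
import requests

BASE_URL = "https://cloud.example.org/api/v1"   # placeholder, not a real endpoint

def latest_readings(area_id):
    """Fetch the most recent sensor readings for one monitored area."""
    resp = requests.get(f"{BASE_URL}/areas/{area_id}/readings", timeout=10)
    resp.raise_for_status()
    return resp.json()    # e.g. [{"sensor": "smoke", "value": 0.02, ...}, ...]

def active_alarms(area_id):
    """Fetch fire alarms raised by the detection algorithms for that area."""
    resp = requests.get(f"{BASE_URL}/areas/{area_id}/alarms", timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for reading in latest_readings("national-park-01"):
        print(reading)
    for alarm in active_alarms("national-park-01"):
        print("ALARM:", alarm)
```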

    State of the Art of Audio- and Video-Based Solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, which may hinder one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    The applications of autonomous systems to forestry management

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 132-137).
Public and private timberland owners continually search for new, cost-effective methods to monitor and nurture their timber stand investments. Common management tasks include monitoring tree growth and tree health, estimating timber value and preventing wildfire. Many of these tasks are both manual and costly due to the vast areas and remote locations involved. Forestry experts predict that multi-vehicle autonomous systems may enable new, cost-effective methods for performing various forest management tasks [1]. However, it remains unclear how these technologies may be applied, or where to focus development efforts. This research attempts to address this gap in the literature, linking state-of-the-art research in forestry management science, robotics and autonomous systems, and product design and development. This thesis begins by reviewing existing forestry management practices and discussing a number of challenges identified through industry interviews and research. Modern product design methods are reviewed and used to generate ideas for a number of new concept systems. Three design concepts are presented as detailed case studies. The data sets, methods and proposed systems discussed in this thesis may be used to guide future research in forestry management science, and drive further innovation in the emerging field of commercial and civilian autonomous systems. Keywords: Forestry Management, Forestry Science, Robotics and Autonomous Systems, Unmanned Aerial Vehicles (UAV), Unmanned Aerial Systems (UAS), Product Design and Development, Light Detection and Ranging (LiDAR). By Joshua Przybylko, S.M., M.B.A.

    Proceedings, MSVSCC 2013

    Proceedings of the 7th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 11, 2013 at VMASC in Suffolk, Virginia.

    Intrusion Detection for Cyber-Physical Attacks in Cyber-Manufacturing System

    In the vision of the Cyber-Manufacturing System (CMS), physical components such as products, machines, and tools are connected, identifiable and able to communicate via the industrial network and the Internet. This integration of connectivity gives manufacturing systems access to computational resources such as cloud computing, digital twins, and blockchain. Connected manufacturing systems are expected to be more efficient, sustainable and cost-effective. However, the extensive connectivity also increases the vulnerability of the physical components: the attack surface of a connected manufacturing environment is greatly enlarged, and machines, products and tools can be targeted by cyber-physical attacks via the network. Among many emerging security concerns, this research focuses on the intrusion detection of cyber-physical attacks. The Intrusion Detection System (IDS) is used to monitor cyber-attacks in the computer security domain; for cyber-physical attacks, however, there is limited work. Currently, the IDS cannot effectively address cyber-physical attacks in manufacturing systems: (i) the IDS takes time to reveal true alarms, sometimes over months; (ii) the manufacturing production life-cycle is shorter than the detection period, which can cause physical consequences such as defective products and equipment damage; (iii) the increasing complexity of the network will make the detection period even longer. This gap leaves cyber-physical attacks in manufacturing free to cause issues such as over-wearing, breakage, defects or any other changes that the original design did not intend. A review of the history of cyber-physical attacks and of available detection methods is presented. The detection methods are reviewed in terms of intrusion detection algorithms and alert correlation methods. The attacks are further broken down into a taxonomy covering four dimensions, with over thirty attack scenarios, to comprehensively study and simulate cyber-physical attacks. A new intrusion detection and correlation method is proposed to address cyber-physical attacks in CMS. The detection method incorporates IDS software in the cyber domain and machine learning analysis in the physical domain. The correlation relies on a new similarity-based cyber-physical alert correlation method. Four experimental case studies were used to validate the proposed method, each focusing on a different aspect of the correlation method's performance. The experiments were conducted on a security-oriented manufacturing testbed established for this research at Syracuse University. The results showed that the proposed intrusion detection and alert correlation method can effectively disclose unknown attacks, known attacks and attack interference that causes false alarms. In case study one, the alarm reduction rate reached 99.1%, with an improvement of detection accuracy from 49.6% to 100%. The case studies also proved that the proposed method can mitigate false alarms, detect attacks on multiple machines, and detect attacks from the supply chain. This work contributes to the security domain in cyber-physical manufacturing systems, with a focus on intrusion detection. The dataset collected during the experiments has been shared with the research community. The alert correlation methodology also contributes to other cyber-physical systems, such as the smart grid and connected vehicles, which require enhanced security protection in today's connected world.
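
As a generic illustration of similarity-based cyber-physical alert correlation (not the dissertation's actual method), the Python sketch below scores a cyber-domain IDS alert against a physical-domain anomaly alert using shared machine identity and time proximity; the attribute names, weights and threshold are assumptions.

```python
# Generic illustration: correlate a cyber alert with a physical alert through
# a simple similarity score. Fields, weights and threshold are assumed values.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # "cyber" (IDS) or "physical" (machine-side analysis)
    machine_id: str    # which machine the alert refers to
    timestamp: float   # epoch seconds

def similarity(cyber: Alert, physical: Alert, window: float = 60.0) -> float:
    """Score in [0, 1]: same machine and close in time -> likely one attack."""
    same_machine = 1.0 if cyber.machine_id == physical.machine_id else 0.0
    dt = abs(cyber.timestamp - physical.timestamp)
    time_score = max(0.0, 1.0 - dt / window)
    return 0.5 * same_machine + 0.5 * time_score

cyber_alert = Alert("cyber", "mill-03", 1000.0)        # e.g. suspicious file upload
physical_alert = Alert("physical", "mill-03", 1020.0)  # e.g. abnormal spindle load

if similarity(cyber_alert, physical_alert) > 0.7:
    print("correlated cyber-physical alert: probable attack on mill-03")
else:
    print("alerts treated as independent (possible false alarm)")
```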