
    Mirror mirror on the wall... an unobtrusive intelligent multisensory mirror for well-being status self-assessment and visualization

    A person’s well-being status is reflected by their face through a combination of facial expressions and physical signs. The SEMEOTICONS project translates the semeiotic code of the human face into measurements and computational descriptors that are automatically extracted from images, videos and 3D scans of the face. SEMEOTICONS developed a multisensory platform in the form of a smart mirror to identify signs related to cardio-metabolic risk. The aim was to enable users to self-monitor their well-being status over time and to guide them towards a healthier lifestyle. Significant scientific and technological challenges were addressed to build the multisensory mirror, from touchless data acquisition to real-time processing and integration of multimodal data.

    Persistent homology to analyse 3D faces and assess body weight gain

    In this paper, we analyse patterns in face shape variation due to weight gain. We propose the use of persistent homology descriptors to capture geometric and topological information about the configuration of anthropometric 3D face landmarks. In this way, evaluating face changes boils down to comparing the descriptors computed on 3D face scans taken at different times. By applying dimensionality reduction techniques to the dissimilarity matrix of descriptors, we obtain a space in which each face is a point and face shape variations are encoded as trajectories in that space. Our results show that persistent homology is able to identify features that are well related to being overweight and may help in assessing individual weight trends. The research was carried out in the context of the European project SEMEOTICONS, which developed a multisensory platform that detects and monitors facial signs of cardio-metabolic risk over time.
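
    As an illustration of the pipeline summarised above, the minimal Python sketch below assumes the ripser, persim and scikit-learn libraries: a persistence diagram is computed from each set of 3D face landmarks, pairwise bottleneck distances form the dissimilarity matrix of descriptors, and multidimensional scaling embeds each scan as a point, so that a subject's successive scans trace a trajectory. The specific descriptor (H1 Vietoris-Rips persistence) and the bottleneck distance are illustrative choices, not necessarily those used in the paper.

        # Illustrative sketch (not the authors' code): persistence descriptors of
        # 3D face landmarks, pairwise diagram distances, and a 2D embedding.
        import numpy as np
        from ripser import ripser            # Vietoris-Rips persistence
        from persim import bottleneck        # bottleneck distance between diagrams
        from sklearn.manifold import MDS

        def h1_diagram(landmarks):
            """H1 persistence diagram of an (n_landmarks, 3) point cloud."""
            return ripser(landmarks, maxdim=1)['dgms'][1]

        def embed_faces(scans, n_components=2):
            """scans: list of (n_landmarks, 3) arrays, one per acquisition."""
            diagrams = [h1_diagram(s) for s in scans]
            n = len(diagrams)
            D = np.zeros((n, n))             # dissimilarity matrix of descriptors
            for i in range(n):
                for j in range(i + 1, n):
                    D[i, j] = D[j, i] = bottleneck(diagrams[i], diagrams[j])
            # Each scan becomes a point; a subject's weight trend is a trajectory.
            mds = MDS(n_components=n_components, dissimilarity='precomputed')
            return mds.fit_transform(D)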

    Face morphology: Can it tell us something about body weight and fat?

    This paper proposes a method for the automatic extraction of geometric features, related to weight parameters, from 3D facial data acquired with low-cost depth scanners. The novelty of the method lies both in the processing of the 3D facial data and in the definition of the geometric features, which are conceptually simple, robust against noise and pose-estimation errors, computationally efficient, and invariant with respect to rotation, translation, and scale changes. Experimental results show that these measurements are highly correlated with weight, BMI, and neck circumference, and well correlated with waist and hip circumference, which are markers of central obesity. Therefore, the proposed method strongly supports the development of interactive, non-obtrusive systems able to support the detection of weight-related problems.
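
    The abstract does not specify the feature definitions, so the example below is only a hedged sketch of the kind of invariants described: rotation-, translation- and scale-invariant quantities computable from facial landmarks, here taken as normalised ratios of inter-landmark distances, whose correlation with BMI is then checked. The function names and the ratio-based features are assumptions for illustration, not the paper's actual measurements.

        # Hypothetical sketch: scale/rotation/translation-invariant facial
        # features as normalised inter-landmark distance ratios, correlated
        # with BMI across subjects (the paper's actual features may differ).
        import numpy as np
        from itertools import combinations
        from scipy.stats import pearsonr

        def invariant_features(landmarks):
            """landmarks: (n_landmarks, 3) array of 3D facial landmarks."""
            d = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                          for i, j in combinations(range(len(landmarks)), 2)])
            return d / d.max()   # dividing by the largest distance removes scale

        def bmi_correlations(feature_matrix, bmi):
            """Pearson correlation of each feature with BMI over subjects."""
            return np.array([pearsonr(feature_matrix[:, k], bmi)[0]
                             for k in range(feature_matrix.shape[1])])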

    Methods for acquisition and integration of personal wellness parameters

    Wellness indicates the state or condition of being in good physical and mental health. Stress is a common state of emotional strain that plays a crucial role in everyday quality of life. Nowadays, there is growing individual awareness of the importance of a proper lifestyle and a generalized trend towards taking an active part in monitoring, preserving, and improving personal wellness in both its physical and emotional aspects. The majority of studies in this field rely on evaluating the changes of sensed parameters when passing from rest to “maximal” stress. However, most people usually experience stressful circumstances in everyday life. This led us to investigate the impact of mild cognitive activation, which is comparable to the usual situations that everyone can face in daily life. Several signals and data can be useful to characterize the state of a person, but not all of them are equally important, so it is crucial to analyse the mutual relevance of the different pieces of information. In this work, we focus on a subset of well-established psychophysical descriptors and identify a set of devices enabling the measurement of these parameters. The design of the experimental setup and the selection of sensing devices were driven by qualitative criteria such as intrusiveness, reliability, and ease of use, which are deemed crucial for implementing effective (self-)monitoring strategies. A reference dataset, named “Mild Cognitive Activation” (MCA), was collected. The final aim of the project was the definition of a quantitative model for data integration providing a concise description of a person's wellness status. This process was based on unsupervised learning paradigms. Data from MCA were integrated with data from the “Stress Recognition in Automobile Drivers” dataset, which allowed a cross-validation of the integration methodology.
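
    The abstract mentions an unsupervised model that integrates the sensed parameters into a concise wellness description without detailing it. The snippet below is a minimal sketch under that assumption, using z-score standardisation followed by a one-component PCA purely as a stand-in for the actual integration model.

        # Minimal sketch (assumed, not the project's actual model): integrate
        # heterogeneous psychophysical parameters into one concise score.
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        def wellness_score(samples):
            """samples: (n_sessions, n_parameters) array, e.g. heart rate,
            skin conductance and respiration rate per recording session."""
            z = StandardScaler().fit_transform(samples)           # common scale
            return PCA(n_components=1).fit_transform(z).ravel()   # one score per session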

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary Data Capture and Interactive Systems (DCIS) involve various technical complexities such as multimodal data types, diverse hardware and software components, time-synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires addressing these complexities before the intended and purposeful functionalities can be attained. The technical issues are often common and similar among diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to address the technical complexities in building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application's requirements. UbiITS includes fundamental software components, techniques, system-level layer abstractions and a reference architecture that together enable the systematic construction of complex DCISs. This work details four case studies to showcase the versatility and extensibility of the UbiITS framework's functionalities and to demonstrate how it was employed to successfully solve a range of technical requirements. In each case, UbiITS operated as the core element of the application. Additionally, these case studies are novel systems in themselves in each of their domains. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled a functional system infrastructure to be established in these cases, essentially opening up new lines of research in each discipline; these research approaches would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be implemented directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extendibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394
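
    The UbiITS code itself is not shown in the abstract; the sketch below is a hypothetical illustration of the kind of abstraction such a framework provides, in which sensors emit timestamped samples and a synchroniser groups heterogeneous streams into frames on a common clock. Class and method names are invented for the example and are not the UbiITS API.

        # Hypothetical illustration only (not the UbiITS API): timestamped
        # multimodal samples aligned into common time windows.
        from dataclasses import dataclass, field
        from typing import Any, Dict, List

        @dataclass
        class Sample:
            sensor_id: str
            timestamp: float          # seconds on a shared reference clock
            value: Any

        @dataclass
        class Synchroniser:
            window_s: float = 0.05    # width of an alignment window, in seconds
            buffer: List[Sample] = field(default_factory=list)

            def push(self, sample: Sample) -> None:
                self.buffer.append(sample)

            def aligned_frames(self) -> List[Dict[str, Sample]]:
                """One frame per time window, keyed by sensor id."""
                frames: Dict[int, Dict[str, Sample]] = {}
                for s in sorted(self.buffer, key=lambda s: s.timestamp):
                    frames.setdefault(int(s.timestamp / self.window_s), {})[s.sensor_id] = s
                return [frames[k] for k in sorted(frames)]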

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications.

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.

    In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance that other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with an outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches for co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.