
    3D-LIVE: live interactions through 3D visual environments

    This paper explores Future Internet (FI) 3D-Media technologies and the Internet of Things (IoT) in real and virtual environments, in order to sense and experiment with real-time interaction in live situations. Combining FI testbeds and Living Labs (LL) would enable both researchers and users to explore the capacity to enter the 3D Tele-Immersive (TI) application market and to establish new requirements for FI technology and infrastructure. It is expected that combining FI technology push and TI market pull would promote and accelerate the creation and adoption of innovative TI services within sport events, by user communities such as sport practitioners.

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of the context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity and the number of gestures in taxonomies, and to improve usability. To validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different viewpoints, whilst the usability tests have yielded high scores. Further investigation of the context information has tackled the problem of user status, intended as human activity, for which a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique achieves good activity recognition accuracy. Context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
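
    To make the idea of functional gestures concrete, the following is a minimal sketch of context-dependent gesture resolution: a single gesture maps to an abstract intent, and the context (which device is targeted and its current state) resolves that intent to a concrete command, shrinking the gesture taxonomy. All class names, fields, and commands here are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of context-dependent ("functional") gesture mapping.
# One deictic gesture ("point") resolves to different commands depending
# on which device the user points at and that device's current state.
# All names below are illustrative, not taken from the thesis.

from dataclasses import dataclass

@dataclass
class Context:
    target_device: str   # device resolved from the deictic gesture direction
    device_state: str    # e.g. "on" / "off"

def resolve(gesture: str, ctx: Context) -> str:
    """Resolve one functional gesture into a concrete command via context."""
    if gesture == "point":
        # Toggle semantics: the same gesture switches the device on or off.
        action = "turn_off" if ctx.device_state == "on" else "turn_on"
        return f"{action}({ctx.target_device})"
    if gesture == "raise_hand":
        return f"increase_level({ctx.target_device})"
    return "ignore"

print(resolve("point", Context("lamp", "off")))   # turn_on(lamp)
print(resolve("point", Context("lamp", "on")))    # turn_off(lamp)
```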

    State of the Art of Audio- and Video-Based Solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalises on the growing pervasiveness and effectiveness of sensing and computing facilities to supply people in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is overviewed.

    Instrumentation and validation of a robotic cane for transportation and fall prevention in patients with affected mobility

    Dissertation for the Integrated Master's degree in Engineering Physics (specialisation in Devices, Microsystems and Nanotechnologies). The act of walking is known to be the primitive form of human locomotion, and it brings many benefits that motivate a healthy and active lifestyle. However, there are health conditions that make walking difficult, which can consequently result in worsening health and lead to a greater risk of falls. Thus, the development of a fall detection and prevention system integrated into a walking aid would be essential to reduce these fall events and improve people's quality of life. To overcome these needs and limitations, this dissertation aims to validate and instrument a cane-type robot, called the Anti-fall Robotic Cane (ARCane), designed to incorporate a fall detection system and an actuation mechanism that allow the prevention of falls while assisting gait.
Therefore, a state-of-the-art review of robotic canes was carried out to acquire a broad and in-depth knowledge of the components, mechanisms and strategies used, as well as the experimental protocols, main results, limitations and challenges of existing devices. In a first stage, the objectives were to: (i) enhance the product's mission statement; (ii) study the consumer needs; and (iii) update the target specifications of the ARCane, continuing previous team work, in order to obtain a product with a market-compatible design and engineering that meets the needs and desires of ARCane users. The hardware architecture of the ARCane was then established, and the electronic components that instrument the control, sensory, actuator and power units were discussed; these were afterwards subjected to interoperability tests to validate the singular and collective functioning of the cane's components. Regarding motion control, an innovative, cost-effective and intuitive motion control system was developed, providing recognition of the user's movement intention and identification of the user's gait phases. This implementation was validated with six healthy volunteers who carried out gait trials with the ARCane, in order to test its operability in a real-context environment. Using the proposed motion control system, an accuracy of 97% was achieved for user motion intention recognition and 90% for user gait phase recognition. Finally, a fall detection method and a fall prevention mechanism were devised for future implementation in the ARCane, based on methods applied to robotic canes in the literature. An improvement of the fall detection method was also proposed in order to overcome its associated limitations, together with detection devices to be implemented in the ARCane to achieve a complete fall detection system.
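
    The abstract does not detail the motion control algorithm itself; the sketch below only illustrates the general shape of such a system, assuming a force sensor in the cane handle and a simple hysteresis threshold for movement intention. Both the sensor choice and the thresholds are assumptions made here for illustration.

```python
# Minimal sketch of threshold-based motion-intention detection for a
# robotic cane. Assumes a handle force sensor sampled in short windows;
# hysteresis (separate start/stop thresholds) avoids rapid state flapping.
# Illustrative only; not the ARCane's published algorithm.

def detect_intention(force_window, start_thr=5.0, stop_thr=2.0, prev="idle"):
    """Classify the user's intention from recent handle-force samples (N)."""
    avg = sum(force_window) / len(force_window)  # smooth out sensor noise
    if prev == "idle" and avg > start_thr:
        return "walk"      # user leans on the cane to start moving
    if prev == "walk" and avg < stop_thr:
        return "idle"      # force released: user wants to stop
    return prev            # hysteresis: otherwise keep the current state

state = "idle"
for window in ([0.5, 0.8, 1.0], [4.9, 6.2, 7.1], [6.8, 5.5, 4.0], [1.5, 1.0, 0.4]):
    state = detect_intention(window, prev=state)
    print(state)   # idle, walk, walk, idle
```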

    Unsupervised monitoring of an elderly person's activities of daily living using Kinect sensors and a power meter

    The need for greater independence amongst the growing population of elderly people has made the concept of “ageing in place” an important area of research. Remote home monitoring strategies help the elderly deal with challenges involved in ageing in place and performing the activities of daily living (ADLs) independently. These monitoring approaches typically involve the use of several sensors, attached to the environment or person, in order to acquire data about the ADLs of the occupant being monitored. Some key drawbacks associated with many of the ADL monitoring approaches proposed for the elderly living alone need to be addressed. These include the need to label a training dataset of activities, use wearable devices or equip the house with many sensors. These approaches are also unable to concurrently monitor physical ADLs to detect emergency situations, such as falls, and instrumental ADLs to detect deviations from the daily routine. These are all indicative of deteriorating health in the elderly. To address these drawbacks, this research aimed to investigate the feasibility of unsupervised monitoring of both physical and instrumental ADLs of elderly people living alone via inexpensive minimally intrusive sensors. A hybrid framework was presented which combined two approaches for monitoring an elderly occupant’s physical and instrumental ADLs. Both approaches were trained based on unlabelled sensor data from the occupant’s normal behaviours. The data related to physical ADLs were captured from Kinect sensors and those related to instrumental ADLs were obtained using a combination of Kinect sensors and a power meter. Kinect sensors were employed in functional areas of the monitored environment to capture the occupant’s locations and 3D structures of their physical activities. The power meter measured the power consumption of home electrical appliances (HEAs) from the electricity panel. A novel unsupervised fuzzy approach was presented to monitor physical ADLs based on depth maps obtained from Kinect sensors. Epochs of activities associated with each monitored location were automatically identified, and the occupant’s behaviour patterns during each epoch were represented through the combinations of fuzzy attributes. A novel membership function generation technique was presented to elicit membership functions for attributes by analysing the data distribution of attributes while excluding noise and outliers in the data. The occupant’s behaviour patterns during each epoch of activity were then classified into frequent and infrequent categories using a data mining technique. Fuzzy rules were learned to model frequent behaviour patterns. An alarm was raised when the occupant’s behaviour in new data was recognised as frequent with a longer than usual duration or infrequent with a duration exceeding a data-driven value. Another novel unsupervised fuzzy approach to monitor instrumental ADLs took unlabelled training data from Kinect sensors and a power meter to model the key features of instrumental ADLs. Instrumental ADLs in the training dataset were identified based on associating the occupant’s locations with specific power signatures on the power line. A set of fuzzy rules was then developed to model the frequency and regularity of the instrumental activities tailored to the occupant. This set was subsequently used to monitor new data and to generate reports on deviations from normal behaviour patterns. 
As a proof of concept, the proposed monitoring approaches were evaluated using a dataset collected from a real-life setting. An evaluation of the results verified the high accuracy of the proposed technique in identifying the epochs of activities compared with alternative techniques. The approach adopted for monitoring physical ADLs was found to improve elderly monitoring. It generated fuzzy rules that could represent the person's physical ADLs and exclude noise and outliers in the data more efficiently than alternative approaches. The performance of different membership function generation techniques was compared. The fuzzy rule set obtained from the output of the proposed technique could accurately classify more scenarios of normal and abnormal behaviours. The approach for monitoring instrumental ADLs was also found to reliably distinguish power signatures generated automatically by self-regulated devices from those generated as a result of an elderly person's instrumental ADLs. The evaluations also showed the effectiveness of the approach in correctly identifying elderly people's interactions with specific HEAs and tracking simulated upward and downward deviations from normal behaviours. The fuzzy inference system in this approach was found to be robust with regard to errors when identifying instrumental ADLs, as it could effectively classify normal and abnormal behaviour patterns despite errors in the list of the used HEAs.
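
    As a rough illustration of the unsupervised flavour of this work, the sketch below derives a triangular membership function for "usual activity duration" from unlabelled data, using percentiles so that outliers carry little weight. The thesis's actual membership function generation technique is more elaborate; the percentile choices and the data here are assumptions for illustration only.

```python
# Minimal sketch: fit a triangular fuzzy membership function for "usual
# activity duration" to unlabelled training data. Percentiles are used so
# that noise and outliers have little influence on the fitted function.
# Illustrative only; not the thesis's actual generation technique.

import numpy as np

def usual_duration_mf(durations):
    """Return a triangular membership function fitted to the data bulk."""
    lo, mid, hi = np.percentile(durations, [10, 50, 90])  # robust to outliers
    def mu(x):
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)
    return mu

# Sleep-epoch durations in minutes; 2000 is a sensor glitch (outlier).
training = [420, 450, 480, 465, 440, 720, 2000, 430, 455]
mu = usual_duration_mf(training)
for d in (460, 700, 1500):
    print(d, round(mu(d), 2))  # ~0.99 (usual), ~0.53 (borderline), 0.0 (alarm)
```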

    Design and Experimental Evaluation of a Context-aware Social Gaze Control System for a Humanlike Robot

    Nowadays, social robots are increasingly being developed for a variety of human-centred scenarios in which they interact with people. For this reason, they should possess the ability to perceive and interpret human non-verbal and verbal communicative cues in a humanlike way. In addition, they should be able to autonomously identify the most important interactional target at the proper time by exploring the perceptual information, and to exhibit a believable behaviour accordingly. Employing a social robot with such capabilities has several positive outcomes for human society. This thesis presents a multilayer context-aware gaze control system that has been implemented as part of a humanlike social robot. Using this system, the robot is able to mimic human perception, attention, and gaze behaviour in a dynamic multiparty social interaction. The system enables the robot to appropriately direct its gaze, at the right time, to environmental targets and to the humans who are interacting with each other and with the robot. To this end, the attention mechanism of the gaze control system is based on features that have been proven to guide human attention: verbal and non-verbal cues, proxemics, the effective field of view, the habituation effect, and low-level visual features. The gaze control system uses skeleton tracking and speech recognition, facial expression recognition, and salience detection to implement these features. As part of a pilot evaluation, the gaze behaviour of 11 participants was collected with a professional eye-tracking device while they watched a video of two-person interactions. By analysing the average gaze behaviour of the participants, the importance of human-relevant features in triggering human attention was determined. Based on this finding, the parameters of the gaze control system were tuned to imitate human behaviour in selecting features of the environment. A comparison between the human gaze behaviour and the gaze behaviour of the developed system running on the same videos shows that the proposed approach is promising, as it replicated human gaze behaviour 89% of the time.
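
    The abstract describes the attention mechanism as a combination of human-attention features whose weights are tuned against recorded human gaze. A minimal sketch of that idea follows: each candidate target receives a weighted feature sum, damped by habituation, and the robot gazes at the highest-scoring target. The feature names, weights, and damping scheme are assumptions for illustration, not the thesis's actual model.

```python
# Minimal sketch of multi-feature attention scoring for gaze control:
# score each candidate target by a weighted sum of attention features,
# damp by habituation (to avoid staring), and gaze at the argmax.
# Weights and features here are illustrative placeholders.

WEIGHTS = {"speaking": 0.4, "gesturing": 0.2, "proximity": 0.2, "salience": 0.2}

def attention_score(features, habituation):
    """Weighted feature sum, damped by how long the target was attended."""
    raw = sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return raw * (1.0 - habituation)  # habituation in [0, 1)

targets = {
    "person_A": ({"speaking": 1.0, "proximity": 0.6}, 0.3),   # attended recently
    "person_B": ({"gesturing": 1.0, "proximity": 0.9}, 0.0),
    "screen":   ({"salience": 0.8}, 0.5),
}
best = max(targets, key=lambda t: attention_score(*targets[t]))
print(best)  # person_B: gesturing and close, and not yet habituated
```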

    Model-driven Personalisation of Human-Computer Interaction across Ubiquitous Computing Applications

    Personalisation is essential to Ubiquitous Computing (Ubicomp), which follows a human-centred paradigm aiming to provide each of its users with interaction through adaptive content, services, and interfaces, according to the context of the applications' scenarios. However, providing that appropriate personalised interaction is a true challenge, for reasons such as differing user interests, heterogeneous environments and devices, dynamic user behaviour, and data capture. This dissertation focuses on a model-driven personalisation solution whose main goal is to facilitate the implementation of personalised human-computer interaction across different Ubicomp scenarios and applications. The research reported here investigates how a generic and interoperable model for personalisation can be used, shared and processed by different applications, among diverse devices, and across different scenarios, studying how it can enrich human-computer interaction. The research started with the definition of a consistent user model integrating context, and ended with a pervasive model for the definition of personalisations across different applications. Besides the proposed model, the other key contributions of the solution are the modelling framework, which encapsulates the model and integrates the user profiling module, and a cloud-based platform to pervasively support developers in the implementation of personalisation across different applications and scenarios. This platform provides tools to put end users in control of their data and to support developers through web-service-based operations implemented on top of a personalisation API, which can also be used independently of the platform, for testing purposes for instance. Several Ubicomp application prototypes were designed and used to evaluate, at different phases, both the solution as a whole and each of its components. Some were specially created with the goal of evaluating specific research questions of this work. Others were being developed for purposes other than personalisation evaluation, but ended up as personalised prototypes to better address their initial goals. The process of applying the personalisation model to the design of the latter should also work as a proof of concept on the developer side. On the one hand, developers have been probed with the implementation of personalised applications using the proposed solution, or a part of it, to assess how it works and how it can help them. The usage of our solution by developers was also important to assess how the model and the platform respond to developers' needs. On the other hand, some prototypes implementing our model-driven personalisation solution were selected for end-user evaluation. Usually, user testing was conducted at two stages of development, using: (1) a non-personalised version; and (2) the final personalised version. This procedure allowed us to assess whether personalisation improved human-computer interaction. The first stage was also important to learn who the end users were and to gather interaction data from which to derive personalisation proposals for each prototype. Globally, the results of both the developer and end-user tests were very positive.
Finally, this dissertation proposes further work, already ongoing, on a methodology for the implementation and evaluation of personalised applications, supported by the development of three mobile health applications for rehabilitation.
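
    As a small illustration of what a generic, interoperable user model might look like in code, the sketch below shares one model across applications and resolves a setting from an app-specific override down to a shared preference. The field names and the override convention are hypothetical; the dissertation's actual model, framework, and API are considerably richer.

```python
# Minimal sketch of a generic, application-independent user model that
# different Ubicomp applications can share. An app-specific preference
# (keyed "app.setting") overrides the shared one; otherwise the shared
# model value or a default applies. Names are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class UserModel:
    user_id: str
    preferences: dict = field(default_factory=dict)   # e.g. {"font_size": "large"}
    context: dict = field(default_factory=dict)       # e.g. {"location": "home"}

def personalise(model: UserModel, app: str, key: str, default):
    """Resolve a setting: app-specific override first, then the shared model."""
    return model.preferences.get(f"{app}.{key}",
           model.preferences.get(key, default))

u = UserModel("u42", {"font_size": "large", "rehab_app.font_size": "x-large"})
print(personalise(u, "rehab_app", "font_size", "medium"))  # x-large
print(personalise(u, "news_app", "font_size", "medium"))   # large
```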

    Determining principles for the development of virtual environments for future clinical applications

    The aim of the present research was to determine a range of principles for the development of virtual natural environments (VNEs), using low-cost commercial off-the-shelf simulation technologies, for bedside and clinical healthcare applications. A series of studies was conducted to systematically investigate different aspects of VNEs with a wide variety of participants, ranging from undergraduate and postgraduate students, hospital patients and clinicians, to West Country villagers. The results of these studies suggest that naturalistic environmental spatial sounds can have a positive impact on user ratings of presence and stress levels. High visual fidelity and real-world-based VNEs can increase participants' reported ratings of presence, quality and realism. The choice of input devices also has a significant impact on usability with these types of virtual environment (VE). Overall, the findings provide a strong set of principles supporting the future development of VNEs. Highly transferable tools and techniques have also been developed to investigate the exploitation of new digital technology approaches in the generation of believable and engaging real-time, interactive VNEs that can be modified and updated relatively easily, thereby delivering a system that can evolve to meet the needs of individual patients.