
    Activities recognition and worker profiling in the intelligent office environment using a fuzzy finite state machine

    Analysing office workers' daily working activities in an intelligent office environment can be used to optimise energy consumption as well as the workers' comfort. To this end, it is essential to recognise office workers' activities, including short breaks, meetings and non-computer activities, so that an optimum control strategy can be implemented. In this paper, fuzzy finite state machines are used to model an office worker's behaviour. The model incorporates sensory data collected from the environment as its input, and a set of pre-defined fuzzy states is used to develop the model. Experimental results are presented to illustrate the effectiveness of this approach. The activity models of different individual workers, as inferred from the sensory devices, can be distinguished. However, further investigation is required to create a more complete model.
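    A fuzzy finite state machine of this kind can be sketched in a few lines. The states, the triangular membership functions, and the blending rule below are illustrative assumptions (the paper does not publish its model); the input feature is a hypothetical keyboard/mouse event rate.

```python
# Hypothetical fuzzy finite state machine for office activity recognition.
# States, membership functions, and the transition rule are illustrative
# assumptions, not the paper's actual model.

def trimf(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Membership of each fuzzy state given a single input feature:
# keyboard/mouse events per minute.
STATE_MF = {
    "short_break": lambda r: trimf(r, -1, 0, 5),
    "meeting":     lambda r: trimf(r, 0, 5, 20),
    "working":     lambda r: trimf(r, 10, 40, 80),
}

def step(memberships, rate, inertia=0.5):
    """One fuzzy-FSM update: blend the previous state memberships with
    the memberships suggested by the new sensor reading."""
    return {state: inertia * memberships.get(state, 0.0) + (1 - inertia) * mf(rate)
            for state, mf in STATE_MF.items()}

m = {"working": 1.0}          # start fully in the 'working' state
for rate in [45, 30, 2, 0]:   # activity drops off -> drift toward 'short_break'
    m = step(m, rate)
print(max(m, key=m.get))      # most active fuzzy state after the sequence
```

    Unlike a crisp state machine, all states stay partially active at once, which is what lets the model tolerate noisy sensor readings.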

    Central monitoring system for ambient assisted living

    Smart homes for aged care enable the elderly to stay in their own homes longer. By means of various types of ambient and wearable sensors, information is gathered on people living in smart homes for aged care. This information is then processed to determine the activities of daily living (ADL) and provide vital information to carers. Many examples of smart homes for aged care can be found in the literature; however, little or no evidence can be found with respect to the interoperability of the various sensors and devices along with their associated functions. One key element with respect to interoperability is the central monitoring system in a smart home. This thesis analyses and presents the key functions and requirements of a central monitoring system. The outcomes of this thesis may benefit developers of smart homes for aged care.

    Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. The framework can be applied not only to homogeneous sensor modalities but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
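    The conv-then-recurrent idea can be illustrated with a toy NumPy forward pass: a 1-D convolution extracts local features across all sensor channels, and a single LSTM cell then models their temporal dynamics. All shapes, sizes, and the random weights are illustrative; the paper's actual architecture is much deeper.

```python
import numpy as np

# Toy sketch of the conv + LSTM pipeline for multimodal HAR. Shapes and
# weights are illustrative assumptions, not the paper's architecture.
rng = np.random.default_rng(0)
T, C, F, K, H = 32, 6, 8, 5, 16   # time steps, channels, filters, kernel, hidden

x = rng.standard_normal((T, C))            # one window of multimodal sensor data
W_conv = rng.standard_normal((K, C, F)) * 0.1

# 1-D convolution over time (valid padding), fusing all channels per filter
conv = np.stack([np.einsum('kc,kcf->f', x[t:t + K], W_conv)
                 for t in range(T - K + 1)])   # (T-K+1, F)
conv = np.maximum(conv, 0.0)                   # ReLU

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Minimal LSTM cell run over the conv feature sequence
Wx = rng.standard_normal((F, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
h, c = np.zeros(H), np.zeros(H)
for f_t in conv:
    gates = f_t @ Wx + h @ Wh
    i, f, o, g = np.split(gates, 4)            # input, forget, output, candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

print(h.shape)   # final hidden state summarising the window
```

    In the real framework this hidden state would feed a softmax classifier over activity labels, and the conv and LSTM weights would be trained end to end.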

    In-home monitoring system based on WiFi fingerprints for ambient assisted living

    This paper presents an in-home monitoring system based on WiFi fingerprints for Ambient Assisted Living. WiFi fingerprints are used to continuously locate a patient in the different rooms of her/his home. The experiments performed yield a correct location rate of 96% in the best case of all studied scenarios. The behavior captured by location monitoring makes it possible to detect anomalous behavior, such as long stays in rooms outside the common schedule. The main characteristics of the presented system are: a) it is robust enough to work without a dedicated WiFi access point, which makes it a very affordable solution; b) low obtrusiveness, as it is based on the use of a mobile phone; c) high interoperability with other wireless connections (Bluetooth, RFID) present in current mobile phones; and d) alarms are triggered when any anomalous behavior is detected.
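    The core of WiFi fingerprint localization can be sketched as a nearest-neighbour match between a fresh RSSI scan and stored per-room fingerprints. The rooms, RSSI values, and the 1-NN Euclidean rule below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Minimal sketch of WiFi-fingerprint room localization: each room stores a
# fingerprint of mean RSSI values (dBm) from visible access points, and a
# new scan is assigned to the closest fingerprint. Values are illustrative.
FINGERPRINTS = {                 # mean RSSI from 3 access points per room
    "kitchen":  np.array([-45.0, -70.0, -80.0]),
    "bedroom":  np.array([-75.0, -50.0, -78.0]),
    "bathroom": np.array([-82.0, -72.0, -48.0]),
}

def locate(scan):
    """Return the room whose stored fingerprint is closest (Euclidean)."""
    return min(FINGERPRINTS, key=lambda room: np.linalg.norm(scan - FINGERPRINTS[room]))

scan = np.array([-47.0, -69.0, -83.0])   # a fresh scan near the kitchen AP
print(locate(scan))                       # -> kitchen
```

    A sequence of such room estimates over time is what the paper's anomaly rules (e.g., unusually long stays in one room) would then operate on.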

    MULTI-WEAR: A Multi-Wearable Platform for Enhancing Mobile Experiences

    The uptake of wearable technology suggests that the time is ripe to explore new opportunities for improving mobile experiences. Apps, however, are not keeping up with the pace of technological advancement because wearables are treated as standalone devices, although their individual capabilities better classify them as peripherals with complementary roles. We foresee that the next generation of apps will orchestrate multiple wearable devices to enhance mobile user experiences. However, there is currently limited support for combining heterogeneous devices. This paper introduces Multi-Wear, a platform to scaffold the development of apps that span multiple wearables. It demonstrates experimentally how Multi-Wear can help bring changes to mobile apps that go beyond conventional practices.

    An IoT based Virtual Coaching System (VSC) for Assisting Activities of Daily Life

    Nowadays, aging of the population is becoming one of the main concerns of the world. It is estimated that the number of people aged over 65 will increase from 461 million to 2 billion in 2050. This substantial increment in the elderly population will have significant consequences for the social and health care system. Therefore, in the context of Ambient Intelligence (AmI), Ambient Assisted Living (AAL) has been emerging as a new research area to address problems related to the aging of the population. AAL technologies based on embedded devices have demonstrated to be effective in alleviating the social- and health-care issues related to the continuous growth of the average age of the population. Many smart applications, devices and systems have been developed to monitor the health status of the elderly, substitute for them in the accomplishment of activities of daily life (especially in the presence of some impairment or disability), alert their caregivers in case of necessity and help them recognize risky situations. Such assistive technologies basically rely on the communication and interaction between body sensors, smart environments and smart devices. However, in this context, less effort has been spent on designing smart solutions for empowering and supporting the self-efficacy of people with neurodegenerative diseases and the elderly in general. This thesis fills in the gap by presenting a low-cost, non-intrusive, and ubiquitous Virtual Coaching System (VCS) to support people in the acquisition of new behaviors (e.g., taking pills, drinking water, finding the right key, avoiding motor blocks) necessary to cope with needs derived from a change in their health status and a degradation of their cognitive capabilities as they age. VCS is based on the concept of extended mind introduced by Clark and Chalmers in 1998. They proposed the idea that objects within the environment function as a part of the mind.
    In my revisiting of the concept of extended mind, the VCS is composed of a set of smart objects that exploit Internet of Things (IoT) technology and machine learning-based algorithms in order to identify the needs of the users and react accordingly. In particular, the system exploits smart tags to transform objects commonly used by people (e.g., a pillbox, a bottle of water, keys) into smart objects, monitors their usage according to the users' needs, and incrementally guides the users in the acquisition of new behaviors related to their needs. To implement VCS, this thesis explores different research directions and challenges. First of all, it addresses the definition of a ubiquitous, non-invasive and low-cost indoor monitoring architecture by exploiting the IoT paradigm. Secondly, it deals with the necessity of developing solutions for implementing coaching actions and consequently monitoring human activities by analyzing the interaction between people and smart objects. Finally, it focuses on the design of low-cost localization systems for indoor environments, since knowing the position of a person provides VCS with essential information to acquire information on performed activities and to prevent risky situations. In the end, the outcomes of these research directions have been integrated into a healthcare application scenario to implement a wearable system that prevents freezing of gait in people affected by Parkinson's Disease.
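    A single VCS-style coaching rule over smart-tag events can be sketched as follows. The event format, the schedule, and the "missed window" rule are assumptions for illustration only; the thesis's actual system is far richer.

```python
import datetime as dt

# Illustrative sketch: smart-tag events record when a tagged object is used,
# and a coaching reminder is triggered if no usage event for that object
# falls inside its scheduled window. All data here is hypothetical.
events = [                          # (object, time the tag reported usage)
    ("bottle_of_water", dt.time(8, 10)),
    ("keys",            dt.time(8, 45)),
    ("pillbox",         dt.time(13, 5)),
]

SCHEDULE = {"pillbox": (dt.time(8, 0), dt.time(9, 0))}   # morning pill window

def missed(obj, schedule, events):
    """True if no usage event for obj falls inside its scheduled window."""
    start, end = schedule[obj]
    return not any(o == obj and start <= t <= end for o, t in events)

if missed("pillbox", SCHEDULE, events):
    print("coach: remind the user to take the morning pill")
```

    In a real deployment the events would stream from IoT smart tags and the reminder would go to a wearable or ambient display rather than stdout.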

    Person Identification through Harvesting Kinetic Energy

    Energy-harvesting devices have made batteryless wearables feasible, and the batteryless wearable notion creates an opportunity for continuous and ubiquitous human identification. Traditional approaches secure devices with passwords, PINs, and fingerprints, or rely on the accelerometer to sample acceleration traces for identification, but the accelerometer's energy consumption has been a critical issue for existing self-powered ubiquitous devices. In this paper, a novel method that uses harvested kinetic energy for identification improves energy efficiency and reduces the energy demand of providing identification. The idea of utilizing harvested power for personal identification is motivated by the observation that people walk distinctly and generate different levels of kinetic energy, leaving their signature in the harvested power signal. Statistical evaluation of the experimental results shows that power traces contain sufficient information for person identification. The experimental analysis is conducted on the walking data of 85 persons for kinetic power signal-based person identification. We select classifiers that provide exemplary performance in identifying individuals from their generated power traces, namely NaiveBayes, OneR, and Meta Bagging, which achieve accuracies of 90%, 97%, and 98%, respectively. The dataset used is a publicly available gait acceleration series.
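    The underlying idea, that each person's harvested-power signal carries a distinctive gait signature, can be sketched with synthetic traces and a simple nearest-centroid match. The trace generator, the three summary-statistic features, and the nearest-centroid rule are illustrative stand-ins for the paper's data and classifiers.

```python
import numpy as np

# Hedged sketch of identification from harvested-power traces: summary
# statistics of a walker's kinetic power signal act as a gait signature,
# and a new trace is matched to the closest stored signature. Synthetic
# data and nearest-centroid stand in for the paper's actual classifiers.
rng = np.random.default_rng(1)

def power_trace(mean_uw, step_hz, n=500):
    """Synthetic harvested-power signal (microwatts) for one walker."""
    t = np.arange(n) / 100.0
    return mean_uw * (1 + 0.5 * np.sin(2 * np.pi * step_hz * t)) + rng.normal(0, 2, n)

def signature(trace):
    """Mean level, variability, and step-to-step change of the power signal."""
    return np.array([trace.mean(), trace.std(), np.abs(np.diff(trace)).mean()])

people = {"alice": (120, 1.7), "bob": (80, 2.1), "carol": (150, 1.4)}
centroids = {name: signature(power_trace(*p)) for name, p in people.items()}

probe = power_trace(80, 2.1)   # a fresh walk by 'bob'
best = min(centroids, key=lambda n: np.linalg.norm(signature(probe) - centroids[n]))
print(best)
```

    The paper's classifiers (NaiveBayes, OneR, Bagging) play the role of this matching step, but are trained on many labelled traces per person.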

    Electrostatic Sensors – Their Principles and Applications

    Over the past three decades, electrostatic sensors have been proposed, developed and utilised for the continuous monitoring and measurement of a range of industrial processes, mechanical systems and clinical environments. Electrostatic sensors offer simplicity in structure, cost-effectiveness and suitability for a wide range of installation conditions. They either provide unique solutions to some measurement challenges or offer more cost-effective options than more established sensors such as those based on acoustic, capacitive, optical and electromagnetic principles. The established or potential applications of electrostatic sensors are wide ranging, but the underlying sensing principle and resultant system characteristics are very similar. This paper presents a comprehensive review of the electrostatic sensors and sensing systems that have been developed for the measurement and monitoring of a range of process variables and conditions. These include the flow measurement of pneumatically conveyed solids, measurement of particulate emissions, monitoring of fluidised beds, on-line particle sizing, burner flame monitoring, speed and radial vibration measurement of mechanical systems, and condition monitoring of power transmission belts, mechanical wear, and human activities. The fundamental sensing principles, together with the advantages and limitations of electrostatic sensors for a given area of application, are also introduced. The technology readiness level for each application area is identified and commented on. Trends and future development of electrostatic sensors, their signal conditioning electronics and signal processing methods, as well as possible new applications, are also discussed.
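    The speed-measurement application mentioned above is commonly realised by cross-correlating the signals of two axially spaced electrodes: particles passing the pipe induce nearly the same charge signal on both, delayed by the transit time, so velocity = spacing / best-correlating lag. The signal model, spacing, and sampling rate below are illustrative values.

```python
import numpy as np

# Sketch of cross-correlation velocimetry with a pair of electrostatic
# electrodes. The downstream electrode sees (ideally) a delayed copy of
# the upstream charge signal; the lag of the cross-correlation peak gives
# the transit time. Signal and parameters are illustrative assumptions.
rng = np.random.default_rng(2)
fs = 10_000.0            # sampling rate, Hz
L = 0.05                 # electrode spacing, m
true_v = 25.0            # particle velocity, m/s
delay = int(round(L / true_v * fs))           # transit time in samples (20)

upstream = rng.standard_normal(5000)          # random charge fluctuations
downstream = np.roll(upstream, delay)         # same signal, delayed

# Cross-correlate over candidate lags and find the peak
lags = np.arange(1, 100)
xcorr = [np.dot(upstream[:-lag], downstream[lag:]) for lag in lags]
best_lag = lags[int(np.argmax(xcorr))]
print(L / (best_lag / fs))                    # estimated velocity, m/s
```

    In practice the signals are band-limited and only partially correlated, so the peak is broader and interpolation around it is used to refine the estimate.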

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. 
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. 
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Improving Activity Recognition Accuracy in Ambient-Assisted Living Systems by Automated Feature Engineering

    Ambient-assisted living (AAL) is promising to become a supplement to current care models, providing an enhanced living experience to people within context-aware homes and smart environments. Activity recognition based on sensory data in AAL systems is an important task because 1) it can be used for estimation of levels of physical activity, 2) it can lead to detecting changes in daily patterns that may indicate an emerging medical condition, or 3) it can be used for detection of accidents and emergencies. To be accepted, AAL systems must be affordable while providing reliable performance. These two factors hugely depend on optimizing the number of utilized sensors and extracting robust features from them. This paper proposes a generic feature engineering method for selecting robust features from a variety of sensors, which can be used for generating reliable classification models. From the originally recorded time series and some newly generated time series [i.e., magnitudes, first derivatives, delta series, and fast Fourier transformation (FFT)-based series], a variety of time and frequency domain features are extracted. Then, using two-phase feature selection, the number of generated features is greatly reduced. Finally, different classification models are trained and evaluated on an independent test set. The proposed method was evaluated on five publicly available data sets, and on all of them it yielded better accuracy than when using hand-tailored features. The benefits of the proposed systematic feature engineering method are that it discovers good feature sets for any given task more quickly than manually finding ones suited to a particular task, selects a small feature set that outperforms manually determined features in both execution time and accuracy, and identifies relevant sensor types and body locations automatically. Ultimately, the proposed method could reduce the cost of AAL systems by facilitating execution of algorithms on devices with limited resources and by using as few sensors as possible.
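    The feature-generation step described in the abstract can be sketched directly: from one raw 3-axis accelerometer window, derive extra series (magnitude, first derivatives, FFT amplitudes) and extract generic statistics from each. The specific statistics below are illustrative, and the paper's two-phase feature selection step is not shown.

```python
import numpy as np

# Sketch of automated feature generation from one sensor window. The
# derived-series idea follows the abstract; the four statistics per series
# are illustrative assumptions, and feature selection is omitted.
def features(series):
    """A few generic time-domain statistics of one series."""
    return {"mean": series.mean(), "std": series.std(),
            "min": series.min(), "max": series.max()}

def engineer(window):
    """window: (n_samples, 3) raw accelerometer data -> flat feature dict."""
    derived = {"x": window[:, 0], "y": window[:, 1], "z": window[:, 2],
               "magnitude": np.linalg.norm(window, axis=1)}
    # first derivatives of every series so far
    derived.update({f"d_{k}": np.diff(v) for k, v in list(derived.items())})
    # FFT amplitude series of the non-derivative series
    derived.update({f"fft_{k}": np.abs(np.fft.rfft(v))
                    for k, v in list(derived.items()) if not k.startswith("d_")})
    return {f"{name}_{stat}": val
            for name, series in derived.items()
            for stat, val in features(series).items()}

rng = np.random.default_rng(3)
feats = engineer(rng.standard_normal((128, 3)))
print(len(feats))   # 12 derived series x 4 statistics = 48 candidate features
```

    A selection stage (such as the paper's two-phase approach) would then prune these 48 candidates down to the few that generalise across subjects and sensors.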