8 research outputs found

    Sequential Feature Selection Using Hybridized Differential Evolution Algorithm and Haar Cascade for Object Detection Framework

    Get PDF
    Intelligent systems, an aspect of artificial intelligence, have been developed to improve satellite image interpretation, with a strong focus on object-based machine learning methods; however, they lack an optimal feature selection technique. Existing feature selection and object detection techniques applied to satellite images have been reported to be ineffective at detecting objects. In this paper, the Differential Evolution (DE) algorithm is introduced as a technique for selecting features and mapping them to a Haar cascade machine learning classifier for optimal detection of objects in satellite images. A satellite image was acquired and pre-processed, feature engineering was carried out, and the resulting features were mapped using the adopted DE algorithm. The selected features were then used to train the Haar cascade classifier. The results show that the proposed technique achieves an accuracy of 86.2%, a sensitivity of 89.7%, and a specificity of 82.2%.
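    A minimal sketch of the general idea (not the authors' exact pipeline): SciPy's differential_evolution evolves a continuous vector that is thresholded into a binary feature mask and scored by cross-validated classifier accuracy. The synthetic data and the logistic-regression stand-in for the Haar cascade stage are illustrative assumptions, since the paper's detector has no standard training API in this setting.

```python
# Sketch: DE-based feature selection scored by cross-validated accuracy.
# The dataset and classifier are stand-ins for the paper's satellite-image
# features and Haar cascade stage.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(weights):
    # Threshold the continuous DE vector into a binary feature mask.
    mask = weights > 0.5
    if not mask.any():
        return 1.0  # worst score when no feature is selected
    clf = LogisticRegression(max_iter=500)
    acc = cross_val_score(clf, X[:, mask], y, cv=3).mean()
    return 1.0 - acc  # DE minimises, so return the error rate

result = differential_evolution(fitness, bounds=[(0, 1)] * X.shape[1],
                                maxiter=10, popsize=5, seed=0, polish=False)
selected = np.where(result.x > 0.5)[0]
print("selected feature indices:", selected)
```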

    Real-Time Sensor Observation Segmentation For Complex Activity Recognition Within Smart Environments

    Get PDF
    Activity Recognition (AR) is at the heart of any type of assistive living system. One of the key challenges in AR is segmenting the sensor events generated when an inhabitant performs simple or composite activities of daily living (ADLs). In addition, each inhabitant may follow a particular ritual or tradition in performing different ADLs, and their patterns may change over time. Many recent studies apply methods to segment and recognise generic ADLs performed in a composite manner. However, little has been explored in semantically distinguishing individual sensor events and directly assigning them to the relevant ongoing or new atomic activities. This paper proposes an ontological model that captures generic knowledge of ADLs, together with methods that take inhabitant-specific preferences into consideration when segmenting sensor events. The system was implemented, deployed and evaluated against 84 use-case scenarios. The results suggest that all sensor events were adequately segmented with 98% accuracy, with average classification times of 3971 ms and 62183 ms recorded for single and composite ADL scenarios, respectively.
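    A minimal sketch of the semantic segmentation idea, under the assumption that the ontology can be approximated by a simple mapping from sensors to the ADLs they may indicate: incoming events join an open activity segment when their candidate ADLs overlap, otherwise a new segment is opened. The sensor names, mapping and routing rule are illustrative assumptions, not the paper's ontology.

```python
# Sketch: route sensor events to activity segments using a sensor->ADL mapping
# (a stand-in for the ontological knowledge described in the abstract).
SENSOR_TO_ADLS = {              # hypothetical ontology fragment
    "kettle": {"make_tea"},
    "tea_bag_jar": {"make_tea"},
    "toothbrush": {"brush_teeth"},
    "bathroom_tap": {"brush_teeth", "wash_hands"},
}

def segment(events):
    """Group a time-ordered list of sensor names into candidate ADL segments."""
    segments = []                       # each segment: (candidate ADLs, events)
    for sensor in events:
        candidates = SENSOR_TO_ADLS.get(sensor, set())
        for seg_adls, seg_events in segments:
            if seg_adls & candidates:   # semantically related to an open segment
                seg_events.append(sensor)
                seg_adls &= candidates  # narrow the segment's interpretation
                break
        else:
            segments.append((set(candidates), [sensor]))
    return segments

print(segment(["kettle", "toothbrush", "tea_bag_jar", "bathroom_tap"]))
```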

    A Review of Physical Human Activity Recognition Chain Using Sensors

    Get PDF
    In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyles, encouraging healthy behaviours, and reducing chronic diseases are urgently required. However, tracking and monitoring critical cases and conditions of the elderly and patients remains a great challenge, and healthcare services for these people are crucial to ensuring their safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review is to outline the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
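    The recognition chain the review outlines can be illustrated with a short sketch: sliding-window segmentation of a raw sensor stream, simple time- and frequency-domain features, and an off-the-shelf classifier. The window length, features, synthetic data and classifier below are illustrative assumptions, not recommendations from the review.

```python
# Sketch of a minimal HAR chain: segmentation -> feature extraction -> classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(signal, size=128, step=64):
    """Segment a 1-D sensor stream into overlapping fixed-length windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def extract_features(window):
    """Basic time- and frequency-domain features for one window."""
    spectrum = np.abs(np.fft.rfft(window))
    return [window.mean(), window.std(), np.ptp(window), spectrum[1:5].sum()]

# Synthetic accelerometer-like streams for two activities (illustrative only).
rng = np.random.default_rng(0)
walk = np.sin(np.linspace(0, 60, 2000)) + 0.3 * rng.standard_normal(2000)
rest = 0.05 * rng.standard_normal(2000)

X, y = [], []
for label, stream in [(0, rest), (1, walk)]:
    for w in sliding_windows(stream):
        X.append(extract_features(w))
        y.append(label)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```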

    A Survey on Human-aware Robot Navigation

    Full text link
    Intelligent systems are increasingly part of our everyday lives and have been integrated so seamlessly that it is difficult to imagine a world without them. Physical manifestations of those systems, on the other hand, in the form of embodied agents or robots, have so far been used only for specific applications and are often limited to functional roles (e.g. in industry, entertainment and the military). Given the current growth and innovation in the research communities concerned with robot navigation, human-robot interaction and human activity recognition, this might soon change. Robots are increasingly easy to obtain and use, and their general acceptance is growing. However, the design of a socially compliant robot that can function as a companion needs to take various areas of research into account. This paper is concerned with the navigation aspect of a socially compliant robot and provides a survey of existing solutions for the relevant areas of research as well as an outlook on possible future directions.
    Comment: Robotics and Autonomous Systems, 202

    Human movement analysis: temporal classification of human actions

    Get PDF
    This study aims to identify the daily activities of different people using supervised classification methods. To this end, several technologies associated with capturing and analysing human movement were first examined, such as sensors (e.g., inertial measurement units) and cameras (e.g., RGB, infrared and time-of-flight). The literature review clearly indicates that, in contrast to cameras, wearable technology tends to be more suitable for the kinematic analysis of sporting movements. This type of technology also makes it possible to estimate the orientation and movement production of the upper and lower limbs with a high level of precision and accuracy, as well as immunity to blind spots, thereby increasing the quantity and quality of the information obtained. With this in mind, this work presents a methodology for classifying daily activities of human movement using a wearable sensor suit, the Ingeniarius FatoXtract. The performance of the proposed solution is also compared with that of a time-of-flight camera, the Microsoft Kinect v2. The proposed methodology considers the probabilistic integration of three classifiers: Naïve Bayes, Artificial Neural Networks and Support Vector Machines. To achieve superior performance in overall movement classification, several features in the time domain (e.g., velocity) and in the frequency domain (e.g., Fast Fourier Transform) were considered, combined with traditional geometric features (e.g., angular position of the joints). Data from five common day-to-day activities, performed by six participants with 20 trials each, were acquired using the FatoXtract and the Kinect v2. The dataset was designed to be extremely challenging, since the duration of the activities varies drastically and some activities are very similar (e.g., brushing teeth and waving).
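    A minimal sketch of the probabilistic integration of the three classifiers named above (Naïve Bayes, an artificial neural network and an SVM), assuming scikit-learn's soft-voting combination of class probabilities; the exact fusion scheme, features and data used in the study are not reproduced here.

```python
# Sketch: probabilistic (soft-voting) fusion of Naïve Bayes, ANN and SVM classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder features standing in for the time/frequency/geometric features
# described in the abstract; five classes stand in for the five activities.
X, y = make_classification(n_samples=600, n_features=30, n_classes=5,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fusion = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("ann", MLPClassifier(max_iter=1000, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")                  # average the predicted class probabilities
fusion.fit(X_train, y_train)
print("held-out accuracy:", fusion.score(X_test, y_test))
```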

    Inferring Complex Activities for Context-aware Systems within Smart Environments

    Get PDF
    The rising ageing population worldwide and the prevalence of age-related conditions such as physical fragility, mental impairment and chronic disease have significantly impacted quality of life and caused a shortage of health and care services. Over-stretched healthcare provision is leading to a paradigm shift in public healthcare. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address these problems. Human Activity Recognition (HAR) is a critical component of AAL systems which enables applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach that incorporates user preferences. The second study takes the segmented sensor data and investigates recognising human ADLs at multiple levels of granularity: coarse- and fine-grained action levels. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level object usage, checked against a satisfactory threshold using evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, owing to imprecise and vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy web ontology language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainty introduced into HAR by factors such as technological failure, object malfunction and human error. Uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL and proposes a microservices architecture combining off-the-shelf and bespoke sensing methods. The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single and mixed activity scenarios; however, the average classification times taken to segment each sensor event were 3971 ms and 62183 ms for single and mixed activity scenarios, respectively. The second study, detecting fine-grained user actions, was evaluated with 30 and 153 fuzzy rules on two fine-grained movements using a dataset pre-collected from the real-time smart environment. Its results indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion rule creation. The third study was evaluated by combining the PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to model four types of uncertainty present in a kitchen-based activity.
    The fourth study illustrated a case study extending single-user AR to multi-user AR by combining discriminative sensors (RFID tags and fingerprint sensors) to identify users and associate their actions with the aid of time-series analysis. The last study responds to the computational and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for the AAL system. Future research towards adopting fog/edge computing paradigms alongside cloud computing is discussed, aiming at higher availability, reduced network traffic, energy and cost, and a decentralised system. As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. The framework integrates three complementary ontologies to conceptualise factual, fuzzy and uncertain knowledge about the environment and ADLs, together with time-series analysis and a discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and supporting utility tools such as a simulator and a synthetic ADL data generator were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently served by an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.
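    A minimal sketch of the multi-user association idea from the fourth study, under the assumption that each identification event (e.g. an RFID tag or fingerprint reading) can be matched to subsequent ambient sensor events by nearest-in-time association; the thresholds, event schema and sensor names are illustrative assumptions, not the thesis's method.

```python
# Sketch: attribute ambient sensor events to the most recently identified user,
# a simplified stand-in for the time-series association described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    timestamp: float             # seconds
    sensor: str
    user: Optional[str] = None   # set only for identification events (RFID/fingerprint)

def associate(events, max_gap=30.0):
    """Attribute each ambient event to the last user identified within max_gap seconds."""
    last_seen = {}               # user -> timestamp of their latest identification event
    attributed = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.user is not None:  # identification event: update the user's last-seen time
            last_seen[ev.user] = ev.timestamp
            continue
        # pick the user identified most recently before this event, within the gap
        candidates = [(t, u) for u, t in last_seen.items()
                      if 0 <= ev.timestamp - t <= max_gap]
        user = max(candidates)[1] if candidates else "unknown"
        attributed.append((ev.sensor, user))
    return attributed

events = [Event(0.0, "rfid_kitchen", user="alice"),
          Event(5.0, "kettle"),
          Event(12.0, "fingerprint_bathroom", user="bob"),
          Event(15.0, "bathroom_tap")]
print(associate(events))
```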