469 research outputs found

    A Review of Physical Human Activity Recognition Chain Using Sensors

    In the era of the Internet of Medical Things (IoMT), healthcare monitoring plays a vital role. Improving lifestyles, encouraging healthy behaviours, and reducing chronic diseases are urgently required, yet tracking and monitoring critical conditions of the elderly and patients remains a great challenge. Healthcare services for these people are crucial to achieving a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
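The recognition chain this review describes (sensing, preprocessing/segmentation, feature extraction, classification) can be sketched end to end. Everything below is an illustrative assumption, not from the paper: the synthetic accelerometer signals, the window/step sizes, the four time-domain features, and the choice of a random forest classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(signal, window=128, step=64):
    """Segmentation step: slide a fixed-size window over a 1-D sensor stream."""
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def extract_features(windows):
    """Feature extraction step: simple time-domain statistics per window."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

# Synthetic accelerometer-like streams for two activities (illustrative only):
# a periodic "walking" signal and a near-flat "sitting" signal.
rng = np.random.default_rng(0)
walking = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)
sitting = 0.05 * rng.normal(size=2000)

X = extract_features(np.vstack([segment(walking), segment(sitting)]))
y = np.array([1] * len(segment(walking)) + [0] * len(segment(sitting)))

# Classification step: any classifier could stand in here.
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```

In a real deployment the segmentation and feature-extraction choices (window length, overlap, time- vs. frequency-domain features) dominate accuracy far more than the classifier, which is why the review treats them as separate stages of the chain.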

    A Very Brief Introduction to Machine Learning With Applications to Communication Systems

    Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modelling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both, exemplifying applications to communication networks are discussed, distinguishing tasks carried out at the edge and at the cloud segments of the network, at different layers of the protocol stack.

    Human Activity Recognition using Machine Learning Approach

    Growing developments in sensing technology have made it possible to use human activity either as a tool for remote control of devices or as input for sophisticated human behaviour analysis. Using the skeleton extracted from the human action input image, the proposed system implements a simple but novel process that recognizes only the significant joints. The proposed system contributes a cost-effective human activity recognition system with efficient performance in recognizing those significant joints. A template for an activity recognition system is also provided, in which the reliability of the recognition process and the quality of the system are kept in good balance. The research presents a condensed method for extracting spatial and temporal features from event feeds, which are then passed to a machine learning mechanism to improve recognition performance. The significance of the proposed study is reflected in the results, which show higher accuracy when trained using KNN. The proposed system demonstrated 10-15% memory usage over 532 MB of digitized real-time event information, with a processing time of 0.5341 seconds. The system is therefore highly supportable in practice. The outcomes are the same for both real-time object flexibility captures and static frames.
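The core idea of classifying activities from significant skeleton joints with KNN can be sketched as follows. The joint coordinates, the two activity classes, and the choice of k are hypothetical stand-ins, not the paper's actual data or parameters:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical setup: each sample is a flattened vector of (x, y) coordinates
# for a handful of "significant" joints; labels are activity ids.
rng = np.random.default_rng(1)
n_joints = 5
wave = rng.normal(loc=1.0, scale=0.1, size=(40, n_joints * 2))   # activity 0
squat = rng.normal(loc=-1.0, scale=0.1, size=(40, n_joints * 2))  # activity 1

X = np.vstack([wave, squat])
y = np.array([0] * 40 + [1] * 40)

# KNN, the classifier the abstract reports best accuracy for.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A new pose whose joints resemble the "wave" class.
sample = rng.normal(loc=1.0, scale=0.1, size=(1, n_joints * 2))
print(knn.predict(sample))  # → [0]
```

Restricting the feature vector to a few significant joints, rather than the full skeleton, is what keeps the memory and processing-time figures quoted in the abstract low: KNN's prediction cost grows with both the number of stored samples and the feature dimensionality.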

    Confluence of Vision and Natural Language Processing for Cross-media Semantic Relations Extraction

    In this dissertation, we focus on extracting and understanding semantically meaningful relationships between data items of various modalities; especially relations between images and natural language. We explore the ideas and techniques to integrate such cross-media semantic relations for machine understanding of large heterogeneous datasets, made available through the expansion of the World Wide Web. The datasets collected from social media websites, news media outlets and blogging platforms usually contain multiple modalities of data. Intelligent systems are needed to automatically make sense out of these datasets and present them in such a way that humans can find the relevant pieces of information or get a summary of the available material. Such systems have to process multiple modalities of data such as images, text, linguistic features, and structured data in reference to each other. For example, image and video search and retrieval engines are required to understand the relations between visual and textual data so that they can provide relevant answers in the form of images and videos to the users' queries presented in the form of text. We emphasize the automatic extraction of semantic topics or concepts from the data available in any form such as images, free-flowing text or metadata. These semantic concepts/topics become the basis of semantic relations across heterogeneous data types, e.g., visual and textual data. A classic problem involving image-text relations is the automatic generation of textual descriptions of images. This problem is the main focus of our work. In many cases, a large amount of text is associated with images. Deep exploration of linguistic features of such text is required to fully utilize the semantic information encoded in it. A news dataset involving images and news articles is an example of this scenario.
We devise frameworks for automatic news image description generation based on the semantic relations of images, as well as semantic understanding of linguistic features of the news articles.

    Machine learning algorithms for structured decision making


    Map Validation for Autonomous Driving Systems

    High-definition maps are fundamental for self-driving vehicles. In this thesis we describe different approaches to online map validation, whose goal is to verify whether reality and map data are inconsistent. A probabilistic framework for sensor fusion is defined, and a spatial correlation is introduced to interpolate the information. The result is a probabilistic representation of the map whose values represent the probability that the map is valid at every point.
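One common way to realize such a probabilistic map-validity representation is a per-point binary Bayesian update from sensor observations, followed by spatial smoothing to model correlation between neighbouring points. The thesis's actual framework is not specified in this abstract; the detection probabilities, the grid, and the smoothing kernel below are assumptions for illustration:

```python
import numpy as np

def bayes_update(prior, likelihood_valid, likelihood_invalid):
    """Binary Bayesian update of P(map valid) at one map point."""
    num = likelihood_valid * prior
    return num / (num + likelihood_invalid * (1.0 - prior))

# Hypothetical 1-D grid of map points, each with a 0.5 prior of validity.
validity = np.full(10, 0.5)

# Suppose the sensor agrees with the map at points 0-6 and disagrees at 7-9.
# Assumed detection model: p(agree | valid) = 0.9, p(agree | invalid) = 0.2.
for i in range(10):
    agrees = i < 7
    lv, li = (0.9, 0.2) if agrees else (0.1, 0.8)
    validity[i] = bayes_update(validity[i], lv, li)

# Crude spatial correlation: smooth the posteriors across neighbouring points,
# so isolated readings are tempered by what the sensor saw nearby.
kernel = np.array([0.25, 0.5, 0.25])
smoothed = np.convolve(validity, kernel, mode="same")
print(smoothed.round(2))
```

After the update, points where the sensor agreed carry a high validity probability and points where it disagreed a low one; the smoothing step is a stand-in for the interpolation via spatial correlation that the abstract mentions.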