
    A Comparison of Machine Learning and Deep Learning Techniques for Activity Recognition using Mobile Devices

    We have compared the performance of different machine learning techniques for human activity recognition. Experiments were conducted on a benchmark dataset in which each subject wore one device in the pocket and another on the wrist. The dataset comprises thirteen activities, including physical activities, common postures, working activities and leisure activities. For the traditional machine learning methods we apply the activity recognition chain, a sequence of steps involving preprocessing, segmentation, feature extraction and classification; we also tested convolutional deep learning networks that operate on raw data instead of computed features. Results show that combining the two sensors does not necessarily improve accuracy. The best results were obtained with the extremely randomized trees approach, operating on precomputed features from the wrist sensor. The tested deep learning architecture did not produce competitive results. This research was funded by the Spanish Ministry of Education, Culture and Sports under grant number FPU13/03917.
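    The segmentation and feature-extraction steps of the activity recognition chain can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the window size, overlap, and feature set here are assumptions.

```python
import statistics

def sliding_windows(signal, size, step):
    """Segment a 1-D sensor stream into fixed-size, overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def extract_features(window):
    """Compute a few simple time-domain features for one window."""
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
    }

stream = [0.1, 0.3, 0.2, 0.9, 1.1, 1.0, 0.2, 0.1]   # toy one-axis accelerometer data
windows = sliding_windows(stream, size=4, step=2)    # 50% overlap between windows
features = [extract_features(w) for w in windows]
```

    In practice, windows of a few seconds with around 50% overlap are common, and many more features (signal energy, inter-axis correlation, spectral features) are typically fed to the classifier.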

    Spatial networks with wireless applications

    Many networks have nodes located in physical space, with links more common between closely spaced pairs of nodes. For example, the nodes could be wireless devices and the links communication channels in a wireless mesh network. We describe recent work involving such networks, considering effects due to the geometry (convex, non-convex, and fractal), node distribution, distance-dependent link probability, mobility, directivity and interference. Comment: Review article; an amended version with a new title from the original.
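    The notion of a distance-dependent link probability can be illustrated with a soft random geometric graph, where nearby nodes connect with higher probability. This is a generic sketch: the exponential (Rayleigh-fading-like) connection function and all parameter values are illustrative assumptions, not taken from the article.

```python
import math
import random

def build_spatial_network(n, r0=0.3, seed=42):
    """Place n nodes uniformly in the unit square and connect each pair
    (i, j) with probability exp(-(d/r0)^2), so links are more common
    between closely spaced nodes; r0 sets the typical link range."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(nodes[i], nodes[j])          # Euclidean distance
            if rng.random() < math.exp(-(d / r0) ** 2):
                edges.append((i, j))
    return nodes, edges

nodes, edges = build_spatial_network(30)
```

    Replacing the unit square with a non-convex or fractal region, or changing the connection function, is how the geometric effects discussed in the article enter such models.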

    Multimodal Deep Learning for Robust RGB-D Object Recognition

    Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are subsequently combined by a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first is an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets; the second is a data augmentation scheme for robust learning with depth images that corrupts them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset and show recognition in challenging, noisy real-world RGB-D settings. Comment: Final version submitted to IROS 2015; results unchanged, with reformulation of some text passages in the abstract and introduction.
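    The depth-augmentation idea (corrupting depth maps with realistic noise so the network becomes robust to imperfect sensor data) can be sketched as follows. This is a hypothetical, simplified noise model (random missing returns plus Gaussian jitter), not the specific noise patterns used in the paper.

```python
import random

def corrupt_depth(depth, p_missing=0.1, sigma=0.02, seed=0):
    """Corrupt a 2-D depth map (values in metres): randomly zero out
    pixels to mimic missing sensor returns, and add Gaussian jitter
    to the rest; depth is clamped at zero since it cannot be negative."""
    rng = random.Random(seed)
    noisy = []
    for row in depth:
        out_row = []
        for d in row:
            if rng.random() < p_missing:
                out_row.append(0.0)                          # missing return
            else:
                out_row.append(max(0.0, d + rng.gauss(0.0, sigma)))
        noisy.append(out_row)
    return noisy

depth = [[1.0] * 4 for _ in range(4)]   # toy 4x4 depth map, all pixels at 1 m
noisy = corrupt_depth(depth)
```

    Applying such corruption only at training time, while evaluating on clean and real noisy data, is the usual pattern for this kind of augmentation.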

    Learning about Learning with Deep Learning: Satellite Estimates of School Test Scores

    Convolutional neural networks (CNNs) are deep-learning models commonly applied to imagery analysis. CNNs combined with satellite imagery have shown potential for globally estimating key factors that drive socioeconomic ability to adapt to global change. Unlike more traditional approaches to data collection, such as surveys, approaches based on satellite data are low cost, timely, and allow replication by a wide range of parties. We illustrate the potential of this approach with a case study estimating school test scores based solely on publicly available imagery in both the Philippines (2010, 2014) and Brazil (2016), with predictive accuracy across years and regions ranging from 76% to 80%. Finally, we discuss the numerous obstacles that remain before CNN-based approaches can be used operationally to understand multiple dimensions of socioeconomic vulnerability, and we provide open-source computer code for community use.

    Gait rehabilitation monitor

    This work presents a simple, affordable, non-intrusive wearable mobile framework that allows remote monitoring of patients during gait rehabilitation by doctors and physiotherapists. The system includes a set of two Bluetooth-compatible Shimmer3 9DoF Inertial Measurement Units (IMUs) from Shimmer, and an Android smartphone that collects the data, performs primary processing, and persists results in a local database. Low-computational-load algorithms based on Euler angles and on accelerometer, gyroscope and magnetometer signals were developed and used for the classification and identification of several gait disturbances. These algorithms include the alignment of the IMU sensor data to a common temporal reference, as well as heel-strike and stride detection algorithms that help the system app segment the remotely collected signals into gait strides and extract the relevant features used to train and test a classifier that predicts gait abnormalities in gait sessions. A set of drivers from the Shimmer manufacturer connects the app to the IMUs over Bluetooth. The developed app allows users to collect data and train a classification model that distinguishes normal from abnormal gait types. The system provides a REST API on a backend server, along with Java and Python libraries and a PostgreSQL database. Learning is supervised, using the Extremely Randomized Trees method. Frequency-, time- and time-frequency-domain features were extracted from the collected and processed signals to train the classifier. To test the framework, a set of abnormal and normal gait recordings was used to train a model and test the classifier.
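    A heel-strike and stride segmentation step of the kind described can be sketched with a simple threshold-and-peak detector on the acceleration magnitude. This is an illustrative toy detector, not the system's actual algorithm; the threshold and minimum-gap values are assumptions.

```python
def detect_heel_strikes(accel_mag, threshold, min_gap):
    """Return indices of heel strikes: local maxima of the acceleration
    magnitude that exceed a threshold and occur at least min_gap samples
    after the previously detected strike."""
    strikes = []
    for i in range(1, len(accel_mag) - 1):
        is_peak = accel_mag[i] > accel_mag[i - 1] and accel_mag[i] >= accel_mag[i + 1]
        if is_peak and accel_mag[i] >= threshold:
            if not strikes or i - strikes[-1] >= min_gap:
                strikes.append(i)
    return strikes

def strides(strikes):
    """Each pair of consecutive detected strikes delimits one stride segment."""
    return list(zip(strikes, strikes[1:]))

mag = [1, 1, 3, 1, 1, 1, 4, 1, 1, 1, 3.5, 1]   # toy acceleration-magnitude signal
hs = detect_heel_strikes(mag, threshold=2.5, min_gap=3)
```

    The resulting stride segments are the units from which time-, frequency- and time-frequency-domain features would then be extracted for the classifier.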

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion-sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has attracted intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and of the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, and 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth-sensing technologies used, and we review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and for lower-level computer vision tasks such as segmentation, object detection and human pose estimation.