
    Online motion recognition using an accelerometer in a mobile device

    This paper introduces a new method for implementing a motion recognition process on a mobile phone fitted with an accelerometer. The data collected from the accelerometer are interpreted by means of a statistical study and machine learning algorithms in order to obtain a classification function. That function is then implemented on the phone and online experiments are carried out. Experimental results show that this approach can effectively recognize different human activities with high accuracy.
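    A minimal sketch of the kind of pipeline described above, assuming windowed statistical features and a random forest classifier (the paper's exact features, window size, and learning algorithm may differ):

        # Sketch only: classify activity windows from 3-axis accelerometer data
        # using simple statistical features; window size and classifier are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(window):
            """window: (n_samples, 3) array of x/y/z acceleration."""
            return np.concatenate([
                window.mean(axis=0),                       # per-axis mean
                window.std(axis=0),                        # per-axis standard deviation
                [np.linalg.norm(window, axis=1).mean()],   # mean acceleration magnitude
            ])

        def make_dataset(recordings, labels, win=128):
            """Slice each labelled recording into fixed-length windows and featurize them."""
            X, y = [], []
            for rec, lab in zip(recordings, labels):
                for start in range(0, len(rec) - win + 1, win):
                    X.append(window_features(rec[start:start + win]))
                    y.append(lab)
            return np.array(X), np.array(y)

        # recordings: list of (n_samples, 3) arrays; labels: one activity name per recording
        # X, y = make_dataset(recordings, labels)
        # clf = RandomForestClassifier(n_estimators=100).fit(X, y)
        # The fitted classification function would then be ported to run on the phone.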

    Sit-to-Stand Movement Recognition Using Kinect

    This paper examines the application of machine-learning techniques to human movement data in order to recognise and compare movements made by different people. Data from an experimental set-up based on a sit-to-stand movement are first collected with the Microsoft Kinect sensor, then normalized and subsequently compared against the assigned labels for correct and incorrect movements. We show that attributes can be extracted from the time series produced by the Kinect sensor using a dynamic time-warping technique. The extracted attributes are then fed to a random forest algorithm to recognise anomalous behaviour in time series of joint measurements over the whole movement. For comparison, the k-Nearest Neighbours algorithm is also applied to the same attributes, with good results. Both methods’ results are compared using Multi-Dimensional Scaling for clustering visualisation.
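    An illustrative sketch of such a pipeline, assuming DTW distances to a handful of reference trials serve as the extracted attributes (the paper's exact attribute extraction and templates are not reproduced here):

        # Sketch only: DTW distances to reference movements as attributes for
        # a random forest and a k-NN classifier. Template selection is an assumption.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.neighbors import KNeighborsClassifier

        def dtw_distance(a, b):
            """Classic dynamic time warping cost between two 1-D sequences."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        def dtw_attributes(series, templates):
            """One attribute per template: DTW distance from the series to it."""
            return np.array([dtw_distance(series, t) for t in templates])

        # series_list: one joint-angle time series per trial; y: 'correct' / 'incorrect'
        # templates: a few reference trials of the correct movement
        # X = np.array([dtw_attributes(s, templates) for s in series_list])
        # rf = RandomForestClassifier().fit(X, y)
        # knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)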

    Learning Online Smooth Predictors for Realtime Camera Planning using Recurrent Decision Trees

    We study the problem of online prediction for realtime camera planning, where the goal is to predict smooth trajectories that correctly track and frame objects of interest (e.g., players in a basketball game). The conventional approach for training predictors does not directly consider temporal consistency, and often produces undesirable jitter. Although post-hoc smoothing (e.g., via a Kalman filter) can mitigate this issue to some degree, it is not ideal due to overly stringent modeling assumptions (e.g., Gaussian noise). We propose a recurrent decision tree framework that can directly incorporate temporal consistency into a data-driven predictor, as well as a learning algorithm that can efficiently learn such temporally smooth models. Our approach does not require any post-processing, making online smooth predictions much easier to generate when the noise model is unknown. We apply our approach to sports broadcasting: given noisy player detections, we learn where the camera should look based on human demonstrations. Our experiments exhibit significant improvements over conventional baselines and showcase the practicality of our approach.
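    A toy illustration of the recurrent idea, assuming a single regression tree whose input includes its own previous output (this is not the paper's learning algorithm, only the feedback structure it builds on):

        # Sketch only: the previous camera angle is fed back as an input feature so
        # the tree can trade off tracking accuracy against temporal smoothness.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        class RecurrentTreePredictor:
            def __init__(self, max_depth=8):
                self.tree = DecisionTreeRegressor(max_depth=max_depth)

            def fit(self, detections, angles):
                """detections: (T, d) noisy player features; angles: (T,) demonstrated pan angles."""
                prev = np.concatenate([[angles[0]], angles[:-1]])  # teacher-forced previous output
                self.tree.fit(np.hstack([detections, prev[:, None]]), angles)
                return self

            def predict_online(self, detections, init_angle=0.0):
                """Roll forward in time, feeding each prediction into the next step."""
                prev, out = init_angle, []
                for x in detections:
                    prev = self.tree.predict(np.append(x, prev)[None, :])[0]
                    out.append(prev)
                return np.array(out)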

    DeCaf: Diagnosing and Triaging Performance Issues in Large-Scale Cloud Services

    Large-scale cloud services use Key Performance Indicators (KPIs) for tracking and monitoring performance. They usually have Service Level Objectives (SLOs), tied to these KPIs, baked into their customer agreements. Dependency failures, code bugs, infrastructure failures, and other problems can cause performance regressions. It is critical to minimize the time and manual effort spent diagnosing and triaging such issues in order to reduce customer impact. The large volume of logs and the mix of attribute types (categorical, continuous) in those logs make diagnosing regressions non-trivial. In this paper, we present the design, implementation, and experience from building and deploying DeCaf, a system for automated diagnosis and triaging of KPI issues using service logs. It uses machine learning along with pattern mining to help service owners automatically root-cause and triage performance issues. We present the lessons and results from case studies on two large-scale cloud services at Microsoft, where DeCaf successfully diagnosed 10 known and 31 unknown issues. DeCaf also automatically triages the identified issues by leveraging historical data. Our key insights are that for any such diagnosis tool to be effective in practice, it should (a) scale to large volumes of service logs and attributes, (b) support different types of KPIs and ranking functions, and (c) be integrated into DevOps processes. To be published in the proceedings of ICSE-SEIP '20, Seoul, Republic of Korea.
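    A simplified sketch in the spirit of the description above, not DeCaf itself: rank mixed-type log attributes by how well they separate requests that violate a latency SLO (the KPI name, threshold, and column names are hypothetical):

        # Sketch only: encode categorical and continuous log attributes, then rank
        # them by importance for predicting SLO violations.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier

        def rank_suspect_attributes(logs: pd.DataFrame, kpi="latency_ms", slo=500.0, top=10):
            y = (logs[kpi] > slo).astype(int)             # 1 = request breached the SLO
            X = pd.get_dummies(logs.drop(columns=[kpi]), dummy_na=True)
            clf = RandomForestClassifier(n_estimators=200).fit(X, y)
            importances = pd.Series(clf.feature_importances_, index=X.columns)
            return importances.sort_values(ascending=False).head(top)

        # logs = pd.read_csv("service_logs.csv")          # mixed categorical/continuous columns
        # print(rank_suspect_attributes(logs))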

    Human Body Poses Recognition Using Neural Networks with Class Based Data Augmentation

    Information technology and its continuous development have made it possible for computers to see and learn. What we see with our eyes can be divided into pixels and fed to a computer, giving the computer the ability to see and learn based on the pixel values. From these values, computers can learn to recognize different objects, depending on the examples taught to them, and such computer vision and learning have many possible applications. In this thesis, we propose a framework capable of automatically recognizing human body poses from a single image obtained with a traditional low-cost camera. Our approach combines computer vision with neural networks to detect a human in an image. The process starts by extracting the silhouette from the image and then uses a neural network to recognize the body pose based on the extracted silhouette. In order to match detected silhouettes with body poses, the neural network was trained on an already classified, augmented dataset of preprocessed silhouette images. According to our results, the proposed framework provides promising results with acceptable accuracy.
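    A rough sketch of such a silhouette-then-classify pipeline, assuming simple background subtraction and an MLP classifier (the thesis's actual preprocessing, augmentation, and network are not reproduced):

        # Sketch only: extract a binary silhouette, downscale it, and classify the
        # pose from the flattened pixels. Thresholds and image size are assumptions.
        import cv2
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def silhouette(image_bgr, background_bgr, size=(64, 64), thresh=30):
            """Binary silhouette via background subtraction and thresholding."""
            diff = cv2.absdiff(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY),
                               cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY))
            _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
            return cv2.resize(mask, size).flatten() / 255.0

        # images, backgrounds: lists of BGR frames; poses: one pose label per image
        # X = np.array([silhouette(img, bg) for img, bg in zip(images, backgrounds)])
        # clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, poses)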