37 research outputs found

    Learning Incoherent Subspaces: Classification via Incoherent Dictionary Learning

    In this article we present the supervised iterative projections and rotations (s-ipr) algorithm, a method for learning discriminative incoherent subspaces from data. We derive s-ipr as a supervised extension of our previously proposed iterative projections and rotations (ipr) algorithm for incoherent dictionary learning, and we employ it to learn incoherent subspaces that model signals belonging to different classes. We test our method as a feature transform for supervised classification, first by visualising transformed features from a synthetic dataset and from the ‘iris’ dataset, and then by using the resulting features in a classification experiment.
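    The abstract describes classifying signals by how well each class-specific subspace models them. The sketch below illustrates that general idea on the ‘iris’ dataset, using per-class PCA as a stand-in for the learned subspaces; it is not the s-ipr algorithm itself, whose incoherence-promoting updates are not reproduced here.

        # Nearest-subspace classification sketch. The per-class subspaces are
        # fitted with ordinary PCA as a stand-in; s-ipr (not shown) would learn
        # incoherent subspaces instead.
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.decomposition import PCA

        def fit_class_subspaces(X, y, n_components=2):
            """Fit one low-dimensional subspace per class (PCA stand-in)."""
            return {c: PCA(n_components=n_components).fit(X[y == c])
                    for c in np.unique(y)}

        def nearest_subspace_predict(subspaces, X):
            """Assign each sample to the class whose subspace reconstructs it best."""
            classes = sorted(subspaces)
            residuals = [np.linalg.norm(X - s.inverse_transform(s.transform(X)), axis=1)
                         for s in (subspaces[c] for c in classes)]
            return np.array(classes)[np.argmin(np.vstack(residuals), axis=0)]

        X, y = load_iris(return_X_y=True)
        subspaces = fit_class_subspaces(X, y)
        print("training accuracy:", (nearest_subspace_predict(subspaces, X) == y).mean())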

    Investigation of sensor placement for accurate fall detection

    Fall detection is typically based on temporal and spectral analysis of multi-dimensional signals acquired from wearable sensors, such as tri-axial accelerometers and gyroscopes, attached to several parts of the human body. Our aim is to investigate where such wearable sensors should be placed in order to optimize the discrimination of falls from other Activities of Daily Living (ADLs). To this end, we perform feature extraction and classification based on data acquired from a single sensor unit placed on a specific body part each time. The investigated sensor locations include the head, chest, waist, wrist, thigh and ankle. Evaluation of several classification algorithms reveals the waist and the thigh as the optimal locations. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2017
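    Since the abstract outlines a per-location pipeline (feature extraction from a single sensor unit, then evaluation of several classifiers), a minimal sketch of such a pipeline follows. It assumes a hypothetical array `windows` of shape (n_windows, n_samples, 3) holding tri-axial accelerometer segments from one body location and binary `labels` (1 = fall, 0 = ADL); neither the study's dataset nor its exact feature set is reproduced.

        # Per-location evaluation sketch: simple temporal and spectral features,
        # compared across a few off-the-shelf classifiers with cross-validation.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        def extract_features(windows):
            """Temporal and spectral features per window (windows: n x T x 3)."""
            mag = np.linalg.norm(windows, axis=2)            # acceleration magnitude
            spec = np.abs(np.fft.rfft(mag, axis=1))          # magnitude spectrum
            return np.column_stack([
                windows.mean(axis=1), windows.std(axis=1),   # per-axis statistics
                mag.max(axis=1), mag.min(axis=1),            # impact peak / free-fall dip
                spec[:, 1:6],                                # low-frequency content
            ])

        def evaluate_location(windows, labels):
            """Compare classifiers on features extracted from one sensor location."""
            X = extract_features(windows)
            for clf in (KNeighborsClassifier(), SVC(), DecisionTreeClassifier()):
                score = cross_val_score(clf, X, labels, cv=5).mean()
                print(type(clf).__name__, round(score, 3))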

    Digital biomarkers of cognitive function

    No full text

    Roweis Discriminant Analysis: A Generalized Subspace Learning Method

    We present a new method which generalizes subspace learning based on eigenvalue and generalized eigenvalue problems. This method, Roweis Discriminant Analysis (RDA), is named after Sam Roweis, to whom the field of subspace learning owes significantly. RDA is an infinite family of algorithms in which Principal Component Analysis (PCA), Supervised PCA (SPCA), and Fisher Discriminant Analysis (FDA) are special cases. One of the extreme special cases, which we name Double Supervised Discriminant Analysis (DSDA), uses the labels twice; it is novel and has not appeared elsewhere. We propose a dual for RDA for some special cases. We also propose kernel RDA, generalizing kernel PCA, kernel SPCA, and kernel FDA, using both dual RDA and representation theory. Our theoretical analysis explains previously known facts such as why SPCA can use regression but FDA cannot, why PCA and SPCA have duals but FDA does not, why kernel PCA and kernel SPCA use the kernel trick but kernel FDA does not, and why PCA is the best linear method for reconstruction. Roweisfaces and kernel Roweisfaces are also proposed, generalizing eigenfaces, Fisherfaces, supervised eigenfaces, and their kernel variants. We also report experiments showing the effectiveness of RDA and kernel RDA on some benchmark datasets.
    Comment: This is the paper for the methods Roweis Discriminant Analysis (RDA), dual RDA, kernel RDA, and Roweisfaces. It is in memory of Sam Roweis (rest in peace), to whom subspace and manifold learning owes significantly.
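    The abstract frames PCA, SPCA and FDA as special cases of one eigenvalue/generalized-eigenvalue formulation. The sketch below illustrates only that shared formulation: both PCA and FDA directions arise from a generalized eigenproblem S1 w = λ S2 w for different choices of scatter matrices. It does not implement RDA itself, whose specific parameterization is defined in the paper.

        # Generalized-eigenvalue view of linear subspace learning, shown on iris.
        import numpy as np
        from scipy.linalg import eigh
        from sklearn.datasets import load_iris

        def generalized_subspace(S1, S2, n_components):
            """Top eigenvectors of the symmetric-definite pencil (S1, S2)."""
            vals, vecs = eigh(S1, S2)
            return vecs[:, np.argsort(vals)[::-1][:n_components]]

        X, y = load_iris(return_X_y=True)
        Xc = X - X.mean(axis=0)
        S_T = Xc.T @ Xc                                   # total scatter
        S_W = sum((X[y == c] - X[y == c].mean(axis=0)).T
                  @ (X[y == c] - X[y == c].mean(axis=0))
                  for c in np.unique(y))                  # within-class scatter
        S_B = S_T - S_W                                   # between-class scatter

        W_pca = generalized_subspace(S_T, np.eye(X.shape[1]), 2)   # PCA: S2 = identity
        W_fda = generalized_subspace(S_B, S_W, 2)                  # FDA: S2 = S_W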

    InVis: A Tool for Interactive Visual Data Analysis

    We present InVis, a tool to visually analyse data by interactively shaping a two-dimensional embedding of it. Traditionally, embedding techniques focus on finding one fixed embedding, which emphasizes a single aspect of the data. In contrast, our application enables the user to explore the structures of a dataset by observing and controlling a projection of it. Ultimately, it provides a way to search for and find an embedding that emphasizes the aspects the user desires to highlight.
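    As a rough illustration of steering an embedding rather than accepting a fixed one, the sketch below refits a linear 2D projection by least squares after a few "control" samples are pinned to user-chosen positions. This is a conceptual stand-in only, not the algorithm used in InVis; the data, control indices and target positions are invented for the example.

        # Steerable 2D embedding sketch: pin a few samples, refit the projection.
        import numpy as np

        def fit_projection(X_control, targets_2d):
            """Least-squares linear map sending control samples near their 2D targets."""
            P, *_ = np.linalg.lstsq(X_control, targets_2d, rcond=None)
            return P                                      # shape (n_features, 2)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))                     # toy dataset
        control_idx = [0, 1, 2]                           # samples the user drags
        targets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        P = fit_projection(X[control_idx], targets)
        embedding = X @ P                                 # updated 2D view of all points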