
    Improved AURA k-Nearest Neighbour approach

    The k-Nearest Neighbour (kNN) approach is a widely used technique for pattern classification. Ranked distance measurements to a known sample set determine the classification of unknown samples. Though effective, kNN, like most classification methods, does not scale well with increasing sample size, because the unknown query must be compared against every other sample in the data space. To make this operation scalable, we apply AURA to the kNN problem. AURA is a highly scalable, associative-memory-based binary neural network intended for high-speed approximate search-and-match operations on large unstructured datasets. Previous work applied AURA methods to this problem as a scalable but approximate kNN classifier. This paper continues that work by using AURA in conjunction with kernel-based input vectors to create a fast, scalable kNN classifier while improving recall accuracy to levels comparable with standard kNN implementations.
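    The brute-force baseline the abstract contrasts against can be sketched as follows (an illustrative minimal kNN, not the paper's AURA implementation):

    ```python
    from collections import Counter

    def knn_classify(query, samples, labels, k=3):
        """Classify `query` by majority vote among its k nearest samples.

        Brute force: the query is compared against every known sample,
        which is exactly the scaling problem the AURA approach targets.
        """
        # Squared Euclidean distance from the query to every sample.
        dists = [sum((q - s) ** 2 for q, s in zip(query, sample))
                 for sample in samples]
        # Rank all samples by distance and keep the k closest.
        nearest = sorted(range(len(samples)), key=dists.__getitem__)[:k]
        # Majority vote over the labels of the k nearest neighbours.
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]
    ```

    Every query costs one distance computation per stored sample, so classification time grows with the dataset rather than staying constant.
    
    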

    A binary neural k-nearest neighbour technique

    K-Nearest Neighbour (k-NN) is a widely used technique for classifying and clustering data. k-NN is effective but is often criticised for its polynomial run-time growth, as it calculates the distance to every other record in the data set for each record in turn. This paper evaluates a novel k-NN classifier, built from binary neural networks, with linear growth and faster run-time. The binary neural approach uses robust encoding to map standard ordinal, categorical and real-valued data sets onto a binary neural network, which then uses high-speed pattern matching to recall the k best matches. We compare various configurations of the binary approach to a conventional approach for memory overheads, training speed, retrieval speed and retrieval accuracy. We demonstrate the superior performance of the binary approach with respect to speed and memory requirements compared to the standard approach, and we pinpoint the optimal configurations.
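    The idea of mapping real-valued data to binary codes and matching at high speed can be sketched as below. The thermometer encoding and Hamming-distance matching here are illustrative assumptions, not the paper's actual "robust encoding" or binary neural network internals:

    ```python
    from collections import Counter

    def thermometer_encode(value, lo, hi, bits=8):
        """Map a real value in [lo, hi] to a binary thermometer code.

        Illustrative stand-in for the paper's robust encoding: nearby
        values share most of their set bits, so bit-level matching
        approximates distance in the original space.
        """
        level = round((value - lo) / (hi - lo) * bits)
        level = max(0, min(bits, level))
        return [1] * level + [0] * (bits - level)

    def hamming_knn(query_bits, encoded, labels, k=3):
        """Recall the k best matches by Hamming distance over binary codes."""
        dists = [sum(q != b for q, b in zip(query_bits, code))
                 for code in encoded]
        nearest = sorted(range(len(encoded)), key=dists.__getitem__)[:k]
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]
    ```

    Bitwise Hamming comparisons are cheap on hardware, which is where the speed advantage of binary-encoded matching comes from.
    
    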

    A Binary Neural Network Framework for Attribute Selection and Prediction

    In this paper, we introduce an implementation of the attribute selection algorithm Correlation-based Feature Selection (CFS), integrated with our k-nearest neighbour (k-NN) framework. Binary neural networks underpin our k-NN and allow us to create a unified framework for attribute selection, prediction and classification. We apply the framework to a real-world application of predicting bus journey times from traffic sensor data and show how attribute selection can both speed up our k-NN and increase the prediction accuracy by removing noise and redundant attributes from the data.
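    CFS scores a subset of attributes as highly correlated with the class yet weakly correlated with each other. A minimal sketch of that merit function (standard CFS, not this paper's particular implementation) might look like:

    ```python
    from math import sqrt

    def pearson(x, y):
        """Pearson correlation between two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / sqrt(vx * vy) if vx and vy else 0.0

    def cfs_merit(subset, features, target):
        """CFS merit: k * avg(feature-class corr) /
        sqrt(k + k*(k-1) * avg(feature-feature corr)).

        Rewards subsets that predict the class while penalising
        redundancy among the chosen attributes.
        """
        k = len(subset)
        rcf = sum(abs(pearson(features[i], target)) for i in subset) / k
        if k == 1:
            return rcf
        pairs = [(i, j) for i in subset for j in subset if i < j]
        rff = sum(abs(pearson(features[i], features[j])) for i, j in pairs) / len(pairs)
        return k * rcf / sqrt(k + k * (k - 1) * rff)
    ```

    A greedy forward search over `cfs_merit` yields the selected attribute subset; dropping noisy or redundant attributes shrinks the distance computation, which is how selection speeds up k-NN.
    
    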

    On the origins of the Finnis-Sinclair potentials

    I trace back the origins of the famous Finnis-Sinclair potentials. These potentials mimic the results of tight-binding theory through their use of the square-root embedding function. From the tentative beginnings of tight binding in the 1930s up to 1984 or so, some of the famous names involved are Bloch, Seitz, Montroll, Friedel, Cyrot-Lackmann and Ducastelle, to name just a few. The application of the method of moments to the description of densities of states, and its connexion to the physics of closed paths linking nearest-neighbour interacting atoms, helped to formalize Friedel's rectangular band model for the d electrons in transition metals. Extension from perfectly periodic structures to defective ones could not be but a slow process, due to the change of paradigm for solid-state scientists and to the necessary caution to be paid to self-consistency. The British school also contributed significantly in the 1980s. Computer progress and pragmatism helped the field move from mainly analytical developments to numerical experiments (another change of paradigm). I also digress on various not-so-well-known historical points of interest to this story.

    Development and validation of a classification algorithm to diagnose and differentiate spontaneous episodic vertigo syndromes: results from the DizzyReg patient registry

    BACKGROUND Spontaneous episodic vertigo syndromes, namely vestibular migraine (VM) and Menière's disease (MD), are difficult to differentiate, even for an experienced clinician. In the presence of complex diagnostic information, automated systems can support human decision making. Recent developments in machine learning might facilitate bedside diagnosis of VM and MD. METHODS Data of this study originate from the prospective patient registry of the German Centre for Vertigo and Balance Disorders, a specialized tertiary treatment center at the University Hospital Munich. The classification task was to differentiate cases of VM and MD from other vestibular disease entities. Deep Neural Networks (DNN) and Boosted Decision Trees (BDT) were used for classification. RESULTS A total of 1357 patients were included (mean age 52.9, SD 15.9, 54.7% female), 9.9% with MD and 15.6% with VM. DNN models yielded an accuracy of 98.4 ± 0.5%, a precision of 96.3 ± 3.9%, and a sensitivity of 85.4 ± 3.9% for VM, and an accuracy of 98.0 ± 1.0%, a precision of 90.4 ± 6.2% and a sensitivity of 89.9 ± 4.6% for MD. BDT yielded an accuracy of 84.5 ± 0.5%, a precision of 51.8 ± 6.1% and a sensitivity of 16.9 ± 1.7% for VM, and an accuracy of 93.3 ± 0.7%, a precision of 76.0 ± 6.7% and a sensitivity of 41.7 ± 2.9% for MD. CONCLUSION The correct diagnosis of spontaneous episodic vestibular syndromes is challenging in clinical practice. Modern machine learning methods might be the basis for developing systems that assist practitioners and clinicians in their daily treatment decisions.
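    The per-class precision and sensitivity figures reported above are standard one-vs-rest metrics. A minimal sketch of how such figures are computed from predictions (generic metric code, not the study's pipeline):

    ```python
    def class_metrics(y_true, y_pred, positive):
        """Precision and sensitivity (recall) for one class, treating
        that class as positive and all others as negative, as done
        per syndrome (VM, MD) in one-vs-rest evaluation."""
        tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
        fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
        fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0    # of predicted positives, how many were right
        sensitivity = tp / (tp + fn) if tp + fn else 0.0  # of true positives, how many were found
        return precision, sensitivity
    ```

    With rare classes (9.9% MD, 15.6% VM), overall accuracy can stay high even when sensitivity is low, which is visible in the BDT results.
    
    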

    Context Aware Computing for The Internet of Things: A Survey

    As we are moving towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth of sensor deployments over the past decade and has predicted a significant increase in the growth rate in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven to be successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals at the beginning. Then we provide an in-depth analysis of the context life cycle. We evaluate a subset of projects (50) which represent the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and some possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and IoT. Our goal is not only to analyse, compare and consolidate past research work but also to appreciate their findings and discuss their applicability towards the IoT.
    Comment: IEEE Communications Surveys & Tutorials Journal, 201