
    Alphabetic Letter Identification: Effects of perceivability, similarity, and bias

    The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of the literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification.
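One classical way to combine bias and similarity into identification probabilities is Luce's similarity-choice model, in which P(respond j | stimulus i) is proportional to b_j * s_ij. The sketch below illustrates that idea only; the letters, similarities, and biases are invented for illustration and are not estimates from the paper.

```python
# A minimal sketch of a similarity-choice model for letter confusion:
# P(respond j | stimulus i) ∝ b_j * s_ij, where b_j is the bias toward
# response j and s_ij the similarity between letters i and j (s_ii = 1).
# All numbers are illustrative, not estimates from the experiments.

letters = ["O", "Q", "X"]

# Hypothetical pairwise similarities (symmetric, self-similarity = 1).
sim = {
    ("O", "O"): 1.0, ("O", "Q"): 0.6, ("O", "X"): 0.1,
    ("Q", "Q"): 1.0, ("Q", "X"): 0.1,
    ("X", "X"): 1.0,
}

def s(i, j):
    return sim.get((i, j), sim.get((j, i)))

# Hypothetical response biases.
bias = {"O": 1.0, "Q": 0.8, "X": 1.2}

def p_identify(stimulus):
    """Choice probabilities over responses for a given stimulus letter."""
    weights = {j: bias[j] * s(stimulus, j) for j in letters}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

probs = p_identify("O")
# "O" and "Q" are highly similar, so P(respond "Q" | "O") is substantial,
# while the dissimilar "X" is rarely reported despite its larger bias.
```

Estimating perceivability on top of this, as the paper does, would add a stimulus-side parameter; the point here is only how bias and similarity jointly shape a confusion matrix.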

    Detection of postural transitions using machine learning

    The purpose of this project is to study human activity recognition and to prepare a dataset, collected from volunteers performing various activities, that can be used to construct a machine learning model which accurately identifies each volunteer's posture transitions. This report presents the problem definition, the equipment used, previous work in human activity recognition, and the resolution of the problem along with results. It also describes the steps taken: building the dataset; pre-processing the data by applying filters and various window-length techniques; splitting the data into training and testing sets; performing feature selection and feature extraction; and finally selecting the model for training and testing that provides the highest accuracy and the lowest misclassification rate. The tools used were a laptop running MATLAB and Excel for data processing, model training, and feature selection, and Media Player Classic for labelling. The data were collected using an Inertial Measurement Unit containing three tri-axial accelerometers, a gyroscope, a magnetometer, and a pressure sensor; only the accelerometers, the gyroscope, and the pressure sensor were used for this project. The sensor was made by members of the Technical Research Centre for Dependency Care and Autonomous Living (CETpD) at the UPC-ETSEIB campus. The results obtained have been satisfactory, and the objectives set have been fulfilled.
    There is room for improvement by expanding the scope of the project, such as detecting chronic disorders, providing posture-based statistics to the end user, or achieving a higher sensitivity to posture transitions by using better features and enlarging the dataset with more volunteers.
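The windowing and feature-extraction stage of such a pipeline can be sketched on synthetic accelerometer data. The window length, overlap, and variance threshold below are illustrative choices, not the report's actual parameters, and the signal is simulated rather than taken from the CETpD sensor.

```python
# A minimal sketch of sliding-window feature extraction for posture
# transition detection, on synthetic vertical-axis accelerometer data.
# Window length, overlap, and the threshold rule are illustrative.
import math
import random

random.seed(0)

# Synthetic acceleration (in g): sitting (~1.0), then a sit-to-stand
# transition (a brief half-sine spike), then standing.
signal = ([1.0 + random.gauss(0, 0.02) for _ in range(100)]
          + [1.0 + 0.5 * math.sin(math.pi * t / 20) for t in range(20)]
          + [1.0 + random.gauss(0, 0.02) for _ in range(100)])

def windows(x, length=20, step=10):
    """Sliding windows with 50% overlap."""
    for start in range(0, len(x) - length + 1, step):
        yield x[start:start + length]

def features(w):
    """Mean and standard deviation of one window."""
    mean = sum(w) / len(w)
    var = sum((v - mean) ** 2 for v in w) / len(w)
    return mean, math.sqrt(var)

# A transition shows up as a high-variance window; static postures do not.
flags = [features(w)[1] > 0.1 for w in windows(signal)]
```

A real pipeline would feed such per-window features (and many others) into a trained classifier rather than a fixed threshold, but the data flow is the same.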

    Humans and deep networks largely agree on which kinds of variation make object recognition harder

    View-invariant object recognition is a challenging problem, which has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g. 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best algorithms for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs at view-invariant object recognition using the same images and controlling for both the kind of transformation and its magnitude. We used four object categories, and images were rendered from 3D computer models. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position. This suggests that humans recognize objects mainly through 2D template matching, rather than by constructing 3D object models, and that DCNNs are not too unreasonable models of human feed-forward vision. Also, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performances. We thus argue that these variations should be controlled in the image datasets used in vision research.
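The 2D-template-matching interpretation can be made concrete with a toy example: sliding a stored template over the image absorbs translation for free, while an in-plane rotation of the object lowers the best match score unless rotated templates are also stored. The pattern and scores below are illustrative only.

```python
# A minimal sketch of why 2D template matching handles position changes
# easily but not rotations. The toy "L" pattern is illustrative.

# 3x3 "L"-shaped binary template.
T = [[1, 0, 0],
     [1, 0, 0],
     [1, 1, 1]]

def paste(pattern, size=7, row=0, col=0):
    """Place a 3x3 pattern in an otherwise empty size x size image."""
    img = [[0] * size for _ in range(size)]
    for r in range(3):
        for c in range(3):
            img[row + r][col + c] = pattern[r][c]
    return img

def rot90(p):
    """Rotate a square pattern 90 degrees."""
    return [list(row) for row in zip(*p[::-1])]

def best_match(img, tpl):
    """Best overlap score of tpl slid over every position in img."""
    n = len(img)
    best = 0
    for r in range(n - 2):
        for c in range(n - 2):
            score = sum(tpl[i][j] * img[r + i][c + j]
                        for i in range(3) for j in range(3))
            best = max(best, score)
    return best

shifted = paste(T, row=3, col=2)         # same shape, new position
rotated = paste(rot90(T), row=3, col=2)  # shape rotated in plane

# The sliding search absorbs the translation (perfect score of 5),
# but the rotated object only partially overlaps the stored template.
```

Position invariance comes for free from the search; rotation invariance would require storing (or learning) additional views, which matches the ranking of difficulties reported above.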

    An Overview of Classifier Fusion Methods

    A number of classifier fusion methods have recently been developed, opening an alternative approach that can potentially improve classification performance. As there is little theory of information fusion itself, we are currently faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in the future. A taxonomy of fusion methods, which tries to bring some order into the existing “pudding of diversities”, is also provided.
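Two of the simplest fusion schemes surveyed in this literature are majority voting over hard decisions and weighted averaging of class posteriors. The sketch below implements both; the three "classifiers" and their weights are invented for illustration.

```python
# A minimal sketch of two common classifier fusion schemes:
# majority voting (hard labels) and weighted averaging (soft outputs).
from collections import Counter

def majority_vote(labels):
    """Fuse hard decisions by taking the most frequent label."""
    return Counter(labels).most_common(1)[0][0]

def weighted_average(posteriors, weights):
    """Fuse soft outputs: weighted mean of per-class probabilities,
    renormalised to sum to one."""
    n_classes = len(posteriors[0])
    fused = [sum(w * p[k] for p, w in zip(posteriors, weights))
             for k in range(n_classes)]
    total = sum(fused)
    return [f / total for f in fused]

# Three hypothetical classifiers on a two-class problem.
votes = ["A", "B", "A"]
probs = [[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]]
fused = weighted_average(probs, weights=[0.5, 0.2, 0.3])
# Both schemes favour class "A" here, but soft fusion also preserves
# a confidence estimate that hard voting discards.
```

More elaborate methods in the taxonomy (e.g. trained combiners) replace the fixed weights with parameters learned from validation data.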

    Cramér-Rao sensitivity limits for astronomical instruments: implications for interferometer design

    Multiple-telescope interferometry for high-angular-resolution astronomical imaging in the optical–IR–far-IR bands is currently a topic of great scientific interest. The fundamentals that govern the sensitivity of direct-detection instruments and interferometers are reviewed, and the rigorous sensitivity limits imposed by the Cramér–Rao theorem are discussed. Numerical calculations of the Cramér–Rao limit are carried out for a simple example, and the results are used to support the argument that interferometers that have more compact instantaneous beam patterns are more sensitive, since they extract more spatial information from each detected photon. This argument favors arrays with a larger number of telescopes, and it favors all-on-one beam-combining methods as compared with pairwise combination.
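The compact-beam argument can be sketched numerically. For Poisson photon counts with mean rates λ_i(θ) on a detector, the Fisher information for a parameter θ is I(θ) = Σ_i (dλ_i/dθ)² / λ_i, and the Cramér–Rao bound is var(θ̂) ≥ 1/I(θ). The Gaussian beam shapes and numbers below are illustrative, not the paper's actual example.

```python
# A minimal numerical sketch: for a fixed photon budget, a more compact
# (smaller-sigma) Gaussian beam pattern carries more Fisher information
# about a source-position parameter theta, hence a tighter Cramér–Rao
# bound 1/I. Beam shapes and numbers are illustrative.
import math

def fisher_info(sigma, n_photons=1000.0, theta=0.0):
    """Fisher information for theta with a Gaussian beam of width sigma."""
    xs = [i * 0.01 - 5.0 for i in range(1001)]   # detector coordinates
    lam = [math.exp(-(x - theta) ** 2 / (2 * sigma ** 2)) for x in xs]
    norm = n_photons / sum(lam)                  # fix the photon budget
    lam = [norm * v for v in lam]
    # d(lambda_i)/d(theta) = lambda_i * (x_i - theta) / sigma^2
    # (the shared normalisation is translation-invariant on this grid)
    return sum((v * (x - theta) / sigma ** 2) ** 2 / v
               for x, v in zip(xs, lam) if v > 0)

I_compact = fisher_info(sigma=0.3)
I_broad = fisher_info(sigma=1.0)
# Compact beam -> more position information per photon -> smaller
# attainable variance (1/I) for any unbiased position estimator.
```

Analytically this reduces to I ≈ N/σ² for N detected photons, which is the quantitative form of "more compact beams extract more spatial information per photon".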

    Quantum-inspired Machine Learning on high-energy physics data

    Tensor Networks, a numerical tool originally designed for simulating quantum many-body systems, have recently been applied to solve Machine Learning problems. Exploiting a tree tensor network, we apply a quantum-inspired machine learning technique to a very important and challenging big-data problem in high-energy physics: the analysis and classification of data produced by the Large Hadron Collider at CERN. In particular, we present how to effectively classify so-called b-jets, jets originating from b-quarks in proton-proton collisions in the LHCb experiment, and how to interpret the classification results. We exploit the Tensor Network approach to select important features and adapt the network geometry based on information acquired in the learning process. Finally, we show how to adapt the tree tensor network to achieve optimal precision or fast response time without the need to repeat the learning process. These results pave the way to the implementation of high-frequency real-time applications, a key ingredient needed, among others, for current and future LHCb event classification able to trigger events at the tens-of-MHz scale.
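The structure of such a classifier can be sketched in a few lines: each input feature is embedded as a 2-dimensional "qubit-like" vector, pairs of vectors are contracted up a binary tree of small tensors, and the root is read out as class scores. Everything below is an illustrative toy with random (untrained) tensors, not the trained LHCb model.

```python
# A minimal sketch of a tree tensor network (TTN) classifier for four
# features and two classes. Tensors are random here; in practice they
# would be trained. All shapes and names are illustrative.
import math
import random

random.seed(1)
D = 2  # local (bond) dimension

def embed(x):
    """Map a feature in [0, 1] to a 2-vector (a common qubit-style map)."""
    return [math.cos(math.pi * x / 2), math.sin(math.pi * x / 2)]

def rand_tensor():
    """A D x D x D node tensor: contracts two child vectors into one parent."""
    return [[[random.uniform(-1, 1) for _ in range(D)]
             for _ in range(D)] for _ in range(D)]

def contract(T, v1, v2):
    """parent[k] = sum_ij T[i][j][k] * v1[i] * v2[j]."""
    return [sum(T[i][j][k] * v1[i] * v2[j]
                for i in range(D) for j in range(D)) for k in range(D)]

def ttn_scores(features, nodes, top):
    """Contract a 4-leaf binary tree down to per-class scores."""
    vecs = [embed(f) for f in features]
    left = contract(nodes[0], vecs[0], vecs[1])
    right = contract(nodes[1], vecs[2], vecs[3])
    root = contract(nodes[2], left, right)
    # top: D x n_classes readout matrix at the root.
    return [sum(root[i] * top[i][c] for i in range(D)) for c in range(2)]

nodes = [rand_tensor() for _ in range(3)]
top = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(D)]
scores = ttn_scores([0.1, 0.7, 0.3, 0.9], nodes, top)  # e.g. 4 jet features
```

Because the network factorises into small local tensors, bond dimensions can be truncated after training to trade precision for evaluation speed, which is the mechanism behind the fast-response adaptation described above.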