
    Visual scene recognition with biologically relevant generative models

    This research develops visual object categorization methodologies based on machine learning techniques and biologically inspired generative models of visual scene recognition. Modelling the statistical variability of visual patterns, in the space of features extracted from them by an appropriate low-level signal-processing technique, is an important problem for both human and machine vision. To study it, we examine in detail two recent probabilistic models of vision: a simple multivariate Gaussian model, as suggested by Karklin and Lewicki (2009), and the restricted Boltzmann machine (RBM) proposed by Hinton (2002). Both models have been widely used for visual object classification and scene analysis tasks. This research shows that, on their own, these generative models are not well suited to the classification task, and proposes the Fisher kernel as a means of inducing discriminative power into them. Our empirical results on standard benchmark data sets reveal that the classification performance of these generative models can be boosted close to state-of-the-art performance by drawing a Fisher kernel from compact generative models, which predicts the data labels in a fraction of the total computation time. We compare the proposed technique with other distance-based and kernel-based classifiers to demonstrate the computational efficiency of Fisher kernels. To the best of our knowledge, a Fisher kernel has not previously been drawn from an RBM, so the work presented in this thesis is novel in both its idea and its application to vision problems.
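The Fisher kernel construction the abstract refers to can be sketched for the Gaussian case it mentions. The idea is to map each input to the gradient of the model's log-likelihood with respect to its parameters (the Fisher score) and take an inner product weighted by the inverse Fisher information. For the mean of a Gaussian with fixed covariance, the Fisher information is the inverse covariance, so the kernel has a closed form. The function names and toy data below are illustrative, not taken from the thesis:

```python
import numpy as np

def fisher_score(x, mu, sigma_inv):
    # Gradient of log N(x; mu, Sigma) with respect to the mean mu.
    return sigma_inv @ (x - mu)

def fisher_kernel(x, y, mu, sigma, sigma_inv):
    # K(x, y) = g(x)^T F^{-1} g(y). For the Gaussian mean parameter,
    # F = Sigma^{-1}, hence F^{-1} = Sigma.
    return fisher_score(x, mu, sigma_inv) @ sigma @ fisher_score(y, mu, sigma_inv)

# Toy data: fit the generative model by maximum likelihood, then kernelize.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
mu = X.mean(axis=0)
sigma = np.cov(X, rowvar=False)
sigma_inv = np.linalg.inv(sigma)

k01 = fisher_kernel(X[0], X[1], mu, sigma, sigma_inv)
```

The resulting Gram matrix can be handed to any kernel classifier (e.g. an SVM); drawing the score from an RBM instead replaces `fisher_score` with gradients of the RBM's log-likelihood with respect to its weights.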

    “I Can See the Forest for the Trees”: Examining Personality Traits with Transformers

    Our understanding of Personality and its structure is rooted in linguistic studies operating under the assumptions of the Lexical Hypothesis: personality characteristics that are important to a group of people will at some point be codified in their language, with the number of encoded representations of a characteristic indicating its importance. Qualitative and quantitative efforts in the dimension reduction of our lexicon throughout the mid-20th century played a vital role in the field’s eventual arrival at the widely accepted Five Factor Model (FFM). However, a number of conflicts regarding the breadth and structure of this model remain unresolved (cf. Hough, Oswald, & Ock, 2015). The present study sought to address such issues through previously unavailable language modeling techniques. The Distributional Semantic Hypothesis (DSH) argues that the meaning of words may be formed through some function of their co-occurrence with other words, and there is evidence that DSH-based techniques are cognitively valid, serving as a proxy for learned associations between stimuli (Günther et al., 2019). Given that Personality is often measured through self-report surveys, the present study proposed that a Personality measure be created directly from this source data, using large pre-trained Transformers (a type of neural network adept at encoding and decoding semantic representations from natural language). An inventory was constructed and administered, and response data were analyzed using partial correlation networks. This exploratory study identifies differences in the internal structure of trait-domains, while simultaneously demonstrating a quantitative approach to item creation and survey development.
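The partial correlation networks used to analyze the response data can be sketched as follows. One standard way to obtain them is via the precision (inverse covariance) matrix of the item responses: the partial correlation between items i and j, controlling for all other items, is the suitably normalized off-diagonal entry. This is a minimal sketch under that assumption; the function name and the response matrix are illustrative, not from the study:

```python
import numpy as np

def partial_correlations(X):
    # X: (n_respondents, n_items) matrix of survey responses.
    # Partial correlations from the precision matrix P = Cov(X)^{-1}:
    # pcorr_ij = -P_ij / sqrt(P_ii * P_jj), with 1s on the diagonal.
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy responses: 200 respondents, 4 items.
rng = np.random.default_rng(1)
responses = rng.normal(size=(200, 4))
network = partial_correlations(responses)
```

The off-diagonal entries of `network` are the edge weights of the item network; in applied work these estimates are usually regularized (e.g. graphical lasso) to sparsify the graph.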