99 research outputs found

    Graph Embedding via High Dimensional Model Representation for Hyperspectral Images

    Learning the manifold structure of remote sensing images is of paramount relevance for modeling and understanding processes, as well as for encapsulating the high dimensionality in a reduced set of informative features for subsequent classification, regression, or unmixing. Manifold learning methods have shown excellent performance for hyperspectral image (HSI) analysis but, unless specifically designed, they cannot provide an explicit embedding map that is readily applicable to out-of-sample data. A common assumption used to deal with this problem is that the transformation between the high-dimensional input space and the (typically low-dimensional) latent space is linear. This is a particularly strong assumption, especially for hyperspectral images, given the well-known nonlinear nature of the data. To address this problem, a manifold learning method based on High Dimensional Model Representation (HDMR) is proposed, which provides a nonlinear embedding function to project out-of-sample data into the latent space. The proposed method is compared to other manifold learning methods and to their linear counterparts, and achieves promising classification accuracy on a representative set of hyperspectral images. Comment: This is an accepted version of work to be published in the IEEE Transactions on Geoscience and Remote Sensing. 11 pages
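    The core idea, learning an explicit map so that unseen pixels can be embedded without re-running the manifold learner, can be illustrated with a minimal sketch. The sketch below is a generic out-of-sample extension (Isomap plus a kernel ridge regression map), not the paper's HDMR formulation, and the data shapes are hypothetical.
```python
# Hedged sketch: a generic out-of-sample extension for manifold learning,
# NOT the paper's HDMR method. We learn an explicit nonlinear map from
# spectra to Isomap coordinates so new pixels can be projected directly.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X_train = rng.random((500, 103))   # hypothetical HSI pixels (500 spectra, 103 bands)
X_new = rng.random((100, 103))     # hypothetical out-of-sample pixels

# 1) Learn a low-dimensional manifold embedding of the training spectra.
iso = Isomap(n_neighbors=10, n_components=5)
Z_train = iso.fit_transform(X_train)

# 2) Fit an explicit nonlinear map: spectra -> latent coordinates.
emb_map = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1e-1)
emb_map.fit(X_train, Z_train)

# 3) Project unseen pixels without re-running the manifold learner.
Z_new = emb_map.predict(X_new)
print(Z_new.shape)  # (100, 5)
```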

    Large-scale Machine Learning in High-dimensional Datasets


    Aesthetic preference for art emerges from a weighted integration over hierarchically structured visual features in the brain

    It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Moreover, little is known about how such preferences are actually constructed in the brain. Here we developed and tested a computational framework to gain an understanding of how the human brain constructs aesthetic value. We show that it is possible to explain human preferences for a piece of art based on an analysis of features present in the image. This was achieved by analyzing the visual properties of drawings and photographs by multiple means, ranging from image statistics extracted by computer vision tools and subjective human ratings of attributes to features from a deep convolutional neural network. Crucially, it is possible to predict subjective value ratings not only within but also across individuals, speaking to the possibility that much of the variance in human visual preference is shared across individuals. Neuroimaging data revealed that preference computations occur in the brain by means of a graded hierarchical representation of lower- and higher-level features in the visual system. These features are in turn integrated to compute an overall subjective preference in the parietal and prefrontal cortex. Our findings suggest that, rather than being idiosyncratic, human preferences for art can be explained at least in part as a product of a systematic neural integration over the underlying visual features of an image. This work not only advances our understanding of the brain-wide computations underlying value construction but also brings new mechanistic insights to the study of visual aesthetics and art appreciation.
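    The central modeling idea, a weighted integration over image features that predicts subjective ratings and generalizes across individuals, can be sketched with a generic linear model. The feature matrix, ratings, and subject grouping below are hypothetical placeholders, not the authors' actual pipeline.
```python
# Hedged sketch of the general idea: predict value ratings as a weighted
# linear integration of image features, and test generalization across
# individuals with leave-subjects-out cross-validation. Data are synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n_images, n_subjects, n_features = 200, 6, 40
# Hypothetical features: e.g. color/contrast statistics and DCNN activations.
features = rng.random((n_images * n_subjects, n_features))
ratings = features @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_images * n_subjects)
subject_id = np.repeat(np.arange(n_subjects), n_images)  # which subject gave each rating

# Weighted feature integration = linear model; grouping folds by subject
# asks whether the learned weights transfer to unseen individuals.
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
cv = GroupKFold(n_splits=n_subjects)
scores = cross_val_score(model, features, ratings, groups=subject_id, cv=cv, scoring="r2")
print("across-subject R^2:", scores.mean())
```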

    Talking About Uncertainty

    In the first article we review existing theories of uncertainty. We devote particular attention to the relation between metacognition, uncertainty, and probabilistic expectations. We also analyse the role of natural language and communication in the emergence and resolution of states of uncertainty. We hypothesize that agents feel uncertainty in relation to their levels of expected surprise, which depend on the probabilistic-expectation gaps elicited during communication processes. Under this framework, above-tolerance levels of expected surprise can be considered informative signals. These signals can be used to coordinate, at the group and social level, processes of revision of probabilistic expectations. When above-tolerance levels of uncertainty are made explicit by agents through natural language, in communication networks and public information arenas, uncertainty acquires the systemic role of a coordinating device for the revision of probabilistic expectations. The second article seeks to demonstrate empirically that we can crowdsource and aggregate decentralized signals of uncertainty, i.e. expected surprise, coming from market agents and civil society by using the web, and more specifically Twitter, as an information source that contains the wisdom of the crowds concerning the degree of uncertainty of targeted communities or groups of agents at a given moment in time. We extract and aggregate these signals to construct a set of civil-society uncertainty proxies by country. We model the dependence among our civil-society uncertainty indexes and existing policy and market uncertainty proxies, highlighting contagion channels and differences in their reactiveness to real-world events that occurred in 2016, such as the EU referendum and the US presidential election. In the third article, we propose a new instrument, called the Worldwide Uncertainty Network, to analyse uncertainty contagion dynamics across time and areas of the world. Such an instrument can be used to identify the systemic importance of countries in terms of their role in the social percolation of civil-society uncertainty. Our results show that civil-society uncertainty signals coming from the web may be fruitfully used to improve our understanding of uncertainty contagion and amplification mechanisms among countries and between markets, civil society, and political systems.
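    A rough sketch of the kind of pipeline the second and third articles describe is given below, with a naive keyword score standing in for the thesis' uncertainty-signal extraction and a plain correlation matrix standing in for its dependence and contagion models; all names and data are synthetic.
```python
# Hedged sketch, loosely in the spirit of the text rather than the thesis'
# actual methodology: score tweets with a keyword-based uncertainty count,
# aggregate into a daily per-country index, and inspect cross-country
# dependence with a correlation matrix. All data below are synthetic.
import numpy as np
import pandas as pd

UNCERTAINTY_TERMS = ("uncertain", "unsure", "doubt", "unexpected", "surprise")

def uncertainty_score(text: str) -> int:
    """Naive proxy: count uncertainty-related terms in a tweet."""
    t = text.lower()
    return sum(t.count(term) for term in UNCERTAINTY_TERMS)

# Synthetic tweet stream: random dates, countries, and short texts.
rng = np.random.default_rng(0)
vocab = ["markets look uncertain", "no doubt at all", "unexpected result",
         "unsure about the vote", "business as usual"]
tweets = pd.DataFrame({
    "date": rng.choice(pd.date_range("2016-06-01", periods=30, freq="D"), size=500),
    "country": rng.choice(["UK", "US", "IT"], size=500),
    "text": rng.choice(vocab, size=500),
})
tweets["score"] = tweets["text"].map(uncertainty_score)

# Daily civil-society uncertainty proxy per country.
index = tweets.groupby(["country", "date"])["score"].mean().unstack("country")

# Crude dependence snapshot between country indexes.
print(index.corr())
```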