196 research outputs found

    Approximate Inference in Graphical Models using Tensor Decompositions

    Open Access (publisher's version), 20 pp.

    Eye movements explain decodability during perception and cued attention in MEG

    Open Access (publisher's version), 10 pp.

    Eye movements are an integral part of human perception, but they can induce artifacts in many magnetoencephalography (MEG) and electroencephalography (EEG) studies. For this reason, investigators try to minimize eye movements and remove these artifacts from their data using different techniques. When these artifacts are not purely random, but consistent across certain stimuli or conditions, the possibility arises that eye movements are actually inducing effects in the MEG signal. It remains unclear how much influence eye movements can have on observed effects in MEG, since most MEG studies lack a control analysis to verify whether an effect found in the MEG signal is induced by eye movements. Here, we find that we can decode stimulus location from eye movements in two different stages of a working-memory match-to-sample task, stages that encompass different areas of research typically done with MEG. This means that an observed MEG effect might be (partly) due to eye movements rather than any true neural correlate. We suggest how to check for eye movement effects in the data and make suggestions on how to prevent eye movement artifacts from occurring in the first place.
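    As a hedged illustration of the kind of control analysis this abstract calls for, the sketch below decodes stimulus location from gaze positions with a nearest-centroid classifier. All function names and data are hypothetical and synthetic; the study's actual decoding pipeline is not specified here, and a real analysis would use recorded eye-tracker traces with cross-validation.

    ```python
    # Hypothetical sketch: decoding stimulus location (left vs. right)
    # from mean gaze position, as a control analysis for MEG studies.
    # All data below are synthetic.

    def mean_gaze(trace):
        """Average (x, y) gaze position over one trial."""
        xs = [p[0] for p in trace]
        ys = [p[1] for p in trace]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def fit_centroids(trials, labels):
        """Per-class centroid of the trials' mean gaze positions."""
        groups = {}
        for trace, lab in zip(trials, labels):
            groups.setdefault(lab, []).append(mean_gaze(trace))
        return {lab: (sum(x for x, _ in pts) / len(pts),
                      sum(y for _, y in pts) / len(pts))
                for lab, pts in groups.items()}

    def decode(trace, centroids):
        """Assign a trial to the class with the nearest centroid."""
        gx, gy = mean_gaze(trace)
        return min(centroids,
                   key=lambda lab: (gx - centroids[lab][0]) ** 2
                                 + (gy - centroids[lab][1]) ** 2)

    # Synthetic training trials: gaze drifts toward the cued side.
    train = [[(-2.0, 0.1), (-1.5, 0.0)], [(-1.8, -0.1), (-2.2, 0.2)],
             [(2.1, 0.0), (1.7, 0.1)], [(1.9, -0.2), (2.3, 0.0)]]
    labels = ["left", "left", "right", "right"]
    cents = fit_centroids(train, labels)
    print(decode([(-1.9, 0.0), (-2.1, 0.1)], cents))  # -> left
    ```

    If location can be decoded this way above chance, an MEG effect in the same trials warrants an eye-movement control analysis.
    
    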

    Social & Indigenous Entrepreneurship

    This lecture discusses social entrepreneurship, students, and remote indigenous Australia. Researching, teaching, or learning about entrepreneurship is very different from researching, teaching, or learning about functional disciplines such as accounting and finance. Functional disciplines generally have a well-defined skill set; this is not the case with entrepreneurship, which is as much a mindset as it is a set of activities. Identifying opportunities, learning about them, and taking action all take place within a context.

    Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions

    Open Access (publisher's version), 14 pp.

    Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm, with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment, by controlling the environmental complexity, and the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 x 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativeness and interpretability that may depend on the number of implanted electrodes.
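    To illustrate the resolution constraint discussed in this abstract, here is a minimal sketch (not the study's implementation) of simulated prosthetic vision: a binary edge map is max-pooled onto a coarse phosphene grid, so each simulated electrode lights up if any boundary pixel falls in its cell. The tiny image and 2 x 2 grid are illustrative stand-ins for the 26 x 26 resolution mentioned above.

    ```python
    # Minimal, assumed sketch of simulated prosthetic vision: an edge
    # map is down-sampled to a coarse "electrode" grid by max-pooling.

    def phosphene_grid(edges, n):
        """Max-pool a binary edge map (list of rows) onto an n x n grid."""
        h, w = len(edges), len(edges[0])
        grid = [[0] * n for _ in range(n)]
        for i in range(h):
            for j in range(w):
                if edges[i][j]:
                    grid[i * n // h][j * n // w] = 1
        return grid

    # A tiny 4x4 "scene" with a vertical surface boundary.
    edges = [[0, 1, 0, 0],
             [0, 1, 0, 0],
             [0, 1, 0, 0],
             [0, 1, 0, 0]]
    print(phosphene_grid(edges, 2))  # boundary survives at 2x2: [[1, 0], [1, 0]]
    ```

    Coarser grids discard more scene detail, which is why removing background texture first can help at low electrode counts.
    
    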

    PhenoScore quantifies phenotypic variation for rare genetic diseases by combining facial analysis with other clinical features using a machine-learning framework

    Several molecular and phenotypic algorithms exist that establish genotype-phenotype correlations, including facial recognition tools. However, no unified framework exists that investigates both facial data and other phenotypic data directly from individuals. We developed PhenoScore: an open-source, artificial-intelligence-based phenomics framework that combines facial recognition technology with Human Phenotype Ontology data analysis to quantify phenotypic similarity. Here we show PhenoScore's ability to recognize distinct phenotypic entities by establishing recognizable phenotypes for 37 of 40 investigated syndromes against clinical features observed in individuals with other neurodevelopmental disorders, and show it is an improvement on existing approaches. PhenoScore provides predictions for individuals with variants of unknown significance and enables sophisticated genotype-phenotype studies by testing hypotheses on possible phenotypic (sub)groups. PhenoScore confirmed previously known phenotypic subgroups caused by variants in the same gene for SATB1, SETBP1 and DEAF1, and provides objective clinical evidence for two distinct ADNP-related phenotypes that had already been established functionally.

    PhenoScore is an open-source machine-learning tool that combines facial image recognition with Human Phenotype Ontology data for genetic syndrome identification without genomic data, with applications to subgroup analysis and classification of variants of unknown significance.
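    PhenoScore itself is a machine-learning framework; as a loose, hypothetical illustration of combining the two data sources it draws on, the sketch below merges an HPO-term overlap score with a precomputed facial-similarity score via a weighted average. The weighting, the facial score, and the term lists are invented for the example and do not reflect PhenoScore's actual model.

    ```python
    # Hypothetical sketch of combining phenotype modalities: an HPO-term
    # overlap score and a (made-up, precomputed) facial-similarity score
    # are merged into a single phenotypic similarity. PhenoScore's real
    # machine-learning framework is more sophisticated than this average.

    def hpo_similarity(terms_a, terms_b):
        """Jaccard overlap between two sets of HPO term IDs."""
        a, b = set(terms_a), set(terms_b)
        return len(a & b) / len(a | b)

    def combined_score(terms_a, terms_b, facial_sim, w=0.5):
        """Weighted mix of HPO overlap and facial similarity (both in [0, 1])."""
        return w * hpo_similarity(terms_a, terms_b) + (1 - w) * facial_sim

    patient1 = ["HP:0001250", "HP:0000252", "HP:0001263"]
    patient2 = ["HP:0001250", "HP:0001263", "HP:0000750"]
    print(combined_score(patient1, patient2, facial_sim=0.8))  # 0.5*0.5 + 0.5*0.8
    ```
    
    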

    Influence of Conversion and Anastomotic Leakage on Survival in Rectal Cancer Surgery: A Retrospective Cross-sectional Study


    Computational foundations of natural intelligence

    Open Access (publisher's version), 24 pp.

    New developments in AI and neuroscience are revitalizing the quest to understand natural intelligence, offering insight into how to equip machines with human-like capabilities. This paper reviews some of the computational principles relevant for understanding natural intelligence and, ultimately, achieving strong AI. After reviewing basic principles, a variety of computational modeling approaches is discussed. Subsequently, I concentrate on the use of artificial neural networks as a framework for modeling cognitive processes. The paper ends by outlining some of the challenges that remain to fulfill the promise of machines that show human-like intelligence.

    Stability Conditions for L1/Lp Regularization

    Open Access (preprint), 2 pp.
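    The entry above carries no abstract here, so the following is only a generic sketch of the penalty the title refers to: an L1/Lp (group-sparsity) norm sums the Lp norms of coefficient groups, which tends to drive entire groups of coefficients to zero. The grouping and values are purely illustrative.

    ```python
    # Generic illustration of an L1/Lp group-sparsity penalty:
    # the sum over groups of each group's Lp norm.

    def l1_lp(groups, p):
        """Sum over groups of the Lp norm of each group's coefficients."""
        return sum(sum(abs(x) ** p for x in g) ** (1.0 / p) for g in groups)

    coeffs = [[3.0, 4.0], [0.0, 0.0]]   # one active group, one zeroed out
    print(l1_lp(coeffs, 2))             # L1/L2 norm -> 5.0
    ```

    With p = 2 this is the familiar group-lasso penalty; the zeroed-out group contributes nothing, which is the behavior group sparsity exploits.
    
    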

    Unsupervised feature learning improves prediction of human brain activity in response to natural images

    Open Access (publisher's version), 12 pp.

    Encoding and decoding in functional magnetic resonance imaging has recently emerged as an area of research to noninvasively characterize the relationship between stimulus features and human brain activity. To overcome the challenge of formalizing what stimulus features should modulate single voxel responses, we introduce a general approach for making directly testable predictions of single voxel responses to statistically adapted representations of ecologically valid stimuli. These representations are learned from unlabeled data without supervision. Our approach is validated using a parsimonious computational model of (i) how early visual cortical representations are adapted to statistical regularities in natural images and (ii) how populations of these representations are pooled by single voxels. This computational model is used to predict single voxel responses to natural images and identify natural images from stimulus-evoked multiple voxel responses. We show that statistically adapted low-level sparse and invariant representations of natural images better span the space of early visual cortical representations and can be more effectively exploited in stimulus identification than hand-designed Gabor wavelets. Our results demonstrate the potential of our approach to better probe unknown cortical representations.
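    As a hedged sketch of the encoding/identification setup this abstract describes (not the paper's actual model), each simulated voxel pools image features through a weight vector, and a stimulus is identified as the candidate whose predicted voxel responses best match the observed ones. The features and weights below are invented; in the paper the representations are learned from unlabeled natural images.

    ```python
    # Illustrative encoding-model sketch: voxels pool image features
    # linearly, and identification picks the candidate image whose
    # predicted voxel responses best fit the observed responses.
    # All features and weights here are made up.

    def predict_voxels(features, weights):
        """Predicted response of each voxel: dot(weight row, features)."""
        return [sum(w * f for w, f in zip(row, features)) for row in weights]

    def identify(observed, candidates, weights):
        """Index of the candidate whose predictions fit best (least squares)."""
        def err(feats):
            pred = predict_voxels(feats, weights)
            return sum((p - o) ** 2 for p, o in zip(pred, observed))
        return min(range(len(candidates)), key=lambda i: err(candidates[i]))

    weights = [[1.0, 0.0], [0.5, 0.5]]                 # 2 voxels x 2 features
    candidates = [[1.0, 0.0], [0.0, 1.0]]              # feature vector per image
    observed = predict_voxels(candidates[1], weights)  # image 1 was shown
    print(identify(observed, candidates, weights))     # -> 1
    ```

    The paper's claim is then that learned sparse, invariant features make this identification step work better than hand-designed Gabor wavelets.
    
    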

    L1/Lp Regularization of Differences

    Open Access (preprint), 10 pp.