
    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage, over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and then dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170 down to only the detailed information important for perceptual decisions over the P300.
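    Classification image techniques are a form of reverse correlation; the minimal sketch below shows the core computation under a Bubbles-style random-sampling assumption. The function name, mask format and z-scoring step are illustrative, not the authors' exact pipeline.

```python
import numpy as np

def classification_image(masks, correct):
    """Estimate a classification image by reverse correlation.

    masks:   (n_trials, H, W) random sampling masks shown on each trial
    correct: (n_trials,) booleans, True where the observer categorized correctly
    """
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    # Pixels revealed more often on correct than on incorrect trials get
    # positive weights: they carry diagnostic information for the category.
    ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    # Standardize so pixel values are comparable across observers.
    return (ci - ci.mean()) / ci.std()
```

    Computed against behavioral accuracy, or against single-trial EEG amplitudes in each time bin, such maps can trace how the diagnostic features evolve between the N170 and the P300.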

    Profiling user activities with minimal traffic traces

    Understanding user behavior is essential to personalize and enrich a user's online experience. While there are significant benefits to be accrued from the pursuit of personalized services based on fine-grained behavioral analysis, care must be taken to address user privacy concerns. In this paper, we consider the use of web traces with truncated URLs - each URL is trimmed to contain only the web domain - for this purpose. While such truncation removes the fine-grained sensitive information, it also strips the data of many features that are crucial to the profiling of user activity. We show how to overcome this severe handicap and filter the URLs representing a user activity out of the noisy network traffic trace (including advertisements, spam, analytics and web scripts) with high accuracy. This activity profiling with truncated URLs enables network operators to provide personalized services while mitigating privacy concerns, since only truncated traffic traces are stored and shared. To offset the accuracy loss due to truncation, our statistical methodology leverages specialized features extracted from groups of consecutive URLs that represent a micro user action such as a web click or a chat reply, which we call bursts. These bursts, in turn, are detected by a novel algorithm based on our observed characteristics of the inter-arrival time of HTTP records. We present an extensive experimental evaluation on a real dataset of mobile web traces, consisting of more than 130 million records representing the browsing activities of 10,000 users over a period of 30 days. Our results show that the proposed methodology achieves around 90% accuracy in segregating URLs representing user activities from non-representative URLs.
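    The abstract does not spell out the burst-detection algorithm itself; the sketch below shows the simplest inter-arrival-time formulation consistent with the description, with the gap threshold as a placeholder assumption rather than the paper's tuned value.

```python
from datetime import timedelta

def split_into_bursts(records, gap=timedelta(seconds=2)):
    """Group time-ordered HTTP records into bursts.

    records: iterable of (timestamp, domain) pairs sorted by timestamp
    gap:     inter-arrival threshold (assumed value); records closer
             than this are taken to belong to the same micro user action
    """
    bursts, current = [], []
    for ts, domain in records:
        if current and ts - current[-1][0] > gap:
            bursts.append(current)  # gap exceeded: close the current burst
            current = []
        current.append((ts, domain))
    if current:
        bursts.append(current)
    return bursts
```

    Burst-level features (size, duration, domain mix) can then feed the classifier that separates activity URLs from background traffic.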

    Gender differences in the temporal voice areas

    There is not only evidence for behavioral differences in voice perception between female and male listeners, but also recent suggestions for differences in neural correlates between genders. The fMRI functional voice localizer (comprising a univariate analysis contrasting stimulation with vocal versus non-vocal sounds) is known to give robust estimates of the temporal voice areas (TVAs). However, there is growing interest in employing multivariate analysis approaches to fMRI data (e.g. multivariate pattern analysis; MVPA). The aim of the current study was to localize voice-related areas in both female and male listeners and to investigate whether brain maps differ depending on the gender of the listener. After a univariate analysis, a random effects analysis was performed on female (n = 149) and male (n = 123) listeners and contrasts between them were computed. In addition, MVPA with a whole-brain searchlight approach was implemented, and classification maps were entered into a second-level permutation-based random effects model using statistical non-parametric mapping (SnPM; Nichols & Holmes 2002). Gender differences were found only in the MVPA. Identified regions were located in the middle part of the middle temporal gyrus (bilateral) and the middle superior temporal gyrus (right hemisphere). Our results suggest differences in classifier performance between genders in response to the voice localizer, with higher classification accuracy from local BOLD signal patterns in several temporal-lobe regions in female listeners.
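    As an illustration of the searchlight idea (not the study's actual pipeline, which used SnPM for group-level inference), the toy sketch below scores every in-mask voxel by the cross-validated classification accuracy obtained from its local neighbourhood of BOLD values; the cubic neighbourhood is a simplifying assumption in place of a true sphere.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def searchlight_map(data, labels, mask, radius=2):
    """Toy whole-brain searchlight.

    data:   (n_trials, X, Y, Z) array of trial-wise activation maps
    labels: (n_trials,) condition labels (e.g. vocal vs. non-vocal)
    mask:   (X, Y, Z) boolean brain mask
    """
    scores = np.zeros(mask.shape)
    for x, y, z in np.argwhere(mask):
        # Cubic neighbourhood as a cheap stand-in for a spherical searchlight.
        sphere = data[:,
                      max(x - radius, 0):x + radius + 1,
                      max(y - radius, 0):y + radius + 1,
                      max(z - radius, 0):z + radius + 1]
        features = sphere.reshape(len(data), -1)
        # Cross-validated accuracy of a linear classifier on the local pattern.
        scores[x, y, z] = cross_val_score(LinearSVC(), features,
                                          labels, cv=5).mean()
    return scores
```

    The resulting per-subject accuracy maps are what a second-level permutation test would then compare between listener groups.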

    Implicit Smartphone User Authentication with Sensors and Contextual Machine Learning

    Authentication of smartphone users is important because a lot of sensitive data is stored in the smartphone and the smartphone is also used to access various cloud data and services. However, smartphones are easily stolen or co-opted by an attacker. Beyond the initial login, it is highly desirable to re-authenticate end-users who continue to access security-critical services and data. Hence, this paper proposes a novel authentication system for implicit, continuous authentication of the smartphone user based on behavioral characteristics, leveraging the sensors already ubiquitously built into smartphones. We propose novel context-based authentication models to differentiate the legitimate smartphone owner from other users. We systematically show how to achieve high authentication accuracy with different design alternatives in sensor and feature selection, machine learning techniques, context detection and multiple devices. Our system achieves excellent authentication performance with 98.1% accuracy, negligible system overhead and less than 2.4% battery consumption.
    Comment: Published in the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN) 2017. arXiv admin note: substantial text overlap with arXiv:1703.0352
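    The abstract does not list the paper's concrete features, contexts or models; the sketch below shows one generic way such a sensor-based owner-vs-others model can be built. The window statistics and the random forest are illustrative choices, not the authors' design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """Summary statistics for one sensor window.

    window: (n_samples, n_channels) array, e.g. accelerometer and
            gyroscope axes sampled over a few seconds
    """
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def train_authenticator(windows, labels):
    """Fit a binary owner-vs-others classifier from labeled sensor windows.

    windows: list of (n_samples, n_channels) arrays
    labels:  1 for the legitimate owner, 0 for other users
    """
    X = np.vstack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```

    At run time, each incoming window would be scored with clf.predict_proba, flagging the session for re-authentication when the owner probability falls below a threshold.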

    Eye-movements in implicit artificial grammar learning

    Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests, and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole-trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
    Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behavior; Vetenskapsrådet; Swedish Dyslexia Foundation
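    To make the whole-trial versus first-pass distinction concrete, the sketch below computes the three reported measures for a single interest area; the fixation record format and the bounding-box region are illustrative assumptions about the data layout.

```python
def region_measures(fixations, region):
    """Eye-movement measures for one interest area on one trial.

    fixations: time-ordered list of dicts with 'x', 'y', 'duration' (ms)
    region:    (x_min, y_min, x_max, y_max) bounding box of the interest area
    """
    x0, y0, x1, y1 = region
    hits = [f for f in fixations if x0 <= f['x'] <= x1 and y0 <= f['y'] <= y1]
    return {
        'dwell_time': sum(f['duration'] for f in hits),        # whole-trial
        'n_fixations': len(hits),                              # whole-trial
        'first_fixation': hits[0]['duration'] if hits else 0,  # first-pass
    }
```

    Under this formulation, the study's pattern corresponds to dwell_time and n_fixations differing between grammatical and ungrammatical sequences while first_fixation does not.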