
    An Adaptive Threshold in Mammalian Neocortical Evolution

    Expansion of the neocortex is a hallmark of human evolution. However, it remains an open question which adaptive mechanisms facilitated its expansion. Here we show, using gyrencephaly index (GI) and other physiological and life-history data for 102 mammalian species, that gyrencephaly is an ancestral mammalian trait. We provide evidence that the evolution of a highly folded neocortex, as observed in humans, requires the traversal of a threshold of 10^9 neurons, and that species above and below the threshold exhibit a bimodal distribution of physiological and life-history traits, establishing two phenotypic groups. We identify, using discrete mathematical models, proliferative divisions of progenitors in the basal compartment of the developing neocortex as evolutionarily necessary and sufficient for generating a fourteen-fold increase in daily prenatal neuron production and thus traversal of the neuronal threshold. We demonstrate that the length of the neurogenic period, rather than any novel progenitor type, is sufficient to distinguish cortical neuron number between species within the same phenotypic group. Comment: Currently under review; 38 pages, 5 figures, 13 supplementary figures, 2 tables.
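
    As a rough illustration of why proliferative (self-amplifying) divisions in the basal compartment matter, the toy Python sketch below compares daily neuron output with and without basal amplification. The doubling model and all parameter values are hypothetical and only illustrate the amplification effect; this is not the paper's discrete model.

```python
# Toy illustration (not the paper's model): proliferative divisions in the basal
# compartment multiply the pool of neuron-producing progenitors, so a few rounds
# of doubling yield an order-of-magnitude jump in daily neuron output.
# All parameter values are hypothetical.

def daily_neuron_output(apical_progenitors, proliferative_rounds,
                        neurons_per_neurogenic_division=2):
    """Neurons produced per day if each apical progenitor seeds a basal lineage
    that doubles `proliferative_rounds` times before dividing neurogenically."""
    basal_pool = apical_progenitors * (2 ** proliferative_rounds)
    return basal_pool * neurons_per_neurogenic_division

base = daily_neuron_output(apical_progenitors=1_000_000, proliferative_rounds=0)
amplified = daily_neuron_output(apical_progenitors=1_000_000, proliferative_rounds=4)

print(f"no basal amplification : {base:.2e} neurons/day")
print(f"4 proliferative rounds : {amplified:.2e} neurons/day")
print(f"fold increase          : {amplified / base:.0f}x")  # 16x with 4 doublings; the abstract reports ~14x
```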

    Patterns of eye-movements when Male and Female observers judge female attractiveness, body fat and waist-to-hip ratio

    Behavioural studies of the perceptual cues for female physical attractiveness have suggested two potentially important features: body fat distribution (the waist-to-hip ratio or WHR) and overall body fat (often estimated by the body mass index or BMI). However, none of these studies tell us directly which regions of the stimulus images inform observers’ judgments. Therefore, we recorded the eye-movements of 3 groups of 10 male observers and 3 groups of 10 female observers while they rated a set of 46 photographs of female bodies. The first sets of observers rated the images for attractiveness, the second sets rated for body fat and the third sets for WHR. If WHR and/or body fat are used to judge attractiveness, then observers rating attractiveness should look at those areas of the body which allow assessment of these features, and they should look at the same areas when directly asked to estimate WHR and body fat. We were therefore able to compare the fixation patterns for the explicit judgments with those for attractiveness judgments, and infer which features were used for attractiveness. Prior to group analysis of the eye-movement data, the locations of individual eye fixations were transformed into a common reference space to permit comparisons of fixation density at high resolution across all stimuli. This manipulation allowed us to use spatial statistical analysis techniques to show that: 1) observers’ fixations for attractiveness and body fat clustered in the central and upper abdomen and chest, but not the pelvic or hip areas, consistent with the finding that WHR had little influence over attractiveness judgments; 2) the pattern of fixations for attractiveness ratings was very similar to the fixation patterns for body fat judgments; 3) the fixations for WHR ratings were significantly different from those for attractiveness and body fat.
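
    A minimal sketch of one common way to turn fixations (already warped into a common reference space) into a smoothed density map for spatial comparison between rating conditions. This is not necessarily the authors' exact pipeline, and the coordinates, image size and smoothing width below are placeholders.

```python
# Minimal sketch (not the authors' exact pipeline): build a smoothed fixation
# density map from fixation coordinates already warped into a common
# body-centred reference space. Coordinates and image size are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density(fix_x, fix_y, width=400, height=800, sigma_px=20):
    """2D histogram of fixation positions, Gaussian-smoothed and normalised to sum to 1."""
    hist, _, _ = np.histogram2d(fix_y, fix_x,
                                bins=[height, width],
                                range=[[0, height], [0, width]])
    density = gaussian_filter(hist, sigma=sigma_px)
    return density / density.sum()

# Example: compare two rating conditions with a simple pixel-wise difference map
# (a real analysis would add a permutation or cluster-based test on top).
rng = np.random.default_rng(0)
attractiveness = fixation_density(rng.uniform(150, 250, 500), rng.uniform(200, 400, 500))
whr = fixation_density(rng.uniform(120, 280, 500), rng.uniform(350, 550, 500))
difference_map = attractiveness - whr  # positive where attractiveness fixations dominate
print("peak |difference|:", np.abs(difference_map).max())
```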

    Body size estimation in women with anorexia nervosa and healthy controls using 3D avatars

    A core feature of anorexia nervosa is an over-estimation of body size. However, quantifying this over-estimation has been problematic, as existing methodologies introduce a series of artefacts and inaccuracies in the stimuli used for judgements of body size. To overcome these problems, we have: (i) taken 3D scans of 15 women who have symptoms of anorexia (referred to henceforth as anorexia spectrum disorders, ANSD) and 15 healthy control women, (ii) used a 3D modelling package to build avatars from the scans, (iii) manipulated the body shapes of these avatars to reflect biometrically accurate, continuous changes in body mass index (BMI), and (iv) used these personalised avatars as stimuli to allow the women to estimate their body size. The results show that women who are currently receiving treatment for ANSD over-estimate their body size, and this over-estimation rapidly increases as their own BMI increases. By contrast, the women acting as healthy controls can accurately estimate their body size irrespective of their own BMI. This study demonstrates the viability of combining 3D scanning and CGI techniques to create personalised, realistic avatars of individual patients to directly assess their body image perception.
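
    One simple way to quantify the reported pattern is to regress estimated BMI on actual BMI within each group and compare the fit against the identity line. The sketch below uses synthetic data shaped to mirror the abstract's description; it does not reproduce the study's results.

```python
# Illustrative sketch (synthetic data, not the study's results): quantify body
# size over-estimation by regressing estimated BMI on actual BMI per group and
# comparing slope and intercept against the identity line.
import numpy as np

def fit_line(actual_bmi, estimated_bmi):
    """Least-squares slope and intercept of estimated vs. actual BMI."""
    slope, intercept = np.polyfit(actual_bmi, estimated_bmi, deg=1)
    return slope, intercept

rng = np.random.default_rng(1)
actual = rng.uniform(16, 25, 15)

# Hypothetical patterns mirroring the abstract: controls track the identity line,
# while the ANSD group over-estimates more steeply as their own BMI increases.
controls = actual + rng.normal(0, 0.5, 15)
ansd = actual + 0.6 * (actual - 16) + rng.normal(0, 0.5, 15)

for name, estimated in [("controls", controls), ("ANSD", ansd)]:
    slope, intercept = fit_line(actual, estimated)
    print(f"{name:8s} slope={slope:.2f}, intercept={intercept:.2f} "
          "(identity line: slope=1, intercept=0)")
```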

    On Body Mass Index Analysis from Human Visual Appearance

    In the past few decades, overweight and obesity have spread widely, reaching epidemic proportions. Generally, a person is classified as overweight using the body mass index (BMI). Beyond being a measure of body fat, BMI is also a risk factor for many diseases, such as cardiovascular diseases, cancers and diabetes. BMI is therefore important for personal health monitoring and medical research. Currently, BMI is measured in person with dedicated devices, so there is an urgent need for more convenient preventive tools. This work investigates the feasibility of analyzing BMI from human visual appearance, including 2-dimensional (2D) and 3-dimensional (3D) body and face data. Motivated by health science studies which have shown that anthropometric measures, such as waist-to-hip ratio and waist circumference, are indicators of obesity, we analyze body weight from frontal-view human body images. A framework is developed for body weight analysis from body images, along with computation methods for five anthropometric features that characterize body weight. Then, we study BMI estimation from 3D data by measuring the correlation between the estimated body volume and BMI, and develop an efficient BMI computation method which estimates body weight and height for normally dressed people in 3D space. We also intensively study BMI estimation from frontal-view face images via two key aspects: facial representation extraction and BMI estimator learning. First, we investigate the characteristics and performance of different facial representation extraction methods in three designed experiments. Then we study visual BMI estimation from facial images using a two-stage learning framework: BMI-related facial features are learned in the first stage, and, to address the ambiguity of BMI labels, a label-distribution-based BMI estimator is proposed for the second stage. The experimental results show that this framework improves performance step by step. Finally, to address the challenges posed by BMI data and labels, we integrate feature learning and estimator learning in one convolutional neural network (CNN). A label assignment matching scheme is proposed which further improves BMI estimation from face images.
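
    For reference, the quantities discussed reduce to simple formulas: BMI is weight in kilograms divided by height in metres squared, and the 3D analysis correlates estimated body volume with BMI. The sketch below assumes an average body density to convert volume to weight, which is a simplification rather than the thesis's actual estimation method.

```python
# Minimal sketch of the quantities discussed: BMI from weight and height, and the
# correlation between estimated body volume and BMI. The volume-to-weight
# conversion assumes an average body density (a simplification).
import numpy as np

def bmi(weight_kg, height_m):
    """Standard body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def volume_bmi_correlation(volumes_litres, heights_m, body_density_kg_per_l=1.01):
    """Estimate weight from body volume via an assumed average density, then
    correlate the volumes with the resulting BMI values."""
    weights = np.asarray(volumes_litres) * body_density_kg_per_l
    bmis = bmi(weights, np.asarray(heights_m))
    return np.corrcoef(volumes_litres, bmis)[0, 1]

print(f"BMI for 70 kg at 1.75 m: {bmi(70, 1.75):.1f}")  # 22.9
print("volume-BMI correlation on toy data:",
      round(volume_bmi_correlation([60, 75, 90], [1.60, 1.75, 1.80]), 2))
```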

    Facial Image Analysis for Body Mass Index, Makeup and Identity

    The principal aim of facial image analysis in computer vision is to extract valuable information (e.g., age, gender, ethnicity, and identity) by interpreting perceived electronic signals from face images. In this dissertation, we develop facial image analysis systems for body mass index (BMI) prediction, makeup detection, and facial identity recognition under makeup changes and BMI variations. BMI is a commonly used measure of body fatness. In the first part of this thesis, we study BMI-related topics. First, we develop a computational method to predict BMI from face images automatically, formulating BMI prediction from facial features as a machine vision problem. Three regression methods, including least squares estimation, Gaussian processes for regression, and support vector regression, are employed to predict the BMI value. Our preliminary results show that it is feasible to develop a computational system for BMI prediction from face images. Secondly, we address the influence of BMI changes on face identity. Both synthesized and real face images are assembled as databases to facilitate our study. Empirically, we found that large BMI alterations can significantly reduce the matching accuracy of a face recognition system. We then study whether the influence of BMI changes can be reduced to improve face recognition performance; the partial least squares (PLS) method is applied for this purpose. Experimental results show the feasibility of developing algorithms to address the influence of facial adiposity variations, caused by BMI changes, on face recognition. Makeup can noticeably alter facial appearance. In the second part of this thesis, we deal with the influence of makeup on face identity. Makeup detection must be performed first to address this influence. Four categories of features are proposed to characterize facial makeup cues in our study, including skin color tone, skin smoothness, texture, and highlight. A patch selection scheme and discriminative mapping are presented to enhance the performance of makeup detection. Secondly, we study dual attributes from makeup and non-makeup faces separately to reflect, at a semantic level, facial appearance changes caused by makeup. Cross-makeup attribute classification and accuracy change analysis are performed to divide dual attributes into four categories according to different makeup effects. To develop a face recognition system that is robust to facial makeup, the PLS method is applied to features extracted from local patches. We also propose a dual-attribute-based method for face verification, in which shared dual attributes are used to measure facial similarity rather than directly matching low-level features. Experimental results demonstrate the feasibility of eliminating the influence of makeup on face recognition. In summary, the contributions of this dissertation center on developing facial image analysis systems that deal effectively with newly emerged topics, i.e., BMI prediction, makeup detection, and the recognition of face identity under makeup and BMI changes. In particular, to the best of our knowledge, the BMI-related topics, i.e., BMI prediction, the influence of BMI changes on face recognition, and face recognition robust to BMI changes, are first explorations for the biometrics community.
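
    A hedged sketch of the regression step only (facial feature extraction is out of scope here): support vector regression, one of the three regressors named in the abstract, fitted on placeholder feature vectors and BMI labels.

```python
# Sketch of the BMI regression stage with placeholder inputs: 200 faces
# represented by 64-dimensional feature vectors (e.g., the output of some
# facial representation extractor) and BMI labels. Real features/labels would
# come from an annotated face dataset.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
features = rng.normal(size=(200, 64))
bmi_labels = 18 + 12 * rng.random(200)

X_train, X_test, y_train, y_test = train_test_split(
    features, bmi_labels, test_size=0.25, random_state=0)

# Support vector regression, one of the three regressors named in the abstract.
model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X_train, y_train)
print(f"MAE on held-out faces: "
      f"{mean_absolute_error(y_test, model.predict(X_test)):.2f} BMI units")
```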

    Machine Learning Approaches to Human Body Shape Analysis

    Soft biometrics, biomedical sciences, and many other fields of study pay particular attention to the geometric description of the human body and its variations. Despite multiple contributions, interest remains particularly high given the non-rigid nature of the human body, which can assume different poses and numerous shapes due to variable body composition. Unfortunately, a well-known costly requirement in data-driven machine learning, and particularly in human-based analysis, is the availability of data in the form of geometric information (body measurements) paired with related vision information (natural images, 3D meshes, etc.). We introduce a computer graphics framework able to generate thousands of synthetic human body meshes, representing a population of individuals with stratified information: gender, Body Fat Percentage (BFP), anthropometric measurements, and pose. This contribution permits an extensive analysis of different bodies in different poses, avoiding the demanding and expensive acquisition process. We design a virtual environment that takes advantage of the generated bodies to infer the body surface area (BSA) from a single view. The framework permits simulating the acquisition process of newly introduced RGB-D devices, disentangling different noise components (sensor noise, optical distortion, body part occlusions). Common geometric descriptors in soft biometrics, as well as in biomedical sciences, are based on body measurements. Unfortunately, as we prove, these descriptors are not pose invariant, constraining their usability to controlled scenarios. We introduce a differential geometry approach that treats body pose variations as isometric transformations of the body surface and body composition changes as covariant with the body surface area. This setting permits the use of the Laplace-Beltrami operator on the 2D body manifold, describing the body with a compact, efficient, and pose-invariant representation. We design a neural network architecture able to infer important body semantics from spectral descriptors, closing the gap between abstract spectral features and traditional measurement-based indices. Studying the manifold of body shapes, we propose an innovative generative adversarial model that learns the space of body shapes. The method permits generating new bodies with unseen geometries as a walk in the latent space, constituting a significant advantage over traditional generative methods.
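
    A simplified sketch of a spectral shape descriptor in the spirit described above: the smallest eigenvalues of a mesh Laplacian. For brevity, a combinatorial graph Laplacian over mesh edges stands in for a proper cotangent Laplace-Beltrami discretisation, and the mesh is a toy tetrahedron rather than a body scan.

```python
# Simplified spectral descriptor: smallest non-trivial eigenvalues of a mesh
# Laplacian ("shape-DNA"-style). A combinatorial graph Laplacian stands in for
# the cotangent Laplace-Beltrami discretisation used on real body surfaces.
import numpy as np

def spectral_descriptor(num_vertices, edges, k=3):
    """First k non-trivial eigenvalues of the graph Laplacian L = D - A."""
    adjacency = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        adjacency[i, j] = adjacency[j, i] = 1.0
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1:k + 1]  # drop the trivial zero eigenvalue

# Toy mesh: a tetrahedron (4 vertices, 6 edges) instead of a full body scan.
tetrahedron_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print("spectral descriptor:", spectral_descriptor(4, tetrahedron_edges, k=2))
```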