
    Integrated information increases with fitness in the evolution of animats

    One of the hallmarks of biological organisms is their ability to integrate disparate information sources to optimize their behavior in complex environments. How this capability can be quantified and related to the functional complexity of an organism remains a challenging problem, in particular since organismal functional complexity is not well-defined. We present here several candidate measures that quantify information and integration, and study their dependence on fitness as an artificial agent ("animat") evolves over thousands of generations to solve a navigation task in a simple, simulated environment. We compare the ability of these measures to predict high fitness with more conventional information-theoretic processing measures. As the animat adapts by increasing its "fit" to the world, information integration and processing increase commensurately along the evolutionary line of descent. We suggest that the correlation of fitness with information integration and with processing measures implies that high fitness requires both information processing and information integration, but that information integration may be a better measure when the task requires memory. A correlation of measures of information integration (but also information processing) and fitness strongly suggests that these measures reflect the functional complexity of the animat, and that such measures can be used to quantify functional complexity even in the absence of fitness data. (Comment: 27 pages, 8 figures, one supplementary figure. Three supplementary video files available on request. Version commensurate with published text in PLoS Comput. Biol.)
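
    The abstract contrasts integrated-information measures with "more conventional information-theoretic processing measures." As a purely illustrative sketch (the binary state space, count table, and choice of measure are assumptions made here, not the paper's actual definitions), one such conventional measure is the mutual information between an animat's sensor and motor states, estimated from the empirical joint distribution of observed state pairs:

        # Toy sketch, not the authors' measures: mutual information between
        # sensor and motor states as a conventional "processing" measure.
        import numpy as np

        def mutual_information(joint_counts):
            """Mutual information (bits) from a joint count table over (sensor, motor) states."""
            joint = joint_counts / joint_counts.sum()         # counts -> probabilities
            ps = joint.sum(axis=1, keepdims=True)             # marginal over sensor states
            pm = joint.sum(axis=0, keepdims=True)             # marginal over motor states
            nz = joint > 0                                     # avoid log(0)
            return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pm)[nz])))

        # Hypothetical counts of co-occurring (sensor, motor) states along one lifetime.
        counts = np.array([[30.0, 5.0],
                           [4.0, 25.0]])
        print(mutual_information(counts))   # larger values = tighter sensor-motor coupling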

    A survey of energy drink consumption patterns among college students

    Background: Energy drink consumption has continued to gain in popularity since the 1997 debut of Red Bull, the current leader in the energy drink market. Although energy drinks are targeted to young adult consumers, there has been little research regarding energy drink consumption patterns among college students in the United States. The purpose of this study was to determine energy drink consumption patterns among college students, the prevalence and frequency of energy drink use for six situations (insufficient sleep, to increase energy in general, while studying, driving long periods of time, drinking with alcohol while partying, and to treat a hangover), and the prevalence of adverse side effects and dose effects among college energy drink users. Methods: Based on the responses from a 32-member college student focus group and a field test, a 19-item survey was used to assess energy drink consumption patterns of 496 randomly surveyed college students attending a state university in the Central Atlantic region of the United States. Results: Fifty-one percent of participants (n = 253) reported consuming greater than one energy drink each month in an average month for the current semester (defined as energy drink users). The majority of users consumed energy drinks for insufficient sleep (67%), to increase energy (65%), and to drink with alcohol while partying (54%). The majority of users consumed one energy drink in most situations, although using three or more was a common practice when drinking with alcohol while partying (49%). Weekly jolt-and-crash episodes were experienced by 29% of users, 22% reported ever having headaches, and 19% heart palpitations from consuming energy drinks. There was a significant dose effect only for jolt-and-crash episodes. Conclusion: Using energy drinks is a popular practice among college students for a variety of situations. Although for the majority of situations assessed users consumed one energy drink with a reported frequency of 1–4 days per month, many users consumed three or more when combining with alcohol while partying. Further, side effects from consuming energy drinks are fairly common, and a significant dose effect was found for jolt-and-crash episodes. Future research should identify whether college students recognize the amounts of caffeine present in the wide variety of caffeine-containing products they consume, the amounts of caffeine they consume in various situations, and the physical side effects associated with caffeine consumption.

    Multiple Classifier Systems for the Classification of Audio-Visual Emotional States

    Research activities in the field of human-computer interaction have increasingly addressed the integration of some form of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures, and the classification of human emotions should therefore be considered a multimodal pattern recognition problem. The aim of our paper is to investigate multiple classifier systems utilizing audio and visual features to classify human emotional states. To that end, a variety of features has been derived. From the audio signal the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP features have been used. In addition, two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power, and Valence as defined in the AVEC data set. As classifier architectures, multiple classifier systems are applied; these have been proven to be accurate and robust against missing and noisy data.
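
    As a hedged illustration of what a multiple classifier system with late fusion can look like (the feature dimensions, classifiers, and fusion rule below are assumptions for the sketch, not the architecture evaluated in the paper), one classifier can be trained per modality and their class-probability outputs averaged:

        # Illustrative late-fusion sketch with synthetic stand-in features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 200
        X_audio = rng.normal(size=(n, 40))    # stand-in for MFCC/RASTA-PLP feature vectors
        X_video = rng.normal(size=(n, 60))    # stand-in for form/motion feature vectors
        y = rng.integers(0, 2, size=n)        # binary emotional label, e.g. high/low Arousal

        clf_audio = SVC(probability=True).fit(X_audio, y)
        clf_video = RandomForestClassifier().fit(X_video, y)

        # Late fusion: average the posterior probabilities of the per-modality classifiers.
        proba = (clf_audio.predict_proba(X_audio) + clf_video.predict_proba(X_video)) / 2.0
        fused_prediction = proba.argmax(axis=1)   # fused class decision per sample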

    Qualia: The Geometry of Integrated Information

    According to the integrated information theory, the quantity of consciousness is the amount of integrated information generated by a complex of elements, and the quality of experience is specified by the informational relationships it generates. This paper outlines a framework for characterizing the informational relationships generated by such systems. Qualia space (Q) is a space having an axis for each possible state (activity pattern) of a complex. Within Q, each submechanism specifies a point corresponding to a repertoire of system states. Arrows between repertoires in Q define informational relationships. Together, these arrows specify a quale: a shape that completely and univocally characterizes the quality of a conscious experience. Φ, the height of this shape, is the quantity of consciousness associated with the experience. Entanglement measures how irreducible informational relationships are to their component relationships, specifying concepts and modes. Several corollaries follow from these premises. The quale is determined by both the mechanism and state of the system. Thus, two different systems having identical activity patterns may generate different qualia. Conversely, the same quale may be generated by two systems that differ in both activity and connectivity. Both active and inactive elements specify a quale, but elements that are inactivated do not. Also, the activation of an element affects experience by changing the shape of the quale. The subdivision of experience into modalities and submodalities corresponds to subshapes in Q. In principle, different aspects of experience may be classified as different shapes in Q, and the similarity between experiences reduces to similarities between shapes. Finally, specific qualities, such as the "redness" of red, while generated by a local mechanism, cannot be reduced to it, but require considering the entire quale. Ultimately, the present framework may offer a principled way for translating qualitative properties of experience into mathematics.
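
    For concreteness, one common formalization in the integrated information literature (stated here as an assumption about the underlying formalism, not as a quotation of this paper's definitions) expresses the information a mechanism in state x1 generates as the divergence of the repertoire it specifies over prior system states from the maximum-entropy (a priori) repertoire:

        \[
        \mathrm{ei}(X_1 = x_1) \;=\; D_{\mathrm{KL}}\!\left( p(X_0 \mid X_1 = x_1)\,\middle\|\,p^{\max}(X_0) \right)
        \;=\; \sum_{x_0} p(x_0 \mid x_1)\,\log_2 \frac{p(x_0 \mid x_1)}{p^{\max}(x_0)} .
        \]

    In this picture, each arrow in Q compares two such repertoires, and Φ summarizes how much of the generated information is irreducible to independent parts (conventionally assessed via a minimum-information partition).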

    Adaptive Gain Modulation in V1 Explains Contextual Modifications during Bisection Learning

    The neuronal processing of visual stimuli in primary visual cortex (V1) can be modified by perceptual training. Training in bisection discrimination, for instance, changes the contextual interactions in V1 elicited by parallel lines. Before training, two parallel lines inhibit each other's individual V1 responses. After bisection training, this inhibition turns into non-symmetric excitation while the bisection task is performed. Yet, the receptive field of the V1 neurons, evaluated with a single line, does not change during task performance. We present a model of recurrent processing in V1 in which the neuronal gain can be modulated by a global attentional signal. Perceptual learning mainly consists in strengthening this attentional signal, leading to a more effective gain modulation. The model reproduces both the psychophysical results on bisection learning and the modified contextual interactions observed in V1 during task performance. It makes several predictions, for instance that imagery training should improve performance, or that a slight stimulus wiggling can strongly affect the representation in V1 while performing the task. We conclude that strengthening a top-down induced gain increase can explain perceptual learning, and that this top-down signal can modify lateral interactions within V1 without significantly changing the classical receptive field of V1 neurons.
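
    A minimal toy sketch of the mechanism described above (the network sizes, weights, nonlinearity, and gain values are assumptions for illustration, not the authors' model): a recurrent rate network in which a global attentional signal g multiplicatively scales neuronal gain, so that strengthening g reshapes the contextual (recurrent) interactions without altering the feedforward receptive fields.

        # Toy recurrent rate model with multiplicative attentional gain.
        import numpy as np

        def steady_state(stimulus, W_ff, W_rec, g, steps=200, dt=0.1):
            """Iterate r <- r + dt * (-r + g * relu(W_ff @ stimulus + W_rec @ r))."""
            r = np.zeros(W_rec.shape[0])
            for _ in range(steps):
                drive = W_ff @ stimulus + W_rec @ r
                r = r + dt * (-r + g * np.maximum(drive, 0.0))
            return r

        rng = np.random.default_rng(1)
        W_ff = rng.normal(0.0, 0.5, size=(8, 4))           # fixed feedforward receptive fields
        W_rec = -0.1 * np.ones((8, 8)) + 0.1 * np.eye(8)   # weak lateral inhibition
        stimulus = np.array([1.0, 1.0, 0.0, 0.0])          # e.g. two parallel lines

        r_before = steady_state(stimulus, W_ff, W_rec, g=1.0)   # before training: weaker gain
        r_after = steady_state(stimulus, W_ff, W_rec, g=1.6)    # after training: stronger gain
        print(r_before.sum(), r_after.sum())   # stronger gain amplifies context-dependent responses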

    Task-Specific Codes for Face Recognition: How they Shape the Neural Representation of Features for Detection and Individuation

    The variety of ways in which faces are categorized makes face recognition challenging for both synthetic and biological vision systems. Here we focus on two face processing tasks, detection and individuation, and explore whether differences in task demands lead to differences both in the features most effective for automatic recognition and in the featural codes recruited by neural processing. Our study appeals to a computational framework characterizing the features representing object categories as sets of overlapping image fragments. Within this framework, we assess the extent to which task-relevant information differs across image fragments. Based on objective differences we find among task-specific representations, we test the sensitivity of the human visual system to these different face descriptions independently of one another. Both behavior and functional magnetic resonance imaging reveal effects elicited by objective task-specific levels of information. Behaviorally, recognition performance with image fragments improves with increasing task-specific information carried by different face fragments. Neurally, this sensitivity to the two tasks manifests as differential localization of neural responses across the ventral visual pathway. Fragments diagnostic for detection evoke larger neural responses than non-diagnostic ones in the right posterior fusiform gyrus and bilaterally in the inferior occipital gyrus. In contrast, fragments diagnostic for individuation evoke larger responses than non-diagnostic ones in the anterior inferior temporal gyrus. Finally, for individuation only, pattern analysis reveals sensitivity to task-specific information within the right "fusiform face area". Our results demonstrate that: 1) information diagnostic for face detection and individuation is roughly separable; 2) the human visual system is independently sensitive to both types of information; 3) neural responses differ according to the type of task-relevant information considered. More generally, these findings provide evidence for the computational utility and the neural validity of fragment-based visual representation and recognition.
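
    A hedged sketch of the fragment-based idea the abstract builds on (the patch size, stride, template-matching rule, and diagnosticity score are all assumptions for illustration, not the authors' pipeline): candidate fragments are overlapping image patches, and a fragment counts as task-diagnostic to the extent that it is detected far more often in one class of images than in the other.

        # Illustrative fragment extraction and a simplified diagnosticity score.
        import numpy as np

        def extract_fragments(image, size=8, stride=4):
            """Collect overlapping square patches from a 2-D grayscale image."""
            h, w = image.shape
            return [image[i:i + size, j:j + size]
                    for i in range(0, h - size + 1, stride)
                    for j in range(0, w - size + 1, stride)]

        def detected(fragment, image, threshold=0.7):
            """Crude detector: does any image patch correlate with the fragment above threshold?"""
            frag = (fragment - fragment.mean()) / (fragment.std() + 1e-8)
            for patch in extract_fragments(image, size=fragment.shape[0], stride=2):
                p = (patch - patch.mean()) / (patch.std() + 1e-8)
                if np.mean(frag * p) > threshold:
                    return True
            return False

        def diagnosticity(fragment, positives, negatives):
            """Simplified proxy: detection-rate difference between the two image classes."""
            hit = np.mean([detected(fragment, im) for im in positives])
            fa = np.mean([detected(fragment, im) for im in negatives])
            return hit - fa

        rng = np.random.default_rng(0)
        faces = [rng.random((32, 32)) for _ in range(5)]       # stand-ins for face images
        nonfaces = [rng.random((32, 32)) for _ in range(5)]    # stand-ins for non-face images
        fragment = faces[0][8:16, 8:16]
        print(diagnosticity(fragment, faces, nonfaces))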

    Crayfish Recognize the Faces of Fight Opponents

    The capacity to associate stimuli underlies many cognitive abilities, including recognition, in humans and other animals. Vertebrates process different categories of information separately and then reassemble the distilled information for unique identification, storage and recall. Invertebrates have fewer neural networks and fewer neural processing options, so study of their behavior may reveal underlying mechanisms still not fully understood for any animal. Some invertebrates, such as bees and wasps, form complex social colonies and are capable of visual memory. This ability would not be predicted in species that interact in random pairs without strong social cohesion, such as crayfish. Crayfish have chemical memory, but the extent to which they remember visual features is unknown. Here we demonstrate that the crayfish Cherax destructor is capable of visual recognition of individuals. The simplicity of their interactions allowed us to examine the behavior and some characteristics of the visual features involved. We showed that facial features are learned during face-to-face fights, that highly variable cues are used, that the type of variability is important, and that the learning is context-dependent. We also tested whether it is possible to engineer false identifications and whether animals can distinguish between twin opponents.

    How Can Selection of Biologically Inspired Features Improve the Performance of a Robust Object Recognition Model?

    Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models. Most of these models try to emulate the behavior of this remarkable system. The human visual system hierarchically recognizes objects in several processing stages. Along these stages, a set of features of increasing complexity is extracted by different parts of the visual system. Elementary features like bars and edges are processed in earlier levels of the visual pathway, and the further one goes up this pathway, the more complex the features that are represented. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, called patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and play an important role in object recognition. In the original model, these patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminative patches that may eventually reduce performance. In the proposed model, we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than randomly selected ones. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.
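
    As a minimal sketch of what evolutionary patch selection can look like (the population size, mutation rate, stand-in data, and fitness function below are assumptions for illustration, not the paper's actual operators or features): each individual is a binary mask over candidate patches, and fitness is the cross-validated accuracy of a classifier that only sees the selected patch responses.

        # Toy mutation-only genetic algorithm for selecting informative patches.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))        # stand-in: responses of 50 candidate patches
        y = rng.integers(0, 2, size=120)      # stand-in: object-category labels

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            return cross_val_score(LinearSVC(), X[:, mask.astype(bool)], y, cv=3).mean()

        pop = rng.integers(0, 2, size=(20, X.shape[1]))        # initial random masks
        for generation in range(15):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]            # keep the best half
            children = parents[rng.integers(0, 10, size=10)].copy()
            flips = rng.random(children.shape) < 0.05          # mutate: flip ~5% of bits
            children[flips] ^= 1
            pop = np.vstack([parents, children])

        best_mask = pop[np.argmax([fitness(m) for m in pop])]  # selected, informative patches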

    From upright to upside-down presentation: A spatio-temporal ERP study of the parametric effect of rotation on face and house processing

    Background: While there is general agreement that picture-plane inversion is more detrimental to face processing than to the processing of other seemingly complex visual objects, the origin of this effect is still largely debated. Here, we address the question of whether face inversion reflects a quantitative or a qualitative change in processing mode by investigating how event-related potential (ERP) responses change with picture-plane rotation of face and house pictures. Thorough analyses of topography (scalp current density maps, SCD) and dipole source modeling were also conducted. Results: We find that whilst stimulus orientation affected participants' response latencies for face and house decisions in a similar fashion, only the ERPs in the N170 latency range were modulated by picture-plane rotation of faces. The pattern of N170 amplitude and latency enhancement to misrotated faces displayed a curvilinear shape, with an almost linear increase for rotations from 0° to 90° and a dip at 112.5° up to 180° rotations. A similar discontinuity was also described for SCD occipito-temporal and temporal current foci, with no changes in topographic distribution, suggesting that upright and misrotated faces activated similar brain sources. This was confirmed by dipole source analyses showing the involvement of bilateral sources in the fusiform and middle occipital gyri, the activity of which was differentially affected by face rotation. Conclusion: Our N170 findings provide support for both the quantitative and qualitative accounts of face rotation effects. Although the qualitative account predicted the curvilinear shape of the N170 modulations by face misrotation, the topographical and source modeling findings suggest that the same brain regions, and thus the same mechanisms, are probably at work when processing upright and rotated faces. Taken collectively, our results indicate that the same processing mechanisms may be involved across the whole range of face orientations, but would operate in a non-linear fashion. Finally, the response tuning of the N170 to rotated faces extends previous reports and further demonstrates that face inversion affects perceptual analyses of faces, which is reflected within the time range of the N170 component.