856 research outputs found

    Improving proactive decision making with object trend displays

    Operators of dynamic systems often use time-series data to support their diagnostic and proactive decision-making. Those data have traditionally been displayed in the form of separate trend charts, for example, line graphs of pressure and temperature over time. Configural object displays are a widely advocated approach to the visual integration of information, yet they have been applied only rarely to time-series data. One example was the 'time tunnel' format, but its benefits were equivocal, seemingly compromised by its graphical complexity. There is therefore a need to investigate other graphical forms for object displays of time-series data. This research will require a microworld representing a knowledge-rich task domain that is accessible to multiple participants (the nuclear power plant simulation used in the time tunnel display studies required participants to have 20 hours of experience with the system). We report a design for such a microworld that adopts the domain of financial control of a business, where decisions need to be made about the pricing of products to optimize returns in a changing and sometimes volatile market. Alternative visual displays of the essential time-series data for this domain are possible, and whilst the decision making is knowledge rich, involving reasoning about high-level relationships, pilot tests showed that it is accessible to participants with only moderate training.
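
    The Python sketch below is only an illustration of the kind of microworld described above, not the authors' actual simulation: a volatile market in which demand drifts over time and a pricing decision determines the returns that trend or object displays would need to convey. All names and parameter values are hypothetical.

    import numpy as np

    # Illustrative sketch only (not the authors' microworld): a volatile market in
    # which demand drifts over time and a pricing decision determines the returns
    # that separate trend charts or a configural object display would present.
    rng = np.random.default_rng(0)

    n_steps = 100
    shocks = rng.normal(0.0, 8.0, n_steps)          # routine market fluctuations
    shocks[rng.random(n_steps) < 0.05] *= 4.0       # occasional volatile episodes
    demand = np.clip(100.0 + np.cumsum(shocks), 10.0, None)

    price = 12.0                                    # a candidate pricing decision
    unit_cost = 7.0
    units_sold = demand * np.exp(-0.08 * (price - 10.0))   # price-sensitive demand
    returns = (price - unit_cost) * units_sold

    # demand, units_sold and returns are the essential time-series data that the
    # alternative display formats would visualize for the decision maker.
    print(returns[:5].round(1))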

    Configural Displays: The effects of salience on multi-level data extraction

    Displays are a useful tool for helping users and operators understand information quickly. Configural displays are effective in supporting focused and divided attention tasks through the use of emergent features. Emergent features are highly salient and are generally used to support divided attention tasks. However, because of the salience of emergent features, a potential performance cost to focused attention tasks arises with configural displays. To address this cost, semantic mapping has been used to map salience techniques to the information needed by focused attention tasks in order to increase its salience (Bennett & Walters, 2001; Bennett et al., 2000). Semantic mapping is the process of mapping the domain constraints to the display, which in turn is mapped to the user's capabilities and limitations for understanding that domain data. The objective of this dissertation is to extend the use of semantic mapping to address potential performance costs of configural displays for hierarchical domains, using the scenario-based training (SBT) instructor domain. Two studies were conducted to examine the effects of salience application and salience type on data-extraction accuracy and response-time performance on low-level, mid-level, and high-level tasks and a remediation task. The first study examined the effects of one salience technique mapped to the display. It employed a 2 (low or mid application) x 3 (baseline, color techniques, and alphanumeric techniques) mixed-model design in which 63 participants completed 3 blocks of 32 trials each, using displays with the salience techniques mapped to either low- or mid-level data. Results from the first study showed that salience type had a significant impact on multi-level data extraction performance, but no interactions were found. The second study changed the manipulation of application and mapped two salience techniques to the display at the same time, using either the same technique or a combination of different techniques. The same experimental design was used, and 65 participants completed study 2. Results of study 2 showed that different application resulted in greater improvements in performance and that specific salience combinations were found to better support data extraction performance. Across-study analyses were also performed and revealed that more salience is not better than less salience; instead, it is the specific mapping of salience type and application that improves performance the most. Overall, these findings have major implications for theories of semantic mapping, attention and performance, and display design for hierarchical domains.

    Cognitive representation of facial asymmetry

    The human face displays mild asymmetry, with measurements of facial structure differing from left to right of the meridian by an average of three percent. Presently this source of variation is of theoretical interest primarily to researchers studying the perception of beauty, but a very limited amount of research has addressed the question of how this variation contributes to the cognitive processes underlying face recognition. This is surprising given that measurement of facial asymmetry can reliably distinguish between even the most similar of faces. Furthermore, brain regions responsible for symmetry detection support face-processing regions, and detection of symmetry is superior in upright faces relative to inverted and contrast-reversed face stimuli. In addition, facial asymmetry provides a useful biometric for automatic face recognition systems, and understanding the contribution of facial asymmetry to human face recognition may therefore inform the development of these systems. In this thesis the extent to which facial asymmetry is implicated in the process of recognition by human participants is quantified. By measuring the effect of left-right reversal on various face-processing tasks, the degree to which facial asymmetry is represented in memory is investigated. Marginal sensitivity to mirror reversal is demonstrated in a number of instances, and it is therefore concluded that cognitive representations of faces specify structural asymmetry. Reversal effects are typically slight, however, and on a number of occasions no reliable effect of this stimulus manipulation is detected. It is likely that a general tendency to treat mirror reversals as equivalent stimuli, in addition to an inability to recall the lateral orientation of objects from memory, somewhat obscures the effect of reversal. The findings are discussed in the context of existing literature examining the way in which faces are cognitively represented.
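
    As a rough illustration of the left-right reversal manipulation discussed above, the Python sketch below flips a grayscale face image about its vertical midline and computes a crude pixel-based asymmetry index. The image is synthetic and the index is only a stand-in for the structural measurements used in the thesis.

    import numpy as np

    # Illustrative sketch only: left-right reversal of a face image and a crude
    # asymmetry index, assuming a grayscale image roughly centred on the facial
    # meridian (the thesis measures structural asymmetry, not pixel differences).
    def mirror_reverse(face):
        """Flip the image about its vertical midline."""
        return face[:, ::-1]

    def asymmetry_index(face):
        """Mean absolute difference between the face and its mirror image,
        expressed as a proportion of the mean intensity."""
        mirrored = mirror_reverse(face)
        return np.abs(face - mirrored).mean() / face.mean()

    # Toy example: a synthetic 'face' with a slightly brighter left half.
    face = np.full((128, 128), 0.5)
    face[:, :64] += 0.03            # mild left-right intensity asymmetry
    print(round(asymmetry_index(face), 3))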

    Spatial contrast sensitivity in adolescents with autism spectrum disorders

    Adolescents with autism spectrum disorders (ASD) and typically developing (TD) controls underwent a rigorous psychophysical assessment that measured contrast sensitivity to seven spatial frequencies (0.5-20 cycles/degree). A contrast sensitivity function (CSF) was then fitted for each participant, from which four measures were obtained: visual acuity, peak spatial frequency, peak contrast sensitivity, and contrast sensitivity at a low spatial frequency. There were no group differences on any of the four CSF measures, indicating no differential spatial frequency processing in ASD. Although it has been suggested that detail-oriented visual perception in individuals with ASD may be a result of differential sensitivities to low versus high spatial frequencies, the current study finds no evidence to support this hypothesis.
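
    The sketch below shows, under assumptions, how a contrast sensitivity function could be fitted to sensitivities at seven spatial frequencies and the four summary measures read off. The log-parabola model and the data values are illustrative and are not taken from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hedged sketch: fit a log-parabola CSF (one common parameterisation; the
    # study's exact model is not specified here) to sensitivities measured at
    # seven spatial frequencies, then read off the four summary measures.
    def log_parabola_csf(freq, peak_sens, peak_freq, bandwidth):
        """Contrast sensitivity as a log-parabola over log spatial frequency."""
        return peak_sens * 10 ** (-((np.log10(freq) - np.log10(peak_freq)) ** 2)
                                  / (2 * bandwidth ** 2))

    # Hypothetical data: sensitivity (1/threshold contrast) at 0.5-20 c/deg.
    freqs = np.array([0.5, 1, 2, 4, 8, 12, 20])
    sens = np.array([45, 90, 150, 160, 90, 40, 8])

    params, _ = curve_fit(log_parabola_csf, freqs, sens, p0=[150, 3, 0.5])
    peak_sens, peak_freq, bandwidth = params

    low_sf_sens = log_parabola_csf(0.5, *params)      # sensitivity at a low SF

    # Acuity estimate: highest frequency at which fitted sensitivity stays >= 1.
    fine_freqs = np.logspace(np.log10(0.5), np.log10(60), 2000)
    fitted = log_parabola_csf(fine_freqs, *params)
    acuity = fine_freqs[fitted >= 1].max()

    print(f"peak sensitivity {peak_sens:.0f} at {peak_freq:.1f} c/deg, "
          f"low-SF sensitivity {low_sf_sens:.0f}, acuity ~{acuity:.0f} c/deg")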

    Mind in Action: Action Representation and the Perception of Biological Motion

    The ability to understand and communicate about the actions of others is a fundamental aspect of our daily activity. How can we talk about what others are doing? What qualities do different actions have such that they cause us to see them as being different or similar? What is the connection between what we see and the development of concepts and words or expressions for the things that we see? To what extent can two different people see and talk about the same things? Is there a common basis for our perception, and is there then a common basis for the concepts we form and the way in which the concepts become lexicalized in language? The broad purpose of this thesis is to relate aspects of perception, categorization and language to action recognition and conceptualization. This is achieved by empirically demonstrating a prototype structure for action categories and by revealing the effect this structure has on language via the semantic organization of verbs for natural actions. The results also show that implicit access to categorical information can affect the perceptual processing of basic actions. These findings indicate that our understanding of human actions is guided by the activation of high-level information in the form of dynamic action templates or prototypes. More specifically, the first two empirical studies investigate the relation between perception and the hierarchical structure of action categories, i.e., subordinate, basic, and superordinate level action categories. Subjects generated lists of verbs based on perceptual criteria. Analyses based on multidimensional scaling showed a significant correlation for the semantic organization of a subset of the verbs for English- and Swedish-speaking subjects. Two additional experiments were performed in order to further determine the extent to which action categories exhibit graded structure, which would indicate the existence of prototypes for action categories. The results from typicality ratings and category verification showed that typicality judgments reliably predict category verification times for instances of different actions. Finally, the results from a repetition (short-term) priming paradigm suggest that high-level information about the categorical differences between actions can be implicitly activated and facilitates the later visual processing of displays of biological motion. This facilitation occurs for upright displays, but appears to be lacking for displays that are shown upside down. These results show that the implicit activation of information about action categories can play a critical role in the perception of human actions.
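
    To make the analysis style concrete, here is a hedged Python sketch of multidimensional scaling applied to verb dissimilarities, followed by a Mantel-style correlation between two language groups. The verbs and dissimilarity values are hypothetical and are not the thesis data.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import pearsonr
    from sklearn.manifold import MDS

    # Hedged sketch of the style of analysis described: MDS of verb
    # dissimilarities, then a correlation of the recovered semantic organisation
    # across language groups. All values below are hypothetical.
    verbs = ["walk", "run", "jump", "wave", "throw", "kick"]
    rng = np.random.default_rng(1)

    def symmetric_dissimilarity(matrix):
        d = (matrix + matrix.T) / 2           # symmetrise
        np.fill_diagonal(d, 0.0)
        return np.clip(d, 0.0, None)

    d_english = symmetric_dissimilarity(rng.random((len(verbs), len(verbs))))
    d_swedish = symmetric_dissimilarity(d_english + rng.normal(0, 0.05, d_english.shape))

    # Two-dimensional MDS configuration for each language group.
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    config_en = mds.fit_transform(d_english)
    config_sv = mds.fit_transform(d_swedish)

    # Compare the semantic organisation recovered for the two groups by
    # correlating the inter-verb distances in their MDS configurations.
    r, _ = pearsonr(pdist(config_en), pdist(config_sv))
    print(f"cross-language correlation of MDS distances: r = {r:.2f}")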

    Category bias in facial memory

    Existing knowledge has been shown to interact with episodic information in a variety of memory tasks. The present study examined a known bias due to existing knowledge in the context of memory for facial features. Specifically, we examined whether the category bias, a systematic error in remembering a target toward the prototypical location of its region, increased as a function of distance away from its prototypical location, and whether time and degree of distortion moderated the bias. We manipulated eye width along a horizontal axis to create a set of face stimuli. In Experiment 1, participants saw one face at a time and, after a short delay, were asked to reproduce the location of one of the eyes and complete a recognition task. In Experiment 2, we increased the delay from 2000 ms to 5000 ms. We hypothesized and found that bias towards the prototype increased for the moderately distorted face conditions; however, the decrease in bias in the highly distorted conditions was not statistically significant. Additionally, bias did not increase over time. We discuss our results in the context of Huttenlocher et al.'s (1991) category adjustment model, as well as the practical implications of our study in the field of eyewitness memory.
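
    The category adjustment model referenced above can be summarised in a few lines. The Python sketch below is a simplified rendering of that model, using hypothetical eye-width values and a hypothetical prototype weight, to show why the bias toward the prototype grows with distance from it.

    import numpy as np

    # Simplified sketch of the category adjustment model (Huttenlocher et al.,
    # 1991): the remembered value is a weighted mixture of the fine-grained
    # memory of the stimulus and the category prototype, so bias grows with
    # distance from the prototype. Values and weight below are hypothetical.
    prototype = 50.0            # prototypical eye-width position (arbitrary units)
    weight = 0.25               # weight on the prototype (larger with delay or uncertainty)

    def remembered(stimulus, weight, prototype):
        """Weighted average of the stimulus trace and the category prototype."""
        return (1 - weight) * stimulus + weight * prototype

    stimuli = np.array([50.0, 55.0, 60.0, 70.0])       # increasingly distorted eye widths
    estimates = remembered(stimuli, weight, prototype)
    bias = estimates - stimuli                          # signed error toward the prototype

    for s, b in zip(stimuli, bias):
        print(f"stimulus {s:5.1f} -> bias toward prototype {b:+5.2f}")
    # Bias is proportional to (prototype - stimulus): zero at the prototype and
    # increasingly large as the eye width moves farther away from it.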

    Experience-dependent reshaping of body processing: from perception to clinical implications

    Starting from the moment we come into the world, we are compelled to pay close attention to the body and its representation, which can be considered as a set of cognitive structures that have the function of tracing and coding our state (de Vignemont, 2010). However, we cannot consider the body apart from its image, which can determine the way we emotionally perceive ourselves and other people as well as the way we experience the world. With a brief look at the body, we can establish a person’s identity, picking up distinctive elements such as age or gender; further, by means of body posture and movements we can understand the affective state of others and appropriately shape our social interaction and communication. Several socially significant cues can be detected and conveyed through the body, but this thesis principally aims to increase our knowledge of how we perceive gender from bodily features and shape. Specifically, I report on a series of behavioral studies designed to investigate the influence of visual experience on the detection of the gender dimension, considering the contribution of brain networks which may also have a role in the development of mental disorders related to body misperception (i.e. Eating Disorders; ED). In the first chapter, I provide evidence for the interdependence of morphologic and dynamic cues in shaping gender judgment. By manipulating various characteristics of virtual-human body stimuli, the experiment I carried out demonstrates the association between stillness and femininity ratings, addressing the evolutionary meaning of sexual selection and the influence of cultural norms (D’Argenio et al., 2020). In the second chapter, I present a study that seeks to define the relative roles of the parvo- and magnocellular visual streams in the identification of both morphologic and dynamic cues of the body. For these experiments, I used the differential tuning of the two streams to low (LSF) and high spatial frequencies (HSF) and tested how the processing of body gender and postures is affected by filtering images to keep only the LSF or HSF content (D’Argenio et al., submitted). The third chapter is dedicated to a series of experiments aimed at understanding how gender perception can be biased by previous exposure to specific body models. I utilized a visual adaptation paradigm to investigate the mechanisms that drive the observer’s perception toward a masculinity or femininity judgement (D’Argenio et al., 2021) and manipulated the spatial frequency content of the bodies in order to account for the contribution of the parvo- and magnocellular systems in this process. Finally, in the last two chapters, I briefly report preliminary results from two visual adaptation studies. The first, described in the fourth chapter, explored the role of cortical connections in body gender adaptation by means of Transcranial Magnetic Stimulation (TMS), with the aim of investigating the neural correlates of dysfunctional body perception. The second represents an attempt to explain, at least in part, body misperception disorders by applying adaptation paradigms to an ED clinical population. Results are discussed in the fifth chapter.
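
    As an illustration of the LSF/HSF manipulation described in the second and third chapters, the Python sketch below splits an image into low- and high-spatial-frequency versions with a Gaussian filter. The stand-in image and the sigma value are assumptions, not the thesis's actual stimuli or cut-offs.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Hedged sketch (not the thesis's exact procedure): produce low- (LSF) and
    # high-spatial-frequency (HSF) versions of an image with a Gaussian filter.
    rng = np.random.default_rng(0)
    image = rng.random((256, 256))          # stand-in for a grayscale body image

    sigma = 8.0                             # larger sigma -> lower cut-off frequency
    lsf_image = gaussian_filter(image, sigma=sigma)     # low-pass: coarse shape/posture
    hsf_image = image - lsf_image                       # residual high frequencies: fine detail

    # Re-centre the HSF image around the original mean so it can be displayed.
    hsf_display = hsf_image - hsf_image.mean() + image.mean()

    print(f"LSF std {lsf_image.std():.3f}, HSF std {hsf_display.std():.3f}")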

    Contrast Negation Differentiates Visual Pathways Underlying Dynamic and Invariant Facial Processing

    Bruce and Young (1986) proposed a model for face processing that begins with structural encoding, followed by a split into two processing streams: one for the dynamic aspects of the face (e.g., facial expressions of emotion) and the other for the invariant aspects of the face (e.g., gender, identity). Yet how this is accomplished remains unclear. Here, we took a psychophysical approach using contrast negation to test the Bruce and Young model. Previous research suggests that contrast negation impairs processing of invariant features (e.g., gender) but not dynamic features (e.g., expression). In our first experiment, participants discriminated differences in gender and facial expressions of emotion in upright, inverted, and contrast-negated faces. Results revealed a profound impairment for contrast-negated gender discrimination, whereas expression discrimination remained relatively robust to contrast negation. To test whether this differential effect occurs during perceptual encoding, we conducted three additional experiments in which we measured aftereffects following upright, inverted, or contrast-negated face adaptation for the same discrimination task as in the first experiment. Results showed a mild impairment with contrast negation during perceptual encoding for both gender and expression, followed by a marked gender-specific deficit during contrast-negated face discrimination. Taken together, our results suggest that there are shared neural mechanisms during perceptual encoding, and at least partially separate neural mechanisms during recognition and decision making, for dynamic and invariant facial-feature processing.
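
    For readers unfamiliar with the manipulations, the Python sketch below illustrates contrast negation (reversing luminance polarity) alongside inversion (turning the image upside down), assuming a grayscale face image with values in [0, 1]. The image is a random stand-in, not a stimulus from the study.

    import numpy as np

    # Hedged sketch of the two stimulus manipulations named in the abstract,
    # assuming a grayscale face image with values in [0, 1].
    rng = np.random.default_rng(0)
    face = rng.random((128, 128))           # stand-in for a grayscale face image

    def contrast_negate(image):
        """Reverse luminance polarity about the midpoint (photographic negative)."""
        return 1.0 - image

    def invert(image):
        """Turn the image upside down (flip about the horizontal axis)."""
        return image[::-1, :]

    negated = contrast_negate(face)
    inverted = invert(face)

    # Contrast negation preserves geometry but reverses shading and pigmentation
    # cues, which is why it is used to probe surface-based face processing.
    print(f"mean after negation {negated.mean():.3f}, after inversion {inverted.mean():.3f}")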