
    Residence time distribution and hold-up in a cocurrent upflow packed bed reactor at elevated pressure

    The residence time distribution in the liquid phase was measured in a cocurrent upflow packed bed reactor for the methanol-hydrogen system at low Reynolds numbers and elevated pressure. The plug flow with axial dispersion model was used to describe mixing in the system. The imperfect pulse method was used to measure the system response to a tracer pulse input, and the model parameters were calculated with the weighted moments method; the influence of the weighting factor was investigated. The experimental outputs and the theoretical outputs calculated by convolution agreed very well. Different correlations for the Bodenstein number and the liquid hold-up were compared, and the optimal correlation was selected for each parameter. A comparison between the ordinary moments and the weighted moments methods led to the conclusion that the latter is superior with respect to the accuracy of the estimated parameters and is therefore strongly recommended.
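    As an illustration of the moment-based parameter estimation described above, the sketch below computes exponentially weighted moments of inlet and outlet tracer curves and backs out a mean residence time and Bodenstein number from the moment differences. It is a minimal sketch under stated assumptions, not the paper's actual procedure: the function names are hypothetical, and the relations used (tau = mu_out - mu_in; var_out - var_in = 2*tau^2/Bo) are the ordinary-moments, large-Bo approximations for the axial dispersion model. The weighted moments method of the abstract applies a nonzero weighting factor s and correspondingly modified relations.

        import numpy as np

        def weighted_moments(t, c, s=0.0):
            # First two normalized moments of a tracer curve c(t), with the
            # integrand weighted by exp(-s*t); s = 0 gives the ordinary moments.
            w = c * np.exp(-s * t)
            m0 = np.trapz(w, t)
            mean = np.trapz(t * w, t) / m0
            var = np.trapz((t - mean) ** 2 * w, t) / m0
            return mean, var

        def axial_dispersion_parameters(t, c_in, c_out):
            # Mean residence time and Bodenstein number estimated from the
            # difference of ordinary inlet/outlet moments
            # (open-vessel, large-Bo approximation).
            mu_in, var_in = weighted_moments(t, c_in)
            mu_out, var_out = weighted_moments(t, c_out)
            tau = mu_out - mu_in                      # mean residence time
            bo = 2.0 * tau ** 2 / (var_out - var_in)  # sigma_theta^2 ~ 2/Bo
            return tau, bo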

    Looking Back From the Future: Perspective Taking in Virtual Reality Increases Future Self-Continuity

    In the current study, we tested a novel perspective-taking exercise aimed at increasing the connection participants felt toward their future self, i.e., future self-continuity. Participants role-played as their successful future self and answered questions about what it feels like to have become their future self and about the path to get there. The exercise was conducted both in a virtual reality environment and in vivo to investigate the possible added value of the virtual environment with respect to improved focus, perspective taking, and effectiveness for participants with less imagination. Results show that the perspective-taking exercise in virtual reality substantially increased all four domains of future self-continuity (connectedness, similarity, vividness, and liking), while the in vivo equivalent increased only liking and vividness. Although connectedness and similarity differed directionally, but not significantly, between the virtual and in vivo environments, neither focus, perspective taking, nor individual differences in imagination could explain this difference, which suggests a small but non-significant placebo effect of the virtual reality environment. However, lower baseline vividness in the in vivo group may explain this difference, providing preliminary evidence that the connectedness and similarity domains depend on baseline vividness. These findings show that the perspective-taking exercise in a VR environment can reliably increase the future self-continuity domains.

    Emotional Voice and Emotional Body Postures Influence Each Other Independently of Visual Awareness

    Multisensory integration may occur independently of visual attention, as previously shown with compound face-voice stimuli. In two experiments, we investigated whether the perception of whole-body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment, participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset asynchrony between target and mask varied from −50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expression influences audiovisual perception independently of the visibility of the stimuli. In the second experiment, participants categorized emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside of, and independently of, visual awareness.

    Visual recalibration and selective adaptation in auditory-visual speech perception: Contrasting build-up courses

    Exposure to incongruent auditory and visual speech produces both visual recalibration and selective adaptation of auditory speech identification. In an earlier study, exposure to an ambiguous auditory utterance (intermediate between /aba/ and /ada/) dubbed onto the video of a face articulating either /aba/ or /ada/ recalibrated the perceived identity of auditory targets in the direction of the visual component, while exposure to congruent non-ambiguous /aba/ or /ada/ pairs created selective adaptation, i.e. a shift of perceived identity in the opposite direction [Bertelson, P., Vroomen, J., & de Gelder, B. (2003). Visual recalibration of auditory speech identification: a McGurk aftereffect. Psychological Science, 14, 592-597]. Here, we examined the build-up course of the after-effects produced by the same two types of bimodal adapters over a range of 1-256 presentations. The (negative) after-effects of non-ambiguous congruent adapters increased monotonically across that range, while those of ambiguous incongruent adapters followed a curvilinear course, going up and then down with increasing exposure. This pattern is discussed in terms of an asynchronous interaction between recalibration and selective adaptation processes.