22 research outputs found

    Robust Camera Calibration and Evaluation Procedure Based on Images Rectification and 3D Reconstruction

    This paper presents a robust camera calibration algorithm based on contour matching of a known pattern object. The method does not require a tedious selection of particular pattern points. We introduce two versions of our algorithm, depending on whether a single calibration image or several are available. We also propose an evaluation procedure that can be applied to any calibration method for stereo systems with an arbitrary number of cameras. We apply this evaluation framework to three camera calibration techniques: our proposed robust algorithm, the modified Zhang algorithm implemented by J. Bouguet, and the Faugeras-Toscani method. Experiments show that our robust approach compares very favourably with the two other methods. The proposed evaluation procedure provides a simple and interactive tool for evaluating any camera calibration method.
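
    As background for readers less familiar with calibration pipelines, the sketch below shows a conventional point-based baseline using OpenCV; it is not the contour-matching algorithm described in the abstract, and the chessboard geometry and file names are illustrative assumptions.

```python
# Hedged sketch: a standard point-based calibration baseline (OpenCV), shown only to
# illustrate the kind of pipeline the contour-matching method above avoids.
# The 9x6 chessboard geometry and file names are illustrative assumptions.
import glob

import cv2
import numpy as np

pattern_size = (9, 6)                       # inner corners of an assumed chessboard target
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):      # hypothetical calibration images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics, distortion and per-view extrinsics from the point correspondences
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error (pixels):", rms)
```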

    Auditory-Visual Aversive Stimuli Modulate the Conscious Experience of Fear

    The impact of multisensory information on affect remains relatively unexplored. In this study, we investigated whether the auditory-visual presentation of aversive stimuli influences the experience of fear. We used the advantages of virtual reality to manipulate multisensory presentation and to display potentially fearful dog stimuli embedded in a natural context. We manipulated the affective reactions evoked by the dog stimuli by recruiting two groups of participants: dog-fearful and non-fearful participants. Sensitivity to dog fear was assessed psychometrically with a questionnaire, and at the behavioral and subjective levels with a Behavioral Avoidance Test (BAT). Participants navigated virtual environments in which they encountered virtual dog stimuli presented through the auditory channel, the visual channel, or both, and were asked to report their fear using Subjective Units of Distress. We compared the fear elicited by unimodal (visual or auditory) and bimodal (auditory-visual) dog stimuli. Both dog-fearful and non-fearful participants reported more fear in response to bimodal audiovisual than to unimodal presentation of dog stimuli. These results suggest that fear is more intense when affective information is processed via multiple sensory pathways, which might be due to cross-modal potentiation. Our findings have implications for virtual reality-based therapy of phobias: therapies could be refined and improved by incorporating and manipulating the multisensory presentation of feared situations.

    Is it possible to use highly realistic virtual reality in the elderly? A feasibility study with image-based rendering

    Background: Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach, and secondly to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories.

    Methods: Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints, with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease, were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant's home city (FamPhoto), and the last two conditions displayed VR, i.e., a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice), and an unknown image-based virtual environment (UnknoIBVE) captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires to assess task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also administered after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task, and the quality of recollection was assessed using the "remember/know" procedure.

    Results: All subjects completed the experiment. Sense of security and fatigue were not significantly different between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness across the VR conditions. VR stimulated autobiographical memory, as demonstrated by the increased total number of responses on the autobiographical fluency task and the increased number of conscious recollections of memories for familiar versus unknown scenes (P<0.01).

    Conclusion: The study indicates that VR using the FamIBVE system is well tolerated by the elderly. VR can also stimulate recollection of autobiographical memories and convey familiarity of a given scene, which is an essential requirement for the use of VR in reminiscence therapy.

    Using automated speech analysis and facial emotion measurements on videos to evaluate the effects of relaxation devices: a pilot study

    Rapid relaxation installations intended to reduce stress are appearing more and more in public and work places, yet the effects of such devices on physiological and psychological parameters have not been scientifically tested. This pilot study (N=40) evaluates the variations of vocal speech and facial emotion parameters in 3-minute videos of participants recorded just before and after relaxation, across four groups: three using different rapid (15-minute) sensory immersion relaxation devices and a control group using no device. Vocal speech parameters included sound duration, mean pause duration, sound duration ratio, mean vocal frequency (F0), standard deviation of F0, minimum and maximum of F0, jitter, and shimmer. Facial emotion analysis included neutral, happy, sad, surprised, angry, disgusted, scared, contempt, valence, and arousal. The objective of this study is to determine which parameters of automated vocal and facial emotion analysis could be of use for evaluating the relaxation effect of different devices, and to measure their variations in the different experimental groups. We identified significant parameters that can be of use for evaluating rapid relaxation devices, particularly voice prosody and minimum vocal frequency, and some facial emotion parameters such as happy, sad, valence, and arousal. These parameters allowed us to discriminate distinct effects of the different devices used: in G1 (control) and G2 (spatialized sounds), we observed a slowdown in voice prosody; in G3 (Be-Breathe), a decrease in minimum vocal frequency and an increase of arousal; while in G4 (3D-video) we found an increase in facial emotion valence (happy increasing and sad decreasing). The other parameters tested were not affected by relaxation.
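
    As an illustration of how a few of the listed vocal parameters could be computed, the sketch below uses librosa's pYIN pitch tracker to derive F0 statistics and a rough voiced-frame ratio; the study's actual toolchain is not specified here, jitter and shimmer (typically obtained with Praat-style tools) are omitted, and the file name and pitch range are assumptions.

```python
# Hedged sketch: extracting a few of the vocal parameters listed above (mean, SD,
# min, max of F0) from a recording with librosa's pYIN tracker. File name, pitch
# range and the voiced-frame proxy are illustrative assumptions, not the study's
# actual processing chain.
import librosa
import numpy as np

y, sr = librosa.load("participant_before.wav", sr=None)   # hypothetical recording
f0, voiced_flag, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)

voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]
print("mean F0 (Hz):", voiced_f0.mean())
print("SD F0 (Hz):  ", voiced_f0.std())
print("min F0 (Hz): ", voiced_f0.min())
print("max F0 (Hz): ", voiced_f0.max())

# A crude frame-level proxy for the sound duration ratio (voiced frames / all frames)
print("voiced-frame ratio:", voiced_flag.mean())
```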

    Camera Calibration Methods Evaluation Procedure for Images Rectification and 3D Reconstruction

    Camera calibration and image rectification are two necessary steps in most 3D reconstruction methods based on image acquisition. This paper proposes an evaluation procedure for camera calibration methods in the case of 3D reconstruction from rectified multi-stereo images. The evaluation is based on the accuracy of the rectification and of the 3D reconstruction, which are directly related to the calibration precision. Three methods are compared: Faugeras-Toscani, Zhang, and a robust calibration algorithm. The procedure can be applied to computer vision systems with an arbitrary number of cameras and to any other calibration method. We show that, although the three methods provide significantly different estimates of the intrinsic and stereo system parameters, the rectified images of the planar target used for evaluation are relatively coherent and lead to similar 3D reconstruction errors.
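
    As one concrete way to quantify such a reconstruction error on a planar target, the sketch below triangulates matched stereo points and reports the RMS distance to their best-fit plane; this is an illustrative metric under assumed inputs (projection matrices P1, P2 and matched points pts1, pts2), not necessarily the exact measure used in the paper.

```python
# Hedged sketch: one way to quantify a 3D reconstruction error on a planar target.
# Matched points from a rectified stereo pair are triangulated and the RMS distance
# of the reconstructed points to their best-fit plane is reported. Inputs are assumed.
import cv2
import numpy as np

def planar_reconstruction_rms(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices; pts1, pts2: 2xN matched image points."""
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous points
    X = (X_h[:3] / X_h[3]).T                              # Nx3 Euclidean points

    # Best-fit plane through the reconstructed points (least squares via SVD)
    centroid = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - centroid)
    normal = vt[-1]                                       # direction of least variance

    distances = (X - centroid) @ normal                   # signed point-plane distances
    return np.sqrt(np.mean(distances ** 2))
```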

    Multi-view analysis of 3D objects for collaborative interactions

    The objective of this thesis is the 3D reconstruction of real objects for collaborative interactions. Within this framework, the goal is to reconstruct an object from a small number of calibrated views (8 to 12 images), and then insert the obtained numerical models into shared collaborative environments for further visualization and manipulation. The thesis is organized as follows.

    In the general introduction, we present the context of the research, the acquisition system developed within our laboratory, and the relationship between the calibration of this system, the image acquisition process, and the collaborative interactions with the reconstructed objects.

    The first chapter presents some aspects of 3D geometry applied to computer vision: projective geometry, the classical linear and non-linear camera models, and some results from stereoscopic vision used in the rest of the document.

    The second chapter is dedicated to camera calibration. After a state of the art of camera calibration methods, we propose a robust camera calibration method based on the robust estimation of the perspective projection matrix. The calibration pattern used is a cube with faces of different colours. The proposed calibration algorithm uses one image per camera; however, in order to increase the accuracy of the camera parameter estimation, multiple images can also be used. Our method yields a robust estimation of the camera parameters while minimizing the amount of user interaction required. In order to validate the method, we introduce a set of new objective criteria for the evaluation and comparison of camera calibration methods. The proposed criteria are based on rectification and 3D reconstruction of an unknown coplanar point set, a virtual pattern, and the re-estimation of the known parameters of stereoscopic systems. Our calibration method is validated according to the proposed criteria.

    The third chapter tackles the 3D reconstruction of real objects. After a comprehensive state of the art of 3D reconstruction methods, we present our multiresolution 3D reconstruction algorithm, which is adapted to collaborative interaction tasks. Our contributions specifically concern new algorithms for voxel visibility and photo-consistency estimation. The proposed 3D reconstruction method is then tested and validated on a set of images of real objects from existing benchmark databases.

    The fourth chapter handles collaborative interactions with 3D objects. First, the calibration of our acquisition system composed of eight cameras is presented. Experimental results concerning the 3D reconstruction of objects available in our laboratory are then presented. Finally, collaborative interactions with the reconstructed objects are illustrated within the framework of three existing interfaces from the France Telecom R&D laboratories: MOWGLI, DigiTable and Spin3D. A concluding section summarizes the contributions of this thesis and opens perspectives for future work.
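
    As background to the robust perspective projection matrix estimation mentioned in the second chapter, the sketch below shows the classical linear (DLT-style) estimate from known 3D-2D correspondences; variable names are illustrative, and no data normalisation or robust weighting is included.

```python
# Hedged sketch: classical linear (DLT-style) estimation of the 3x4 perspective
# projection matrix from known 3D-2D correspondences. This is textbook background,
# not the thesis's robust estimator; names are illustrative.
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """points_3d: Nx3 pattern points; points_2d: Nx2 image points; N >= 6."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)

    # The solution (up to scale) is the right singular vector associated with
    # the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```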

    Reminiscence Therapy using Image-Based Rendering in VR

    We present a novel VR solution for Reminiscence Therapy (RT), developed jointly by a group of memory clinicians and computer scientists. RT involves the discussion of past activities, events or experiences with others, often with the aid of tangible props which are familiar items from the past; it is a popular intervention in dementia care. We introduce an immersive VR system designed for RT, which allows easy presentation of familiar environments. In particular, our system supports highly-realistic Image-Based Rendering in an immersive setting. To evaluate the effectiveness and utility of our system for RT, we perform a study with healthy elderly participants to test if our VR system can help with the generation of autobiographical memories. We adapt a verbal Autobiographical Fluency protocol to our VR context, in which elderly participants are asked to generate memories based on images they are shown. We compare the use of our image-based system for an unknown and a familiar environment. The results of our study show that the number of memories generated for a familiar environment is higher than that for an unknown environment using our system. This indicates that IBR can convey familiarity of a given scene, which is an essential requirement for the use of VR in RT. Our results also show that our system is as effective as traditional RT protocols, while acceptability and motivation scores demonstrate that our system is well tolerated by elderly participants.