10 research outputs found

    A comparative study of the sense of presence and anxiety in an invisible marker versus a marker Augmented Reality system for the treatment of phobia towards small animals

    Phobia towards small animals has been treated using exposure in vivo and virtual reality. Recently, augmented reality (AR) has also been presented as a suitable tool. The first AR system developed for this purpose used visible markers for tracking. In this first system, the presence of visible markers warns the user of the appearance of animals. To avoid this warning, this paper presents a second version in which the markers are invisible. First, the technical characteristics of a prototype are described. Second, a comparative study of the sense of presence and anxiety in a non-phobic population using the visible marker-tracking system and the invisible marker-tracking system is presented. Twenty-four participants used the two systems. The participants were asked to rate their anxiety level (from 0 to 10) at 8 different moments. Immediately after their experience, the participants were given the SUS questionnaire to assess their subjective sense of presence. The results indicate that the invisible marker-tracking system induces a similar or higher sense of presence than the visible marker-tracking system, and it also provokes a similar or higher level of anxiety in important steps for therapy. Moreover, 83.33% of the participants reported that they did not have the same sensations/surprise using the two systems, and they scored the advantage of using the invisible marker-tracking system (IMARS) at 5.19 +/- 2.25 (on a scale from 1 to 10). However, if only the group with higher fear levels is considered, 100% of the participants reported that they did not have the same sensations/surprise with the two systems, scoring the advantage of using IMARS at 6.38 +/- 1.60 (on a scale from 1 to 10). (C) 2011 Elsevier Ltd. All rights reserved.
    Juan, M.; Joele, D. (2011). A comparative study of the sense of presence and anxiety in an invisible marker versus a marker Augmented Reality system for the treatment of phobia towards small animals. International Journal of Human-Computer Studies, 69(6), 440-453. doi:10.1016/j.ijhcs.2011.03.002
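
    A plausible way to analyse the paired anxiety ratings described above is a non-parametric paired test across participants; the sketch below is only an illustration of that kind of comparison (the ratings and the choice of the Wilcoxon signed-rank test are assumptions, not taken from the paper).

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical anxiety ratings (0-10) for the same participants at one of the
# 8 measurement moments, once per tracking system (values are made up).
visible_marker = np.array([3, 4, 2, 5, 6, 3, 4, 5])
invisible_marker = np.array([4, 5, 3, 6, 7, 5, 3, 6])

# Paired, non-parametric comparison: do the two systems provoke different
# anxiety levels in the same participants?
stat, p_value = wilcoxon(visible_marker, invisible_marker)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```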

    A Multi-view Pixel-wise Voting Network for 6DoF Pose Estimation

    6DoF pose estimation is an important task in Computer Vision, particularly for robotic and automotive applications. Many recent approaches successfully perform pose estimation on monocular images, which lack depth information. In this work, the potential of extending such methods to a multi-view setting is explored, in order to recover depth information from the geometrical relations between the views. In particular, two multi-view adaptations of a monocular pose estimator, PVNet, are developed: one combines the monocular results from the individual views, and the other modifies the original method to take the set of views directly as input. The new models are evaluated on the TOD transparent-object dataset and compared against the original PVNet implementation, a depth-based pose estimation method called DenseFusion, and the method proposed by the authors of the dataset, called Keypose. Experimental results show that integrating multi-view information significantly increases test accuracy and that both models outperform DenseFusion, while still being slightly surpassed by Keypose.
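
    One of the two adaptations above combines monocular PVNet results from the individual views. The paper's actual fusion scheme is not described in this abstract; the following is only a minimal sketch of one plausible approach, transforming each per-view estimate into a common world frame with known camera extrinsics and averaging (all names are illustrative).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_poses(object_in_cam, cam_in_world):
    """Fuse per-view 6DoF estimates of one object into a single world-frame pose.

    object_in_cam: list of (R_oc, t_oc) poses of the object in each camera frame
    cam_in_world:  list of (R_cw, t_cw) known extrinsics of each camera
    """
    rotations, translations = [], []
    for (R_oc, t_oc), (R_cw, t_cw) in zip(object_in_cam, cam_in_world):
        # Chain the transforms: object -> camera -> world
        rotations.append(R_cw @ R_oc)
        translations.append(R_cw @ t_oc + t_cw)

    # Translations: arithmetic mean; rotations: chordal mean via scipy
    t_mean = np.mean(translations, axis=0)
    R_mean = R.from_matrix(np.stack(rotations)).mean().as_matrix()
    return R_mean, t_mean
```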

    Dataset of Panoramic Images for People Tracking in Service Robotics

    In this thesis we provide a framework for constructing a guide robot for use in hospitals. The omnidirectional camera on the robot allows it to recognize and track the person who is following it. Furthermore, when directing the individual to their preferred location in the hospital, the robot must be aware of its surroundings and avoid collisions with other people or objects. To train and evaluate the robot's performance, we developed an auto-labeling framework for creating a dataset of panoramic videos captured by the robot's omnidirectional camera. We labeled each person in the video together with their real position in the robot's frame, enabling us to evaluate the accuracy of our tracking system and to guide the development of the robot's navigation algorithms. Our research expands on earlier work that established a framework for tracking individuals using omnidirectional cameras. By developing a benchmark dataset, we aim to contribute to ongoing work on improving the precision and dependability of these tracking systems, which is essential for creating effective guide robots in healthcare facilities. Our research has the potential to improve the patient experience and increase the efficiency of healthcare institutions by reducing staff time spent guiding patients through the facility.
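
    The auto-labeling framework above pairs each tracked person's position in the robot frame with pixel coordinates in the panoramic frames. As a rough illustration (the thesis's actual camera model is not given here, so the equirectangular projection and all names below are assumptions), such a projection could look like:

```python
import numpy as np

def robot_to_equirect(x, y, z, width, height):
    """Project a 3D point in the robot/camera frame (metres) to pixel
    coordinates in an equirectangular panorama of size width x height.

    Assumes the horizontal image axis spans azimuth [-pi, pi) and the
    vertical axis spans elevation [-pi/2, pi/2].
    """
    azimuth = np.arctan2(y, x)                 # rotation around the vertical axis
    elevation = np.arctan2(z, np.hypot(x, y))  # angle above the horizontal plane
    u = (azimuth + np.pi) / (2 * np.pi) * width
    v = (np.pi / 2 - elevation) / np.pi * height
    return int(u) % width, int(v)

# Hypothetical example: a person 2 m ahead and 1 m to the left of the robot,
# 0.5 m above the camera, labeled in a 1920 x 960 panorama.
print(robot_to_equirect(2.0, 1.0, 0.5, 1920, 960))
```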

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied / GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data and the time spent collecting it are thus reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed in tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while the number of tracking losses throughout the image sequence was reduced. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker also alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracies over ViSP. The novelty of this algorithm is the implementation of an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
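
    The second contribution's matching of linear features can be pictured as projecting 3D model edges into the image under a pose hypothesis and associating them with detected 2D lines. The sketch below is not the dissertation's algorithm; it is a minimal nearest-line association under assumed distance and orientation thresholds, with all names invented for illustration.

```python
import numpy as np

def project_points(K, R, t, pts_3d):
    """Project Nx3 world points into the image with intrinsics K and pose (R, t)."""
    cam = R @ pts_3d.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T              # Nx2 pixel coordinates

def match_model_lines(K, R, t, model_lines, image_lines,
                      max_dist=20.0, max_angle=np.deg2rad(10)):
    """Greedily associate projected 3D model line segments with detected 2D lines.

    model_lines: list of (P0, P1) 3D endpoints taken from the building model
    image_lines: list of (p0, p1) 2D endpoints from a line/edge detector
    Returns (model_idx, image_idx) pairs whose midpoints and orientations agree.
    """
    matches = []
    for i, (P0, P1) in enumerate(model_lines):
        q0, q1 = project_points(K, R, t, np.array([P0, P1]))
        mid = (q0 + q1) / 2
        ang = np.arctan2(*(q1 - q0)[::-1])
        best, best_d = None, max_dist
        for j, (p0, p1) in enumerate(image_lines):
            d = np.linalg.norm((p0 + p1) / 2 - mid)
            a = np.arctan2(*(p1 - p0)[::-1])
            d_ang = abs((ang - a + np.pi / 2) % np.pi - np.pi / 2)  # undirected lines
            if d < best_d and d_ang < max_angle:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
    return matches
```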

    Video indexing with combined tracking and object recognition for improved object understanding in scenes

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. ).
    Automatic understanding of video content is a problem which grows in importance every day. Video understanding algorithms require accuracy, robustness, speed, and scalability. Accuracy generates user confidence. Robustness enables greater autonomy and reduced human intervention. Applications such as navigation and mapping demand real-time performance. Scalability is also important for maintaining high speed while expanding capacity to multiple users and sensors. In this thesis, I propose a "bag-of-phrases" model to improve the accuracy and robustness of the popular "bag-of-words" models. This model applies a "geometric grammar" to add structural constraints to the unordered "bag-of-words." I incorporate this model into an architecture which combines an object recognizer, a tracker, and a geolocation module. This architecture is able to use the complementarity of its components to compensate for their individual weaknesses, allowing for improvements in accuracy, robustness, and speed. Subsequently, I introduce VICTORIOUS, a fast implementation of the proposed architecture. Evaluation on computer-generated data as well as Caltech-101 indicates that this implementation is accurate, robust, and capable of performing in real time on current-generation hardware. This implementation, together with the "bag-of-phrases" model and integrated architecture, forms a step towards meeting the requirements for an accurate, robust, real-time vision system.
    by Yuetian Xu. M.Eng.
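
    The abstract does not spell out the "geometric grammar", but the general idea of gating an unordered bag-of-words match with a structural check can be sketched as follows (the histogram-intersection score and the pairwise-distance consistency test below are illustrative stand-ins, not the thesis's model).

```python
import numpy as np

def bag_of_words_score(hist_query, hist_model):
    """Unordered bag-of-words similarity (histogram intersection)."""
    return np.minimum(hist_query, hist_model).sum()

def structure_is_consistent(query_pts, model_pts, tol=0.3):
    """Crude structural constraint: pairwise distances between matched visual-word
    locations in the query should scale consistently with those in the model.
    query_pts, model_pts: Nx2 arrays of matched keypoint locations.
    """
    dq = np.linalg.norm(query_pts[:, None] - query_pts[None, :], axis=-1)
    dm = np.linalg.norm(model_pts[:, None] - model_pts[None, :], axis=-1)
    ratios = dq[dm > 0] / dm[dm > 0]
    # A consistent arrangement makes the distance ratios cluster around one scale.
    return np.std(ratios) / (np.mean(ratios) + 1e-9) < tol

def bag_of_phrases_score(hist_q, hist_m, query_pts, model_pts):
    """Bag-of-words score, penalised when the spatial arrangement disagrees."""
    score = bag_of_words_score(hist_q, hist_m)
    return score if structure_is_consistent(query_pts, model_pts) else 0.5 * score
```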

    A simple and efficient template matching algorithm

    We propose a general framework for object tracking in video images. It consists of low-order parametric models for the image motion of a target region. These models are used to predict the movement and to track the target. The difference in intensity between the pixels of the current region and the pixels of the selected target (learnt during an off-line stage) allows a straightforward prediction of the region's position in the current image. The proposed algorithm can track any planar textured target under homographic motions in real time. It is very simple (a few lines of code) and very efficient (less than 10 ms per frame on 150 MHz hardware).
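
    The off-line learning and on-line prediction described above can be written as a linear map from an intensity-difference vector to a motion-parameter update. The sketch below follows that idea but simplifies the motion model to a 2D translation (the full method handles homographic motion); all variable names are mine, not the paper's.

```python
import numpy as np

def learn_linear_predictor(template, sample_coords, n_perturbations=500,
                           max_shift=5, rng=None):
    """Off-line stage: learn A such that delta_params ~= A @ delta_intensity.

    template:      grayscale image containing the target region
    sample_coords: Mx2 integer (row, col) points inside the target region,
                   chosen far enough from the border to stay valid when shifted
    """
    rng = np.random.default_rng(rng)
    ref = template[sample_coords[:, 0], sample_coords[:, 1]].astype(float)
    dP, dI = [], []
    for _ in range(n_perturbations):
        shift = rng.integers(-max_shift, max_shift + 1, size=2)
        moved = sample_coords + shift
        dI.append(template[moved[:, 0], moved[:, 1]].astype(float) - ref)
        dP.append(-shift)  # the correction that undoes the synthetic perturbation
    # Least-squares fit mapping the M-dim intensity difference to the 2-dim update
    A, *_ = np.linalg.lstsq(np.array(dI), np.array(dP, dtype=float), rcond=None)
    return A.T  # shape (2, M)

def predict_update(A, current_intensities, reference_intensities):
    """On-line stage: one matrix-vector product per frame predicts the motion update."""
    return A @ (current_intensities - reference_intensities)
```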
