
    Augmented Reality: An Overview and Five Directions for AR in Education

    Augmented Reality (AR) is an emerging form of experience in which the Real World (RW) is enhanced by computer-generated content tied to specific locations and/or activities. Over the last several years, AR applications have become portable and widely available on mobile devices. AR is becoming visible in our audio-visual media (e.g., news, entertainment, sports) and is beginning to enter other aspects of our lives (e.g., e-commerce, travel, marketing) in tangible and exciting ways. Facilitating ubiquitous learning, AR will give learners instant access to location-specific information compiled and provided by numerous sources (2009). Both the 2010 and 2011 Horizon Reports predict that AR will soon see widespread use on US college campuses. In preparation, this paper offers an overview of AR, examines recent AR developments, explores the impact of AR on society, and evaluates the implications of AR for learning and education.

    Mitigation Of Motion Sickness Symptoms In 360 Degree Indirect Vision Systems

    The present research attempted to use display design as a means to mitigate the occurrence and severity of symptoms of motion sickness and increase performance due to reduced “general effects” in an uncoupled motion environment. Specifically, several visual display manipulations of a 360° indirect vision system were implemented during a target detection task while participants were concurrently immersed in a motion simulator that mimicked off-road terrain, which was completely separate from the target detection route. Results of a multiple regression analysis determined that the Dual Banners display incorporating an artificial horizon (i.e., AH Dual Banners) and perceived attentional control significantly contributed to the outcome of total severity of motion sickness, as measured by the Simulator Sickness Questionnaire (SSQ). Altogether, 33.6% (adjusted) of the variability in Total Severity was predicted by the variables used in the model. Objective measures were assessed prior to, during, and after uncoupled motion. These tests involved performance while immersed in the environment (i.e., target detection and situation awareness), as well as postural stability and cognitive and visual assessment tests (i.e., Grammatical Reasoning and Manikin) both before and after immersion. Response time to Grammatical Reasoning actually decreased after uncoupled motion; however, this was the only significant difference among all the performance measures. Assessment of subjective workload (as measured by NASA-TLX) determined that participants in Dual Banners display conditions had a significantly lower level of perceived physical demand than those with Completely Separated display designs. Further, perceived temporal demand was lower for participants exposed to conditions incorporating an artificial horizon.
Subjective sickness (SSQ Total Severity, Nausea, Oculomotor, and Disorientation) was evaluated using non-parametric tests, which confirmed that the AH Dual Banners display had significantly lower Total Severity scores than the Completely Separated display with no artificial horizon (i.e., NoAH Completely Separated). Oculomotor scores were also significantly different for these two conditions, with lower scores associated with AH Dual Banners. The NoAH Completely Separated condition also had marginally higher oculomotor scores when compared to the Completely Separated display incorporating the artificial horizon (AH Completely Separated). There were no significant differences in sickness symptoms or severity (measured by self-assessment, postural stability, and cognitive and visual tests) between display designs 30 and 60 minutes post-exposure. Further, 30- and 60-minute post measures were not significantly different from baseline scores, suggesting that aftereffects were not present up to 60 minutes post-exposure. It was concluded that incorporating an artificial horizon onto the Dual Banners display will be beneficial in mitigating symptoms of motion sickness in manned ground vehicles using 360° indirect vision systems. Screening for perceived attentional control will also be advantageous in situations where selection is possible. However, caution must be taken in generalizing these results to missions involving terrain or vehicle speeds different from those used in this study, as well as those that include longer immersion times.
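For context, the SSQ subscale and Total Severity scores discussed above are derived from raw symptom ratings using fixed weights. The sketch below is not part of the study itself; it assumes the standard published SSQ scoring weights (Nausea × 9.54, Oculomotor × 7.58, Disorientation × 13.92, Total Severity = sum of raw subscale scores × 3.74), with the raw subscale sums already computed from the 16 symptom items:

```python
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    """Convert raw SSQ subscale sums into weighted subscale scores
    and the Total Severity score, using the standard SSQ weights."""
    nausea = nausea_raw * 9.54
    oculomotor = oculomotor_raw * 7.58
    disorientation = disorientation_raw * 13.92
    # Total Severity is the weighted sum of the three raw subscale totals.
    total = (nausea_raw + oculomotor_raw + disorientation_raw) * 3.74
    return nausea, oculomotor, disorientation, total
```

Because the weights differ per subscale, two participants with the same Total Severity can have quite different symptom profiles, which is why the study reports the subscales separately.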

    A virtual reality classroom to teach and explore crystal solid state structures

    We present an educational application of virtual reality that we created to help students gain an in-depth understanding of the internal structure of crystals and related key concepts. Teachers can use it to give lectures to small groups (10-15) of students in a shared virtual environment, both remotely (with teacher and students in different locations) and locally (while sharing the same physical space). Lectures can be recorded, stored in an online repository, and shared with students, who can either review a recorded lecture in the same virtual environment or use the application for self-study by exploring a large collection of available crystal structures. We validated our application in a study with human subjects and received positive feedback.

    Vision Based Calibration and Localization Technique for Video Sensor Networks

    Recent evolutions in embedded systems have now made video sensor networks a reality. A video sensor network consists of a large number of low-cost camera-sensors that are deployed in a random manner. It pervades both the civilian and military fields, with a huge number of applications in areas like health care, environmental monitoring, surveillance, and tracking. As most of the applications demand knowledge of the sensor locations and the network topology before proceeding with their tasks, especially those based on detecting and reporting events, the problem of localization and calibration assumes a significance far greater than most others in video sensor networks. The literature is replete with localization and calibration algorithms that rely on some a priori chosen nodes, called seeds, with known coordinates to help determine the network topology. Some of these algorithms require additional hardware, like antenna arrays, while others require regularly reacquiring synchronization among the seeds so as to calculate the time difference of the received signals. Very few of these localization algorithms use vision-based techniques. In this work, a vision-based technique is proposed for localizing and configuring the camera nodes in video wireless sensor networks. The camera network is assumed to be randomly deployed. One a priori selected node acts as the core of the network and starts by locating two other reference nodes. These three nodes, in turn, participate in locating the entire network using a trilateration method together with appropriate vision characteristics. In this work, the vision characteristic used is the relationship between the height of the image in the image plane and the real distance between the sensor node and the camera. Many simulation experiments have been carried out to demonstrate the feasibility of the proposed technique.
In addition to this work, experiments were also carried out to locate new objects in the video sensor network. The experimental results showcase the accuracy of building a one-plane network topology in a relative coordinate system, as well as the robustness of the technique against the accumulated error in configuring the whole network.
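The two ingredients described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a simple pinhole-camera relation between an object of known real height and its image height (in pixels) to estimate range, and then recovers a node's 2-D position from three such range estimates by linearizing the circle equations into a least-squares problem.

```python
import numpy as np

def distance_from_image_height(focal_length_px, real_height_m, image_height_px):
    """Pinhole-camera relation: the image height of an object of known
    real height is inversely proportional to its distance, so
    distance = f * H_real / h_image."""
    return focal_length_px * real_height_m / image_height_px

def trilaterate_2d(anchors, distances):
    """Estimate (x, y) from >= 3 anchor positions and range estimates.
    Subtracting the first circle equation from the others removes the
    quadratic terms, leaving a linear least-squares system."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linear system: 2 * (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical usage: a target of known height 0.5 m, a camera with an
# assumed focal length of 800 px, and three reference nodes at known positions.
r = distance_from_image_height(800.0, 0.5, 100.0)  # range estimate in metres
p = trilaterate_2d([(0, 0), (4, 0), (0, 3)], [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])
```

With noisy ranges from more than three reference nodes, the same least-squares formulation averages out part of the per-range error, which relates to the robustness against accumulated error reported above.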