
    Virtual laboratories for education in science, technology, and engineering: A review

    Within education, concepts such as distance learning and open universities are now becoming more widely used for teaching and learning. However, due to the nature of the subject domain, the teaching of Science, Technology, and Engineering still lags behind in adopting new technological approaches (particularly for online distance learning). The reason for this discrepancy is that these fields often require laboratory exercises to provide effective skill acquisition and hands-on experience, and it is often difficult to make such laboratories accessible online: either the real lab must be enabled for remote access or it must be replicated as a fully software-based virtual lab. We argue for the latter concept, since it offers some advantages over remotely controlled real labs, which are elaborated further in this paper. Emerging technologies, including computer graphics, augmented reality, computational dynamics, and virtual worlds, can overcome some of the potential difficulties in this area. This paper summarizes the state of the art in virtual laboratories and virtual worlds in the fields of science, technology, and engineering. The main research activity in these fields is discussed, but special emphasis is put on robotics due to the maturity of this area within the virtual-education community. This is no coincidence: given its broadly multidisciplinary character, robotics is a perfect example to which all the other fields of engineering and physics can contribute. Thus, virtual labs for other scientific and non-robotic engineering uses can be seen to share many of the same learning processes, from supporting the introduction of new concepts in learning about science and technology, and introducing more general engineering knowledge, through to supporting more constructive (and collaborative) education and training activities in a more complex engineering topic such as robotics. The objective of this paper is to outline this problem space in more detail and to create a valuable source of information that can help to define the starting position for future research.

    New Trends in Using Augmented Reality Apps for Smart City Contexts

    The idea of virtuality is not new; research on visualization and simulation dates back to the early use of ink-and-paper sketches for comparing alternative designs. As technology has advanced, so has the way simulations are visualized, but progress has been slow due to the difficulty of creating workable simulation models and delivering them effectively to users. Augmented Reality and Virtual Reality, evolving technologies that have swept through the tech industry, attracting extensive media attention and growing at a colossal rate, are redefining the way we interact, communicate, and work together. From consumer applications to manufacturing, these technologies are used in different sectors, providing substantial benefits through several applications. In this work, we demonstrate the potential of Augmented Reality techniques in a Smart City (Smart Campus) context. A multiplatform mobile app featuring Augmented Reality capabilities connected to GIS services is developed to evaluate features such as performance, usability, effectiveness, and satisfaction of the Augmented Reality technology in the context of a Smart Campus.
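
    The mechanism such an app relies on is easy to outline: the client asks a GIS service for nearby points of interest and converts each geographic coordinate into a distance and compass bearing relative to the device, which the AR layer then uses to place its overlays. The Python sketch below illustrates that conversion only; the point-of-interest structure and sample coordinates are hypothetical and are not taken from the work described above.

        # Illustrative sketch (not the authors' code): convert GIS points of interest
        # into distance/bearing pairs that an AR overlay could use for placement.
        import math

        EARTH_RADIUS_M = 6371000.0  # mean Earth radius

        def haversine_distance(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres between two WGS84 coordinates."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

        def initial_bearing(lat1, lon1, lat2, lon2):
            """Compass bearing (degrees from north) from the device to the target."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dlmb = math.radians(lon2 - lon1)
            y = math.sin(dlmb) * math.cos(phi2)
            x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
            return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

        def place_overlays(device_lat, device_lon, pois):
            """Return (name, distance_m, bearing_deg) tuples the AR view can render."""
            return [
                (p["name"],
                 haversine_distance(device_lat, device_lon, p["lat"], p["lon"]),
                 initial_bearing(device_lat, device_lon, p["lat"], p["lon"]))
                for p in pois
            ]

        # Hypothetical campus points of interest, as a GIS service might return them.
        campus_pois = [{"name": "Library", "lat": 40.3326, "lon": -3.7651}]
        print(place_overlays(40.3320, -3.7660, campus_pois))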

    Design and implementation of an educational game control module through Microsoft Kinect

    This project describes the design and implementation of an Educational Game Control Module using a motion-detection device. The solution is focused on providing a customizable platform for educators to design games that may be controlled through gestures. In order to develop the project, the latest trends in Human-Computer Interaction and motion detection have been analyzed thoroughly.
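
    As a rough illustration of how such a module can let educators bind gestures to game actions without writing code, the Python sketch below routes detected gestures through an editable mapping table and applies simple threshold rules to skeleton joint positions. The joint names, thresholds, and gesture labels are assumptions made for illustration and do not correspond to the project's actual implementation.

        # Illustrative sketch (not the project's code): an educator-configurable
        # mapping from detected gestures to game commands.

        # Configuration an educator could edit: gesture name -> game command.
        GESTURE_TO_COMMAND = {
            "right_hand_raised": "jump",
            "left_hand_raised": "select",
            "hands_together": "pause",
        }

        def detect_gesture(joints):
            """Simple pose rules over skeleton joints given as {name: (x, y, z)}."""
            head_y = joints["head"][1]
            rh, lh = joints["right_hand"], joints["left_hand"]
            if rh[1] > head_y:
                return "right_hand_raised"
            if lh[1] > head_y:
                return "left_hand_raised"
            if abs(rh[0] - lh[0]) < 0.1 and abs(rh[1] - lh[1]) < 0.1:
                return "hands_together"
            return None

        def command_for_frame(joints):
            """Translate one skeleton frame into a game command, or None."""
            return GESTURE_TO_COMMAND.get(detect_gesture(joints))

        # Example frame: the right hand above the head triggers "jump".
        frame = {"head": (0.0, 1.6, 2.0), "right_hand": (0.3, 1.8, 2.0), "left_hand": (-0.3, 1.0, 2.0)}
        print(command_for_frame(frame))  # -> "jump"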

    INTEGRATION OF THE SIMULATION ENVIRONMENT FOR AUTONOMOUS ROBOTS WITH ROBOTICS MIDDLEWARE

    Robotic simulators have long been used to test code and designs before any actual hardware is used, to ensure safety and efficiency. Many current robotics simulators are either closed source (calling into question the fidelity of their simulations) or very complicated to install and use. There is a need for software that provides good-quality simulation while being easy to use. Another issue arises when moving code from the simulator to actual hardware: in many cases the code must be changed drastically to accommodate the final hardware on the robot, which can invalidate aspects of the simulation. This thesis describes methods and techniques for developing high-fidelity graphical and physical simulation of autonomous robotic vehicles that is simple to use and makes minimal distinction between simulated and actual hardware. These techniques and methods were proven by the development of the Simulation Environment for Autonomous Robots (SEAR) described here. SEAR is a 3-dimensional open-source robotics simulator written by Adam Harris in Java that provides high-fidelity graphical and physical simulations of user-designed vehicles running user-defined code in user-designed virtual terrain. Multiple simulated sensors are available, including a GPS, a triple-axis accelerometer, a triple-axis gyroscope, a compass with declination calculation, LIDAR, and a class of distance sensors that includes RADAR, SONAR, ultrasonic, and infrared. Several of these sensors have been validated against real-world sensors and other simulation software.
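
    One way to keep the gap between simulated and real hardware minimal, as the abstract emphasises, is to hide both behind a common sensor interface so that the robot code never knows which backend it is reading. The Python sketch below illustrates that pattern only; the class names and the Gaussian noise model are illustrative assumptions and do not reflect SEAR's actual Java API.

        # Illustrative sketch (not SEAR's Java API): a shared sensor interface so
        # robot code runs unchanged against simulated or real hardware.
        import random
        from abc import ABC, abstractmethod

        class RangeSensor(ABC):
            """Common interface for distance sensors (LIDAR, SONAR, ultrasonic, ...)."""
            @abstractmethod
            def read_m(self) -> float:
                """Return the measured distance in metres."""

        class SimulatedRangeSensor(RangeSensor):
            """Reads ground truth from the simulated world and adds Gaussian noise."""
            def __init__(self, world, noise_std_m=0.02):
                self.world = world
                self.noise_std_m = noise_std_m

            def read_m(self) -> float:
                true_distance = self.world.distance_to_nearest_obstacle()
                return max(0.0, random.gauss(true_distance, self.noise_std_m))

        class RealRangeSensor(RangeSensor):
            """Would wrap the actual hardware driver; omitted here."""
            def read_m(self) -> float:
                raise NotImplementedError("hardware driver goes here")

        def obstacle_ahead(sensor: RangeSensor, threshold_m=0.5) -> bool:
            """Robot code depends only on the interface, never on the backend."""
            return sensor.read_m() < threshold_m

        # Minimal fake world for the example.
        class FakeWorld:
            def distance_to_nearest_obstacle(self):
                return 0.4

        print(obstacle_ahead(SimulatedRangeSensor(FakeWorld())))  # almost certainly True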

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating existing VR/AR media and subsequently setting it to purpose remains a highly scattered and esoteric undertaking. Moreover, the architectures that derive from such ventures seldom include haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head Mounted Display (HMD) from Oculus Rift©, a hand-tracking controller from Leap Motion©, a custom-made VR mount that allows for the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer, and another that deals with the non-intrusive acquisition, processing, and registering of his/her Electrocardiogram (ECG), Electromyogram (EMG), and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, it was time to substantiate our initial claim with thoroughly developed experiences that would attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, among which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL), and a Surgical Simulator.
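
    A central piece of such a framework is moving the glove's biosignal samples into the game engine in real time. The Python sketch below shows one plausible shape for that data path, packaging ECG, EMG, and EDA samples as JSON frames sent over UDP; the field names, port, and sampling rate are assumptions for illustration and are not the dissertation's actual transport into Unity3D©.

        # Illustrative sketch (not the framework's actual transport): packaging
        # glove biosignal samples (ECG, EMG, EDA) as JSON frames that a
        # game-engine client could consume over UDP.
        import json
        import socket
        import time
        from dataclasses import dataclass, asdict

        @dataclass
        class BiosignalFrame:
            timestamp_s: float
            ecg_mv: float   # electrocardiogram, millivolts
            emg_mv: float   # electromyogram, millivolts
            eda_us: float   # electrodermal activity, microsiemens

        def read_glove_sample() -> BiosignalFrame:
            """Stand-in for the glove's acquisition module (hypothetical values)."""
            return BiosignalFrame(time.time(), ecg_mv=0.8, emg_mv=0.1, eda_us=4.2)

        def stream_frames(host="127.0.0.1", port=9000, n_frames=5, rate_hz=50):
            """Send a few frames to a listening client, e.g. a game-engine scene."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for _ in range(n_frames):
                frame = read_glove_sample()
                sock.sendto(json.dumps(asdict(frame)).encode("utf-8"), (host, port))
                time.sleep(1.0 / rate_hz)
            sock.close()

        if __name__ == "__main__":
            stream_frames()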

    GPU Computing for Cognitive Robotics

    This thesis presents the first investigation of the impact of GPU computing on cognitive robotics, providing a series of novel experiments in the area of action and language acquisition in humanoid robots and computer vision. Cognitive robotics is concerned with endowing robots with high-level cognitive capabilities to enable the achievement of complex goals in complex environments. Reaching the ultimate goal of developing cognitive robots will require tremendous amounts of computational power, which until recently was provided mostly by standard CPU processors. CPU cores are optimised for serial code execution at the expense of parallel execution, which renders them relatively inefficient for high-performance computing applications. The ever-increasing market demand for high-performance, real-time 3D graphics has evolved the GPU into a highly parallel, multithreaded, many-core processor with extraordinary computational power and very high memory bandwidth. These vast computational resources of modern GPUs can now be used by most cognitive robotics models, as they tend to be inherently parallel. Various interesting and insightful cognitive models have been developed and have addressed important scientific questions concerning action-language acquisition and computer vision. While they have provided us with important scientific insights, their complexity and application have not improved much over the last years. The experimental tasks as well as the scale of these models are often minimised to avoid excessive training times that grow exponentially with the number of neurons and the amount of training data. This impedes further progress and the development of complex neurocontrollers that would take cognitive robotics research a step closer to the ultimate goal of creating intelligent machines. This thesis presents several cases where the application of GPU computing to cognitive robotics algorithms resulted in the development of large-scale neurocontrollers of previously unseen complexity, enabling the novel experiments described herein.
    European Commission Seventh Framework Programme
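
    The speed-up this line of work depends on comes from the fact that the core operation of such neurocontrollers, propagating a batch of activations through a weight matrix, is naturally parallel: every output neuron can be computed independently. The Python sketch below contrasts a per-neuron loop with the batched matrix product that maps directly onto a GPU's many cores; it uses NumPy purely for illustration and is not the thesis' GPU code.

        # Illustrative sketch (not the thesis' GPU implementation): the per-neuron
        # loop versus the batched matrix product that GPUs parallelise so well.
        import numpy as np

        rng = np.random.default_rng(0)
        batch, n_in, n_out = 64, 512, 256
        x = rng.standard_normal((batch, n_in))   # input activations
        w = rng.standard_normal((n_in, n_out))   # connection weights

        def forward_serial(x, w):
            """One output neuron at a time: the CPU-style serial view."""
            out = np.zeros((x.shape[0], w.shape[1]))
            for j in range(w.shape[1]):          # each neuron is independent...
                out[:, j] = x @ w[:, j]
            return np.tanh(out)

        def forward_parallel(x, w):
            """Whole layer at once: the formulation a GPU library would spread
            across thousands of threads."""
            return np.tanh(x @ w)

        assert np.allclose(forward_serial(x, w), forward_parallel(x, w))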