Virtual laboratories for education in science, technology, and engineering: A review
Within education, concepts such as distance learning and open universities are now widely used for teaching and learning. However, due to the nature of the subject domain, the teaching of science, technology, and engineering still lags behind in adopting new technological approaches (particularly for online distance learning). The reason for this discrepancy is that these fields often require laboratory exercises to provide effective skill acquisition and hands-on experience, and it is often difficult to make such laboratories accessible online: either the real lab must be enabled for remote access, or it must be replicated as a fully software-based virtual lab. We argue for the latter concept, since it offers several advantages over remotely controlled real labs, which are elaborated further in this paper. Emerging technologies, including computer graphics, augmented reality, computational dynamics, and virtual worlds, can overcome some of the potential difficulties in this area. This paper summarizes the state of the art in virtual laboratories and virtual worlds in the fields of science, technology, and engineering. The main research activity in these fields is discussed, with special emphasis on robotics due to the maturity of this area within the virtual-education community. This is no coincidence: given its widely multidisciplinary character, robotics is a perfect example to which all the other fields of engineering and physics can contribute. Thus, virtual labs for other scientific and non-robotic engineering uses can be seen to share many of the same learning processes.
This can range from supporting the introduction of new concepts in learning about science and technology, and introducing more general engineering knowledge, through to supporting more constructive (and collaborative) education and training activities in a complex engineering topic such as robotics. The objective of this paper is to outline this problem space in more detail and to create a valuable source of information that can help define the starting position for future research.
AltURI: a thin middleware for simulated robot vision applications
Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we present a thin, high-performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab, or web formats, and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm, and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot, as well as very easy to set up and use.
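The abstract does not specify AltURI's wire format, but USARSim itself exchanges line-oriented text messages made of a type tag followed by brace-delimited key/value segments. As an illustrative sketch of the kind of parsing a thin command/sensor processor performs (the message shown is a generic USARSim-style example, not taken from AltURI):

```python
import re

def parse_usarsim_message(line):
    """Parse a USARSim-style text message such as
    'SEN {Time 5.20} {Type Sonar} {Name F1 Range 2.35}'
    into (message_type, list_of_segment_dicts)."""
    msg_type, _, rest = line.strip().partition(" ")
    segments = []
    for body in re.findall(r"\{([^}]*)\}", rest):
        tokens = body.split()
        # Each segment holds alternating key/value tokens.
        seg = {tokens[i]: tokens[i + 1] for i in range(0, len(tokens) - 1, 2)}
        segments.append(seg)
    return msg_type, segments
```

A control loop would read such lines from the simulator socket, dispatch on the message type, and feed the numeric fields to the vision or control code.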
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to estimate user emotion using a software-only method.
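FLAME's emotional component appraises game events against the player's goals using fuzzy rules. The following is a minimal sketch of the fuzzy machinery involved, not El-Nasr's actual rule base: triangular membership functions fuzzify an event-desirability score, and a Sugeno-style weighted average defuzzifies it into a joy level (the value ranges and singleton outputs are illustrative assumptions):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_joy(desirability):
    """Map the desirability of a game event (in [-1, 1]) to a joy level
    in [0, 1] via fuzzy rules of the form 'if desirability is positive
    then joy is high'. Illustrative only, not FLAME's actual rules."""
    neg = tri(desirability, -2.0, -1.0, 0.0)
    neu = tri(desirability, -1.0, 0.0, 1.0)
    pos = tri(desirability, 0.0, 1.0, 2.0)
    # Sugeno-style defuzzification over singleton joy outputs 0.0/0.5/1.0.
    den = neg + neu + pos
    return (0.0 * neg + 0.5 * neu + 1.0 * pos) / den if den else 0.0
```

In a game, the desirability input would itself be derived from in-game events (taking damage, scoring a kill), which is what makes a software-only estimate possible.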
Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems
As robotic systems are moved out of factory work cells and into human-facing
environments, questions of choreography become central to their design,
placement, and application. With a human viewer or counterpart present, a
system will automatically be interpreted, through its context, style of
movement, and form factor, by human beings as an animate element of their environment. The
interpretation by this human counterpart is critical to the success of the
system's integration: knobs on the system need to make sense to a human
counterpart; an artificial agent should have a way of notifying a human
counterpart of a change in system state, possibly through motion profiles; and
the motion of a human counterpart may have important contextual clues for task
completion. Thus, professional choreographers, dance practitioners, and
movement analysts are critical to research in robotics. They have design
methods for movement that align with human audience perception, can identify
simplified features of movement for human-robot interaction goals, and have
detailed knowledge of the capacity of human movement. This article provides
approaches employed by one research lab, specific impacts on technical and
artistic projects within, and principles that may guide future such work. The
background section reports on choreography, somatic perspectives,
improvisation, the Laban/Bartenieff Movement System, and robotics. From this
context methods including embodied exercises, writing prompts, and community
building activities have been developed to facilitate interdisciplinary
research. The results of this work is presented as an overview of a smattering
of projects in areas like high-level motion planning, software development for
rapid prototyping of movement, artistic output, and user studies that help
understand how people interpret movement. Finally, guiding principles for other
groups to adopt are posited.Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for
the 21st Century)"
http://www.mdpi.com/journal/arts/special_issues/Machine_Artis
The Robotics Academy: An Immersive Learning Game for Training Industrial Roboticists
Emerging technologies, including artificial intelligence (AI), robotics, digital fabrication, spatial computing, and immersive media such as Augmented Reality (AR) and Virtual Reality (VR), are changing the employment landscape across a broad range of industries. It is anticipated that these technologies will enhance research and innovation, increase productivity, and spur new types of occupations and entrepreneurship. In the architecture, engineering, and construction (AEC) fields, automated building design with advanced software facilitating mass customization will change how buildings are designed. Robotics and automation, particularly in prefabrication and large-scale 3D printing of buildings, are expected to change how buildings are built. Automation technology will also transform how work is managed and conducted in the AEC sector. It is therefore imperative to prepare students for future changes brought by automation.
The Robotics Academy project is a cloud-based training platform designed to support AEC students in learning industrial robotics. This platform uses advances in cloud computing, VR, and learning games to create a personalized and engaging experience for developing programming and robotics operations skills. The Robotics Academy's immersive learning environment aims to offer a solution for teaching robotics in a safe simulated workspace that delivers creative training while minimizing risk. This virtual modality also allows students to acquire knowledge remotely, without relying on access to a robotics lab.
This paper outlines the process of designing the Robotics Academy's pedagogical approach, its curriculum development, and its delivery method as a VR application. The paper also describes plans for its future development as a fully customizable and immersive learning game for training students for future jobs in industrial robotics.
Robot@VirtualHome, an ecosystem of virtual environments and tools for realistic indoor robotic simulation
Simulations and synthetic datasets have historically empowered research in different service-robotics-related problems, and are nowadays being revamped with the utilization of rich virtual environments. However, with their use, special attention must be paid so that the resulting algorithms are not biased by the synthetic data and can generalize to real-world conditions. These aspects are usually compromised when the virtual environments are manually designed. This article presents Robot@VirtualHome, an ecosystem of virtual environments and tools that allows for the management of realistic virtual environments where robotic simulations can be performed. Here "realistic" means that those environments have been designed by mimicking the rooms' layouts and objects appearing in 30 real houses, hence not being influenced by the designer's knowledge. The provided virtual environments are highly customizable (lighting conditions, textures, objects' models, etc.), accommodate meta-information about the elements appearing therein (objects' types, room categories and layouts, etc.), and support the inclusion of virtual service robots and sensors. To illustrate the possibilities of Robot@VirtualHome, we show how it has been used to collect a synthetic dataset, and also exemplify how to exploit it to successfully address two service-robotics-related problems: semantic mapping and appearance-based localization.
This work has been supported by the research projects WISER (DPI2017-84827-R), funded by the Spanish Government and financed by the European Regional Development Fund (FEDER), and ARPEGGIO (PID2020-117057GB-I00), funded by the European H2020 program, by grant number FPU17/04512, and by the UG PhD scholarship program of the University of Groningen. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal used for this research. We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
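The abstract does not specify how this meta-information is stored on disk. As an illustrative sketch only, assuming per-environment annotations in a JSON document (the keys, file layout, and category names below are hypothetical, not the dataset's actual schema), a consumer could query objects per room like this:

```python
import json

# Hypothetical annotation for one Robot@VirtualHome-style environment;
# keys and category names are illustrative, not the dataset's schema.
annotation = json.loads("""
{
  "house": "House01",
  "rooms": [
    {"category": "kitchen", "objects": ["fridge", "oven", "table"]},
    {"category": "bedroom", "objects": ["bed", "wardrobe"]}
  ]
}
""")

def objects_per_room(env):
    """Map each room category to the number of annotated objects in it."""
    return {room["category"]: len(room["objects"]) for room in env["rooms"]}
```

Meta-information of this kind is what makes tasks such as semantic mapping evaluable: the annotated room categories and object types serve as ground truth.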
Gym-Ignition: Reproducible Robotic Simulations for Reinforcement Learning
This paper presents Gym-Ignition, a new framework to create reproducible
robotic environments for reinforcement learning research. It interfaces with
the new generation of Gazebo, part of the Ignition Robotics suite, which
provides three main improvements for reinforcement learning applications
compared to the alternatives: 1) the modular architecture enables using the
simulator as a C++ library, simplifying the interconnection with external
software; 2) multiple physics and rendering engines are supported as plugins,
simplifying their selection during the execution; 3) the new distributed
simulation capability allows simulating complex scenarios while sharing the
load on multiple workers and machines. The core of Gym-Ignition is a component
that contains the Ignition Gazebo simulator and exposes a simple interface for
its configuration and execution. We provide a Python package that allows
developers to create robotic environments simulated in Ignition Gazebo.
Environments expose the common OpenAI Gym interface, making them compatible
out-of-the-box with third-party frameworks containing reinforcement learning
algorithms. Simulations can be executed in both headless and GUI mode, the
physics engine can run in accelerated mode, and instances can be parallelized.
Furthermore, the Gym-Ignition software architecture provides abstractions of the
Robot and the Task, making environments agnostic to the specific runtime. This
abstraction also allows their execution in a real-time setting on actual
robotic platforms, even if driven by different middlewares.
Comment: Accepted in SII202
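Environments that expose the common OpenAI Gym interface implement `reset()` and `step(action)` with a fixed return contract, which is what makes them compatible out of the box with third-party reinforcement learning frameworks. Below is a dependency-free sketch of that contract using a toy 1-D task; it illustrates the interface only and is not Gym-Ignition's actual API or dynamics:

```python
class Toy1DEnv:
    """Minimal environment following the classic OpenAI Gym contract:
    reset() returns an initial observation, and step(action) returns
    (observation, reward, done, info). The dynamics here are a toy
    1-D reach-the-goal task, purely for illustration."""

    def __init__(self, goal=5):
        self.goal = goal
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):
        # action: -1 (move left) or +1 (move right)
        self.position += action
        done = self.position == self.goal
        reward = 1.0 if done else -0.1  # small step cost, terminal bonus
        return self.position, reward, done, {}
```

Because any RL framework only touches `reset`/`step`, the same agent loop can drive this toy task or a full Ignition Gazebo simulation without modification.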