Multi-criteria Evolution of Neural Network Topologies: Balancing Experience and Performance in Autonomous Systems
The majority of Artificial Neural Network (ANN) implementations in autonomous
systems use a fixed, user-prescribed network topology, leading to sub-optimal
performance and low portability. The existing NeuroEvolution of Augmenting
Topologies (NEAT) paradigm offers a powerful alternative by allowing the
network topology and the connection weights to be optimized simultaneously
through an evolutionary process. However, most NEAT implementations allow the
consideration of only a single objective. There also persists the question of
how to tractably introduce topological diversification that mitigates
overfitting to training scenarios. To address these gaps, this paper develops a
multi-objective neuro-evolution algorithm. While adopting the basic elements of
NEAT, important modifications are made to the selection, speciation, and
mutation processes. With the backdrop of small-robot path-planning
applications, an experience-gain criterion is derived to encapsulate the amount
of diverse local environment encountered by the system. This criterion
facilitates the evolution of genes that support exploration, thereby seeking to
generalize from a smaller set of mission scenarios than is possible with
performance maximization alone. The effectiveness of the single-objective
(optimizing performance) and the multi-objective (optimizing performance and
experience gain) neuro-evolution approaches is evaluated on two different
small-robot cases, with the ANNs obtained by multi-objective optimization
observed to provide superior performance in unseen scenarios.
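The selection step of a multi-objective evolution like the one described above typically ranks candidates by Pareto dominance over the objective pair (performance, experience gain). The following is a minimal sketch of that dominance check, not the paper's actual algorithm; the genome labels and objective values are illustrative.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective
    # and strictly better in at least one
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return genomes whose (performance, experience_gain) pair is non-dominated."""
    front = []
    for i, (genome, obj) in enumerate(population):
        if not any(dominates(other_obj, obj)
                   for j, (_, other_obj) in enumerate(population) if j != i):
            front.append(genome)
    return front

# Hypothetical genomes with (performance, experience_gain) scores
pop = [("g1", (0.9, 0.2)), ("g2", (0.7, 0.8)), ("g3", (0.6, 0.1))]
print(pareto_front(pop))  # → ['g1', 'g2']  (g3 is dominated by g1)
```

Selecting parents from the non-dominated front is what lets exploratory (high experience-gain) genomes survive even when their raw performance is lower.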
Toward future 'mixed reality' learning spaces for STEAM education
Digital technology is becoming more integrated into modern society. As this happens, technologies including augmented reality, virtual reality, 3D printing and user-supplied mobile devices (collectively referred to as mixed reality) are often touted as likely to become a larger part of the classroom and learning environment. In the discipline areas of STEAM education, experts are expected to be at the forefront of technology and of how it might fit into their classrooms. This is especially important because, increasingly, educators find themselves surrounded by new learners who expect to be engaged with participatory, interactive, sensory-rich, experimental activities offering greater opportunities for student input and creativity. This paper explores learner and academic perspectives on mixed reality case studies in 3D spatial design (multimedia and architecture), paramedic science and information technology, through the use of existing data as well as additional one-on-one interviews around the use of mixed reality in the classroom. Results show that mixed reality can provide engagement, critical thinking and problem-solving benefits for students in line with this new generation of learners, but also demonstrate that more work needs to be done to refine mixed reality solutions for the classroom.
Visualising mixed reality simulation for multiple users
Cowling, MA (ORCiD: 0000-0003-1444-1563)
Blended reality seeks to encourage co-presence in the classroom, blending the student experience across virtual and physical worlds. In a similar way, Mixed Reality, a continuum between virtual and real environments, now allows learners to work in the physical and the digital world simultaneously, especially when combined with an immersive headset experience. This provides innovative new experiences for learning, but faces the challenge that most of these experiences are single-user, leaving other learners outside the new environment. The question therefore becomes: how can a mixed reality simulation be experienced by multiple users, and how can that simulation be presented effectively to users to create a true blended reality environment? This paper proposes a study that uses existing screen production research into the user and the spectator to produce a mixed reality simulation suitable for multiple users. A research method using Design-Based Research is also presented to assess the usability of the approach.
Prevalence of haptic feedback in robot-mediated surgery: a systematic review of literature
© 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4
With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation of haptic feedback technology in RS, which would permit the operating surgeon at the console to receive haptic information on the type of tissue being operated on. The main advantage is to allow the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving the application and implementation of haptic feedback technology for the operating surgeon at the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of further improving patient care and robotic surgical technology. Peer reviewed.
Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting
generalizability of network performance on visual tasks trained on synthetic
data and tested in real environments. This paper proposes an efficient,
automatic, physically-based augmentation pipeline to vary sensor effects
(chromatic aberration, blur, exposure, noise, and color cast) for synthetic
imagery. In particular, this paper illustrates that augmenting synthetic
training datasets with the proposed pipeline reduces the domain gap between
synthetic and real domains for the task of object detection in urban driving
scenes.
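A sensor-effect augmentation of the kind the abstract describes can be sketched as randomized per-image perturbations of exposure, color balance, and noise. This is a simplified illustration, not the paper's physically-based pipeline; the function name and parameter ranges are assumptions for the example.

```python
import numpy as np

def augment_sensor_effects(img, rng=None):
    """Apply randomized sensor-style effects to an RGB image with values in [0, 1].

    A hypothetical sketch covering three of the effects named in the abstract:
    exposure, color cast, and noise (chromatic aberration and blur omitted).
    """
    if rng is None:
        rng = np.random.default_rng()
    out = img.astype(np.float64)
    out *= rng.uniform(0.7, 1.3)                  # global exposure gain
    out *= rng.uniform(0.9, 1.1, size=(1, 1, 3))  # per-channel color cast
    out += rng.normal(0.0, 0.02, size=out.shape)  # additive sensor noise
    return np.clip(out, 0.0, 1.0)

# Apply to a uniform gray synthetic image
img = np.full((4, 4, 3), 0.5)
aug = augment_sensor_effects(img, rng=np.random.default_rng(0))
```

Applying such perturbations to each synthetic training image forces the detector to tolerate the degradations real cameras introduce, which is the mechanism behind the reported domain-gap reduction.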