
    A Planning Pipeline for Large Multi-Agent Missions

    In complex multi-agent applications, human operators are often tasked with planning and managing large heterogeneous teams of humans and autonomous vehicles. Although the use of these autonomous vehicles broadens the scope of meaningful applications, many of their systems remain unintuitive and difficult to master for human operators whose expertise lies in the application domain rather than at the platform level. Current research focuses on developing the individual capabilities necessary to plan multi-agent missions of this scope, placing little emphasis on integrating these components into a full pipeline. This paper presents a complete and user-agnostic planning pipeline for large multi-agent missions known as the HOLII GRAILLE. The system takes a holistic approach to mission planning by integrating capabilities in human-machine interaction, flight path generation, and validation and verification. Component modules of the pipeline are explored individually, as is their integration into a whole system. Lastly, implications for future mission planning are discussed.
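
    The abstract does not give implementation details, but a minimal sketch can illustrate how a staged, user-agnostic pipeline of this shape might be composed. All names below (Mission, capture_operator_intent, generate_flight_paths, verify_mission) are hypothetical stand-ins for the three integrated capabilities, not the HOLII GRAILLE API.

    # Minimal sketch of a staged mission-planning pipeline: operator input,
    # flight path generation, then verification. Hypothetical names throughout.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class Mission:
        waypoints: List[Tuple[float, float]] = field(default_factory=list)
        validated: bool = False

    Stage = Callable[[Mission], Mission]

    def capture_operator_intent(m: Mission) -> Mission:
        # Stand-in for the human-machine interaction front end.
        m.waypoints = [(0.0, 0.0), (10.0, 5.0)]
        return m

    def generate_flight_paths(m: Mission) -> Mission:
        # Stand-in for path generation: insert a midpoint between waypoints.
        path = []
        for a, b in zip(m.waypoints, m.waypoints[1:]):
            path += [a, ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)]
        path.append(m.waypoints[-1])
        m.waypoints = path
        return m

    def verify_mission(m: Mission) -> Mission:
        # Stand-in for validation and verification checks.
        m.validated = len(m.waypoints) >= 2
        return m

    def run_pipeline(stages: List[Stage], m: Mission) -> Mission:
        for stage in stages:
            m = stage(m)
        return m

    mission = run_pipeline(
        [capture_operator_intent, generate_flight_paths, verify_mission], Mission())
    print(mission.waypoints, mission.validated)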

    Natural user interfaces for interdisciplinary design review using the Microsoft Kinect

    As markets demand engineered products faster, waiting on the cyclical design processes of the past is not an option. Instead, industry is turning to concurrent design and interdisciplinary teams. When these teams collaborate, engineering CAD tools play a vital role in conceptualizing and validating designs. These tools require significant user investment to master, due to challenging interfaces and an overabundance of features. These challenges often prohibit team members from using these tools to explore designs. This work presents a method allowing users to interact with a design using intuitive gestures and head tracking, all while keeping the model in a CAD format. Specifically, Siemens' Teamcenter® Lifecycle Visualization Mockup (Mockup) was used to display design geometry while modifications were made through a set of gestures captured by a Microsoft Kinect™ in real time. This proof-of-concept program allowed a user to rotate the scene, activate Mockup's immersive menu, move the immersive wand, and manipulate the view based on head position. This work also evaluates gesture usability and task completion time for the proof-of-concept system. A cognitive model evaluation method was used to evaluate the premise that gesture-based user interfaces are easier to use and learn, in terms of time, than a traditional mouse-and-keyboard interface. Using a cognitive model analysis tool allowed the rapid testing of interaction concepts without the significant overhead of user studies and full development cycles. The analysis demonstrated that the Kinect™ is a feasible interaction mode for CAD/CAE programs. It also pointed out limitations in the gesture interface's ability to compete, time-wise, with easily accessible customizable menu options.
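
    As a rough illustration of the head-tracking interaction described above, the sketch below maps a tracked head position to a camera view offset. The gain and the neutral pose are illustrative assumptions; this is not the Teamcenter Visualization Mockup API.

    # Map head displacement from a calibrated neutral pose to a camera offset,
    # the way a Kinect-style head tracker might drive the view. Values assumed.
    def head_to_camera_offset(head_xyz, neutral_xyz, gain=0.5):
        """Translate head displacement into a proportional camera offset."""
        return tuple(gain * (h - n) for h, n in zip(head_xyz, neutral_xyz))

    neutral = (0.0, 1.20, 2.00)   # calibrated neutral head position (metres)
    tracked = (0.10, 1.25, 1.90)  # current head position from the sensor
    print(head_to_camera_offset(tracked, neutral))  # roughly (0.05, 0.025, -0.05)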

    UX Evaluation of a Tractor Cabin Digital Twin Using Mixed Reality

    Understanding user experience (UX) is essential to designing engaging and attractive products, so interest in the user-centred design approach has grown; in this perspective, digital technologies such as Virtual Reality (VR) and Mixed Reality (MR) can help designers and engineers create a digital prototype through which user feedback can be considered during the product design stage. This research aims at creating an interactive Digital Twin (DT) using MR to enable a tractor driving simulation and involve real users in an early UX evaluation, with the aim of validating the design of the control dashboard through a transdisciplinary approach. MR combines virtual simulation with real physical hardware devices that the user can interact with and control through both visual and tactile feedback. The result is an MR simulator that combines virtual content and physical controls, capable of reproducing a plowing activity close to reality. The principles of UX design were applied throughout this research for a continuous and dynamic UX evaluation during project development.
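
    The abstract stays at the level of the simulator's goals, but the coupling it describes (physical controls driving the virtual machine each frame) can be sketched minimally. The class, read_lever(), and all constants below are illustrative assumptions, not the project's implementation.

    # Each frame, read a physical control on the dashboard mock-up and apply
    # it to the virtual tractor state, giving the user tactile control.
    class VirtualTractor:
        def __init__(self):
            self.plow_depth = 0.0  # metres below ground

        def update(self, lever_position, dt):
            # Ease the virtual plow toward the depth commanded by the real lever.
            target = 0.3 * lever_position  # lever in [0, 1] -> depth in [0, 0.3] m
            self.plow_depth += (target - self.plow_depth) * min(1.0, 4.0 * dt)

    def read_lever():
        return 0.8  # placeholder for the physical sensor reading

    tractor = VirtualTractor()
    for _ in range(3):
        tractor.update(read_lever(), dt=1 / 60)
    print(round(tractor.plow_depth, 3))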

    Mixed reality simulators

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, Johannesburg, May 2017. Virtual Reality (VR) is widely used in training simulators of dangerous or expensive vehicles such as aircraft or heavy mining machinery. The vehicles often have very complicated controls that users need to master before attempting to operate a real-world version of the machine. VR allows users to safely train in a simulated environment without the risk of injury or of damaging expensive equipment in the field. VR, however, visually cuts off the user from the real environment, which may contain obstructions. Users are unable to safely move or gesture while wearing a VR headset. Additionally, users are unable to use standard input devices such as mice and keyboards. By mixing in a live view of the real world, the user can still see and interact with the physical environment. The contribution of this research is presenting ways of using Mixed Reality to enhance the user experience of traditional VR-based simulators. Mixed Reality improves on traditional VR simulators by giving the user the safety and freedom of not being cut off from the real world, allowing interaction with complex physical controls and the tactile feedback they provide while still allowing simultaneous use of virtual controls, and by adding a real-world reference point to aid in diminishing simulator sickness caused by visual motion.
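
    One way to picture the "mix in a live view" idea is mask-based compositing of a camera frame into the rendered frame, so the real controls show through. The NumPy sketch below is a generic composite under that assumption; the dissertation's actual method may differ.

    # Composite a live camera frame into the rendered VR frame wherever the
    # mask selects the real world (here, the bottom of the image).
    import numpy as np

    def composite(vr_frame, camera_frame, mask):
        """mask is 1.0 where the live camera view should show through."""
        mask = mask[..., None]  # broadcast the mask over the colour channels
        return (mask * camera_frame + (1.0 - mask) * vr_frame).astype(np.uint8)

    h, w = 480, 640
    vr = np.zeros((h, w, 3), dtype=np.uint8)       # rendered virtual scene
    cam = np.full((h, w, 3), 128, dtype=np.uint8)  # live camera feed
    m = np.zeros((h, w))
    m[300:, :] = 1.0                               # real world visible below eye level
    out = composite(vr, cam, m)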

    Assessing the Impact of Multi-variate Steering-rate Vehicle Control on Driver Performance in a Simulation Framework

    When a driver turns a steering-wheel, he or she normally expects the vehicle's steering system to communicate an equivalent amount of signal to the road-wheels. This relationship is linear and holds regardless of the steering-wheel's position within its rotational travel. The linear steering paradigm in passenger vehicles has gone largely unchanged since mass production of passenger vehicles began in 1901. However, as more electronically controlled steering systems appear in conjunction with the development of autonomous steering functions, an opportunity arises to advance the existing steering paradigms. The following framework takes a human-factors approach toward examining and evaluating alternative steering systems by using Modeling and Simulation methods to track and score human performance. Present conventional steering systems apply a linear relationship between the steering-wheel and the road-wheels of a vehicle. The rotational travel of the steering-wheel is 900° and requires two and a half revolutions to travel from end-stop to opposite end-stop. The experimental steering system modeled and employed in this study applies a dynamic curve response to the steering input within a shorter, 225° rotational travel. Accommodation variances, based on vehicle speed and on steering-wheel rotational position and acceleration, moderate the apparent steering input to effect a more practical steering rate. This novel model follows a paradigm supporting the full range of steering-wheel actuation without necessitating hand repositioning or the removal of the driver's hands from the steering-wheel during steering maneuvers. To study human performance disparities between the novel and conventional steering models, a custom simulator was constructed and programmed to render representative models in a test scenario. Twenty-seven males and twenty-seven females, ranging in age from eighteen to sixty-five, were tested and scored using the driving simulator, which presented two successive driving test vignettes: one using conventional 900° steering with linear response and the other employing the augmented 225° multivariate, non-linear steering. The results from simulator testing suggest that both males and females perform better with the novel system, supporting the hypothesis that drivers of either gender perform better with a system augmented with 225° multivariate, non-linear steering than with a conventional steering system. Further analysis of the simulated-driving scores indicates performance parity between male and female participants, supporting the hypothesis positing no significant difference in driver performance between male and female drivers using the augmented steering system. Finally, composite data from written questionnaires support the hypothesis that drivers prefer driving the augmented system over conventional steering. These collective findings justify testing and refining novel steering systems using Modeling and Simulation methods. As a product of this particular study, a tested and open-sourced simulation framework now exists with which researchers and automotive designers can develop and evaluate their own steering-oriented products within a valid human-factors construct. The open-source nature of this framework implies a commonality by which otherwise disparate research and development work can be associated. Extending this framework beyond basic investigation to applications requiring more specialized parameters may even benefit drivers with special needs. For example, steering-system functional characteristics could be comparatively optimized to accommodate individuals with upper-body deficits or limited use of either or both arms. Moreover, the combined human-factors and open-source approaches distinguish the products of this research as a common and extensible platform by which purposeful automotive-industry improvements can be realized, as contrasted with arbitrary improvements brought about predominantly to showcase technological advancements.
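
    The abstract names the inputs to the accommodation variances (vehicle speed, steering-wheel rotational position and acceleration) but not the curve itself, so the sketch below is a worked example under assumed shaping: a cubic response over the 225° travel, attenuated with speed. The constants are illustrative, not the study's model.

    # Road-wheel command as a non-linear function of steering-wheel angle
    # within +/-112.5 degrees (225 degrees lock to lock), calmer at speed.
    MAX_WHEEL_DEG = 112.5  # half of the 225-degree rotational travel
    MAX_ROAD_DEG = 35.0    # assumed road-wheel lock angle

    def road_wheel_angle(wheel_deg, speed_mps):
        x = max(-1.0, min(1.0, wheel_deg / MAX_WHEEL_DEG))  # normalised input
        shaped = 0.4 * x + 0.6 * x ** 3                     # gentle near centre
        speed_gain = 1.0 / (1.0 + 0.05 * speed_mps)         # attenuate at speed
        return MAX_ROAD_DEG * shaped * speed_gain

    # The same 20-degree input steers far less at 30 m/s than at parking speed.
    print(road_wheel_angle(20.0, 2.0), road_wheel_angle(20.0, 30.0))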

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to the automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts from either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels matched to individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this overview is that, while no such framework yet exists, it would be beneficial to combine the automation of these core technologies into an enhanced, portable, reusable automation framework for VR training.
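
    As a concrete illustration of the assessment-driven adaptation the overview calls for, the sketch below nudges a difficulty level toward a target success rate using recent trainee scores. The target band and step size are illustrative assumptions, not a framework from the surveyed literature.

    # Raise difficulty when the trainee over-performs, lower it when they struggle.
    def adapt_difficulty(level, recent_scores, target=0.75, band=0.1, step=0.1):
        success = sum(recent_scores) / len(recent_scores)
        if success > target + band:
            level += step
        elif success < target - band:
            level -= step
        return max(0.0, min(1.0, level))

    level = 0.5
    for scores in ([1, 1, 1, 1], [1, 0, 0, 0]):  # per-session pass/fail outcomes
        level = adapt_difficulty(level, scores)
        print(round(level, 2))  # 0.6 after the strong session, then back to 0.5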

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15
