29 research outputs found

    Matlab and Simulink Creation and Animation of X3D in Web-Based Simulation

    The article of record as published may be located at http://dx.doi.org/10.1145/2775292.2778306. Matlab is a powerful tool for computing high-fidelity engineering models and plotting the results in figures. Simulink implements Matlab .m source code as block diagrams and flow charts to execute the simulation. This project demonstrates how physics equations implemented in Simulink can animate X3D or VRML models, along with methods to convert the Matlab .fig format into an X3D object so it can be used in Web-based animations.
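
    As an illustration of the Matlab-to-VRML route described above, the following minimal sketch drives the translation of a VRML node from Matlab. It assumes the Simulink 3D Animation toolbox and a hypothetical world file lander.wrl containing a node named Body; neither the file nor the node names come from the paper.

        % Minimal sketch (assumes Simulink 3D Animation toolbox); file and node names are hypothetical.
        w = vrworld('lander.wrl');          % load the VRML world
        open(w);
        fig  = vrfigure(w);                 % open a viewer window
        body = vrnode(w, 'Body');           % handle to the Transform node to animate
        t = 0:0.01:10;
        z = 5*exp(-0.3*t);                  % toy trajectory from a physics model
        for k = 1:numel(t)
            body.translation = [0 z(k) 0];  % update the node's position field
            vrdrawnow;                      % flush the change to the viewer
            pause(0.01);
        end
        close(w); delete(w);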

    Position/force control of systems subjected to communication delays and interruptions in bilateral teleoperation

    Thesis (Master)--Izmir Institute of Technology, Mechanical Engineering, Izmir, 2012. Includes bibliographical references (leaves: 65-68). Text in English; abstract in Turkish and English. ix, 76 leaves.
    Teleoperation technology allows robotic (slave) systems located in hazardous, risky and distant environments to be operated remotely. The human operator sends commands through the controller (master) system to execute tasks from a distance. The operator is provided with the necessary (visual, audio or haptic) feedback to accomplish the mission remotely. In bilateral teleoperation, continuous feedback from the remote environment is generated, so the operator can handle the task as if present in the remote environment by relying on the relevant feedback. Since teleoperation deals with systems controlled from a distance, time delays and packet losses in the transmission of information are present. These communication failures affect human perception and system stability, and thus the ability of the operator to handle the task successfully. The objective of this thesis is to investigate and develop a control algorithm, which utilizes model-mediated teleoperation integrating parallel position/force controllers, to compensate for the instability issues and excessive forcing applied to the environment arising from communication failures. The model mediation technique is extended to three-degrees-of-freedom teleoperation, and a parallel position/force controller, an impedance controller, is integrated into the control algorithm. The proposed control method is experimentally tested using Matlab Simulink blocksets for real-time experimentation, in which the haptic desktop devices Novint Falcon and Phantom Desktop are configured as the master and slave subsystems of the bilateral teleoperation. The results of these tests indicate that the stability and passivity of the proposed bilateral teleoperation system are preserved during constant and variable time delays and data losses, while the position and force tracking results show acceptable performance with bounded errors.
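
    Since the thesis integrates an impedance controller as the parallel position/force law, a minimal plain-Matlab sketch of such a controller is given below. The gains, the delayed master reference and the stiff-wall environment model are illustrative assumptions, not values from the thesis.

        % Minimal impedance-control sketch (illustrative values, not the thesis parameters).
        M = 0.5;  B = 20;  K = 200;          % desired inertia, damping and stiffness
        dt = 1e-3;  N = 2000;
        x  = 0;  xd = 0;                     % slave position and velocity
        t  = (0:N-1)*dt;
        x_m = 0.05*sin(2*pi*t);              % (delayed) master position reference, assumed sinusoidal
        k_env = 1e4;  wall = 0.04;           % stiff environment located at 0.04 m
        x_log = zeros(1, N);
        for k = 1:N
            F_env = -k_env * max(x - wall, 0);             % contact force when the slave penetrates the wall
            xdd   = (K*(x_m(k) - x) - B*xd + F_env) / M;   % target impedance dynamics
            xd    = xd + xdd*dt;
            x     = x  + xd*dt;
            x_log(k) = x;
        end
        plot(t, x_m, t, x_log);  legend('master command', 'slave position');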

    Development of a dynamic virtual reality model of the inner ear sensory system as a learning and demonstrating tool

    In order to keep track of the position and motion of our body in space, nature has given us a fascinating and very ingenious organ, the inner ear. Each inner ear includes five biological sensors - three angular and two linear accelerometers - which provide the body with the ability to sense angular and linear motion of the head with respect to inertial space. The aim of this paper is to present a dynamic virtual reality model of these sensors. This model, implemented in Matlab/Simulink, simulates rotary chair testing, one of the tests carried out during diagnosis of the vestibular system. High-quality 3D animations linked to the Simulink model are created by exporting CAD models to Virtual Reality Modeling Language (VRML) files. This virtual environment shows not only the test but also the state of each sensor (excited or inhibited) in real time. Virtual reality is used as a tool for integrated learning of the dynamic behavior of the inner ear, using an ergonomic paradigm of user interactivity (zoom, rotation, mouse interaction, …). It can be used as a learning and demonstration tool either in the medical field - to understand the behavior of the sensors during any kind of motion - or in the aeronautical field, to relate inner ear functioning to some sensory illusions.
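
    To make the sensor dynamics concrete, the sketch below simulates a standard torsion-pendulum approximation of a semicircular canal (one of the angular sensors) responding to a rotary-chair velocity step. It uses plain Matlab with the Control System Toolbox; the time constants and chair profile are typical textbook values, not the parameters of the paper's model.

        % Minimal semicircular-canal sketch (textbook torsion-pendulum model, assumed parameters).
        tau1 = 10;  tau2 = 0.005;            % long and short canal time constants (typical values)
        H = tf([tau1*tau2 0], conv([tau1 1], [tau2 1]));   % cupula deflection / head angular velocity
        t = 0:0.01:90;
        omega = 100*(t >= 10 & t < 50);      % rotary chair: 100 deg/s constant rotation, then stop
        y = lsim(H, omega, t);               % simulated sensor response (excitation, then inhibition)
        plot(t, y);  xlabel('time (s)');  ylabel('cupula deflection (a.u.)');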

    Robot kinematics: applications in virtual reality based pedagogy and sensor calibration

    Conventions exist to describe the kinematics of a robot concisely, providing information about both its form and pose (position and orientation). Although mathematically convenient, the physical correlation between the parameters of these conventions and the robot that they represent is not necessarily intuitively obvious. Those who are new to the field of robotics may find it especially difficult to visualize these relationships. After presenting relevant background information on kinematics, robotics, virtual reality, and inertial sensors, this thesis investigates the effectiveness of using desktop virtual reality tools to help university-level students with the visualization of fundamental concepts in robot kinematics. Specifically, it examines how the new “Rotation Tool” assists students in the visualization of fixed and mobile frame compound rotations while verifying their non-commutative nature. It also explains how the new “Build-A-Robot” aids students in identifying the role that each of the Denavit-Hartenberg parameters plays in the description of the position and orientation of a serial manipulator’s component links. To enable flexible, real-time user interaction, Build-A-Robot employed a novel approach wherein MATLAB was used to directly manipulate the fundamental geometry of Virtual Reality Modeling Language (VRML) objects. Survey feedback and examination results are presented which indicate the students’ increased understanding after using both of these tools. This improvement was especially apparent among students who struggled to understand the concepts when traditional teaching methods alone were used. Tolerances in the manufacturing and assembly of robot arms introduce errors into the nominal kinematic models specified by manufacturers. This thesis also considers the impact of non-ideal kinematic parameters on the motion of the end-effector of a SCARA robot, which was used to calibrate an attached dual-axis accelerometer. Two novel, in-place calibration routines that employ dynamic accelerations are presented and validated using experimental data.
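
    As a concrete illustration of the Denavit-Hartenberg description discussed above, the following Matlab sketch builds the standard DH link transform and chains two links into a forward-kinematics pose. The link values are arbitrary examples, and the function is not the Build-A-Robot code.

        % Minimal Denavit-Hartenberg sketch (standard convention, arbitrary example values).
        dh = @(theta, d, a, alpha) [ ...
            cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha)  a*cos(theta); ...
            sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha)  a*sin(theta); ...
            0           sin(alpha)             cos(alpha)             d; ...
            0           0                      0                      1];
        % Two-link planar example: joint angles 30 and 45 degrees, link lengths 0.30 m and 0.25 m.
        T1 = dh(deg2rad(30), 0, 0.30, 0);
        T2 = dh(deg2rad(45), 0, 0.25, 0);
        T  = T1 * T2;                        % pose of the link-2 frame in the base frame
        p  = T(1:3, 4);                      % end-effector position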

    Development of an Environment Framework for a Modular Vehicle Simulation System

    Recent vehicle development is driven by continuously decreasing development times. To meet this constraint, virtual methods are necessary. In the research area of vehicle dynamics, virtual analyses play an important role during the whole development process of a vehicle, and vehicle dynamics simulations using different manoeuvres, models and environments become a key component. This role is even more significant in the early design phase of the vehicle, where only few and uncertain parameters are available. The present thesis expands the simulation platform MOVES2, designed at the Institute of Automotive Engineering at Graz University of Technology for modular vehicle dynamics simulations during the early design phase. The presented expansion implements a set of modules to generate arbitrary roads within arbitrary scenarios, to animate the simulated vehicle motion within a virtual reality world based on the generated scenario, to define the desired trajectory for the simulation and to configure the modelling complexity of the model. To ensure a high level of user-friendliness, the Graphical User Interface of MOVES2 manages the required data from these modules by creating, editing and deleting it.
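
    For a rough sense of what a road-generation module takes as input, the sketch below parameterizes a simple road centreline by curvature over arc length and adds a constant desired speed profile. It is a generic plain-Matlab illustration, not the MOVES2 interface; all numerical values are assumed.

        % Generic road/trajectory sketch (not MOVES2 code); curvature and speed values are assumed.
        s = linspace(0, 500, 2001);                 % arc length along the road (m)
        kappa = (s >= 200) .* (1/80);               % straight for 200 m, then an 80 m radius curve
        psi = cumtrapz(s, kappa);                   % heading angle obtained from curvature
        x = cumtrapz(s, cos(psi));                  % road centreline coordinates
        y = cumtrapz(s, sin(psi));
        v_des = 20 * ones(size(s));                 % desired speed profile for the manoeuvre (m/s)
        plot(x, y);  axis equal;  xlabel('x (m)');  ylabel('y (m)');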

    A survey of free software for the design, analysis, modelling, and simulation of an unmanned aerial vehicle

    The objective of this paper is to analyze free software for the design, analysis, modelling, and simulation of an unmanned aerial vehicle (UAV). Free software is the best choice when the reduction of production costs is necessary; nevertheless, the quality of free software may vary. This paper probably does not include all of the free software, but tries to describe or mention at least the most interesting programs. The first part of this paper summarizes the essential knowledge about UAVs, including the fundamentals of flight mechanics and aerodynamics, and the structure of a UAV system. The second section generally explains the modelling and simulation of a UAV. In the main section, more than 50 free programs for the design, analysis, modelling, and simulation of a UAV are described. Although the selection of the free software has been focused on small subsonic UAVs, the software can in some cases also be used for other categories of aircraft, e.g. for MAVs and large gliders. Applications of historical importance are also included. Finally, the results of the analysis are evaluated and discussed: a block diagram of the free software is presented, possible connections between the programs are outlined, and future improvements of the free software are suggested. © 2015, CIMNE, Barcelona, Spain. Internal Grant Agency of Tomas Bata University in Zlin [IGA/FAI/2015/001, IGA/FAI/2014/006].

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. Virtual reality training and assessment is increasingly used in five key areas: medical, industrial & commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR, including haptic devices, stereo graphics, adaptive content, assessment and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts in either remote or simulated environments. Automation, machine learning and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can form an input to autonomous systems for customised training and automated difficulty levels that match individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that, although no such framework yet exists, an enhanced portable framework is needed: it would be beneficial to combine automation of the core technologies into a reusable automation framework for VR training.

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by the human visual system as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representations by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D displays are explained. Based on these principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process of introducing distortions and to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or measured directly from a series of photographs. A comparative study introducing measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types is presented. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays. The shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses the Moiré, fixed-pattern-noise and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select the so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given regarding execution of the algorithm on a mobile device or on a desktop computer with a graphics accelerator.
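
    To illustrate the kind of signal processing the thesis builds on, the sketch below applies a simple linear crosstalk model for a dual-view display and pre-compensates the intended view intensities by inverting it. The mixing model and the crosstalk value are generic assumptions for illustration, not the thesis' measured display profile or filter designs.

        % Minimal crosstalk pre-compensation sketch (generic linear model, assumed crosstalk value).
        c = 0.08;                          % fraction of each view leaking into the other
        X = [1-c c; c 1-c];                % observed = X * driven (per-pixel, normalized intensities)
        intended = [0.7; 0.3];             % desired left/right view intensities at one pixel
        driven = X \ intended;             % pre-compensated drive values
        driven = min(max(driven, 0), 1);   % clip to the displayable range
        observed = X * driven;             % what the viewer actually receives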

    Proceedings of the GPEA Polytechnic Summit 2022: Session Papers

    Welcome to GPEA PS 2022. Each year the Polytechnic Summit assembles leaders, influencers and contributors who shape the future of polytechnic education. The Polytechnic Summit provides a forum to enable opportunities for collaboration and partnerships, and for participants to focus on innovation in curriculum and pedagogy, to share best practices in active and applied learning, and to discuss practice-based research to enhance student learning. This year a view on aspects of applied research is added: how to conduct research in a teaching-first environment and make use of it, and which characteristics of applied research are important for teaching and vice versa. The Summit will once again also provide an opportunity to examine the challenges and opportunities presented by COVID-19, and will offer us all an opportunity to explore the ways in which we can collaborate more effectively using our new-found virtual engagement skills and prepare for a hybrid future. PS2022 themes: Design (Programmes, Curriculum, Organisation); Practice-Based Learning; Applied Research; Employability and Graduate Skills; Internationalisation, Global Teaching & Collaboration; and Sustainability.