    Development of Northrop-Grumman Mark VIIE Training Unit and Wireless Video System for Use in Immersive Environments

    A training unit has been developed that allows NVESD researchers to build training simulations within virtual environments to enhance infantry skill and awareness. A ground station was developed to house a computer, power system, and video transmission system; it allows a remote operator to wirelessly send a video/audio stream to the handset and accepts external video and audio inputs, which are converted onboard for transmission. Different wireless frequencies were evaluated to determine the best choice for long-range transmission of content. A handset was developed from a carbon fiber prototype shell and features a video receiver, display, power system, on-screen display (OSD) system, and external video inputs. The user can view transmitted video and audio while receiving real-time GPS feedback from the OSD. The alternate video input allows the handset to be used within the virtual environments developed at the University of Kentucky's Center for Visualization and Virtual Environments. This thesis presents the research conducted to develop the Mark VIIE training unit, including the requirements for the project, the desired functionality, the NVESD-provided equipment, the analysis of prospective components, the design of custom fabricated parts, and the assembly and integration of the components into a complete system.
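
    The abstract does not give the GPS message format; as a hedged illustration, the sketch below parses a standard NMEA 0183 $GPGGA sentence of the kind most GPS receivers emit, which is the sort of data an OSD overlay would render. The sentence, field layout, and function name are illustrative assumptions, not details from the thesis.

```python
# Minimal sketch of the kind of GPS parsing an OSD overlay needs.
# Assumes a typical NMEA 0183 $GPGGA sentence; the actual Mark VIIE
# hardware and message format are not specified in the abstract.

def parse_gga(sentence: str) -> dict:
    """Extract latitude, longitude, and fix quality from a $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def to_degrees(value: str, hemisphere: str) -> float:
        # NMEA packs degrees and minutes together: ddmm.mmmm / dddmm.mmmm
        head, minutes = divmod(float(value), 100.0)
        degrees = head + minutes / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    return {
        "lat": to_degrees(fields[2], fields[3]),
        "lon": to_degrees(fields[4], fields[5]),
        "fix_quality": int(fields[6]),
    }

print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```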

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system, combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry, Virtual Reality (VR) and MR technology, MR within manufacturing, and finally the Fourth Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows for the capture of implicit knowledge from operators within the real manufacturing cell, as well as the transfer of that knowledge to future operators. Additionally, users can connect to the VE from anywhere in the world. In this way, experts are able to communicate with the users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber-Physical Systems (CPS), and could allow for future optimisations within manufacturing systems, Material Requirements Planning (MRP), and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology. This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and is, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users from one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (Reddish et al., 2017a), the US (Reddish et al., 2017b), and China (Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback by which users can interact from the digital to the real and vice versa. Keywords: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber-Physical System.
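
    As a hedged sketch of the data-capture side of such a system: skeleton joints seen by several depth sensors are fused and serialised into a timestamped message for streaming into the VE. The joint names, the naive averaging fusion, and the JSON wire format are illustrative assumptions; the DAAC's actual protocol is not described in the abstract.

```python
# Hedged sketch of multi-sensor body tracking feeding a digital twin:
# joints from several depth sensors are merged into one timestamped
# message for streaming into a virtual environment. All names and the
# fusion rule are assumptions, not details from the thesis.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Joint:
    name: str          # e.g. "head", "hand_left"
    x: float
    y: float
    z: float           # metres, in a shared cell coordinate frame

def fuse(readings: list[list[Joint]]) -> list[Joint]:
    """Average the same joint as seen by multiple sensors (naive fusion)."""
    by_name: dict[str, list[Joint]] = {}
    for skeleton in readings:
        for j in skeleton:
            by_name.setdefault(j.name, []).append(j)
    return [
        Joint(name,
              sum(j.x for j in js) / len(js),
              sum(j.y for j in js) / len(js),
              sum(j.z for j in js) / len(js))
        for name, js in by_name.items()
    ]

def frame_message(user_id: str, skeleton: list[Joint]) -> str:
    """Serialise one tracking frame for transmission to the VE."""
    return json.dumps({
        "user": user_id,
        "t": time.time(),
        "joints": [asdict(j) for j in skeleton],
    })

# Two sensors see the same operator's head from slightly different angles.
sensor_a = [Joint("head", 1.00, 1.70, 2.00)]
sensor_b = [Joint("head", 1.02, 1.68, 2.04)]
print(frame_message("operator-1", fuse([sensor_a, sensor_b])))
```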

    Design of a Tracking Glove for use in Virtual Reality Training Environments

    A thesis presented to the faculty of the College of Business and Technology at Morehead State University in partial fulfillment of the requirements for the degree of Master of Science by Thomas A. Buteyn on April 25, 2022.

    Virtual reality for assembly methods prototyping: a review

    Assembly planning and evaluation is an important component of the product design process, in which details about how the parts of a new product will be put together are formalized. A well-designed assembly process should take into account factors such as optimum assembly time and sequence, tooling and fixture requirements, ergonomics, operator safety, and accessibility, among others. Existing computer-based tools to support virtual assembly either concentrate solely on representing the geometry of parts and fixtures and evaluating clearances and tolerances, or use simulated human mannequins to approximate human interaction in the assembly process. Virtual reality technology has the potential to support the integration of natural human motions into the computer-aided assembly planning environment (Ritchie et al. in Proc I MECH E Part B J Eng 213(5):461–474, 1999). This would allow evaluation of an assembler's ability to manipulate and assemble parts, and would reduce the time and cost of product design. This paper provides a review of the research in virtual assembly and categorizes the different approaches. Finally, critical requirements and directions for future research are presented.
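
    One concrete piece of assembly planning that the reviewed tools automate is checking a candidate sequence against precedence constraints. The minimal sketch below illustrates only that check; the parts and constraints are invented for illustration, and real planners also model geometry, tooling, and ergonomics.

```python
# Minimal sketch of a precedence-constraint check in assembly sequence
# planning: part B cannot be installed before part A. Parts and
# constraints here are illustrative assumptions.

def sequence_is_feasible(sequence: list[str],
                         precedence: list[tuple[str, str]]) -> bool:
    """True if, for every (before, after) pair, 'before' precedes 'after'."""
    position = {part: i for i, part in enumerate(sequence)}
    return all(position[a] < position[b] for a, b in precedence)

constraints = [("base", "bracket"), ("bracket", "cover"), ("base", "bolt")]
print(sequence_is_feasible(["base", "bracket", "bolt", "cover"], constraints))  # True
print(sequence_is_feasible(["bracket", "base", "cover", "bolt"], constraints))  # False
```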

    Computer Aided Drafting Virtual Reality Interface

    Computer Aided Drafting (CAD) is pervasive in engineering fields today. It has become indispensable for planning, creating, visualizing, troubleshooting, collaborating, and communicating designs before they exist in physical form. From the beginning, CAD was created to be used by means of a mouse, keyboard, and monitor. Along the way, other, more specialized interface devices were created specifically for CAD that allowed for easier and more intuitive navigation within a 3D space, but they were at best stopgap solutions. Virtual Reality (VR) allows users to navigate and interact with digital 3D objects and environments the same way they would in the real world. For this reason, VR is a natural CAD interface solution. With VR as an interface for CAD software, creating becomes more intuitive and visualizing second nature. For this project, a prototype VR CAD program was created using Unreal Engine for use with the HTC Vive, to compare against traditional WIMP (windows, icons, menus, pointer) interface CAD programs on the time it takes to learn each program, the time to create similar models, and impressions of using each program, specifically the intuitiveness of the user interface and of model manipulation. FreeCAD, SolidWorks, and Blender were the three traditional-interface modeling programs chosen for comparison because of their widespread use for modeling in 3D printing, industry, and gaming, respectively. During the project, two VR modeling programs were released, Google Blocks and MakeVR Pro; because they were similar in type to the prototype software created in Unreal Engine, they were included for comparison as part of this project. The comparison showed that the VR CAD programs were faster to learn, faster for creating models, and more intuitive to use than the traditional-interface CAD programs.
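
    A basic operation behind "point at a model to select it" in any VR CAD interface is a controller ray test against an object's bounds. The standalone geometry below is illustrative only and is not taken from the Unreal Engine prototype; the coordinates and radius are assumptions.

```python
# Hedged sketch of controller-ray selection in a VR interface: cast a
# ray from the hand controller and test it against a model part's
# bounding sphere. Illustrative geometry, not the prototype's code.

def ray_hits_sphere(origin, direction, centre, radius) -> bool:
    """Ray-sphere test; 'direction' must be a unit vector."""
    # Vector from ray origin to sphere centre
    oc = [c - o for o, c in zip(origin, centre)]
    # Distance along the ray to the point nearest the sphere centre
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False                      # sphere is behind the controller
    # Closest point on the ray to the sphere centre
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((a - c) ** 2 for a, c in zip(closest, centre))
    return dist2 <= radius * radius

# Controller at the origin pointing down +Z; model part one metre ahead.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.05, 0.0, 1.0), 0.1))  # True
```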

    Integration Process for the Habitat Demonstration Unit

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU project has required a team to integrate a variety of contributions from NASA centers and outside collaborators, and poses a challenge in integrating these disparate efforts into a cohesive architecture. To complete the development of the HDU from conception in June 2009 to rollout for operations in July 2010, a cohesive integration strategy was developed to integrate the various systems of HDU and the payloads, such as the Geology Lab, that those systems will support. The use of interface design standards and uniquely tailored reviews has allowed for an accelerated design process. Scheduled activities include early fit-checks and the use of a habitat avionics test bed prior to equipment installation into HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing and length estimation, and human factors analysis. Decision processes on the shell development, including the assembly sequence and the transportation, were fleshed out early on HDU to maximize the efficiency of both integration and field operations. Incremental test operations leading up to an integrated systems test allow for an orderly systems test program. The HDU will begin its journey as an emulation of a Pressurized Excursion Module (PEM) for 2010 field testing and then may evolve to a Pressurized Core Module (PCM) for 2011 and later field tests, depending on agency architecture decisions. The HDU deployment will vary slightly from current lunar architecture plans to include developmental hardware and software items and additional systems called opportunities for technology demonstration. One of the HDU challenges has been designing to be prepared for the integration of presently unanticipated systems. Results of the HDU field tests will influence future designs of habitat systems.
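
    As an illustration of the cable length estimation mentioned above, the sketch below sums a routed path between waypoints in a habitat model and adds a service-loop allowance. The waypoints and the 10% slack factor are assumptions, not values from the HDU project.

```python
# Illustrative cable-length estimate: total routed run between model
# waypoints plus a fractional slack allowance. Route and slack factor
# are assumptions, not HDU project values.
import math

def cable_length(waypoints: list[tuple[float, float, float]],
                 slack: float = 0.10) -> float:
    """Total routed length (metres) with a fractional slack allowance."""
    run = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    return run * (1.0 + slack)

route = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.5, 0.0), (2.0, 1.5, 2.2)]
print(f"{cable_length(route):.2f} m")   # 6.27 m with 10% slack
```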

    JUNO Conceptual Design Report

    The Jiangmen Underground Neutrino Observatory (JUNO) is proposed to determine the neutrino mass hierarchy using an underground liquid scintillator detector. It is located 53 km away from both the Yangjiang and Taishan Nuclear Power Plants in Guangdong, China. The experimental hall, spanning more than 50 meters, is under a granite mountain with over 700 m of overburden. Within six years of running, the detection of reactor antineutrinos can resolve the neutrino mass hierarchy at a confidence level of 3-4$\sigma$, and determine the neutrino oscillation parameters $\sin^2\theta_{12}$, $\Delta m^2_{21}$, and $|\Delta m^2_{ee}|$ to an accuracy of better than 1%. The JUNO detector can also be used to study terrestrial and extra-terrestrial neutrinos and new physics beyond the Standard Model. The central detector contains 20,000 tons of liquid scintillator in an acrylic sphere 35 m in diameter. About 17,000 PMTs of 508 mm diameter with high quantum efficiency provide roughly 75% optical coverage. The current choice of the liquid scintillator is linear alkyl benzene (LAB) as the solvent, plus PPO as the scintillation fluor and Bis-MSB as a wavelength shifter. The number of detected photoelectrons per MeV is larger than 1,100 and the energy resolution is expected to be 3% at 1 MeV. The calibration system is designed to deploy multiple sources to cover the entire energy range of reactor antineutrinos, and to achieve full-volume position coverage inside the detector. The veto system is used for muon detection and for the study and reduction of muon-induced backgrounds. It consists of a water Cherenkov detector and a Top Tracker system. The readout system, the detector control system, and the offline system ensure efficient and stable data acquisition and processing.
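
    A quick back-of-envelope check ties the two quoted numbers together: if the resolution is dominated by photoelectron statistics, $\sigma_E/E \approx 1/\sqrt{N_{pe}}$, and $N_{pe} > 1{,}100$ per MeV gives roughly 3% at 1 MeV. The sketch below just evaluates that estimate; real detector resolution includes further terms this simple scaling ignores.

```python
# Back-of-envelope check on the quoted JUNO numbers: if the energy
# resolution is dominated by photoelectron statistics, sigma_E / E
# scales as 1 / sqrt(N_pe). With >1,100 photoelectrons per MeV this
# reproduces the expected ~3% resolution at 1 MeV.
import math

n_pe_per_mev = 1100          # detected photoelectrons per MeV (from the abstract)
energy_mev = 1.0

resolution = 1.0 / math.sqrt(n_pe_per_mev * energy_mev)
print(f"sigma_E/E at {energy_mev} MeV ~= {resolution:.1%}")   # ~3.0%
```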

    A Framework for Extended Reality System Development in Manufacturing

    This paper presents a framework for developing extended reality (XR) systems within a manufacturing context. The aim of this study is to develop a systematic framework that improves the usability and user acceptance of future XR systems, so that the manufacturing industry can move beyond the "wow effect" of XR demonstrators to a stage where XR systems are successfully integrated into, and improve, conventional work routines. Ensuring the usability and user acceptance of XR systems is essential for their wider adoption in manufacturing. The proposed framework was developed through six case studies covering different XR system developments across different application areas of manufacturing. The framework consists of five iterative phases: (1) requirements analysis, (2) solution selection, (3) data preparation, (4) system implementation, and (5) system evaluation. It was validated through one empirical case and seven previous studies identified in the literature, which partly aligned with the proposed framework. The framework provides a clear guideline on the steps needed to integrate XR in manufacturing and extends XR usage with increased usability and user acceptance. Furthermore, it underscores the importance of a user-centered approach to XR system development in manufacturing.
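
    As a hedged illustration of how a team might operationalise the five iterative phases, the sketch below models them as an ordered checklist. The phase names come from the abstract; the exit criteria and the data structure are assumptions for illustration.

```python
# Minimal sketch of the paper's five-phase framework as an iterative
# checklist. Phase names are from the abstract; exit criteria are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    exit_criteria: list[str]
    done: bool = False

framework = [
    Phase("requirements analysis", ["users and tasks identified"]),
    Phase("solution selection",    ["XR hardware/software chosen"]),
    Phase("data preparation",      ["CAD/process data converted for XR"]),
    Phase("system implementation", ["prototype running on target device"]),
    Phase("system evaluation",     ["usability and acceptance measured"]),
]

def next_phase(phases: list[Phase]) -> Phase | None:
    """The framework is iterative: work the first phase not yet done."""
    return next((p for p in phases if not p.done), None)

framework[0].done = True
print(next_phase(framework).name)   # solution selection
```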