
    Improving Big Data Visual Analytics with Interactive Virtual Reality

    For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', these massive amounts of information have quite often been gathered inconsistently (e.g., from many sources, in various forms, at different rates). These factors impede not only processing the data, but also analyzing and displaying it to the user in an efficient manner. Many efforts have been made in the data mining and visual analytics communities to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach to improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we are exploring the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions on the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper upon request, adhering to desired demands and intentions. Due to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing emerging technologies of today to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data.
    Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15); corrected typo
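    The core step of placing geo-tagged data "in the original geospatial domain" can be sketched as a projection from latitude/longitude into local 3D scene coordinates. The sketch below is illustrative only: the equirectangular approximation, field names, and origin point are assumptions, not the authors' implementation.

    ```python
    import math

    # Mean Earth radius in metres (WGS-84 approximation).
    EARTH_RADIUS_M = 6_371_000

    def geo_to_scene(lat, lon, origin_lat, origin_lon):
        """Project a geo-tagged point to local (x, y) metres around a scene
        origin using an equirectangular approximation - adequate for a
        campus-scale scene like the Twitter visualization described above."""
        lat_r, lon_r = math.radians(lat), math.radians(lon)
        o_lat, o_lon = math.radians(origin_lat), math.radians(origin_lon)
        x = (lon_r - o_lon) * math.cos(o_lat) * EARTH_RADIUS_M  # east-west offset
        y = (lat_r - o_lat) * EARTH_RADIUS_M                    # north-south offset
        return x, y

    # Hypothetical geo-tagged tweets near MIT's campus (coordinates illustrative).
    tweets = [{"lat": 42.3601, "lon": -71.0942},
              {"lat": 42.3592, "lon": -71.0935}]
    origin = (42.3592, -71.0923)  # assumed scene origin

    positions = [geo_to_scene(t["lat"], t["lon"], *origin) for t in tweets]
    ```

    Each (x, y) pair could then be assigned to a marker in the VR scene, with a third axis free for time or tweet density.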

    Coupling BIM and game engine technologies for construction knowledge enhancement

    Interactions and collaboration between parties in construction projects are often characterised by misunderstandings and poor information exchange. Game engine technologies, when employed with building information modelling (BIM), can help address these shortcomings. Quite often, the visualisation capabilities of BIM models are not fully explored, partly because of their limited interactivity. While game engines are powerful visualisation and interaction tools in the gaming industry, the literature suggests a lack of understanding of their applicability in construction. This study investigates the potential of game engines in construction practice, culminating in a framework that can guide their implementation in enhancing interactive building walkthroughs.

    Game-day football visualization experience on dissimilar virtual reality platforms

    College football recruiting is a competitive process. Athletic administrations attempt to gain an edge by bringing recruits to a home game, highlighting the atmosphere unique to campus. This, however, is not always possible, since most recruiting efforts happen off-season. Instead, they convey the football game experience through video recordings and visits to football facilities. While these substitutes provide a general idea of a game, they cannot capture the feeling of playing while cheered on by a crowd of 55,000 people. To address this challenge and improve the recruitment process, the Iowa State University (ISU) athletic department and the Virtual Reality Applications Center (VRAC) teamed up to build an alternative to the game-day experience using the world’s highest-resolution six-sided virtual reality (VR) environment, the C6, and a portable, low-cost head-mounted display (HMD) system. This paper presents techniques used in the development of the immersive and portable VR environments, followed by validation of the work through a formal user study quantifying immersion and presence. Results from the user study indicate that both the HMD and the C6 are an improvement over the standard practice of showing videos to convey the atmosphere of an ISU Cyclone football game. In addition, the C6 and HMD scored similarly in the immersion and presence categories, indicating that the low-cost portable HMD version of the application produces minimal trade-off in experience for a fraction of the cost.

    Software techniques for improving head mounted displays to create comfortable user experiences in virtual reality

    Head Mounted Displays (HMDs) allow users to experience Virtual Reality (VR) with a great level of immersion. Advancements in hardware technologies have led to a reduction in the cost of producing good-quality VR HMDs, bringing them out of research labs and into consumer markets. However, the current generation of HMDs suffers from a few fundamental problems that can deter their widespread adoption. For this thesis, we explored two techniques to overcome some of the challenges of experiencing VR when using HMDs. With an HMD strapped to the user's head, even simple physical tasks like drinking a beverage can be difficult and awkward. We explored mixed reality renderings that selectively incorporate the physical world into the virtual world for interactions with physical objects. We conducted a user study comparing four rendering techniques that balance immersion in the virtual world with ease of interaction with the physical world. Users of VR systems often experience vection, the perception of self-motion in the absence of any physical movement. While vection helps to improve presence in VR, it often leads to a form of motion sickness called cybersickness. Prior work has discovered that changing vection (changing the perceived speed or moving direction) causes more severe cybersickness than steady vection (walking at a constant speed or in a constant direction). Based on this idea, we tried to reduce cybersickness caused by character movements in a First Person Shooter (FPS) game in VR. We propose Rotation Blurring (RB), uniformly blurring the screen during rotational movements to reduce cybersickness. We performed a user study to evaluate the impact of RB on cybersickness and found that RB led to an overall reduction in the participants' sickness levels and delayed its onset. Participants who experienced acute levels of cybersickness benefited significantly from this technique.
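    The Rotation Blurring idea, blurring uniformly during rotational movement while leaving steady translation sharp, can be illustrated with a minimal per-frame strength function. The threshold and gain values below are illustrative assumptions, not parameters from the study.

    ```python
    def rotation_blur_strength(yaw_deg_per_s, threshold=30.0, gain=0.02, max_blur=1.0):
        """Return a uniform screen-blur strength in [0, max_blur] for the
        current frame. Blur engages only above a small rotational-speed
        threshold, so walking straight ahead stays sharp (threshold and
        gain are illustrative, not the study's values)."""
        excess = max(0.0, abs(yaw_deg_per_s) - threshold)
        return min(max_blur, excess * gain)

    # Walking straight ahead: no rotation, so the scene stays sharp.
    no_turn = rotation_blur_strength(0.0)       # 0.0
    # A fast snap turn (~200 deg/s) saturates the blur.
    fast_turn = rotation_blur_strength(200.0)   # capped at max_blur = 1.0
    ```

    In an engine, this strength would drive a full-screen blur post-process each frame, ramping smoothly with the character's angular velocity rather than toggling on and off.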

    Sub space: Enhancing the spatial awareness of trainee submariners using 3D simulation environments

    Rapid advancements in computer technology have facilitated the development of practical and economically feasible three-dimensional (3D) computer-generated simulation environments that have been utilized for training in a number of different fields. In particular, this development has been heavily influenced by innovations within the gaming industry, where First Person Shooter (FPS) games are often considered to be on the cutting edge of gaming technology in terms of visual fidelity and performance. 3D simulation environments built upon FPS gaming technologies can be used to realistically represent real-world places, while also providing a dynamic and responsive experience-based learning environment for trainees. This type of training environment can be utilized effectively when training within the corresponding real-world space may not be safe, practical, or economically feasible. This thesis explores the effectiveness of 3D simulation environments based on FPS gaming technologies in enhancing the spatial awareness of trainees in unfamiliar real-world spaces. The purpose was to identify the characteristics that contribute to effective learning within such environments. In order to identify these characteristics, a model was proposed representing the interrelationships between, and determinant factors of, the concepts of spatial cognition, learning within a simulation environment, and computer-generated 3D environments. The Location and Scenario Training System (LASTS), developed by the Royal Australian Navy, was evaluated to determine whether experience within the LASTS environment could benefit trainee submariners on Collins class submarines. The LASTS environment utilises the Unreal Runtime FPS game engine to provide a realistic representation of the Main Generator Room (MGR) on board a Collins class submarine. This simulation was used to engage trainees in a simplified exercise based on the location of items relevant to a 12 Point Safety Round performed inside the MGR. Five trainee submariners were exposed to LASTS and then required to conduct the same exercise on board a Collins class submarine. This mode of learning was compared to traditional non-immersive classroom teaching involving five additional trainee submariners, who were also required to complete the same exercise inside the MGR. A mixture of qualitative and quantitative approaches to data collection and analysis was used to ascertain the effectiveness of LASTS, the factors contributing to it, and learners' perception of the value of the environment. Results indicated that LASTS could be successfully used as a training tool to enhance the spatial awareness of trainee submariners with regard to the MGR on board a Collins class submarine. LASTS trainees also demonstrated a better spatial understanding of the MGR environment as a result of their experience compared to trainees who were the recipients of traditional classroom-based training. The contributing characteristics of the proposed model were also validated with reference to the data gathered from the LASTS case study. This indicated that the model could be utilized in the design of future 3D simulation environments based on gaming technology in order to facilitate effective spatial awareness training.