
    A multi-modal interface for road planning tasks using vision, haptics and sound

    The planning of transportation infrastructure requires analyzing many different types of geo-spatial information in the form of maps. Displaying too many of these maps at the same time can lead to visual clutter or information overload, which results in sub-optimal effectiveness. Multimodal interfaces (MMIs) try to address this visual overload and improve the user's interaction with large amounts of data by combining several sensory modalities. Previous research into MMIs indicates that, when used properly, multiple sensory modalities lead to more efficient human-computer interaction. This prior work motivated this thesis, which describes a novel GIS system for road planning using vision, haptics and sound. The implementation of this virtual environment is discussed, including some of the design decisions made when determining how to map visual data to the other senses. A user study was performed to see how this type of system could be utilized, and the results of the study are presented.


    3D oceanographic data compression using 3D-ODETLAP

    This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. This approximation is then refined via an error metric, and the two steps alternate until the approximation meets a predefined error bound. Using several different datasets and metrics, we demonstrate that our method achieves an excellent compression ratio. To further evaluate our method, we compare it with 3D-SPIHT: 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets, drawn from the World Ocean Atlas 2005, and provides up to approximately six times better compression on datasets with relatively small variance. Meanwhile, at the same approximate mean error, we demonstrate a significantly smaller maximum error than 3D-SPIHT and provide a feature to keep the maximum error under a user-defined limit.
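    The alternating approximate-then-refine loop the abstract describes can be sketched in a much-simplified form. The sketch below works on a 1-D grid rather than the paper's 3-D grids, and the function names, the unit smoothness weight, and the greedy worst-point selection rule are illustrative assumptions, not the paper's exact formulation: selected points contribute exact-value equations, every interior point contributes a discrete-Laplacian smoothness equation, the over-determined system is solved by least squares, and the point whose approximation error is largest is added next until the maximum error falls under a user-defined limit.

    ```python
    import numpy as np

    def odetlap_approx(data, known_idx, weight=1.0):
        """Least-squares solve of an over-determined ODETLAP-style system on a
        1-D grid: a discrete-Laplacian smoothness equation per interior point
        plus a weighted exact-value equation per selected point."""
        n = len(data)
        rows, rhs = [], []
        # Smoothness equations: 2*z[i] - z[i-1] - z[i+1] = 0
        for i in range(1, n - 1):
            r = np.zeros(n)
            r[i - 1], r[i], r[i + 1] = -1.0, 2.0, -1.0
            rows.append(r)
            rhs.append(0.0)
        # Known-value equations for the selected points
        for i in known_idx:
            r = np.zeros(n)
            r[i] = weight
            rows.append(r)
            rhs.append(weight * data[i])
        z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return z

    def compress(data, max_err=0.05, weight=1.0):
        """Greedy refinement: start from the endpoints, then repeatedly add
        the worst-approximated unstored point until the max error is met."""
        known = [0, len(data) - 1]
        while True:
            approx = odetlap_approx(data, known, weight)
            err = np.abs(approx - data)
            if err.max() <= max_err or len(known) == len(data):
                return known, approx
            candidates = [i for i in range(len(data)) if i not in known]
            known.append(max(candidates, key=lambda i: err[i]))

    # Tiny demo on a smooth 1-D "profile"
    x = np.linspace(0.0, 1.0, 32)
    profile = np.sin(2 * np.pi * x)
    pts, approx = compress(profile, max_err=0.05)
    print(len(pts), "of", len(profile), "points stored")
    ```

    The stored point indices and values are the "compressed" representation; decompression is one least-squares solve. The weight on the value equations controls the smoothness-versus-accuracy trade-off that the error-bound feature in the abstract exploits.
    
    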