Visualization of and Access to CloudSat Vertical Data through Google Earth
Online tools, pioneered by Google Earth (GE), are changing the way in which scientists and the general public interact with geospatial data in true three dimensions. However, even in Google Earth, there is no method for depicting vertical geospatial data derived from remote sensing satellites as an orbit curtain seen from above. Here, an effective solution is proposed to automatically render vertical atmospheric data on Google Earth. The data are first processed through the Giovanni system and then converted into 15-second vertical data images. A generalized COLLADA model is devised based on the 15-second vertical data profile. Using the designed COLLADA models and satellite orbit coordinates, a satellite orbit model is designed and implemented in KML to render the vertical atmospheric data vividly across spatial and temporal ranges. The whole orbit model consists of repeated model slices. The model slices, each representing 15 seconds of vertical data, are placed on the CloudSat orbit; the size, scale, and angle with the longitude line are precisely and separately calculated on the fly for each slice according to the CloudSat orbit coordinates. The resulting vertical scientific data can be viewed transparently or opaquely on Google Earth. Not only does this research bridge the science and data with scientists and the general public through a highly popular medium, but it also makes possible the simultaneous visualization and efficient exploration of relationships among quantitative geospatial data, e.g. comparing the vertical data profiles with MODIS and AIRS precipitation data.
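The slice-placement step can be sketched minimally: each slice's heading is derived from consecutive orbit coordinates, and a KML <Model> placemark anchors a COLLADA slice at that point. All file names and coordinates below are hypothetical illustrations, not the actual Giovanni/CloudSat pipeline.

```python
import math

def heading(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from north, point 1 toward point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def slice_placemark(lat, lon, hdg, dae_href):
    """One KML <Placemark> anchoring a COLLADA curtain slice on the orbit."""
    return f"""<Placemark>
  <Model>
    <altitudeMode>absolute</altitudeMode>
    <Location><longitude>{lon}</longitude><latitude>{lat}</latitude><altitude>0</altitude></Location>
    <Orientation><heading>{hdg:.2f}</heading><tilt>0</tilt><roll>0</roll></Orientation>
    <Link><href>{dae_href}</href></Link>
  </Model>
</Placemark>"""

# Hypothetical 15-second orbit samples (lat, lon)
track = [(10.0, 100.0), (10.9, 100.1), (11.8, 100.2)]
placemarks = [
    slice_placemark(a[0], a[1], heading(*a, *b), f"slice_{i}.dae")
    for i, (a, b) in enumerate(zip(track, track[1:]))
]
```

Orienting each slice by the local bearing is what keeps the curtain aligned with the ground track as the orbit curves.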
Developing Interaction 3D Models for E-Learning Applications
Some issues concerning the development of interactive 3D models for e-learning applications are considered. Given that 3D data sets are normally large and interactive display demands high-performance computation, a natural solution is to place the computational burden on the client machine rather than on the server. Mozilla and Google opted for a combination of client-side languages, JavaScript and OpenGL, to handle 3D graphics in a web browser (Mozilla 3D and O3D respectively). Based on the O3D model, core web technologies are considered and an example of the full process, from the generation of a 3D model to its interactive visualization in a web browser, is described. The challenging issue of creating realistic 3D models of objects in the real world is discussed, and a method based on line projection for fast 3D reconstruction is presented. The generated model is then visualized in a web browser. The experiments demonstrate that visualization of 3D data in a web browser can provide a quality user experience. Moreover, the development of web applications is facilitated by the O3D JavaScript extension, allowing web designers to focus on 3D content generation.
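The core of a line-projection reconstruction can be sketched as ray-plane intersection: assuming a calibrated camera at the origin and a known light plane, each illuminated pixel's back-projected ray is intersected with the plane to recover a 3D point. The numbers below are hypothetical, not the paper's calibration.

```python
def intersect_ray_plane(ray_dir, plane_n, plane_d):
    """Intersect a camera ray through the origin with the projected light
    plane n.x = d -- the core step of line-projection reconstruction."""
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = plane_d / denom
    return tuple(t * r for r in ray_dir)

# Hypothetical setup: vertical light plane x = 0.5, and a pixel whose
# back-projected ray direction is (0.25, 0.0, 1.0)
point = intersect_ray_plane((0.25, 0.0, 1.0), (1.0, 0.0, 0.0), 0.5)
```

Sweeping the projected line across the object and repeating this intersection per illuminated pixel yields the dense point cloud from which the mesh is built.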
The Visualization of Historical Structures and Data in a 3D Virtual City
Google Earth is a powerful tool that allows users to navigate through 3D representations of many cities and places all over the world. Google Earth has a huge collection of 3D models, and it only continues to grow as users all over the world contribute new models. As new buildings are built, new models are also created. But what happens when a new building replaces another? The same thing that happens in reality also happens in Google Earth: old models are replaced with new models. While Google Earth shows the most current data, many users would also benefit from being able to view historical data. Google Earth has acknowledged this with the ability to view historical imagery by manipulating a time slider. However, this feature does not apply to 3D models of buildings, which remain in the environment even when viewing a time before their existence. I would like to build upon this concept by proposing a system that stores 3D models of historical buildings that have been demolished and replaced by new developments. People may want to view the old cities that they grew up in, which have undergone huge developments over the years. Old neighborhoods may be completely transformed with new roads and buildings. In addition to viewing historical buildings, users may want to view statistics for a given area. Users can view such data in raw form, but using 3D visualizations of statistical data allows for a greater understanding and appreciation of historical changes. I propose to enhance the visualization of the 3D world by allowing users to graphically view statistical data such as population, ethnic groups, education, crime, and income. With this feature, users will be able to see not only physical changes in the environment but also statistical changes over time.
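Such time-aware historical models map naturally onto KML's existing <TimeSpan> element, which the time slider already honours for imagery overlays. A minimal sketch (the building name, dates, file, and coordinates are hypothetical) of binding a demolished building's model to its lifetime:

```python
def historical_model_kml(name, dae_href, lon, lat, begin, end):
    """KML Placemark whose <TimeSpan> limits the model to its lifetime,
    so the time slider hides a demolished building outside its era."""
    return f"""<Placemark>
  <name>{name}</name>
  <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
  <Model>
    <Location><longitude>{lon}</longitude><latitude>{lat}</latitude><altitude>0</altitude></Location>
    <Link><href>{dae_href}</href></Link>
  </Model>
</Placemark>"""

# Hypothetical demolished building, standing from 1905 until 1967
kml = historical_model_kml("Old Exchange (hypothetical)", "old_exchange.dae",
                           -122.41, 37.77, "1905", "1967")
```

A store of such placemarks, one per historical building, is enough for the time slider to swap old and new structures as the user scrubs through the years.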
A study of user perceptions of the relationship between bump-mapped and non-bump-mapped materials, and lighting intensity in a real-time virtual environment
The video and computer games industry has taken full advantage of the human sense of vision by producing games that utilize complex high-resolution textures and materials and advanced lighting techniques. This results in the creation of an almost life-like real-time 3D virtual environment that can immerse end-users. One of the visual techniques used is the real-time display of bump-mapped materials. However, this visual phenomenon has yet to be fully utilized for 3D design visualization in the architecture and construction domain. Virtual environments developed in the architecture and construction domain are often basic and use low-resolution images, which under-represent the real physical environment. Such virtual environments are seen as unrealistic by users, resulting in a misconception of their actual potential as tools for 3D design visualization. A study was conducted to evaluate whether subjects can see the difference between bump-mapped and non-bump-mapped materials in different lighting conditions. The study utilized a real-time 3D virtual environment created using a custom-developed software application tool called BuildITC4. BuildITC4 was developed based upon the C4Engine, which is classified as a next-generation 3D game engine. A total of thirty-five subjects were exposed to the virtual environment and asked to compare the various types of material in different lighting conditions. The number of lights activated, the lighting intensity, and the materials used in the virtual environment were all interactive and changeable in real time. The goal was to study how subjects perceived bump-mapped and non-bump-mapped materials, and how different lighting conditions affect realistic representation. Results from this study indicate that subjects could tell the difference between bump-mapped and non-bump-mapped materials, and could see how different materials react to different lighting conditions.
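The bump-mapping technique under study rests on perturbing surface normals from a height map so lighting responds to surface detail the geometry does not have. A minimal sketch of that precomputation (a generic illustration, not the BuildITC4/C4Engine implementation):

```python
import math

def normal_from_height(h, x, y, strength=1.0):
    """Derive a per-texel surface normal from a height map by central
    differences -- the precomputation behind a bump/normal map."""
    dx = (h[y][min(x + 1, len(h[0]) - 1)] - h[y][max(x - 1, 0)]) * 0.5
    dy = (h[min(y + 1, len(h) - 1)][x] - h[max(y - 1, 0)][x]) * 0.5
    n = (-dx * strength, -dy * strength, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A flat height map yields the unperturbed normal pointing straight out
flat = [[0.2] * 4 for _ in range(4)]
n = normal_from_height(flat, 1, 1)
```

At render time the lighting equation uses these perturbed normals instead of the face normal, which is why the effect only becomes visible, as the study varies, under different light counts and intensities.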
Artimate: an articulatory animation framework for audiovisual speech synthesis
We present a modular framework for articulatory animation synthesis using speech motion capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data are applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated into an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application illustrating its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow. Comment: Workshop on Innovation and Applications in Speech Technology (2012)
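The retargeting step, applying EMA coil trajectories to bones of the vocal-tract model, can be sketched minimally as a linear mapping from articulograph space to model space. The values and scale below are hypothetical; the actual framework's rig and weighting are more involved.

```python
def retarget(coil_pos, capture_origin, capture_scale, bone_rest):
    """Map an EMA coil position (articulograph space, mm) onto a bone's
    translation in the vocal-tract model, skeletal-retargeting style."""
    return tuple(b + (c - o) * capture_scale
                 for c, o, b in zip(coil_pos, capture_origin, bone_rest))

# Hypothetical tongue-tip coil sample driving a tongue-tip bone
bone_t = retarget(coil_pos=(12.0, -3.0, 5.0),
                  capture_origin=(10.0, 0.0, 0.0),
                  capture_scale=0.1,
                  bone_rest=(0.0, 0.0, 0.0))
```

Running this per coil, per frame, yields bone keyframes that any skeletal animation system, including a game engine as in the example application, can play back.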
Realising the open virtual commissioning of modular automation systems
To address the challenges in the automotive industry posed by the need to rapidly manufacture more product variants, and the resultant need for more adaptable production systems, radical changes are now required in the way in which such systems are developed and implemented. In this context, two enabling approaches for achieving more agile manufacturing, namely modular automation systems and virtual commissioning, are briefly reviewed in this contribution. Ongoing research conducted at Loughborough University, which aims to provide a modular approach to automation systems design coupled with a virtual engineering toolset for the (re)configuration of such manufacturing automation systems, is reported. The problems faced in the virtual commissioning of modular automation systems are outlined. AutomationML, an emerging neutral data format with the potential to address these integration problems, is discussed. The paper proposes and illustrates a collaborative framework in which AutomationML is adopted for the data exchange and data representation of related models to enable efficient open virtual prototype construction and virtual commissioning of modular automation systems. A case study is provided to show how to create the data model based on AutomationML for describing a modular automation system.
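To give a flavour of such a data model: AutomationML builds on the CAEX XML schema, so a modular system's structure can be sketched as an instance hierarchy of internal elements. The station names and attribute below are hypothetical, not the paper's case study.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of an AutomationML (CAEX) instance hierarchy for a
# hypothetical two-station modular cell; element names follow the CAEX
# schema (CAEXFile, InstanceHierarchy, InternalElement, Attribute).
root = ET.Element("CAEXFile", FileName="cell.aml")
ih = ET.SubElement(root, "InstanceHierarchy", Name="AssemblyCell")
for station in ("Station1_Clamp", "Station2_Drill"):
    ie = ET.SubElement(ih, "InternalElement", Name=station)
    ET.SubElement(ie, "Attribute", Name="CycleTimeSeconds")
aml = ET.tostring(root, encoding="unicode")
```

Because each station is a self-contained InternalElement, a module can be exchanged or reconfigured by swapping its subtree, which is what makes the format attractive for (re)configurable systems.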
Relating geometry descriptions to its derivatives on the web
Sharing building information over the Web is becoming more popular, leading to advances in describing building models in a Semantic Web context. However, those descriptions lack a unified approach for linking geometry descriptions to building elements, derived properties, and other derived geometry descriptions. To bridge this gap, we analyse the basic characteristics of geometric dependencies and, based on this analysis, propose the Ontology for Managing Geometry (OMG). In this paper, we present our results and show how the OMG provides means to link geometric and non-geometric data in meaningful ways. Thus, exchanging building data, including geometry, on the Web becomes more efficient.
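The linking pattern such an ontology enables can be sketched as RDF triples connecting a building element to its geometry description. The instance namespace and exact property names below are illustrative of the pattern, not a normative use of OMG.

```python
OMG = "https://w3id.org/omg#"            # OMG ontology namespace
INST = "https://example.org/building#"   # hypothetical instance namespace

# Element -> geometry -> concrete geometry description (illustrative IRIs)
triples = [
    (INST + "wall_12", OMG + "hasGeometry", INST + "wall_12_geom"),
    (INST + "wall_12_geom", OMG + "hasComplexGeometryDescription", INST + "wall_12_obj"),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, object) IRI triples as N-Triples lines."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)
```

Keeping the geometry node separate from the element node is what lets several derived geometry descriptions (e.g. different levels of detail or formats) hang off the same building element.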